ayaansid8 on Wednesday, 1 July 2015
Automotive
NVIDIA DRIVE™ PX
AUTO-PILOT CAR COMPUTER
The DRIVE PX platform is based on the NVIDIA® Tegra® X1 processor, enabling smarter, more sophisticated advanced driver assistance systems (ADAS) and paving the way for the autonomous car. Tegra X1 delivers an astonishing 1.3 gigapixels/second of throughput – enough to handle 12 two-megapixel cameras at frame rates of up to 60 fps for some cameras. It is equipped with 10 GB of DRAM memory and combines surround computer vision (CV) technology, extensive deep learning training, and over-the-air updates to transform how cars see, think, and learn. (Source: http://www.nvidia.com/object/drive-px.html)
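The quoted camera numbers can be sanity-checked with a little arithmetic. A quick sketch, using only the figures stated above, shows why the spec is hedged with "up to 60 fps for some cameras":

```python
# Sanity-check the Tegra X1 camera-throughput claim quoted above.
cameras = 12
megapixels_per_frame = 2
fps = 60

required_mpix_per_s = cameras * megapixels_per_frame * fps  # 1440 Mpix/s
available_mpix_per_s = 1300  # the quoted 1.3 gigapixels/second

print(required_mpix_per_s)   # 1440
print(available_mpix_per_s)  # 1300
# Running all 12 cameras at a full 60 fps would need ~1.44 Gpix/s,
# slightly above 1.3 Gpix/s -- hence "up to 60 fps for SOME cameras".
```

In other words, the chip can sustain 60 fps on a subset of the cameras while the remainder run at lower frame rates.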
DEEP LEARNING COMPUTER VISION
Conventional ADAS technology today can detect some objects, perform basic classification, alert the driver, and in some cases stop the vehicle. DRIVE PX takes this to the next level with the ability to differentiate an ambulance from a delivery truck, or a parked car from one about to pull into traffic. The system can now inform the driver, not just get their attention with a warning. The car is not just sensing, but interpreting what is taking place around it – an essential capability for auto-piloted driving.

SURROUND VISION WITH ADVANCED RENDERING
Conventional surround-view systems show the driver a virtual view of the area around the car, but often suffer from poor image quality due to warping effects from the fisheye camera lenses. DRIVE PX uses sophisticated structure-from-motion (SFM) and advanced stitching for better image rendering and reduced "ghosting", such as a line on the pavement appearing in two places at once. Powerful graphics enable DRIVE PX to render a virtual car in the view with high-detail models and realistic lighting effects, so you see what looks like your own car rather than a generic or toy model.

SELF PARKING CAPABILITIES
For a car to park itself, it needs to build a 3D map of nearby objects in real time. DRIVE PX delivers the massive processing power needed for techniques like structure-from-motion (SFM) and simultaneous localization and mapping (SLAM) using four surround-view cameras that cover the immediate area around the car. Additional cameras allow for greater distance coverage in forward and cross-traffic viewpoints.

THE POWER OF THE TEGRA X1 MOBILE SUPERCHIP
NVIDIA DRIVE PX is powered by dual Tegra X1 mobile superchips for exceptional performance and safety. Each Tegra X1 is capable of one teraFLOPS of processing power and includes a powerful, energy-efficient GPU, a quad-core ARM® v8 CPU, and dedicated audio, video, and image processors. For camera processing, Tegra X1 delivers an astonishing 1.3 gigapixels/second of throughput – enough to handle 12 two-megapixel cameras at frame rates of up to 60 fps for some cameras. The Tegra X1 processor also supports NVIDIA CUDA, bringing breakthrough supercomputing and deep learning processing to cars.

Coming by 2023, an exascale supercomputer in the U.S.
The ALMA correlator, one of the most powerful supercomputers in the world, has now been fully installed and tested at its remote, high altitude site in the Andes of northern Chile. This wide-angle view shows some of the racks of the correlator in the ALMA Array Operations Site Technical Building. This photograph shows one of four quadrants of the correlator. The full system has four identical quadrants, with over 134 million processors, performing up to 17 quadrillion operations per second. [December 2012]
Credit: European Southern Observatory (ESO)
NEW ORLEANS -- The U.S. has set 2023 as the target date for producing the next great leap in supercomputing, if its plans aren't thwarted by two presidential and four congressional elections between now and then. It may seem odd to note the role of politics in a story about supercomputing. But as these systems get more complex – and expensive – they compete for science dollars from a Congress unafraid of cutting science funding.
That political reality has frustrated the supercomputing community, and prompted an effort at this year's big supercomputing conference, SC14, here to educate researchers on the need to sell the benefits of supercomputing to a broader audience.
The theme of this year's conference: "HPC matters."
Supercomputing funding efforts in the U.S. are getting a boost from rising global competition from Europe, Japan, and China, which now has the world's fastest supercomputer. The U.S. Department of Energy last week announced $325 million for two 150-petaflop systems from IBM, with an option to build one of the systems out to 300 petaflops.
Dave Turek, vice president of technical computing at IBM, said these systems have the architectural capability to support 500 petaflops, or half an exaflop.
One exaflop equals one quintillion (1 followed by 18 zeros) calculations per second. It is the next great goal in supercomputing, following the U.S. achievement in 2008 of reaching one petaflop, or 1,000 teraflops, on a system built by IBM. A petaflop equals one quadrillion (1 followed by 15 zeros) calculations per second.
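The scales named above line up as successive factors of 1,000, which a few lines of arithmetic make concrete (a sketch using only the figures in the article):

```python
# Relationship between the FLOPS scales discussed in the article.
teraflop = 10**12  # one trillion calculations per second
petaflop = 10**15  # one quadrillion (1 followed by 15 zeros)
exaflop = 10**18   # one quintillion (1 followed by 18 zeros)

assert petaflop == 1000 * teraflop
assert exaflop == 1000 * petaflop

# The planned 150-petaflop DOE systems, as a fraction of an exaflop:
print(150 * petaflop / exaflop)  # 0.15
```

So even the new DOE machines reach only about 15% of the exascale target.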
The 2023 date "is when we are going to have an exascale system," William Harrod, Research Division Director for DOE's Advanced Scientific Computing Research program, said in an interview. While the U.S. has spent about $300 million so far on the next generation of systems, that's a "low level," said Harrod.
Congress will have to approve more funding to advance research to meet the development timelines, he said. And while congressional support "looks good today," Harrod isn't predicting the future.
The technical challenges to building an exascale system are many. They include solving software problems to enable parallelism across what may be hundreds of thousands of compute cores; dealing with reliability and resiliency in an environment that will see ongoing core failures; and improving energy efficiency.
That last issue, energy efficiency, gets a lot of attention. For every megawatt of power, the annual cost is roughly $1 million. The 150-petaflop systems DOE has planned for 2017 will operate at about 10 MW.
The top researchers internationally acknowledge that there is competition to reach exascale, but there's also an understanding that software stack development is so complex that international cooperation is needed.
Although the Europeans are operating on a time frame that may be similar to the U.S., Japan had earlier announced a goal of reaching exascale by 2020. But Akinori Yonezawa, deputy director at the Riken Advanced Institute for Computational Science, said in an interview Tuesday that the goal is now to build a 200- to 600-petaflop system by 2020, not an exascale system.
Last month, Riken selected Fujitsu to develop the basic design for this system.
In 2008, the first U.S. petascale system came from IBM. If Moore's Law still applied to high-performance computing, the U.S. would reach exascale by 2018. But it became clear early on that the technical issues were too great to meet that date.
Exascale won't necessarily be an easy thing to agree on.
An exascale system could be built today by just connecting "a gazillion" GPUs, said IBM's Turek. "The question is what will it work on? What will it support?" he said.
Today, the Linpack benchmark, which measures a system's floating-point execution rate, is widely used to determine capability and ranking on the Top500 supercomputer list. But for an exascale system, Turek said, a more useful metric may be application performance: how much improvement the system delivers on real-world workloads.
Turek said the DOE systems IBM is building are a stepping stone to exascale. "It's a vehicle to mitigate risk, because we know there is a tremendous amount of learning and innovation that needs to take place," he said.

