Sunday, 15 March 2015

Automotive

NVIDIA DRIVE™ PX

AUTO-PILOT CAR COMPUTER

The DRIVE PX platform is based on the NVIDIA® Tegra® X1 processor, enabling smarter, more sophisticated advanced driver assistance systems (ADAS) and paving the way for the autonomous car. 
Tegra X1 delivers an astonishing 1.3 gigapixels/second throughput – enough to handle 12 two-megapixel cameras at frame rates up to 60 fps for some cameras. It is equipped with 10 GB of DRAM memory and combines surround Computer Vision (CV) technology, extensive deep learning training, and over-the-air updates to transform how cars see, think, and learn.

DEEP LEARNING COMPUTER VISION

Conventional ADAS technology today can detect some objects, do basic classification, alert the driver, and in some cases, stop the vehicle. DRIVE PX takes this to the next level with the ability to differentiate an ambulance from a delivery truck or a parked car from one about to pull into traffic. The system can now inform the driver, not just get their attention with a warning. The car is not just sensing, but interpreting what is taking place around it—an essential capability for auto-piloted driving. 
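As a rough illustration of the classification idea, here is a toy sketch: a trained vision pipeline reduces each detection to a feature vector and assigns it to the nearest class learned during training. The classes, features, and centroid values below are invented for illustration; DRIVE PX itself runs deep neural networks on the GPU, not this nearest-centroid toy.

```python
import math

# Hypothetical feature centroids "learned" in training. Features might be
# things like apparent size, light-bar score, and a motion cue - all
# invented here purely for illustration.
CLASS_CENTROIDS = {
    "ambulance":       [0.9, 0.8, 0.1],
    "delivery_truck":  [0.9, 0.1, 0.1],
    "parked_car":      [0.4, 0.0, 0.0],
    "car_pulling_out": [0.4, 0.0, 0.9],
}

def classify(features):
    """Return the class whose centroid is nearest to the feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(CLASS_CENTROIDS, key=lambda c: dist(CLASS_CENTROIDS[c], features))

print(classify([0.88, 0.75, 0.15]))  # an ambulance-like detection
```

The point of the sketch is the shape of the problem: the same detection that older ADAS would label "vehicle" gets a finer-grained label the car can act on.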

SURROUND VISION WITH ADVANCED RENDERING

Conventional surround view systems show the driver a virtual view of the area around the car, but often have poor image quality due to warping effects from the fisheye camera lenses. DRIVE PX uses sophisticated structure-from-motion (SFM) and advanced stitching for better image rendering and reduced "ghosting", such as where a line on the pavement can appear in two places at once. Powerful graphics enable DRIVE PX to render a virtual car in the view with high-detail models and realistic lighting effects, so you see what looks like your car, rather than a generic or toy model.
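The ghosting fix comes down to how overlapping camera regions are merged at the seam. Here is a minimal sketch of seam feathering using 1-D intensity rows in place of real images; the function and values are illustrative, not NVIDIA's actual stitching pipeline.

```python
def feather_blend(left, right, overlap):
    """Stitch two 1-D intensity rows that share `overlap` pixels,
    cross-fading through the shared region instead of hard-cutting,
    which softens double-image ("ghosting") artifacts at the seam."""
    merged = left[:-overlap]
    for i in range(overlap):
        w = (i + 1) / (overlap + 1)  # blend weight ramps toward `right`
        merged.append((1 - w) * left[len(left) - overlap + i] + w * right[i])
    merged.extend(right[overlap:])
    return merged

# Two camera rows that disagree slightly in their shared two pixels:
row = feather_blend([10, 10, 10, 12], [8, 10, 10, 10], overlap=2)
print(row)
```

A real system first warps the fisheye images and aligns them with structure-from-motion before any blending; this sketch shows only the final cross-fade.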

SELF PARKING CAPABILITIES

For a car to park itself, it needs to build a 3D map of nearby objects in real time. DRIVE PX delivers the massive processing power to enable techniques like structure-from-motion (SFM) and simultaneous localization and mapping (SLAM) from four surround-view cameras that cover the immediate area around the car. Additional cameras allow for greater distance coverage in forward and cross-traffic viewpoints. 
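The mapping half of this can be sketched as an occupancy grid: convert each range measurement into a world coordinate and mark that cell occupied. The values below are invented for illustration; a real SLAM system also estimates the vehicle's own pose and fuses many frames over time.

```python
import math

def update_grid(grid, pose, bearings, ranges, cell=0.5):
    """Mark grid cells occupied from range measurements taken at `pose`.
    A bare-bones sketch of the mapping half of SLAM: each (bearing, range)
    pair becomes a world point, snapped to a coarse grid cell."""
    x0, y0, heading = pose
    for bearing, r in zip(bearings, ranges):
        a = heading + bearing
        x = x0 + r * math.cos(a)
        y = y0 + r * math.sin(a)
        grid[(round(x / cell), round(y / cell))] = True
    return grid

# Car at the origin facing +x; obstacles dead ahead at 2 m and to the left at 1 m:
grid = update_grid({}, (0.0, 0.0, 0.0), [0.0, math.pi / 2], [2.0, 1.0])
print(sorted(grid))
```

Each new camera frame would add more cells, and the planner then searches the free cells for a parking path.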

THE POWER OF THE TEGRA X1 MOBILE SUPERCHIP

NVIDIA DRIVE PX is powered by dual Tegra X1 mobile superchips for exceptional performance and safety. Each Tegra X1 is capable of one teraflops of processing power and includes a powerful, energy-efficient GPU, a quad-core ARM® v8 CPU, and dedicated audio, video, and image processors. The Tegra X1 processor also supports NVIDIA CUDA, bringing breakthrough supercomputing and deep learning processing to cars. 

Coming by 2023, an exascale supercomputer in the U.S.

The ALMA correlator, one of the most powerful supercomputers in the world, has now been fully installed and tested at its remote, high altitude site in the Andes of northern Chile. This wide-angle view shows some of the racks of the correlator in the ALMA Array Operations Site Technical Building. This photograph shows one of four quadrants of the correlator. The full system has four identical quadrants, with over 134 million processors, performing up to 17 quadrillion operations per second. [December 2012]
 Credit: European Southern Observatory (ESO)
NEW ORLEANS -- The U.S. has set 2023 as the target date for producing the next great leap in supercomputing, if its plans aren't thwarted by two presidential and four congressional elections between now and then. It may seem odd to note the role of politics in a story about supercomputing. But as these systems get more complex -- and expensive -- they compete for science dollars from a Congress unafraid of cutting science funding.
That political reality has frustrated the supercomputing community, and prompted an effort at this year's big supercomputing conference, SC14, here to educate researchers on the need to sell the benefits of supercomputing to a broader audience.
The theme of this year's conference: "HPC matters."


Supercomputing funding efforts in the U.S. are getting a boost from rising global competition from Europe, Japan and China, which now has the world's fastest supercomputer. The U.S. Department of Energy last week announced $325 million for two 150-petaflop systems from IBM, with an option on one system to build it out to 300 petaflops.
Dave Turek, vice president of technical computing at IBM, said these systems have the architectural capability to support 500 petaflops, or half an exaflop.
One exaflop equals one quintillion (1 followed by 18 zeros) calculations per second. It is the next great goal in supercomputing, following the U.S. achievement in 2008 of reaching one petaflop, or 1,000 teraflops, on a system built by IBM. A petaflop equals one quadrillion (1 followed by 15 zeros) calculations per second.
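Written out as arithmetic, the units in the story relate by factors of 1,000:

```python
# The performance units from the article, as powers of ten:
teraflop = 10 ** 12   # one trillion calculations per second
petaflop = 10 ** 15   # one quadrillion
exaflop  = 10 ** 18   # one quintillion

assert petaflop == 1000 * teraflop   # 1 PFLOPS = 1,000 TFLOPS
assert exaflop == 1000 * petaflop    # 1 EFLOPS = 1,000 PFLOPS

# An exascale machine would be roughly 6x the planned 150-petaflop DOE systems:
print(exaflop // (150 * petaflop))
```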
The 2023 date "is when we are going to have an exascale system," William Harrod, Research Division Director for DOE's Advanced Scientific Computing Research program, said in an interview. While the U.S. has spent about $300 million so far on the next generation of systems, that's a "low level," said Harrod.
Congress will have to approve more funding to advance research to meet the development timelines, he said. And while congressional support "looks good today," Harrod isn't predicting the future.
The technical challenges to building an exascale system are many. They include solving software problems to enable parallelism across what may be hundreds of thousands of compute cores; dealing with reliability and resiliency needs in an environment that will see ongoing core failures; and improving energy efficiency.
That last issue, energy efficiency, gets a lot of attention. For every megawatt of power, the annual cost is roughly $1 million. The 150-petaflop systems DOE has planned for 2017 will operate at about 10 MW.
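That rule of thumb makes the operating cost easy to estimate; the helper below is just the article's arithmetic written out.

```python
def annual_power_cost(megawatts, dollars_per_mw_year=1_000_000):
    """Rule of thumb from the article: each megawatt of sustained power
    costs roughly $1 million per year to supply."""
    return megawatts * dollars_per_mw_year

# The planned 150-petaflop DOE systems at ~10 MW:
print(annual_power_cost(10))  # dollars per year
```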
The top researchers internationally acknowledge that there is competition to reach exascale, but there's also an understanding that software stack development is so complex that international cooperation is needed.
Although the Europeans are operating on a time frame that may be similar to the U.S., Japan had earlier announced a goal of reaching exascale by 2020. But Akinori Yonezawa, deputy director at the Riken Advanced Institute for Computational Science, said in an interview Tuesday that the goal now is to build a 200- to 600-petaflop system by 2020, not an exascale system.
Last month, Riken selected Fujitsu to develop the basic design for this system.
In 2008, the first U.S. petascale system came from IBM. If Moore's Law still applied to high performance computing, the U.S. should reach exascale by 2018. But it became clear early on that the technical issues were too great to meet that date.
Exascale won't necessarily be an easy thing to agree on.
An exascale system could be built today by just connecting "a gazillion" GPUs, said IBM's Turek. "The question is what will it work on? What will it support?" he said.
Today, the Linpack benchmark, which measures a system's floating point rate of execution, is widely used to determine capability and ranking on the Top 500 supercomputer list. But for an exascale system, Turek said, a more useful metric may be application performance: how much improvement the system delivers for real-world use.
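For a sense of what "floating point rate of execution" means, here is a toy version of a Linpack-style measurement: time a dense matrix multiply and divide the operation count by the elapsed time. Linpack proper solves a dense linear system at vastly larger scale; this sketch only illustrates the counting.

```python
import time

def measured_flops(n=60):
    """Time a dense n-by-n matrix multiply in pure Python and report
    floating-point operations per second. Each output element costs
    n multiplies and n adds, so the total work is 2*n**3 operations."""
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    start = time.perf_counter()
    c = [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    elapsed = time.perf_counter() - start
    return (2 * n ** 3) / elapsed, c

rate, product = measured_flops()
print(f"{rate:.3e} FLOP/s")  # interpreted Python: a tiny fraction of a GPU's rate
```

Turek's point is that a headline number like this says little on its own; the same hardware can score well on Linpack and poorly on the applications scientists actually run.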
Turek said the DOE systems IBM is building are a stepping stone to exascale. "It's a vehicle to mitigate risk, because we know there is a tremendous amount of learning and innovation that needs to take place," he said.

Monday, 24 November 2014

A smart contact lens for diabetes sufferers

Globally, an estimated 285 million people have diabetes – a chronic disease that occurs when the pancreas does not produce enough insulin, or when the body cannot effectively use the insulin it produces. Its incidence is growing rapidly, and by 2030, the number of cases is predicted to almost double. By 2050, as many as one in three U.S. adults could be affected if current trends continue.

To keep their blood sugar levels under control, sufferers need to constantly monitor themselves. This can involve pricking their finger to get a blood sample, two to four times per day. For many people, managing this condition is therefore a painful and disruptive process.
To address this problem, Internet giant Google has announced it is developing a smart contact lens. This wearable tech will measure glucose levels in tears, using a tiny wireless chip and a miniaturized sensor embedded between two layers of soft contact lens material. When glucose levels fall below a certain threshold, tiny LED lights will activate to warn the wearer.
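The warning logic itself is simple to sketch. The thresholds below are invented for illustration; Google has not published the lens's actual trigger levels, and tear glucose runs far lower than blood glucose.

```python
# Hypothetical tear-glucose thresholds, purely for illustration:
LOW_THRESHOLD = 3.0    # below this, light the warning LED
HIGH_THRESHOLD = 8.0   # a high-side warning, also hypothetical

def led_warning(tear_glucose):
    """Return which warning ("low", "high", or None) the lens would show."""
    if tear_glucose < LOW_THRESHOLD:
        return "low"
    if tear_glucose > HIGH_THRESHOLD:
        return "high"
    return None

print(led_warning(2.1))  # reading below the low threshold
```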
Google admits it is still "early days" for this technology, but there is clearly great potential for improving the lives of diabetes sufferers around the world. To achieve their goal, they intend to partner with other technology companies who have previous experience of bringing products like this to market. 

Saturday, 25 October 2014

Maxwell notebook

GAMING

Maxwell Comes to Notebooks

NVIDIA recently unleashed an onslaught on the gaming world, an onslaught named Maxwell. We launched the new graphics architecture during GAME24, an unprecedented 24-hour celebration of gaming. And it blew away gamers across the globe.
We held onto one big secret, which we’re revealing today: the introduction of the GeForce GTX 970M and GeForce GTX 980M notebook GPUs.
Maxwell, the company’s 10th-generation GPU architecture, is undeniably the world’s most advanced. It solves some of the most complex lighting and graphics challenges in visual computing. And it does so with twice the energy efficiency of the previous generation. It’s a combination that will pay huge dividends in notebooks.
Why?
A Quick History Lesson
Let’s start with some history. NVIDIA’s 8th-generation GPU architecture, Fermi, delivered notebook performance at about 40% of its desktop equivalent in 2010. Kepler, our 9th-generation GPU, launched in 2012 and closed the gap to 60%, giving gamers 1080p resolution and “ultra” settings for the first time in a notebook.
With Maxwell, that gap shrinks to 80% of the desktop equivalent and pushes the resolution well beyond 1080p. It’s an astonishing achievement when you compare the thermal and power differences in a desktop tower and a notebook chassis.

Just like the generations preceding it, GeForce GTX 980M is the world’s fastest notebook GPU, a title NVIDIA has held for a long time. But how fast is it?
Maxwell doubles performance compared with the first Kepler notebook GPUs on “video card killers” like Battlefield 4 and Metro: Last Light. We’re pushing playable resolution to 2500×1400+ at ultra settings. But most notebooks don’t have a native resolution that high, and this is where NVIDIA gives you more than just killer frame rates.

DSR Delivers 4K-Quality Resolution
The GeForce GTX 980M and GTX 970M GPUs deliver a higher-fidelity gaming experience even on a standard 1080p display. Maxwell’s Dynamic Super Resolution (DSR) technology can render games at 4K or other high-end resolutions, then scale them down to the native resolution of the notebook’s display. The result is an image of much higher quality than one rendered directly at 1080p.
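The scaling-down step can be sketched as a box filter over each 2×2 block of the super-resolution render. This is a simplified stand-in: DSR's actual filter is a smoother 13-tap Gaussian, and the "image" here is a toy grid of intensities.

```python
def downsample_2x(image):
    """Average each 2x2 block of a high-resolution render down to one
    native pixel - the filtering step behind DSR-style rendering, shown
    here as a plain box filter for clarity."""
    h, w = len(image), len(image[0])
    return [[(image[2 * y][2 * x] + image[2 * y][2 * x + 1] +
              image[2 * y + 1][2 * x] + image[2 * y + 1][2 * x + 1]) / 4
             for x in range(w // 2)]
            for y in range(h // 2)]

# A 4x4 "render" collapsed to a 2x2 native image:
hi_res = [[0, 0, 4, 4],
          [0, 0, 4, 4],
          [8, 8, 0, 0],
          [8, 8, 0, 0]]
print(downsample_2x(hi_res))
```

Because every native pixel averages several rendered samples, edges that would alias at 1080p come out smoother, which is where the quality gain comes from.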
BatteryBoost Gets Better
A second ask from notebook gamers is the ability to untether from the wall socket and really game on battery. We’re addressing this with our next evolution of NVIDIA BatteryBoost. Instead of your notebook pushing every component to its max, BatteryBoost sets a maximum frame rate from 30 to 60 FPS. The driver-level governor takes over from there, running all your system components including CPU, GPU and memory at peak efficiency. All while maintaining a smooth, playable experience.
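The frame-rate cap is the heart of the idea: finish each frame, then sleep away whatever is left of the frame budget instead of immediately starting the next one. Here is a rough sketch; the real BatteryBoost governor also scales CPU, GPU and memory clocks, which this toy does not.

```python
import time

def run_capped(render_frame, target_fps=30, frames=3):
    """Render frames no faster than target_fps by sleeping through any
    leftover frame budget. The idle time is where the power saving
    comes from in a BatteryBoost-style governor."""
    budget = 1.0 / target_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                      # stand-in for real game work
        spent = time.perf_counter() - start
        if spent < budget:
            time.sleep(budget - spent)      # idle until the frame deadline

start = time.perf_counter()
run_capped(lambda: None, target_fps=30, frames=3)
elapsed = time.perf_counter() - start
print(f"3 frames took {elapsed:.3f}s (~0.100s expected at 30 FPS)")
```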
We’ve also made big improvements to BatteryBoost in the six months since its launch. The first thing you’ll notice is many more systems achieving playable frame rates on battery. This was the result of collaboration with OEMs to enhance on-battery performance.
Another big update is an improved governor to enhance battery savings. We also added features to GeForce Experience, allowing gamers to set specific game settings for use while on battery, along with a one-click optimize-for-battery button.
Anti-Aliasing Gets Amped
GeForce GTX 980M and 970M GPUs also get all the same cool technology that their desktop counterparts get. That means 30% more AA performance at the same quality with NVIDIA Multi-Frame Anti-Aliasing (MFAA).
They also support Voxel Global Illumination (VXGI) technology, which better depicts indirect lighting – including diffuse lighting, specular lighting and reflections. This enables gaming GPUs to deliver real-time dynamic global illumination for the first time.
All the features, performance and efficiency combine to make Maxwell the world’s most advanced GPU architecture. Over a dozen SKUs are now available with GeForce GTX 980M and 970M.
MSI has the GT72, GS70 and GS60 models. Asus is offering the G751. Gigabyte has the Aorus X7 and P35 models. Boutique vendors like AVADirect, MainGear and OriginPC are also selling gaming powerhouses with these new GPUs.
Check with OEMs in your region for exact shipping dates of their GeForce-based notebooks. For more information on notebook GPUs that feature the Maxwell architecture, visit NVIDIA’s web site.

NVIDIA® GPU

KEY INNOVATIONS

  • NVIDIA® GPU with up to 72 custom cores - Enjoy unique mobile device innovations in photography, media, gaming, and web—including High Dynamic Range (HDR) imaging, WebGL, and HTML5.
  • Quad-Core ARM Processor - NVIDIA Tegra 4 processor harnesses ARM's most advanced CPU cores ever, plus a second-generation battery-saver core, to deliver record levels of performance and battery life. The ARM Cortex-A15 CPU is the engine behind Tegra 4, while Tegra 4i is powered by the new ARM Cortex-A9 r4 CPU—which was defined by ARM with help from NVIDIA—and is the most efficient CPU core in its class.
  • Variable SMP - NVIDIA Variable SMP architecture enables four performance cores to be used for max burst, when needed, with each core independently and automatically enabled and disabled based on workload. The single battery-saver core handles low-power tasks like active standby, music, and video playback, and is fully transparent to the OS and applications.
  • Computational Photography Camera - Never miss that "once in a lifetime" shot while you turn on features like HDR, then try to hold still enough to properly capture the scene. The innovative new computational photography mobile architecture fuses together the processing power of the CPU, GPU, and ISP to dramatically enhance mobile imaging. This enables the first Always-on HDR camera with features like live HDR preview, instant HDR photos, HDR video, HDR burst, HDR flash, and HDR panorama, as well as the first phone-based Tap-to-Track feature.
  • NVIDIA i500 LTE Modem - The i500 is a full LTE modem and supports any Tegra-powered device as a separate but complementary chipset. Tegra 4i is a single-chip processor that delivers a full application processor and integrates an optimized version of the i500 modem.
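As a rough sketch of the Variable SMP idea, here is a toy model that picks which cores power on for a given load. The thresholds are invented for illustration; the real decision is made in hardware and driver logic, transparent to the OS and applications.

```python
def schedule_cores(load, performance_cores=4):
    """Choose which cores power on for a normalized load (0.0-1.0).
    The lone battery-saver core handles light work alone; performance
    cores switch on one at a time as demand grows, and the two core
    types never run at the same time (matching Variable SMP's design).
    The 0.1 cutoff is a hypothetical threshold, not NVIDIA's."""
    if load < 0.1:                       # standby, music, video playback
        return {"battery_saver": 1, "performance": 0}
    active = min(performance_cores, max(1, round(load * performance_cores)))
    return {"battery_saver": 0, "performance": active}

print(schedule_cores(0.05))  # light load: battery-saver core only
print(schedule_cores(1.0))   # max burst: all four performance cores
```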