Nvidia's GPU Technology Conference Keynote Liveblog

While we're all well aware that modern GPUs are great for playing Crysis (or in many cases today, Diablo III), graphics processors pack so much number-crunching power that they have plenty of other uses for Serious Business. We're in San Jose at Nvidia's GPU Technology Conference, where the focus is on developers and business uses for the graphics technology that many of us run inside our gaming rigs.

Nvidia co-founder and CEO Jen-Hsun Huang will be delivering the conference's keynote speech at 10:30 a.m. local time (1:30 p.m. EDT), and we'll be there to give you the blow-by-blow. Tune in then to follow along with us!

10:25 - We're seated and waiting for the keynote to start!

10:31 - Running a tad late? At least the MacBooks, Ultrabooks, and Asus Transformer Primes in the audience are ready to rock.

10:36 - The hall is nearing capacity... any time now.

10:42 - It's starting! Words are flashing up on the giant screen behind the stage about all the alternative uses for a GPU, such as driving on the moon or curing Alzheimer's. Basically, way more than just games, and many of them more important.

10:45 - Jen-Hsun Huang is on stage giving a rundown of what he's going to talk about, including something new. First, a recap of the progress of CUDA: it's downloaded about once a second, and it's now in 35 supercomputers. He hopes the top supercomputer will one day be running CUDA.

 

10:47 - He's very proud of the inroads CUDA has made in education, and now he's highlighting the number of technical papers published: more papers have been published on CUDA than on Hadoop or OpenMP.

 

10:50 - Back in 2007, Nvidia said that it had the only GPU at the Supercomputing expo in Reno, NV. Now he boasts that GPUs and CUDA are everywhere.

 

10:55 - Today will be about Kepler, Jen-Hsun confirms! Now a sizzle reel about the GTX 690. "GTX 690 is beautiful," he says. He's talking about how amazing the card is, all the while being "whisper quiet". This is definitely pillow talk about hardware.

10:58 - First up, graphics. Huang is talking all about physics simulations for next-gen graphics. He's demoing how GPUs can compute how things shatter. No longer will glass in your games shatter in canned, random ways; soon it will shatter realistically.

 

11:01 - Now on to raytracing, and how the movie industry spends hundreds of hours of computation on it. Huang is demoing realtime raytracing. Pretty impressive! Realtime raytraced lighting and reflections. How about realtime raytracing with fluid simulations? Yup.
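Why does raytracing eat hundreds of hours of computation? Every pixel fires rays that must be intersected with the scene, billions of independent tests, which is exactly the kind of work a GPU parallelizes well. A toy version of the simplest such test, ray-sphere intersection, sketched in Python (illustrative only; function and parameter names are our own):

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest sphere hit, or None.

    Solves |origin + t*direction - center|^2 = radius^2 for the smallest
    t >= 0. The direction vector is assumed to be unit length.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2.0 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0     # nearer of the two roots
    return t if t >= 0.0 else None

# One ray fired down the z-axis at a unit sphere 5 units away:
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # → 4.0
```

A renderer repeats this across every pixel, every bounce, and every light, which is where the hours go, and why moving it onto thousands of GPU cores makes realtime plausible.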

 

11:07 - A Fermi SM has 32 cores; a Kepler SMX has 192 cores, with 3x the energy-efficient performance. "Energy efficiency is the ultimate barrier," says Huang.

 

11:09 - Hyper-Q keeps your GPU busy. Fermi can handle just one work queue; Kepler can handle 32 concurrent work queues. Basically this amounts to more parallel work in flight and more work getting done.
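A toy model of why the extra queues matter: with a single hardware queue, small kernels submitted by many CPU processes serialize behind one another even when the GPU has idle capacity; with 32 queues they can overlap. This Python sketch is our own simplified model, not Nvidia's numbers, and assumes kernels in different queues overlap perfectly:

```python
def makespan(kernel_batches, hw_queues):
    """Toy completion-time model. Each batch is a list of kernel runtimes
    from one CPU process. Kernels sharing a hardware queue serialize;
    kernels in different queues are assumed to run fully in parallel.
    """
    # Round-robin the batches onto the available hardware queues.
    queues = [0.0] * hw_queues
    for i, batch in enumerate(kernel_batches):
        queues[i % hw_queues] += sum(batch)
    return max(queues)

batches = [[1.0, 1.0]] * 32           # 32 processes, 2 small kernels each
print(makespan(batches, 1))    # Fermi-style single queue → 64.0
print(makespan(batches, 32))   # Kepler Hyper-Q, 32 queues → 2.0
```

Real scheduling is messier, of course, but the shape of the win is the same: the device stops being bottlenecked on a single submission stream.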

11:13 - Along the same lines, now he's talking about dynamic parallelism. "With Kepler, every single thread can generate work for itself... Kernels can start new kernels, streams can start new streams."
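A rough CPU-side analogue of what that buys: work that decides for itself where more work is needed, with no round-trip to a host to launch it. A hedged Python sketch (adaptive interval subdivision; the names, tolerance, and depth cap are our own invention):

```python
def refine(interval, f, tol, depth=0):
    """Toy analogue of dynamic parallelism: each unit of work inspects its
    own result and spawns finer-grained work for itself where needed,
    the way a Kepler kernel can launch child kernels on-device.
    """
    a, b = interval
    mid = (a + b) / 2
    # Error estimate: how far f strays from a straight line on [a, b].
    err = abs(f(mid) - (f(a) + f(b)) / 2)
    if err < tol or depth > 12:
        return [interval]                 # fine enough: keep this cell
    # "Kernels can start new kernels": recurse on each half.
    return (refine((a, mid), f, tol, depth + 1) +
            refine((mid, b), f, tol, depth + 1))

cells = refine((0.0, 1.0), lambda x: x * x, tol=1e-3)
print(len(cells))  # → 16 cells for this uniformly curved function
```

On Fermi, every one of those "launches" would have meant returning control to the CPU; on Kepler the device keeps feeding itself.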

11:17 - Kepler computing: SMX for efficiency, Hyper-Q to keep the GPU loaded, and dynamic parallelism for the GPU to create work for itself. "These three ideas are core to Kepler's architecture."

11:18 - And now a demo of Kepler and ASTRONOMY! An N-body simulation that used to take an entire supercomputer can now run in real time on a GPU. Kepler is simulating some very advanced stuff in space that's going over our heads. But what we can tell is that it's simulating the next 3.8 billion years of interaction between the Milky Way and the Andromeda galaxy. We're on a collision course with Andromeda. But don't worry, we're talking 3.8 billion years from now.
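Direct N-body gravity maps naturally onto a GPU because every pairwise force can be computed independently. A minimal direct-sum step sketched in Python (illustrative only; the real galaxy demo uses far more sophisticated methods on CUDA, and the parameter names here are our own):

```python
def nbody_step(pos, vel, mass, dt, G=1.0, eps=1e-3):
    """One step of direct-sum N-body integration: O(n^2) pairwise gravity.
    Each body's acceleration is an independent sum over all other bodies,
    which is what makes this embarrassingly parallel. eps softens close
    encounters so forces stay finite.
    """
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = dx[0] ** 2 + dx[1] ** 2 + dx[2] ** 2 + eps ** 2
            f = G * mass[j] / (r2 * r2 ** 0.5)
            for k in range(3):
                acc[i][k] += f * dx[k]
    # Semi-implicit Euler: update velocity, then position.
    for i in range(n):
        for k in range(3):
            vel[i][k] += acc[i][k] * dt
            pos[i][k] += vel[i][k] * dt
    return pos, vel

# Two equal masses a unit apart drift toward each other along x:
pos, vel = nbody_step([[0, 0, 0], [1, 0, 0]],
                      [[0, 0, 0], [0, 0, 0]], [1.0, 1.0], 0.01)
```

On a GPU, the inner loop over `i` becomes one thread per body, which is how a single card can stand in for a supercomputer on this workload.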

 

11:24 - Jen-Hsun Huang: "We've identified dark matter!"

11:25 - 1 million simulations per second on Fermi; 10 million on Kepler.

 

11:26 - Two new Keplers: Tesla K10 - 3x single precision, 1.8x mem bandwidth, for imaging, signal, seismic. Available now.

Tesla K20 - 3x double precision, Hyper-Q, Dynamic parallelism, for CFD, FEA, Finance, Physics. Available Q4 2012.

11:28 - And now for something new. "Would it be great..." - quoting Steve Jobs? Three new technology announcements.

 

"The largest GPU we've ever built"

"A new GPU so small it can fit inside an iPad"

"A new GPU that we can all share."

 

11:30 - Kepler is the first GPU for cloud computing: virtual GPU, low-latency remote display, super energy efficient.

11:33 - Virtualizing the computing environment will enable BYOD in organizations by putting the computation power in the server.

"Citrix was the pioneer of virtual desktop"

The problem is that it's all a software PC with a software GPU. "Every time you change your scene, it regenerates it in software... Wouldn't it be great if we could virtualize the GPU? Add Kepler... Now every machine can have not only a CPU but also a GPU."

11:39 - Sumit Dhawan, Group VP and GM at Citrix, is on stage with Huang now. They're working on letting you use any device you want, trying to grow VDI further.

 

Showing off the speed of Windows on an iPad: 1,536 CUDA cores driving a tablet. GPU-accelerated Windows inside an iPad, with Autodesk Showcase on demo.

 

11:46 - Now on stage is ILM's Grady Cofer to talk visual effects. One frame in the Battleship movie was 40 terabytes of data. Directors want to see all the different angles, all with fluid movement.

Jen-Hsun wants to make a movie. They're working on a MacBook Air, but it's a virtualized Windows environment. Hah. Now showing a scene from The Avengers in Maya.

 

"Battleship was about 1,500 shots total." "100 shots a week." Demoing the shredder shots from the movie: all realtime tweaking of the special effects, all virtualized from his desktop.

11:58 - Jen-Hsun pulls back a curtain and shows off 100 virtualized environments running off a single server rack through Microsoft RemoteFX.

 

12:00 - Now joining Huang on stage is David Yen, SVP and GM of the Data Center Group at Cisco. Cisco's into cloud computing, and now it's in with Nvidia: the two are working on a server together.

12:05 - Huang's talking up VGX again, the virtual GPU. And now, on to games. Nvidia hasn't forgotten about the impact of GPUs on gaming... and now on streaming. Are we talking about OnLive and Gaikai?

A new product: GeForce Grid. "Utilizing the technologies I've showed you so far, but for games."

 

Now anything can run Crysis. But what about lag?

With a local console game, there's 100 ms of pipeline lag and then 66 ms in the display.

With gen-1 cloud gaming, it's an extra 30 ms for capture and encode, then 75 ms for the network, and then another 15 ms for decode.

Nvidia claims Gaikai powered by GeForce Grid can cut the cloud overhead down to about the same total as local: 50 ms pipeline, 10 ms capture, 30 ms network, 5 ms decode, 66 ms display.
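Summing the stage-by-stage numbers from the slides makes the claim concrete. A quick sketch (one assumption on our part: the gen-1 cloud "game pipeline" matches the local 100 ms figure, since Huang quoted the other stages as extras on top of it):

```python
# Latency budgets (ms) quoted on stage: local console vs first-gen cloud
# gaming vs Gaikai on GeForce Grid.
local      = {"game pipeline": 100, "display": 66}
cloud_gen1 = {"game pipeline": 100, "capture+encode": 30, "network": 75,
              "decode": 15, "display": 66}
grid       = {"game pipeline": 50, "capture+encode": 10, "network": 30,
              "decode": 5, "display": 66}

for name, budget in [("local", local), ("cloud gen 1", cloud_gen1),
                     ("GeForce Grid", grid)]:
    print(f"{name:>13}: {sum(budget.values())} ms total")
```

That works out to 166 ms local, 286 ms for gen-1 cloud, and 161 ms for GeForce Grid, which is the "about the same as local" claim in a nutshell: the savings come from a faster server-side pipeline and encode, plus local data centers shrinking the network hop.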

 

12:09 - Dave Perry, Gaikai founder and Earthworm Jim daddy, is taking the stage. Perry talks about his jealousy of movies: one piece of media can play on multiple different platforms, which just isn't possible for console games. This is where Gaikai comes in.

Gaikai has 88 data centers in various countries to keep latencies low wherever you are. Perry thinks he can beat Hollywood, since Gaikai can stream games from the likes of Walmart and Facebook. A bigger reach than Netflix.

12:16 - Now for a demo of Gaikai gaming on the Transformer Prime tablet. Loading up Hawken.

 

Hawken mech action playing on a Transformer Prime, with graphics far beyond what could ever be done locally on the Tegra 3. Streamed from Gaikai.

12:21 - Jen-Hsun Huang is wrapping up now. "Kepler is a very big deal for our company. It will take GPU accelerated computing to the next level, and for the first time in the cloud." And that's a wrap!

Read more from @MarcusYam on Twitter.

Comments from the forums
  • CaedenV
    If they are such firm believers in these other uses then why did they take out a lot of that functionality from the new Kepler series?
  • EzioAs
    CaedenV said:
    If they are such firm believers in these other uses then why did they take out a lot of that functionality from the new Kepler series?

    Good one. They'll have to answer that at the conference.
  • andrewfy
    GPUs are basically just processors with wide SIMD - Nvidia's GPUs use CUDA, which is easier to program in some ways (and harder in others) than the assembly-style intrinsics CPU SIMD usually has to be programmed with. The Daily Circuit did some analysis on this back in November -
    http://www.dailycircuitry.com/2011/11/128-bit-simd-is-dead-long-live-128-bit.html
    - the SIMD in CPUs stagnated for a long time at 128 bits but recently extended to 256 bits, and now 128-bit is really dead. The SIMD in GPUs was recently 512 bits wide (now 1024 bits wide), so the width advantage of GPUs is sometimes as low as 2x depending on the product cycle - and this is where most of the advantage of GPUs comes from (along with the memory bandwidth to keep the SIMD units fed).
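The width comparison in that last comment is easy to make concrete. A quick sketch (the 512/1024-bit GPU figures are the comment's own characterization, not an official spec; lane counts assume single-precision 32-bit floats):

```python
# Effective fp32 lanes per instruction at the SIMD widths the comment cites.
widths_bits = {
    "SSE-era CPU SIMD": 128,
    "256-bit CPU SIMD": 256,
    "older GPU SIMD": 512,
    "newer GPU SIMD": 1024,
}

for name, bits in widths_bits.items():
    print(f"{name}: {bits // 32} fp32 lanes")

# The "as low as 2x" point: GPU vs CPU width depends on which generation
# of each you compare.
print("512-bit GPU vs 256-bit CPU:", 512 // 256, "x")    # → 2 x
print("1024-bit GPU vs 256-bit CPU:", 1024 // 256, "x")  # → 4 x
```

Which is the commenter's point in numbers: raw SIMD width alone gives GPUs only a 2-4x edge at any given moment, so memory bandwidth and core counts carry much of the rest of the advantage.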