
Nvidia's GPU Technology Conference Keynote Liveblog

By Marcus Yam | Source: Tom's Hardware US

Nvidia shows the world that GPUs are fun, but not always about games.

While we're all well aware that modern GPUs are great for playing Crysis (or, in many cases today, Diablo III), graphics processors pack so much number-crunching power that they have many other uses for Serious Business. We're in San Jose at Nvidia's GPU Technology Conference, where the focus is on developers and business uses for the graphics technology that many of us run inside our gaming rigs.

Nvidia co-founder and CEO Jen-Hsun Huang will be delivering the conference's keynote speech at 10:30 a.m. local time (1:30 p.m. EDT), and we'll be there to give you the blow-by-blow. Tune in then to follow along with us!

10:25 - We're seated and waiting for the keynote to start!

10:31 - Running a tad late? At least our MacBook, Ultrabook, and Asus Transformer Prime are ready to rock.

10:36 - The hall is nearing capacity... any time now.

10:42 - It's starting! Words are flashing up on the giant screen behind the stage about all the alternative uses for a GPU, such as driving on the moon or curing Alzheimer's. Basically, way more than just games, and many of them more important.

10:45 - Jen-Hsun Huang is on stage and he's giving a rundown of what he's going to talk about, including something new. First, the progress of CUDA: he says CUDA is downloaded about once a second, and it's now in 35 supercomputers. He hopes that the top supercomputer will be running CUDA.

 

10:47 - He's very proud of the inroads that CUDA has made in education systems, and now he's highlighting the number of technical papers published. CUDA has more papers published on it than Hadoop and OpenMP.

 

10:50 - Back in 2007, Nvidia said that it had the only GPU at the Supercomputing expo in Reno, NV. Now he boasts that GPUs and CUDA are everywhere.

 

10:55 - Today will be about Kepler, confirms Jen-Hsun! Now a sizzle reel about the GTX 690. "GTX 690 is beautiful," he says. He's talking about how amazing the card is, and how it stays "whisper quiet" all the while. This is definitely pillow talk about hardware.

10:58 - First up, graphics. Huang is talking all about physics simulations for next-gen graphics. He's demoing how GPUs can compute how things shatter. No longer will glass in your games just shatter randomly; soon it will shatter realistically.

 

11:01 - Now on to raytracing, and how the movie industry spends hundreds of hours of computation on it. Now Huang is demoing realtime raytracing. Pretty impressive! Realtime raytraced lighting and reflections. How about realtime raytracing with fluid simulations? Yup.

 

11:07 - A Fermi SM has 32 cores; the Kepler SMX has 192 cores, with 3x the energy-efficient performance. "Energy efficiency is the ultimate barrier," says Huang.

 

11:09 - Hyper-Q keeps your GPU busy. Fermi can handle just one work queue; Kepler can handle 32 concurrent work queues. Basically this amounts to massively parallel processing and getting more work done.
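(For the CUDA-curious: here's a minimal sketch we put together to show what feeding a GPU multiple work queues looks like from the host side. It is not Nvidia's demo code, and the kernel, stream count, and buffer sizes are purely illustrative. Each host stream submits its own kernels, and Hyper-Q's 32 hardware queues are what let those streams actually overlap on Kepler instead of serializing behind Fermi's single queue.)

```cuda
#include <cuda_runtime.h>

// Placeholder kernel; any independent per-stream work would do.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main()
{
    const int kStreams = 8;        // independent work queues from the host (illustrative)
    const int kElems   = 1 << 20;  // elements per stream (illustrative)

    cudaStream_t streams[kStreams];
    float *buffers[kStreams];

    for (int s = 0; s < kStreams; ++s) {
        cudaStreamCreate(&streams[s]);
        cudaMalloc(&buffers[s], kElems * sizeof(float));
    }

    // Each stream enqueues its own kernel. With Hyper-Q, these submissions
    // go to separate hardware queues and can run concurrently on the GPU.
    for (int s = 0; s < kStreams; ++s)
        scale<<<(kElems + 255) / 256, 256, 0, streams[s]>>>(buffers[s], kElems, 2.0f);

    cudaDeviceSynchronize();

    for (int s = 0; s < kStreams; ++s) {
        cudaFree(buffers[s]);
        cudaStreamDestroy(streams[s]);
    }
    return 0;
}
```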

11:13 - Along the same lines, now he's talking about dynamic parallelism. "With Kepler, on every single thread, it can generate work for itself... Kernels can start new kernels, streams can start new streams."
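(Again for the curious: CUDA's dynamic parallelism lets a kernel launch child kernels directly from the device. The sketch below is our own illustration, not Nvidia's code; the kernel names and launch sizes are hypothetical, and it assumes a compute capability 3.5+ GPU compiled with relocatable device code, i.e. -rdc=true.)

```cuda
#include <cuda_runtime.h>

// Hypothetical child kernel doing extra work on one region.
__global__ void refineRegion(int region)
{
    // ... refine the region here ...
}

// Parent kernel: each thread inspects its region and, if needed,
// launches a child grid itself -- no round trip to the CPU.
__global__ void parentKernel(const int *needsRefinement, int numRegions)
{
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r < numRegions && needsRefinement[r])
        refineRegion<<<1, 64>>>(r);   // device-side launch (dynamic parallelism)
}

int main()
{
    const int numRegions = 256;
    int *d_flags;
    cudaMalloc(&d_flags, numRegions * sizeof(int));
    cudaMemset(d_flags, 1, numRegions * sizeof(int));  // pretend every region needs work

    parentKernel<<<1, numRegions>>>(d_flags, numRegions);
    cudaDeviceSynchronize();

    cudaFree(d_flags);
    return 0;
}
```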

11:17 - Kepler computing: SMX for efficiency, Hyper-Q to keep the GPU loaded. Dynamic parallelism, for the GPU to create work itself. "These three ideas are core to Kepler's architecture."

11:18 - And now a demo for Kepler and ASTRONOMY! An N-body simulation that used to take an entire supercomputer can now run in real time on a GPU. Kepler is simulating some very advanced stuff in space that's going over our heads, but what we can tell is that it's simulating the interaction between the Milky Way and the Andromeda galaxy over the next 3.8 billion years. We're on a collision course with Andromeda. But don't worry, we're talking 3.8 billion years from now.
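(The galaxy demo is an N-body gravity simulation. As a rough sketch of what the core of such a simulation looks like, here's a naive all-pairs force kernel with softening, written by us for illustration; it is not Nvidia's demo code, and the particle count, time step, and softening value are arbitrary.)

```cuda
#include <cuda_runtime.h>

// One step of a naive all-pairs gravitational N-body simulation.
// pos[i] holds (x, y, z, mass); vel[i] holds (vx, vy, vz, unused).
__global__ void nbodyStep(const float4 *posIn, float4 *posOut, float4 *vel,
                          int n, float dt, float softening)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float4 pi = posIn[i];
    float ax = 0.0f, ay = 0.0f, az = 0.0f;

    // All-pairs gravity: O(n^2) interactions per step.
    for (int j = 0; j < n; ++j) {
        float4 pj = posIn[j];
        float dx = pj.x - pi.x, dy = pj.y - pi.y, dz = pj.z - pi.z;
        float distSqr = dx * dx + dy * dy + dz * dz + softening;
        float invDist = rsqrtf(distSqr);
        float s = pj.w * invDist * invDist * invDist;  // m_j / r^3
        ax += dx * s;  ay += dy * s;  az += dz * s;
    }

    // Simple Euler integration; positions are double-buffered to avoid races.
    float4 v = vel[i];
    v.x += ax * dt;  v.y += ay * dt;  v.z += az * dt;
    vel[i] = v;
    posOut[i] = make_float4(pi.x + v.x * dt, pi.y + v.y * dt, pi.z + v.z * dt, pi.w);
}

int main()
{
    const int n = 4096;  // particle count (illustrative)
    float4 *posA, *posB, *vel;
    cudaMalloc(&posA, n * sizeof(float4));
    cudaMalloc(&posB, n * sizeof(float4));
    cudaMalloc(&vel,  n * sizeof(float4));
    cudaMemset(posA, 0, n * sizeof(float4));
    cudaMemset(vel,  0, n * sizeof(float4));

    // One step; a real run would load a galaxy model, loop over many
    // steps, and swap posA/posB each iteration.
    nbodyStep<<<(n + 255) / 256, 256>>>(posA, posB, vel, n, 0.01f, 1e-4f);
    cudaDeviceSynchronize();

    cudaFree(posA);  cudaFree(posB);  cudaFree(vel);
    return 0;
}
```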

 

11:24 - Jen-Hsun Huang: "We've identified dark matter!"

11:25 - 1 million simulations per second on Fermi; 10 million on Kepler.

 

11:26 - Two new Keplers: Tesla K10 - 3x the single-precision performance, 1.8x the memory bandwidth; for imaging, signal processing, and seismic work. Available now.

Tesla K20 - 3x the double-precision performance, plus Hyper-Q and dynamic parallelism; for CFD, FEA, finance, and physics. Available Q4 2012.

11:28 - And now for something new. "Would it be great..." - quoting Steve Jobs? Three new technology announcements.

 

"The largest GPU we've ever built"

"A new GPU so small it can fit inside an iPad"

"A new GPU that we can all share."

 

11:30 - Kepler is the first GPU for cloud computing: virtual GPU; low-latency remote display; super energy efficient.

11:33 - Virtualizing the computing environment will work for BYOD in organizations, putting that computation power into the server.

"Citrix was the pioneer of virtual desktop"

The problem is that it's all a software PC with a software GPU. "Every time you change your scene, it regenerates it in software... Wouldn't it be great if we could virtualize the GPU? Add Kepler... Now every machine can not only have a CPU but also a GPU."

11:39 - Sumit Dhawan, Group VP and GM at Citrix, is on stage with Huang now. They're working on letting you use any device you want, trying to grow VDI further.

 

Showing off the speed of Windows on an iPad. 1536 CUDA cores serving a tablet. GPU-accelerated Windows inside an iPad; Autodesk Showcase is on demo.

 

11:46 - Now on stage is ILM's Grady Cofer to talk visual effects. One frame in the Battleship movie was 40 terabytes of data. Directors want to see all the different angles, all with fluid movement.

Jen-Hsun wants to make a movie. They're working on a MacBook Air, but it's a virtualized Windows environment. Hah. Now showing a scene from The Avengers in Maya.

 

"Battleship total was about 1500 shots" "100 shots a week" - Demoing the shredder shots from the movie. All realtime tweaking to the special effects, all virtualized from his desktop.

11:58 - Jen-Hsun pulls a curtain and shows off 100 virtualized environments running off a single server rack, through Microsoft's VFX.

 

12:00 - Now joining Huang on stage is David Yen, SVP and GM of the Data Center Group at Cisco. Cisco's into cloud computing, and now working with Nvidia: the two companies are building a server together.

12:05 - Huang's talking up VGX again - the virtual GPU. And now, on to games. Nvidia hasn't forgotten about the impact of GPUs on gaming... and now on streaming. Are we talking about OnLive and Gaikai?

A new product: GeForce Grid. "Utilizing the technologies I've shown you so far, but for games."

 

Now anything can run Crysis. But what about lag?

With a local console game, there's about 100 ms of pipeline lag and then 66 ms in the display.

With first-generation cloud gaming, it's an extra 30 ms for capture and encode, 75 ms for the network, and another 15 ms for decode.

Nvidia claims Gaikai powered by GeForce Grid can bring the cloud down to about the same as local: a 50 ms pipeline, 10 ms capture, 30 ms network, 5 ms decode, and 66 ms display. Adding those stages up, that's roughly 161 ms end to end for GeForce Grid versus about 166 ms for a local console and 286 ms for gen-1 cloud gaming.

 

12:09 - Dave Perry, daddy of Gaikai and Earthworm Jim, is taking the stage. Perry talks about being jealous of movies: one piece of media can play on many different platforms. For console games, that's not possible. This is where Gaikai comes in.

Gaikai has 88 data centers in various countries to keep local latencies low. Perry thinks he can beat Hollywood, since Gaikai can stream games from Walmart and Facebook, giving it a bigger reach than Netflix.

12:16 - Now for a demo of Gaikai gaming on the Transformer Prime tablet. Loading up Hawken.

 

Hawken mech action playing on a Transformer Prime, with graphics far beyond what could ever be done locally on the Tegra 3. Streamed from Gaikai.

12:21 - Jen-Hsun Huang is wrapping up now. "Kepler is a very big deal for our company. It will take GPU accelerated computing to the next level, and for the first time in the cloud." And that's a wrap!

Read more from @MarcusYam on Twitter.

Comments
  • 18 Hide
    CaedenV , May 15, 2012 1:12 PM
    If they are such firm believers in these other uses then why did they take out a lot of that functionality from the new Kepler series?
  • 2 Hide
    EzioAs , May 15, 2012 1:25 PM
    Quote:
    If they are such firm believers in these other uses then why did they take out a lot of that functionality from the new Kepler series?


    Good one. They'll have to answer that at the conference
  • 1 Hide
    andrewfy , May 15, 2012 2:25 PM
    GPUs are basically just processors with wide SIMD - Nvidia's GPUs use CUDA, which is easier to program in some ways (and harder in others) than the assembly-style intrinsics that SIMD usually has to be programmed in. The Daily Circuit did some analysis on this back in November -
    http://www.dailycircuitry.com/2011/11/128-bit-simd-is-dead-long-live-128-bit.html
    - the SIMD in CPUs stagnated for a long time at 128 bits but recently extended to 256 bits, and now 128-bit is really dead. The SIMD in GPUs was recently 512 bits wide (now 1024 bits wide), so the width advantage of GPUs is sometimes as low as 2x depending on the product cycle - and this is where most of the advantage of GPUs comes from (along with the memory bandwidth to keep the SIMD units fed).
  • -1 Hide
    tomfreak , May 15, 2012 2:30 PM
    Can somebody ask them when the GK106/GTX650/660 will be out? I am getting impatient.
  • 4 Hide
    DRosencraft , May 15, 2012 3:38 PM
    Quote:
    If they are such firm believers in these other uses then why did they take out a lot of that functionality from the new Kepler series?


    Simple answer; most people don't need it, and those who do will buy a professional series GPU that doesn't need all the other stuff in a consumer card. They will likely be announcing new Quadros. I know a lot of people who work in 3D applications and in graphic design who are very interested to see what a new series of Quadro cards can do.
  • 1 Hide
    NEO3 , May 15, 2012 4:11 PM
    Where's kepler's support for iray on 3dsmax?!
  • 2 Hide
    computernerdforlife , May 15, 2012 5:01 PM
    Diablo 3 hardcores will never read this article today. Guess where they'll be living for the next few days/weeks/months/years?
  • 2 Hide
    CaedenV , May 15, 2012 5:15 PM
    DRosencraftSimple answer; most people don't need it, and those who do will buy a professional series GPU that doesn't need all the other stuff in a consumer card. They will likely be announcing new Quadros. I know a lot of people who work in 3D applications and in graphic design who are very interested to see what a new series of Quadro cards can do.

    While this is true, and most professionals ought to be on Quadros instead of GeForce cards, there is still a market for people like me who opted for a relatively cheap $380 570 (on sale when I got it, currently $285), instead of the much more expensive $750 Quadro (again, at the time, while this card now goes for $430). I am on that border between 'extreme hobbyist' and 'entry level professional' where I do production work on my machine to the level where it is helpful to have the realtime CUDA features while editing, but cannot simply spend money on the parts that I want (besides, I do a fair amount of gaming and such as well which is better on the GeForce side). I am not complaining a whole lot as the 570 meets my needs at the moment, but next time I upgrade I would like to know that there is an in-between card that is still consumer focused, but has a few pro features.
  • 1 Hide
    CaedenV , May 15, 2012 5:20 PM
    @Rosencraft
    Put another way: What about a product for the growing number of video reviews on youtube and other such sites? They do a fair bit more editing than I do, yet as most of them are unpaid (or not well paid when getting started) something like a 570 with both game and editing features would be most helpful, and open up a lot more options for them without having to spend an arm and a leg. Once they get going and start raking in a bit of money then absolutely; Quadro is the way to go. But to start out on a budget the 570 was an excellent option at the time, and it does not look like it will be replaced on the nVidia side (though it looks like AMD is picking up the slack).
  • -5 Hide
    upgrade_1977 , May 15, 2012 6:51 PM
    Why try to sell something when you don't even have a product? I have been waiting for these cards for a long time now, and yet they're still out of stock everywhere....
  • 3 Hide
    blazorthon , May 15, 2012 7:12 PM
    caedenv@RosencraftPut in annother way: What about a product for the growing number of video reviews on youtube and other such sites? They do a fair bit more editing than I do, yet as most of them are unpaid (or not well paid when getting started) something like a 570 with both game and editing features would be most helpful, and open up a lot more options for them without having to spend an arm and a leg. Once they get going and start raking in a bit of money then absolutely; Quadro is the way to go. But to start out on a budget the 570 was an excellent option at the time, and it does not look like it will be replaced on the nVidia side (though it looks like AMD is picking up the slack).


    If you can do it with OpenCL instead of CUDA, then AMD has excellent offers for that market in the Tahiti based cards.
  • 0 Hide
    dennisburke , May 15, 2012 7:33 PM
    I was able to watch the live keynote at Nvidia and all I have to say is wow. I'm still happy with my Fermi for my PC needs at the moment, but if this cloud gaming takes off, I'll probably put my old 8600GT back in my computer. Not sure if Nvidia is shooting themselves in the foot here. Other than that, I have to admire Nvidia's vision for the future. Wish I would have bought stock three years ago.
  • 4 Hide
    redeemer , May 15, 2012 8:18 PM
    They want you to spend thousands, that's why. The 7970 has more compute power than any Quadro available today that costs an arm and a leg!
  • 3 Hide
    darkchazz , May 15, 2012 9:19 PM
    The way this guy does presentations reminds me of Steve Jobs.
    "Wow! Look at this. Isn't this amazing?"
  • 0 Hide
    vitornob , May 15, 2012 9:56 PM
    computernerdforlifeDiablo 3 hardcores will never read this article today. Guess where they're be living for the next few days/weeks/months/years?


    I'm a Diablo 3 hardcore and I'm reading this! (of course I'll not comment about the server maintenance going through the next half hour...)

    :) 
  • -1 Hide
    dragonsqrrl , May 15, 2012 10:05 PM
    hmmm... and to think I got thumbed down just a couple days ago for even suggesting that Nvidia might officially announce gk110 at their GPU Tech Conference. I mean seriously right? Nvidia announcing their Kepler derived compute oriented GPU at a conference targeted specifically at GPGPU computing? How ridiculous.

    http://www.tomshardware.com/forum/15606-55-nvidia-announces-quarterly-results-profits-dropped

    ... and the guy who claimed that "there is no gk110" got thumbed up 19+. Some of the users in the Tom's Hardware community just never cease to amaze me. But I just have to ask, where are you guys now?

    http://www.nvidia.com/content/PDF/kepler/NV_DS_Tesla_KCompute_Arch_May_2012_LR.pdf
  • 0 Hide
    dragonsqrrl , May 15, 2012 10:21 PM
    caedenvIf they are such firm believers in these other uses then why did they take out a lot of that functionality from the new Kepler series?

    As DRosencraft already suggested, Nvidia didn't remove compute functionality from Kepler, in fact they've expanded it. It's all still there, and the Kepler architecture is designed for compute performance and efficiency. Some compute functionality is just severely limited in gk104.
  • 0 Hide
    blazorthon , May 15, 2012 10:26 PM
    dragonsqrrlhmmm... and to think I got thumbed down just a couple days ago for even suggesting that Nvidia might officially announce gk110 at their GPU Tech Conference. I mean seriously right? Nvidia announcing their Kepler derived compute oriented GPU at a conference targeted specifically at GPGPU computing? How ridiculous.http://www.tomshardware.com/forum/ [...] ts-dropped... and the guy who claimed that "there is no gk110" got thumbed up 19+. Some of the users in the Tom's Hardware community just never cease to amaze me. But I just have to ask, where are you guys now?http://www.nvidia.com/content/PDF/ [...] 012_LR.pdf


    Quote:
    I think there is no gk110. It looks like releasing gtx680 as gk106 is marketing trick. It looks to me like Nvidia is doing paper lunches and selling rumors about powerful gk110 to convince as much people as possible to wait. They are trying to buy some time so that they can resolve supply issues without loosing customers. I just wonder why would they hold off a gk110 gpu that could literally kill AMD? Some say that there is no point..gk106 is powerful enough. But that's BS. GTX680 is great but is nothing more than what gtx580 was compared to 6970 and what gtx480 was compared to 5970...

    What I am trying to say is that it would be stupid not to use your advantages and let competitors catch up. Unless your competitors are 2 or 3 generations behind like AMD is to intel, but that's not a case in AMD-Nvidia head to head race


    That guy clearly said he thinks that there is no GK110, not that he says that there will not be a GK110. Beyond that, he is clearly talking about there not being a consumer card with GK110, i.e. no GTX 680 Ti, 685, or 685 Ti with it, not about there not being a compute card based on it. You sure are quick to criticize someone when what they said is not even in the same context as what you're talking about! Besides, he was wrong about pretty much everything else in his comment; I don't understand why he was voted up like this...

    For example, the difference between the GTX 680 and the Radeon 7970 (at 4MP resolutions, the resolutions where you would most likely use these cards) is much smaller than the difference between the GTX 580 and the Radeon 6970 at any resolution. The GTX 680 uses GK104, not GK106. The GTX 480 was slower than the Radeon 5970 and the two were not directly comparable because the 5970 is a dual GPU card, whereas the 480 is a single GPU card.

    It's funny that he was voted up, despite there being at least three or four people who called him out on his many mistakes. Regardless, he was still not even talking about the compute market, just the consumer gaming market, so you were still wrong about what he said.
  • 1 Hide
    blazorthon , May 15, 2012 10:35 PM
    dragonsqrrlAs DRosencraft already suggested, Nvidia didn't remove compute functionality from Kepler, in fact they've expanded it. It's all still there, and the Kepler architecture is designed for compute performance and efficiency. Some compute functionality is just severely limited in gk104.


    Nvidia took the GK104-equipped GTX 680 and GTX 670, and despite them being more than 50% faster than the GTX 580, they are only a little more than half as fast as the 580 for DP compute. The Kepler cores used in the consumer GPUs aren't even capable of DP compute, just SP. The consumer GPUs only have DP functionality at all because they have a few of the DP cores that only do DP math, so they don't help the gaming performance or other SP performance at all.

    The architecture for the consumer Kepler cores is not designed well for compute. GCN beats it greatly. For example, the 7970 is about 50% faster for SP math than the 680. The 7970 is almost six times faster than the 680 for DP performance, about three times faster than the 580 for DP performance. GCN is designed for compute and does it better too. I can't say the same for the compute oriented version of Kepler because I have yet to see benchmarks for it, but it's obviously better at compute than the consumer version. Regardless, to say that Kepler is good for compute when GCN beats it so badly just seems wrong. We'll have to see how the Pro versions of Kepler and GCN do against each other, but I have to say, I'm not seeing Kepler beat GCN.

    Nvidia will need to beat AMD with the software instead of the hardware, and really, that's not unlikely, but AMD still seems to have the better hardware, at least with what little info we have now.
  • 0 Hide
    dragonsqrrl , May 15, 2012 11:02 PM
    blazorthonRegardless, he was still not even talking about the compute market, just the consumer gaming market, so you were still wrong about what he said.

    I'm sorry but I have to disagree. I don't think that's what he was suggesting. Yes, he does speak within the context of gaming, but not because he thinks a gk110 based gaming card, in particular, doesn't exist. He implied that there probably isn't a gk110 (he makes no mention of a GTX680TI or GTX685), and questions the existence of the GPU itself because "gk106" (...) already provides adequate high-end performance in games. Basically, Nvidia doesn't have a gaming oriented incentive to produce a higher-end Kepler derivative, so why would they? He even manages to toss in a conspiracy theory for good measure, explaining the very existence of the gk110 rumors.

    And I'm not sure I understand the distinction in your first sentence. At least to me, they both seem to imply the same thing. Although I suppose you could make the argument that because gk110 is still in development, it therefore does not yet exist. But again, I don't think that's what he was trying to say.