Nvidia Reveals More About Cloud Gaming Service: Grid
Enables 3D gaming with graphics rendered in the cloud and streamed to devices.

In addition to announcing Project Shield, an Android-powered portable gaming system, Nvidia further detailed at CES 2013 a cloud gaming system it originally talked about at GTC in 2012.
Called Grid, it sports a server stack designed to optimize computer graphics. In development for five years, the stack boasts a batch of graphics processing units, enabling 3D games to be rendered in the cloud and streamed to client devices.
Users can start playing a game on one device, such as a tablet, and then continue where they left off on another device, such as a desktop. Currently in its trial phase, Nvidia Grid will be sold to MSOs (multiple-system operators) and partners.
The company had initially discussed the Grid during its developer conference in May, where it said it would offer technology that would allow its chips to be shared through a number of devices via the internet.
If you're playing a game with a low update frequency (no FPS, no RTS, no ARPG, no strategy with real-time components) then it may be acceptable... otherwise it's going to be a tough sell.
Clearly, if you're smart enough to do all that math, you will have figured out that you're not the target market here.
So, the only objective of your post is boasting. Is that it, in a nutshell?
From the example, it is intended not for gaming but for design work, which doesn't need the same speed as real-time gaming; that speed would be nice but isn't necessary for us to have. So please, d_kuhn, don't get them mixed up.
My own point: this seems like a good idea, unless they charge an arm and a leg for it, as I won't have to upgrade as often just to use the current version of 3ds Max, Photoshop, Blender, and others, which take seemingly a ton of resources just to open any picture but do a wonderful job with it once opened.
I'd love to have a system where I could run my own cloud over my own gigabit network using my own games. Then you could play StarCraft on your tablet, or Far Cry 3 with cranked-up graphics on your TV, driven through your phone.
mmm... boasting about what? Any PC from the last four or more years will be able to pump FAR more pixels than an internet stream can manage. What I was doing was pointing out that, once again, 'the cloud' is being pointed at an application it's not well suited to perform (other than for a limited subset of apps). The issue for those of us who aren't 'the target audience' is that if game manufacturers decide it's a good idea... we'll all be playing crappy internet games, and at that point we'll be begging for a return to the era when games were designed for consoles.
I agree that for something like a turn-based game this would likely work fine; lag isn't an issue and the screen isn't as dynamic... but I found it interesting that they were showing an FPS in their advert. Also, in their published teasers they showed a TV used as a gaming platform, so I'd say they're doing ALL the rendering server side and streaming the fully rendered video; at least that's the level of capability they're advertising.
That isn't exactly a fair comparison. The information exchanged between your CPU and GPU is not the same as the information transferred between your GPU and monitor.
In a cloud gaming setup, you have several streams of data that need to be transferred. You have the information from the human interface device(s) going to the server, and then the audio and video streams coming back from the server. The HID information is likely going to require negligible bandwidth, and the audio bandwidth is going to be small compared to the video bandwidth.
According to http://www.emsai.net/projects/widescreen/bandwidth/ 2560x1440 @ 60 Hz works out to 7.87 Gbit/s (gigabits, with a lower-case b). This number is much lower than the 4 GB/s PCIe interconnect you referenced. This is uncompressed; compression can greatly reduce it, but adds additional latency.
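As a sanity check on that uncompressed figure, here is a quick sketch counting only raw pixel data at an assumed 24 bits per pixel; the linked page's 7.87 Gbit/s number is somewhat higher, presumably because it also accounts for blanking intervals and link-level encoding overhead:

```python
# Raw pixel bandwidth for 2560x1440 @ 60 Hz, 24-bit colour.
# Pixel data only; display links add blanking and encoding overhead.
width, height, fps, bits_per_pixel = 2560, 1440, 60, 24

raw_bps = width * height * fps * bits_per_pixel
print(f"{raw_bps / 1e9:.2f} Gbit/s raw")      # ~5.31 Gbit/s of pixel data
print(f"{raw_bps / 8 / 2**30:.2f} GB/s raw")  # ~0.62 GB/s
```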
Let's assume that they are using H.264 compression for the video. According to http://stackoverflow.com/questions/5024114/suggested-compression-ratio-with-h-264 the formula is [image width] x [image height] x [framerate] x [motion rank] x 0.07 = [desired bitrate], where the image width and height are expressed in pixels, and the motion rank is an integer between 1 and 4: 1 being low motion, 2 medium motion, and 4 high motion (motion being the amount of image data that is changing between frames; see the linked document for more information).
Video games tend to be very fast paced. As a result, in order to make the game playable, let's assign a motion rank of 4. That leaves us with:
2560 x 1440 x 60 x 4 x 0.07 ≈ 6.19 x 10^7; read as bytes per second (as the conversion here does), that is 59.0625 MB/s, or 472.5 Mbps.
In other words, to get a quality almost as good as what you have now, you would need roughly a 500 Mbps internet connection all to yourself. You could do this if you lived in Kansas City; you might even be able to get this kind of bandwidth at your local university. Either way, this is something that could be possible on a wide scale in the future.
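The rule-of-thumb calculation above can be reproduced in a few lines. Note that the conversion mirrors this comment's arithmetic, which treats the formula's output as bytes per second; the Stack Overflow answer itself states the result in bits per second, which would give a figure roughly 8x smaller:

```python
# H.264 bitrate estimate via the rule-of-thumb formula from the linked
# Stack Overflow answer: width x height x framerate x motion_rank x 0.07.
width, height, fps, motion_rank = 2560, 1440, 60, 4

rate = width * height * fps * motion_rank * 0.07  # ~6.19e7
# Treating the result as bytes/s, as the comment above does:
mb_per_s = rate / 2**20        # ~59.06 MB/s
mbit_per_s = rate * 8 / 2**20  # ~472.5 Mbps
print(f"{rate:.4g} -> {mb_per_s:.2f} MB/s -> {mbit_per_s:.1f} Mbps")
```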
That is completely ignoring the issue of latency.
You can do this with virtualized GPUs. You could access your PC's card from anywhere in the building. AMD currently doesn't have this on the books, but NV is just about there; exactly as you are saying here.
http://www.nvidia.com/object/cloud-gaming.html
http://www.nvidia.com/object/vdi-desktop-virtualization.html
Not sure what the cost will be, but eventually this will be a commodity item used cheaply in the home: one powerful video card to rule your whole house. I'm thinking it will be aimed at enterprise first (workstations plus regular users able to use powerful graphics), but it will migrate to our houses eventually. This easily gets them to sell more GPUs, as many will want ONE feeding the whole house (or a few?). Your experience is the same on all devices because they all run off the same GPU, which also makes things far easier for programmers eventually, and cheaper, as they could target far fewer devices, since basically one is fronting for the rest.
This is no different from what Playon does, letting your PC do the work and just displaying the movie on your Roku. The Roku isn't doing the work; the PC in the other room is, before streaming it. It strains my 3 GHz dual core somewhat if I have alt.binz and some other crap running, though. VLC can do something similar without Playon. They're getting better at it, but Playon works pretty well until they catch up.
You have one problem in that math.
Record uncompressed 1080p and then compress it by an okay amount.
One minute of uncompressed (in any way, shape, or form) video at about 30 fps comes to well over 1 GB.
Now compress it to an okay extent and it can be as low as 20 MB, with very little way to see the difference; I mean, most people would have to freeze-frame it to be able to tell.
I just had to point that out because you were talking about uncompressed video and not taking an unnoticeable amount of compression into account. Granted, I believe they would compress it further than unnoticeable, but that isn't the point I was trying to make.
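A rough check of those 1080p numbers, assuming 24-bit colour at 30 fps: one minute of raw video is indeed well over 1 GB (in fact around 10 GB), so a ~20 MB compressed minute implies a compression ratio in the hundreds:

```python
# One minute of uncompressed 1080p at 30 fps, 3 bytes per pixel,
# versus a ~20 MB compressed version as described above.
width, height, fps, seconds = 1920, 1080, 30, 60

raw_bytes = width * height * 3 * fps * seconds
raw_gb = raw_bytes / 2**30            # ~10.4 GB per minute raw
ratio = raw_bytes / (20 * 2**20)      # vs. ~20 MB compressed
print(f"raw: {raw_gb:.1f} GB/min, compression ratio ~{ratio:.0f}:1")
```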
It's not really an accurate comparison. Yes, the GPU has a huge amount of data bandwidth, but that is for internal calculation and rendering. The bandwidth required to display the final image on your screen is much smaller. Let's look at how the final rendered image is sent to your monitor once it has been processed by the GPU:
An HDMI cable (for example) will transfer a maximum of 10.2 Gbit (1.275 gigabytes) per second. In this instance it is transferring the raw 2D image output of your GPU to your screen at whatever frame rate you are rendering at. Granted, this is still a lot of data (too much for an internet connection). Now we can take this two-dimensional rendered image signal, cap it at 30 or 60 fps, and compress it with an efficient video codec. H.264, for example, will look pretty darn good at 12 Mbit/s or above. Not really "gamer" quality, but definitely good enough for most users out there.
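To put that 12 Mbit/s figure in perspective, here is a sketch comparing it against the raw pixel stream for an assumed 1080p signal at 60 fps and 24-bit colour; the implied compression ratio is on the order of 250:1:

```python
# Raw 1080p60 pixel bandwidth vs. a 12 Mbit/s H.264 stream.
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24

raw_bps = width * height * fps * bits_per_pixel  # ~2.99 Gbit/s
h264_bps = 12e6                                  # 12 Mbit/s H.264
print(f"raw ~{raw_bps / 1e9:.2f} Gbit/s, "
      f"compression ratio ~{raw_bps / h264_bps:.0f}:1")
```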
And I mean this not in the "I need to play my MMO in bed or while pooping, via tablet/laptop" sense; I mean it in the "portable VR helmets and walking around the house with those on" sense.
But CELL never came true.
Latency is a problem, but it's a different problem... 500 Mbps would likely be fine most of the time, though it would still introduce artifacts in high-motion areas of the video; it's also 10-100x more bandwidth than the majority of users could sustain. Also, if we're talking about a university campus with, say, 100 concurrent gamers, the demand on the university internet drop would be nearly 6 gigabytes/s just for those users. If you're at home using your super-duper broadband connection, you'd be eating up network bandwidth at a constant rate of roughly 3.5 GB/min. I'd give you a month of that before your ISP shut you down. The 'generous' cap of 250 GB/month offered by some ISPs would be gone in about an hour of gaming.
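The data-cap math is easy to verify. A sketch using the ~472.5 Mbps figure from the calculation earlier in the thread shows a 250 GB monthly cap lasting a bit over an hour:

```python
# Back-of-the-envelope data-cap math for a ~472.5 Mbit/s stream
# (the figure from the H.264 calculation earlier in the thread).
stream_mbit_s = 472.5
cap_gb = 250  # a typical ISP monthly cap

bytes_per_s = stream_mbit_s * 2**20 / 8          # ~59.06 MiB/s
gb_per_min = bytes_per_s * 60 / 2**30            # ~3.5 GiB/min
minutes_to_cap = cap_gb * 2**30 / (bytes_per_s * 60)
print(f"{gb_per_min:.1f} GiB/min; a {cap_gb} GB cap "
      f"lasts ~{minutes_to_cap:.0f} minutes")
```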
It will be MANY years before the internet is able to deal with the overhead of many 500 Mbps media users... this service is much more likely to run closer to 1 Mbps, which would mean HEAVILY compressed video and greatly reduced resolution limits. We can see what that looks like by watching "HD" YouTube videos... which look like crap. Services like Netflix do better; their 'HD' content is close to DVD quality, but it's also (by today's standards) a bandwidth hog. At my house we don't have cable and get all our media from the net (Netflix, Hulu, etc...), and we routinely consume >300 GB/month (luckily our ISP is tolerant).
I tried OnLive (which I think Nvidia was targeting to buy earlier this year). There was a definite delay in responsiveness in NBA 2K12, but the graphics were impressive on my crappy laptop, which could never have rendered the game itself with its lame onboard video. I'm guessing that as network speeds continue to increase, eventually anyone will be able to play high-quality games on anything that can display video.
So instead of forking out a ton of cash every year or two to upgrade hardware, we'll be locked into subscription gaming.