Nvidia Reveals More About Cloud Gaming Service: Grid


As well as announcing Project Shield, an Android-powered portable gaming system, Nvidia used CES 2013 to further detail a cloud gaming system it first discussed at GTC in 2012.

Called Grid, it is built around a server stack designed to optimize computer graphics. In development for five years, the stack packs a large number of graphics processing units, enabling 3D games to be rendered directly in the cloud and streamed to players' devices.

Users can start playing a game on one device, such as a tablet, and then continue where they left off on another, such as a desktop. Currently in its trial phase, Nvidia Grid will be sold to multiple-system operators (MSOs) and partners.
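In broad terms, this is the standard cloud gaming split: the game runs and is rendered on server-side GPUs, while the client device only sends controller input and decodes a compressed video stream. The sketch below is purely illustrative Python with invented class and method names (CloudGameServer, ThinClient, and so on); it is not Nvidia's software, just the shape of the loop being described.

```python
# Illustrative sketch only: not Nvidia's Grid API, just the generic shape of a
# cloud gaming loop (render and encode on the server, stream video to a thin
# client that sends input back). All class and method names are invented.

from dataclasses import dataclass


@dataclass
class InputEvent:
    device: str      # e.g. "gamepad" or "touch"
    payload: bytes   # controller/touch state, a few bytes per frame


@dataclass
class EncodedFrame:
    frame_number: int
    data: bytes      # compressed video in a real system (H.264 or similar)


class CloudGameServer:
    """Stands in for the GPU server stack: runs the game, encodes each frame."""

    def __init__(self) -> None:
        self.frame_number = 0

    def step(self, event: InputEvent) -> EncodedFrame:
        # A real server would advance the game state, render on a server GPU,
        # then hardware-encode the frame before streaming it out.
        self.frame_number += 1
        rendered = f"frame {self.frame_number} after {event.device} input"
        return EncodedFrame(self.frame_number, rendered.encode())


class ThinClient:
    """Stands in for a tablet, TV or desktop: sends input, decodes video."""

    def play(self, server: CloudGameServer, frames: int) -> None:
        for _ in range(frames):
            event = InputEvent(device="gamepad", payload=b"\x00\x01")
            frame = server.step(event)      # one round trip over the network
            print(frame.data.decode())      # decode and display locally


if __name__ == "__main__":
    # Because the session state lives on the server, a different client could
    # attach later and pick up where this one left off.
    server = CloudGameServer()
    ThinClient().play(server, frames=3)
```

The trade-off this structure implies is the one debated in the comments below: every frame must be encoded, sent over the internet, and decoded before the player sees it.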

The company had initially discussed Grid at its GPU Technology Conference in May, where it said the technology would allow its chips to be shared across a number of devices via the internet.


  • hate machine
    I hope they can get the delay down to a manageable level.
  • dominatorix
    sow??
  • d_kuhn
    So let me get this straight... right now I play games rendered locally on my NVidia Graphics card at 2550x1440 at say 40-80fps. NVidia is thinking what I SHOULD be doing is letting them render on their servers then stream me the game graphics - so instead of having a 4 GBps (capital B) level pci-e pipe for my graphics, it'll all need to be stuffed through a 30 mbps (lowercase b, or ~ 1/8 of a capital B... or about 4 MBps). 1/1000 the bandwidth (and my net connection is pretty respectable). Sure a lot of the workload is simplified, but how much does an uncompressed 2550 screen take to stream at 30fps (Answer: 440 MB/s)... so that's completely out - what you'll get is either a low resolution screen or highly compressed high res (which doesn't look good).

    If you're playing a game with low update frequency (no fps, no rts, no arpg, no strategy with rt components) then it may be acceptable... otherwise it's going to be a tough sell.
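For anyone who wants to check the figures in the comment above, the arithmetic is easy to reproduce. A minimal sketch follows; the 32-bit color depth is an assumption, while the resolution, frame rate, link speed and PCI-e figure are the ones given in the comment.

```python
# Back-of-the-envelope reproduction of the bandwidth figures quoted above.
# Assumes 32-bit (4-byte) color per pixel; the resolution, frame rate, link
# speed and PCI-e figure are the ones given in the comment.

WIDTH, HEIGHT = 2550, 1440
BYTES_PER_PIXEL = 4            # 32-bit color (assumption)
FPS = 30
LINK_MBPS = 30                 # the "pretty respectable" home connection
PCIE_MB_PER_S = 4000           # the ~4 GB/s PCI-e pipe mentioned above

frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL
stream_mb_per_s = frame_bytes * FPS / 1e6      # uncompressed video, MB/s
link_mb_per_s = LINK_MBPS / 8                  # megabits -> megabytes

print(f"uncompressed 2550x1440 @ 30 fps: {stream_mb_per_s:.0f} MB/s")  # ~441 MB/s
print(f"30 Mbps link:                    {link_mb_per_s:.2f} MB/s")    # 3.75 MB/s
print(f"PCI-e pipe vs link:              {PCIE_MB_PER_S / link_mb_per_s:.0f}x")  # ~1067x
```

Either way the conclusion is the same: an uncompressed stream is two orders of magnitude larger than a typical home connection, so any cloud gaming service has to lean heavily on video compression.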
  • So let me get this straight... right now I play games rendered locally on my NVidia Graphics card at 2550x1440 at say 40-80fps.

    Clearly, if you're smart enough to do all that math, you will have figured out that you're not the target market here.

    So, the only objective of your post is boasting. Is that it, in a nutshell?
  • milesk182
    I tried OnLive before and it was exactly like d_kuhn says: highly compressed high resolution. I remember trying Unreal Tournament, and though the settings were maxed it looked like crap and there was mouse delay all over the place, running on a 50 Mbps connection. Unless this somehow shows that it can fix those issues, there's no way I'm switching from my dedicated hardware!
  • yumri
    @d_kuhn you are doing active rendering on your computer while playing a game, but you would be doing post rendering if you used the new setup they are proposing. Post rendering is faster than active rendering because it takes far less computing, so your video card doesn't have to work as much, if at all, since its job was already done on their cloud. You do have a point that it will not be good for real-time gaming on higher-end computers, where active rendering might be the faster of the two, but post rendering will be faster for lower-end computers like mine with only a 6150SE chip. Still, if Nvidia comes out with this, the gaming industry will have a bigger base for turn-based games and for people who have the connection to support real-time games.
    From the example, it is intended not for gaming but for design work, which doesn't need the same speed as real-time gaming; that speed would be nice but is not necessary. So please, d_kuhn, don't get the two mixed up.

    My own point now: this seems like a good idea, unless they charge an arm and a leg for it, since I won't have to upgrade as often just to use the current versions of 3ds Max, Photoshop, Blender, and others, which seemingly take a ton of resources just to open any picture but do a wonderful job with it once opened.
  • jkflipflop98
    They're so close, yet missing the mark.

    I'd love to have a system where I could run my own cloud over my own gigabit network using my own games. Then you could play StarCraft on your tablet, or Far Cry 3 with cranked-up graphics playing on your TV through your phone.
  • d_kuhn
    So let me get this straight... right now I play games rendered locally on my NVidia Graphics card at 2550x1440 at say 40-80fps.

    Clearly, if you're smart enough to do all that math, you will have figured out that you're not the target market here.

    So, the only objective of your post is boasting. Is that it, in a nutshell?

    mmm... boasting about what? Any PC from the last four years, or even older, will be able to pump FAR more pixels than an internet stream can manage. What I was doing was pointing out that once again 'the cloud' is being pointed at an application it's not well suited to perform (other than for a limited subset of apps). The issue for those of us who aren't 'the target audience' is that if game manufacturers decide it's a good idea, we'll all be playing crappy internet games, and at that point we'll be begging for a return to the era when games were designed for consoles.
  • d_kuhn
    yumri said:
    @d_kuhn you are doing active rendering on your computer while playing a game, but you would be doing post rendering if you used the new setup they are proposing. Post rendering is faster than active rendering because it takes far less computing, so your video card doesn't have to work as much, if at all, since its job was already done on their cloud. You do have a point that it will not be good for real-time gaming on higher-end computers, where active rendering might be the faster of the two, but post rendering will be faster for lower-end computers like mine with only a 6150SE chip. Still, if Nvidia comes out with this, the gaming industry will have a bigger base for turn-based games and for people who have the connection to support real-time games.
    From the example, it is intended not for gaming but for design work, which doesn't need the same speed as real-time gaming; that speed would be nice but is not necessary. So please, d_kuhn, don't get the two mixed up.

    My own point now: this seems like a good idea, unless they charge an arm and a leg for it, since I won't have to upgrade as often just to use the current versions of 3ds Max, Photoshop, Blender, and others, which seemingly take a ton of resources just to open any picture but do a wonderful job with it once opened.

    I agree that for something like a turn-based game this would likely work fine; lag isn't an issue and the screen isn't as dynamic... but I found it interesting that they were showing an FPS in their advert. Also, in their published teasers they showed a TV used as a gaming platform, so I'd say they're doing ALL the rendering server side and streaming the fully rendered video - at least that's the level of capability they're advertising.
  • purrcatian
    d_kuhn said:
    So let me get this straight... right now I play games rendered locally on my NVidia Graphics card at 2550x1440 at say 40-80fps. NVidia is thinking what I SHOULD be doing is letting them render on their servers then stream me the game graphics - so instead of having a 4 GBps (capital B) level pci-e pipe for my graphics, it'll all need to be stuffed through a 30 mbps (lowercase b, or ~ 1/8 of a capital B... or about 4 MBps). 1/1000 the bandwidth (and my net connection is pretty respectable). Sure a lot of the workload is simplified, but how much does an uncompressed 2550 screen take to stream at 30fps (Answer: 440 MB/s)... so that's completely out - what you'll get is either a low resolution screen or highly compressed high res (which doesn't look good). If you're playing a game with low update frequency (no fps, no rts, no arpg, no strategy with rt components) then it may be acceptable... otherwise it's going to be a tough sell.
    That isn't exactly a fair comparison. The information exchanged between your CPU and GPU is not the same as the information transferred between your GPU and monitor.

    In a cloud gaming setup, you have several streams of data that need to be transferred. You have the information from the human interface device(s) going to the server, and then the audio and video streams coming back from the server. The HID information is likely going to require negligible bandwidth, and the audio bandwidth is going to be small compared to the video bandwidth.

    According to http://www.emsai.net/projects/widescreen/bandwidth/, 2560x1440 @ 60Hz is 7.87 Gbit/s (with a lowercase b). This number is much lower than the 4 GB/s PCI-e interconnect you referenced. This is uncompressed; compression can greatly reduce it, but adds additional latency.

    Let's assume that they are using H.264 compression for the video. According to http://stackoverflow.com/questions/5024114/suggested-compression-ratio-with-h-264 the formula is [image width] x [image height] x [frame rate] x [motion rank] x 0.07 = [desired bitrate in bps], where the image width and height are expressed in pixels, and the motion rank is an integer between 1 and 4: 1 being low motion, 2 being medium motion, and 4 being high motion (motion being the amount of image data that is changing between frames; see the linked document for more information).

    Video games tend to be very fast paced. As a result, in order to make the game playable, let's assign a motion rank of 4. That leaves us with:
    2560 x 1440 x 60 x 4 x 0.07 = 61,931,520 bps, or about 61.9 Mbps (roughly 7.7 MB/s).

    In other words, to get a quality almost as good as what you have now, you would need a dedicated 60-plus Mbps internet connection all to yourself. You could certainly do this if you lived in Kansas City, and you might even be able to get this kind of bandwidth at your local university. Either way, this is something that could be possible on a wide scale in the future.

    That is completely ignoring the issue of latency.
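The two estimates above are also easy to reproduce. The sketch below recomputes both; the 24-bit color depth is an assumption (the 7.87 Gbit/s figure quoted above is a little higher, presumably because it includes display blanking overhead), and the H.264 number is only the rule-of-thumb estimate from the linked Stack Overflow answer, not a measured bitrate.

```python
# Rough reproduction of the bitrate estimates discussed above.
# The H.264 figure uses the rule-of-thumb formula from the linked Stack
# Overflow answer: width * height * fps * motion_rank * 0.07 = bitrate in bps.

WIDTH, HEIGHT = 2560, 1440
FPS = 60
BITS_PER_PIXEL = 24      # uncompressed 24-bit color, raw pixels only (assumption)
MOTION_RANK = 4          # 4 = high motion, the worst case for fast games

uncompressed_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
h264_bps = WIDTH * HEIGHT * FPS * MOTION_RANK * 0.07

print(f"uncompressed: {uncompressed_bps / 1e9:.2f} Gbps")     # ~5.31 Gbps
print(f"H.264 (rough): {h264_bps / 1e6:.1f} Mbps "            # ~61.9 Mbps
      f"(~{h264_bps / 8 / 1e6:.1f} MB/s)")                    # ~7.7 MB/s
```

Even at roughly 60 Mbps, this only addresses the bandwidth half of the problem; as the comment notes, encoding, transport and decoding all add latency on top.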