Video of Nvidia closed-door conference encoding with the GTX 280 and CUDA

Gargamel

Distinguished
Apr 6, 2007
41
0
18,530
Here is some video footage I found while surfing of the next-gen cards from Nvidia that are to be released in the middle of June. It shows a few examples of encoding HD movies and footage using this new technology; HD encoding can now be done at 2x real time. Yes, that is 2x real time! Check it out.

http://www.youtube.com/watch?v=8C_Pj1Ep4nw


 

Guest

Guest
Very neat... hopefully this isn't the only part of the card that is much, much faster.
 
Once this becomes mainstream, we will forget all about special programming, as it'll just be another part of our experience. Each card is different though, as the vid says, and this is being done with the to-be-released GTX 280. So if you own a G80 or newer it can be done on your nVidia card; just don't expect it to be as fast as this, though it'll still stomp any CPU out there.
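For the curious, the "special programming" is basically C with a few extensions. Here's a minimal sketch (a hypothetical per-pixel filter, not Nvidia's actual encoder) of what a CUDA video kernel looks like:

```
#include <cuda_runtime.h>

// Hypothetical kernel: scale the luma plane of a 1080p frame.
// Each thread handles one pixel, so thousands of pixels are
// processed in parallel across the GPU's cores.
__global__ void scaleLuma(unsigned char* luma, int numPixels, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < numPixels) {
        float v = luma[i] * gain;
        luma[i] = (unsigned char)(v > 255.0f ? 255.0f : v);
    }
}

int main()
{
    const int numPixels = 1920 * 1080;  // one 1080p luma plane
    unsigned char* d_luma;
    cudaMalloc(&d_luma, numPixels);
    // (a real filter would cudaMemcpy a frame in here first)
    int threads = 256;
    int blocks = (numPixels + threads - 1) / threads;
    scaleLuma<<<blocks, threads>>>(d_luma, numPixels, 1.1f);
    cudaDeviceSynchronize();
    cudaFree(d_luma);
    return 0;
}
```

That's the whole trick: one tiny function, launched across thousands of threads at once.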
 

Gargamel

Distinguished
Apr 6, 2007
41
0
18,530
Yeah, I think it is just part of the new chipset's features; starting from this next generation, encoding is going to be a whole new ball game. Encoding on multiple cores and assigning different jobs to each individual core sounds like it is taking advantage of the cores the way most games and modern-day applications should.
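A rough sketch of how "different jobs at once" could be expressed in CUDA, using streams (the kernel names here are hypothetical stand-ins, not anything from the actual demo):

```
#include <cuda_runtime.h>

// Hypothetical stand-ins for two different encoding stages.
__global__ void motionSearch(const unsigned char* frame, int n) { /* ... */ }
__global__ void deblockFilter(unsigned char* frame, int n) { /* ... */ }

int main()
{
    const int n = 1920 * 1080;
    unsigned char *d_frameA, *d_frameB;
    cudaMalloc(&d_frameA, n);
    cudaMalloc(&d_frameB, n);

    // Each job gets its own stream; work in separate streams is
    // independent, so the driver can overlap it where the hardware allows.
    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);
    motionSearch<<<(n + 255) / 256, 256, 0, s0>>>(d_frameA, n);
    deblockFilter<<<(n + 255) / 256, 256, 0, s1>>>(d_frameB, n);
    cudaDeviceSynchronize();

    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(d_frameA);
    cudaFree(d_frameB);
    return 0;
}
```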
 

Gargamel

Distinguished
Apr 6, 2007
41
0
18,530
Well, I am positive that the audio will still go through the S/PDIF, with the HDMI output used purely as a video connection. Unless they start processing sound through the graphics card, which I doubt will happen, it will remain a separate process handled by the onboard sound or a discrete sound processor.
 

terror112

Distinguished
Dec 25, 2006
484
0
18,780
Wow, that is extremely impressive. I am a professional videographer, so this can really help encoding times. Current-gen CPUs take hours to encode HD video. Nvidia finds new uses for the GPU every day... From the homemade supercomputer to a super encoding box, no wonder Nvidia is so confident that GPUs will revolutionize computing. Maybe one day we could see an entire system based on an Nvidia GPU and its SoC. (Just dreaming here :) )
 
Well, essentially that's what nV wants, and if you code your encoder in C you could run almost exclusively on the GPU. The problem right now would be building a platform that is exclusive to the GPU; almost all of them, desktop or rack-mount, etc., are tied to some CPU for control.

 


No AA, but video RAM size and bandwidth actually are very important to GPGPU applications, which is why the StreamCards from both companies have large VRAM sizes and use the maximum speed and bandwidth they can muster.

Expect ATi's latest FireStreams to use 2GB of the fast GDDR5 stuff, and nV's to also use 2GB on that 512-bit interface.

I wouldn't be surprised if by the end of this year or early next year we see 4GB cards geared towards GPGPU apps. I think those could use the memory far more than gaming cards, which would likely be fine for most things at 1GB and would struggle to justify 2GB even at the highest resolutions, etc.

 
No, GPGPU is doing general-purpose computations using a DX9+ GPU.

Larrabee will be more CPU/GPU-ish, but even then it's supposed to be more graphically inclined while having support for some x86 components (though supposedly a little limited versus a full-blown CPU).
Fusion and Nehalem's replacement are supposed to combine the two.
 

terror112

Distinguished
Dec 25, 2006
484
0
18,780


AMD's next-gen card will use a 256-bit bus with GDDR5. While that might be cheaper and more power-efficient, I still believe that Nvidia's solution will have higher bandwidth in terms of GB/s.
 

iluvgillgill

Splendid
Jan 1, 2007
3,732
0
22,790
What I really want to know is how much bandwidth a card with GDDR3 would have if it had unlimited bus width (call it the maximum capable GB/s), and the same for GDDR5. Or does the speed really depend on bus width?
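For what it's worth, there is no fixed GB/s ceiling per memory type: theoretical bandwidth is just per-pin data rate times bus width, so it scales with both. A back-of-the-envelope sketch (the transfer rates below are ballpark figures, not official specs):

```
#include <stdio.h>

// Theoretical bandwidth in GB/s: (data rate per pin in MT/s)
// * (bus width in bits) / 8 bits-per-byte / 1000 MB-per-GB.
static double bandwidthGBs(double mtPerSec, int busBits)
{
    return mtPerSec * busBits / 8.0 / 1000.0;
}

int main(void)
{
    // GDDR3 is double-pumped; GDDR5 is effectively quad-pumped,
    // hence the higher per-pin transfer rate.
    printf("GDDR3 2200 MT/s, 512-bit: %.1f GB/s\n", bandwidthGBs(2200, 512));
    printf("GDDR5 3600 MT/s, 256-bit: %.1f GB/s\n", bandwidthGBs(3600, 256));
    printf("GDDR5 3600 MT/s, 512-bit: %.1f GB/s\n", bandwidthGBs(3600, 512));
    return 0;
}
```

So GDDR5's advantage is the per-pin rate; widen the bus and either type scales linearly, which is why there's no single "maximum capable GB/s" for a memory type on its own.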
 

iluvgillgill

Splendid
Jan 1, 2007
3,732
0
22,790


To be honest, I still prefer the traditional CPU+GPU over an all-in-one solution. The combined device is never as good, no matter what it is: scanner+printer, phone+camera...
 

That just depends. If the integration of CPU/GPU is done right, there'd be nothing but positives. At this point there'll be limitations; maybe it's just for mobile in a smaller form factor, for power savings plus video capabilities. That's one plus. The other would be speed: eliminating components (more power saving) and the ability for each to communicate much faster. Right now, with separate components, we see the huge advantages a GPGPU can have in apps concerning video, audio, and certain fields like 3D imaging using ultrasound. All these new capabilities will come in time, since we've finally caught up in hardware capability. Will this someday all come on a CPU? I doubt it; given the projected abilities of Nehalem, even at 32 cores it will struggle for the most part, and those don't exist yet, and by the time they do, we will have had several architecture improvements in GPUs.