
nVidia's GPGPU speaks up. Interesting read

April 11, 2009 4:34:40 AM

nVidia has been selling its CUDA for a while now, and it looks like they may have some really decent talent heading it up:
http://bits.blogs.nytimes.com/2009/04/09/hello-dally-nv...
IMHLO, this puts nVidia in the game. There's no assurance LRB is going to pan out, and Intel's attempt is certainly not the norm for this, or of course for GPU usage as we're used to it.
Even though SoC is the future, PCI use won't die anytime soon. I'm very impressed with this hiring, and excited for the "heterogeneous" approach of the future.
April 11, 2009 9:00:03 AM

Nothing new JDJ,
The fact that Nvidia are selling CUDA to industry is definitely a bonus for them, and if they are lucky they can build on it.
The problem as I see it is that, given the economic climate, companies just won't invest in the upgrade. I work in a top-100 company and it has virtually shut down all spending as far as new investment goes. Some of the really big companies, or those that were investing already, may decide to switch, but that market can only be small.

The article can be read two ways. You seem to be accepting what's being said; what I'm asking myself is, what's stopping this being just marketing hype from Nvidia? Get a respected voice to reel off a few generic facts about the future of computing and voice concern about the x86 approach, and suddenly people will start to doubt Intel and Larrabee. Yes, it may not pan out, but then again it may be a success on the scale of the 4850; we just don't know. Maybe they have heard things about LRB that are causing concern and they are trying to throw cold water on the whole approach.

You know as well as I do that trying to get developers to change to a different way of programming is half impossible, a la DX9-DX10-DX10.1. So to say that "Intel isn’t going radical enough with its design, code-named Larrabee, which will still rely on the company’s beloved x86 architecture" seems a bit disingenuous to say the least.
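Just to illustrate how different that model is from plain x86/C, here's roughly what a CUDA kernel looks like for something as trivial as adding two arrays (a minimal sketch, the names are mine, not from the article):

// minimal CUDA sketch: each GPU thread adds one element
__global__ void vec_add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    if (i < n)
        c[i] = a[i] + b[i];
}

// host side, after copying the arrays to the GPU:
// vec_add<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

Asking the average DX9-era developer to start thinking in thousands of threads like that is exactly the kind of shift that takes years.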

Reading between the lines, it could be construed that Mr. Dally thought he could walk into a job at Intel and call the shots, found out he couldn't and would in fact have to do things the company's way and not be given carte blanche. The end result being this is all sour grapes.

Mactronix

April 11, 2009 3:32:13 PM

Read my opinion where I say assured. nVidia is here, it is making headway; CUDA is here, and doing so also. While his comments may come across as sour grapes, unless a couple of things are different it appears LRB will be using up a bit of silicon: http://news.softpedia.com/news/Intel-Showcases-Larrabee...
Enlarging the photo in my link shows that each core is 600mm^2, making it huge. Now, either things are different, as in this wasn't done on the 45nm process, or this isn't really a true LRB wafer, because that's a lot of silicon. I know people are looking to LRB, and should possibly be excited, but as with any release we should, as TGGA says, hope for the best and expect the worst. That's not only sound logic, but even more so with LRB, since this is the first shot out of the barrel.
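Just to put 600mm^2 in perspective (my own back-of-the-envelope math, not from the article): a 300mm wafer has roughly pi x 150^2 ≈ 70,700 mm^2 of area, so at 600mm^2 a die you'd be looking at fewer than ~117 candidate dies per wafer even before edge loss and yield. That's why the number jumps out.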
If these statements were made simply from this pic alone, then I'd question his response even more than I do, as nothing's writ in stone. But currently, adapting the x86 instruction set does bring with it a cumbersome amount of extras for it to function in such a "parallel" way.
My post is more about who they chose, and somewhat less about his comments.
April 11, 2009 4:29:03 PM

Well, it's all down to opinions at the end of the day. I'm not saying you're wrong, it's just that I don't see it the way you do is all.
As I said, well done to Nvidia for making progress with CUDA, and I have listed my reasons for not getting too carried away about it. It may snowball across industry, and good luck to them if it does; I have my doubts is all.
My reading of it is that they guessed the size of the die, so it could actually be bigger. You're saying each core is 600mm^2; that's not what they are saying. They are saying each chip is that big, but how many cores are there on it? Who knows. I could be wrong, but that's how I'm reading it. As you say, definitely expect the worst and hope for the best.
Yes, one man can make a difference, and to be fair I am unfamiliar with this person, but I would be very surprised if one professor more or less can make such a profound difference to the fortunes of Nvidia over the space of 3 months. I always thought R&D took longer than that. Unless he has been working for them for a lot longer, I seriously doubt any of his work has even seen the light of day yet.

Mactronix
April 11, 2009 4:41:18 PM

It's not the "now" part he brings, but the potential. We all know that when it comes to heterogeneous computing, nVidia could be the odd man out very quickly. This hiring, and their dedication, will keep them in the game, and could at the least lead to an IBM-type solution down the road.
If I said a single core, sorry for the misunderstanding; I meant for the LRB chip as a whole, and if that's true, and all we've got to go by is their intention of keeping power usage low, 600mm^2 is still a lot of silicon.
LRB requires a lot of cache, and I'm wondering if cache scales with process shrinks as well as the rest of the chip does. Maybe someone can enlighten me?
Anyways, as I said, his hiring and CUDA's implementation in varying fields are good to see, and at the least ending up with an IBM scenario wouldn't be a bad thing. There's always going to be room for a GPGPU-only solution, as even heterogeneous may simply be overkill, so that at least leaves nVidia something, regardless of how things turn out.
April 11, 2009 7:33:14 PM

Well, I guess we could get a scenario where they offset the power usage with lower clocks; I guess there has to be a sweet spot as far as this is concerned.

As I have said before, it's all about where Intel comes into the market. It's all ifs at the moment, but if they can come in at a reasonable price point with power and heat kept down, then I for one wouldn't expect it to compete at the top end. 3850 territory is where I have always said I believe they need to be on release.
That then gives them the standard revision and a die shrink to iron out the bugs, and of course increase the clocks. I would be interested to find out about the cache scaling also; my gut feeling is that it shouldn't be an issue, but like yourself I'm not 100% sure.
As I said, I'm not saying you're not right. I don't know this man, and as you seem to be setting so much store by him, I'm prepared to accept your judgment.

My main concern is that Nvidia seem to have him only because he couldn't get his own way at Intel; that's only my opinion and how I am reading the interview. I could be totally wrong, but if I'm not, then personally I wouldn't want someone like that in my company, no matter how good he was.

Guess we will have to wait and see how it irons out.

Mactronix
April 11, 2009 8:14:41 PM

I believe it's Intel's decision to keep the power down, and thus the flops as well, saying maybe low 1+ TFLOPS. If all goes well, it'll be in the current top-end range, or close to it, say 4850+, but that's if all goes well and it's just MHLO; for sure they want low power numbers.
When we first heard it'd be 300 watts, I'm thinking that was a full 2GHz model with more cores. I'm thinking their process is better than that, and of course everyone wants the 32nm process, as that's where it's supposed to shine.
It'll be interesting to see nVidia's G300, as that'll be second gen and done on a better process, at some point maybe even done at GF, which I think has a jump on TSMC as far as process goes anyways, though the G300 won't be a GF model/make, but maybe the refresh will be, though it'd be hard to switch.
Anyways, good times ahead
April 11, 2009 8:42:21 PM

Read a little on LRBni, reading it now. Here's a wiki on Dally: http://en.wikipedia.org/wiki/Bill_Dally
He seems to be the right man for a smaller form, and to be honest, once LRB is built, heads will be chopped. Intel's throwing a lot of money at it for startup, then... once it's built, the shuffling will begin. I somewhat agree with the choice.
April 11, 2009 8:59:36 PM

Haven't finished it, but something came to mind. Looking at his tables, it appears that 64-bit won't suffer much on LRB, as he gives a nice breakdown. But I'm betting, seeing this, we aren't far off from strictly 64-bit on the next OS. Though, for a better heterogeneous experience, we will need to go and buy our 64-bit LRBs heheh
April 11, 2009 9:22:52 PM

That's true, hadn't really thought of that. We don't get enough articles like that one; it really gets down to the nuts and bolts of the thing.
I haven't read it all in depth yet myself; I kinda skimmed some bits in the middle.
Oh, and by the way, the cache issue seems not to be one, as it's meant to be global:

"Each core is equipped with a low-latency L1 cache. A large, global, fully coherent L2 cache is evenly partitioned into separate local caches for each core. This means that each core has a fast access path to its local partition as well as another access path to remote cache partitions."

Which should mean that the scaling won't be a problem.
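If it helps picture what "partitioned" means there, here's a rough sketch of the idea (my own guess at the mechanism, not from the article): each cache line gets a home L2 partition, for example by interleaving line addresses across the cores.

// rough sketch, not from the article: map a cache line to a "home" L2
// partition by interleaving line addresses across the cores
int home_core(unsigned long long addr, int num_cores, int line_bytes)
{
    return (int)((addr / line_bytes) % num_cores);
}

A core hitting its own partition gets the fast path; anything else takes the longer route to a remote partition.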

Mactronix
April 11, 2009 9:29:47 PM

Sounds good. They'll need that to get to 32nm, and that's where things'll get interesting.
April 11, 2009 11:15:56 PM

I've got to laugh; you're all talking about CUDA as if it's still going to be here in a year's time.

Proprietary standards can never make it into the mainstream; it's all coming to an end anyway as soon as DX11 comes out!