
GT300 NEWS

May 14, 2009 12:32:18 AM

Quote:
THERE'S A LOT of fake news going around about the upcoming GPUish chip called the GT300. Let's clear some air on this Larrabee-lite architecture.

First of all, almost everything you have heard about the two upcoming DX11 architectures is wrong. There is a single source making up news, and second rate sites are parroting it left and right. The R870 news is laughably inaccurate, and the GT300 info is quite curious too. Either ATI figured out a way to break the laws of physics with memory speed and Nvidia managed to almost double its transistor density - do the math on purported numbers, they aren't even in the ballpark - or someone is blatantly making up numbers.

That said, let's get on with what we know, and delve into the architectures a bit. The GT300 is going to lose, badly, in the GPU game, and we will go over why and how. First, a little background science and math. There are three fabrication processes out there that ATI and Nvidia use, all from TSMC: 65nm, 55nm and 40nm.

They are each a 'half step' from the next, and 65nm to 40nm is a full step. If you do the math, the shrink from 65nm to 55nm ((55 * 55) / (65 * 65) ~= 0.72) saves you about a quarter of the area, that is, 55nm is 0.72 of the area of 65nm for the same transistor count. 55nm shrunk to 40nm gives you 0.53 of the area, and 65nm shrunk to 40nm gives you 0.38 of the area. We will be using these later.
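A quick Python sketch of that arithmetic, assuming die area simply scales with the square of the feature size (a simplification, not a figure from the article):

# Rough die-area scaling with feature size; assumes area ~ (node length)^2.
def area_ratio(old_nm, new_nm):
    return (new_nm ** 2) / (old_nm ** 2)

print(round(area_ratio(65, 55), 2))  # 0.72  (65nm -> 55nm)
print(round(area_ratio(55, 40), 2))  # 0.53  (55nm -> 40nm)
print(round(area_ratio(65, 40), 2))  # 0.38  (65nm -> 40nm)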

Second is the time it takes to do things. We will use the best case scenarios, with a hot lot from TSMC taking a mere six weeks, and the time from wafers in to boards out of an AIB being 12 weeks. Top it off with test and debug times of two weeks for first silicon and one week for each subsequent spin. To simplify rough calculations, all months will be assumed to have 4 weeks.

Okay, ATI stated that it will have DX11 GPUs on sale when Windows 7 launches, purportedly October 23, 2009. Since this was said in a financial conference call, with SEC rules applying, you can be pretty sure ATI is serious about it. Nvidia, on the other hand, basically dodged the question, hard, in its conference call the other day.

At least you should know why Nvidia picked the farcical date of October 15 for its partners. Why farcical? Let's go over the numbers once again.

According to sources in Satan Clara, GT300 has not taped out yet, as of last week. It is still set for June, which means best case, June 1st. Add six weeks for first silicon, two more for initial debug, and you are at eight weeks, minimum. That means the go or no-go decision might be made as early as August 1st. If everything goes perfectly, and there is no second spin required, you would have to add 90 days to that, meaning November 1st, before you could see any boards.
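Plugging those best-case figures into a quick timeline sketch (the dates below are purely illustrative, built only from the assumptions stated above):

from datetime import date, timedelta

tape_out   = date(2009, 6, 1)                 # assumed best-case June tape-out
go_no_go   = tape_out + timedelta(weeks=8)    # 6-week hot lot + 2 weeks initial debug
boards_out = go_no_go + timedelta(days=90)    # ~12 weeks from wafers in to AIB boards out

print(go_no_go)    # 2009-07-27, roughly the "August 1st" go/no-go point above
print(boards_out)  # 2009-10-25, i.e. about November 1st before any boards appear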

So, if all the stars align, and everything goes perfectly, Nvidia could hit Q4 of 2009. But that won't happen.

Why not? There is a concept called risk when doing chips, and the GT300 is a high risk part. GT300 is the first chip of a new architecture, or so Nvidia claims. It is also going to be Nvidia's first GDDR5 part, and moreover, it will be Nvidia's first 'big' chip on the 40nm process.

Nvidia chipmaking of late has been laughably bad. GT200 was slated for November of 2007 and came out in May or so of 2008, two quarters late. We are still waiting for the derivative parts. The shrink, GT206/GT200b, is technically a no-brainer, but instead of arriving in August of 2008, it trickled out in January 2009. The shrink of that to 40nm, the GT212/GT200c, was flat out canceled; Nvidia couldn't do it.

The next largest 40nm part, the GT214, also failed, and it was redone as the GT215. The next smallest parts, the GT216 and GT218, very small chips, are hugely delayed, perhaps to finally show up in late June. Nvidia can't make a chip that is one-quarter of the purported size of the GT300 on the TSMC 40nm process. That is, make it at all, period - making it profitably is, well, a humorous concept for now.

GT300 is also the first DX11 part from the green team, and it didn't even have DX10.1 parts. Between the new process, larger size, bleeding-edge memory technology, dysfunctional design teams, new feature sets and fab partners trashed at every opportunity, you could hardly imagine ways to have more risk in a new chip design than Nvidia has with the GT300.

If everything goes perfectly and Nvidia puts out a GT300 with zero bugs, or easy-fix minor bugs, then it could be out in November. Given that there is only one GPU we have heard of that hit this milestone, a derivative part, not a new architecture, it is almost assuredly not going to happen. No OEM is going to bet its Windows 7 launch vehicles on Nvidia's track record. They remember the 9400, GT200, and well, everything else.

If there is only one respin, you are into 2010. If there is a second respin, then you might have a hard time hitting Q1 of 2010. Of late, we can't think of any Nvidia product that hasn't had at least two respins, be they simple optical shrinks or big chips.

Conversely, the ATI R870 is a low risk part. ATI has a functional 40nm part on the market with the RV740/HD4770, and has had GDDR5 on cards since last June. Heck, it basically developed GDDR5. The RV740 - again, a part already on the market - is rumored to be notably larger than either the GT216 or 218, and more or less the same size as the GT215 that Nvidia can't seem to make.

DX11 is a much funnier story. The DX10 feature list was quite long when it was first proposed. ATI dutifully worked with Microsoft to get it implemented, and did so with the HD2900. Nvidia stomped around like a petulant child and refused to support most of those features, and Microsoft stupidly capitulated and removed large tracts of DX10 functionality.

This had several effects, the most notable being that the now castrated DX10 was a pretty sad API, barely moving anything forward. It also meant that ATI spent a lot of silicon area implementing things that would never be used. DX10.1 put some of those back, but not the big ones.

DX11 is basically what DX10 was meant to be with a few minor additions. That means ATI has had a mostly DX11 compliant part since the HD2900. The R870/HD5870 effectively will be the fourth generation DX11 GPU from the red team. Remember the tessellator? Been there, done that since 80nm parts.

This is not to say that it will be easy for either side. TSMC has basically come out and said that its 40nm process is horrid, an assertion backed up by everyone that uses it. That said, both the GT300 and R870 are designed for the process, so they are stuck with it. If yields can't be made economically viable, you will be in a situation of older 55nm parts going head to head for all of 2010. Given Nvidia's total lack of cost competitiveness on that node, it would be more a question of them surviving the year.

That brings us to the main point, what is GT300? If you recall Jen-Hsun's mocking jabs about Laughabee, you might find it ironic that GT300 is basically a Larrabee clone. Sadly though, it doesn't have the process tech, software support, or architecture behind it to make it work, but then again, this isn't the first time that Nvidia's grand prognostications have landed on its head.


The basic structure of GT300 is the same as Larrabee's. Nvidia is going to use general purpose 'shaders' to do compute tasks, and the things that any sane company would put into dedicated hardware are going to be done in software. Basically DX11 will be shader code on top of a generic CPU-like structure. Just like Larrabee, but from the look of it, Larrabee got the underlying hardware right.

Before you jump up and down, and before all the Nvidiots start drooling, this is a massive problem for Nvidia. The chip was conceived at a time when Nvidia thought GPU compute was actually going to bring it some money, and it was an exit strategy for the company when GPUs went away.

It didn't happen that way, partially because of buggy hardware, partially because of over-promising and under-delivering, and then came the deathblows from Larrabee and Fusion. Nvidia's grand ambitions were stuffed into the dirt, and rightly so.

Nvidia Investor Relations tells people that between five and ten per cent of the GT200 die area is dedicated to GPU compute tasks. The GT300 goes way farther here, but let's be charitable and call it 10 per cent. This puts Nvidia at a 10 per cent areal disadvantage to ATI on the DX11 front, and that is before you talk about anything else. Out of the gate in second place.

On 55nm, the ATI RV790 basically ties the GT200b in performance, but does it in about 60 per cent of the area, and that means less than 60 per cent of the cost. Please note, we are not taking board costs into account, and if you look at yield too, things get very ugly for Nvidia. Suffice it to say that architecturally, GT200 is a dog, a fat, bloated dog.
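To see why die area hurts cost twice over - fewer candidate dice per wafer and a lower yield fraction on each - here is a toy calculation using a simple Poisson defect-yield model; the die sizes and defect density below are made-up illustrative numbers, not figures from the article:

import math

WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2   # 300mm wafer, edge losses ignored

def good_dice_per_wafer(die_area_mm2, defects_per_mm2=0.003):
    gross_dice = WAFER_AREA_MM2 / die_area_mm2               # candidate dice per wafer
    yield_frac = math.exp(-die_area_mm2 * defects_per_mm2)   # Poisson zero-defect probability
    return gross_dice * yield_frac

print(round(good_dice_per_wafer(580)))        # ~21 good dice from a big ~580mm^2 chip
print(round(good_dice_per_wafer(580 * 0.6)))  # ~72 good dice at 60% of the area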

Rather than go lean and mean for GT300, possibly with a multi-die strategy like ATI, Nvidia is going for bigger and less areally efficient. They are giving up GPU performance to chase a market that doesn't exist, but was a nice fantasy three years ago. Also, remember that part about ATI's DX10 being the vast majority of the current DX11? ATI is not going to have to bloat its die size to get to DX11, but Nvidia will be forced to, one way or another. Step 1) Collect Underpants. Step 2) ??? Step 3) Profit!

On the shrink from 55nm to 40nm, you about double your transistor count, but due to current leakage, doing so will hit a power wall. Let's assume that both sides can double their transistor counts and stay within their power budgets though, that is the best case for Nvidia.

If AMD doubles its transistor count, it could almost double performance. If it does, Nvidia will have to as well. But, because Nvidia has to add in all the DX11 features, or additional shaders to essentially dedicate to them, its chips' areal efficiency will likely go down. Meanwhile, ATI has those features already in place, and it will shrink its chip sizes to a quarter of what they were in the 2900, or half of what they were in the R770.
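As a crude illustration of that squeeze, suppose both vendors double their transistor budgets on the shrink, but one has to spend a slice of the die on DX11-in-shaders/GPGPU plumbing; the 10 per cent overhead is the article's own estimate, while the 2.0 billion transistor budget is invented purely for illustration:

def graphics_budget(total_transistors_bn, compute_overhead):
    # transistors left for actual graphics work after the compute/DX11 overhead
    return total_transistors_bn * (1 - compute_overhead)

ati    = graphics_budget(2.0, 0.00)   # dedicated DX11 hardware already accounted for
nvidia = graphics_budget(2.0, 0.10)   # at least 10% of the die spent elsewhere
print(ati, nvidia)                    # 2.0 vs 1.8 billion transistors doing graphics at the same die size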

Nvidia will gain some area back when it goes to GDDR5. Then the open question will be how wide the memory interface will have to be to support a hugely inefficient GPGPU strategy. That code has to be loaded, stored and flushed, taking bandwidth and memory.

In the end, what you get is an ATI that can double performance if it chooses to double shader count, while Nvidia can double shader count but will lose a lot of real-world performance if it does.

In the R870, if you compare the time it takes to render 1 million triangles expanded from 250K using the tessellator, it will take a bit longer than running those same 1 million triangles through without the tessellator. Tessellation takes no shader time, so other than latency and bandwidth, there is essentially zero cost. If ATI implemented things right, and remember, this is generation four of the technology, things should be almost transparent.

Contrast that with the GT300 approach. There is no dedicated tessellator, and if you use that DX11 feature, it will take large amounts of shader time, used inefficiently as is the case with general purpose hardware. You will then need the same shaders again to render the triangles. 250K to 1 million triangles on the GT300 should be notably slower than straight 1 million triangles.
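A toy cost model of that comparison (all numbers are invented for illustration; 'cost' is in arbitrary shader-time units and the 0.5 expansion factor is just a placeholder assumption):

def frame_cost(input_tris, output_tris, shader_tessellation):
    shade_cost = output_tris * 1.0                       # shading the final triangle count
    expanded   = output_tris - input_tris                # triangles created by tessellation
    tess_cost  = expanded * 0.5 if shader_tessellation else 0.0  # shader-based expansion vs a dedicated unit
    return shade_cost + tess_cost

print(frame_cost(250_000, 1_000_000, shader_tessellation=False))  # ~1.00M units with a dedicated tessellator
print(frame_cost(250_000, 1_000_000, shader_tessellation=True))   # ~1.38M units when shaders do the expansion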

The same should hold true for all DX11 features, ATI has dedicated hardware where applicable, Nvidia has general purpose shaders roped into doing things far less efficiently. When you turn on DX11 features, the GT300 will take a performance nosedive, the R870 won't.

Worse yet, when the derivatives come out, the proportion of shaders needed to run DX11 will go up for Nvidia, but the dedicated hardware won't change for ATI. It is currently selling parts on the low end of the market that have all the "almost DX11" features, and are doing so profitably. Nvidia will have a situation on its hands in the low end that will make the DX10 performance of the 8600 and 8400 class parts look like drag racers.

In the end, Nvidia architecturally did just about everything wrong with this part. It is chasing a market that doesn't exist, and skewing its parts away from their core purpose, graphics, to fulfill that pipe dream. Meanwhile, ATI will offer you an x86 hybrid Fusion part if that is what you want to do, and Intel will have Larrabee in the same time frame.

GT300 is basically Larrabee done wrong for the wrong reasons. Amusingly though, it misses both of the attempted targets. R870 should pummel it in DX10/DX11 performance, but if you buy a $400-600 GPU for ripping DVDs to your iPod, Nvidia has a card for you. Maybe. Yield problems notwithstanding.

GT300 will be quarters late and, without a miracle, will miss back-to-school, the Windows 7 launch, and Christmas. It won't come close to R870 in graphics performance, and it will cost much more to make. This is not an architecture that will dig Nvidia out of its hole, but instead will dig it deeper. It made a Laughabee.


http://www.theinquirer.net/inquirer/news/1137331/a-look...


May 14, 2009 1:03:36 AM

Hmmm. Interesting. A lot of that went over my head, so I'll have to take his word for it, or not. I'd like to see some more information, though.
May 14, 2009 1:18:01 AM

looks like someone got fired from Nvidia and decided to get back at them...
May 14, 2009 1:29:20 AM

Quote:
On 55nm, the ATI RV790 basically ties the GT200b in performance, but does it in about 60 per cent of the area


When it's overclocked to the limit.

This article is biased as hell. ATI's tessellator isn't DX11 compatible IIRC. Plus I doubt Charlie was given the full schematics of a GT300 when the GPU isn't even taped out yet.

Nvidia hasn't released a new GPU on a new process since the FX series. If they do go 40nm they are going to have some issues with a huge die. Hopefully they learn from their mistakes of this generation, if they pull another G200 we might start seeing eVGA ATI cards.
May 14, 2009 1:47:15 AM

Interesting if accurate, but it sounds like a lot of supposition to me.
May 14, 2009 1:59:04 AM

He brings up a lot of good points that have been discussed before, but does it in a very biased manner. Not to mention he also makes things up, like his "actual" release date B.S.

No one will argue that nVidia has a good few things to overcome that are more serious for them than ATI, such as:
1) DX11 implementation
2) GDDR5 implementation, tweaking, and the purchasing of the new modules
3) Reportedly another large chip = not very cost efficient
4) PhysX and CUDA are going to fall on their faces if demand doesn't increase for them

However, if the chip is as large as reported then I cannot see how it will not perform well. GDDR5 should solve all the bandwidth issues nVidia has been having, and nVidia doesn't seem to have a problem with going over the top with the hardware without listening to the budget department.

Also, ATI will not have it easy either. They have a head start with most of the topics posted above, but they also have MUCH less in the way of R&D funds than nVidia has. AMD is still suffering losses and isn't the most financially secure at the moment.

What will likely happen is a similar situation as we have now with the GTX 2xx vs 4xxx series, but ATI might have a slight edge with DX11.
May 14, 2009 2:36:19 AM

He lost a lot of credibility for me when he was talking about die sizes and mentioned that " [GPUs are on] 65nm, 55nm and 40nm [processes].

They are each a 'half step' from the next, and 65nm to 40nm is a full step"

55nm to 40nm is a full step, 65nm to 45nm is a full step, 65nm to 40nm is a step and one half.
May 14, 2009 3:19:49 AM

Yep, that wasn't biased. It's too bad, because he brought up good points, but they were lost in all the bashing. I'm fairly confident some things will work out for NVidia and some won't for ATI. And also, what makes him think his sources are so accurate? I also noticed he just took ATI's word for their release date and just threw NVidia's out as lies. I guess we'll see.
May 14, 2009 3:41:43 AM

cybot_x1024 said:
looks like someone got fired from Nvidia and decided to get back at them...



I Agree!


I don't think I would sit back for one second and say Nvidia doesn't have the knowledge or 'whatever' to make a GDDR5/40nm/DX11 card.

With all the great advances Nvidia has made in only the past year!

Well, I'd say they might be on a slower but much smarter path to advancement.

ATi has made some good stuff too - let's not let this be a one-sided battle.

Both have done amazing work!

ATi slapping 500 or more cores on one card.

Nvidia with its CUDA, PhysX, Hybrid and PureVideo HD.

Go to Nvidia's web site and you can find a lot of DX11 stuff.

I think they have a nice surprise coming up, like they normally do.


You watch.

Nvidia is still the king of visual entertainment.

Very good post though.


Good job!

May 14, 2009 1:03:39 PM

Dekasav said:
He lost a lot of credibility for me when he was talking about die sizes and mentioned that " [GPUs are on] 65nm, 55nm and 40nm [processes].

They are each a 'half step' from the next, and 65nm to 40nm is a full step"

55nm to 40nm is a full step, 65nm to 45nm is a full step, 65nm to 40nm is a step and one half.


Huh? If 65nm to 45nm is 1 full step (20nm difference) how is an additional 5nm a half?
May 14, 2009 1:15:16 PM

Quote:
Hopefully they learn from their mistakes of this generation, if they pull another G200 we might start seeing eVGA ATI cards.


AMD knocked back an offer from EVGA last year.
May 14, 2009 2:09:18 PM

turboflame said:
Quote:
On 55nm, the ATI RV790 basically ties the GT200b in performance, but does it in about 60 per cent of the area


When it's overclocked to the limit.



While I would usually agree, with almost all ATI partners releasing 1GHz 4890s in the coming couple of weeks I'd be really interested in a comparison of the best available retail cards. Most OC reviews are self-overclocked and what have you.

I'd have to imagine the 1GHz 4890s would be effectively on par with the highest OC 285s available at retail. Though a factory OC of 150MHz will probably push the 4890 into a higher price segment (I wonder though).

I hate comparing user-overclocked cards to user-overclocked cards and having people state "x is faster than y", as there are too many variables. But it would be nice to see a factory-overclocked roundup here to really put to rest what the fastest card you can "buy" is. Maybe next month.

I find it strange the way ATI has gone about the 4890. In previous generations they would have released an official GPU with official faster specs (1900GT vs 1900XTX for example, even 4850 vs 4870). Yet this gen they are sticking with the 850MHz 4890 and just having the partners go as crazy as they want. I mean, a 1GHz 4890 is as far removed from a 4890 as a 4890 is from the 4870... I question the point of calling it the same card at this point. It makes more sense to me to have the 4870, the 4890, and a 4895+ or something for the 1GHz cards (they must have great yields if every partner is coming out with one)... With so many OC variants, even from release, I wonder why they spec'd the stock so low in the first place. At any rate..
May 14, 2009 2:11:19 PM

Quote:
For the last fcuking time, stop quoting whole articles, haven't you been warned enough about this dipshit!!!!.


But it's not like there is some easy way to make a clickable "link" to the source, right? ;)  Besides that, if he had linked it people might have noticed it is from the Inquirer.... lol
May 14, 2009 2:24:21 PM

Quote:
For the last fcuking time, stop quoting whole articles, haven't you been warned enough about this dipshit!!!!.



No I haven't, and anyway I couldn't give a flying *** what you think. And by warned, do you mean you opening your big mouth, or being pulled up by one of the mods? Yes to the first one and no to the second. Piss off, you neo-Nazi.
May 14, 2009 3:23:12 PM

A fascist is a fascist, on the net or not. Anyway, I'm not going to argue semantics with you.
May 14, 2009 3:40:20 PM

B-Unit said:
Huh? If 65nm to 45nm is 1 full step (20nm difference) how is an additional 5nm a half?

Theoretically, the true half node would be 38.5nm, or halfway between 45 and 32nm, so 40nm is considered a half node.
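For reference, the usual convention is that a full node is roughly a 0.7x linear shrink (so the area halves), with the half nodes sitting in between; a quick sketch of where the named nodes fall, my own illustration:

full_nodes = [90, 65, 45, 32]     # successive full nodes
for big, small in zip(full_nodes, full_nodes[1:]):
    linear = small / big
    print(f"{big}nm -> {small}nm: linear {linear:.2f}, area {linear ** 2:.2f}")
# 90nm -> 65nm: linear 0.72, area 0.52
# 65nm -> 45nm: linear 0.69, area 0.48
# 45nm -> 32nm: linear 0.71, area 0.51
# 55nm and 40nm are the half nodes between 65/45 and 45/32 respectively.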

Charlie does bring many truths in this, some stretched, some not at all, but even between his occasional stretches, he's biased. This all started a while ago, when he and an nVidia "dude" got in a fight in Hong Kong. The words have gotten worse since then. It was the mobile card failures at first, and he still hasn't let it go.
May 14, 2009 3:54:48 PM

Zerk said:

I don't think I would sit back for one second and say Nvidia doesn't have the knowledge or 'whatever' to make a GDDR5/40nm/DX11 card.


Actually that's one area where Charlie has a point. Releasing a new architecture on a new process is very risky; Nvidia learned that the hard way when they released the FX series on a new 130nm process. Since then they've always played it safe and used a more mature process, which is why it took so long for them to transition to 55nm.
May 14, 2009 4:21:09 PM

More to his point, if nVidia had actually made a DX10.1 part, instead of pretending it was useless, it'd have a tessellator and the DX10.1 HW in its cards already. Now, they'll have to do that, plus a whole new arch, plus add in the requirements (die space), all on a new node. Not the best scenario for any company.
May 14, 2009 5:59:49 PM

Stranger, stop starting **** with everyone in every goddamn thread.
May 14, 2009 8:49:12 PM

Quote:
This is not the first time he has done this; to my knowledge you are not allowed to directly quote whole articles on the forums for multiple reasons, including copyright.

The author has not given him permission to use it, there was no indication where it was from, that inq. link could have been to another source saying the same thing, it is just not right.

There are certain ways of doing things on forums and that ain't one of them.

Quote:
User agrees not to post any material that is protected by copyright, trademark or other proprietary right without the express permission of the owner(s) of said copyright, trademark or other proprietary right.

You aren't a mod and no one in their right mind will make you one just because you rip on people over your interpretation of the rules. He brought the topic up to hear people's feedback and stuff, okay? He could have just posted the link, but this made it more convenient for all of us.

God... why do people just pop up in a thread to start a fight?
May 14, 2009 9:04:21 PM

Better do what he tells you to do, or he could ban you... sarcasm. Get a life, strangestranger.
May 14, 2009 10:04:04 PM

I bet all your posts combined probably equal the length of his copied article, and to tell the truth I found his post a lot more interesting than yours, stranger. If anybody enforced your fanatical rules this would be the shittiest forum.
May 14, 2009 10:21:13 PM

Some rules cause too much friction, and some people cause too much friction. A bureaucratic system is far from efficient; if everybody had to cite something just to start a thread, around 90% of the populace wouldn't post, or would post on a forum that didn't enforce their rules. I just can't see how bending a minuscule rule to make a forum better causes you so much upheaval.
May 14, 2009 10:33:45 PM

The only person taking up space on here is you, and BTW, threats only work if you've got some kind of authority, and the last time I had a look you were still a little man with a limpdick.
May 14, 2009 10:46:03 PM

Stranger is right. Although I can't say that rangers specifically has been warned by mods, they have warned others about this kind of posting. It is a violation of copyright, and Tom's is liable to be sued.
May 14, 2009 10:54:09 PM

This is a really silly argument. It's a worthwhile topic to be discussed yet we are ruining it with a daft argument over nothing much.
May 14, 2009 10:59:02 PM

always fun to read
May 14, 2009 11:01:40 PM

Liquid, long time no hear. How's the playstation gaming going?
May 14, 2009 11:33:01 PM

Quote:
Probably, but I made a decision a while ago not to change my attitude, as it was better than most.

If my rules were enforced this would be a far better forum, as most forums don't let idiots like rangers run riot. Yes, you can call me a hypocrite, and yes, you shouldn't use personal insults and all that, but at least I give something of a damn about the content of these forums.

How you can think he is not wrong I do not know; a basic rule of forums usually is not to post an entire article of someone else's work unless it is a press release or something.

Is it my fault I do not like people just posting as they please, bumping down other people's posts which may be worth reading?

Perhaps if people were to use some common sense these forums would not have the dross that accumulates like scum on the surface.

Rangers is part of that scum IMO.


If you don't like it, leave. YOU'RE NOT A FREAKING MOD. You have no right to talk; contact a mod and get him to prosecute, it's not your job. Seriously, if so many people in a single thread are yelling at you, the problem can't be us.
May 14, 2009 11:39:59 PM

First, yes, Charlie is biased as hell. I'm sure we can pretty much all agree on that. Second, we only need to make one assumption from his article. We should all be able to agree that Nvidia has a tough road ahead. Things they need to do include designing a new chip, testing a new (for them) process (40nm), implementing a new DX level, implementing GDDR5 and its memory controller, etc. The only assumption we need to buy from Charlie's article is whether or not the chip has taped out. If it's not, they are screwed. You don't need to believe his time frame, or the issue they will/might have with the tessellator, or anything else he wrote. Just think: if they still have to finalize the chip, they will need to do their test runs first. They need to test those chips to make sure they work as they believe. This will take even more time. And seeing as they have no (publicly) working parts that support DX10.1/11, there is no guarantee that they will get it right the first time.

I'm not saying they won't. They are a big company that has lots of money to throw at this problem. (They have the other, possibly more damaging, problem of not being able to scale their chips down, thus they might end up limiting the low end to DX10 only.) Seeing as we don't usually get it right the first time, I'm having a hard time believing that everything will go great with their new chips. I for one happen to believe his time frame (assuming it's not taped out already) and think we'll see new chips from them VERY late in the year. The question now is, is it taped out or not?

Edited for spelling.
May 14, 2009 11:42:23 PM

If my rules were enforced, you would have been banned for your insults. Instead we have to put up with your crap, derogatory remarks, small-mindedness, and you basically chasing away new members, who may think we are all dicks like you.
May 14, 2009 11:49:55 PM

I think both ATI and Nvidia are going to have problems with 40nm if the news we have been hearing about the process is true.
May 14, 2009 11:58:27 PM

Currently, the 4770 on 40nm appears to be solid, though what are those yields? And since the 4770 OCs so well, it does blow away the leakage problem, as said. Going to 55 from 65, as Charlie said, did take longer than anyone thought it would. Using huge dies makes this more of a likelihood, so again, he's right on. The rest? Could be.
May 15, 2009 12:07:45 AM

ATI must be absolutely miles ahead on 40nm. The 4770 is all the proof that is needed tbh.

If Nvidia release the g300 before the end of this year I'll be amazed. There is simply so much that can go wrong; they need a miracle to get this out first time.

On the other hand, Nvidia have done almost nothing in terms of innovation for years. Maybe they are about to pull an ATI r700 kind of trick out of the bag with g300. Truth is, they had better be.
May 15, 2009 12:08:31 AM

Guys, helloooooooooooo, WAKE UP. It's not an Nvidia vs ATI vs Intel thing, this writer is fighting back. I will not state details, but remember the Fudzilla editor wrote something, then a lot of the web attacked him, bla bla bla, and he is just fighting back. C'mon. HELL, I will wait for Intel Larrabee before I buy either. Could my graphics card be named Celeron or Atom or i7? Hmmmmmmmmmmmmm!!!!
May 15, 2009 12:13:07 AM

Since the release of the G80 series, we've all been waiting on nVidia for that next killer card. Now, it appears they're going to be hard pressed just to make a DX11 card in time for W7's release, or even by the end of the year. What's been going on over in GreenLand?
May 15, 2009 12:17:20 AM

btw
I think Nvidia is already working on x86 processors; maybe that's why their innovation is so slow. Unlike ATI, which already has CPUs and just has to innovate, Nvidia has to start from zero.
May 15, 2009 12:17:24 AM

The real question is, what the hell have Nvidia been doing with their cash? For their sake they had better be sitting on a big mountain of money, because they are losing it hand over fist right now and it's gonna get a lot worse before it gets better.
May 15, 2009 12:18:04 AM

MO BEEJ said:
btw
I think Nvidia is already working on x86 processors; maybe that's why their innovation is so slow. Unlike ATI, which already has CPUs and just has to innovate, Nvidia has to start from zero.



Yep, but unfortunately you need this thing called an 'x86 licence', and they don't have it. :p 
May 15, 2009 12:21:23 AM

I think Intel's GPU will lose to ATI and Nvidia since it's fairly, FAIRLY new to the market (counting GMA and the Express family chipsets as a GPU is BAD). But there is always a but: if Intel does SDF (scalable dual fire - I created that) like SLI and Crossfire, Intel WILL prevail. You can't argue with Intel support and DRIVERS, so we might see 2x or 3x or even 4x the performance. All in all, wait for Q1 of 2010, everything will show for sure.
May 15, 2009 12:22:35 AM

They don't have it yet, but that does not bar them from brainstorming and a LITTLE testing here and there.
May 15, 2009 12:23:39 AM

Hell, maybe they're saving $ to buy a license, LOL. Let Nvidia taste a bit of its own medicine.
May 15, 2009 12:32:16 AM

I personally think Intel will buy Nvidia within 2 years, EU permitting. I think Larrabee is a great idea but it is ahead of its time, and Intel need a really decent GPU for the upcoming Fusion battle.

AMD are well behind Intel in CPUs, but their IGPs are miles ahead. This is why AMD bought ATI - not to make discrete graphics but for the CPU/GPU fusion. Unless Intel can suddenly make decent IGPs they will have a huge disadvantage vs AMD for Fusion; that is why I think they will buy out Nvidia once they hit rock bottom, which tbh can't be far away.
May 15, 2009 12:46:23 AM

From a few rumors and good guesses, LRB can be shrunk to lesser core counts, and thus make their IGPs LRB-styled. Going fusion in this manner might give Intel a nice boost indeed. I keep hearing 2.5x what they currently have, which puts them close to the high end currently, until ATI releases its R700-based IGP; then it'll still be slow, but fairly close to nVidia's IGP.
May 15, 2009 3:06:53 AM

@ jennyh

Can't complain:p , i'm back into pc gaming for killing floor, but everything else ps3:p  even sacred 2 haha:p 
May 15, 2009 3:22:32 PM

I wouldn't say ATI are miles behind Intel with their CPUs; behind yes, but not by miles.
May 15, 2009 3:40:39 PM

L1qu1d said:
@ jennyh

Can't complain:p , i'm back into pc gaming for killing floor, but everything else ps3:p  even sacred 2 haha:p 



I bet you are enjoying playing games at 20-30fps, 720p and even sometimes sub-HD resolutions while struggling with your controller in FPS games. Different people, different standards...
May 15, 2009 6:21:29 PM

Well, there's exclusives, there's consistency in updates and DLC. Yes, the controller for FPS isn't that great, but it's good to have everything at your fingertips. I have a 720p TV, so the resolution doesn't affect me.

Plus with the console I have a bigger audience, seeing as not everyone knows computers, so most of them choose console, which means better online gameplay for certain games (HAWX).

I have 2 285s on a PC that hasn't been touched. :p 

Also console games have a resale value, PC games rarely do.

Like you said, different opinions :) 

P.S.

Yes, I love playing GT5 at 60 fps, Killzone 2 at 30 fps ;) I mean it works, right? :) 
May 15, 2009 8:09:31 PM

Oh, I'm not whining at all. I mean I'd jump right back on board to PC when I see some titles that catch "my" eye.

Just like Killing Floor. Anything that can play on my laptop is welcome :) 

Don't worry stranger, I'll be back to annoy you as soon as DiRT 2 is out ;) or SC2 :)  Hopefully by then, we'll all have computers that won't cry when playing Crysis :p 
May 15, 2009 9:39:28 PM

L1qu1d said:
Well, there's exclusives, there's consistency in updates and DLC. Yes, the controller for FPS isn't that great, but it's good to have everything at your fingertips. I have a 720p TV, so the resolution doesn't affect me.

Plus with the console I have a bigger audience, seeing as not everyone knows computers, so most of them choose console, which means better online gameplay for certain games (HAWX).

I have 2 285s on a PC that hasn't been touched. :p 

Also console games have a resale value, PC games rarely do.

Like you said, different opinions :) 

P.S.

Yes, I love playing GT5 at 60 fps, Killzone 2 at 30 fps ;) I mean it works, right? :) 



Well, after this attempt I wish I could say that you "make some valid points", but you really don't.

There are tons of exclusives in PC gaming like Starcraft 2, Diablo 3, Stalker, Crysis, The Witcher, Empire: Total War etc, too many to count. Furthermore, if you like RTS and MMORPGs then PC is your only choice.

Due to the high development costs that come from royalty fees and the high risk of failure, console gaming is dominated by overhyped, shallow titles. Developers are afraid to experiment. PC gaming as an open platform is the only place where innovation and creativity from indie developers run rampant. Add to that unlimited backwards compatibility and the fact that PC enjoys Xbox 360 "only" games like Gears of War, Mass Effect and soon Fable 2, and it's easily the best choice.

If you like gamepads you can use any kind of them on your PC, so I don't know why you brought that up. But I really can't believe you brought up DLC, which is maybe the worst trend to ever come to video gaming. Cutting stuff from a release to sell it later, while PC gamers get unlimited content for free from mods, is simply outrageous and a rip-off.

I don't know what kind of audience you are talking about, but from my experience with Xbox 360 Live I always had to mute everybody to avoid whiny kiddy voices talking "gangsta". Steam is far more popular in terms of users compared to PSN, and that's a fact.

Well, developers don't get anything from used games; you are only making greedy brick-and-mortar owners richer. 1/3rd of GameStop's annual profits came from reselling used console games, huge amounts of money that the developer gets nothing from. It's almost as bad as piracy, and you don't want to bring that up....

So... why are you playing console games again? Weird taste is the only valid reason, especially when you have a good rig like you do.