ATI is currently rocking a substantial lead in the DirectX 11 space—it’s blazing a trail where it has, in the past, often followed suit. “But Chris, ATI gave us DirectX 10.1!” And look how pervasive or impactful that turned out to be.
This time it’s different, though. ATI and Nvidia both agree that DirectX 11 is the API that’ll change the figurative game. Has it yet? Is this boat already sailing off into the sunset? Decidedly not. ATI’s own schedule of DirectX 11-compatible software lists three titles currently shipping with support, three more slated for Q1, and two more expected in 2010. The undeniable reality is that ATI is out in front, and Nvidia brings up the rear this time around. But its tardiness means very little in the big picture (so long as you’re an enthusiast, and not a shareholder).
Playing Catch-Up
The real values in ATI’s Radeon HD 5800-series lineup, as it stands today, are gaming performance in the more pervasive DirectX 9 and DirectX 10 titles, Eyefinity, and the family’s handling of high-def audio/video content.
And while today’s preview of Nvidia’s GF100 graphics processor (the first model based on the company’s Fermi architecture) is full of detail covering the chip’s building blocks and architectural philosophies, we’re left to make educated guesses on how a productized GF100 will stand up to its competition.
Ray-tracing: possible in real-time using GF100's compute strengths.
We know that, through Nvidia Surround, the company will enable multi-monitor (more than two displays) gaming. But it’s only enabling two outputs per card, so three will require an SLI configuration. That sounds like it could get expensive, especially since I'm running a configuration like that using a single $290 Radeon HD 5850.
We also now know a lot more about the resources dedicated to gaming within GF100. Given this, it’s quite reasonable to assume that a graphics card based on GF100 will be significantly faster than a card centering on ATI’s Cypress. Nvidia also admitted to us that it’d be more power-hungry than GT200—the ASIC driving GeForce GTX 200-series boards.
What we don’t yet know is when GF100-based cards will be available for sale. Nvidia says Q1; you could be waiting another two months, if its projections come to pass. We don’t know the GF100’s clock rates, nor do we have GDDR5 data rates. With those variables up in the air, price targets for the planned flagship and its derivatives are naturally right out the window. And the issue of high-def audio/video is largely irrelevant at this stage, given the GPU’s known transistor count and anticipated heat. But Nvidia says GF100 has the same A/V suite as the GeForce GT 240. That means you get Blu-ray playback acceleration and eight-channel LPCM output, but no bitstreaming—a feature ATI and Intel both offer.
Bigger Fish To Fry
So, I’ve now shone a spotlight on a few elephants in the room that will almost surely be hot topics of discussion in the comments section anyway: no cards yet, no board specs yet, not even preliminary performance numbers. GF100 will run hot. It won’t bitstream high-def audio (right out of the gate, at least—subsequent derivatives could). And it’ll require two (likely expensive) boards to do triple display outputs, a feature ATI supports across its 5000-series lineup.
Grim though that might sound, Nvidia’s recent preview of the GF100 architecture left us with the impression that the company has a bigger-picture plan of action for DirectX 11. Not only is the hardware in production, representatives claim, but third-party software developers are also being armed with the tools to create more compelling content.
Sure, some of this is done for selfish reasons—for example, its PhysX plug-ins for 3ds Max, Maya, and Softimage are all provided to further the company’s agenda for this proprietary API. However, the Nexus toolkit, which is integrated into Visual Studio, supports CUDA C calls (of course), along with DirectX 10/11 and OpenGL. To the uninitiated gamer who might not know what it takes to bring a popular title to market, it’s really just important to know that Nvidia’s efforts will, ideally, enable more efficient development and better on-screen effects. And because DirectX 11 is fairly specific in the way it dictates compatibility, Nvidia is confident that debugging on a GF100-based graphics card will allow developers to optimize for ATI’s DX11 hardware, too.
But enough about predictions of the future—we’re writing this story because we know some new facts about what GF100 will enable when it finally does emerge sometime in the next two months.
And yes, it does matter that AMD got DX11 hardware out the gate first. Somehow, when Nvidia wins at something, whether that's being first with a technology, having the fastest card on the market, or a neato feature like PhysX, it's a huge deal, but when AMD has a win, it's 'calm down people, let's not get excited, it's no big deal.' The market, public opinion, and, I believe, even the worth of the company have all been significantly boosted by its DX11 hardware. It is a big deal. And it'll be a big deal when GF100 is faster than the 5970 too, but they are late. I believe it'll be April before we'll realistically be able to buy these without having to F5 Newegg every 10 seconds for a week, and in these months that AMD has been the only DX11 player, well, a lot of people don't want to wait that long for what might be the next best thing... all I'm trying to say is let's try not to spin things so one company sounds better. It makes me sad when I see fanboyism, whether for AMD, Intel, Nvidia, whoever, on such a high-profile review site.
*Specifically* mentioned in the title of the story, just to avoid that comment =)
WTH is he supposed to benchmark? Nothing has been released; it's just an article giving us details on what we can expect within the next two months.
I hope that 5850s on shorter PCBs come out around the time of the GF100 so they can drop to a price where I can afford to buy one ^_^
You just can't win
Solid info?
No price, no benchmarks, no specs.
What's wrong with you?
This is the end of the NDA. Do you even know what an NDA is, kid?
I'd rank it up there with Anand's on the first Phenom iteration - he had engineering samples well before the others and there was mounting pressure to at least publish something ... and the AMD fanbois should consider that article very fair.
I had heard Nvidia was booting some silicon and the clocks were low ... and in order to get within the power envelope it was likely some SPs would have to be shaved ... that's about all anyone can say.
I imagine NVidia will also be concentrating on ensuring the die is securely attached to the substrate.
They won't want to cheese off the OEMs like last time:
http://www.theinquirer.net/inquirer/news/1050052/nvidia-chips-underfill
Also, will they adopt a naming convention that finally makes sense? Up to 9000, reset, skip double digits and 100, go straight to 200. Now go back to 100. I mean, seriously, who comes up with these names?
G80, G92, G200, GF100..
http://news.softpedia.com/newsImage/Alleged-GeForce-GTX-360-and-380-benchmarks-Surface-3.jpg/
Slight tweaking of the RE5 results (likely because they didn't point in the right direction for the existing cards).
And Charlie's recent 'Pro-nVidia' article is somewhat telling about the possibility of scaling downward, what's your opinion on it if you can say, other than "Charlie's just being Charlie".
http://www.semiaccurate.com/2010/01/17/nvidia-gf100-takes-280w-and-unmanufacturable
Do you? The end of an NDA does not mean every detail has to be divulged. You can still only provide the details that have been given to you. If NVIDIA doesn't hand out the review samples, you can't benchmark them. It's not rocket science!
If they claim that it's "significantly faster," then it had better be, or else...