Last year was full of ups and downs in the graphics market. First, Nvidia unveiled its GT200 graphics processor and a pair of boards centering on the chip. It wiped the floor with everything else out there—not exactly difficult given AMD’s mid-range Radeon HD 3800-series, which had already been trumped.
Then AMD pulled a rabbit out of its hat, launching the RV770 GPU and two boards based on that piece of silicon. The fastest Radeon HD 4870 wasn’t quite quick enough to best the fastest Nvidia chip, but it was fast enough that everyone knew the dual-processor Radeon HD 4870 X2 AMD had pre-announced during the launch would put the underdog on top.
Since then, AMD has been busy populating its lineup with mainstream and entry-level boards based on derivative architectures. The Radeon HD 4830 has turned into the least-expensive performance offering. The Radeon HD 4670 and 4650 form the meat of AMD’s mid-range lineup. And the Radeon HD 4500-/4300-series boards make up the entry-level.
Nvidia has responded to AMD’s challenge in a number of different ways. At the high-end, it launched its own dual-GPU card, the GeForce GTX 295. In the middle of its performance line, a less-handicapped GeForce GTX 260 with 216 shader processors gets the jump on AMD’s Radeon HD 4850 (and indeed the 4870 with 512 MB of memory, as you’ll see in the benchmarks here). And a 55 nm replacement for the GT200 yields the company’s latest GeForce GTX 285.
Of course, then there’s Nvidia’s emphasis on its value-adds: CUDA, PhysX, and 3D Vision, all enabled through the company’s software drivers. While we’d consider the trio of technologies to be in the early stages of mainstream adoption, they’re all advantages nonetheless. AMD is working out the kinks in its Stream video encoder, doesn’t offer any sort of physics acceleration, and has been oddly quiet about its partnership with 3D monitor-maker iZ3D, which, as we revealed at this year’s CES, gives you the same experience on AMD or Nvidia graphics hardware.
In Need Of A Mainstream Answer
While Nvidia would seem to have all of its bases covered, we have to imagine that the massive 55 nm GT200 GPU is still far too large (read: expensive) to work into any card cheaper than the GeForce GTX 260, leaving the company without a suitable successor to the aging G92, a chip that’s nearly a year and a half old.
Fortunately for Nvidia, that relatively geriatric architecture was designed and executed well enough to carry over from a 65 nm process down to 55 nm. Even today, it’s able to do more than just compete against the RV770-based lineup from AMD, a fact proven by today’s GeForce GTS 250 launch.
But while the new board’s name might sound like something new wedged in between the GTX 260/285 and the older GeForce 9800-series boards, the truth of the matter is that it’s G92 reborn. More specifically, it’s the GeForce 9800 GTX+, a die-shrunk version of the GeForce 9800 GTX, which was itself a slightly overclocked re-introduction of the GeForce 8800 GTS.
- The State Of Graphics
- The GeForce GTS 250 In Detail
- Test Setup, Benchmarks, And Notes
- Benchmark Results: 3DMark Vantage
- Benchmark Results: Far Cry 2
- Benchmark Results: Crysis
- Benchmark Results: Left 4 Dead
- Benchmark Results: Call of Duty: World At War
- Benchmark Results: World in Conflict
- Power Consumption
- Conclusion

...which (in the context it has been applied) is the same as saying we don't mind nVidia renaming an 8800 GT to a 9800 GT, and then a 9800 GT to a whatever 2xx series... and so on and so forth. My point is simple: nVidia is pulling an extremely sleazy marketing scheme on consumers by renaming existing models. If you goof, admit it and get on with life; that's why I appreciated the fact that when the first generation of Phenoms was botched, AMD gracefully renamed unaffected quads with a 50 (i.e., 9650 instead of 9600). Trying to remember all the different names of the exact same model is like dealing with someone who IMs you from five different screen names; eventually you just end up blocking them.
Meh.
And there are MASSIVE rumours saying that Nvidia is hand-picking the review models sent to reviewers, even confirmed by HardOCP. Addressing that in this article would have been great.
Yes, I agree with that totally if we're talking about the demographic these forums target. But that's absolutely absurd if you count everyone.
The "average Joe" is usually a hobby gamer who has a full-time job, if not two, a wife, kids, generally lower pay compared to the white-collar IT job market, and just doesn't have the time for all of the 'homework'. And even then, a lot of people still wouldn't know what those benchmarks mean, or even where to find them on Google, if they know what the word benchmark means at all.
It'd be the same if Ford released a new Mustang called the Mustang GTX250 that was, in reality, identical to the Mustang GT with a different name and better tires. Ford would catch all kinds of hell for it, which is exactly why they don't do it.
But Nvidia apparently thinks it's above the average consumer, and hopes to get one up on them to get rid of all of its oversupplied chips.
Don't sell yourself short, but don't give these companies credit for doing what they're doing. Nvidia has been very anti-consumer lately, and they shouldn't be given any excuse for it.
Cherry picked? It's a retail product.
I Agree!
1. All of the AMD 4800 cards can be easily overclocked, especially the cheap 4830, which often OCs over 700 MHz on its GPU clock. This will affect the value evaluation, because the 9800+/250 is going to have to OC pretty well to match it bang for buck, and seeing as the tested cards are already OCed, I really wonder if it has that headroom.
2. 4850s and particularly 4870s come in much hotter versions than the vanilla flavors, e.g., the Sapphire Toxic. The prices of these models will be important to consider.
3. From what I have seen, the G92 architecture is sketchy performance-wise in SLI compared to the 4800 series in CrossFire. I'm not sure of this, but I would be cautious about using a G92 card if you were planning a multi-card setup, at least based on the tests I have seen. It would be interesting to see direct tests between a GTS 250 SLI and a 4830/4850 CF setup. I'd put my money on the CF solution, and I'd love to be proved wrong for Nvidia's sake.
A dual slot cooled video card that is just slower than a 4850 could be a good thing if they work the price low enough.
Also, the knowledge at shops isn't always better. I've seen people behind the counter who don't know the difference between DDR1 and DDR2 memory and will just tell you they don't have it.
Rebranding is evil, but if that's the way Nvidia can keep making money and stay alive, I'd rather have that than the solo reign of ATI.
Using the GTS 250 as an example: Kevin and Tuan reported that the 250 uses a 512-bit memory bus. I almost went over to Anand to check the spec before clicking on this article.
As stated above, I found it at the bottom of the article: 512-bit is listed as the only difference between the 9800+ and the 250, but it's not in this article. I trust Chris on this one.
Are you two idiots? The GTS 250 is the same as the 9800 GTX+. If Nvidia can sell the retail 9800 GTX+ at 738 MHz, why would they need to cherry-pick GTS 250s at the same clock?
If every GTS 250 were running at 850 MHz vs. 738 MHz on the 9800 GTX+, you could say Nvidia is binning better chips for the 250, but they run at the same speed.
I actually prefer Jane over Kevin and Tuan both. She might be blogging, but most of the time it's interesting, and never misleading or downright untrue.
Both Kevin and Tuan are total morons, IMO. Are they college kids doing a practicum or something? Because there's no way they have any actual journalism credibility. They're even the laughing stock of other forums on a consistent basis.