Nvidia GeForce GTX 465 1 GB Review: Zotac Puts Fermi On A Diet

Meet Zotac’s GeForce GTX 465

Zotac was kind enough to send over one of its first GeForce GTX 465 cards, based on Nvidia’s reference design. The board is identical to the GeForce GTX 470 Nvidia sent over for launch, measuring 9.5” long (half an inch shorter than the Radeon HD 5850). In fact, everything from the dual-slot form factor to the display output connectivity is the same—you get a pair of dual-link DVI ports and a mini-HDMI connector, but only two outputs are usable at a time.

The rated thermal design power here is 200W—15W less than the GeForce GTX 470 and 50W less than the GeForce GTX 480. In my initial review of the GTX 470 and 480, however, I noted a significant discrepancy between the power ratings of Nvidia’s GF100-based cards and competing boards from AMD. It turns out the two companies define their power numbers differently, much as AMD’s and Intel’s CPU TDP ratings can’t be compared directly. Per Nvidia, TDP is a measure of maximum power draw over time in real-world applications, not the absolute maximum you’d see in a worst-case workload like FurMark. That helps explain why, as you’ll see in our benchmarks, the GeForce GTX 480 draws 133W more than the GTX 465 under load, despite an official TDP difference of only 50W.

Disabled in GF100 for the GeForce GTX 465: five SMs, two ROP partitions, and two 64-bit memory interfaces.

Specs In Depth

When Nvidia first introduced us to GF100, one aspect that stood out to me was how modular it looked. The quartet of Graphics Processing Clusters, each with four Streaming Multiprocessors, and the six ROP partitions—it just looked like pieces were meant to be pulled out to create derivative designs. And that’s sort of what Nvidia’s doing here with its GeForce GTX 465, only instead of manufacturing a smaller, cheaper GPU based on the same Fermi architecture, the company is turning off big portions of its pricey GF100.

For more background on the GF100 design, read back to our first preview of the chip from January. For more detail on how that piece of silicon worked its way into the GeForce GTX 480 and 470, check out the review I published in March.

GeForce GTX 465 leverages three GPCs—one is disabled entirely. Of the 12 remaining SMs (remember, there are four SMs per GPC), Nvidia disables one, leaving 11 SMs, each with 32 CUDA cores. That’s where we get the 352-core number.
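That arithmetic is simple enough to verify in a few lines (a throwaway sketch; the constants come straight from the paragraph above):

```python
# Deriving GeForce GTX 465's shader count from GF100's building blocks.
SMS_PER_GPC = 4    # Streaming Multiprocessors per Graphics Processing Cluster
CORES_PER_SM = 32  # CUDA cores per SM in GF100

active_gpcs = 3    # one of GF100's four GPCs is disabled entirely
disabled_sms = 1   # one more SM is turned off within the active GPCs

active_sms = active_gpcs * SMS_PER_GPC - disabled_sms
cuda_cores = active_sms * CORES_PER_SM

print(active_sms)   # 11
print(cuda_cores)   # 352
```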

|  | GeForce GTX 465 | GeForce GTX 470 | GeForce GTX 480 |
| --- | --- | --- | --- |
| Graphics Processing Clusters | 3 | 4 | 4 |
| Streaming Multiprocessors | 11 | 14 | 15 |
| CUDA Cores | 352 | 448 | 480 |
| Texture Units | 44 | 56 | 60 |
| ROP Partitions | 4 | 5 | 6 |
| Graphics Clock | 607 MHz | 607 MHz | 700 MHz |
| Shader Clock | 1,215 MHz | 1,215 MHz | 1,401 MHz |
| Memory Clock | 802 MHz | 837 MHz | 924 MHz |
| GDDR5 Memory | 1 GB | 1.25 GB | 1.5 GB |
| Memory Interface | 256-bit | 320-bit | 384-bit |
| Memory Bandwidth | 102.6 GB/s | 133.9 GB/s | 177.4 GB/s |
| Texture Filtering Rate | 26.7 GTexels/s | 34 GTexels/s | 42 GTexels/s |
| Connectors | 2 x DL-DVI, 1 x mini-HDMI | 2 x DL-DVI, 1 x mini-HDMI | 2 x DL-DVI, 1 x mini-HDMI |
| Form Factor | Dual-slot | Dual-slot | Dual-slot |
| Power Connectors | 2 x 6-pin | 2 x 6-pin | 1 x 6-pin, 1 x 8-pin |
| Recommended Power Supply | 550W | 550W | 600W |
| Thermal Design Power | 200W | 215W | 250W |
| Thermal Threshold | 105 degrees C | 105 degrees C | 105 degrees C |

Also remember that, in Fermi, texture units are tied to SMs. There are four texture units per SM. Do the math—11 SMs times four units each gives you 44 total texture units. Moreover, geometry performance dips, as we’re only dealing with 11 PolyMorph engines now.
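As a quick check against the spec table, the texture figures fall straight out of the SM count and core clock (an illustrative back-of-the-envelope calculation, assuming one bilinear-filtered texel per unit per clock):

```python
TEX_UNITS_PER_SM = 4   # GF100 pairs four texture units with each SM

active_sms = 11
core_clock_mhz = 607   # graphics clock from the spec table

texture_units = active_sms * TEX_UNITS_PER_SM
# Bilinear filtering rate: one texel per unit per clock cycle
gtexels_per_s = texture_units * core_clock_mhz / 1000

print(texture_units)             # 44
print(round(gtexels_per_s, 1))   # 26.7
```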

The GF100’s back end is independent of the GPCs, yet Nvidia makes cuts here too, turning off two of the six ROP partitions and dropping pixel throughput to 32 per clock. This has the side effect of axing two 64-bit memory interfaces, taking what starts as a 384-bit path and reducing it to 256 bits. As a result, we have a nice, even 1GB of GDDR5 memory pushing up to 102.6 GB/s.
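That 102.6 GB/s number follows from the narrower bus and GDDR5’s effective quad data rate (a back-of-the-envelope sketch; GDDR5 transfers four bits per pin per cycle of the quoted clock):

```python
bus_width_bits = 256    # after disabling two of GF100's six 64-bit interfaces
memory_clock_mhz = 802  # memory clock from the spec table
DATA_RATE = 4           # GDDR5 moves four bits per pin per quoted clock cycle

# transfers/s * bytes per transfer, expressed in GB/s
bandwidth_gb_s = memory_clock_mhz * 1e6 * DATA_RATE * (bus_width_bits / 8) / 1e9
print(round(bandwidth_gb_s, 3))  # 102.656 (the spec sheet truncates to 102.6)
```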

Nvidia isn’t changing the graphics and shader clocks versus GeForce GTX 470—they’re still 607 and 1,215 MHz, respectively. Memory clocks do drop slightly though, from 837 down to 802 MHz. Of course, all of this tweaking, tuning, and massaging is a means to an end—performance. Let’s see how the GeForce GTX 465 measures up.

Chris Angelini
Chris Angelini is an Editor Emeritus at Tom's Hardware US. He edits hardware reviews and covers high-profile CPU and GPU launches.
  • Annisman
    Dang, it looks like Nvidia has almost no real answers for the AMD/ATI lineup of cards. However, if this card can drop in price a little it may be competitive because of some of its Nvidia-only features. I mean, it runs cooler and uses a fair amount less power than the 470 and 480, maybe this will become the PhysX card to get? Especially if they could manage a single slot version and drop the price. Anyways, no competition is bad for everyone and I hope Nvidia can get their act together asap.
  • fatkid35
    i'll stick to my crossfire'd 5770s. same money and same power consumption.
  • tacoslave
    Quoting fatkid35: "i'll stick to my crossfire'd 5770s. same money and same power consumption." Or a 5870, same thing, less problems, but thats just me. Oh, and that thing got pwnd by a 5830, and thats not saying much.
  • welshmousepk
    wow, the pricing of this thing is all wrong. given how well the 480 and 470 sit in the market, this just seems like a pointless card.
  • liquidsnake718
    How many times do I have to say that this is nothing but a marketing gimmick for defective GTX480's and possibly 470's as well. Like the 5830 which was a cut/gimped/ or limited 5850
  • liquidsnake718
    sorry 5870 on the above comment
  • bombat1994
    make it $60 cheaper and you might have a good card, but i would buy a 5850 over this thing every day of the week
  • dco
    Retail is messed up: by comparison, they charge you for a brand, not the product. What's worse is that people will buy it.
  • rohitbaran
    The GTX 4xx line is definitely not "the way it's meant to be played," and this latest crappy piece of hardware further proves it. Hot and expensive, but poor on performance. The more cards they launch, the clearer ATI's victory becomes.
  • km4m
    Fail, fail, fail...suitable words for Nvidia at this moment.