
GeForce GTX 570 Review: Hitting $349 With Nvidia's GF110

Meet Nvidia’s GeForce GTX 570

It isn’t easy to build a 3+ billion-transistor GPU on a maturing 40 nm lithography process, a lesson Nvidia learned the hard way with GF100. The company productized the chip as GeForce GTX 480, GTX 470, and GTX 465, but only the GTX 470 balanced heat, power, noise, and performance well enough to come through our reviews relatively unscathed.

GF100 bought Nvidia enough time to revise and tape out GF110 within two months of the GTX 480's launch, though. The company ironed out its yield issues, making it possible to ship fully-functional versions of the chip with all 16 SMs enabled. Transistor-level changes also facilitate higher core and shader clocks, yielding even more performance within the same power budget as GeForce GTX 480: the GTX 480 ran at 700 MHz, while GTX 580 jumped to 772 MHz.

Nvidia dials in its new GeForce GTX 570 with a 732 MHz core clock. Its CUDA cores run at 1464 MHz, and the memory data rate is 3800 MT/s (a 950 MHz clock). Now, it’s possible that the GF110s used to build GeForce GTX 570 don’t bin at GTX 580 speeds, forcing Nvidia to turn off one SM to differentiate. Or perhaps one SM in each of these GPUs is defective, leaving room to overclock the rest of the board.

Enough readers have asked for overclocking data that we went for it this time around, and the results don't look promising. Our GeForce GTX 570 sample ran loops of 3DMark11 at 780 MHz core/1560 MHz shaders, but couldn't reach 800 MHz without crashing after one run. Those results seem consistent with what add-in board vendors are seeing as a ceiling for GF110. We recently tried to put together a roundup of overclocked GeForce GTX 580s and watched it fall apart as vendors accepted and then, one by one, withdrew, citing yield issues.
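
The clocks Nvidia quotes are tied together by fixed ratios on Fermi: the CUDA cores run at exactly twice the core clock, and GDDR5 moves four transfers per memory-clock cycle. A minimal sketch of that arithmetic in Python, using only the figures from the text above (nothing here is measured):

    # Fermi clock relationships for the GeForce GTX 570 (numbers from the text above)
    core_clock_mhz = 732                      # graphics core clock
    shader_clock_mhz = 2 * core_clock_mhz     # CUDA cores run at 2x core on Fermi
    memory_clock_mhz = 950                    # GDDR5 command clock
    data_rate_mts = 4 * memory_clock_mhz      # four transfers per clock

    print(shader_clock_mhz, data_rate_mts)    # 1464 3800, matching Nvidia's spec

    # Our overclocked result keeps the same 2:1 ratio: 780 MHz core / 1560 MHz shaders
    assert 2 * 780 == 1560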

Like I said on the previous page, the GeForce GTX 570 is the product of GeForce GTX 480 and 470 getting it on in the back of a Plymouth ‘Cuda, with the improvements to GF110 sprinkled on top. Thus, its back-end hosts 1.25 GB of memory attached to five 64-bit memory channels. The full GF100’s 768 KB of L2 cache is cut back to 640 KB, too.
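
To put rough numbers on that back end: five active 64-bit channels make a 320-bit aggregate interface, each channel carries 256 MB of GDDR5 and a proportional slice of L2, and combining the bus width with the 3800 MT/s data rate gives the card's peak memory bandwidth. A back-of-the-envelope sketch derived from the figures above, not from a spec sheet:

    # GTX 570 back-end arithmetic: five of the full chip's six 64-bit channels are active
    channels = 5
    bus_width_bits = channels * 64            # 320-bit aggregate memory interface
    memory_mb = channels * 256                # 1280 MB = 1.25 GB of GDDR5
    l2_kb = channels * (768 // 6)             # 768 KB full-chip L2 trimmed to 640 KB

    data_rate_mts = 3800                      # from the clocks quoted earlier
    bandwidth_gbs = data_rate_mts * bus_width_bits / 8 / 1000

    print(bus_width_bits, memory_mb, l2_kb, bandwidth_gbs)   # 320 1280 640 152.0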

Nvidia’s reference GeForce GTX 570 card borrows the electrical and mechanical improvements that made the GTX 580 better than GTX 480. In fact, the card is nearly identical to the GTX 580 (with the exception of its power connectors), measuring 10.5" long. Although the second-best GeForce GTX 470 was never really guilty of the same heat- and noise-related offenses as the once-flagship GTX 480, Nvidia nevertheless carries over the new vapor chamber cooler to its GTX 570, improving dissipation efficiency. Noise is also kept under control through an improved fan control algorithm and a lower-pitched blower.

Display output connectivity should be pretty familiar by now. Nvidia only offers two independent display pipelines. So, while the GeForce GTX 570 sports two dual-link DVI outputs and a single mini-HDMI connector, it’s only able to utilize two at any given time. This limitation is perhaps the biggest reason I can’t use a GeForce card in my personal workstation, which employs three displays. You can get three simultaneous outputs through a feature Nvidia calls Surround mode, but that requires a pair of cards in SLI. AMD’s Eyefinity is just easier to use.

Because the GTX 570 has a lower TDP than the 580 (219 W versus 244 W), Nvidia’s new card gets away with a pair of six-pin auxiliary power connectors, as the quick budget check below illustrates.
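
This is only a sanity check against the PCI Express delivery limits (75 W from the x16 slot, 75 W per six-pin plug, 150 W per eight-pin plug), not a measurement; the GTX 580's 6+8-pin arrangement is shown alongside for comparison:

    # Rough power-delivery budgets from PCIe connector limits (not measured draw)
    SLOT_W = 75           # PCIe x16 slot
    SIX_PIN_W = 75        # per 6-pin auxiliary connector
    EIGHT_PIN_W = 150     # per 8-pin auxiliary connector

    gtx570_budget = SLOT_W + 2 * SIX_PIN_W              # 225 W available vs. 219 W TDP
    gtx580_budget = SLOT_W + SIX_PIN_W + EIGHT_PIN_W    # 300 W available vs. 244 W TDP

    print(gtx570_budget >= 219, gtx580_budget >= 244)   # True True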

Another thing to keep in mind is power use with more than one monitor attached. The following is an update I added to my GeForce GTX 580 review after a couple of requests came through for multi-display power testing:

According to Nvidia, it rectified the out-of-control increases that were being reported in the 256-series GeForce driver released earlier this year. So long as you're using two monitors with the same resolution and timing settings, you're supposedly safe. To double-check, I attached a pair of Dell P2210H displays to a GeForce GTX 580 and charted power consumption and temperatures:


Configuration                                       System Power Consumption    Temperature
One Display (1920x1080), Idle                       190 W                       40 deg. C
Two Displays (2 x 1920x1080), Idle                  192 W                       45 deg. C
Two Displays (1 x 1920x1080, 1 x 1280x1024), Idle   255 W                       56 deg. C


Power consumption doesn't increase much when you attach a second display running at the same resolution and timings, but the temperature does increase by five degrees.

Swapping one of them over to a display running a different resolution, however, continues to have a profound effect on power and temperatures (Nvidia does not deny this). The jump from 192 W to 255 W and from 45 degrees to 56 degrees is significant. The good news is that if you're using two matched screens, the latest drivers minimize the impact of utilizing both of the GeForce GTX 580's display outputs.
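
For reference, here are the deltas from the table above computed directly: relative to a single display, a matched second panel costs about 2 W and 5 degrees at idle, while a mismatched one costs roughly 65 W and 16 degrees. A small Python sketch of that bookkeeping:

    # Idle measurements from the table above: (system power in W, temperature in deg C)
    idle = {
        "one display":             (190, 40),
        "two matched displays":    (192, 45),
        "two mismatched displays": (255, 56),
    }

    base_w, base_c = idle["one display"]
    for config, (watts, temp) in idle.items():
        if config == "one display":
            continue
        print(f"{config}: +{watts - base_w} W, +{temp - base_c} deg C over a single display")
    # two matched displays: +2 W, +5 deg C over a single display
    # two mismatched displays: +65 W, +16 deg C over a single display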

Comments
  • -9
    thearm , December 7, 2010 11:16 AM
    Grrrr... Every time I see these benchmarks, I'm hoping Nvidia has taken the lead. They'll come back. It's alllll a cycle.
  • 5
    xurwin , December 7, 2010 11:30 AM
    at $350 beating the 6850 in xfire? i COULD say this would be a pretty good deal, but why no 6870 in xfire? but with a narrow margin and if you need cuda. this would be a pretty sweet deal, but i'd also wait for 6900's but for now. we have a winner?
  • 32
    nevertell , December 7, 2010 11:31 AM
    Yay, I got highlighted !
  • 5
    verrul , December 7, 2010 11:35 AM
    because 2 6850s is pretty equal in price to the 570
  • 26
    sstym , December 7, 2010 11:36 AM
    thearm: "Grrrr... Every time I see these benchmarks, I'm hoping Nvidia has taken the lead. They'll come back. It's alllll a cycle."


    There is no need to root for either one. What you really want is a healthy and competitive Nvidia to drive prices down. With Intel shutting them off the chipset market and AMD beating them on their turf with the 5XXX cards, the future looked grim for NVidia.
    It looks like they still got it, and that's what counts for consumers. Let's leave fanboyism to 12 year old console owners.
  • 8
    nevertell , December 7, 2010 11:37 AM
    It's disappointing to see the freaky power/temperature parameters of the card when using two different displays. I was planning on using a display setup similar to that of the test, now I am in doubt.
  • -2
    reggieray , December 7, 2010 11:46 AM
    I always wonder why they use the overpriced Ultimate edition of Windows? I understand the 64 bit because of memory, that is what I bought but purchased the OEM home premium and saved some cash. For games the Ultimate does no extra value to them.
    Or am I missing something?
  • 6
    reggieray , December 7, 2010 11:50 AM
    PS Excellent Review
  • 3
    theholylancer , December 7, 2010 11:50 AM
    hmmm more sexual innuendo today than usual, new GF there chris? :D 

    EDIT:

    Love this gem:
    Quote:

    Before we shift away from HAWX 2 and onto another bit of laboratory drama, let me just say that Ubisoft’s mechanism for playing this game is perhaps the most invasive I’ve ever seen. If you’re going to require your customers to log in to a service every time they play a game, at least make that service somewhat responsive. Waiting a minute to authenticate over a 24 Mb/s connection is ridiculous, as is waiting another 45 seconds once the game shuts down for a sync. Ubi’s own version of Steam, this is not.


    When a reviewer of not your game, but of some hardware using your game comments on how bad it is for the DRM, you know it's time to not do that, or get your game else where warning.
  • 2
    amk09 , December 7, 2010 11:52 AM
    nevertell: "Yay, I got highlighted !"


    So you gonna buy it? Huh huh huh?
  • -1
    nevertell , December 7, 2010 11:54 AM
    I was planning on doing so, but I didn't get enough money from the work I was doing, so I'll stick with just a new monitor. I will definitely get a new card during the next year, but not for now :(  And by then, there might be new great cards out there.
  • 1
    lostandwandering , December 7, 2010 11:55 AM
    Good looking performance numbers. Will be interesting to see what this does in the real world to NVidia's pricing of the GTX 400 series.
  • -5
    SininStyle , December 7, 2010 12:02 PM
    Wow, no sli 460s included? Yet you include 6850s in xfire? really? *facepalm* fail
  • 2
    darkchazz , December 7, 2010 12:04 PM
    now I want gtx 560
  • 3
    anacandor , December 7, 2010 12:07 PM
    While the 5xx series is looking decent so far, it seems to me (pure speculation here) that Nvidia held back with this series and are possibly putting more resources into Kepler. I feel this because they aren't trying to kill AMD for market share, instead put up a perfectly reasonable product that neither EXCELS vastly beyond last gen, but providing enough performance to justify a new product. That said i'm looking forward to their 2011 lineup.

    Also, it would have been interesting to see Metro 2033 tested with max instead of medium settings. All the cards are able to play medium at all resolutions with no AA... push them to their limits? :) 

    Thoroughly enjoyable review though. Thanks, Chris!
  • 4
    gxpbecker , December 7, 2010 12:20 PM
    i LOVE seeing Nvidia and AMD trading blows back and forth. Keeps prices in check lol and gives more options for buyers!!!
  • 1
    kg2010 , December 7, 2010 12:26 PM
    You can see how the 460's in SLI did here vs the 580
    http://benchmarkreviews.com/index.php?option=com_content&task=view&id=614&Itemid=72

    But yeah, this review NEEDS 460's 1GB in SLI to be fair, as they are definitely an alternative to a 580, even a 570. There are quite a few cards at or below $199

    Dual Hawks for $320 AFTER MIR:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814127518

    And these cards will overclock well.
  • 0
    tronika , December 7, 2010 12:32 PM
    ReggieRay: "I always wonder why they use the overpriced Ultimate edition of Windows? I understand the 64 bit because of memory, that is what I bought but purchased the OEM home premium and saved some cash. For games the Ultimate does no extra value to them. Or am I missing something?"

    noticed that too. i really can't think of any reason other than the language support for the Tom's engineers. 99% of the gamer market would be better off with home premium 64 bit. the other 1% that actually runs and maintains a domain in their house should get professional or the bloated "ultimate". i mean, who really uses bitlocker on their gaming machine anyway? great article though! i jumped on the 580 and haven't looked back. used to run on a 5850 but now that i've seen fermi in all of it's glory i'm really interested in nvidia's future "vision".
  • 1
    cangelini , December 7, 2010 12:38 PM
    theholylancer: "hmmm more sexual innuendo today than usual, new GF there chris? EDIT: Love this gem: When a reviewer of not your game, but of some hardware using your game comments on how bad it is for the DRM, you know it's time to not do that, or get your game else where warning."


    Wait, innuendo? Where? :) 