
Display Outputs And AMD's Tessellation Coup

AMD Radeon HD 6990 4 GB Review: Antilles Makes (Too Much) Noise

Eye See You

Given the Radeon HD 6990’s brute force approach to performance and cooling, we’re happy to see that elegance didn’t go completely ignored. Using a single slot worth of the I/O bracket, AMD exposes an unprecedented five display outputs: one dual-link DVI and four mini-DisplayPort connectors. The retail Radeon HD 6990 will ship with a trio of adapters for more diversity, including one passive mini-DP-to-single-link DVI, one active mini-DP-to-single-link DVI, and one passive mini-DP-to-HDMI. AMD calls these a roughly $60 value.

I’m a big proponent of multi-display configurations for enhancing productivity, and I currently use a 3x1 landscape configuration. I consider five screens overkill for what I do. But AMD is now pushing a native 5x1 portrait mode that admittedly looks pretty interesting.

Beyond simply working more efficiently, using three or five screens is also a great way to take advantage of graphics horsepower available from a 375+ W dual-GPU card. As you’ll see in the benchmarks, 1680x1050 and 1920x1080 are often wasted on such a potent piece of hardware—even with anti-aliasing and anisotropic filtering enabled.

Just remember, with more than one screen attached to a card like AMD’s Radeon HD 6990, idle power consumption won’t match the figures we present toward the end of this piece. It actually jumps fairly substantially due to the need for higher clocks. With just one screen attached to the 6990’s dual-link DVI output, we observe 148.5 W system power consumption at idle. With a trio attached to the mini-DisplayPort connectors, that figure jumps to 187.2 W.

To be clear, you'll see higher power use from Nvidia cards as well in multi-monitor environments. AMD is the first to explain why this power increase is necessary, though:

"PowerPlay saves power via engine voltage, engine clock, and memory clock switching. Memory clock switching is timed to be done within an LCD VBLANK so that a flash isn't seen on the screen when the memory speed is changed. This can be done on a single display, but not with multiple displays because they can (and in 99% of the cases, will be) running different timings and virtually impossible to hit a VBLANK on both at the same time on all the panels connected (and when we say "timings" it’s not as simple as just the refresh rate of the panel, but the exact timings that the panel's receivers are running). So, to keep away from the end user seeing flashing all the time, the MCLK is kept at the high MCLK rate at all times.

With regard to power savings under multiple monitors, we have to trade-off between usability and power. Because we can't control what combinations of panels are connected to a desktop system we have to choose usability. Other power saving features are still active (such as clock gating, etc.) so you are still saving more power than peak activities. Note, that in a DisplayPort environment we have more control over the timing and hence this issue could go away if all the panels connected where DP."
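To make that trade-off concrete, here is a rough Python sketch of the reclocking decision AMD describes. Everything in it (the names, the timing model, the 300-microsecond switch-time constant) is illustrative guesswork on our part, not AMD's actual driver logic:

    from dataclasses import dataclass

    @dataclass
    class Display:
        refresh_hz: float    # panel refresh rate
        vblank_lines: int    # scanlines in the blanking interval
        total_lines: int     # total scanlines per frame (active + blanking)

        def vblank_window_us(self):
            # Duration of one VBLANK window, in microseconds
            frame_time_us = 1e6 / self.refresh_hz
            return frame_time_us * self.vblank_lines / self.total_lines

    # Assumed time needed to retrain the memory at a new clock (illustrative)
    MCLK_SWITCH_TIME_US = 300.0

    def can_drop_mclk(displays):
        # With a single panel, the driver can reclock inside its VBLANK,
        # so no flash is ever visible on screen.
        if len(displays) == 1:
            return displays[0].vblank_window_us() > MCLK_SWITCH_TIME_US
        # With several panels running independent timings, their VBLANKs
        # essentially never line up, so memory stays at the high clock --
        # exactly the usability-versus-power trade-off quoted above.
        return False

    # A lone 1080p60 panel has roughly 667 us of VBLANK: plenty of time.
    print(can_drop_mclk([Display(60.0, 45, 1125)]))      # True
    print(can_drop_mclk([Display(60.0, 45, 1125)] * 3))  # False: three panels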

PolyMorph What?

When Nvidia launched its GF100-based cards (GeForce GTX 480 And 470: From Fermi And GF100 To Actual Cards!), it pushed geometry as the next logical step in enhancing the realism of our games. We saw many compelling tech demos and game engine demonstrations that backed up the company's party line. But I wasn't prepared to give Nvidia a pat on the back until an actual game shipped with more than a superficial implementation of tessellation, one that actually used the feature to enhance realism. HAWX 2 was the first example. I immediately started using HAWX 2 for all of my tessellation performance testing in graphics card reviews, and I came away with some interesting conclusions.

First, the PolyMorph engines resident in each of Nvidia's Streaming Multiprocessors didn't seem to scale very well. A GeForce GTX 560 Ti features eight SMs, and consequently eight PolyMorph geometry engines. In comparison, a GeForce GTX 570 employs 15 SMs. Yet we've already seen that the 570 retains 71% of its performance in HAWX 2 after turning tessellation on, while the 560 Ti serves up 70% of its original performance with the feature enabled. That one-point difference suggests that adding PolyMorph engines only mitigates the impact of tessellation up to a point.
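For clarity, the retention figures quoted above and below are simply the tessellated frame rate divided by the untessellated one. A quick sketch of that arithmetic, using placeholder frame rates rather than our measured HAWX 2 numbers:

    def tessellation_retention(fps_tess_off, fps_tess_on):
        # Percentage of the original frame rate kept once tessellation is on
        return 100.0 * fps_tess_on / fps_tess_off

    # Illustrative placeholder frame rates -- not our measured results
    print(tessellation_retention(100.0, 71.0))  # GTX 570-style result: 71.0
    print(tessellation_retention(100.0, 70.0))  # GTX 560 Ti-style result: 70.0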

But at least Nvidia could still point out that AMD's cards shed nearly 40% of their performance with tessellation enabled. Well, it'd seem that a pair of Cayman GPUs cumulatively able to crank out four primitives per clock turns that story on its head. Radeon HD 6990 doesn't impress us with its frankly modest lead over the GeForce GTX 580; it impresses us by retaining 76% of its original frame rate with tessellation turned on. That's better than GeForce GTX 580's 75%. Never mind those 16 PolyMorph engines. It looks like four of AMD's tessellation units do the trick here.

Comments
  • hayest, March 8, 2011 3:34 AM
    Killer Card!

    Running out of spec by default seems kind of weird, though.
  • CrazeEAdrian, March 8, 2011 3:37 AM
    Great job AMD. You need to expect noise and heat when dealing with a card that beasts out that kind of performance, it's part of the territory.
  • jprahman, March 8, 2011 3:40 AM
    This thing is a monster, 375W TDP, 4GB of VRAM! Some people don't even have 4GB of regular RAM in their systems, let alone on their video card.
  • one-shot, March 8, 2011 3:43 AM
    Did I miss the load power draw? I just noticed the idle and noise ratings. It would be informative to see the power draw of CrossFire 6990s and an overclocked i7. I see the graph, but a chart with CPU only and GPU only, followed by a combination of both, would be nice to see.
  • anacandor, March 8, 2011 3:44 AM
    For the people who actually buy this card, I'm sure they'll be able to afford an aftermarket cooler for this thing once they come out...
  • wino85, March 8, 2011 3:46 AM
    OMG!!! It's finally here.
  • cangelini, March 8, 2011 3:48 AM
    one-shot: "Did I miss the load power draw? I just noticed the idle and noise ratings. It would be informative to see the power draw of CrossFire 6990s and an overclocked i7. I see the graph, but a chart with CPU only and GPU only, followed by a combination of both, would be nice to see."

    We don't have two cards here to test, unfortunately. The logged load results for a single card are on the same page, though!
  • bombat1994, March 8, 2011 3:52 AM
    Things we need to see: this thing water cooled, and tested at 7680x1600. That will show just how well it does.

    This thing is an absolute monster of a card.

    They really should have made it 32 nm. Then the power draw would have fallen below 300 W and the card would run cooler.

    STILL NICE WORK AMD
  • Bigmac80, March 8, 2011 3:53 AM
    Pretty fast. I wonder if this will be cheaper than two GTX 570s or two 6950s?
    But OMG this thing is freakin' loud. What's the point of having a quiet system now with Noctua fans :(
  • tacoslave, March 8, 2011 3:54 AM
    It's hot, sucks a lot of power, and costs a ton. But I still want one.

    Badly.
  • lashton, March 8, 2011 3:54 AM
    AMD doesn't care about noise because they are waiting for custom cooling solutions from OEMs.
  • lashton, March 8, 2011 3:56 AM
    bombat1994: "Things we need to see: this thing water cooled, and tested at 7680x1600. [...] They really should have made it 32 nm. [...] STILL NICE WORK AMD"

    That may be possible when they get 28 nm ready on Bulldozer; they are just reaping the rewards of old tech.
  • scrumworks, March 8, 2011 4:01 AM
    Starts with negative comments (noise), so no surprises from Chris. Fermi, of course, never made so much noise or consumed so much power that it would require this type of commenting. Everything was Power, PhysX, and CUDA!
  • 4745454b, March 8, 2011 4:03 AM
    Meh. Too much for what it is. The only thing it does better than two 6970s is power. (If one 6970 is 250 W and this is 375 W, then it uses less power than 2x 6970.) But I agree that you're better off with a CrossFire setup. Like the GTX 480 and possibly the GTX 580, it's simply too much for what you pay for.

    Edit: I should say it's either too much or too little, and always in the wrong way for what you pay for. I also dislike the 375 W TDP. We have specs/rules for a reason.
  • Haserath, March 8, 2011 4:04 AM
    I think AMD will have the performance monster this round. It would be surprising if Nvidia was anticipating something like this. The GTX 570 already uses quite a few more watts than the 6970; what will they do to match two 6970s on one board?
    This is insanity!
  • megamanx00, March 8, 2011 4:14 AM
    OMFG!!!!

    Well, I guess if you were thinking of running three, or even six displays (which would require at least one hub or daisy-chain monitor), this is the card you would want, perhaps even two of them. I'm guessing if you put two of them in, you really, really want it water cooled.
  • MasterMace, March 8, 2011 4:14 AM
    Careful, you may not be able to hear a tornado siren a mile away with this one.
  • nforce4max, March 8, 2011 4:17 AM
    Well, at least it is cheaper than the two 7900 GTX duos (eom) that landed some years after introduction. Personally, I wouldn't purchase this card knowing the driver bugs and the usual issues that dual-GPU cards have; instead, I'll wait a few years and snatch one up on the cheap as a collector's item. For those who have the money: wait at least two or three weeks for reviews and complaints from owners before you buy one. I can live with the noise, but bad drivers I can't.
  • cangelini, March 8, 2011 4:45 AM
    scrumworks: "Starts with negative comments (noise), so no surprises from Chris. Fermi, of course, never made so much noise or consumed so much power that it would require this type of commenting. Everything was Power, PhysX, and CUDA!"

    LOL. Look at the power AND noise graphs, scrum :) 
  • dragonsqrrl, March 8, 2011 4:45 AM
    It looks like the improved CrossFireX scaling introduced with the HD 6000 series really helps the HD 6990 shine. There's no question about it: this thing's a top-of-the-line performance beast.

    The big (and really inexcusable) problem is the noise, and to a lesser extent the power consumption. It's by far the loudest single-card stock cooler ever conceived, and that's taking into account the former champions, the GTX 480 and HD 5970. The load temps aren't great, but they're acceptable in my opinion. I'm not sure what people were expecting, but this is an extreme high-end dual-GPU card, and load temps in the upper 80s (°C) aren't uncommon in this performance segment. The problem is once again the excessive noise generated to keep the two GPUs running at those already high temps.

    I totally agree with the reviewer: the HD 6990 seems rushed, the drivers are buggy, and if running a fan at 3k+ RPM is the only way to keep the card operating, it probably needs a little more tweaking before release.

    lashton: "That may be possible when they get 28 nm ready on Bulldozer; they are just reaping the rewards of old tech."

    Bulldozer will be manufactured on Global Foundries' 32 nm process, not 28 nm. The node shrink you're referring to for next-gen GPUs will use a completely different 28 nm process at TSMC. AMD uses Global Foundries only for its CPUs at this time.