Challenging FPS: Testing SLI And CrossFire Using Video Capture - page 3

  1. bystander said:
    You should read some of the other versions of this article. Here are two using the 7970 and 680 to compare SLI and Crossfire:
    http://www.pcper.com/reviews/Graph [...] nce-Tes-12
    http://techreport.com/review/24553 [...] e-tools/11
    Here is another with an interview with an AMD driver representative:
    http://www.anandtech.com/show/6857 [...] ring-issue
    There is nothing biased about it and it isn't about comparing a particular card, but about comparing SLI vs Crossfire.

    After reading these and comparing them to this Tom's article, it seems like either Tom's is lazy and can't be bothered testing to the extent they should to deliver an unbiased opinion, or they have indeed become a pro-AMD site and are simply ignoring certain elements of testing to favor AMD hardware.
  2. 777iceman777 said:
    The 660 Ti should be up against 7950 non-boost editions. What a biased article, heaping praise on Nvidia while smashing AMD. You know full well, Tom's, that FRAPS is now a poor reference for frame stuttering, and this is an attempt by you to say "yes, we know it is," while insisting you were still right to bash AMD products in the first place. This article reminds me of Garry Kasparov and IBM's Deep Blue.


    The 660 Ti, according to Tom's, matches up nicely with the Radeon 7870. If Tom's is biased against AMD, then why do they have a better opinion of AMD's cards than you do?

    Nobody knew just how far off (or on) FRAPS was. This article, and similar ones done on other sites, demonstrates a new method that is more accurate than FRAPS and compares FRAPS against it on AMD hardware.

    This article in no way bashes AMD.
  3. iam2thecrowe said:
    bystander said:
    You should read some of the other versions of this article. Here are two using the 7970 and 680 to compare SLI and Crossfire:
    http://www.pcper.com/reviews/Graph [...] nce-Tes-12
    http://techreport.com/review/24553 [...] e-tools/11
    Here is another with an interview with an AMD driver representative:
    http://www.anandtech.com/show/6857 [...] ring-issue
    There is nothing biased about it and it isn't about comparing a particular card, but about comparing SLI vs Crossfire.

    After reading these and comparing them to this Tom's article, it seems like either Tom's is lazy and can't be bothered testing to the extent they should to deliver an unbiased opinion, or they have indeed become a pro-AMD site and are simply ignoring certain elements of testing to favor AMD hardware.


    Towards the beginning of the article, they mentioned that most of their data had to be thrown out because their motherboard was interfering with the results somehow. So they plan to deliver a follow-up article. Hopefully they give a more in-depth review in the follow-up.
  4. blazorthon said:
    They've known about this for a long time, as was pointed out several months ago when Tom's asked AMD and Nvidia about this stuff. They've been working on it too. None of what you said is accurate.

    None of what you said is accurate.
    First of all, AMD wasn't aware they had a problem, and admitted as much, even after years of user complaints and a year and a half after Scott Wasson @ TechReport started asking them. It is all spelled out here, specifically on page 5:
    http://www.anandtech.com/show/6857/amd-stuttering-issues-driver-roadmap-fraps
    Secondly, it takes a lot more than just one review site to convince a manufacturer that it has a problem. Tom's has been the last of several sites to adopt frame time latencies as a metric, and IMO has applied them rather poorly.
    And finally, I suggest that before you tell someone they don't know what they are talking about, you have your facts straight first; your "god of the article comment section" crown just might slip.
  5. thanny said:
    There's a huge conceptual problem with this entire test. Everything smaller than the entire screen is a "runt" frame, and is the price you pay for not using vsync. The "practical" frame rate is always no larger than the screen refresh rate.

    Turn on vsync, and every frame is complete, which makes the rendering rate identical to the practical rate.



    They made it very clear how they are defining a runt frame, and it isn't "everything smaller than a full screen." They are defining it as any frame smaller than 22 scan lines high.
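As a quick illustration of that filter (made-up numbers and a simplified threshold check; the real FCAT analysis is a set of Perl scripts run over a colored capture overlay):

```python
# Illustrative runt-frame filter, not the actual FCAT tooling.
# frame_heights: scan lines each rendered frame occupied on screen
# during one second of captured video (hypothetical numbers below).
RUNT_THRESHOLD = 22  # frames shorter than this count as runts

def observed_fps(frame_heights, threshold=RUNT_THRESHOLD):
    """Return (frames rendered, frames left after discarding runts)."""
    full = [h for h in frame_heights if h >= threshold]
    return len(frame_heights), len(full)

# An alternating full/runt pattern, as described for some CrossFire captures:
heights = [540, 5, 540, 8, 540, 3, 540, 6]
print(observed_fps(heights))  # prints: (8, 4)
```

Eight frames were rendered, but only four contribute meaningfully to what you see, which is the gap between "rendered" and "observed" FPS the article is getting at.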
  6. iam2thecrowe said:
    bystander said:
    You should read some of the other versions of this article. Here are two using the 7970 and 680 to compare SLI and Crossfire:
    http://www.pcper.com/reviews/Graph [...] nce-Tes-12
    http://techreport.com/review/24553 [...] e-tools/11
    Here is another with an interview with an AMD driver representative:
    http://www.anandtech.com/show/6857 [...] ring-issue
    There is nothing biased about it and it isn't about comparing a particular card, but about comparing SLI vs Crossfire.

    After reading these and comparing them to this Tom's article, it seems like either Tom's is lazy and can't be bothered testing to the extent they should to deliver an unbiased opinion, or they have indeed become a pro-AMD site and are simply ignoring certain elements of testing to favor AMD hardware.


    To be fair, since Tom's didn't mention noticing anything as bad as some of what PCPer's article shows when they were testing the games, what PCPer's article shows may be very exaggerated. I highly doubt that Tom's is being biased.
  7. Nice article. What I would like to know is whether people should opt for Nvidia over AMD just because of frame latency and stuttering. And does taking these two issues, which plague AMD more than Nvidia, into account make it possible or necessary to revise the best graphics card for the month charts and give the green team the crown? Because from what I see in the forums, people would rather pay a bit more for a better gaming experience coupled with better dual-GPU scaling and fewer issues. Please, someone let us know!
  8. ojas said:
    Customary links:
    http://www.anandtech.com/show/6862 [...] ing-part-1


    Dude, FCAT is an Nvidia-made tool given to Anand for benchmarking, so you can expect Nvidia winning through this tool some way or the other.
  9. Just a thought. Imagine a fast-moving scene where both AMD and Nvidia are showing 120 FPS. Nvidia displays half of a rendered frame in each displayed frame. AMD, after rendering, smartly drops the first frame and displays only the latest full frame. Who do you think will have a smoother, cleaner image?

    Nvidia will have screen tearing, while AMD will have smooth playback. The reason I am saying this is that AMD should not blindly follow this latest development. They should carefully analyze the situation, give the best user experience, and not fall for the Nvidia trap.
  10. BigMack70 said:
    There are a lot of people in here who seem to be trying to pre-emptively come to AMD's defense when neither this article nor really anyone in the comments is actually attacking AMD.

    I don't get it.

    I think you're right Mack. There does tend to be some defensiveness when it comes to discussing people's preferred video card vendors. A psychologist could have a field day.
  11. Souv_90 said:
    Dude, FCAT is an Nvidia-made tool given to Anand for benchmarking, so you can expect Nvidia winning through this tool some way or the other.

    As stated in these comments and explained in the TechReport article at the bottom of this page:
    http://techreport.com/review/24553/inside-the-second-with-nvidia-frame-capture-tools/3
    With that said, it's still extremely cool that Nvidia is enabling this sort of analysis of its products. The firm says its FCAT tools will be freely distributable and modifiable, and at least the Perl script portions will necessarily be open-source (since Perl is an interpreted language). Nvidia says it hopes portions of the FCAT suite, such as the colored overlay, will be incorporated into third-party applications. We'd like to see Fraps incorporate the overlay, since using it alongside the FCAT overlay is sometimes problematic.

    The hardware, and apparently the Perl script, is available to anyone who can afford it. Shipping a script in a well-known programming language with any hardware-ID prejudice built in would be spotted and laughed at by any high school student.
  12. Hz60 said:
    Just a thought. Imagine a fast-moving scene where both AMD and Nvidia are showing 120 FPS. Nvidia displays half of a rendered frame in each displayed frame. AMD, after rendering, smartly drops the first frame and displays only the latest full frame. Who do you think will have a smoother, cleaner image? Nvidia will have screen tearing, while AMD will have smooth playback. The reason I am saying this is that AMD should not blindly follow this latest development. They should carefully analyze the situation, give the best user experience, and not fall for the Nvidia trap.

    Tearing, and where frames end up, is completely random and up to luck. If you want only whole, individual frames, you enable v-sync. What Nvidia brought to our attention is that they make an attempt to evenly space out their frames in multi-GPU configurations, while AMD does not.

    AMD supports the data and tools. They are not crying foul. They even took their single-GPU stuttering problem and used it as an advantage, as much of the stuttering was a result of wasted time during the rendering process.

    They are now looking to fix the issue by offering a setting in future drivers that will allow you to evenly space out frames. They are aiming to give it to us in July.
  13. Sorry, with Nvidia's track record, I question any tool made by them to be biased. Seriously.
  14. ojas said:
    .....Adaptive Vsync doesn't prevent tearing below the refresh rate. What it does is, below the refresh rate, Vsync gets turned off so that frame rates don't fall to the next factor of the refresh rate. It doesn't prevent tearing, though it prevents stuttering.....


    Nice info. It's kind of confusing, since they used "Vsync" in "Adaptive V-sync".......
    (marketing tricks, maybe? a good one)
  15. It's kind of sad that some people can't accept something that even AMD themselves accept as accurate.

    You should read the first page of the article, starting at the top.

    Nvidia may be motivated to expose the issue, as it helps their multi-GPU drivers look good, but the hardware and software is legitimate.
  16. bystander said:
    ....They are now looking to fix the issue by offering a setting in future drivers that will allow you to evenly space out frames. They are aiming to give it to us in July.....


    If it's true, then I guess we the users/consumers are the ones winning in the end... :D.
  17. Souv_90 said:
    If it's true, then I guess we the users/consumers are the ones winning in the end... :D.

    http://www.anandtech.com/show/6857/amd-stuttering-issues-driver-roadmap-fraps/6

    Quote:
    In a typical AMD move, AMD will ultimately be leaving this up to the user. In their July driver AMD will be introducing a multi-GPU stuttering control that will let the user pick between an emphasis on latency, or an emphasis on frame pacing. The former of course being their current method, while the latter would be their new method to reduce micro-stuttering at the cost of latency.


    Given that the frame rendering process should be fairly similar from one frame to the next, I'm betting their frame-pacing option will incur only a small latency penalty from time to time; once you get the frames spaced out at the start, there should not be a lot of corrections afterward.
  18. cobra5000 said:
    Sorry, with Nvidia's track record, I question any tool made by them to be biased. Seriously.

    You can go to the Anand GTX Titan forum, where I posted a page full of Nvidia cheating links going back to the heavy 3DMark skew in 2003.
    Who knows what more they have done that we don't know about, given their track record of notorious acts, benchmark skewing, bribing Crytek, etc. The list goes on and on.
    Cheating is in Nvidia's da** blood.
  19. bystander said:
    Towards the beginning of the article, they mentioned that most of their data had to be thrown out because their motherboard was interfering with the results somehow. So they plan to deliver a follow-up article. Hopefully they give a more in-depth review in the follow-up.

    Good point, I concur. Although I'm still going to be watching TechReport's website in particular; they seem to be exploring everything more thoroughly than any other website, and they do have a head start as one of the first sites to introduce frame latency testing.
  20. For me personally, this entire article is about micro-stutter, although Don has diligently managed to avoid using the term anywhere in his article. I don't blame him; the best definition I've seen thus far for micro-stutter is "you'll know it when you see it," and the term has become laden with controversy. No wonder he's reticent to use such an ambiguous term. As a gamer, I've experienced and battled the glitch a few times myself, and I know how badly it can ruin your gaming experience.

    I'd suggest another search of the data, those fat spreadsheets that have been accrued from these tests. I suspect either of the following searches will reveal the micro-stutter phenomenon:
    1) A search for three or more consecutive frames that shows exactly the same image.
    2) A search for any one frame that lingers for longer than 200% of the average frame time.
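Both searches are easy to sketch against an exported spreadsheet. A minimal illustration with hypothetical inputs (per-frame display times in milliseconds, plus some per-frame image fingerprint such as a checksum, so identical frames can be matched; this is not any real FCAT or FRAPS output format):

```python
# Sketch of the two micro-stutter searches suggested above.

def repeated_frames(frame_ids, run_length=3):
    """Start indices of runs where the same image appears run_length+ times."""
    hits, run = [], 1
    for i in range(1, len(frame_ids)):
        run = run + 1 if frame_ids[i] == frame_ids[i - 1] else 1
        if run == run_length:
            hits.append(i - run_length + 1)
    return hits

def lingering_frames(frame_times, factor=2.0):
    """Indices of frames shown longer than factor x the average frame time."""
    avg = sum(frame_times) / len(frame_times)
    return [i for i, t in enumerate(frame_times) if t > factor * avg]

ids = [1, 2, 3, 3, 3, 4, 5]                         # image 3 repeats three times
times = [16.7, 16.7, 16.7, 50.0, 16.7, 16.7, 16.7]  # one frame lingers
print(repeated_frames(ids))    # prints: [2]
print(lingering_frames(times)) # prints: [3]
```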
  21. Where does LucidLogix's VirtuHyperformance and VirtuPro et al figure in this new scenario ?
  22. Fokissed said:
    ubercake said:
    No tearing on 120Hz monitors until you get over 120fps, and even then tearing is no longer perceivable until you hit the mid 400s. Also, that is not the point of the article. This is a great article. It's consistent with others I've read on the subject, and with the information AMD itself is supporting. I look forward to seeing what you do with the tweaks of the FCAT software to further define what equates to a "runt" frame. Seems like that could make an even greater difference. Defining a runt frame seems somewhat subjective; couldn't many more than 21 scan lines define a runt, and wouldn't it depend somewhat on the resolution?

    You can (and will) get a tear at any fps while vsync is not enabled. Once a frame has finished drawing and the buffer is flipped, the monitor's scan picks up the new frame mid-refresh. The only exception is Vsync.


    I can see it in the case where framerates exceed the refresh rate of the monitor, because there is something new to draw in the buffer before the refresh interval is ready for the frame. But in the case where framerates are lower than the monitor's refresh, if there's nothing new to draw to the screen during a particular refresh interval, wouldn't the monitor just keep the current frame on the screen and wait to draw the next frame when the buffer makes it available (or "flips") and the monitor's refresh interval occurs? This seems consistent with why tearing is uncommonly seen at framerates lower than monitor refresh rates.
  23. loved that article, love tom for the focus on user experience :)
  24. Btw, I've been looking at minimum frame rates since I can remember... never cared for average frames.
    It was only about 5 years ago when I started to calculate a ratio between the minimum frame rate and the difference between the max and minimum frame rates.

    Basically, if the minimum frame rate was lower than the difference between the max and min frame rates, the game would look unplayable :D.

    Simple, and a bit raw, but it works for me.
  25. bystander said:
    I think the point was more about showing a new testing methodology and getting some general SLI vs CF comparisons than comparing specific cards, though I think you may have looked up the 7850, as $195 is much lower than anything I can find.


    When I checked yesterday, Newegg had it for $195 after rebate... the cheapest 7870 today is $205:
    http://www.newegg.com/Product/Product.aspx?Item=N82E16814202025

    The 660 Ti is still way up there, though it is cheaper today at $255: http://www.newegg.com/Product/Product.aspx?Item=N82E16814162120

    The point is that they make AMD look bad by mismatching cards from different tiers. That is my issue.
    AMD is down and they just kick it in the balls. I mean, Nvidia provides their software, and for sure they don't want AMD to look good, and that is why we see the 660 Ti vs. the 7870. Simple marketing strategy.
  26. cats_Paw said:
    Btw, I've been looking at minimum frame rates since I can remember... never cared for average frames.
    It was only about 5 years ago when I started to calculate a ratio between the minimum frame rate and the difference between the max and minimum frame rates.

    Basically, if the minimum frame rate was lower than the difference between the max and min frame rates, the game would look unplayable :D.

    Simple, and a bit raw, but it works for me.


    Max framerate 180 - Min framerate 80 = Difference 100
    Min framerate 80 < Difference 100 = game unplayable?

    Just checking.
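That arithmetic check can be written down as a one-liner (purely the poster's rule of thumb, not an established metric):

```python
# The poster's heuristic: a game "looks unplayable" when the minimum
# frame rate is lower than the spread between max and min frame rates.

def looks_unplayable(min_fps, max_fps):
    return min_fps < (max_fps - min_fps)

print(looks_unplayable(80, 180))  # 80 < 100 -> prints: True
print(looks_unplayable(50, 70))   # 50 < 20  -> prints: False
```

It is effectively a crude variance test: the wider the swing relative to the floor, the more jarring the experience, which is roughly what frame-time analysis measures more rigorously.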
  27. bystander said:
    It's kind of sad that some people can't accept something that even AMD themselves accept as accurate.

    You should read the first page of the article, starting at the top.

    Nvidia may be motivated to expose the issue, as it helps their multi-GPU drivers look good, but the hardware and software is legitimate.

    This. Read the articles; considering how and where this tool works, it is vendor-neutral. Right now, it makes clear that two-card setups should NOT be AMD, so yes nVidia's marketing guys are drooling and AMD's are soiling themselves, but the end result should be that engineers from both companies will be able to improve their products.
  28. Does this tool work on one card? Perhaps we should bench the smoothness of a single GPU from each vendor and see the difference.

    I am not in any way going to get multi-GPU.
  29. ubercake said:
    I can see it in the case where framerates exceed the refresh rate of the monitor, because there is something new to draw in the buffer before the refresh interval is ready for the frame. But in the case where framerates are lower than the monitor's refresh, if there's nothing new to draw to the screen during a particular refresh interval, wouldn't the monitor just keep the current frame on the screen and wait to draw the next frame when the buffer makes it available (or "flips") and the monitor's refresh interval occurs? This seems consistent with why tearing is uncommonly seen at framerates lower than monitor refresh rates.

    Because there isn't something new to draw every refresh, you can't get tearing as often. If the video card flips the image midway through a refresh, you see a tear, but the next refresh will show a full image until the video card flips the image again.

    Btw, don't think the term "flip" means it is instantaneous either; it is a copy operation, but a very fast one.

    So yeah, when your FPS is lower, you see less tearing, but there is still tearing.
  30. Onus said:
    bystander said:
    It's kind of sad that some people can't accept something that even AMD themselves accept as accurate.

    You should read the first page of the article, starting at the top.

    Nvidia may be motivated to expose the issue, as it helps their multi-GPU drivers look good, but the hardware and software is legitimate.

    This. Read the articles; considering how and where this tool works, it is vendor-neutral. Right now, it makes clear that two-card setups should NOT be AMD, so yes nVidia's marketing guys are drooling and AMD's are soiling themselves, but the end result should be that engineers from both companies will be able to improve their products.


    bystander said:
    ubercake said:
    I can see it in the case where framerates exceed the refresh rate of the monitor, because there is something new to draw in the buffer before the refresh interval is ready for the frame. But in the case where framerates are lower than the monitor's refresh, if there's nothing new to draw to the screen during a particular refresh interval, wouldn't the monitor just keep the current frame on the screen and wait to draw the next frame when the buffer makes it available (or "flips") and the monitor's refresh interval occurs? This seems consistent with why tearing is uncommonly seen at framerates lower than monitor refresh rates.

    Because there isn't something new to draw every refresh, you can't get tearing as often. If the video card flips the image midway through a refresh, you see a tear, but the next refresh will show a full image until the video card flips the image again.

    Btw, don't think the term "flip" means it is instantaneous either; it is a copy operation, but a very fast one.

    So yeah, when your FPS is lower, you see less tearing, but there is still tearing.


    I thought the flip was the point when buffer 0 "becomes" buffer 1 and buffer 1 "becomes" buffer 0 (given most modern cards use double-buffering by default)? Back buffer is renamed front and front renamed back?

    With framerates lower than the refresh rate, it would seem data are already fully written to the back buffer before it "becomes" the front buffer?

    I found an article that explains this pretty well:
    http://www.anandtech.com/show/2794/2
  31. ubercake said:
    I thought the flip was the point when buffer 0 "becomes" buffer 1 and buffer 1 "becomes" buffer 0 (given most modern cards use double-buffering by default)?

    That depends on what you are labeling buffer 0. The display buffer, at least a long time ago, does not change, and I'm pretty sure I've read recently that it still does not. However, the buffers used for rendering can. In a double-buffering system, where one buffer is for the display and the other is used to render, a flip is actually a copy. But in triple buffering, where two buffers are used by the GPU and one by the display, a flip from one GPU rendering buffer to the next is simply a matter of pointing to the new buffer.

    I hope that was clear. Of course, it is possible I'm working off ancient knowledge, as I haven't done this type of coding in many years, but there would be a few obstacles to overcome and some question marks (if you flipped to a new buffer by changing the pointer, why would the display not continue updating from its original buffer, preventing tearing? And how do they stop the refresh of the screen and reset the pointer to the correct offset in the new buffer?).

    EDIT: I have read three different articles, with two different answers. One says it just swaps what is considered the front buffer; two say it copies the back buffer to the front buffer. Based on my ancient experience, it is the latter, but I'm not certain.
  32. 17seconds said:


    Very interesting read. While it doesn't change my mind about buying my HD 7950, I also wasn't planning on buying a second for Xfire. I will say that with the changes I'm expecting to come around for both AMD and Nvidia in order to fix or simply refine this issue, I may be looking for a new card already with the next gen.
  33. bystander said:
    ubercake said:
    I thought the flip was the point when buffer 0 "becomes" buffer 1 and buffer 1 "becomes" buffer 0 (given most modern cards use double-buffering by default)?

    That depends on what you are labeling buffer 0. The display buffer, at least a long time ago, does not change, and I'm pretty sure I've read recently that it still does not. However, the buffers used for rendering can. In a double-buffering system, where one buffer is for the display and the other is used to render, a flip is actually a copy. But in triple buffering, where two buffers are used by the GPU and one by the display, a flip from one GPU rendering buffer to the next is simply a matter of pointing to the new buffer.

    I hope that was clear. Of course, it is possible I'm working off ancient knowledge, as I haven't done this type of coding in many years, but there would be a few obstacles to overcome and some question marks (if you flipped to a new buffer by changing the pointer, why would the display not continue updating from its original buffer, preventing tearing? And how do they stop the refresh of the screen and reset the pointer to the correct offset in the new buffer?).

    EDIT: I have read three different articles, with two different answers. One says it just swaps what is considered the front buffer; two say it copies the back buffer to the front buffer. Based on my ancient experience, it is the latter, but I'm not certain.


    Thanks again for the info. I updated my post with a link that does a good job of explaining things within the context of triple buffering. It's really good if you haven't seen it yet.
  34. ubercake said:
    Thanks again for the info. I updated my post with a link that does a good job of explaining things within the context of triple buffering. It's really good if you haven't seen it yet.

    That is the one that says it swaps, while I read two others that say it copies. From my experience, it is the latter, but it's been long enough that things could have changed.

    I could easily see AnandTech getting it wrong, as regardless of whether it is a copy or a swap, we call it a swap.

    Now I've found a couple more, one for each. I wonder if there are alternative ways of doing this, though I'd have to assume the swap method would be preferred if it works.
  35. Don! I just checked Tech Report; they compared FRAPS and FCAT for Nvidia as well. FRAPS is over-reporting for Nvidia too, by almost the same amount as it is for AMD.

    http://techreport.com/r.x/frame-capture/skyrim-fps.gif
  36. I have always been of the impression it is a swap as well.
  37. ojas said:
    Don! I just checked Tech Report; they compared FRAPS and FCAT for Nvidia as well. FRAPS is over-reporting for Nvidia too, by almost the same amount as it is for AMD.

    http://techreport.com/r.x/frame-capture/skyrim-fps.gif

    The over-reporting makes sense, as FRAPS gets its data at the start of the pipeline. It does not know that some frames make it to the screen only partially, or not at all.
  38. bystander said:
    ubercake said:
    Thanks again for the info. I updated my post with a link that does a good job of explaining things within the context of triple buffering. It's really good if you haven't seen it yet.

    That is the one that says it swaps, while I read two others that say it copies. From my experience, it is the latter, but it's been long enough that things could have changed.

    I could easily see AnandTech getting it wrong, as regardless of whether it is a copy or a swap, we call it a swap.

    Now I've found a couple more, one for each. I wonder if there are alternative ways of doing this, though I'd have to assume the swap method would be preferred if it works.


    The swap seems to make sense since no data has to move.
  39. nukemaster said:
    The over-reporting makes sense, as FRAPS gets its data at the start of the pipeline. It does not know that some frames make it to the screen only partially, or not at all.

    True, but go to the second or third page and read Don's comment in reply to mine. He says:
    1. We don't know if it does for Nvidia.
    2. It shouldn't, because FCAT doesn't show variance.

    Look at the TR chart again. The difference between single GPUs using either FCAT or FRAPS gives the same result, i.e. 1.8 fps. Both gain 2 fps due to FRAPS.

    Almost the same with SLI/CF; the difference is 3.9 vs 3.5 fps, which doesn't change the order.

    What I'm saying is, unless there's evidence to the contrary, the difference between FCAT "hardware" and FRAPS might amount to nothing really when you're looking at average frame rates.

    It's only when FCAT filters the raw data that it actually makes a difference. So I think reviewers should continue to provide FRAPS data, as it'll serve as a comparison with FCAT's hardware results for everyone without FCAT (which is most of us, really).
  40. I think FRAPS will remain a usable but extremely coarse measurement tool. Its value for comparisons will really only be for single cards, possibly using the same drivers (i.e. one AMD card vs. another, or one nVidia vs. another). Since it does not reflect on what takes place further down the pipeline, it will be essentially useless for AMD vs. nVidia comparisons (except in extreme cases), and useless for multi-GPU setups.
  41. ubercake said:
    The swap seems to make sense since no data has to move.

    I won't argue that it would be the fastest, but it doesn't necessarily make the most sense, because you are now moving the monitor's read location in the middle of updating the screen. That said, I suppose Windows could do that, but I don't know how disruptive it would be to the monitor.
  42. This may be the best evidence it is a swap:
    http://gameprogrammingpatterns.com/double-buffer.html

    Of course it is possible there is another layer not talked about, where the one pushed to the front buffer gets copied to a buffer the monitor uses. It doesn't really matter, I guess, not for what we are using it for.
  43. bystander said:
    This may be the best evidence it is a swap:
    http://gameprogrammingpatterns.com/double-buffer.html

    Of course it is possible there is another layer not talked about, where the one pushed to the front buffer gets copied to a buffer the monitor uses. It doesn't really matter, I guess, not for what we are using it for.


    It seems like the swap would just be between references to the frame data. For example, buffer 0 (back) is renamed buffer 1 (front), and the monitor picks up buffer 1's data on refresh, while the old front buffer is renamed buffer 0 and is now written to as the back buffer. I'm sure this is simplified somewhat, as buffers 0 and 1 cannot swap reference names simultaneously; I'm thinking the back buffer may change names while the front buffer keeps its name.

    This way, the monitor always goes after the front buffer's data, so it wouldn't have to reference anything but the front buffer.

    It would be interesting if Tom's could do something on this.
  44. @ubercake

    You should read this: http://en.wikipedia.org/wiki/Multiple_buffering

    This one explains two different methods: one that changes the pointer, and one that copies. One thing that caught my attention was that the flip method can only be done during the vertical retrace, meaning it is only an option if you use v-sync. The copy method has the advantage of being able to happen during the refresh.

    This works around the issue I was concerned about with flipping during a refresh, as the flip method (as explained here, at least) only works during the vertical retrace (which v-sync requires).

    Look under the "Double buffering in computers" and "Page flipping" headings; they explain things pretty well.
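The two methods from that page can be modeled in a few lines, with plain lists standing in for framebuffer memory (a toy illustration, not how a real driver manages VRAM):

```python
# Toy model of the two buffer-update methods: "flip" exchanges which
# buffer the display reads (an O(1) reference swap, only safe during
# vertical retrace), while "copy" blits the back buffer's contents
# into the front buffer (O(n), but possible mid-refresh -- which is
# when a tear can appear).

class DoubleBuffer:
    def __init__(self, size):
        self.front = [0] * size  # what the display scans out
        self.back = [0] * size   # what the renderer draws into

    def flip(self):
        # Page flip: just exchange the references.
        self.front, self.back = self.back, self.front

    def copy(self):
        # Blit: overwrite the front buffer's contents in place.
        self.front[:] = self.back

db = DoubleBuffer(4)
db.back[:] = [1, 2, 3, 4]  # renderer finishes a frame
db.flip()
print(db.front)  # prints: [1, 2, 3, 4]
```

Either way the renderer then draws the next frame into `back`; the difference is only whether the display's read location moves (flip) or its contents are rewritten (copy).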
  45. bystander said:
    I'm not really sure it matters. The focus of the article is how Crossfire and SLI are performing, and it does a good job of showing that, though pcper.com has a more in-depth picture, and THG does plan to give us more soon. It sounds like they had a system setup problem that caused them to lose a lot of time and data.

    And who is to say that more powerful Radeons will NOT do better IN THIS EXACT test?

    Comparing JUST two Radeon cards with JUST two Nvidia cards does NOT prove anything when it comes to an overall comparison between Crossfire and SLI. People just like to get all excited and argue about something they really want to argue about.
  46. The Anandtech article indicates that even the AMD engineers have realized there is a problem (they'd never looked for it before), and they are taking steps to fix it. This strongly suggests to me that, at least for now, it is indeed a universal problem with Crossfire.
  47. This was one of the most interesting parts of the PC Perspective article. Essentially, when they cleaned up the runt frames, the resulting Observed FPS showed NO benefit from adding a second 7970 GHz in Crossfire in this example.
    AMD CrossFire configurations have a tendency to produce a lot of runt frames, and in many cases nearly perfectly in an alternating pattern. Not only does this mean that frame time variance will be high, but it also tells me that the value of performance gained by adding a second GPU is completely useless in this case. Obviously the story would become then, "In Battlefield 3, does it even make sense to use a CrossFire configuration?" My answer based on the below graph would be no.
    http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Dissected-Full-Details-Capture-based-Graphics-Performance-Test-3

  48. Onus said:
    The Anandtech article indicates that even the AMD engineers have realized there is a problem (they'd never looked for it before), and they are taking steps to fix it. This strongly suggests to me that, at least for now, it is indeed a universal problem with Crossfire.


    Which makes me wonder if the Battlefield 4 demo that AMD won't stop tooting their HD 7990-shaped horn about would run at the same framerate with a single HD 7970...

    Or maybe just slightly better, because they definitely have Vsync on in the video. Something like this:

  49. I think we can be sure that Battlefield 4 will be the subject of a lot of reviews once it comes out. We should know soon enough what difference Crossfire makes, if any.
  50. Cpu NumberOne said:
    And who is to say that more powerful Radeons will NOT do better IN THIS EXACT test?

    Comparing JUST two Radeon cards with JUST two Nvidia cards does NOT prove anything when it comes to an overall comparison between Crossfire and SLI. People just like to get all excited and argue about something they really want to argue about.


    TechReport: http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Dissected-Full-Details-Capture-based-Graphics-Performance-Testin?page=2#comments

    Pcper: http://www.pcper.com/reviews/Graphics-Cards/Frame-Rating-Dissected-Full-Details-Capture-based-Graphics-Performance-Testin?page=2#comments