It's typically more difficult to overclock two GPUs in CrossFire or SLI aggressively than it is to run a single processor at elevated frequencies. After all, you're always limited by the peak clock rate of whichever card is slower.
I was surprised, then, to get our Mars 760 card running with a 185 MHz GPU Boost clock rate offset and a 742 MT/s memory data rate boost. Even more impressive, the reported 1257 MHz Boost clock setting was exceeded in both of the apps I experimented with (FurMark and Metro: Last Light), where I saw 1306 MHz under heavy load. Achieving those numbers required the Asus GPU Tweak utility, a 12 mV offset on the GPU, a 105% power target, and a fan speed manually dialed in to 80%.
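For what it's worth, those figures are internally consistent: the reported 1257 MHz Boost setting is just the card's stock Boost clock plus the 185 MHz offset, and GPU Boost then found another 49 MHz of headroom under load. A quick sanity check, using only the numbers in the paragraph above:

```python
# Sanity-check the overclocking arithmetic from the article's figures.
offset_mhz = 185            # GPU Boost clock offset applied in GPU Tweak
reported_boost_mhz = 1257   # Boost clock setting reported after the offset
observed_mhz = 1306         # clock rate seen under load in FurMark / Metro

# The implied stock Boost clock, and the extra headroom GPU Boost found:
stock_boost_mhz = reported_boost_mhz - offset_mhz
boost_headroom_mhz = observed_mhz - reported_boost_mhz

print(f"Implied stock Boost clock: {stock_boost_mhz} MHz")          # 1072 MHz
print(f"GPU Boost headroom under load: {boost_headroom_mhz} MHz")   # 49 MHz
```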
Now, there is absolutely such a thing as vendors cherry-picking their best GPUs and memory for review samples. It happens all of the time, and we totally get the desire to put a best foot forward. I'm not saying that's what's happening here. But I would also caution our readers to consider our results knowing that we're benchmarking a review sample, and not a retail card. Your mileage may vary.

The result of our overclock is significantly more memory bandwidth than a stock GeForce GTX 690, with comparable shader processing power.
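To put rough numbers on that claim: peak GDDR5 bandwidth is simply the effective data rate times the bus width. The sketch below assumes a 256-bit memory interface per GK104 and a 6,008 MT/s stock data rate for both cards; those two figures are typical GTX 690/760-class specifications, not numbers stated above.

```python
# Back-of-the-envelope GDDR5 bandwidth comparison, per GPU.
# Assumptions (not from the article): 256-bit bus per GK104 and a
# 6,008 MT/s stock memory data rate for both cards.

def gddr5_bandwidth_gbs(data_rate_mts: float, bus_width_bits: int = 256) -> float:
    """Peak memory bandwidth in GB/s for one GPU's memory interface."""
    return data_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

gtx690_per_gpu = gddr5_bandwidth_gbs(6008)            # stock data rate
mars760_oc_per_gpu = gddr5_bandwidth_gbs(6008 + 742)  # with our +742 MT/s overclock

print(f"GTX 690 (stock):  {gtx690_per_gpu:.1f} GB/s per GPU")    # ~192.3 GB/s
print(f"Mars 760 (OC'd):  {mars760_oc_per_gpu:.1f} GB/s per GPU")  # ~216.0 GB/s
```

Under those assumptions, the overclock works out to roughly 12% more memory bandwidth per GPU than a stock GTX 690.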


At least in this one application, the overclocked Asus Mars 760 achieves a frame rate almost identical to the vastly more expensive GeForce GTX 690. And again, while we obviously can't guarantee that retail boards will overclock as well as our sample, Asus has to do some sort of binning anyway to find GK104 processors able to stay under the card's thermal limit. From what we've seen and heard in the real world, the Mars 760 is earning a good reputation for overclocking well as a result.
- Two GK104s On A Card For $650
- The Mars 760 Bundle And Software
- Test System And Benchmarks
- Results: Battlefield 4, 2560x1440
- Results: Assassin's Creed IV, 2560x1440
- Results: Metro: Last Light, 2560x1440
- Results: BioShock Infinite, 2560x1440
- Results: Grid 2, 2560x1440
- Results: Battlefield 4, 5760x1080
- Results: Assassin's Creed IV, 5760x1080
- Results: Metro: Last Light, 5760x1080
- Results: BioShock Infinite, 5760x1080
- Results: Grid 2, 5760x1080
- Overclocking
- Power, Temperature, And Noise Benchmarks
- Asus Mars 760: We Dig The Innovation, But There Are Smarter High-End Buys
That's why we included an overclocked Titan to represent 780 Ti performance.
Read the article. The memory was clocked identically to the 780 Ti's, and the core overclock was calculated to simulate it as closely as possible.
It's a valid representation. I see some of you don't agree and you certainly reserve the right to do that, but I'm quite satisfied with the results.
780 is not the same price point. The 780 Ti is, and we overclocked a Titan to simulate as per above.
Really?
Thanks, I stand corrected, and the 770, 780, and 780 Ti are what I would like to see compared to the Mars.
My qualm with using a Titan for comparison is 1) the Titan costs $300 more than the 780 Ti, and 2) the Titan is slower.
I usually read these types of articles from the perspective of "if I were going to purchase this Mars 760 or a comparable card at the $700 price point, what would I buy?"
So I wouldn't buy a Titan for $300 more and overclock it to try to get 780 Ti performance out of it. I would want to see how an overclocked 780 Ti compares to an overclocked Mars 760, then make a choice from that.
But, from strictly a performance consideration, I understand where you are coming from.
Those of us who don't get the Nvidia sample cards to play with have to consider the price/performance factor.
The point is, it's overclocked to *match* the 780 Ti.
We tested it at stock, ***and then again overclocked to represent the 780 Ti***.
It goes over this in detail in the article. Check the test system page.
You are paying for the complexity of putting two GPUs and the SLI bridge on one card, together with the larger HSF that requires; it shouldn't be that difficult to work that out, surely?
Plus, stability is always worse on dual-GPU cards.
Not my thing