Maxwell Goes Mobile: First GeForce GTX 970M Benchmarks
Nvidia introduces its second-gen Maxwell architecture to mobile, where the efficient GM204 graphics processor has a chance to show its core advantages.
Earlier this year, Nvidia introduced its first Maxwell-based GPUs for the mobile space. Today, the company follows up with higher-end graphics processors employing what it’s calling a second-generation iteration of the architecture. The GeForce GTX 970M and 980M are the result. Unfortunately, our test samples arrived too late for thorough performance analysis, so we're delivering a preliminary report with as much benchmark data as possible. We’ll follow up with more detailed coverage once we’ve run through our full suite.
Both the 970M and 980M utilize the same GM204 die found on Nvidia’s desktop GeForce GTX 970 and 980 graphics cards. Those add-in boards are already known for their excellent power efficiency, compelling performance, and reasonable prices—all characteristics we’re certain system builders are eager to add to their gaming notebooks. Let's have a look at how the specifications compare between the desktop and new mobile products based on GM204:
| Products | GeForce GTX 970M | GeForce GTX 980M | GeForce GTX 970 | GeForce GTX 980 |
| :-- | :-- | :-- | :-- | :-- |
| Pricing | - | - | - | - |
| GPU | GM204 (Maxwell) | GM204 (Maxwell) | GM204 (Maxwell) | GM204 (Maxwell) |
| Process | 28nm | 28nm | 28nm | 28nm |
| Shader Units | 1280 | 1536 | 1664 | 2048 |
| Texture Units | 80 | 96 | 104 | 128 |
| ROPs | 48 | 64 | 64 | 64 |
| Core Clock | 924 MHz | 1038 MHz | 1050 MHz | 1126 MHz |
| Memory Clock | 1250 MHz GDDR5 | 1250 MHz GDDR5 | 1750 MHz GDDR5 | 1750 MHz GDDR5 |
| Memory Bus | 192-bit | 256-bit | 256-bit | 256-bit |
| Memory Bandwidth | 120 GB/s | 160 GB/s | 224 GB/s | 224 GB/s |
| Memory Capacity | 3GB | 4GB | 4GB | 4GB |
| DirectX, Shader, OpenGL | 12/?/? | 12/?/? | 12/?/? | 12/?/? |
| Max. TDP | 95 Watts (estimated) | 125 Watts (estimated) | 145 Watts | 165 Watts |
| Aux. Power Connector(s) | N/A | N/A | 2x Six-Pin PCIe | 2x Six-Pin PCIe |
| Min. Power Supply | N/A | N/A | 500 Watt | 500 Watt |
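The bandwidth figures in the table follow directly from bus width and memory data rate. Here's a quick illustrative sketch (GDDR5 transfers data at four times the command clock listed in spec tables); the function name is ours, not Nvidia's:

```python
# GDDR5's effective per-pin data rate is 4x the listed command clock.
def gddr5_bandwidth_gbs(bus_width_bits, mem_clock_mhz):
    data_rate_gbps = mem_clock_mhz * 4 / 1000       # per-pin rate in Gb/s
    return bus_width_bits / 8 * data_rate_gbps      # bus width in bytes x rate

print(gddr5_bandwidth_gbs(256, 1750))  # desktop GTX 970/980: 224.0 GB/s
```

Run the same formula against the mobile parts' 192- and 256-bit buses to see how much of the desktop cards' bandwidth survives the move to notebooks.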
The GeForce GTX 980M is essentially a desktop GeForce GTX 970 with one more SMM disabled, a 12 MHz-lower GPU clock rate, and 1250 MHz GDDR5 memory. Nvidia suggests that the GeForce GTX 980M delivers about 75% of the desktop 980’s performance, and based on these specifications, I don’t think that's an unreasonable claim.
As for the GeForce GTX 970M, it's a different animal with one 64-bit memory channel and its associated render back-end disabled, yielding a 192-bit aggregate memory interface (instead of 256-bit), 3 GB of GDDR5 (rather than 4 GB), and 48 ROPs (instead of 64). Naturally, that's going to negatively affect performance. But power consumption should drop as well.
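One way to quantify the cost of those disabled ROPs is peak pixel fill rate. This is a back-of-the-envelope sketch under the usual simplifying assumption of one pixel per ROP per clock, ignoring real-world bandwidth limits:

```python
# Peak pixel fill rate: ROPs x core clock (MHz), reported in Gpixel/s.
# Simplifying assumption: one pixel per ROP per clock cycle.
def fillrate_gpix(rops, core_clock_mhz):
    return rops * core_clock_mhz / 1000

print(fillrate_gpix(48, 924))   # GTX 970M -> 44.352 Gpix/s
print(fillrate_gpix(64, 1038))  # GTX 980M -> 66.432 Gpix/s
```

On paper, then, the 970M gives up roughly a third of the 980M's raster throughput, which is consistent with where Nvidia positions the two parts.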
Nvidia is choosing not to disclose the thermal design power of either Maxwell-based mobile part, but a rough guess based on the GPUs' specs alone puts the GeForce GTX 980M in the 125 W neighborhood, and the 970M somewhere around 95 W. Of course, that'd be maximum theoretical draw, which is constrained in real-world scenarios by thermal headroom, chassis form factor, and the notebook's power delivery. This is especially true when running on battery, where lower power states kick in to shave off wattage.
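For a sense of where such ballpark figures come from, here's a naive scaling sketch (purely illustrative, not Nvidia's methodology): power is assumed to scale linearly with shader count times core clock, anchored to the desktop GTX 980's 165 W. It lands below the 125 W/95 W guesses above, which makes sense, since memory, board, and other fixed draw don't shrink along with the shader array:

```python
# Naive TDP estimate: scale the desktop GTX 980's 165 W by the ratio of
# (shaders x core clock). Assumption: linear scaling, no fixed overheads.
def scaled_tdp(shaders, clock_mhz, ref_shaders=2048, ref_clock=1126, ref_tdp=165):
    return ref_tdp * (shaders * clock_mhz) / (ref_shaders * ref_clock)

print(scaled_tdp(1536, 1038))  # GTX 980M: ~114 W
print(scaled_tdp(1280, 924))   # GTX 970M: ~85 W
```

The gap between these floors and the estimates in the text is a reasonable proxy for the power that doesn't scale with the GPU core itself.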
Speaking of lower-power states, Nvidia makes bold claims about its BatteryBoost technology. Originally released with the GeForce 800M series, BatteryBoost monitors frame rates while gaming and pares back if a GPU's performance potential isn't needed to maintain a specified target. As a result, the graphics subsystem operates more efficiently, using less power as it works just hard enough to render smoothly. That frame rate target is configurable via GeForce Experience software, by the way, and the company claims to have made additional improvements since the feature's original release. This is definitely a feature that we look forward to testing thoroughly with the GeForce GTX 980M and 970M.
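Conceptually, that kind of frame-rate governor is a simple feedback loop. The toy sketch below is our own illustration, not Nvidia's implementation; the 13 MHz step and the 5% hysteresis margin are arbitrary assumptions:

```python
# Toy BatteryBoost-style governor: nudge the GPU clock down when the
# recent average FPS comfortably exceeds the target, back up when it
# falls short. step and the 1.05 margin are illustrative assumptions.
def adjust_clock(current_clock_mhz, avg_fps, target_fps, step=13):
    if avg_fps > target_fps * 1.05:       # comfortably above target: save power
        return current_clock_mhz - step
    if avg_fps < target_fps:              # below target: restore performance
        return current_clock_mhz + step
    return current_clock_mhz              # close enough: hold steady
```

Called once per sampling interval, a loop like this converges on the lowest clock that still sustains the user's chosen frame rate, which is exactly the efficiency argument Nvidia is making.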
Aside from BatteryBoost, the GeForce GTX 970M and 980M are endowed with the new features associated with second-generation Maxwell GPUs. That includes Dynamic Super Resolution (DSR) and Multi-Frame sampled Anti-Aliasing (MFAA), Nvidia's ambitious Voxel Global Illumination (VXGI) technology, VR Direct enhancements targeted at virtual reality head-mounted display (HMD) users, and DirectX 12 compatibility. We'll dig into the features that specifically affect the GeForce GTX 970M and 980M in our upcoming review, but if you're interested in learning more about them in general, check out Nvidia GeForce GTX 970 And 980 Review: Maximum Maxwell.
For now, let's go to our preview benchmarks and see what we can learn about the mobile 900M series.




Keep in mind that these tests were performed with the 970M, not the top-of-the-line 980M. Even so, the new GeForce chip handled every game we threw at it in single-GPU mode at 1080p, averaging at least 38 FPS at ultra detail settings (and usually much higher). It also beat the GeForce GTX 870M by a significant margin. And when two GeForce GTX 970Ms were linked in SLI, the laptop never fell below 30 FPS at a triple-monitor 5760x1080 resolution. That is very impressive for a notebook.
Of course, those tests were performed while plugged into an AC power source, so we're very keen to see how much performance the 970M can sustain with BatteryBoost enabled. That's something we'll focus on in our upcoming review. In the meantime, the GeForce GTX 970M has certainly wowed us with the brute force it brings to the notebook market, either on its own or paired with a twin in SLI. Soon we'll have a clearer picture of what the 980M can do as well, along with how both chips perform under the constraints of battery power. The preliminary tests we've already run give us good reason to be optimistic, and we'll have full results for you as soon as we can.


Or did they simply skip the 800 desktop cards so that, from now on, desktop chips in a series will release before their mobile counterparts?
They did a similar thing with the GTX 300 series. This isn't a surprise.
Anyway, based on the 970M's spec sheet, I'm guessing that configuration would be appropriate for a desktop GTX 960, if they're making one.
For the past year I keep finding myself gaming more and more on my old gaming laptop. Sitting on a very comfortable couch with my feet up while using a controller is just so much better than being stashed away in my bedroom whole weekends playing on my desktop. My gf can watch TV while I'm busy gaming beside her on the couch, carbo-loading on food, haha!
I would really find it hard to resist if they release a future laptop GPU with 780 Ti-class performance. I just might wholeheartedly buy that right away.
I wish AMD had something competitive on the mobile side just to keep prices down. An R9 M290X just isn't in the same league as even an 880M, let alone the 980M.
Alienware for starters.
Nearly every modern gaming laptop heavily throttles the GPU when running on battery exactly for this reason.
Ummm, what? I guess you haven't heard of the desktop 970/980 release and how they offer the same performance for way less money and power consumption, letting you get way more for the same money/energy/heat? In the end you DO get more performance. Also, you don't use GeForce in a workstation; that's what Quadro is for, and you get your better OpenGL support with it. What's killing you is looking at products not intended for you and thinking that they are.
The GTX 980 die and the 750 Ti die. Every Maxwell part, mobile or desktop, uses one of those dies.
You are just plain wrong and make stupid, blind assumptions.
I am currently at work, having a break, and I am writing this from an HP Z420 with a Xeon 2650v2 and a Quadro K4000. We are using the complete Adobe CS6 suite and the complete Autodesk Maya and Max 2015 suites. This Quadro K4000 and its drivers are horrible. I have a scene in Maya set in meters, and any time I zoom out 15 units away from the 3D model, the viewport displays corrupted geometry: big patches of pitch black. The issue is so severe that it impacts my work. I am using the much-recommended Viewport 2.0. None of the options I tried fixed the issue. Switching from DX to OpenGL does not help at all; texture corruption and more follows. I used every version of every type of driver I could find (switched around 10 of them, clean installations of Windows, etc.). I wrote to both Autodesk and nVidia regarding this issue - nothing. All 4 Z420 machines with the Quadro K4000s have this problem, both in Max and in Maya.
This problem is so dramatic that I take my work home from the office, and when I get off I go to my own GTX 650 Ti to load the scenes, because the dumb GTX 650 Ti just works. If you really think Quadros are the bread and butter and savior, you are deeply wrong. Quadro drivers are worse than GTX drivers, and the so-called support is laughable. I have used many Quadros, GTs/GTXs, and Radeons over the years, and my-oh-my, GTX cards have been mostly rock solid (with few exceptions), while the Quadros got worse and worse. Sure, the Quadro K4000 can push 4-5 times the millions of polys my personal GTX 650 Ti can, but with all the corruption, that does not matter. Everyone in the office who is on the Z420s with the Quadros envies the guy next to us with a non-branded machine and a GTX 770. Things on the GTX just work.
Please, get some hands on experience with these, before you start commenting.
As for the other point, the GTX 970 and 980 are supposed to be high-end cards for enthusiasts. If they had 50 W more of juice, those would have been some impressive results. For many generations, the top-of-the-line always pushed the envelope to the limit. Because AMD does not offer any competition, nVidia is not pushing the performance front the way it used to. Same story on the CPU side: when your competitor can't match your performance, you start pushing efficiency. nVidia and Intel are both going for small increments in performance because AMD is riding in the back seat, and while OpenCL and HSA are still a vision of the future, things are not going to change soon.
Whatever your experience may be with Quadro/FirePro, the point is that nVIDIA/AMD do not expect you to use GeForce/Radeon for professional work. It's true that I never used WS cards myself (although I have friends who do, and they don't run into such issues, luckily), but I'm speaking from the point of nVIDIA: they are aiming GeForce towards gamers and not WS, and they aren't "sacrificing" anything, you are complaining about desktop GPUs allegedly not meeting your requirements in a news article about laptop gaming graphics. I repeat myself: umm, what? Your 770 isn't top-of-the-line anymore and there are plenty new and more powerful cards for you to buy if you want more performance. nVIDIA is right to focus on power efficiency, especially in laptop graphics - it enables smaller form-factors and better battery life. If you are not happy with the way they handled GTX 970/980 for desktop, get OCd versions, OC them yourself or wait for Maxwell-based 980 Ti/Titan X/whatever they'll call it. The power envelope isn't set in stone, it's simply what the card requires at nVIDIA reference clocks and the numbers are very impressive, considering the performance at this wattage, especially compared to previous generation. Maxwell OCs well, most factory-OCd 970s I saw tested match the stock 980, and I bet that the 980 can do even more. So while I may not be aware of your quite non-standard situation (again, all my friends who use Quadros for work don't have such issues), your initial complaint about nVIDIA allegedly sacrificing something is what's wrong here.
Anyway, I never said the GTX 770 is top of the line, and I never said I have one. We have one at the office, and it mops the floor with the K4000s.
The problem is: we need CUDA for software reasons, the new Kepler Quadros' bad drivers and lack of proper support make them unusable, and the new GTXs have 1:32 double precision. It's like being stuck between a rock and a hard place. nVidia's own Fermi-based Quadros and GTXs mop the floor with their Kepler fiasco, although GTX Kepler is doing remarkably well. nVidia may not expect me to use GTX for professional work, but I do not have other options when we are stuck with CUDA and their Quadros do a worse job for 8 times the price (GTX 650 Ti - 100 euros, Quadro K4000 - 800 euros).
What is truly wrong here is that nVidia started their CUDA fiasco, and CUDA is inferior in every way to OpenCL, but just because it was first, it still exists (and because nVidia pays certain developers to use CUDA only). And when someone complains about their fiasco, fanboys like you jump all over them. If you don't want me to rant about their GTXs, well, make them fix their Quadros. Because especially in the 400, 500, and 600 series, GTX was the workstation card of choice, not because they were cheaper, but because they worked.
For at least a decade, Clevo has custom-built laptops to your specifications: pick a base model and then options from a list of available componentry. Most of the boutique brands (Falcon Northwest, WidowPC, VoodooPC, and even pre-Dell-purchase Alienware) are or were Clevo units. They can be purchased through distributors throughout the world.
http://www.lpc-digital.com/sager-np9377-special.html
As for the desktop/laptop performance issue, you can control how laptops perform both on battery and when plugged in; it's simply a matter of adjusting the power profile.
We use GeForce cards, which satisfy all our CAD-based needs quite well. AutoCAD 2D and 3D actually perform better on GeForce than they do with Quadro. Even Maya does surprisingly well, though if you're using SolidWorks, Quadro is the only way to go.
http://www.tomshardware.com/reviews/specviewperf-12-workstation-graphics-benchmark,3778-9.html