Is Nvidia Not Being Forthcoming About Lower-Speed MX150 GPUs?


King_V

Illustrious
Ambassador
I guess it would be too much to expect the differentiation to be handled the way they handle the 1070 and 1080 variants?

Or, maybe, you know, something like the powerhouse being the MX150 and the lower-level ones being MX150E or MX150U or some such?
 

MCMunroe

Distinguished
Jun 15, 2006
283
1
18,865
I would be fine if this was done at a software level. Then users who cared could change it, knowing it would affect battery life, temps, and noise. This, however, is very shady.

I am a bit PO'd that my XPS 15 has a GTX 1050, with voltages and power locked, as well as missing half the ROPs: 16 (vs 32). Device Id: 1C8D.
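For anyone curious whether their own unit is locked the same way, here is a rough sketch using the pynvml NVML bindings (assuming they're installed; not tested on this exact laptop) that shows whether the power limit can be moved at all:

# Rough sketch: query the current power limit and the allowed min/max range.
# If min == max, the vendor has locked the limit and no software tool will move it.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
name = pynvml.nvmlDeviceGetName(handle)
if isinstance(name, bytes):  # older pynvml versions return bytes
    name = name.decode()
limit = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)  # milliwatts
print(f"{name}: power limit {limit/1000:.0f} W (allowed range {lo/1000:.0f}-{hi/1000:.0f} W)")
if lo == hi:
    print("Power limit is locked by the vendor; it cannot be raised in software.")
pynvml.nvmlShutdown()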
 

Olle P

Distinguished
Apr 7, 2010
720
61
19,090
In what way does this differ from the CPUs that are run at different TDPs depending on settings in the UEFI?
Here it's a firmware setting, which just about anybody can change(?) using some OC software...
 

MCMunroe

Distinguished
Jun 15, 2006
283
1
18,865
@Olle P - The GPU and the CPUs you speak of have firm TDP limits. It will still throttle on power/current even if you set the clocks higher. (Why can't I delete my double post? Sorry, my bad.)
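If anyone wants to see which limit is actually kicking in, NVML exposes the active throttle reasons as a bitmask. A rough pynvml sketch (constant names per the NVML docs, so treat it as a starting point):

# Sketch: read the active clock-throttle reasons.
# A set SwPowerCap bit means the card is held back by its power/TDP limit,
# no matter what clocks you ask for.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
reasons = pynvml.nvmlDeviceGetCurrentClocksThrottleReasons(handle)
checks = {
    "power cap (TDP)": pynvml.nvmlClocksThrottleReasonSwPowerCap,
    "hardware slowdown": pynvml.nvmlClocksThrottleReasonHwSlowdown,
    # thermal-slowdown bit only exists in newer pynvml releases
    "thermal slowdown": getattr(pynvml, "nvmlClocksThrottleReasonSwThermalSlowdown", 0),
    "idle": pynvml.nvmlClocksThrottleReasonGpuIdle,
}
active = [label for label, bit in checks.items() if reasons & bit]
print("Throttling due to:", ", ".join(active) if active else "nothing right now")
pynvml.nvmlShutdown()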
 

If the laptop's cooling system is only designed to handle a 10-watt GPU, then you might not be able to get much higher than that anyway. Clearly, this low-power version should be given a distinct name, so that consumers can make informed decisions when comparing laptops.
 
Well, are the technical specs of each mobile GPU stated by the laptop OEMs in their product spec lists - on the box packaging, in the manual, and on their websites? If not, then perhaps Nvidia needs to get with the OEMs and advise them to detail those specs for their various models, so consumers don't assume they're getting one chip when they're getting the other. Everyone has their own needs: some want lower power and less performance for longer battery life, others want more performance while plugged into the wall, for example.

Perhaps Nvidia can simply clear up the naming confusion, as someone said above, by adding a letter to the end of each GPU name. Then Nvidia can list the specs of each mobile GPU on its website - the numbers fixed at fabrication and in the vBIOS that OEMs cannot control:

ROPs/TMUs
Shaders
Texture Fillrate
Pixel Fillrate
Bus width
Memory size
Memory bandwidth

^^Those aren't changed by the laptop OEMs, as the article mentions, and Nvidia should put that hard data on its website for each mobile GPU. Nvidia already lists most of these factory specs for its desktop GPUs under each model. Only the GPU clock, memory speed, and boost speed are set by the laptop OEM partners, so those specs are up to them to report.

This is an easy fix for both parties and needn't cause confusion. I don't think they were doing anything illicit; it just appears to be another disconnect between engineering and marketing (yet again).
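In the meantime, a buyer can check which variant a given laptop actually shipped with by reading the PCI device ID and the fixed memory/clock specs straight from the card. A rough pynvml sketch (assuming the bindings are installed; 1D12 is the device ID mentioned further down in this thread for the low-power part):

# Sketch: identify the GPU variant by its PCI device ID plus the fixed
# memory size and rated clocks. Requires the pynvml NVML bindings.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
pci = pynvml.nvmlDeviceGetPciInfo(handle)
# pciDeviceId packs the device ID in the upper 16 bits and the vendor ID in the lower 16
device_id = pci.pciDeviceId >> 16
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
max_core = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)  # MHz
max_mem = pynvml.nvmlDeviceGetMaxClockInfo(handle, pynvml.NVML_CLOCK_MEM)        # MHz
print(f"PCI device ID: {device_id:04X}")
print(f"VRAM: {mem.total / 1024**3:.1f} GiB, max core {max_core} MHz, max memory {max_mem} MHz")
pynvml.nvmlShutdown()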
 

MCMunroe

Distinguished
Jun 15, 2006
283
1
18,865


I just checked Nvidia's website. The listed specs for the mobile GTX 1050 do not mention that it has half the ROPs of the desktop model. I assume that's why it benchmarks farther below desktop GTX 1050s and laptop GTX 1050 Tis than the difference in shaders and clocks would suggest.

[Attached image: Nvidia's GTX 1050 Mobile spec listing]

 

psiboy

Distinguished
Jun 8, 2007
180
1
18,695
Just when you think they've learned their lesson on transparency, they go and pull the same shit again! (They = all manufacturers, so back off, fanbois, I'm not leaving anyone out!)
 

kookykrazee

Distinguished
Nov 23, 2008
40
2
18,530


Also, look at the different memory manufacturer. That 1D12 probably cost less, by quite a bit, relatively speaking.
 

Olle P

Distinguished
Apr 7, 2010
720
61
19,090
The CPUs are most definitely not run at one specific TDP in all systems. There the system builder defines the TDP somewhere within a span (typically 15-45 W) set by the CPU manufacturer.
For the new MX150 GPUs, are those (lower) limits really hard-coded by Nvidia, or are they simply set by the laptop manufacturer (which also designs the cooling solution)?

I'm in no way suggesting that the user is able to change the thermal limits, just that they're hard-coded into the firmware/UEFI by the system designer/builder.
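One way to probe that: NVML reports both the board's default power limit (the one baked into the vBIOS the OEM ships) and the limit currently being enforced, so comparing the two gives a hint about where the cap is coming from. A rough pynvml sketch, untested on these particular machines:

# Sketch: compare the default power limit from the shipped vBIOS with the
# limit currently being enforced. If they differ, something at the
# driver/OEM level has overridden the default at runtime.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
default_mw = pynvml.nvmlDeviceGetPowerManagementDefaultLimit(handle)  # milliwatts
enforced_mw = pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)          # milliwatts
print(f"Default limit:  {default_mw/1000:.1f} W")
print(f"Enforced limit: {enforced_mw/1000:.1f} W")
pynvml.nvmlShutdown()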
 

MCMunroe

Distinguished
Jun 15, 2006
283
1
18,865


Despite what GPU-Z says, my example GTX 1050, when benchmarked (max GPU temp 58 °C, which is plenty cool, damn it), will boost to 1800 MHz for about 5 seconds and then settle at 1200 MHz from then on. This was seen in Superposition and in normal gaming. Yes, it's plugged in; yes, it's cool. Fiddling with the clocks doesn't really change this at all. I even tried underclocking to get steady behavior.
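For anyone who wants to reproduce this, here's a quick pynvml logging sketch (assuming the bindings are installed) that samples the clock, temperature, and power once a second while the benchmark runs, which should capture the boost-then-settle behavior:

# Sketch: log graphics clock, temperature, and power draw once a second.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
for _ in range(60):  # one minute of samples
    clk = pynvml.nvmlDeviceGetClockInfo(handle, pynvml.NVML_CLOCK_GRAPHICS)      # MHz
    tmp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)   # deg C
    pwr = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000                          # W
    print(f"{clk:5d} MHz  {tmp:3d} C  {pwr:5.1f} W")
    time.sleep(1)
pynvml.nvmlShutdown()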

 