An unidentified leaker has posted 3DMark Time Spy results for Nvidia's new entry-level RTX 2050 and MX550 GPUs to the Chinese forum Zhihu. The early benchmarks reveal that both GPUs are a significant step down from Nvidia's latest budget-friendly GPU, the RTX 3050. That's to be expected, given their narrow 64-bit memory interfaces.
A few days ago, Nvidia officially announced three new mobile GPUs for the laptop market — the RTX 2050, MX570, and MX550. The new 500-series in the MX lineup is believed to be a full upgrade and replacement for the older 400-series, while the RTX 2050 appears to be a new way Nvidia can provide DLSS and ray tracing support to an even more budget-friendly audience than the RTX 3050.
Specs for the RTX 2050 include 2,048 CUDA cores and 4GB of GDDR6 running on a very narrow 64-bit bus with just 112GB/s of memory bandwidth. Maximum power consumption is rated at a TGP of 45W, with a maximum boost clock of 1477MHz. Nvidia never gave its budget Turing GPUs (i.e., TU116 and TU117) RT cores or Tensor cores, and the Turing dies that did have them were quite large. So to create the new RTX 2050, Nvidia is tapping the Ampere GA107 core rather than a cut-down TU106 die.
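For reference, the quoted 112GB/s falls straight out of the bus width and the memory data rate. A quick sanity check — note that the 14Gbps effective GDDR6 rate is our inference from the published numbers, not a confirmed spec:

```python
# Sanity check on the RTX 2050's quoted memory bandwidth.
# The 14 Gbps effective GDDR6 data rate is inferred from the 112GB/s
# figure and 64-bit bus, not taken from an official spec sheet.
bus_width_bits = 64
data_rate_gbps = 14  # effective transfers per pin, in Gbps (our assumption)

# bandwidth = (bus width in bytes) x (data rate per pin)
bandwidth_gb_s = (bus_width_bits / 8) * data_rate_gbps
print(bandwidth_gb_s)  # 112.0
```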
According to the post, the RTX 2050 and MX570 are basically the same chip. It's not clear if the MX570 will have the exact same specs as the RTX 2050, including its 2,048 CUDA cores and GA107 die, or if some features will be disabled. One key difference is that the MX570 only comes with 2GB of GDDR6 memory, which is worrying, as the 4GB on the RTX 2050 was already borderline. Perhaps the RT and/or Tensor cores will also be disabled. We'll find out more once notebooks with the new GPUs arrive.
The MX550 and MX570, meanwhile, are completely different animals. For starters, the MX550 apparently sports 1,024 CUDA cores and is based on Nvidia's Turing TU117 GPU instead of the Ampere GA107. The MX550 will also support both 2GB and 4GB memory capacities, featuring GDDR6 on a 64-bit bus — the same as the RTX 2050 and MX570. While that may look like half the RTX 2050's core count, keep in mind that Turing features separate floating-point and integer pipelines, while Ampere pairs a dedicated FP32 pipeline with a shared FP32/INT32 pipeline. Ampere is faster, more efficient, and smaller, but the MX550 may only be a modest step down in real-world gaming performance.
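To see why halving the core count on paper may not halve real-world throughput, here's a toy model — our own simplification for illustration, not an Nvidia formula — of per-SM FP32 throughput under the two pipeline layouts described above:

```python
# Toy model of per-SM FP32 throughput (a sketch, not official arithmetic).
# Per Turing SM: 64 dedicated FP32 lanes + 64 dedicated INT32 lanes.
# Per Ampere SM: 64 dedicated FP32 lanes + 64 lanes shared by FP32/INT32.

def effective_fp32_lanes_turing(int_fraction):
    # INT work runs on its own pipeline, so FP32 throughput is unaffected.
    return 64.0

def effective_fp32_lanes_ampere(int_fraction):
    # The shared pipeline spends int_fraction of its cycles on INT work.
    return 64.0 + 64.0 * (1 - int_fraction)

for f in (0.0, 0.3, 0.5):
    print(f, effective_fp32_lanes_turing(f), effective_fp32_lanes_ampere(f))
```

With a purely hypothetical mix where 30% of shared-pipeline cycles go to integer work, an Ampere SM delivers roughly 1.7x a Turing SM's FP32 throughput rather than 2x, which is why "half the cores" overstates the MX550's deficit.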
3DMark Time Spy Results
Backing up the above information, the MX550 scored 7,888 points in the Time Spy CPU test and 2,510 points in the graphics test. The RTX 2050 result features a CPU score of 7,779 points and a graphics score of 3,369 points. We don't know anything about the notebooks that housed the RTX 2050 and MX550. Still, these results are very low and are similar to Nvidia's entry-level products from a year ago.
For comparison, Nvidia's previous entry-level GTX 1650 mobile GPU scores 3,634 points in the Time Spy graphics benchmark. That makes it slightly faster than the new RTX 2050, at least in Time Spy. Both are much slower than the new RTX 3050 mobile, which scores 4,869 points in the same test.
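Putting the graphics scores quoted above side by side makes the gaps easier to read; the percentages are simple ratios we computed from the reported numbers:

```python
# Time Spy graphics scores quoted in this article, compared against the
# RTX 2050 as the baseline.
scores = {"RTX 3050": 4869, "GTX 1650": 3634, "RTX 2050": 3369, "MX550": 2510}

baseline = scores["RTX 2050"]
for gpu, score in scores.items():
    delta = (score / baseline - 1) * 100
    print(f"{gpu}: {score} pts ({delta:+.1f}% vs RTX 2050)")
```

By that math, the GTX 1650 lands roughly 8% ahead of the RTX 2050, the RTX 3050 roughly 45% ahead, and the MX550 about 25% behind.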
If these results are any indication of real-world gaming performance, then the new RTX 2050 looks like a GTX 1650 with RT cores and DLSS support bolted on. The MX570 may perform at the same level, though with half the VRAM we wouldn't expect it to be quite as fast in some newer games.
Ray tracing requires a decent amount of GPU horsepower and memory bandwidth to deliver respectable frame rates, even with DLSS enabled. We've already seen the RTX 3050 struggle with RT effects due to its small 4GB frame buffer, so the RTX 2050, with its narrower and slower 64-bit memory bus, will inevitably fare even worse.
We're not sure if there's real demand for these low-end GPUs. Most likely they're more about checking a feature box than delivering usable RT/DLSS performance, though we'll withhold final judgment until laptops with the GPUs actually begin shipping and we can test them ourselves. They'll also have to compete against Intel's upcoming Arc Alchemist GPUs, which is perhaps the real reason for their existence.