The EVGA GeForce GTX 1650 GDDR6 is a better card in every way than the original GeForce GTX 1650 cards with GDDR5. With the same basic GPU (and slightly lower clocks) but 50% more memory bandwidth thanks to the switch to GDDR6, it's an easy win in a head-to-head comparison. The problem is that the GTX 1650 doesn't exist in a vacuum, and with the GTX 1650 Super selling for essentially the same price (maybe $10 more), the Super is clearly the better budget pick. We've added the GTX 1650 GDDR6 to our GPU hierarchy, and it lands near the bottom of the charts in both performance and price, much as you'd expect.
Put simply, the current budget GPU market is both confusing and underwhelming. Where have all the good budget cards gone? The previous generation had the GeForce GTX 1050 and GTX 1050 Ti from Nvidia, priced around $110 and $140, respectively. Meanwhile, AMD offered up the Radeon RX 560 4GB at prices ranging from $100 to $120. More recently, the Radeon RX 570 4GB basically killed off demand for most other ultra-budget GPUs, assuming your PC could handle the 6-pin or 8-pin PEG power requirement.
The latest AMD and Nvidia budget GPUs cost nearly 50% more than the previous-generation models, and they're potentially up to 65% faster, mostly thanks to the GTX 1050 only having 2GB of VRAM. But it's been 3.5 years since the 1050 and 1050 Ti launched, and one year since the GTX 1650 landed. Prices have stagnated, and neither AMD nor Nvidia appears willing to target the sub-$150 market with new GPUs right now.
A few recent cards might get there with mail-in rebates, but the true budget graphics cards are uninspiring. You're far better off spending a bit more on a $200-$230 GPU like the GeForce GTX 1660 or GeForce GTX 1660 Super. Or maybe in a few months we'll see the GTX 1650 (both GDDR5 and GDDR6 variants) drop in price, which would help tremendously. Right now, they're basically a $20-$30 price cut away from being 'great' budget cards.
EVGA GTX 1650 GDDR6 Specifications
| Graphics Card | GTX 1660 | GTX 1650 Super | GTX 1650 GDDR6 | GTX 1650 |
|---|---|---|---|---|
| Die size (mm²) | 284 | 284 | 200 | 200 |
| SMs / CUs | 22 | 20 | 14 | 14 |
| Base Clock (MHz) | 1530 | 1530 | 1410 | 1485 |
| Boost Clock (MHz) | 1785 | 1725 | 1590 | 1665 |
| VRAM Speed (Gbps) | 8 | 12 | 12 | 8 |
| VRAM Bus Width (bits) | 192 | 128 | 128 | 128 |
Nvidia currently offers four different GPUs that generally fall in the sub-$200 range: the GTX 1650, GTX 1650 GDDR6, GTX 1650 Super, and GTX 1660. All are manufactured using TSMC's 12nm FinFET lithography, and we'll have to wait for Ampere before Nvidia shifts to 7nm or 8nm lithography. The TU117 GPU in the GTX 1650 (both GDDR5 and GDDR6) supports up to 16 SMs (Streaming Multiprocessors), each with 64 CUDA cores. The full TU117 so far has only shown up in the mobile GTX 1650 Ti, however, with the desktop GTX 1650 models enabling 14 SMs. That means 896 FP32 CUDA cores and 56 TMUs (texture mapping units).
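As a quick sanity check, the core count and bandwidth figures quoted in this review fall straight out of the table's specs. A minimal sketch of the arithmetic (illustrative only, with function names of our own invention):

```python
# Illustrative arithmetic: deriving core counts and theoretical memory
# bandwidth from the spec table above.

def cuda_cores(sms, cores_per_sm=64):
    """Each Turing SM carries 64 FP32 CUDA cores."""
    return sms * cores_per_sm

def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
    """Theoretical bandwidth = effective data rate x bus width / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(cuda_cores(14))            # 896 cores in the desktop GTX 1650 models
print(bandwidth_gb_s(12, 128))   # 192.0 GB/s for the GDDR6 card
print(bandwidth_gb_s(8, 128))    # 128.0 GB/s for GDDR5
```

That 192 GB/s versus 128 GB/s is where the 50% memory bandwidth uplift comes from, despite the identical 128-bit bus.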
Clock speeds also vary slightly among the models, and as usual, the AIB partners are free to deviate. Officially, the reference spec on the GTX 1650 GDDR5 is a 1485 MHz base clock and 1665 MHz boost clock, while the GTX 1650 GDDR6 has a 1410 MHz base clock and 1590 MHz boost clock. The EVGA GTX 1650 GDDR6 card, on the other hand, has a 1710 MHz boost clock, because it's the SC Ultra Gaming edition — the other EVGA option being an SC Ultra Black edition that has a 1605 MHz boost clock and currently costs $10 more.
One key difference not listed in the above table is video codec support. The GTX 1650 and 1650 GDDR6 use the Turing TU117 GPU, while the GTX 1650 Super and GTX 1660 use the TU116 GPU. Besides offering more cores and performance, TU116 also includes the latest NVENC video block that supports encoding and decoding a variety of video formats. Generally, it delivers equivalent or superior quality to CPU-based encoding. Even the lowly GTX 1650 Super has the same capabilities as the RTX 2080 Ti in this area. TU117, on the other hand, uses the same NVENC as the previous-generation Pascal GPUs. It's not terrible, but it's definitely not as good as the Turing encoder.
EVGA GTX 1650 GDDR6 SC Ultra: A Closer Look
The EVGA GTX 1650 SC Ultra GDDR6 variant uses a relatively compact design. It measures 202.2 x 111.2 x 37.3 mm (7.96 x 4.38 x 1.47 inches) and weighs 565g (1.24 lbs). It's a full 2-slot card and extends about 5cm past the end of the PCIe slot, but it should fit in nearly any PC case that's designed to work with a dedicated GPU.
It's nice to see that, even on a budget GPU, EVGA still includes a full-coverage metal backplate. It might theoretically help with cooling the card, but that seems unnecessary. The main benefit we see is that it protects the delicate components on the back of the graphics card from accidental damage. I'm not going to name names, but I have a 'friend' who may have damaged an R9 290X back in the day when a small component (resistor or capacitor) got hit by a screwdriver while he was putting together a PC. Oops.
The cooler is relatively simple with a single heatpipe wrapping around to help disperse heat to the heatsink's fin array. Two 85mm axial fans provide airflow, with a plastic shroud helping to direct the air across the heatsink fins. While EVGA makes no mention of the fact, the card does appear to have 0dB fan tech that shuts off the fans when the GPU is idle. Even under load, the fans typically spin at less than 2000 RPM and are very quiet. Video ports consist of two DisplayPort 1.4 outputs and a single HDMI 2.0b port.
Popping off the cooler, the PCB and power circuitry are pretty tame compared to what we see on larger, higher-performance graphics cards. And rightly so. There's a single 6-pin PEG (PCI Express Graphics) power connector providing extra power, likely to accommodate the factory overclock. Theoretically, the GTX 1650 could run off just the PCIe x16 slot's 75W, but EVGA tacks on a 6-pin PEG connector to ensure there's more than enough power on tap.
EVGA GeForce GTX 1650 GDDR6 SC Ultra: How We Test
Our current graphics card test system consists of Intel's Core i9-9900K, an 8-core/16-thread CPU that routinely ranks as the fastest overall gaming CPU. The MSI MEG Z390 Ace motherboard is paired with 2x16GB Corsair Vengeance Pro RGB DDR4-3200 CL16 memory (CMW32GX4M2C3200C16). Keeping the CPU cool is a Corsair H150i Pro RGB AIO. OS and gaming suite storage comes via a single XPG SX8200 Pro 2TB M.2 SSD.
The motherboard runs BIOS version 7B12v17. Optimized defaults were applied to set up the system, after which we enabled the memory's XMP profile to get the memory running at its rated DDR4-3200 CL16 specification. No other BIOS changes or performance enhancements were enabled. The latest version of Windows 10 (1909) is used and is fully updated as of May 2020.
Our current list of test games consists of Borderlands 3 (DX12), The Division 2 (DX12), Far Cry 5 (DX11), Final Fantasy XIV: Shadowbringers (DX11), Forza Horizon 4 (DX12), Metro Exodus (DX12), Red Dead Redemption 2 (Vulkan), Shadow of the Tomb Raider (DX12), and Strange Brigade (Vulkan). These titles represent a broad spectrum of genres and APIs, which gives us a good idea of the relative performance differences between the cards. We're using driver build 445.87 for the Nvidia cards and Adrenalin 20.4.2 drivers for AMD. We've provided a selection of competing GPUs from both AMD and Nvidia for this review.
We capture our frames per second (fps) and frame time information by running OCAT during most of our benchmarks, and use the .csv files the built-in benchmarks create for The Division 2 and Metro Exodus. For GPU clocks, fan speed, and temperature data, we use GPU-Z's logging capabilities in conjunction with Powenetics software from Cybenetics that collects accurate graphics card power consumption.
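For readers curious how per-frame capture data turns into the headline numbers, here's a minimal sketch of the kind of reduction involved. This is not our actual tooling, just an illustration: it assumes you've already extracted a list of per-frame times in milliseconds from an OCAT/PresentMon-style .csv.

```python
import statistics

def fps_metrics(frame_times_ms):
    """Reduce per-frame times (in ms) to average fps and a
    99th-percentile ('1% low' style) fps figure."""
    avg_fps = 1000 / statistics.mean(frame_times_ms)
    ordered = sorted(frame_times_ms)
    p99_ms = ordered[min(int(len(ordered) * 0.99), len(ordered) - 1)]
    return avg_fps, 1000 / p99_ms

# Example: mostly 10 ms frames with an occasional 20 ms hitch.
# The 1% low (50 fps) reveals stutter the ~99 fps average hides.
avg, low = fps_metrics([10.0] * 99 + [20.0])
```

This is why we look at percentile figures alongside averages: a card can post a healthy mean fps while still delivering noticeably uneven frame pacing.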