GeForce RTX 3050 Rumors: Baby Ampere with 2304 Cores Due in 2021

Ampere Architecture
(Image credit: Nvidia)

Twitter user @kopite7kimi dropped a post about a new RTX 3050 that is possibly in the works. This SKU is rumored to be equipped with 2304 CUDA cores and a 90W TDP on a GA107 die, indicating Nvidia is already planning or has already begun production of this GPU. All we have are potential specifications for the 3050, so the usual grain of salt applies. We still don't know when this card might launch, or its potential price. All we know is that Nvidia needs to replace its current entry-level GTX 1650 Turing lineup at some point, and an Ampere successor makes sense.

Again, these are rumors, but it's interesting to see this GPU being called an RTX card. That implies the entire RTX 3000 series lineup will feature ray tracing acceleration and Tensor cores for DLSS and Nvidia Broadcast capabilities. That's in stark contrast to the previous Turing generation, where Nvidia split its consumer graphics cards into two lineups: the GTX 16-series and the RTX 20-series. The GTX prefix of course signifies a lack of RT cores and Tensor cores, which made sense to keep costs down for those parts on a 12nm process.

Compared to the current generation GTX 1650 Super, which has 1280 CUDA cores, the rumored RTX 3050 nearly doubles that figure to 2304 CUDA cores. However, due to the Ampere architecture, this doesn't necessarily mean a near doubling in performance. The Ampere architecture has two sets of CUDA cores: one handles only FP32, while the other can do either FP32 or INT32 and spends a fair amount of time on INT32 operations. This is why you get so many of them in the GeForce RTX 3070, GeForce RTX 3080, and GeForce RTX 3090.
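To see why the raw core count overstates things, here's a rough back-of-the-envelope sketch of that dual-datapath split. The 30 percent INT32 share used below is an assumed ballpark for game workloads, not an official Nvidia figure, and the model ignores clocks, memory bandwidth, and everything else.

```python
# Rough sketch of Ampere's dual-datapath design: half the listed CUDA cores
# are FP32-only, the other half can issue either FP32 or INT32.
# The INT32 fraction is an assumed ballpark for games, not a published spec.

def effective_fp32_cores(total_cores: int, int32_fraction: float) -> float:
    fp32_only = total_cores / 2      # dedicated FP32 datapath
    shared = total_cores / 2         # FP32-or-INT32 datapath
    return fp32_only + shared * (1.0 - int32_fraction)

# Rumored RTX 3050: 2304 CUDA cores, assuming ~30% of the shared datapath's
# time goes to INT32 work in a typical game.
print(round(effective_fp32_cores(2304, 0.30)))  # ~1958 "effective" FP32 cores
```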

For games in particular, Ampere GPU performance doesn't scale quite as you'd expect based on theoretical TFLOPS figures. However, games are becoming more and more FP32 intensive as time goes on, so Ampere's relative performance could improve as it ages. Ampere also has other advantages, with second-generation RT cores and third-generation Tensor cores.

If we want to gauge how fast the RTX 3050 might be, we can look at current Ampere GPUs for a comparison. Take the RTX 2080 Ti and RTX 3070, for example. The 3070 has 5888 CUDA cores, while the 2080 Ti has 4352 CUDA cores (26 percent fewer). The two GPUs are basically equal in performance, depending on the games being tested. That means the 3070's CUDA cores are potentially 25-35 percent weaker than the 2080 Ti's, depending on how you frame the comparison (the 3070 needs roughly 35 percent more cores to reach parity). There are of course other factors, chief among them the 2080 Ti's 11GB of memory on a 352-bit bus (vs. 8GB on a 256-bit bus), but it at least gives us a baseline for how performance scales with Ampere GPUs.
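Put into numbers, that comparison works out roughly like this. It's a simple ratio that assumes the two cards really do trade blows, and it ignores memory bandwidth and clock differences.

```python
# If 5888 Ampere cores (RTX 3070) roughly match 4352 Turing cores (RTX 2080 Ti),
# each Ampere core delivers about 0.74x the per-core game performance of a
# Turing core -- a crude estimate that ignores bandwidth, clocks, and caches.
cores_3070 = 5888
cores_2080ti = 4352

per_core_factor = cores_2080ti / cores_3070
print(f"Ampere per-core factor vs. Turing: {per_core_factor:.2f}")               # ~0.74
print(f"Extra Ampere cores needed to match: {cores_3070 / cores_2080ti - 1:.0%}")  # ~35%
```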

If we apply that math to the supposed RTX 3050, it would perform just above a GTX 1660 Ti / Super and just below an RTX 2060. That's purely from a CUDA core standpoint, and other elements like memory bandwidth come into play. Offering ray tracing and DLSS would naturally put the RTX 3050 ahead of the 1660 Super / 1660 Ti in features.
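Extending the same back-of-the-envelope scaling shows where the rumored card lands. The Turing core counts below are the published figures for those cards; the 0.74 factor carries over from the 3070 vs. 2080 Ti comparison above, with all the same caveats.

```python
# Convert the rumored RTX 3050's Ampere cores into rough "Turing-equivalent"
# cores and see where it lands among the 16-series and 20-series cards.
per_core_factor = 4352 / 5888      # from the 3070 vs. 2080 Ti comparison
rtx_3050_cores = 2304              # rumored

turing_equivalent = rtx_3050_cores * per_core_factor
print(round(turing_equivalent))    # ~1703

# Published Turing core counts for context:
#   GTX 1660 Super: 1408, GTX 1660 Ti: 1536, RTX 2060: 1920
# ~1703 lands above the 1660 Ti and below the 2060, matching the estimate above.
```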

The big question, besides whether any of this is even true, is price. The 1050 Ti and 1650 were both in the $150 ballpark. RTX 3050 would likely push higher up the chain, but if it's priced under $200, the RTX 3050 could be a great entry-level card that would help bring DLSS and ray tracing to the masses. It would also be a great fit for modest gaming laptops.

Obviously, there's plenty of speculation right now, and Nvidia hasn't confirmed the existence of GA107 or the RTX 3050. Still, with Intel and AMD pushing integrated graphics performance higher up the chain, it makes sense for entry-level cards to increase in price, performance, and features. In the meantime, we're still waiting for the official RTX 3060 / RTX 3060 Ti launch, as well as AMD's Big Navi, and there are plenty of missing pieces in the RTX 3050 puzzle.

Aaron Klotz
Freelance News Writer

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

  • cryoburner
    If we apply that math to the supposed RTX 3050, the RTX 3050 would perform just above a GTX 1660 Ti / Super and just below an RTX 2060.
    If we went by core counts alone, a 1650 SUPER should perform within about 9% of a 1660 SUPER. However, even at 1080p, it's closer to 25% behind that card. A lot of that likely comes down to the reduced memory bandwidth resulting from reducing the number of VRAM chips. Seeing as Nvidia hasn't really been pushing much more VRAM this generation, with the 3070 having the same amount as the 2070 before it, it's very possible that the 3050 will have 4GB.

    A 3050 with 2304 Ampere cores and 6GB of VRAM could potentially perform in-between a 1660 SUPER and a 2060, but with 4GB, it might not perform any better at rasterized rendering than the 1660 SUPER, and the limited VRAM would likely hurt performance more in demanding titles, especially moving forward. From an RT-capability standpoint, the 30-series cards don't really add much RT performance relative to the rasterized performance they deliver, so RT performance would most likely be a bit below that of a 2060 as well, which is already kind of borderline in terms of usefulness with raytraced lighting effects enabled.
  • Pytheus
    I don't understand why they need to release new entry level cards, just lower the price point of the previous generation to align with their relative performance.
  • cryoburner
    Pytheus said:
    I don't understand why they need to release new entry level cards, just lower the price point of the previous generation to align with their relative performance.
    It's not likely to be as profitable for them to move those older cards down to lower price points, and the cards would also require more power to run. From the cost perspective, they have moved to a newer process node with the 30-series, along with an updated architecture, so they can get more graphics chips of a given performance level out of a single wafer. And as I pointed out above, they will likely adjust the amount of VRAM as well.

    Take for example the RTX 3070, a card that performs roughly similar to a 2080 Ti in today's games. The graphics chip that goes into the 3070 is almost half the size of the one in the 2080 Ti, and VRAM has been reduced from 11GB to 8GB. With the 3070 having a suggested starting price that's half that of the 2080 Ti, it's only natural for them to look for ways to reduce manufacturing costs. That goes even more so for entry-level cards, where the profit margins tend to be smaller to begin with.
  • Pytheus
    cryoburner said:
    it's only natural for them to look for ways to reduce manufacturing costs. That goes even more so for entry-level cards, where the profit margins tend to be smaller to begin with.

    I don't disagree. I don't know what it costs to develop and tool for the new chips versus what it costs them to produce the older chips, which their supplier is already tooled for. In the case of the 2080 Ti I can understand eliminating it completely since newer parts give the same performance, but producing gimped versions of the new chips just to create a value line seems counterproductive and adds development cost. I'm sure they can work out something to drop a 2070 Super by 100 bucks.