Nvidia AD106, AD107 Ada Lovelace GPUs May Use PCIe x8 Interface

RTX 3060 Ti Founders Edition
(Image credit: Nvidia)

According to a tweet by resident GPU leaker Kopite7kimi, the GeForce RTX 4060 (AD106) reportedly delivers a TimeSpy Extreme score of 7,000 points. If accurate, that would put the GeForce RTX 4060's performance between the RTX 3060 Ti and the RTX 3070. Kopite7kimi also noted that Nvidia's AD106 and the more budget-friendly AD107 dies would have only eight PCIe lanes at their disposal instead of 16.

This is the first TimeSpy Extreme performance figure we've seen from the hardware leaker for Nvidia's RTX 40-series (Ada Lovelace) GPUs. AD106 will potentially power the next-generation RTX 4060 and possibly the RTX 4050 Ti (if Nvidia makes one this time).

Kopite7kimi stated that the new score is not very strong, but we would beg to differ. For reference, the current RTX 3060 has an average TimeSpy Extreme graphics score of around 4,500 to 4,800 points. So if Kopite7kimi's data is accurate and the RTX 4060's AD106 GPU scores approximately 7,000 points, the RTX 4060 is roughly 50% faster than the RTX 3060.
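As a quick sanity check on that figure, here's a rough back-of-the-envelope calculation in Python, using the leaked (and unverified) 7,000-point score against the 4,500 to 4,800-point RTX 3060 range cited above:

```python
# Rough uplift check using the leaked (unverified) RTX 4060 score
# against the typical RTX 3060 TimeSpy Extreme graphics score range.
rtx_4060_score = 7000           # alleged AD106 / RTX 4060 score
rtx_3060_scores = (4500, 4800)  # typical RTX 3060 range

for base in rtx_3060_scores:
    uplift = (rtx_4060_score / base - 1) * 100
    print(f"vs. a 3060 score of {base}: ~{uplift:.0f}% faster")

# Prints:
#   vs. a 3060 score of 4500: ~56% faster
#   vs. a 3060 score of 4800: ~46% faster
```

Both ends of that range land in the neighborhood of 50%, which is where the headline figure comes from.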

It would put the AD106 die, or rather the RTX 4060, at performance parity with cards like the RTX 3060 Ti and RTX 3070, which isn't a bad place to be. Going by TimeSpy Extreme alone, the RTX 4060 appears to be a good upgrade over the RTX 3060. But that is the problem: we only have an alleged TimeSpy Extreme score for a GPU that isn't out yet. So, as always, take this data with a grain of salt. That said, the RTX 4060's estimated performance looks plausible if history repeats itself.

When the RTX 3060 was released, it generally outperformed the RTX 2060 Super and RTX 2070 by a few percentage points. The RTX 4060 would be doing the same thing here: substantially quicker than the RTX 3060, but performing similarly to the RTX 3060 Ti and RTX 3070.

PCIe Lane Limitations

Arguably the most interesting part of the tweet is the claim that AD106 and AD107 are getting nerfed to eight PCIe lanes instead of the traditional 16. AMD does the same thing with its entry-level Radeon RX 5000-series and mid-range RX 6000-series products, and it would seem Nvidia will follow suit with the GeForce RTX 40-series.

Assuming Nvidia uses PCIe 4.0 instead of PCIe 5.0, we don't believe this will be a problem on modern platforms. For an RTX 4050 or RTX 4060, a PCIe 4.0 x8 configuration should be adequate and provide enough bandwidth for PCIe-heavy applications. After all, PCIe 4.0 x8 offers the same bandwidth as PCIe 3.0 x16, and the RTX 2080 Ti, Nvidia's last flagship to use PCIe 3.0, ran just fine on a PCIe 3.0 x16 interface.
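To put numbers on that claim, here's a minimal sketch of the bandwidth math, based on the per-lane transfer rates and the 128b/130b line encoding defined by the PCIe spec (the PCIe 3.0 x8 case is the fallback scenario discussed next):

```python
# Approximate usable PCIe bandwidth per direction, computed from per-lane
# transfer rates and the 128b/130b line encoding (PCIe 3.0 and newer).
TRANSFER_RATE_GT_S = {"3.0": 8, "4.0": 16, "5.0": 32}  # GT/s per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b encoding overhead

def bandwidth_gb_s(gen: str, lanes: int) -> float:
    """Usable bandwidth in GB/s for a given PCIe generation and lane count."""
    return TRANSFER_RATE_GT_S[gen] * ENCODING_EFFICIENCY / 8 * lanes

for gen, lanes in (("3.0", 16), ("4.0", 8), ("3.0", 8)):
    print(f"PCIe {gen} x{lanes}: ~{bandwidth_gb_s(gen, lanes):.1f} GB/s")

# Prints (approximately):
#   PCIe 3.0 x16: ~15.8 GB/s
#   PCIe 4.0 x8: ~15.8 GB/s   <- identical to PCIe 3.0 x16
#   PCIe 3.0 x8: ~7.9 GB/s    <- half, the fallback on older platforms
```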

The only potential issue with PCIe 4.0 x8 is that older systems are limited to PCIe 3.0 speeds, which forces a PCIe 4.0 x8 GPU to fall back to PCIe 3.0 x8. As the figures above show, that is half the bandwidth of PCIe 3.0 x16 or PCIe 4.0 x8. As a result, we could see FPS reductions due to the PCIe bottleneck, but we can't be sure until we get our hands on Nvidia's RTX 4050 and RTX 4060.

Aaron Klotz
Freelance News Writer

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

  • AgentBirdnest
    That could really suck if true. :-\
    Wouldn't affect me, since I'm using a Zen 2 X570 platform. But some of my friends are using Comet Lake or Coffee Lake, which are limited to PCIe 3.0. Coffee Lake may be getting old, but its i7 CPUs are still very relevant... but they may be a bit screwed if they planned on upgrading to a mainstream Ada Lovelace card. Or maybe not... Hopefully 8 lanes in a PCIe 3.0 slot won't have too much of a performance loss. Maybe it'll be fine. And maybe it won't even happen.
    Wait'n'see...
  • -Fran-
    Out of all the things they could "copy" from AMD, this is the one they choose?!

    Le sigh.

    At least a ~50% performance increase gen-over-gen for the card (purported equivalent segment SKU?) is quite impressive. I wonder how the power figures look.

    Regards.
  • hotaru251
    If accurate, that would put the GeForce RTX 4060's performance between the RTX 3060 Ti and the RTX 3070.

    When the RTX 3060 was released, it generally outperformed the RTX 2060 Super and RTX 2070 by a few percentage points. The RTX 4060 would be doing the same thing here: substantially quicker than the RTX 3060, but performing similarly to the RTX 3060 Ti and RTX 3070.


    so somehow landing between a 3060 Ti and a base 3070 is the same thing as the 3060 beating the 2060 Super and the 2070?

    how does the 3060 beating a 2070 become the same as coming in between a 3060 Ti and a 3070?

    it's not beating the 3070 like the 3060 beat the 2070.

    also, the fact that Nvidia's pricing seems to be going up (they plan for the 30 and 40 series to coexist, and the prices... not a good sign) means the cost-to-performance is worse than the 30 series.
  • hannibal
    Well, PCIe 3.0 and older are dead meat to AMD and Nvidia, so why not move to a cheaper solution...

    Well, not nice to people who have a slightly older platform, but all in all, it makes more sense to buy a used GPU with a full x16 interface anyway for those older machines! Cheaper, more speed than you can get from the new cards, and no bandwidth problems.

    They really try to cut costs by any means possible...
  • Zescion
    hannibal said:
    Well, PCIe 3.0 and older are dead meat to AMD and Nvidia, so why not move to a cheaper solution...

    Well, not nice to people who have a slightly older platform, but all in all, it makes more sense to buy a used GPU with a full x16 interface anyway for those older machines! Cheaper, more speed than you can get from the new cards, and no bandwidth problems.

    They really try to cut costs by any means possible...
    Not just cost saving, but a smart business decision.
    They'll keep selling 30xx GPUs when the new cards are out, which will give people a good reason to buy the old cards.
  • thisisaname
    Sure, if it does not affect performance. I would hope they'd pass the cost savings on in the form of a lower price, but I'm not holding my breath on that happening.
  • InvalidError
    After all, PCIe 4.0 x8 offers the same bandwidth as PCIe 3.0 x16, and the RTX 2080 Ti, Nvidia's last flagship to use PCIe 3.0, ran just fine on a PCIe 3.0 x16 interface.
    As all of the 4GB x4 cards have demonstrated in the past, it is the LOW-END that gets hurt the most by truncated PCIe bandwidth, not the uber-high-end with ginormous VRAM. I expect this to get much worse if DirectStorage gains momentum, as low-end GPUs will have to rely far more heavily on PCIe for asset reloads from system memory than high-end ones.
  • Kamen Rider Blade
    PCIe x12 lane configurations get no love.

    Everybody only cares about PCIe x16 or x8; even x4 gets more love.

    Nobody wants to implement x12, despite the fact that it has been a valid lane configuration in the PCIe spec since day one.
  • InvalidError
    Kamen Rider Blade said:
    Nobody wants to implement x12, despite the fact that it's been part of the PCIe spec for lane configurations since day one.
    Probably because there aren't many actual use cases for it: there's no point in having x12 on PCs, since the GPU is the only thing that comes remotely close to needing x8. The most sensible ways of splitting CPU lanes when you don't want to throw the whole x16 at the GPU are either x8/x8 or x8/x4/x4, and modern server chips have so many PCIe lanes that the half-step compromise is unnecessary.
  • escksu
    I would actually love them to do this to all their cards!! Then bump it to PCIe 5.0... having 16 lanes of PCIe 5.0 for a graphics card is plain stupid. But x8 would be great, and that leaves another eight lanes for other things.