Alleged Nvidia AD106 and AD107 GPU Pics, Specs, Die Sizes Revealed

(Image credit: Moore's Law Is Dead)

Pictures and specifications of Nvidia's upcoming AD106 and AD107 graphics processors have surfaced. Destined for the company's GeForce RTX 40-series solutions for desktops and laptops, the chips are small but may offer mighty performance.

This week Moore's Law Is Dead (YouTube) obtained images of Nvidia's upcoming AD106 and AD107 GPUs, while TechPowerUp published detailed specifications, including die sizes and transistor counts. The AD106 and AD107 essentially flesh out the company's Ada Lovelace family, complementing the already-known AD102, AD103, and AD104 graphics processors.

Nvidia Ada Specifications vs. Ampere

| GPU | RTX 4080 Laptop | RTX 4070 Laptop | RTX 4060 Laptop | RTX 4050 Laptop |
| --- | --- | --- | --- | --- |
| Architecture | AD104 | AD106 | AD107 | AD107 |
| Process Technology | TSMC 4N | TSMC 4N | TSMC 4N | TSMC 4N |
| Transistors (Billion) | 35.8 | ? | ? | ? |
| Die Size (mm^2) | 294.5 | ~190 | ~146 | ~146 |
| Streaming Multiprocessors | 60 | 36 | 24 | 20 |
| GPU Cores (Shaders) | 7680 | 4608 | 3072 | 2560 |
| Tensor Cores | 240 | 144 | 96 | 80 |
| Ray Tracing Cores | 60 | 36 | 24 | 20 |
| TMUs | 240 | 144 | 96 | 80 |
| ROPs | 80 | 48 | 32 | 32 |
| L2 Cache (MB) | 48 | 32 | 32 | 12 |
| Memory Interface (bit) | 192 | 128 | 128 | 96 |
| Memory Speed (GT/s) | 21 | 16 | 16 | 16 |
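
The memory columns translate directly into peak bandwidth: bus width divided by eight, multiplied by the per-pin data rate. Here is a minimal Python sketch of that arithmetic, using only the rumored figures from the table above:

```python
# Peak memory bandwidth (GB/s) = bus width (bits) / 8 * per-pin data rate (GT/s).
# The figures below are the rumored laptop configurations from the table above.
rumored_gpus = {
    "RTX 4080 Laptop (AD104)": {"bus_bits": 192, "data_rate_gtps": 21},
    "RTX 4070 Laptop (AD106)": {"bus_bits": 128, "data_rate_gtps": 16},
    "RTX 4060 Laptop (AD107)": {"bus_bits": 128, "data_rate_gtps": 16},
    "RTX 4050 Laptop (AD107)": {"bus_bits": 96,  "data_rate_gtps": 16},
}

for name, cfg in rumored_gpus.items():
    bandwidth_gb_s = cfg["bus_bits"] / 8 * cfg["data_rate_gtps"]
    print(f"{name}: {bandwidth_gb_s:.0f} GB/s peak memory bandwidth")

# Prints (if the leaked figures hold):
#   RTX 4080 Laptop (AD104): 504 GB/s peak memory bandwidth
#   RTX 4070 Laptop (AD106): 256 GB/s peak memory bandwidth
#   RTX 4060 Laptop (AD107): 256 GB/s peak memory bandwidth
#   RTX 4050 Laptop (AD107): 192 GB/s peak memory bandwidth
```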

Regarding die sizes, all Ada Lovelace GPUs are smaller than their Ampere counterparts. This is expected, as Nvidia's latest graphics processors are made on TSMC's 4N (4nm-class) fabrication technology, whereas its previous-generation chips use Samsung's 8LPP, an 8nm-class node derived from 10nm-class technology. On the one hand, smaller dies cut costs, which is essential. On the other hand, it gets trickier for chip designers to squeeze in all the necessary interfaces (such as memory and display outputs) while still hitting their performance targets.

Since TSMC charges more for chips produced on its 4N technology than Samsung charges for GPUs made on its 8LPP node (which is in line with industry trends), we can only guess whether the smaller Ada Lovelace GPUs are cheaper to produce than the bigger Ampere GPUs. Yet, as yields improve, the cost of GeForce RTX 40-series processors will inevitably decrease.
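
To see why the answer isn't obvious, a rough sketch using the classic dies-per-wafer approximation and the die sizes from the table above helps: a smaller die yields far more candidates per 300 mm wafer, which can offset a pricier wafer. The wafer price below is a made-up placeholder, not an actual TSMC or Samsung quote, and yield loss is ignored:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """First-order dies-per-wafer estimate (ignores defects, scribe lines, yield)."""
    radius = wafer_diameter_mm / 2
    return int(math.pi * radius**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

# Die areas from the table above; the wafer price is purely illustrative.
HYPOTHETICAL_WAFER_PRICE = 17_000  # USD per 300 mm wafer -- placeholder, not a real quote

for die, area_mm2 in {"AD104": 294.5, "AD106": 190.0, "AD107": 146.0}.items():
    dpw = dies_per_wafer(area_mm2)
    print(f"{die}: ~{dpw} candidate dies per wafer, "
          f"~${HYPOTHETICAL_WAFER_PRICE / dpw:.0f} per die before yield loss")
```

Running the same arithmetic for the larger Ampere dies on Samsung's cheaper 8LPP wafers is exactly the comparison that remains a matter of guesswork.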

When Nvidia introduced its GeForce RTX 40-series laptop graphics processors earlier this year, it said that the entry-level AD107 and mid-range AD106 mobile Ada Lovelace GPUs would find their way into laptops starting at $999 and $1,500, respectively. A $999 gaming notebook with a discrete GeForce RTX 40-series GPU sounds compelling, but the question is what else such a machine offers and how widespread such laptops will be.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • bigdragon
    Good to see the full specifications! I'm concerned there's going to be a performance regression given the reduction in CUDA cores and memory interface. Some early leaks are showing RTX 3070, 3060, and 3050 chips outperforming RTX 4070, 4060, and 4050 chips, respectively. I don't see how Nvidia could justify another price increase when generational performance declines.
    Reply
  • InvalidError
    bigdragon said:
    I don't see how Nvidia could justify another price increase when generational performance declines.
    Easy: "because we can."

    Not like AMD and Intel are a credible threat to Nvidia's stranglehold on the graphics market yet.
    Reply
  • helper800
    InvalidError said:
    Easy: "because we can."

    Not like AMD and Intel are a credible threat to Nvidia's stranglehold on the graphics market yet.
    The dark ages of tech are higher costs and lower performance compared to same-tier previous-gen parts. Are we entering the dark age of laptop GPU performance?
    Reply
  • -Fran-
    Here in the UK, the 4090 laptops are north of £4000, with some being over £5000. It's ludicrous.

    Given the price structure, this whole generation is going to be "top heavy," for sure. That $1,000 base price is going to be so barebones that it's going to suck for whoever buys such a laptop, I'd say.

    I really hope AMD can put up a fight in the mid and lower segments with Navi 32 and Navi 33. I am not sure if Intel will even try to compete with any Arc models in laptops, but I sure hope they do once OEMs have enough confidence in their drivers. As a side note (speculative): I'd imagine that's why no major OEM has put an Arc GPU in a laptop yet? I only know of Samsung having, like, one model, and that's it.

    Regards.
    Reply
  • renz496
    bigdragon said:
    Good to see the full specifications! I'm concerned there's going to be a performance regression given the reduction in CUDA cores and memory interface. Some early leaks are showing RTX 3070, 3060, and 3050 chips outperforming RTX 4070, 4060, and 4050 chips, respectively. I don't see how Nvidia could justify another price increase when generational performance declines.

    Never saw this rumor. The 4070 Ti is already at 3090 level except at 4K. Somehow the 4070 will be slower than the 3070?
    Reply
  • InvalidError
    renz496 said:
    Never saw this rumor. The 4070 Ti is already at 3090 level except at 4K. Somehow the 4070 will be slower than the 3070?
    With Nvidia (and AMD) slotting in smaller dies with narrower memory buses, and in many cases less memory, for the same marketing tier as the previous gen, there are bound to be cases where the new die performs worse than the one it is supposed to replace. I can definitely see why people are getting nervous about where this is going.
    Reply
  • bigdragon
    renz496 said:
    Never saw this rumor. The 4070 Ti is already at 3090 level except at 4K. Somehow the 4070 will be slower than the 3070?
    I wasn't clear that I meant laptops and not desktops. My bad. The performance-regression rumors I've been reading are specific to laptop GPUs. VideoCardz and random Chinese resellers on Twitter have posted disappointing benchmarks for the 40-series mobile chips, showing little generational improvement at 1080p. The potential exists for a regression at 1440p. I'm not following the desktop equivalents.
    Reply
  • edzieba
    InvalidError said:
    With Nvidia (and AMD) slotting in smaller dies with narrower memory buses, and in many cases less memory, for the same marketing tier as the previous gen, there are bound to be cases where the new die performs worse than the one it is supposed to replace. I can definitely see why people are getting nervous about where this is going.
    There may be memory-limited edge cases for GPU compute, but I cannot recall any being found for gaming yet. E.g., the 4070 Ti continues to perform around the 3090 level (usually between the 3090 and 3090 Ti) despite having HALF the memory capacity, bus width, and bandwidth. The Ada architecture is clearly a lot less sensitive to memory performance than previous architectures.
    Reply
  • InvalidError
    edzieba said:
    There may be memory-limited edge cases for GPU compute, but I cannot recall any being found for gaming yet.
    Today's games, maybe, since they do have to target 4+ year-old, sub-$400 MSRP hardware to get a decent-sized audience capable of running them decently well.

    A few more years down the line, though, when games at near-max settings may commonly use more than 12GB of assets and buffers, the 3090 could age much better than the 4070 Ti.
    Reply
  • edzieba
    InvalidError said:
    Today's games, maybe, since they do have to target 4+ year-old, sub-$400 MSRP hardware to get a decent-sized audience capable of running them decently well.

    A few more years down the line, though, when games at near-max settings may commonly use more than 12GB of assets and buffers, the 3090 could age much better than the 4070 Ti.
    It's going to be many, many years before developers can start assuming more than 4/8GB as a GPU capacity baseline (i.e., when 12GB+ becomes the 'cheap card' baseline, which is not going to happen any time soon). And at the same time, NVMe drive ubiquity can also be assumed, so asset streaming comes along to alleviate that potential bottleneck.
    Reply