Nvidia confirms Nintendo Switch 2 SoC's AI, ray tracing features, keeps real specs to itself for now
Nvidia promises 10x graphics performance over the original Switch

While Nintendo made a splash at yesterday's formal unveiling of the Nintendo Switch 2, the company's hardware developers stayed quiet on the chip powering it. Now, Nvidia, which makes the custom system on a chip, has provided some more information in a blog post.
"Nintendo doesn't share too much on the hardware spec," Switch 2 technical director Tetsuya Sasaki said at a developer roundtable. "What we really like to focus on is the value we can provide to our consumers."
Nvidia is following Nintendo's lead, withholding information like core counts and speeds. Still, the company claims the new chip offers "10x the graphics performance of the Nintendo Switch."
Nvidia's RT cores enable hardware-accelerated ray tracing for lighting and reflections, while its Tensor cores power DLSS upscaling. DLSS is likely how the system reaches up to 4K output when docked, and how it helps hit up to 120 frames per second in handheld mode.
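Neither Nintendo nor Nvidia has said which DLSS preset (if any) the Switch 2 relies on, but the scaling factors Nvidia uses on PC give a rough sense of the internal resolutions involved. A minimal sketch, assuming the standard PC presets and a 3840x2160 docked output (all figures illustrative, not confirmed Switch 2 behavior):

```python
# Illustrative only: internal render resolutions implied by Nvidia's standard
# PC DLSS presets when upscaling to a 4K docked output. Neither Nintendo nor
# Nvidia has confirmed which preset (if any) the console uses.

DLSS_SCALE = {
    "Quality": 2 / 3,            # ~66.7% of output resolution per axis
    "Balanced": 0.58,            # ~58%
    "Performance": 0.50,         # 50%
    "Ultra Performance": 1 / 3,  # ~33.3%
}

def render_resolution(out_w: int, out_h: int, mode: str) -> tuple[int, int]:
    """Return the internal resolution DLSS would render before upscaling."""
    scale = DLSS_SCALE[mode]
    return round(out_w * scale), round(out_h * scale)

if __name__ == "__main__":
    for mode in DLSS_SCALE:
        w, h = render_resolution(3840, 2160, mode)
        print(f"{mode:>17}: {w}x{h} -> 3840x2160")
```

Performance mode at 4K, for example, would have the GPU rendering at 1920x1080 internally, which is far more plausible for a handheld-class chip than native 4K.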
The company also confirmed that the tensor cores allow for face tracking and background removal with AI, which was shown off with the new social GameChat feature as well as in Switch 2 games we went hands-on with, such as Super Mario Party Jamboree – Nintendo Switch 2 Edition + Jamboree TV. It isn't clear if this uses any of the same technology as Nvidia Broadcast on PC.
Additionally, Nvidia confirmed that the Switch 2's new variable refresh rate (VRR) display is powered by G-Sync in handheld mode, which should prevent screen tearing.
Nvidia also powered the original Nintendo Switch with a custom variant of the Tegra X1. Nintendo managed to get a lot of mileage out of that chip, which was already dated when the console launched in 2017; games are still coming out for it eight years later.
We'll see how much developers can squeeze from the new chip when the Switch 2 launches on June 5 for $449.99.
Andrew E. Freedman is a senior editor at Tom's Hardware focusing on laptops, desktops and gaming. He also keeps up with the latest news. A lover of all things gaming and tech, his previous work has shown up in Tom's Guide, Laptop Mag, Kotaku, PCMag and Complex, among others. Follow him on Threads @FreedmanAE and Mastodon @FreedmanAE.mastodon.social.
Notton: The rumored chip specs:
Chip name: T239 (Tegra/Ampere)
Process: 8nm (most likely Samsung)
CPU: 8-core Cortex-A78AE at 2.0 GHz
Instruction set: ARMv8.2-A 64-bit
RAM: 12GB LPDDR5X, 120GB/s
GPU: GA10B, 1536 CUDA cores, 2.36-2.82 TFLOPS (FP32?) (closest consumer counterpart is the RTX 2050)
The Nintendo Switch used the Tegra X1 (T210), which paired 4x Cortex-A57 + 4x Cortex-A53 CPU cores with a GM20B GPU (Maxwell, 256 cores).
So yeah, leaps and bounds coming from a GPU that's not even half of a GTX 750 Ti.
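For anyone sanity-checking those numbers: FP32 throughput for these GPUs works out to CUDA cores × 2 FLOPs per clock × clock speed, so the quoted 2.36-2.82 TFLOPS range implies clocks of roughly 770-920 MHz for a 1536-core part. A quick sketch (the T239 clocks here are back-calculated from the rumored figures, not confirmed specs):

```python
# Back-of-the-envelope FP32 throughput: CUDA cores * 2 FLOPs per clock * clock (GHz).
# The T239 clocks below are back-calculated from the rumored 2.36-2.82 TFLOPS range,
# not confirmed figures; the GM20B and GTX 750 Ti use their known clocks.

def fp32_tflops(cuda_cores: int, clock_ghz: float) -> float:
    return cuda_cores * 2 * clock_ghz / 1000.0

# Rumored T239 (Ampere, 1536 CUDA cores)
for clock in (0.768, 0.918):
    print(f"T239 @ {clock:.3f} GHz: {fp32_tflops(1536, clock):.2f} TFLOPS")

# Original Switch's GM20B (Maxwell, 256 cores) at its 768 MHz docked clock,
# versus a GTX 750 Ti (640 cores, ~1.02 GHz) for the "not even half" comparison.
print(f"GM20B @ 0.768 GHz:     {fp32_tflops(256, 0.768):.2f} TFLOPS")
print(f"GTX 750 Ti @ 1.02 GHz: {fp32_tflops(640, 1.02):.2f} TFLOPS")
```

That puts the original Switch's docked GPU at about 0.39 TFLOPS against roughly 1.3 TFLOPS for the GTX 750 Ti, which is where the "not even half" comparison comes from.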
DS426: How is RT in a handheld like this sensible? Battery life and temperatures matter far more here than in desktop PCs, or even gaming laptops, so I just don't see it as prudent; moreover, based on the rumored specs, every bit of die space should be spent on raster, not RT. Same regarding "AI". All for marketing.
DS426:
Notton said: "So yeah, leaps and bounds coming from a GPU that's not even half of a GTX 750 Ti."
I mean, it's a Nintendo product, so what do you expect?
ThisIsMe: Since this is coming from Nvidia, after taking out the DLSS AI stuff, it's likely closer to 3-4 times the performance on a good day. You never really know which definition of "performance" they're using at any given time.
thestryker: I find it interesting that they don't seem to have bothered to move it to a better node. This screams just using off-the-shelf silicon with no actual modification again. While this is hardly the end of the world, it wouldn't have taken much to make something much better suited to this use case.
oofdragon:
DS426 said: "How is RT in a handheld like this sensible? ... All for marketing."
It's not... it's just for the marketing and gimmicks.
Elusive Ruse said: "Can it match PS4 Pro? 🤣"
Nope... it's a potato again, although it does surpass the decade-old PS4 a little.
thestryker said: "I find it interesting that they don't seem to have bothered to move it to a better node. ..."
They definitely should have used a better node, but they know consumers are dumb and will pay for outdated crap.
Notton:
thestryker said: "I find it interesting that they don't seem to have bothered to move it to a better node. ..."
AFAIK, the chip has been in production for at least a year and a half.
Nintendo has been stockpiling the chip, presumably to polish the launch titles.
IDK if they bothered to produce a ton of complete systems, though.
thestryker:
Notton said: "AFAIK, the chip has been in production for at least a year and a half. ..."
Ampere existed on both TSMC N7 and Samsung 8nm, so the effort needed to make this on, say, N6 would have been minimal. There are also more advanced nodes available from Samsung, and given their struggles keeping their fabs active, they'd probably have cut a good deal.
Not changing the process node simply means they took the same approach as the Switch and used an already-in-production part. While this is undoubtedly the cheapest option up front, I have a hard time believing it will end up saving them anything over the lifetime of the product, let alone compared to having Nvidia build them an SoC with newer Arm cores, where they could have cut the core count and made the die even smaller.