Made in the USA: Inside Nvidia's $500 billion server gambit

(Image credit: Nvidia)

This week, Nvidia and its partners Amkor, Foxconn, SPIL, TSMC, and Wistron announced plans to build $500 billion worth of AI hardware in the U.S. over the next four years. The plan covers the production of AI processors, their testing and packaging, as well as the assembly of complete AI servers. But while the announcement represents a plan to build half a trillion dollars' worth of AI hardware, it lacks detail, which casts doubt on whether it can be done. So, we decided to take a closer look.

Building a local AI supply chain in the U.S.

TSMC has already committed to invest $165 billion in its Fab 21 manufacturing site over an unspecified period, so it is safe to say that there is (and will be) advanced manufacturing capacity to build chips for Nvidia.

The 4nm-capable Fab 21 phase 1 is already ramping production; the 3nm-capable phase 2 is expected to commence mass production in 2028 (one to two years after Nvidia plans to ramp production of its 3nm-based Rubin GPUs in Taiwan); and the 2nm/1.6nm-capable phase 3 is projected to start high-volume manufacturing by the end of the decade.

When it comes to packaging, TSMC has committed to building two advanced testing and packaging facilities in the U.S.

Amkor is building an advanced $2 billion packaging facility that will feature 500,000 square feet (46,451 square meters) of cleanroom space when fully built and equipped. This week, SPIL also announced that it will build a packaging facility in the U.S., and based on Nvidia's press release, it will also feature 500,000 square feet (46,451 square meters) of cleanroom space. The company did not disclose planned investments, but it will likely be in the same ballpark as Amkor's plant.

To put the investments of Amkor and SPIL into context: TSMC's current advanced packaging facilities cost less than $2 billion each, and given the high demand for CoWoS and other advanced packaging methods, they cannot serve all customers.

However, two $2 billion OSAT plants will likely be enough for Apple's, AMD's, and Nvidia's products made in the USA. That said, keep in mind that Amkor's plant is scheduled to begin operations in 2027, and it is unclear when SPIL's factory will be ready.

In addition to chip production and packaging facilities, Nvidia's partners will also build AI server assembly plants in the U.S. Foxconn intends to build a factory in Houston, Texas, whereas Wistron intends to build a facility in Dallas, Texas. Both companies plan to begin construction shortly and expect to start making servers within 12 to 15 months.

Foxconn subsidiary Ingrasys has invested as much as $142 million to buy 349,000 square meters of land (three times the size of the Pentagon footprint) and a 93,000-square-meter facility (about the same size as a typical Amazon fulfillment center) near Houston, Texas, according to The Korea Post.

The plant seems appropriate for AI server assembly, though by Foxconn standards, it is hardly a big one. For example, the Foxconn Zhengzhou (aka iPhone City) site has 1.4 million square meters of factory space. It is also noteworthy that Foxconn is building what it calls the largest AI server assembly plant in Mexico, which is expected to cost $900 million and be ready in late 2025 or early 2026, according to Bloomberg.

The dimensions of Wistron's facility are unknown. It should be noted that Nvidia and its manufacturing partners plan to deploy Nvidia's Omniverse to simulate factory operations and optimize them, as well as use Isaac GR00T to develop automated robotics systems for these facilities. Given such advantages, it is reasonable to expect that the new plants will feature higher efficiency than already deployed factories.

What is $500 billion in AI equipment?

Without a doubt, $500 billion is an exorbitant amount of money. But what does that figure actually translate into in terms of AI hardware?

As a rule of thumb, AI GPUs account for half the cost of AI hardware, so Nvidia expects to produce $250 billion worth of AI GPUs and $250 billion worth of supporting hardware in the U.S.

An Nvidia DGX B200 server with eight B200 GPUs, two 56-core Intel Xeon 8570 processors, 2 TB of DDR5 memory, 30 TB of NVMe storage, six NVLink switches, eight Nvidia ConnectX-7 VPI cards, and software costs €593,000 without taxes ($670,000). $500 billion can get you over 746,000 DGX B200 servers. Nvidia's NVL72 rack with 72 B200 GPUs reportedly costs $3 million. For half a trillion dollars, you can get 166,667 NVL72 racks.
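As a back-of-the-envelope check of those unit counts (using the rounded prices quoted above, not official Nvidia pricing), the math works out as follows:

```python
# Rough unit counts implied by a $500 billion build-out,
# using the approximate prices quoted in the article.
TOTAL_BUDGET = 500e9       # $500 billion
DGX_B200_PRICE = 670_000   # ~$670,000 per 8-GPU DGX B200 server
NVL72_PRICE = 3e6          # ~$3 million per 72-GPU NVL72 rack

dgx_servers = TOTAL_BUDGET / DGX_B200_PRICE
nvl72_racks = TOTAL_BUDGET / NVL72_PRICE

print(f"DGX B200 servers: {dgx_servers:,.0f}")  # ~746,269
print(f"NVL72 racks:      {nvl72_racks:,.0f}")  # ~166,667
```

Either scenario assumes the whole budget goes to finished systems at list-style prices, so these are upper bounds rather than production targets.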

You also have to question whether the Foxconn and Wistron facilities in America (which will begin operation 12 to 15 months from now) can build 746,000 8-way DGX servers, or 166,667 racks with 72 GPUs, over the following three years.

To do so, they will have to build 249,000 8-way DGX servers per year (682 machines per day), or 55,500 AI racks per year (152 racks per day), which is a lot. Global shipments of AI servers totaled around 639,000 units in 2024, according to DigiTimes Research, and the value of AI servers reached $205 billion last year, according to TrendForce.
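The required throughput can be sanity-checked the same way (again using the article's rounded totals and a three-year production window as assumptions):

```python
# Required assembly throughput if the new U.S. plants must deliver
# the full volume over roughly three years of operation.
YEARS = 3
DAYS_PER_YEAR = 365
GLOBAL_2024_SHIPMENTS = 639_000  # AI servers shipped worldwide in 2024 (DigiTimes)

dgx_total = 746_000     # 8-way DGX servers, from the budget math above
racks_total = 166_667   # NVL72 racks, alternative scenario

dgx_per_year = dgx_total / YEARS
racks_per_year = racks_total / YEARS

print(f"DGX servers: {dgx_per_year:,.0f}/year, {dgx_per_year / DAYS_PER_YEAR:,.0f}/day")
print(f"NVL72 racks: {racks_per_year:,.0f}/year, {racks_per_year / DAYS_PER_YEAR:,.0f}/day")
print(f"Share of 2024 global shipments: {dgx_per_year / GLOBAL_2024_SHIPMENTS:.0%}")
```

The last line shows where the "around 40% of the global 2024 AI server supply" figure in the next paragraph comes from.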

Building around 40% of the global 2024 AI server supply in two facilities (enhanced with Omniverse and advanced robots) is an ambitious plan. Foxconn and Wistron are known to design their facilities to run dozens of parallel lines, and a 100,000-square-meter facility can accommodate dozens of specialized and highly automated production lines, so they may well be able to produce hundreds of thousands of AI servers yearly.

But is it possible for Nvidia to produce $250 billion worth of datacenter products (including GPUs, CPUs, and networking gear) in the U.S. by 2029? To meet that ambitious goal in the next four years, Nvidia and its partners will have to produce $62.5 billion worth of chips every year in America. Nvidia's datacenter revenue was $115 billion in FY2025, so if it somehow shifts production of roughly 55% of its server products to the U.S., fabbing $62.5 billion worth of chips per annum is likely achievable.

However, considering that TSMC's Fab 21 phase 2 is set to start making 3nm Rubin GPUs in 2028, whereas Amkor's advanced packaging facility is on track to start operations in 2027, we can only wonder whether Nvidia can indeed shift 55% of its datacenter production to the U.S. in 2026–2027.

While the goal to produce $500 billion of AI hardware in the U.S. by 2029 may be too ambitious a project, Nvidia and its partners will likely produce hundreds of billions of dollars worth of AI hardware over the next four years.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.

  • phead128
    Hype much?

    $500 Billion StarGate here, $500 billion Nvidia AI data centers here. Not including Meta's $100 billion, Microsoft's $80 billion...

    That's after DeepSeek has open-sourced models and optimization techniques that require less compute to train models of equivalent performance.

    This looks more like valuation hype, as Nvidia stock has been absolutely hammered since the DeepSeek moment.