Nvidia and partners could charge up to $3 million per Blackwell server cabinet — analysts project over $200 billion in revenue for Nvidia

Blackwell
(Image credit: Nvidia)

According to a report from Morgan Stanley cited by United Daily News, Nvidia and its partners will charge roughly $2 million to $3 million per AI server cabinet equipped with Nvidia's upcoming Blackwell GPUs. The industry will need tens of thousands of AI servers in 2025, and their aggregate cost will exceed $200 billion.

So far, Nvidia has introduced two 'reference' AI server cabinets based on its Blackwell architecture: the NVL36, equipped with 36 B200 GPUs, which is expected to start at $2 million ($1.8 million, according to earlier reports), and the NVL72, with 72 B200 GPUs, which is projected to start at $3 million.
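As a rough sanity check of the figures above (a sketch, not from the Morgan Stanley report): dividing the projected $200 billion aggregate spend by the quoted $2 million to $3 million per-cabinet price gives the implied number of cabinets.

```python
# Implied cabinet count from the cited figures.
# Assumptions: $200B total spend and the quoted $2M-$3M per-cabinet range.
total_spend = 200e9  # dollars

for price in (2e6, 3e6):  # per-cabinet price, dollars
    cabinets = total_spend / price
    print(f"${price / 1e6:.0f}M per cabinet -> ~{cabinets:,.0f} cabinets")
```

At $3 million per cabinet that works out to roughly 67,000 cabinets, and at $2 million to 100,000, consistent with the article's "tens of thousands" of AI servers.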

Demand for AI servers is setting records and will not slow down any time soon, which will benefit both makers of AI servers and developers of AI GPUs. Despite the influx of competitors, Nvidia's GPUs are set to remain the de facto standard for training and many inference workloads, which benefits the company.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • JasHod1
    Someone seriously needs to come along and offer some competition to Nvidia. When one company is basically in charge of all the AI data, it rings alarm bells. CUDA needs to be opened up to others, if not by Nvidia then by regulators.

    Other companies have been stung by regulators for far less, and it would also have the effect of, hopefully, driving down prices. This is a monopoly by any other name.
    Reply
  • edzieba
    The NVL-72 rack is pretty stuffed full of hardware with little in the way of 'wasted' support Us (no cable stuffing holes/patch panels/etc). $2m spread over 48 RUs is ~$42k per U, which you could hit with regular server hardware without trying too hard.
    Reply
  • Mindstab Thrull
    "The industry will need tens of thousands of AI servers in 2025, and their aggregate cost will exceed $200 billion." (In the first paragraph)

    Assuming this is true, is there any reason they need to be servers based on Blackwell? It feels to me very much like "it's the new shiny hotness" but I mean, there's previous gen solutions or options from other companies. And then there's the power usage, but that's a "sometime later" problem, right?
    Reply
  • robocop007
    JasHod1 said:
    Someone seriously needs to come along and offer some competition to Nvidia. When one company is basically in charge of all the AI data, it rings alarm bells. CUDA needs to be opened up to others, if not by Nvidia then by regulators.

    Other companies have been stung by regulators for far less, and it would also have the effect of, hopefully, driving down prices. This is a monopoly by any other name.
    The competition should be required to pay a percentage of the investment Nvidia made over decades to develop CUDA before getting access.

    What is your plan for monopoly of ASML and TSMC? Ask regulators to force them to share their trade secrets as well?
    Reply
  • robocop007
    Mindstab Thrull said:
    "The industry will need tens of thousands of AI servers in 2025, and their aggregate cost will exceed $200 billion." (In the first paragraph)

    Assuming this is true, is there any reason they need to be servers based on Blackwell? It feels to me very much like "it's the new shiny hotness" but I mean, there's previous gen solutions or options from other companies. And then there's the power usage, but that's a "sometime later" problem, right?
    This $200 billion is a projection based on demand for Nvidia's Blackwell servers alone. In total, the AI industry will spend more on servers.
    Reply
  • Scourge00165
    Mindstab Thrull said:
    "The industry will need tens of thousands of AI servers in 2025, and their aggregate cost will exceed $200 billion." (In the first paragraph)

    Assuming this is true, is there any reason they need to be servers based on Blackwell? It feels to me very much like "it's the new shiny hotness" but I mean, there's previous gen solutions or options from other companies. And then there's the power usage, but that's a "sometime later" problem, right?
    Yeah, it's not the "new shiny hotness," it's the massive computing power that companies need to grow their AI capabilities.

    And is there any reason to believe this will go to the corporation that controls 80-90% of the business, given the massive moat it's created through its overwhelming hardware advantage?

    No, it could go to Intel...that'd be like trying to get into NASCAR with a Ford Pinto, but a company is free to make that choice....

    As for the energy, did you just hear that question today? It's using LESS energy relative to its queries than the inferior products.

    So yeah, the reason would be the enormous 4-5 year advantage they have over every other company's chips.
    Reply