Dev gambles on 'obviously fake' $8K Grace Hopper system, scores $80,000 worth of hardware on Reddit for one-tenth of the cost — buyer's haul includes 960GB of DDR5 RAM worth more than what he paid for the entire rig

Revamped Nvidia Grace Hopper platform bargain
(Image credit: David Noel Ng)

Would you like to run 235B-parameter LLMs at home, but your lowly $10,000 budget restricts you to “consumer GPUs that can barely handle 70B parameter models”? That was the situation developer David Noel Ng found himself in, until he stumbled across an “obviously fake” Nvidia Grace Hopper platform being sold on Reddit, of all places. Ng took the gamble and, according to his blog post, it paid off royally. With a bit of tinkering and fixing up, he has managed to get an enterprise system that would usually cost around $80,000 for a tenth of that sum. The included 960GB of LPDDR5X memory alone is now worth more than he paid for the full system. Hilariously, he even lowballed the seller: the original Reddit listing asked 10,000 EUR, and he offered just 7,000 EUR.

Why Ng made the offer on an ‘obviously fake’ listing

There were some underlying, but not insurmountable, issues with the Grace Hopper system as sold, which meant it wouldn’t be widely popular on a consumer marketplace. Specifically, it was “a Frankensystem converted from liquid-cooled to air-cooled” operation. It also looked a bit of a mess, wasn’t rackable, and ran using a 48V power supply.

On the other hand, even if this were just a collection of components, the offer seemed irresistible. The specs of the system, as sold, were as follows:

  • 2x Nvidia Grace Hopper Superchip
  • 2x 72-core Nvidia Grace CPU
  • 2x Nvidia Hopper H100 Tensor Core GPU
  • 2x 480GB of LPDDR5X memory with error-correction code (ECC)
  • 2x 96GB of HBM3 memory
  • 1,152GB of total fast-access memory
  • NVLink-C2C: 900 GB/s of bandwidth
  • Programmable from 1000W to 2000W TDP (CPU + GPU + memory)
  • 1x High-efficiency 3000W PSU 230V to 48V
  • 2x PCIe Gen4 M.2 22110/2280 slots on board
  • 4x FHFL PCIe Gen5 x16

A lot of cleaning was required (Image credit: David Noel Ng)

Naturally, the $80,000 value of the rig is probably a fairly modest estimate. Ng notes that the two H100 chips alone are "about 30-40,000 euro each."

Getting the Frankensystem working

A significant section of Ng’s blog post is devoted to receiving, cleaning, and building a new working cooling system for the Frankensystem. It makes for a fascinating read. Suffice it to say, the Nvidia Hopper system, with all its potential, arrived as a dusty, extremely noisy, very hot-running machine. And it was demonstrated as such before Ng took it home.

With care and patience, five liters of isopropyl alcohol, four cheap repurposed Arctic AIO liquid coolers, a pair of custom CNC-milled copper parts, a kilo (~2 pounds) of 3D-printed parts, microscope-assisted soldering, an LED lighting strip, and some know-how, Ng eventually triumphed. The finished, reassembled Grace Hopper system is pictured at the top.

Memory gold mine

Ng seems extremely happy with the finished system and its AI performance. He says he can now “run 235B parameter models at home for less than the cost of a single H100.” The cherry on the cake, though, is that since he bought the system, memory prices “have become insane,” meaning the 960GB of LPDDR5X in this system would now cost more than Ng paid for the whole caboodle.
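As a rough back-of-the-envelope illustration of why a 235B-parameter model fits on this machine (this is our sketch, not Ng’s calculation, and it ignores KV cache, activations, and framework overhead), the weights-only memory math works out like this:

```python
# Illustrative weights-only memory estimate for a 235B-parameter model.
# Ignores KV cache, activations, and runtime overhead.
PARAMS = 235e9  # 235B parameters

BYTES_PER_PARAM = {"fp16/bf16": 2, "fp8/int8": 1, "int4": 0.5}

# Per the listing: 2x 480GB LPDDR5X + 2x 96GB HBM3 = 1,152GB total
system_memory_gb = 2 * 480 + 2 * 96

for precision, nbytes in BYTES_PER_PARAM.items():
    weights_gb = PARAMS * nbytes / 1e9
    fits = weights_gb <= system_memory_gb
    print(f"{precision}: ~{weights_gb:.0f} GB of weights "
          f"({'fits' if fits else 'does not fit'} in {system_memory_gb} GB)")
```

Even at 16-bit precision, ~470GB of weights sits comfortably inside the 1,152GB of combined LPDDR5X and HBM3, which is far beyond what any consumer GPU setup at this price could hold.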


Mark Tyson
News Editor

Mark Tyson is a news editor at Tom's Hardware. He enjoys covering the full breadth of PC tech; from business and semiconductor design to products approaching the edge of reason.

  • edzieba
    Stomx said:
    > He had two GPUs with 480GB each on them.

    Two CPU+GPU modules, each with 96GB HBM onboard the GPU package, and 480GB offboard the module (on the motherboard). This isn't the same architecture as consumer devices use.
  • Stomx
    edzieba said:
    > Two CPU+GPU modules, each with 96GB HBM onboard the GPU package, and 480GB offboard the module (on the motherboard). This isn't the same architecture as consumer devices use.

    I deleted my first post by mistake above. Of course this is not a consumer device, but something is strange here. The 480GB of LPDDR5 was probably on the system block, run by regular processors feeding either 2x Nvidia H100 or 2x Grace-Hopper Superchips GB200. But there were only 2 HBM3 modules. Does that mean either the H100s or the GB200s were without HBM3?

    2x 96GB is not enough to run even full DeepSeek R1