Acer unveils Project Digits-based mini supercomputer featuring Nvidia's GB10 Superchip with 128GB of LPDDR5X

Acer GN100 AI Mini Workstation
(Image credit: Acer)

Acer has unveiled its own version of Nvidia's Project Digits mini-supercomputer, the Acer Veriton GN100 AI Mini Workstation, which is geared toward developers, universities, data scientists, and researchers who need a compact and high-speed AI system. North American pricing starts at $3,999.

The Veriton GN100 is a compact mini-PC (measuring 150 x 150 x 50.5mm) housed in a black chassis with a silver grille on the front. The system features Nvidia's GB10 Grace Blackwell Superchip, which pairs 20 Arm CPU cores (10 Cortex-X925 and 10 Cortex-A725) with a Blackwell-based GPU delivering one petaFLOP of FP4 compute. The GB10 Superchip is fed by 128GB of LPDDR5X memory, and the system can house up to 4TB of self-encrypting M.2 NVMe storage.

Some might argue that an RTX 5090-powered gaming/workstation build would be the better buy, and on the surface that's a reasonable position. But the GB10 offers 128GB of unified system memory, four times the 32GB on an RTX 5090, along with native support for Nvidia's proprietary NVFP4 format, two important factors for dedicated AI work. The extra memory lets users run AI models that simply won't fit on a single RTX 5090, and NVFP4 is a new FP4 data format that can significantly improve processing efficiency in AI workloads, with accuracy approaching BF16.
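To put the memory argument in concrete terms, here is a rough, illustrative sketch of how parameter count maps to weight memory at 4-bit precision. The model sizes and the simple bytes-per-parameter math are assumptions for illustration only, and real deployments also need headroom for the KV cache and activations.

```python
# Back-of-the-envelope weight-memory estimate for 4-bit quantized models.
# Model sizes and thresholds are illustrative assumptions, not vendor figures.

def weight_memory_gb(params_billion: float, bits_per_weight: float = 4.0) -> float:
    """Approximate memory needed for model weights alone, in GB."""
    bytes_per_weight = bits_per_weight / 8
    # 1e9 parameters times bytes-per-weight, divided by 1e9 bytes per GB,
    # simplifies to params_billion * bytes_per_weight.
    return params_billion * bytes_per_weight

for params in (8, 32, 70, 200):
    gb = weight_memory_gb(params)
    note_32 = "fits in 32GB" if gb < 32 else "exceeds 32GB"
    note_128 = "fits in 128GB" if gb < 128 else "exceeds 128GB"
    print(f"{params:>4}B params @ 4-bit ~= {gb:6.1f} GB ({note_32}, {note_128})")
```

By this rough math, a 70B-parameter model quantized to 4 bits already exceeds the 32GB on a single RTX 5090 but sits comfortably inside the GN100's 128GB, even before accounting for context memory.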

This makes Nvidia's Project Digits architecture much more attractive for dedicated, professional AI developers. As of this writing, Acer has yet to announce an exact release date for the Veriton GN100, though it has said that availability will vary by region.


Aaron Klotz
Contributing Writer

Aaron Klotz is a contributing writer for Tom’s Hardware, covering news related to computer hardware such as CPUs and graphics cards.

  • kealii123
    My M4 MacBook from work, even with all the IT bloatware on it, is as fast as my 4090 laptop when running smaller models (tok/sec), but has triple the RAM. For coding, are all the Nvidia toolkits and accelerators worth anything? For $4K I can get a 128 gig MacBook
    Reply
  • checksinthemail
    Superkewl, give us project 50900 with a couple 5090s and 320gb gddr7, because the bandwidth to cpu for ddr5 is crap in comparison
    Reply
  • bit_user
    checksinthemail said:
    the bandwidth to cpu for ddr5 is crap in comparison
    But Apple managed to scale up LPDDR5X to comparable levels, simply by adding more channels (which also leads to higher capacity). The top-end M-series processors use a 1024-bit memory interface, which can equal GDDR6 at 384-bit. It's true that GDDR7 ups the ante, but you won't get enough compute to exploit that bandwidth in a processor like what these Digits boxes have.

    In Digits, Nvidia played it fairly safe with a 256-bit data bus. The next gen will undoubtedly use higher-bandwidth memory and probably go to 384 or 512-bit.

    The main problem with GDDR memory is that it's relatively low-density. Also, quite energy-intensive. That's why Nvidia didn't use it here, and I don't foresee that changing. I'm sure price is another issue.
    Reply
  • thaddeusk
    An important note is that the 5090 also supports NVFP4 and has about 3.3x the raw AI performance of the DGX. You can fit a pretty large LLM into 32gb if you're using a 4-bit quant and that extra performance is great for diffusion models.
    Reply
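For readers who want to sanity-check the bandwidth comparison raised in the comments above, here is a rough illustrative calculation. The per-pin data rates below are assumed round numbers chosen for comparison, not exact product specifications.

```python
# Theoretical peak memory bandwidth: bus width (bits) x per-pin data rate (GT/s) / 8.
# The data rates here are assumed round figures for illustration, not official specs.

def peak_bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s for a given bus width and per-pin data rate."""
    return bus_width_bits * data_rate_gtps / 8

configs = {
    "256-bit LPDDR5X @ ~8 GT/s (GB10-class)": (256, 8.0),
    "1024-bit LPDDR5X @ ~8 GT/s (top M-series-class)": (1024, 8.0),
    "384-bit GDDR6 @ ~20 GT/s": (384, 20.0),
    "512-bit GDDR7 @ ~28 GT/s (RTX 5090-class)": (512, 28.0),
}

for name, (width, rate) in configs.items():
    print(f"{name}: ~{peak_bandwidth_gbps(width, rate):.0f} GB/s")
```

By this arithmetic, a 1024-bit LPDDR5X interface lands in roughly the same ballpark as 384-bit GDDR6, while a 256-bit LPDDR5X design like the GB10's sits well below either, which is the trade-off the commenters are debating.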