TinyBox AI accelerator now on sale starting at $15,000, available in AMD Radeon RX 7900 XTX and Nvidia RTX 4090 variants

TinyBox AI accelerator on a pallet
(Image credit: tinygrad)

The TinyBox AI accelerator is now on sale on the tinygrad website starting at $15,000. This compact 16.25-inch-deep, 12U freestanding unit packs six AMD Radeon RX 7900 XTX or Nvidia GeForce RTX 4090 GPUs for deep learning and AI acceleration workloads. While it is far more expensive than a consumer gaming desktop PC, it’s much more affordable than a single Nvidia H100 or AMD MI300X accelerator, which sell for between $10,000 and $40,000 apiece.

tinygrad founder George Hotz announced the TinyBox’s availability on X, saying the company had 13 units in stock as of August 27. He also claims it offers “the best perf/$ ML box in the world” and is fully networkable. Although the company struggled with the AMD GPUs used in its systems, it eventually found a workaround. It has since also added the option of Nvidia GPUs to sidestep AMD’s driver instability, albeit at a 67% premium.

| Spec | red | green |
| --- | --- | --- |
| TFLOPS | 738 FP16 | 991 FP16 |
| GPU model | 6x RX 7900XTX | 6x RTX 4090 |
| GPU RAM | 144 GB | 144 GB |
| GPU RAM bandwidth | 5,760 GB/s | 6,050 GB/s |
| GPU link bandwidth | 6x PCIe 4.0 x16 (64 GB/s) | 6x PCIe 4.0 x16 (64 GB/s) |
| CPU | 32-core AMD EPYC | 32-core AMD EPYC |
| System RAM | 128 GB | 128 GB |
| System RAM bandwidth | 204.8 GB/s | 204.8 GB/s |
| Disk size | 4 TB RAID array + 1 TB boot | 4 TB RAID array + 1 TB boot |
| Disk read bandwidth | 28.7 GB/s | 28.7 GB/s |
| Networking | 2x 1 GbE + open OCP 3.0 slot (up to 200 GbE) | 2x 1 GbE + open OCP 3.0 slot (up to 200 GbE) |
| Noise | <50 dB, 31 low-speed fans | <50 dB, 31 low-speed fans |
| Power supply | 2x 1600W, 100~240V | 2x 1600W, 100~240V |
| BMC | AST2500 | AST2500 |
| Operating system | Ubuntu 22.04 | Ubuntu 22.04 |
| Dimensions | 12U, 16.25 inches deep, 90 lbs | 12U, 16.25 inches deep, 90 lbs |
| Rack mount | Freestanding or rack mount | Freestanding or rack mount |
| Driver quality | Acceptable | Great |
| Price | $15,000 | $25,000 |
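Hotz’s “best perf/$” claim is easy to sanity-check against the spec sheet above. A minimal sketch using only the published FP16 TFLOPS figures and list prices (peak TFLOPS is a rough proxy; real-world training throughput will differ):

```python
# Perf-per-dollar comparison from tinygrad's published TinyBox spec sheet.
# Figures are peak FP16 TFLOPS and list prices, not measured throughput.
systems = {
    "red (6x RX 7900XTX)": {"fp16_tflops": 738, "price_usd": 15_000},
    "green (6x RTX 4090)": {"fp16_tflops": 991, "price_usd": 25_000},
}

for name, spec in systems.items():
    per_1k = spec["fp16_tflops"] / spec["price_usd"] * 1_000
    print(f"{name}: {per_1k:.1f} FP16 TFLOPS per $1,000")
```

On paper, the red box delivers about 49.2 FP16 TFLOPS per $1,000 versus roughly 39.6 for the green box, which explains why tinygrad pushes the AMD variant on price-performance despite the rougher drivers.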

Despite these seemingly high prices, the company reportedly already has 583 pre-orders for the system. And with the drama over AMD’s GPUs behind it, it looks like tinygrad is ready to go full steam ahead with sales of its TinyBox AI accelerators. Buyers can choose between the more affordable AMD system and the Nvidia system. However, as tinygrad put it in an earlier tweet: “If you like to tinker and feel pain, buy red. The driver still crashes the GPU and hangs sometimes, but we can work together to improve it.”

We’d like to believe that tinygrad has finally solved its issues with the AMD GPUs, and that’s why it is now making the system available to the market. Given Nvidia’s popularity in the AI accelerator space, it also makes sense for the company to offer that option. It’s unfortunate, though, that the Intel Arc-powered TinyBox remains a prototype with no plans to ship. We hope the company changes its mind in the future and starts shipping ‘blue’ models of the TinyBox as well.

Jowi Morales
Contributing Writer

Jowi Morales is a tech enthusiast with years of experience working in the industry. He has been writing for several tech publications since 2021, covering tech hardware and consumer electronics.

  • vanadiel007
    Hopefully this will not take off well, because otherwise we will have another shortage of consumer GPU's.
    Reply
  • derekullo
    vanadiel007 said:
    Hopefully this will not take off well, because otherwise we will have another shortage of consumer GPU's.
    A 4090 is barely a consumer gpu lol.
    Reply
  • bit_user
    derekullo said:
    A 4090 is barely a consumer gpu lol.
    One thing is fab capacity and GDDR supply. Nvidia recently had to downgrade some of their GPUs from GDDR6X to regular GDDR6, due to shortages of the former.

    Another thing is that people using consumer GPUs for AI training will just move on to the next generation of consumer GPUs, when they launch. That could push up prices and shift demand down the product stack, resulting in higher prices even on lower-end models.

    I'm not saying any of this will happen, but I think they're not impossible outcomes. Then again, people have been doing multi-GPU training with Nvidia consumer GPUs for like a decade, so this isn't really anything terribly new or novel (apart from any clever algorithms Tiny might've devised).
    Reply
  • NinoPino
    vanadiel007 said:
    Hopefully this will not take off well, because otherwise we will have another shortage of consumer GPU's.
    By price and power usage those GPUs cannot be defined consumer.
    Reply
  • derekullo
    NinoPino said:
    By price and power usage those GPUs cannot be defined consumer.
    If anything a Geforce 4090 would be a prosumer gpu.
    Reply
  • Amdlova
    16k = pain
    25k = great. Everything works...

    I can't decide what is right
    Reply
  • Samlebon2306
    Very expensive dehumidifier.
    Reply
  • DS426
    NinoPino said:
    By price and power usage those GPUs cannot be defined consumer.
    Ok but that's not how consumer-class products work. It's also about support levels and some other assurances.

Also, the price class of the 7900 XTX is completely different than the 4090's, given that, based on actual market prices, the 4090 really is out on its own at a cost of more than 50% higher than the 7900 XTX. Sure, they're both "prosumer" class products, but that still falls in the consumer category and not professional or datacenter.

We're kind of arguing semantics here, but the basics remain the same: you're getting one of the most affordable AI training servers for the performance given. Thankfully, there are alternatives to nVidia's AI stranglehold, which the pricing really reflects. Now that clever folks will be tinkering with the AMD models and both sides are working to improve driver quality, it seems to me that both options are very compelling, albeit for somewhat different reasons (price-perf vs. all-out performance and "it just works").
    Reply