Apple I replica can run ChatGPT — Macintosh Founding Father uses Wi-Fi module to turn relic into AI assistant

Apple 1 replica running ChatGPT.
(Image credit: Chris Skitch)

In a fresh demonstration of just how versatile some early computers can be, a replica of the Apple 1 showed up at the Vintage Computer Festival West, held at the Computer History Museum, upgraded with a modern Wi-Fi module and running ChatGPT. Better yet, it was put together by original Apple employee and Macintosh legend Daniel Kottke.

The Apple 1 was originally released in 1976 and featured a 1 MHz MOS 6502 processor, just 4KB of memory, and 256 bytes of onboard ROM. Targeting the hobbyist market, it was relatively expensive for its time at $666.66, equivalent to around $3,700 in today's money. It was short-lived, too: the Apple II replaced it just a year later, and only around 200 were ever sold.

That's probably why even an Apple II engineer like Kottke built his latest project around a replica, but it's still impressive. The replica hardware has just a handful of integrated circuits (as ChatGPT itself seems to recognize in the video above), but it's hooked up to a modern Wi-Fi module, giving the ancient-looking hardware a connection to the wider, connected modern world.
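
We don't have the specifics of this particular build, but a common pattern for retrofits like this is a small Wi-Fi board that sits on the machine's serial/terminal interface and simply ferries text back and forth. The sketch below is purely illustrative: the port name, baud rate, and the stand-in respond() function are assumptions, not details of Kottke's setup.

```python
# Illustrative sketch only: a serial bridge loop of the kind a Wi-Fi
# retrofit could run. The port, baud rate, and respond() stand-in are
# assumptions, not a description of Kottke's actual build.
import serial  # pyserial


def respond(prompt: str) -> str:
    # Stand-in for the cloud call (a fuller sketch of that part follows below).
    return f"YOU SAID: {prompt}"


def bridge(port: str = "/dev/ttyUSB0", baud: int = 2400) -> None:
    with serial.Serial(port, baud, timeout=1) as ser:
        while True:
            line = ser.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue  # nothing typed yet
            reply = respond(line)
            # The Apple 1's display is uppercase-only, so fold the reply.
            ser.write(reply.upper().encode("ascii", errors="ignore") + b"\r\n")


if __name__ == "__main__":
    bridge()
```

In a real retrofit the bridging firmware would run on the Wi-Fi module itself rather than on a separate computer, but the flow is the same: read a line of text, fetch a reply, write it back.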

Using that connection to hook the monochrome system up to cutting-edge AI is a true blending of two worlds: the old and the new. No wonder Kottke drew a small crowd of intrigued enthusiasts, even at this kind of niche event.

It also highlights one of the real strengths of modern cloud-connected computing. AI like ChatGPT is notoriously power- and performance-hungry, demanding huge clusters of expensive graphics hardware and fast storage to run at speed. But none of that needs to run locally, so even the slowest of systems can take advantage, as long as they can get online to send queries and receive text responses.
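
That split is easy to see in code. The minimal sketch below, assuming the widely used OpenAI chat completions REST endpoint and an example model name, is all the "client" side of such a demo really amounts to: send a short blob of text, get a short blob of text back, and let remote hardware do everything in between.

```python
# Minimal sketch of the thin-client idea: all the model compute runs in
# the cloud; the local machine only sends text and receives text.
# The endpoint, model name, and environment variable are examples,
# not details of this particular demo.
import os

import requests


def ask_cloud(prompt: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",  # example model name
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_cloud("Describe the Apple 1 in one sentence."))
```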

In the case of the Apple 1, ChatGPT, trained on tens of millions of dollars' worth of the most cutting-edge graphics hardware of 2025, could be put to work from a system replicating hardware from almost 50 years earlier. A single 1 MHz processor was enough to get Kottke a fast response to a query, despite bridging decades of development and hardware progression.

Although there's an argument to be made that AI on the edge, running entirely locally on individual devices, is much better for privacy and security, cloud response latency is now so low that AI can effectively be used on just about anything. And it probably will be on just about everything before too long, if the latest trends are anything to go by.

Thanks to Chris Skitch for the tip on this one.

Jon Martindale
Freelance Writer

Jon Martindale is a contributing writer for Tom's Hardware. For the past 20 years, he's been writing about PC components, emerging technologies, and the latest software advances. His deep and broad journalistic experience gives him unique insights into the most exciting technology trends of today and tomorrow.

  • usertests
    It was cooler when "thing displays text from ChatGPT" was done with a graphing calculator.

    Demoscene stuff pushing ancient hardware to its limits is way more interesting than Wi-Fi mods.
  • TerryLaze
    An original apple 1 replica, not one of those fake replicated replicas.......
  • cuvtixo
    TerryLaze said:
    An original apple 1 replica, not one of those fake replicated replicas.......
    "That's probably why even an Apple II engineer like Kottke built his latest project around a replica, but it's still impressive." eh, give 'em a break. What really excited me is the wood keyboard cabinet. It's pretty silly in the context, but I want one!
  • TerryLaze
    cuvtixo said:
    eh, give 'em a break.
    Absolutely not!
    They make money by writing, they should know basic grammar.
  • Spinachy
    The Apple 1 is not "running ChatGPT" - It is just sending a request to the cloud, and displaying the returned result. Fast hardware is not needed for this. This is not "AI on the Edge", it is just a client.
  • blueshark3d
    Vintage Apple looks quite nice ! ! !