Legendary GPU architect Raja Koduri's new startup leverages RISC-V and targets CUDA workloads — Oxmiq Labs supports running Python-based CUDA applications unmodified on non-Nvidia hardware
Another startup developing GPUs that are not meant for graphics emerges from stealth mode.

Raja Koduri, the legendary GPU architect who worked at ATI Technologies, AMD, Apple, and Intel, said on Tuesday that his new startup, Oxmiq Labs, has emerged from stealth mode. The company focuses on developing GPU hardware and software IP and licensing it to interested parties. In fact, software may be the core part of Oxmiq's business, as it is designed to be compatible with third-party hardware.
Another RISC-V-based 'GPU' for AI
Oxmiq develops a vertically integrated platform that combines GPU hardware IP with a full-featured software stack aimed at AI, graphics, and multimodal workloads where explicitly parallel processing is beneficial. On the hardware side, Oxmiq offers a GPU IP core called OxCore, which is based on the RISC-V instruction set architecture (ISA), integrates scalar, vector, and tensor compute engines in a single modular architecture, and can support near-memory and in-memory compute capabilities.
Oxmiq also offers OxQuilt, a chiplet-based system-on-chip (SoC) builder that lets customers rapidly and cost-efficiently assemble SoCs from compute cluster bridge (CCB, which presumably integrates OxCores), memory cluster bridge (MCB), and interconnect cluster bridge (ICB) modules, based on specific workload requirements. For example, an inference accelerator for edge applications might pack a single CCB and one or two ICBs, a larger inference SoC requires more CCBs, MCBs, and ICBs, whereas a large-scale SoC for AI training can combine dozens of chiplets. Oxmiq does not disclose whether OxQuilt can only build multi-chiplet system-in-packages (SiPs) or is designed to assemble monolithic processors too.
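As a rough way to picture the OxQuilt idea, the sketch below encodes the article's three example configurations as simple module-count recipes. The counts and the Python representation are purely illustrative; Oxmiq has not published any such figures.

```python
# Illustrative only: the article's example OxQuilt configurations expressed as
# module-count recipes (CCB = compute cluster bridge, MCB = memory cluster bridge,
# ICB = interconnect cluster bridge). None of these numbers come from Oxmiq.
EXAMPLE_CONFIGS = {
    "edge-inference": {"CCB": 1, "MCB": 0, "ICB": 2},    # small edge accelerator
    "inference-soc":  {"CCB": 4, "MCB": 2, "ICB": 4},    # larger inference SoC
    "training-soc":   {"CCB": 16, "MCB": 8, "ICB": 8},   # "dozens of chiplets"
}

def total_chiplets(config: dict[str, int]) -> int:
    """Sum the chiplet count for a given recipe."""
    return sum(config.values())

for name, cfg in EXAMPLE_CONFIGS.items():
    print(f"{name}: {cfg} -> {total_chiplets(cfg)} chiplets")
```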
Software is the key
Oxmiq's software stack is perhaps an even more important product than its hardware IP. The package is designed to abstract the complexity of heterogeneous hardware and enable deployment of AI and graphics workloads across a range of hardware platforms, not just those built on the company's IP. At its core is OXCapsule, a unified runtime and scheduling layer that manages workload distribution, resource balancing, and hardware abstraction. The layer encapsulates applications into self-contained environments, which the company calls 'heterogeneous containers.' These containers are designed to operate independently of the underlying hardware, enabling developers to target CPUs, GPUs, and AI accelerators without modifying their codebase or dealing with low-level configuration.
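Oxmiq has not published an OXCapsule API, so the following is only a minimal Python sketch of the dispatch pattern described above; every name in it (Workload, select_backend, run_workload) is hypothetical and invented for illustration.

```python
# Hypothetical sketch of the "write once, dispatch anywhere" pattern a runtime
# like OXCapsule is described as providing. All names here are invented.
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    batch_size: int

def select_backend(available: list[str]) -> str:
    """Pick a device from whatever the host exposes (simplified heuristic)."""
    for preferred in ("ai-accelerator", "gpu", "cpu"):
        if preferred in available:
            return preferred
    return "cpu"

def run_workload(w: Workload, available: list[str]) -> None:
    backend = select_backend(available)
    # A real heterogeneous-container runtime would package the workload with its
    # dependencies and dispatch it; here we only print the placement decision.
    print(f"Dispatching {w.name} (batch={w.batch_size}) to {backend}")

run_workload(Workload("resnet50-inference", 32), ["cpu", "gpu"])
```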
A standout component of this stack is OXPython, a compatibility layer that translates CUDA-centric workloads into Oxmiq's runtime and allows Python-based CUDA applications to run unmodified on non-Nvidia hardware without recompilation. OXPython will first launch not on Oxmiq's IP, but on Tenstorrent's Wormhole and Blackhole AI accelerators. In fact, Oxmiq's software stack is fundamentally designed to be independent from Oxmiq hardware, and that is a core part of its strategy.
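To illustrate the class of code OXPython claims to support, here is an ordinary GPU workload written with PyTorch, chosen purely as a representative example; Oxmiq has not said which frameworks are covered. On Nvidia hardware the "cuda" device string routes the work through CUDA, and a compatibility layer like OXPython would have to intercept the same unmodified script and execute it on other accelerators, such as Tenstorrent's Wormhole or Blackhole.

```python
# Representative CUDA-centric Python workload (standard PyTorch, not Oxmiq code).
# The only Nvidia-specific element is the conventional "cuda" device string, which a
# compatibility layer such as OXPython would need to remap to non-Nvidia hardware.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A tiny matrix-multiply workload; the script itself never has to change
# regardless of which backend ultimately executes it.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)
c = a @ b
print(device, c.mean().item())
```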
"We are excited to partner with Oxmiq on their OXPython software stack," said Jim Keller, CEO of Tenstorrent. "OXPython's ability to bring Python workloads for CUDA to AI platforms like Wormhole and Blackhole is great for developer portability and ecosystem expansion. It aligns with our goal of letting developers open and own their entire AI stack."
What about graphics?
Having worked on graphics processors at S3 Graphics, ATI Technologies, AMD, Apple, and Intel, Raja Koduri is primarily known as a GPU architect. In fact, he positions Oxmiq as the first new GPU startup in Silicon Valley in decades.
"We may be the first new GPU startup in Silicon Valley in 25+ years," wrote Koduri in an X post. "GPUs are not easy."
However, it should be noted that Oxmiq is not building a consumer GPU like AMD's Radeon or Nvidia's GeForce. Unlike Arm or Imagination Technologies, it does not develop all the IP blocks necessary to build a full GPU: consumer graphics features such as texture units, render back ends, a display pipeline, ray tracing hardware, and DisplayPort or HDMI outputs are not supported out of the box, so licensees that plan to build a complete GPU must implement them in silicon themselves.
Asset-light strategy
Oxmiq has secured $20 million in seed funding from major tech investors, including mobile and custom AI silicon developer MediaTek, and has already recorded its first software revenue. By focusing on IP licensing instead of costly chip production or even actual silicon implementation, the company maintains high capital efficiency without relying on expensive EDA tools or tape-outs.
"Oxmiq has an impressive bold vision and world-class team," said Lawrence Loh, SVP of MediaTek. "The company's GPU IP and software innovations will drive a new era of compute flexibility across devices, from mobile to automotive to AI on the edge."

Anton Shilov is a contributing writer at Tom's Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
Comments
ejolson: The ability to add custom GPU-style vector instructions as needed to RISC-V is exactly the idea behind an open ISA. Yeah!
It's more difficult to get excited about another cross-platform heterogeneous compute architecture. Everyone except Nvidia has tried to create cross-platform tools: oneAPI, OpenCL, and HIP are examples.
On the other hand, DeepSeek training was made possible by using PTX assembler tuned specifically to the hardware.
Giroro: Since when are we calling Koduri a "legendary architect"?
He's the legendarily incompetent C-suite bean counter who killed ATI, later returned to put a dead stop on AMD's ability to innovate their GPUs (which they haven't completely recovered from) before being forced to move to Intel, where in under two years he nearly killed their dGPUs before they ever got off the ground (a critical loss which could still kill off Intel entirely).
He's a poison pill. The only company that's ever benefitted from this guy's career is Nvidia, because he's never worked for them.
thisisaname:
Giroro said: Since when are we calling Koduri a "legendary architect"?
He has managed to get funded 🤯
bit_user:
Giroro said: Since when are we calling Koduri a "legendary architect"?
I think you're too harsh, but I do suspect that both AMD and Intel kicked him to the curb. In fairness, work on RDNA should've already started while he was still at AMD, so maybe he deserves partial credit for it?
I think he was probably a decent engineer, early in his career, which is how he rose through the ranks. What we might be seeing is the Peter Principle at work. In short, it observes that success is rewarded by promotion. Absent other considerations, one would therefore expect an employee to continue getting promoted until they reach a position for which they're no longer competent.
Of course, even if that happens once, you don't tend to get another job at that level, unless you're also a shameless self-promoter. So, I'm not trying to say Raja is merely a victim of his early successes.
Giroro said: He's a poison pill. The only company that's ever benefitted from this guy's career is Nvidia, because he's never worked for them.
LOL, you might be on to something! :D
bit_user:
thisisaname said: He has managed to get funded 🤯
Not with very much. As the article noted, $20M is toy money in the hardware game.
Not sure how much of it is his own or Jim Keller's, either. Maybe the Tenstorrent deal also had a lot to do with securing that seed funding.
I don't really know what Jim sees in Raja, but I would observe that the last time they actually worked together was at Apple, in the late 2000s.
SkyBill40:
thisisaname said: He has managed to get funded 🤯
A fool and his money are soon parted. 🤷‍♂️
Alvar "Miles" Udell Raja sure is legendary. He messed up Vega so badly it almost bankrupted AMD, then he messed up Intel's GPUs so badly it almost bankrupted the company...Reply -
wwenze1: "Another startup developing GPUs that are not meant for graphics"
AMD GPUs during the later GCN era: people buy them to mine crypto
Intel Arc: people buy them to... er... something
Also, Graphics Processing Units that are not meant for graphics, lol