Ascenium Wants to Reinvent the CPU and Kill Instruction Sets Altogether

High-level view of Ascenium's Aptos processor. Aptos is Ascenium's homogeneous, many-core design that does away with instruction-set limitations. (Image credit: Ascenium)

Ascenium is one of the startups making waves in the CPU and general-purpose computing design space. The company is helmed by CEO and co-founder Peter Foley, who previously worked on Apple's Apple I and Apple II computers as well as at a long list of hardware-design-focused companies. Ascenium recently secured $16 million in a Series A funding round, which clearly signals investor confidence in the company's mission. And what is that mission, exactly?

To outdo existing CPU architectures in both performance and power efficiency via the first software-defined processor.

Ascenium hopes to do this by marrying software and hardware in its Aptos processor: killing the deep pipelines associated with today's best-performing CPUs and creating a true, compiler-driven parallel-processing CPU built on the LLVM compiler infrastructure. The Aptos processor the company is currently developing is based on a 64-bit-capable array of 128 simple, general-purpose cores. If you remember Intel's efforts on the (now cancelled) Xeon Phi architecture, Ascenium's Aptos is essentially that same many-core design paradigm, but it eschews the x86 instruction set (and the limitations and requirements it imposes on core design) while deploying a high-performance compiler that parallelizes workloads across the hardware's resources.
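
The key shift is where the parallelism gets discovered. A conventional out-of-order core finds independent work at runtime using dedicated scheduling hardware; Ascenium's pitch is to have the compiler prove independence ahead of time and map the work onto the core array directly. A minimal C sketch of the easy case (illustrative only, not Ascenium's toolchain):

/* Every iteration is independent: out[i] depends only on in[i] and the
   old out[i] at the same index. A compiler can prove this statically and,
   on an architecture like the one Ascenium describes, schedule iterations
   across many simple cores at compile time, instead of leaving out-of-order
   hardware to rediscover the parallelism at runtime.
   (Illustrative sketch; not Ascenium's actual compiler or its output.) */
void saxpy(float *out, const float *in, float a, int n) {
    for (int i = 0; i < n; i++) {
        out[i] = a * in[i] + out[i];
    }
}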

Ascenium has already secured nine patents related to its architecture and software designs. These will offer the company a much-needed defence against entrenched computing giants who won't, or can't, abandon their current instruction sets, such as x86 and Arm, and who would likely go after any emerging player with a product good enough to threaten the established, 50-years-in-the-making ISAs we know today.

A deep pipeline (essentially the route instructions take through the CPU until a result for the current problem is produced) improves performance on serialized workloads, but it rules out many scenarios where parallelization, and thus higher performance, could be achieved. Between deep pipelines and the specialized hardware registers and stages that make up a modern CPU, Ascenium estimates that around 50% of instructions exist purely to move data through the pipeline, and those moves consume both processing time and power budget. A compiler-based software layer embedded in the architecture would, in theory, allow the Aptos processor to interpret a workload's instructions and distribute them across its processing resources so that the amount of work parallelized comes as close as possible to the theoretical maximum, while suffering far fewer architectural inefficiencies than instruction-based processors. Ascenium plans to push for higher performance-per-watt, and even a 10% saving in that equation is gold to hyperscalers and the type of data centre client Ascenium hopes to lure into its ecosystem first; they stand to benefit the most from such an architecture.
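
To make the data-movement point concrete, consider what a trivial loop costs on a conventional register-based CPU (a generic illustration, not Ascenium's measurements):

/* Each iteration compiles to roughly: two loads (b[i] and c[i]), one add,
   one store (a[i]), plus an index increment and a branch. Only one of
   those ~6 instructions performs arithmetic; the rest move data or steer
   control, which is the kind of overhead Ascenium says a compiler-scheduled
   architecture can strip away. (Generic illustration, not Ascenium's
   numbers.) */
void vec_add(double *a, const double *b, const double *c, int n) {
    for (int i = 0; i < n; i++) {
        a[i] = b[i] + c[i];
    }
}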

Naturally, if one could create an architecture that reduces the need to shuffle data around, it would find itself with a significant power-efficiency advantage. Then there's the matter of x86's structural weaknesses, which require an inordinate number of transistors to be thrown at a given problem to achieve even small performance improvements. Ascenium CEO Peter Foley puts it on the order of billions of additional transistors for performance increases that sometimes don't even reach double digits.

So Ascenium plans to do away with instruction sets, create the world's first homogeneous, instruction-set-free processor, and usher in a new processing architecture built from the ground up. These are lofty aims and the risk is immense. But then again, so is the reward.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.

  • TerryLaze
    100% misleading title.
    This is never going to be a CPU replacement; just like the Xeon Phi mentioned in the article, it will only be good for very specific things.
    Reply
  • jkflipflop98
    CyrixInstead
    Reply
  • NightHawkRMX
    Good luck with that.
    Reply
  • kaalus
    Current state:
    5 incompatible competing CPU architectures

    Let's fix this!

    New state:
    6 incompatible competing CPU architectures.
    Reply
  • What is really amusing is that they have the audacity to believe they could displace the other CPUs
    Reply
  • TJ Hooker
    TerryLaze said:
    100% misleading title.
    This is never going to be a CPU replacement; just like the Xeon Phi mentioned in the article, it will only be good for very specific things.
    No, the title is accurate: Ascenium plans for their chip to be a CPU replacement, not a co-processor. Whether they'll actually succeed in doing that is of course a very different question.

    Reading the interview that was linked in this article, it appears they're using an EPIC approach, where they're basically relying on the compiler to do all the heavy lifting. This is not unlike Intel's Itanium, which didn't really work out, in part because the magical compilers that would perfectly parallelize and optimize the code apparently never appeared (or at least weren't available when it was released). Ascenium claims to already have a working compiler prototype that can successfully optimize programs on the order of 100K lines of code for their Aptos architecture, but we'll see if they'll be able to get it working well for real world programs.
    Reply
  • TerryLaze
    TJ Hooker said:
    No, the title is accurate: Ascenium plans for their chip to be a CPU replacement, not a co-processor. Whether they'll actually succeed in doing that is of course a very different question.

    Reading the interview that was linked in this article, it appears they're using an EPIC approach, where they're basically relying on the compiler to do all the heavy lifting. This is not unlike Intel's Itanium, which didn't really work out, in part because the magical compilers that would perfectly parallelize and optimize the code apparently never appeared (or at least weren't available when it was released). Ascenium claims to already have a working compiler prototype that can successfully optimize programs on the order of 100K lines of code for their Aptos architecture, but we'll see if they'll be able to get it working well for real world programs.
    A CPU replacement maybe, just like Xeon Phi was made to boot up on its own, but not a CPU replacement for many people.

    This is just an FPGA in principle, one made up of lots and lots of individual "cores". It's going to be great at parallelism, but it's going to suck really hard at running anything and everything that almost all PC users know.

    People will have to reinvent anything they want this thing to run efficiently.
    Reply
  • husker
    I think I remember something about a company called Transmeta that wanted to do the same thing. Linus Torvalds was involved. It ended up just selling its technology to other companies.
    Reply
  • Pytheus
    Correct me if I'm wrong, but isn't this just a hybrid FPGA? Instead of instruction sets, they're attempting to reconfigure the chip for every task it runs? I've thought of something like this, but it sounds tedious for software engineers.
    Reply
  • Chung Leong
    TJ Hooker said:
    Reading the interview that was linked in this article, it appears they're using an EPIC approach, where they're basically relying on the compiler to do all the heavy lifting. This is not unlike Intel's Itanium, which didn't really work out, in part because the magical compilers that would perfectly parallelize and optimize the code apparently never appeared (or at least weren't available when it was released).

    No compiler can infer what hasn't been expressed by the programmer, implicitly or explicitly. The EPIC approach really had no chance when most programs were written in bare-metal languages like C and C++. Availability of pointers messed everything up. The programming landscape has changed quite a bit in the last 20 years, so the same approach could succeed this time around.
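
    The classic case is pointer aliasing: in C, unless the programmer says otherwise, the compiler must assume two pointers might refer to overlapping memory, so it can't safely reorder or parallelize across them (a minimal sketch):

    /* Without restrict, the compiler must assume dst and src could overlap,
       so it cannot safely vectorize or reorder this loop without adding
       runtime overlap checks. */
    void scale(float *dst, const float *src, int n) {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * 2.0f;
    }

    /* With C99 restrict, the programmer promises no aliasing, and the
       compiler is free to vectorize or parallelize the loop. */
    void scale_r(float *restrict dst, const float *restrict src, int n) {
        for (int i = 0; i < n; i++)
            dst[i] = src[i] * 2.0f;
    }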
    Reply