Intel to Explore RISC-V Architecture for Zettascale Supercomputers

(Image credit: Intel)

This week, Intel and the Barcelona Supercomputing Center (BSC) said they would invest €400 million (around $426 million) in a laboratory that will develop RISC-V-based processors that could be used to build zettascale supercomputers. However, the lab will not focus solely on CPUs for next-generation supercomputers; it will also work on processors for artificial intelligence applications and autonomous vehicles.

The research laboratory will presumably be set up in Barcelona, Spain, and will receive the €400 million from Intel and the Spanish government over 10 years. Its fundamental purpose is to develop chips based on the open-source RISC-V instruction set architecture (ISA) for a wide range of applications, including AI accelerators, autonomous vehicles, and high-performance computing.

The creation of the joint laboratory does not automatically mean that Intel will use RISC-V-based CPUs developed there in its first-generation zettascale supercomputing platform; rather, it indicates that the company is willing to make additional investments in RISC-V. After all, last year Intel tried to buy SiFive, a leading developer of RISC-V CPUs, and it is among the top sponsors of RISC-V International, the non-profit organization that supports the ISA.

While Intel's share of the investment, presumably around $21.3 million per year, is a significant sum of money, the company will be pouring far more into its x86-based products in the coming years, so spending on RISC-V processors does not mean a lower focus on x86 designs. On the contrary, throughout its history, Intel has invested hundreds of millions in non-x86 architectures (including the RISC-based i960/i860 designs in the 1980s, the VLIW-based IA-64/Itanium in the 1990s and 2000s, and Arm-based XScale in the 2000s). Eventually, those architectures were dropped, but technologies developed for them found their way into x86 offerings.

With its RISC-V efforts, Intel could be killing several birds with one stone. First, if engineers from the joint laboratory manage to design CPU technology better suited to ZettaFLOPS-class supercomputers, Intel will be able to use it in its own products. As an added bonus, Intel Foundry Services will likely become a fab of choice for the CPUs and SoCs developed in the joint lab.

“High-performance computing is the key to solving the world’s most challenging problems, and we at Intel have an ambitious goal to sprint to zettascale era for HPC,” said Jeff McVeigh, vice president and general manager of the Super Compute Group at Intel. “Barcelona Supercomputing Center shares our vision for this goal, with equal emphasis on sustainability and an open approach. We are excited to partner with them to embark on this journey.” 

Last year, Intel set itself the ambitious goal of building a ZettaFLOPS-class supercomputer platform by 2027, which means increasing the performance of supercomputers roughly 1,000 times in about five years. The company said it would need new compute architectures, new system architectures, high-speed memory and I/O interfaces, novel fabrication technologies, and sophisticated chip packaging methods, among other things. One way the company plans to radically improve compute performance is an architecture that combines its x86 general-purpose cores with Xe-HPC compute GPUs. The first product to use this concept, Falcon Shores, is already in development.
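To put that goal in perspective, here is a quick back-of-envelope sketch of the sustained annual performance growth the roadmap implies. The 1,000x figure and the roughly five-year window come from the article above; the exascale baseline and zettascale target are the standard definitions (10^18 and 10^21 FLOPS).

# Rough arithmetic behind "1,000 times in about five years".
baseline_flops = 1e18    # exascale-class system, roughly where HPC is in 2022
target_flops = 1e21      # ZettaFLOPS-class target for 2027
years = 5

speedup = target_flops / baseline_flops      # overall factor: 1,000x
annual_growth = speedup ** (1 / years)       # required year-over-year factor

print(f"Overall speedup needed: {speedup:,.0f}x")
print(f"Required growth per year: ~{annual_growth:.2f}x")
# Prints ~3.98x per year, far beyond historical ~2x-per-generation scaling,
# which is why Intel points to new architectures, memory, I/O, and packaging.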

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • JayNor
    Mobileye's chips are moving to RISC-V, so perhaps Intel has some immediate need to be involved.
  • escksu
    Interesting. I have to say it will be quite a long time before x86 is replaced. Anyway, pure x86 doesn't exist anymore today. When a CPU receives an x86 instruction, it breaks it down into simpler proprietary instructions for faster execution (AMD and Intel each have their own implementations). Only certain complex instructions have to rely on the slow microcode ROM (very few).

    We are already using Arm and x86 interchangeably without realizing it. Phones can link to our laptops via Wi-Fi/Bluetooth/USB and communicate even though the two use different OSes and CPUs: Android and iOS run on Arm-based CPUs, while laptops are x86.

    So these days it's more of an interface issue than a processor architecture issue.
  • JamesJones44
    escksu said:
    We are already using Arm and x86 interchangeably without realizing it. ... So these days it's more of an interface issue than a processor architecture issue.

    This is even true on back ends. Several software vendors release and run their software on both x86 and Arm hardware. It will take more time to trickle down into the consumer desktop/laptop space, but I believe CPU-architecture-agnostic software will come to everything eventually.
  • hotaru.hino
    The whole concept of "run your software on anything" dates back to at least the late '70s and early '80s. Every home computer had a BASIC interpreter, and from what I can gather, most of the major ones had similar enough keywords that you could conceivably take a BASIC program from one computer and plop it onto another, barring of course anything that used POKE, PEEK, and other memory-related commands as-is.

    As long as we're talking about application-level software and programming languages made for it, such applications are already CPU-agnostic. However, if we're talking about systems-level software, you can't really escape the nuances of the hardware you have until we unify on a single ISA.
  • JayNor
    Intel's Ponte Vecchio is stated to be 100B transistors.

    How many RISC-V cores could be built with 100B transistors?
  • hotaru.hino
    JayNor said:
    Intel's Ponte Vecchio is stated to be 100B transistors.

    How many RISC-V cores could be built with 100B transistors?
    Depends on how many bells and whistles you want each core to have. You could probably fit thousands in there, but only if you restricted each core to a basic instruction decoder, in-order execution, one ALU, one AGU, one FPU, and so little cache that a mainstream Intel part would look like a data vault by comparison.

    ... And then you'd basically have a RISC-V GPU at that point. (A rough back-of-envelope version of that math is sketched after the end of this thread.)
  • DavidC1
    escksu said:
    Interesting. I have to say it will be quite a long time before x86 is replaced. Anyway, pure x86 doesn't exist anymore today.

    Actually, Atom-based cores directly execute x86 instructions a lot of the time.

    The whole shebang about x86 vs. Arm isn't as important today. What matters more is the execution of the companies and teams behind them.

    If x86 vs. Arm were the only issue, why is Apple dominating everyone on the GPU side as well? Intel/AMD/Nvidia aren't using x86 GPUs, are they?
  • JayNor
    hotaru.hino said:

    ... And then you'd basically have a RISC-V GPU at that point.
    Well, it's good to see Intel involved. They have performance analysis and simulation tools that could be a big help, if ported.
  • escksu
    DavidC1 said:
    Actually, Atom-based cores directly execute x86 instructions a lot of the time. ... Intel/AMD/Nvidia aren't using x86 GPUs, are they?

    All x86 CPUs still receive and execute x86 instructions. That hasn't changed since the beginning; the difference is that every x86 CPU now has its own instruction decoders, which break the x86 instructions down into the chip's own internal instructions (micro-ops) for faster execution. These micro-ops are proprietary and not x86.

    Of course, the decoders are not able to handle every x86 instruction directly; that only works for the more commonly used ones. Certain instructions still have to rely on the microcode ROM.

    Btw, a GPU is neither x86 nor RISC. It uses a different architecture that is independent of the CPU. GPUs are compatible with all kinds of CPUs; it's the drivers and firmware that act as a bridge allowing the GPU to communicate with the CPU.

    Your Windows, macOS, or Linux box does not talk directly to the GPU; it talks to the drivers. The drivers then act like a "decoder" that in turn tells the GPU what to do. This is why drivers are so critical to GPU performance.
  • DavidC1
    escksu said:
    All x86 CPUs still receive and execute x86 instructions. ... These micro-ops are proprietary and not x86.

    Yes, I know. Atom-based cores don't have to do that for most instructions.

    Arm also has to deal with decoders and uop caches.

    escksu said:
    Btw, a GPU is neither x86 nor RISC. It uses a different architecture that is independent of the CPU. ...

    Again, I know, and it doesn't matter as much as people think. People arguing about x86 vs. Arm don't realize Apple beats everybody in GPUs, which are a completely different category. x86 vs. Arm just gives them an excuse to lean on as a crutch. Some people and teams simply execute better than others, that's all.

    There is no such thing as an apples-to-apples comparison in the real world, because everything is different.
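Picking up JayNor's transistor-budget question from earlier in the thread, here is a minimal back-of-envelope sketch along the lines of hotaru.hino's answer. Every per-core figure below (core logic, cache size, the share of the die reserved for cores) is an illustrative assumption, not a measured number.

# How many bare-bones RISC-V cores might fit in a 100B-transistor budget?
# All per-core figures are assumptions for illustration only.
total_transistors = 100e9            # Ponte Vecchio figure cited in the thread

core_logic = 200_000                 # assumed: tiny in-order RV32I-class core
cache_bits = 32 * 1024 * 8           # assumed: 32 KB of SRAM per core
cache_transistors = cache_bits * 6   # 6T SRAM cells, ignoring tags/periphery
per_core = core_logic + cache_transistors

usable = total_transistors * 0.7     # assume 30% goes to interconnect and I/O

print(f"Transistors per core: ~{per_core:,.0f}")
print(f"Cores in budget: ~{usable / per_core:,.0f}")
# ~1.77M transistors per core -> roughly 40,000 cores on these assumptions,
# i.e. "thousands" easily, which is why the result starts to look like a GPU.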