RISC-V Evolving to Address Supercomputers and AI

(Image credit: ORNL)

The open source RISC-V instruction set architecture (ISA) is gaining more mainstream attention in the wake of Intel's rumored $2 billion bid for SiFive, the industry's leading RISC-V design house. Until now, RISC-V has largely been relegated to smaller chips and microcontrollers, limiting its appeal. That should change soon, however: RISC-V International, the organization that oversees the development of the ISA, has announced plans to extend the architecture to high-performance computing, AI, and supercomputing applications.

The open-source RISC-V ISA originated at UC Berkeley in 2010, but the first cores were only suitable for microcontrollers and some basic system-on-chip designs. After several years of development, however, numerous chip developers (e.g., Alibaba) have created designs aimed at cloud data centers, AI workloads (like the Jim Keller-led Tenstorrent), and advanced storage applications (e.g., Seagate, Western Digital).

That means there's plenty of developer interest in high-performance RISC-V chips. But to foster adoption of the RISC-V ISA in edge, HPC, and supercomputing applications, the industry needs a more robust hardware and software ecosystem, along with compatibility with legacy applications and benchmarks. That's where the RISC-V Special Interest Group (SIG) for HPC comes into play.

At this point, the RISC-V SIG-HPC has 141 members on its mailing list and 10 active members drawn from research, academia, and the chip industry. The key task for the growing SIG is to propose new HPC-specific instructions and extensions and to work with other technical groups to ensure that HPC requirements are considered as the ISA evolves. As part of this task, the SIG needs to define AI/HPC/edge requirements and plot a feature and capability path to the point where RISC-V is competitive with Arm, x86, and other architectures.

There are short-term goals for the RISC-V SIG-HPC, too. In 2021, the group will focus on the HPC software ecosystem. First up, the group plans to identify open-source software (benchmarks, libraries, and actual applications) that works with the RISC-V ISA right out of the box, a process it intends to automate. The first investigations will target applications like GROMACS, Quantum ESPRESSO, and CP2K; libraries like FFT and BLAS; compilers like GCC and LLVM; and benchmarks like HPL and HPCG.
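An automated out-of-the-box triage of that kind might look something like the sketch below. This is purely illustrative of the process described above, not the SIG's actual tooling; the package names come from the article, while the function names and the pluggable build callback are assumptions:

```python
import shutil
from typing import Callable, Iterable, List, Tuple

# Candidate HPC packages named in the SIG's plan (subset).
CANDIDATES = ["GROMACS", "Quantum ESPRESSO", "CP2K", "HPL", "HPCG"]

def toolchain_available(compiler: str = "riscv64-linux-gnu-gcc") -> bool:
    """Check whether a riscv64 cross-compiler is on the PATH."""
    return shutil.which(compiler) is not None

def triage(packages: Iterable[str],
           builds: Callable[[str], bool]) -> Tuple[List[str], List[str]]:
    """Split packages into those that build for riscv64 out of the box
    and those that need porting work.  `builds` is a callback that
    attempts a build (e.g., configure/make with the cross-compiler)
    and reports success."""
    works, needs_port = [], []
    for pkg in packages:
        (works if builds(pkg) else needs_port).append(pkg)
    return works, needs_port
```

With a real build callback (say, one that runs make with CC set to the cross-compiler in each package's tree), `triage(CANDIDATES, attempt_build)` would yield exactly the two lists the SIG is after: software that works today and software that needs porting effort.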

The RISC-V SIG-HPC will develop a more detailed roadmap after the ecosystem is solidified. The long-term goal of the SIG is to build an open-source ecosystem of hardware and software that can address emerging performance-demanding applications while also accommodating legacy needs.

How many years will that take? Only time will tell, but industry buy-in from big players, like Intel, would certainly help speed that timeline. 

Anton Shilov
Freelance News Writer

Anton Shilov is a Freelance News Writer at Tom’s Hardware US. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • grlegters
    Maybe they could call it an Increased Instruction Set Computer.
  • グレェ「grey」
    Intel making a bid to buy SiFive might be good for the two of them, but I can't help but wonder how it would be good for RISC-V. SiFive has already publicly prototyped 5nm RISC-V chips via TSMC (also see: https://www.tomshardware.com/news/openfive-tapes-out-5nm-risc-v-soc ). Meanwhile, Micro Magic demonstrated a 5GHz RISC-V chip operating at 1W last year. It's pretty obvious what RISC-V's wins are, with Linux, FreeBSD, and most recently OpenBSD ports already in the works. It's less obvious to me whether Intel can do much more than throw a lot of money behind it, and given their reputation, as well as swaths of paid-off industry shills, is that a good thing?

    Personally, I am probably most excited by the GaNext project, which is looking to implement RISC-V in gallium nitride. Remember when Cray was already iterating on GaAs (gallium arsenide) in the 1980s? That consumer-targeted computing is still languishing in the realm of silicon decades later is, IMHO, beyond pathetic. The inefficiencies intrinsic to the silicon designs in use, particularly relative to the abuse of cryptocurrency miners, have real-world global climate impacts that are empirically measurable, with horrific ramifications, and such designs and abuses of hardware for short-term predatory profits should continually be shunned. Meanwhile, DOD/military-affiliated contract designers I know released 10+GHz RAM designs, presumably to complement CPUs as fast or faster, more than a decade ago. Not that such things are public knowledge to most, I guess?

    Per grlegters' comment, while RISC-V is an open, freely licensed CPU ISA, others have already implemented custom extensions to it, which might be closer to your "Increased Instruction Set Computing", but the core remains reduced. That's not entirely dissimilar to commercial derivatives of BSD-derived codebases (e.g., EMC/Isilon's OneFS relative to FreeBSD as its upstream, or Apple's macOS with large parts of FreeBSD and some smaller security-focused pieces of OpenBSD as upstream codebases; the upstreams remain more versatile, with fewer after-market gee-whiz features, if perhaps less user friendly than the commercially focused derivatives). Consequently, I think RISC-V remaining RISC, reduced in nature, is ideal, as it makes derivative works closer to trivial, with already extant parallels in the software world. We've already seen Troy Benjegerdes re-implement the SiFive Unleashed PCB in KiCad thanks to its simple and open nature.

    After all, RISC-V doesn't even specify an FPU (and really, with unums/posits as an alternative to IEEE-754 floating point, why would a cutting-edge CPU ISA drag an obsolete 20th-century design that should have been ditched a while ago into the 21st century?). Other extensions, such as BOOM and its derivatives, are commonplace. MIT has also moved xv6, the teaching OS that replaced its Lions' Commentary on UNIX-inspired curriculum, to a RISC-V based toolchain, so it's pretty clear what the engineers of the future are being trained on as well.
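    As a quick aside on that IEEE-754 gripe: the kind of rounding artifact posit/unum proponents point to is easy to demonstrate. The snippet below is illustrative only; posits themselves are not in any standard library:

```python
# IEEE-754 binary64 cannot represent 0.1 or 0.2 exactly, so their sum
# carries rounding error -- one of the artifacts cited by posit/unum
# proponents against the classic floating-point format.
total = 0.1 + 0.2
print(total)         # 0.30000000000000004
print(total == 0.3)  # False

# Exact decimal arithmetic shows what the hardware float lost.
from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```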

    Characterizing RISC-V as merely for microcontrollers seems to me to be doing it a grave disservice.

    While it is great that RISC-V is replacing a lot of embedded systems (SiFive's HiFive1 as an Arduino drop-in replacement, and Western Digital's SweRV looks promising too), that is far from the whole story, and it ignores OS-developer-targeted systems such as the SiFive Unleashed and Unmatched and, more recently, the BeagleV, a RISC-V based system filling something closer to the Raspberry Pi niche. Meanwhile, for FPGA folks: if you haven't seen how Olof Kindgren crammed more than 5,000 SERV cores (his own serial RISC-V implementation) into an FPGA devkit, I wonder if you really pay attention to researchers in the field. Dr. Andrew "bunnie" Huang's (of Hacking the Xbox notoriety) hardened handheld Precursor project, due out next year, is also RISC-V soft-CPU based (prototyped in FPGA). Even Nvidia has been quite public for years that its next-generation GPU designs will use 64-bit RISC-V cores, whereas its existing GPUs have been based around 32-bit ARM cores (which is presumably why Nvidia has been eager to acquire ARM from SoftBank: unlike Apple, Nvidia did not have a perpetual license for that CPU ISA).

    The present and future look brighter and brighter for RISC-V. While I've read that Intel has plans to fab their own RISC-V chips, a SiFive acquisition reads to me more like an act of desperation from a company that has been "too big to fail". Its Itanium floundering was infamous, and even in a dystopia where people often fixate on the negative, it seems baffling to me that so few seem to recall that huge Intel failure despite how recent it was. I would think that Tom's Hardware would recognize its own place in such realms, given how significant a role it played in that CPU ISA's failure through its continual bad-mouthing of Rambus back in the day.
  • mikeinjbay
    @ Grey W.O.W. You certainly know your stuff.
  • hotaru.hino
    グレェ「grey」 said:
    Its Itanium floundering was infamous, and even in a dystopia where people often fixate on the negative, it seems baffling to me that so few seem to recall that huge Intel failure despite how recent it was. I would think that Tom's Hardware would recognize its own place in such realms, given how significant a role it played in that CPU ISA's failure through its continual bad-mouthing of Rambus back in the day.
    I think it's easy to point and laugh at Intel about Itanium when hindsight is 20/20. But looking at the landscape of large-scale computers in the late 90s, there really wasn't a de facto ISA at the time. There's a chart of ISAs used in the TOP500 supercomputers floating around on Wikipedia, and by the look of things, around 8-10 ISAs had a fairly significant share in the late 90s. To me, it feels more like Intel (and HP, since HP was involved) wanted something that would topple the other ISAs and dominate that market, and Intel was hoping its third attempt would finally be the one. I don't think Intel ever intended IA-64 to replace IA-32, though.

    NetBurst, however, is more up the average consumer's alley with regard to Intel's slump.