First RISC-V 3D GPUs Will Be Demoed Next Week
Claimed to be the industry's first RISC-V 3D GPUs for graphics and deep learning tasks.
We often hear about RISC-V, the competitive open-source instruction set architecture, and how it is making inroads into the CPU industry. However, last year we reported on an open-source GPU under development, which would seriously bolster the development of open-source graphics engines. This week, Think Silicon announced that it is ready to "showcase the industry's first RISC-V-based GPU."
The new RISC-V 3D GPUs from Think Silicon will form the foundation of its Neox series. Two branches are already being readied to expand the Neox offerings: the Neox G series for graphics acceleration and the Neox A series for accelerating deep learning tasks.
Why is Think Silicon pursuing these two separate Neox branches? It is reasonable to assume that, as Think Silicon is a self-proclaimed "leader in ultra-low power graphics IP," it designs lean products strictly for targeted tasks. These won't be flabby general-purpose GPUs.
Think Silicon's press release gives us a few more clues about the type of embedded products and applications the respective Neox GPUs will target. It says that the new GPUs will feature in customized SoCs for graphics, machine learning, vision/video processing, and general-purpose compute workloads. Mobile devices and platforms appear to be the most obvious, but not exclusive, target for this technology. Think Silicon reckons Neox G and A cores will be used in next-generation smartwatches, augmented reality (AR) eyewear, video for surveillance and entertainment, and smart displays for point-of-sale/point-of-interaction terminals.
"Unveiling the first RISC-V-based GPU is a significant milestone for the graphics industry and for Think Silicon," said Ulli Mueller, Director of IP Licensing, Sales, and Marketing at Think Silicon. Mueller added that he hoped developers coming to the exhibition would be inspired by his firm's new 3D GPUs. In particular, he hopes that some will see the Neox GPU's potential to create "exceptional user experiences" while being thrifty with power.
Think Silicon will demonstrate its RISC-V 3D GPU family at Embedded World 2022. This trade show takes place in Nuremberg, Germany, from Tuesday, June 21 to Thursday, June 23. It will be interesting to see the level of performance these early RISC-V GPUs can muster, albeit in the low-power and embedded segments.
Before we go, it is worth pointing out that big hitters in the PC graphics industry, such as AMD, are also looking at using RISC-V technologies in their product portfolios.
Mark Tyson is a news editor at Tom's Hardware. He enjoys covering the full breadth of PC tech, from business and semiconductor design to products approaching the edge of reason.
hannibal: This is actually one thing that could reduce GPU prices. Open source, so anyone from China to Taiwan can produce these, so there is real competition!
thisisaname:
hannibal said: "This is actually one thing that could reduce GPU prices. Open source, so anyone from China to Taiwan can produce these, so there is real competition!"
Anyone from China to Taiwan can produce these? Does that mean just people from China can make this?
This post may contain sarcasm ;)
Findecanor: Lots of buzzwords...
One interesting thing about the RISC-V spec is that instead of a fixed-width SIMD instruction set along the lines of Intel AVX2 and ARM Neon, it has a vector instruction extension in which the vector length is an implementation detail. It also supports vector-lane masking. An implementation with very wide vectors and masking would be quite similar to how GPUs from AMD and Nvidia do computation.
Neox could have that ... or it could have a more or less custom ISA (there have been RISC-V CPUs with unusual vector units before).
BTW, ARM also has a scalable vector extension (SVE) in addition to Neon, but few chips have implemented it so far.
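For readers unfamiliar with the vector-length-agnostic style described above, here is a minimal sketch of a SAXPY-style loop written against the RISC-V "V" extension C intrinsics. The __riscv_-prefixed names follow the current RVV intrinsics specification (older toolchains use unprefixed names), and this illustrates the general RVV programming model only; nothing is confirmed about the actual Neox ISA.

```c
#include <stddef.h>
#include <riscv_vector.h>

/* y[i] = a * x[i] + y[i] for n elements, with no vector width hard-coded.
 * Build with a toolchain targeting the V extension, e.g. -march=rv64gcv. */
void saxpy_rvv(size_t n, float a, const float *x, float *y) {
    while (n > 0) {
        /* Ask the hardware how many 32-bit lanes it will process this pass;
         * the answer depends on the implementation's VLEN, not on this code. */
        size_t vl = __riscv_vsetvl_e32m1(n);
        vfloat32m1_t vx = __riscv_vle32_v_f32m1(x, vl);  /* load x     */
        vfloat32m1_t vy = __riscv_vle32_v_f32m1(y, vl);  /* load y     */
        vy = __riscv_vfmacc_vf_f32m1(vy, a, vx, vl);     /* y += a * x */
        __riscv_vse32_v_f32m1(y, vy, vl);                /* store y    */
        x += vl;
        y += vl;
        n -= vl;
    }
}
```

The loop never names a lane count: vsetvl returns however many 32-bit elements the implementation handles per pass, so the same binary runs on narrow and very wide vector units alike. Tail handling here is done through vl rather than explicit masks, but RVV also provides per-lane masks for predicated execution, which is the part that most resembles how GPU hardware handles divergent lanes.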
"Deep learning"/"AI" unit usually means simply that it is capable of doing parallel multiply-adds with 16-bit floating point factors and 32-bit sums. That could maybe be parallelised into dot products or full multiplications with fewer instructions but there is no way of knowing which from just buzzwords.