Entry-level GPU RAID card enables mind-bending storage speeds — 80 GB/s of throughput from eight SSDs with SupremeRAID SR-1001

SupremeRAID SR-1000 (Image credit: Graid Technology Inc.)

On January 25, 2024, Graid Technology officially announced an entry-level version of its SupremeRAID series of GPU-powered RAID cards for building arrays out of the best SSDs. The new SupremeRAID SR-1001 slots in below the higher-end SupremeRAID SR-1010. SupremeRAID's distinguishing feature is RAID processing offloaded to a GPU, and the performance gains that come with it let server operators and other high-end users get the most out of their multi-NVMe-drive setups.

While RAID is a mainstay in the PC space for multi-drive users, it has historically been somewhat problematic for SSD users, since SSDs can't always reach their full speeds inside an array. This is especially true of software RAID, which is bound to the CPU: the cycles spent on striping and parity usually keep it from matching the throughput of a traditional hardware RAID controller.
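
To see why, consider the parity math a RAID 5 implementation runs for every stripe it writes. The minimal NumPy sketch below is our own illustration of the textbook XOR parity, not any vendor's code, but it shows the work that lands on the CPU under software RAID:

```python
# Textbook RAID 5 parity for one stripe -- an editorial illustration, not
# Graid's implementation. Under software RAID, the CPU repeats this XOR
# work for every stripe written, which is what eats into throughput.
import numpy as np

STRIPE_UNIT = 1 << 20  # 1 MiB chunk per drive (a common stripe unit)
DATA_DRIVES = 7        # e.g., an 8-drive RAID 5: 7 data chunks + 1 parity

# One stripe: seven 1 MiB data chunks, each destined for a different drive.
chunks = [np.random.randint(0, 256, STRIPE_UNIT, dtype=np.uint8)
          for _ in range(DATA_DRIVES)]

# Parity is the XOR of all data chunks in the stripe.
parity = chunks[0].copy()
for chunk in chunks[1:]:
    np.bitwise_xor(parity, chunk, out=parity)

# Losing any one chunk is recoverable: XOR the parity with the survivors.
recovered = parity.copy()
for chunk in chunks[1:]:
    np.bitwise_xor(recovered, chunk, out=recovered)
assert np.array_equal(recovered, chunks[0])
```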

Graid Technology's solution to the shortcomings of both software RAID and traditional hardware RAID controllers is to attack the problem with GPU hardware. GPUs are notoriously good workhorses for non-graphical workloads, and if Graid's benchmarking holds up, the company has put that parallelism to good use.
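
As a rough sketch of the idea, the same parity calculation can be pushed onto a GPU. This assumes a CUDA-capable card and the CuPy library; Graid's actual driver stack is proprietary and far more involved, so treat this as an outline of why the math suits a GPU rather than how SupremeRAID works internally:

```python
# The same stripe parity, offloaded to a GPU with CuPy (assumes a
# CUDA-capable GPU is present). Thousands of GPU threads each fold their
# own slice of the stripe, so parity generation stops competing with
# application work for CPU cycles.
import numpy as np
import cupy as cp

STRIPE_UNIT = 1 << 20
DATA_DRIVES = 7

# Stage a stripe's worth of data in host memory, then move it to the GPU.
host_chunks = np.random.randint(0, 256, (DATA_DRIVES, STRIPE_UNIT),
                                dtype=np.uint8)
gpu_chunks = cp.asarray(host_chunks)

# XOR-fold across the drive axis on the GPU.
parity = gpu_chunks[0].copy()
for i in range(1, DATA_DRIVES):
    cp.bitwise_xor(parity, gpu_chunks[i], out=parity)

# Bring the finished parity chunk back to host memory for the write-out.
parity_host = cp.asnumpy(parity)
```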

SupremeRAID Storage GPU Performance vs Hardware/Software RAID

| RAID solution | 4K Random Read/Write (IOPS) | 1M Sequential Read/Write (GB/s) | Throughput (GB/s) | Maximum SSDs Supported |
| --- | --- | --- | --- | --- |
| SupremeRAID SR-1001 | 6M / 600K | 80 / 30 | 80 | 8 |
| SupremeRAID SR-1000 | 16M / 820K | 220 / 92 | 220 | 32 |
| SupremeRAID SR-1010 | 28M / 2M | 260 / 100 | 260 | 32 |
| Hardware RAID | 3.9M / 108K | 13.5 / 4 | 13.5 | 8 |
| Software RAID | 2M / 200K | 9 / 2 | 9 | 32 |

Compared to its higher-end cousins, the latest SupremeRAID SR-1001 card is targeted squarely at prosumers, performance enthusiasts, and home server users. With support for up to eight SSDs and a maximum throughput of 80 GB/s, the SR-1001 should provide enough headroom for a handful of modern NVMe Gen 3, 4, or 5 SSDs.
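
Some back-of-the-envelope arithmetic shows how much of that ceiling eight drives can actually occupy. The per-drive sequential read figures below are our own ballpark numbers for typical top-end consumer SSDs of each generation, not Graid's:

```python
# Rough aggregate-throughput check against the SR-1001's 80 GB/s ceiling,
# using ballpark top-end sequential reads per PCIe generation (editorial
# assumptions, not vendor figures).
SEQ_READ_GBPS = {"Gen3": 3.5, "Gen4": 7.0, "Gen5": 12.4}
CARD_LIMIT_GBPS = 80
DRIVES = 8

for gen, per_drive in SEQ_READ_GBPS.items():
    total = DRIVES * per_drive
    verdict = "above" if total > CARD_LIMIT_GBPS else "below"
    print(f"{DRIVES}x {gen}: ~{total:.0f} GB/s aggregate ({verdict} the limit)")

# 8x Gen3: ~28 GB/s, 8x Gen4: ~56 GB/s, 8x Gen5: ~99 GB/s -- only a full
# set of Gen5 drives would actually bump into the card's 80 GB/s ceiling.
```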

Of course, proper server and data center operators should probably look toward the higher-end SupremeRAID solutions. Both the SR-1000 and SR-1010 support 32 SSDs, though if you're tapping out that capacity or running a lot of NVMe Gen 4/5 drives, you'll almost certainly want the top-end SupremeRAID SR-1010 to minimize any possible performance loss.

For most users, though, the SR-1001 looks like the best fit, provided the pricing is right. Unfortunately, no pricing info for the SupremeRAID GPUs is listed anywhere on Graid Technology's site.

  • purpleduggy
    I wonder what the price is to get to 80 GB/s. From what I can see, it would need around 7+ Gen5 NVMe drives (12 GB/s theoretical) minimum, unless there is a massive increase with Gen6 drives.
    Reply
  • USAFRet
    "...targeted at home servers and gaming PCs"

    Someone is going to buy this silly thing, and then come here and complain that their FPS in CounterStrike did not go up.
    Reply
  • CelicaGT
    USAFRet said:
    "...targeted at home servers and gaming PCs"

    Someone is going to buy this silly thing, and then come here and complain that their FPS in CounterStrike did not go up.
    Yes.
    Storage speeds, even on budget-range drives, are well past what a gaming rig requires. Even DirectStorage, touted as the killer app for high-transfer-rate storage, shows large gains with "slow" drives. The law of diminishing returns kicked in around Gen3 NVMe, in my opinion. Maybe in 5 or 6 years stuff like this (or at least a drive with the advertised transfer rates) will be of use, but not today.

    *Stutters, it's gonna be stutters in CS2. They'll also have a 5 year old AIO with no water left in it and the pump in the wrong orientation. But it was totally the drive...
    Sorry I'm getting increasingly cynical the farther past 40 I get..
    Reply
  • ezst036
    This uses the host GPU right? Like a 4090/Arc or onboard APU? Or does this actually have an Nvidia or AMD chip onboard the card?

    If using the host GPU, that means you must have at least two x16 slots to accommodate the card? That could be problematic, as motherboards these days seem to have a decreasing number of PCIe slots on them.
    Reply
  • HideOut
    ezst036 said:
    This uses the host GPU right? Like a 4090/Arc or onboard APU? Or does this actually have an Nvidia or AMD chip onboard the card?

    If using the host GPU, that means you must have at least two x16 slots to accommodate the card? That could be problematic, as motherboards these days seem to have a decreasing number of PCIe slots on them.
    Sounds to me like it includes the GPU. My guess is it doesn't need a 4090 or anything near that beastly. If the GPU is dedicated just to RAID calculations, a much, much weaker GPU would hardly break a sweat.
    Reply
  • Findecanor
    Apparently these RAID cards don't actually connect to the drives. They only do the data processing, but leave buffers in main memory to be DMA'd to/from the drives like usual.

    This means that it should theoretically be possible to do the same processing on a regular GPU.

    And it looks to me that it would also be theoretically possible to make a RAID card that uses the same kind of GPU tech but connects directly to the drives and get even higher performance ... only that nobody has done that yet.

    Or am I missing something?
    Reply
  • thestryker
    The high end PCIe 4.0 version with 32 drive support they put out used an A2000.

    I like the idea and implementation for the most part, but this card is PCIe 3.0 x16, so to get any real use out of it you need Xeon W/Scalable/Threadripper/EPYC or sacrifice all of your CPU PCIe lanes.
    Reply
  • CelicaGT
    thestryker said:
    The high end PCIe 4.0 version with 32 drive support they put out used an A2000.

    I like the idea and implementation for the most part, but this card is PCIe 3.0 x16, so to get any real use out of it you need Xeon W/Scalable/Threadripper/EPYC or sacrifice all of your CPU PCIe lanes.
    Definitely a huge consideration. It's spelled out right in my mainboard manual that these kinds of devices drop my PCIe slot down to x8. I'm also pretty sure that the true target market for these devices is HEDT, NOT gaming. They're probably hoping to gain some whale sales in the gaming market is all, so they add it in there because why not..
    Reply
  • mdd1963
    Gen 6 (when it arrives) headroom in a PCIe 6.0 x4 slot should be double that of PCIe 5.0 x4, so about 28 GB/s, assuming drives' throughput can progress to those ludicrous speeds within 3 years of PCIe 6.0 adoption (even that is quite a large assumption).

    I predict Win 12 will boot 1/4 sec quicker than with a top Gen 5 drive....
    Reply
  • CelicaGT
    mdd1963 said:
    Gen 6 (when it arrives) headroom in a PCIe 6.0 x4 slot should be double that of PCIe 5.0 x4, so about 28 GB/s, assuming drives' throughput can progress to those ludicrous speeds within 3 years of PCIe 6.0 adoption (even that is quite a large assumption).

    I predict Win 12 will boot 1/4 sec quicker than with a top Gen 5 drive....
    Right? Ffs my BIOS splash screen is up longer than it takes Windows to cold boot.
    Reply