HighPoint SSD7101 Series SSD Review

Conclusion

Most people won't buy the HighPoint SSD7101A-1 just for the fun of it. These are serious workstation-focused products that require a specific workload to extract the most value. HighPoint has delivered a product with broad compatibility, and you aren’t tied down to a single SSD brand, either.

But there are a select few who would buy the $399 SSD7101A-1 just for the fun of it. I think I know all of those people! For most of our audience, NVMe all but killed RAID. Combining SATA SSDs in RAID still delivered a sequential performance increase that improved application performance, but NVMe ushered in a new era where the processor and software became the bottleneck again. If you're an enthusiast and your only motivation is performance, this is just an expensive option that may leave you dissatisfied.

If you seek high-performance capacity for a large game library, pairing the SSD7101A-1 with low-cost drives would fit the bill. You won't experience a severe performance penalty, but your wallet will certainly be a lot lighter.

Professional users know all too well that CPUs and GPUs get faster every year. The HighPoint SSD7101 series moves data faster than any of the hardware in my lab can crunch it. We ran a Vegas Pro (formerly Sony Vegas) test with the SSD7101 and a handful of the other drives. The test consists of several effects and video streams; Sony built it, and it essentially creates a broadcast-quality commercial. We see an enormous difference in rendering times between hard disk drives and SATA SSDs. With most NVMe SSDs there isn't a difference at all, because the storage isn't the bottleneck. We're looking at building an updated workload with 4K resolution clips for future reviews. The increased resolution may put more strain on the storage system.
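
If you want a quick sanity check of whether storage is even the bottleneck in your own workflow, a rough timer like the hypothetical Python sketch below is enough. The file path and block size are placeholders, and the operating system's cache means the output is only a ballpark figure, not a benchmark.

```python
# Rough sequential-read timer (illustrative only, not part of our test suite).
# Point PATH at a large file on the volume you want to check. The OS page
# cache can inflate repeat runs, so treat the output as a ballpark figure.
import time

PATH = "testfile.bin"           # placeholder: a large file on the array
BLOCK = 8 * 1024 * 1024         # read in 8 MiB chunks

def sequential_read_speed(path: str, block: int = BLOCK) -> float:
    """Return approximate sequential read throughput in MB/s."""
    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(block)
            if not chunk:
                break
            total_bytes += len(chunk)
    elapsed = time.perf_counter() - start
    return (total_bytes / 1e6) / elapsed

if __name__ == "__main__":
    print(f"~{sequential_read_speed(PATH):.0f} MB/s sequential read")
```

If a render finishes in the same time whether the source files sit on a SATA SSD or on this array, the storage clearly isn't what is holding the workload back.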

This means the bar is very high to take advantage of the SSD7101's performance. If you have a workload that clears that bar, we would like to know about it. It would be great to build a new test that benefits from four of the fastest consumer SSDs ever released. In a way, that sums up the HighPoint SSD7101 series: it comes packing so much sequential performance we don't even know what to do with it.

MORE: Best SSDs

MORE: How We Test HDDs And SSDs

MORE: All SSD Content

Comments from the forums
  • gasaraki
    So this is a "RAID controller" but you can only use software RAID on this? Wut? I want to be able to boot off of this and use it as the C drive. Does this have legacy BIOS on it so I can use it on systems without UEFI?
  • mapesdhs
    I'll tell you what to do with it, contact the DoD and ask them if you can run the same defense imaging test they used with an Onyx2 over a decade ago, see if you can beat what that system was able to do. ;)

    http://www.sgidepot.co.uk/onyx2/groupstation.pdf

    More realistically, this product is ideal for GIS, medical, automotive and other application areas that involve huge datasets (including defense imaging), though the storage capacity still isn't high enough in some cases, but it's getting there.

    I do wonder about the lack of power loss protection though, seems like a bit of an oversight for a product that otherwise ought to appeal to pro users.

    Ian.
  • samer.forums
    Anonymous said:
    So this is a "RAID controller" but you can only use software RAID on this? Wut? I want to be able to boot off of this and use it as the C drive. Does this have legacy BIOS on it so I can use it on systems without UEFI?


    This is not a RAID controller; it uses software RAID. The PLX chip on it allows the system to "think" that the x16 slot is four independent x4 slots, all active and bootable. That's all. The rest is software.
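
    To make that concrete, here is a hypothetical sketch (assuming a Linux host; this is not from HighPoint's documentation): because the switch presents each M.2 drive as its own x4 endpoint, the OS simply sees four ordinary NVMe devices, and any RAID (mdadm, Storage Spaces, and so on) is layered on top in software.

    ```python
    # Hypothetical check on Linux: list the NVMe namespaces the OS can see.
    # With four M.2 drives on the card, four independent block devices such
    # as /dev/nvme0n1 ... /dev/nvme3n1 should show up. The card adds no RAID
    # layer of its own; that part is purely software on the host.
    import glob
    import re

    namespaces = sorted(dev for dev in glob.glob("/dev/nvme*")
                        if re.fullmatch(r"/dev/nvme\d+n1", dev))
    print("NVMe devices visible to the OS:", namespaces or "none found")
    ```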
  • samer.forums
    I see this product being very useful for people who want more SSD storage than the motherboard will give you, and for RAID 1. In real life, the performance gain from NVMe SSD RAID can't be noticed.
  • samer.forums
    Hey HighPoint, if you are reading this, please make this card low profile and place 2 M.2 SSDs on each side of the card (4 total).
  • samer.forums
    Anonymous said:

    I do wonder about the lack of power loss protection though, seems like a bit of an oversight for a product that otherwise ought to appeal to pro users.

    Ian.


    There is a U.2 version of this card; using it with U.2 NVMe SSDs that have built-in power loss protection will solve this problem.

    link

    http://www.highpoint-tech.com/USA_new/series-ssd7120-overview.htm
  • lorfa
    Would have liked some xpoint thrown in for comparison
  • CRamseyer
    We're both still waiting for that.
  • JonDol
    I hope that the lack of power loss protection was the single reason this product didn't get an award. Could you please confirm it, or list the other missing details/features standing between it and an award?

    The fact alone that such a pro product landed in the consumer arena, even if it is for a very selective category of consumers, is nothing short of an achievement.

    The explanation of how the M.2 traffic is routed through the DMI bus was very clear. I wonder if there is even a single motherboard that will route that traffic directly to the CPU. Does anyone know of such a motherboard?

    Cheers
  • samer.forums
    Anonymous said:
    I hope that the lack of power loss protection was the single reason this product didn't get an award. Could you please confirm it, or list the other missing details/features standing between it and an award?

    The fact alone that such a pro product landed in the consumer arena, even if it is for a very selective category of consumers, is nothing short of an achievement.

    The explanation of how the M.2 traffic is routed through the DMI bus was very clear. I wonder if there is even a single motherboard that will route that traffic directly to the CPU. Does anyone know of such a motherboard?

    Cheers


    All motherboards have slots wired to the CPU lanes. For example, most Z-series boards come with two SLI slots at x8 each; you can use one of them for the GPU and the other for any card, and so on.

    If the CPU has more than 16 lanes you will have more slots connected to the CPU, as on X299 motherboards.
  • seekaliao
    This is not a new product; it has been around for a few years. Check out Amfeltec. They have had the same thing for a long time. Btw, you need the PLX chip if your board does not support bifurcation.

    If your board does, you can skip the cost and go for a Dell or HP card. It does not have a PLX chip. It's a very simple card: 16 lanes of PCIe, 4 lanes to each SSD, and it's cheap.

    http://barefeats.com/hard210.html

    5GB/s way back in 2015... it's PCIe 2.0 and AHCI though.
  • seekaliao
    http://amfeltec.com/squid-pci-express-carrier-boards-for-m-2-ssd-modules/

    amfeltec one. Its low profile but no heatsink for the SSDs.
  • samer.forums
    Anonymous said:
    This is not a new product; it has been around for a few years. Check out Amfeltec. They have had the same thing for a long time. Btw, you need the PLX chip if your board does not support bifurcation.

    If your board does, you can skip the cost and go for a Dell or HP card. It does not have a PLX chip. It's a very simple card: 16 lanes of PCIe, 4 lanes to each SSD, and it's cheap.

    http://barefeats.com/hard210.html

    5GB/s way back in 2015... it's PCIe 2.0 and AHCI though.


    Sorry, it is a new product. It is NVMe, and we were waiting for NVMe. And it is bootable as well.
  • JonDol
    @samer.forums: I am aware of how slots are connected to the CPU, and yet, as the recent THW X299 MB roundup shows (unless I've read them too fast), only 2 of the almost 10 reviewed MBs have M.2 slots linked directly to the CPU: the AsRock Gaming i9 and the Gigabyte Aorus 7...
  • mikenth
    Is that really such a big deal? As long as your mb supports enough PCIe lanes directly connected to the CPU, why not just use cheap M.2 -> PCIe adapter cards and get the full performance possible that way? Most modern mb's have so much stuff built-in, there's hardly a need for the PCIe slots for most users other than for graphics cards. Just fill those empty slots with M.2 adapters.

    I have an ancient X58 mb with three M.2 -> PCIe adapters, all populated with Samsung NVMe drives - one 950 for booting and two 960's for sw RAID1. The performance is phenomenal -- better on this 7 year old system than what is possible on new DMI-limited mb's. The secret sauce? X58 was a server-class chipset, like X99 / X299 / X399, and had more free PCIe lanes then than Intel-crippled 'consumer' chipsets do now.

    With any server-class chipset, you can have your cake and eat it too -- get the full x16 performance from one or even two GPUs, and still get the full x4 performance on several other slots populated with M.2 adapters. It will be some time before you can beat this configuration's cost and performance using on-board M.2 slots on a consumer mb (at least, an Intel consumer mb; AMD is less stingy and condescending about how many PCIe lanes a lowly enthusiast like you could possibly need).
  • JonDol
    Anonymous said:
    Is that really such a big deal? As long as your mb supports enough PCIe lanes directly connected to the CPU, why not just use cheap M.2 -> PCIe adapter cards and get the full performance possible that way?


    Because I don't wish to waste PCIe slots that way, especially when those boards already come with 3 M.2 ports.
  • mikenth
    Anonymous said:
    Because I don't wish to waste PCIe slots that way, especially when those boards already come with 3 M.2 ports.


    1. You're not "wasting" a slot that is unused. If you have a full-size ATX mb the vast majority of users won't have a need for all the slots. If you want maximum M.2 performance, you just found a need for them.

    2. An M.2 slot on the motherboard with poor performance is not categorically different than trying to connect a fast SSD through the motherboard's USB ports, or plugging an x16 graphics card into an x1 slot. Just don't do it if you care about performance.

    This generation of motherboards just doesn't have fast M.2 on-board slots, so forget about them. Pretend they aren't even there. You seem to be hung up on the idea of having your cake and eating it too with a currently available mb with full-speed x4 direct to CPU M.2 slots. You can keep wishing they are plentiful, but it isn't going to solve your problem. Either wait until the next generation of mb's -- maybe X399 ones will have them? -- or go with the solution that is at hand today. Or suffer with substandard M.2 on-board perf. There is no fourth option, unless it is to whine about the state of affairs and do nothing.
  • JonDol
    Anonymous said:
    Anonymous said:
    Because I don't wish to waste PCIe slots that way, especially when those boards already come with 3 M.2 ports.


    1. You're not "wasting" a slot that is unused. If you have a full-size ATX mb the vast majority of users won't have a need for all the slots. If you want maximum M.2 performance, you just found a need for them.



    Just because it's not used today doesn't mean it will still be unused tomorrow. And too bad, I'm not part of that "vast majority of users".

    Anonymous said:

    2. An M.2 slot on the motherboard with poor performance is not categorically different than trying to connect a fast SSD through the motherboard's USB ports, or plugging an x16 graphics card into an x1 slot. Just don't do it if you care about performance.


    I strongly doubt that the fastest SSD on USB will be as fast as the slowest M.2 SSD. And btw, one reason I read THW is to make sure I don't buy that kind of poorly performing hardware.

    Anonymous said:

    This generation of motherboards just doesn't have fast M.2 on-board slots, so forget about them. Pretend they aren't even there.


    Oh really? What would it need to be a "fast M.2" slot? Are you dreaming of some magic super shortcut that would make them even faster? Why would they not be fast, since they are directly linked to the CPU just like the PCIe slots?
  • mapesdhs
    Anonymous said:
    ... I have an ancient X58 mb with three M.2 -> PCIe adapters, all populated with Samsung NVMe drives - one 950 for booting and two 960's for sw RAID1. The performance is phenomenal -- better on this 7 year old system than what is possible on new DMI-limited mb's. The secret sauce? X58 was a server-class chipset, like X99 / X299 / X399, and had more free PCIe lanes then than Intel-crippled 'consumer' chipsets do now. With any server-class chipset, you can have your cake and eat it too -- get the full x16 performance from one or even two GPUs, and still get the full x4 performance on several other slots populated with M.2 adapters. ...


    I've been doing that a lot recently with X79 setups, typically with cheaper SM951/SM961 drives (excellent cache/scratch drives for editing, etc.), but also 950 Pro or whatever for boot (there are a great many older ASUS boards for which modded BIOS files with NVMe boot support is now available). I bought a 1TB SM961 for my Z68/2700K setup, works very well.

    There's one other advantage of using M.2 adapters in this way: for older mbds with limited native Intel SATA3 ports, it frees them up for more targeted use, in my case a port is linked to a front-bay hot swap so I can do live C-drive backups or access other drives for whatever reason (3rd party SATA3 controllers generally suck).

    A caveat though, albeit perhaps a minor issue: on some older mbds with lots of PCIe slots (mainly X58 and X79), some slots are routed via PCIe switches, which can add a little bit of latency. There's usually at least one x4 or x8 slot though which goes straight to the CPU, as can be seen for example in the following diagram of the ASUS P9X79-E WS (in a 4960X system I recently built, the directly connected x8 slot holds a 512GB SM961, which belts along at almost 3.5GB/sec):

    https://images.anandtech.com/doci/7613/Chipset%20Diagram.png

    In this case, as mikenth says, one can have several GPUs as well as a crazy fast NVMe as a boot drive or for some other purpose. My 4960X system has an SM951 256GB for the C-drive (only cost 65 UKP) and two 780 Ti GPUs for CUDA which both run at x16.

    There's even a P55 board one can do something like this with (not quite to the same extent, but still surprisingly potent), namely the ASUS P7P55 WS Supercomputer, which via two PLX switches supports x8/x8/x8/x8. I have two of these P7P55 WS boards, great for CUDA crunching; I used one of them with three 980s in SLI to bag all of the 3DMark P55 records (except for the DX12/VR stuff, as I'm only using Win7).

    There's a lot of life in older tech where one has plenty of PCIe lanes/slots to throw around. I have an Asrock X58 Extreme6 I plan on experimenting with at some point, not gotten round to it yet.

    Oh and btw, X79 boards usually support the dirt cheap 10-core XEON E5-2680 v2 (mine cost 165 UKP), a great way to get native PCIe 3.0 and a good performer for threaded tasks (scores 15.45 for CB 11.5 and 1389 for CB R15).

    This of course is why Intel and mbd vendors do not want older products to support bootable NVMe: it would extend their life, which means fewer people upgrading. Thank grud for BIOS mods.

    Lastly, the 950 Pro is a wonderful drive because it has its own boot ROM, so it can be used even on older mbds that cannot be flashed with a modded BIOS for bootable NVMe.

    Ian.
  • JonDol
    Anonymous said:
    ... I've been doing that a lot recently with X79 setups, typically with cheaper SM951/SM961 drives (excellent cache/scratch drives for editing, etc.), but also 950 Pro or whatever for boot ... There's a lot of life in older tech where one has plenty of PCIe lanes/slots to throw around. ...

    Ian.



    Thank you for all these details. Looks like you are crazy geeks with plenty of time to spare.

    With your examples I got better insight into our colleague mikenth's previous comment.

    I know there is a lot of potential in older quality hardware. Not only do I personally try to squeeze the maximum performance from it for my needs, but I also encourage everyone to do so, and the more I read about the security flaws in IME and IPMI, or the backdoors-in-cars crap, the longer I wish to keep my older hardware.