HighPoint SSD7101 Series SSD Review

Performance Testing

Comparison Products

We put the HighPoint 4TB SSD7101B-040T up against our group of 1TB NVMe SSDs. This group of products spans a wide range of price points. The Intel 600p is the only true low-cost drive in the group. Most of the other products fall in the middle of the price spectrum. The Corsair Neutron NX500 and Samsung 960 Pro are the most expensive drives in this class, but the latter delivers the highest performance. We used the Samsung 960 Pro 2TB instead of the 1TB. Now that we have 960 Pro 1TB drives in the lab for testing, we will add them to the charts for future reviews.

Test System

To fully utilize the HighPoint SSD7101, we brought back the Intel Core i9-7900X system with the X299 chipset. We introduced the system in our Aplicata Quad M.2 PCIe x8 Adapter review. The system allows us to run the SSD7101 in full PCIe 3.0 x16 mode.

HighPoint and Samsung sent us four 960 Pro 1TB NVMe SSDs to load the adapter. That creates the SSD7101B-040T, a $4,000 SSD that's guaranteed to outperform every SSD we've ever tested.

Sequential Read Performance

To read about our storage tests in-depth, please check out How We Test HDDs And SSDs. We cover four-corner testing on page six of our How We Test guide.
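
For readers who want a rough sense of how a sweep like this can be scripted at home, the Python sketch below drives fio through a sequential-read queue-depth ramp and reports throughput at each step. It is a minimal illustration with assumed parameters; the device path, block size, and run time are placeholders, not the exact job settings from our How We Test guide.

# Minimal sketch of a sequential-read queue-depth sweep with fio.
# Assumes fio is installed; the target device and timings below are
# illustrative placeholders, not our lab configuration.
import json
import subprocess

DEVICE = "/dev/nvme0n1"              # hypothetical device under test
QUEUE_DEPTHS = [1, 2, 4, 8, 16, 32]

for qd in QUEUE_DEPTHS:
    result = subprocess.run(
        [
            "fio",
            f"--name=seq-read-qd{qd}",
            f"--filename={DEVICE}",
            "--rw=read",              # sequential reads
            "--bs=128k",              # large-block transfers
            f"--iodepth={qd}",        # queue depth under test
            "--numjobs=1",            # a single worker
            "--direct=1",
            "--ioengine=libaio",
            "--time_based",
            "--runtime=60",
            "--output-format=json",
        ],
        capture_output=True, text=True, check=True,
    )
    bw_kib = json.loads(result.stdout)["jobs"][0]["read"]["bw"]  # KiB/s
    print(f"QD{qd}: {bw_kib / 1024:,.0f} MB/s")

Swapping --rw=read for --rw=randread and --bs=128k for --bs=4k approximates the random-read corner of the same sweep.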

The HighPoint SSD7101B-040T 4TB SSD sets a strong tone for this review. With a single worker, it delivers over 9,000 MB/s of sequential read performance. The exceptionally high result comes at a queue depth (QD) of 16, which is well beyond what you would see with a typical desktop workload. The drive delivers lower performance than a single 960 Pro at low queue depths, but throughput shoots up significantly once we reach QD4.

Sequential Write Performance

The sequential write performance test shows a different result. QD1 performance is in line with the single 960 Pro SSD, but by QD2 the SSD7101 again pulls away from the rest of the products. The HighPoint levels off around 8,000 MB/s at QD8 and holds steady through the remainder of the workload.

Random Read Performance

The random read performance test shows how poorly software-based RAID scales as we ramp up the workload. The HighPoint array still delivers very good low-queue-depth performance, but it's slightly slower than a single 960 Pro. We see good scaling up to QD8, but the array loses momentum at that point and eventually settles at just over 115,000 IOPS.

Random Write Performance

Random write performance takes the brunt of the software RAID inefficiency; the array only musters around 100,000 IOPS. The saving grace is that the array reaches that level of performance at just QD4. The SSD7101 array will not feel slow by any means; you just have to match it to the correct workload. This isn't a product for databases or transactional workloads.

80% Mixed Sequential Workload

We describe our mixed-workload testing in detail here and our steady-state tests here.

Now that we've established that the HighPoint SSD7101 accelerates sequential workloads, we can shift our focus to what you should use the drive for. Most sequential workloads feature large-block transfers. Audio and video creation/editing are the obvious target market for this product, but to take full advantage of the system, you have to go above and beyond what a home studio is capable of. This is the perfect storage device for real-time editing on local storage.

80% Mixed Random Workload

Again, the random read performance isn't bad with this product in this configuration. You're not missing a lot of performance compared to a single drive, but you aren't gaining anything in these workloads, either.

Sequential Steady-State

The sequential steady-state test writes 128KB sequential data for ten hours to fill the drive several times before we measure performance. This is the worst-case scenario for these SSDs with sequential data. It's also a common workload for studios recording at high bit rates.
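
As a rough sketch of that preconditioning step, a job along the lines below would drive the 128KB sequential fill; the device path and queue depth are assumptions for illustration, and only the block size and ten-hour duration come from the test description above.

# Sketch: fill the drive with 128KB sequential writes for ten hours so it
# is written over several times before steady-state performance is measured.
# The device path and queue depth are illustrative assumptions.
import subprocess

subprocess.run(
    [
        "fio",
        "--name=seq-steady-state-fill",
        "--filename=/dev/nvme0n1",   # hypothetical array device
        "--rw=write",                # sequential writes
        "--bs=128k",                 # 128KB blocks, per the test description
        "--iodepth=32",              # assumed queue depth
        "--numjobs=1",
        "--direct=1",
        "--ioengine=libaio",
        "--time_based",
        "--runtime=36000",           # ten hours in seconds
    ],
    check=True,
)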

Random Steady-State

You may start to doubt what we said about random performance. The results look very good, but for the same money you could buy an enterprise SSD designed for random workloads and extract more performance.

We don't publish RAID articles too often, but we examine performance consistency because it relates directly to RAID. The single 960 Pro 2TB delivers a flat performance line with only slight variation between its highest and lowest points. As you add drives, you magnify that variation by the number of drives in the array. The minor swings are visible on the single drive's line, and the array shows roughly four times that variation.
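
To put rough numbers on that reasoning, here is a toy calculation; the per-drive figures are invented for illustration and are not our measured results. If one drive swings between about 1,900 and 2,100 MB/s, a four-drive stripe that sums those swings can move between roughly 7,600 and 8,400 MB/s when the dips line up.

# Toy arithmetic: striping sums per-drive throughput, so the absolute
# variation grows with the number of drives when their dips coincide.
# The per-drive figures below are invented for illustration only.
single_low, single_high = 1900.0, 2100.0   # MB/s swing of one drive
drives = 4                                  # drives in the striped array

array_low = drives * single_low             # worst case: all drives dip together
array_high = drives * single_high
spread_single = single_high - single_low
spread_array = array_high - array_low

print(f"Single drive spread: {spread_single:.0f} MB/s")
print(f"{drives}-drive array spread: {spread_array:.0f} MB/s "
      f"({spread_array / spread_single:.0f}x the single drive)")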

PCMark 8 Real-World Software Performance

For details on our real-world software performance testing, please click here.

The lower random performance carries over to traditional desktop performance. We expected the HighPoint SSD7101 to lead the heavy Photoshop test. We went back to Futuremark's technical guide to see if we could spot the reason the HighPoint didn't break away. The test consists primarily of heavy sequential writes, but it also sprinkles in a healthy dose of random reads. That random access hurts the overall score.

Application Storage Bandwidth

The HighPoint SSD7101B-040T 4TB array still managed to outperform most of the other NVMe SSDs that we tested, but you can see the performance is slightly lower than a single 960 Pro when the drives aren't full.

PCMark 8 Advanced Workload Performance

To learn how we test advanced workload performance, please click here.

We fill the drives again, reduce the idle time for the heavy section of the test, and then give the drives five minutes of rest for the recovery stages. The steady performance during the tests shows the array is virtually immune to workload fatigue.

Total Service Time

Many of us will buy a larger SSD than we need to maintain high performance. With the HighPoint SSD7101, you can have your capacity and actually use it, too.

Disk Busy Time

The results really shouldn't come as a surprise. HighPoint took the best consumer SSD available and combined several to supersize performance. 

MORE: Best SSDs

MORE: How We Test HDDs And SSDs

MORE: All SSD Content

Comments from the forums
  • gasaraki
    So this is a "RAID controller" but you can only use software RAID on this? Wut? I want to be able to boot off of this and use it as the C drive. Does this have legacy BIOS on it so I can use it on systems without UEFI?
  • mapesdhs
    I'll tell you what to do with it: contact the DoD and ask them if you can run the same defense imaging test they used with an Onyx2 over a decade ago, and see if you can beat what that system was able to do. ;)

    http://www.sgidepot.co.uk/onyx2/groupstation.pdf

    More realistically, this product is ideal for GIS, medical, automotive and other application areas that involve huge datasets (including defense imaging), though the storage capacity still isn't high enough in some cases; it's getting there.

    I do wonder about the lack of power loss protection though; it seems like a bit of an oversight for a product that otherwise ought to appeal to pro users.

    Ian.
  • samer.forums
    Anonymous said:
    So this is a "RAID controller" but you can only use software RAID on this? Wut? I want to be able to boot off of this and use it as the C drive. Does this have legacy BIOS on it so I can use it on systems without UEFI?


    This is not a RAID controller; it uses software RAID. The PLX chip on it allows the system to "think" that the x16 slot is four independent x4 slots, all active and bootable. That's all. The rest is software.
  • samer.forums
    I see this product as very useful for people who want huge SSD storage that the motherboard will not give you, and for RAID 1. The real-life performance gain from NVMe SSD RAID can't be noticed...
  • samer.forums
    Hey HighPoint, if you are reading this, please make this card low profile and place 2 M.2 SSDs on each side of the card (total 4).
  • samer.forums
    Anonymous said:

    I do wonder about the lack of power loss protection though; it seems like a bit of an oversight for a product that otherwise ought to appeal to pro users.

    Ian.


    There is a U.2 version of this card; using it with U.2 NVMe SSDs that have built-in power-loss protection will solve this problem.

    link

    http://www.highpoint-tech.com/USA_new/series-ssd7120-overview.htm
  • lorfa
    Would have liked some xpoint thrown in for comparison
  • CRamseyer
    We're both still waiting for that.
  • JonDol
    I hope that the lack of power loss protection was the single reason this product didn't get an award. Could you please confirm it, or list other missing details/features needed for an award?

    The fact alone that such a pro product landed in the consumer arena, even if it is for a very selective category of consumers, is nothing short of an achievement.

    The explanation of how the M.2 traffic is routed through the DMI bus was very clear. I wonder if there is even a single motherboard that will route that traffic directly to the CPU. Does anyone know of such a motherboard?

    Cheers
  • samer.forums
    Anonymous said:
    I hope that the lack of power loss protection was the single reason this product didn't get an award. Could you please confirm it, or list other missing details/features needed for an award?

    The fact alone that such a pro product landed in the consumer arena, even if it is for a very selective category of consumers, is nothing short of an achievement.

    The explanation of how the M.2 traffic is routed through the DMI bus was very clear. I wonder if there is even a single motherboard that will route that traffic directly to the CPU. Does anyone know of such a motherboard?

    Cheers


    All motherboards have slots wired to the CPU lanes. For example, most Z-series boards come with two SLI slots at 8 lanes each; you can use one of them for the GPU and the other for any card, and so on.

    If the CPU has more than 16 lanes, you will have more slots connected to the CPU lanes, as on the X299 motherboards.
  • seekaliao
    This is not a new product; it has been around for a few years. Check out Amfeltec. They have had the same thing for a long time. Btw, you need the PLX chip if your board does not support bifurcation.

    If your board does, you can skip the cost and go for a Dell or HP card. It does not have a PLX chip. It's a very simple card: 16 lanes of PCIe, 4 lanes to each SSD, and it's cheap.

    http://barefeats.com/hard210.html

    5GB/s way back in 2015... it's PCIe 2.0 and AHCI though.
  • seekaliao
    http://amfeltec.com/squid-pci-express-carrier-boards-for-m-2-ssd-modules/

    The Amfeltec one. It's low profile, but there's no heatsink for the SSDs.
  • samer.forums
    Anonymous said:
    This is not a new product; it has been around for a few years. Check out Amfeltec. They have had the same thing for a long time. Btw, you need the PLX chip if your board does not support bifurcation.

    If your board does, you can skip the cost and go for a Dell or HP card. It does not have a PLX chip. It's a very simple card: 16 lanes of PCIe, 4 lanes to each SSD, and it's cheap.

    http://barefeats.com/hard210.html

    5GB/s way back in 2015... it's PCIe 2.0 and AHCI though.


    Sorry, it is a new product. It is NVMe, and we were waiting for NVMe. And it is bootable as well.
  • JonDol
    @samer.forums: I am aware of how slots are connected to the CPU, and yet, as the recent THW X299 MB roundup shows, unless I've read them too fast, only 2 of the almost 10 reviewed MBs have M.2 slots linked directly to the CPU: the AsRock Gaming i9 and the Gigabyte Aorus 7...
  • mikenth
    is that really such a big deal? as long as your mb supports enough PCIe lanes directly connected to the CPU, why not just use cheap M.2 -> PCIe adapter cards and get the full performance possible that way? most modern mb's have so much stuff built-in, there's hardly a need for the PCIe slots for most users other than for graphics cards. just fill those empty slots with M.2 adapters. I have an ancient X58 mb with three M.2 -> PCIe adapters, all populated with Samsung NVMe drives - one 950 for booting and two 960's for sw RAID1. The performance is phenomenal -- better on this 7 year old system than what is possible on new DMI-limited mb's. The secret sauce? X58 was a server-class chipset, like X99 / X299 / X399, and had more free PCIe lanes then than Intel-crippled 'consumer' chipsets do now. With any server-class chipset, you can have your cake and eat it too -- get the full x16 performance from one or even two GPUs, and still get the full x4 performance on several other slots populated with M.2 adapters. It will be some time before you can beat this configuration's cost and performance using on-board M.2 slots on a consumer mb (at least, an Intel consumer mb; AMD is less stingy and condescending about how many PCIe lanes a lowly enthusiast like you could possibly need).
  • JonDol
    Anonymous said:
    is that really such a big deal? as long as your mb supports enough PCIe lanes directly connected to the CPU, why not just use cheap M.2 -> PCIe adapter cards and get the full performance possible that way?


    Because I don't wish to waste PCIe slots that way, especially when those boards already come with 3 M.2 ports.
  • mikenth
    Anonymous said:
    Because I don't wish to waste PCIe slots that way, especially when those boards already come with 3 M.2 ports.


    1. You're not "wasting" a slot that is unused. If you have a full-size ATX mb the vast majority of users won't have a need for all the slots. If you want maximum M.2 performance, you just found a need for them.

    2. An M.2 slot on the motherboard with poor performance is not categorically different than trying to connect a fast SSD through the motherboard's USB ports, or plugging an x16 graphics card into an x1 slot. Just don't do it if you care about performance.

    This generation of motherboards just doesn't have fast M.2 on-board slots, so forget about them. Pretend they aren't even there. You seem to be hung up on the idea of having your cake and eating it too with a currently available mb with full-speed x4 direct to CPU M.2 slots. You can keep wishing they are plentiful, but it isn't going to solve your problem. Either wait until the next generation of mb's -- maybe X399 ones will have them? -- or go with the solution that is at hand today. Or suffer with substandard M.2 on-board perf. There is no fourth option, unless it is to whine about the state of affairs and do nothing.
  • JonDol
    Anonymous said:
    Anonymous said:
    Because I don't wish to waste PCIe slots that way, especially when those boards already come with 3 M.2 ports.


    1. You're not "wasting" a slot that is unused. If you have a full-size ATX mb the vast majority of users won't have a need for all the slots. If you want maximum M.2 performance, you just found a need for them.



    If it's not used today, that doesn't mean it will still be unused tomorrow. And too bad I'm not part of that "vast majority of users".

    Anonymous said:

    2. An M.2 slot on the motherboard with poor performance is not categorically different than trying to connect a fast SSD through the motherboard's USB ports, or plugging an x16 graphics card into an x1 slot. Just don't do it if you care about performance.


    I strongly doubt that the fastest SSD on USB will be as fast as the slowest M.2 SSD. And btw, one reason I read THW is to make sure not to buy that kind of poorly performing hardware.

    Anonymous said:

    This generation of motherboards just doesn't have fast M.2 on-board slots, so forget about them. Pretend they aren't even there.


    Oh really? What would it need to be a "fast M.2" slot? Are you dreaming of some magic super shortcut that would make them even faster? Why would they not be fast, since they are directly linked to the CPU just like the PCIe slots?
  • mapesdhs
    Anonymous said:
    ... I have an ancient X58 mb with three M.2 -> PCIe adapters, all populated with Samsung NVMe drives - one 950 for booting and two 960's for sw RAID1. The performance is phenomenal -- better on this 7 year old system than what is possible on new DMI-limited mb's. The secret sauce? X58 was a server-class chipset, like X99 / X299 / X399, and had more free PCIe lanes then than Intel-crippled 'consumer' chipsets do now. With any server-class chipset, you can have your cake and eat it too -- get the full x16 performance from one or even two GPUs, and still get the full x4 performance on several other slots populated with M.2 adapters. ...


    I've been doing that a lot recently with X79 setups, typically with cheaper SM951/SM961 drives (excellent cache/scratch drives for editing, etc.), but also 950 Pro or whatever for boot (there are a great many older ASUS boards for which modded BIOS files with NVMe boot support are now available). I bought a 1TB SM961 for my Z68/2700K setup; it works very well.

    There's one other advantage of using M.2 adapters in this way: for older mbds with limited native Intel SATA3 ports, it frees them up for more targeted use, in my case a port is linked to a front-bay hot swap so I can do live C-drive backups or access other drives for whatever reason (3rd party SATA3 controllers generally suck).

    A caveat though, albeit perhaps a minor issue: on some older mbds with lots of PCIe slots (mainly X58 and X79), some slots are routed via PCIe switches, which can add a little bit of latency. There's usually at least one x4 or x8 slot though which goes straight to the CPU, as can be seen for example in the following diagram of the ASUS P9X79-E WS (in a 4960X system I recently built, the directly connected x8 slot holds a 512GB SM961, which belts along at almost 3.5GB/sec):

    https://images.anandtech.com/doci/7613/Chipset%20Diagram.png

    In this case, as mikenth says, one can have several GPUs as well as a crazy fast NVMe as boot or some other purpose. My 4960X system has an SM951 256GB for the C-drive (only cost 65 UKP) and two 780 Ti GPUs for CUDA which both run at x16.

    There's even a P55 board one can do something like this with (not quite to the same extent, but still surprisingly potent), namely the ASUS P7P55 WS Supercomputer, which via two PLX switches supports x8/x8/x8/x8. I have two of these P7P55 WS boards, great for CUDA crunching; I used one of them with three 980s in SLI to bag all of the 3DMark P55 records (except for the DX12/VR stuff, as I'm only using Win7).

    There's a lot of life in older tech where one has plenty of PCIe lanes/slots to throw around. I have an Asrock X58 Extreme6 I plan on experimenting with at some point, not gotten round to it yet.

    Oh and btw, X79 boards usually support the dirt cheap 10-core XEON E5-2680 v2 (mine cost 165 UKP), a great way to get native PCIe 3.0 and a good performer for threaded tasks (scores 15.45 for CB 11.5 and 1389 for CB R15).

    This of course is why Intel and mbd vendors do not want older products to support bootable NVMe; it would extend their life, which means fewer people upgrading. Thank grud for BIOS mods.

    Lastly, the 950 Pro is a wonderful drive because it has its own boot ROM, so it can be used even on older mbds that cannot be flashed with a modded BIOS for bootable NVMe.

    Ian.
  • JonDol
    Anonymous said:
    Anonymous said:
    ... I have an ancient X58 mb with three M.2 -> PCIe adapters, all populated with Samsung NVMe drives - one 950 for booting and two 960's for sw RAID1. The performance is phenomenal -- better on this 7 year old system than what is possible on new DMI-limited mb's. The secret sauce? X58 was a server-class chipset, like X99 / X299 / X399, and had more free PCIe lanes then than Intel-crippled 'consumer' chipsets do now. With any server-class chipset, you can have your cake and eat it too -- get the full x16 performance from one or even two GPUs, and still get the full x4 performance on several other slots populated with M.2 adapters. ...


    I've been doing that a lot recently with X79 setups, typically with cheaper SM951/SM961 drives (excellent cache/scratch drives for editing, etc.), but also 950 Pro or whatever for boot (there are a great many older ASUS boards for which modded BIOS files with NVMe boot support are now available). I bought a 1TB SM961 for my Z68/2700K setup; it works very well.

    There's one other advantage of using M.2 adapters in this way: for older mbds with limited native Intel SATA3 ports, it frees them up for more targeted use, in my case a port is linked to a front-bay hot swap so I can do live C-drive backups or access other drives for whatever reason (3rd party SATA3 controllers generally suck).

    A caveat though, albeit perhaps a minor issue: on some older mbds with lots of PCIe slots (mainly X58 and X79), some slots are routed via PCIe switches, which can add a little bit of latency. There's usually at least one x4 or x8 slot though which goes straight to the CPU, as can be seen for example in the following diagram of the ASUS P9X79-E WS (in a 4960X system I recently built, the directly connected x8 slot holds a 512GB SM961, which belts along at almost 3.5GB/sec):

    https://images.anandtech.com/doci/7613/Chipset%20Diagram.png

    In this case, as mikenth says, one can have several GPUs as well as a crazy fast NVMe as boot or some other purpose. My 4960X system has an SM951 256GB for the C-drive (only cost 65 UKP) and two 780 Ti GPUs for CUDA which both run at x16.

    There's even a P55 board one can do something like this with (not quite to the same extent, but still surprisingly potent), namely the ASUS P7P55 WS Supercomputer, which via two PLX switches supports x8/x8/x8/x8. I have two of these P7P55 WS boards, great for CUDA crunching; I used one of them with three 980s in SLI to bag all of the 3DMark P55 records (except for the DX12/VR stuff, as I'm only using Win7).

    There's a lot of life in older tech where one has plenty of PCIe lanes/slots to throw around. I have an Asrock X58 Extreme6 I plan on experimenting with at some point, not gotten round to it yet.

    Oh and btw, X79 boards usually support the dirt cheap 10-core XEON E5-2680 v2 (mine cost 165 UKP), a great way to get native PCIe 3.0 and a good performer for threaded tasks (scores 15.45 for CB 11.5 and 1389 for CB R15).

    This of course is why Intel and mbd vendors do not want older products to support bootable NVMe; it would extend their life, which means fewer people upgrading. Thank grud for BIOS mods.

    Lastly, the 950 Pro is a wonderful drive because it has its own boot ROM, so it can be used even on older mbds that cannot be flashed with a modded BIOS for bootable NVMe.

    Ian.



    Thank you for all these details. Looks like you are crazy geeks with plenty of time to spare.

    With your examples I got better insight into our colleague mikenth's previous comment.

    I know there is a lot of potential in older quality hardware. Not only do I personally try to squeeze the maximum performance from it for my needs, but I also encourage everyone to do so, and the more I read about the security flaws in IME and IPMI, or the backdoor crap in cars, the longer I wish to keep my older hardware.