
HighPoint SSD7101 Series SSD Review

HighPoint's new SSD7101 series delivers up to 13,500 MB/s of sequential read performance and comes to market either pre-populated with high-speed SSDs or as a user-friendly DIY version that won't break the bank.

  1. So this is a "RAID controller," but you can only use software RAID on it? Wut? I want to be able to boot off of it and use it as the C: drive. Does it have a legacy BIOS so I can use it on systems without UEFI?
  2. I'll tell you what to do with it: contact the DoD and ask if you can run the same defense-imaging test they used with an SGI Onyx2 over a decade ago, and see if you can beat what that system was able to do. ;)

    http://www.sgidepot.co.uk/onyx2/groupstation.pdf

    More realistically, this product is ideal for GIS, medical, automotive, and other application areas that involve huge datasets (including defense imaging). The storage capacity still isn't high enough in some cases, but it's getting there.

    I do wonder about the lack of power-loss protection, though; it seems like a bit of an oversight for a product that otherwise ought to appeal to pro users.

    Ian.
  3. gasaraki said:
    So this is a "RAID controller," but you can only use software RAID on it? Wut? I want to be able to boot off of it and use it as the C: drive. Does it have a legacy BIOS so I can use it on systems without UEFI?


    This is not a RAID controller; it uses software RAID. The PLX chip on it simply lets the system see the x16 slot as four independent x4 slots, all of them active and bootable. That's all. The rest is software.
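
    For anyone curious what that software layer can look like in practice, here's a minimal Linux sketch using mdadm rather than HighPoint's bundled RAID management tools, so treat it as a generic illustration; the drive names are hypothetical.

        # Generic Linux software-RAID sketch for four NVMe drives behind the
        # card. Assumes they enumerate as /dev/nvme0n1..nvme3n1 (hypothetical
        # names; check lsblk). Run as root. DESTROYS data on those drives.
        import subprocess

        drives = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1", "/dev/nvme3n1"]

        # Stripe (RAID 0) across the four x4 devices for maximum throughput.
        subprocess.run(
            ["mdadm", "--create", "/dev/md0", "--level=0",
             f"--raid-devices={len(drives)}", *drives],
            check=True,
        )

        # Format and mount the array like any other block device.
        subprocess.run(["mkfs.ext4", "/dev/md0"], check=True)
        subprocess.run(["mount", "/dev/md0", "/mnt/raid0"], check=True)

    The point being: the card just exposes four plain NVMe devices, and any OS-level RAID (mdadm, Storage Spaces, or HighPoint's own driver) does the striping.
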
  4. I see this product as very useful for people who want huge SSD storage that the motherboard will not give you, and for RAID 1. In real life, the performance gain from NVMe SSD RAID can't really be noticed.
  5. Hey HighPoint, if you are reading this: make this card low-profile, please, and place two M.2 SSDs on each side of the card (four total).
  6. mapesdhs said:

    I do wonder about the lack of power-loss protection, though; it seems like a bit of an oversight for a product that otherwise ought to appeal to pro users.

    Ian.


    There is a U.2 version of this card; using it with U.2 NVMe SSDs that have built-in power-loss protection will solve this problem.

    link

    http://www.highpoint-tech.com/USA_new/series-ssd7120-overview.htm
  7. Would have liked to see some 3D XPoint thrown in for comparison.
  8. We're both still waiting for that.
  9. I hope the lack of power-loss protection was the only reason this product didn't get an award. Could you please confirm that, or list any other details/features it was missing for an award?

    The fact alone that such a pro product landed in the consumer arena, even if it is for a very selective category of consumers, is no small achievement.

    The explanation of how the M.2 traffic is routed through the DMI bus was very clear. I wonder if there is even a single motherboard that will route that traffic directly to the CPU. Does anyone know of such a motherboard?

    Cheers
  10. JonDol said:
    I hope the lack of power-loss protection was the only reason this product didn't get an award. ... I wonder if there is even a single motherboard that will route that traffic directly to the CPU. Does anyone know of such a motherboard?


    All motherboards have slots wired to the CPU lanes. For example, most Z-series boards come with two SLI-capable slots at eight lanes each; you can use one of them for the GPU and the other for any card, and so on.

    If the CPU has more than 16 lanes, you will have more slots connected to the CPU, as on X299 motherboards.
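
    To put numbers on why routing drives through the chipset's DMI link hurts, here's some back-of-envelope arithmetic (a sketch assuming PCIe 3.0 signalling and DMI 3.0 being equivalent to a PCIe 3.0 x4 link; real-world throughput lands somewhat lower):

        # Why chipset M.2 slots bottleneck four NVMe drives but a CPU x16
        # slot doesn't. PCIe 3.0: 8 GT/s per lane with 128b/130b encoding.
        GT_PER_LANE = 8e9
        ENCODING = 128 / 130
        lane_bw = GT_PER_LANE * ENCODING / 8        # bytes/s per lane, ~0.985 GB/s

        x4_drive = 4 * lane_bw                      # one NVMe SSD:  ~3.94 GB/s
        x16_slot = 16 * lane_bw                     # SSD7101 slot: ~15.75 GB/s
        dmi_link = 4 * lane_bw                      # shared by ALL chipset devices

        print(f"per drive (x4): {x4_drive / 1e9:.2f} GB/s")
        print(f"CPU x16 slot  : {x16_slot / 1e9:.2f} GB/s")
        print(f"DMI ceiling   : {dmi_link / 1e9:.2f} GB/s")

    Four drives behind DMI share roughly one drive's worth of bandwidth, while a direct x16 slot has the headroom for the review's 13,500 MB/s figure.
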
  11. This is not a new product; it has been around for a few years. Check out Amfeltec, they had the same thing long ago. Btw, you need the PLX chip if your board does not support bifurcation.

    If your board does, you can skip that cost and go for a Dell or HP card, which does not have a PLX chip. It's a very simple card: 16 lanes of PCIe, four lanes to each SSD, and it's cheap.

    http://barefeats.com/hard210.html

    5 GB/s way back in 2015... it's PCIe 2.0 and AHCI, though.
  12. http://amfeltec.com/squid-pci-express-carrier-boards-for-m-2-ssd-modules/

    The Amfeltec one. It's low-profile, but there's no heatsink for the SSDs.
  13. seekaliao said:
    This is not a new product; it has been around for a few years. Check out Amfeltec... Btw, you need the PLX chip if your board does not support bifurcation. ...


    Sorry, it is a new product: it's NVMe, and we were waiting for NVMe. And it's bootable as well.
  14. @samer.forums: I am aware of how slots are connected to the CPU, and yet, as the recent THW X299 motherboard roundup shows (unless I've read it too fast), only 2 of the almost 10 reviewed boards have M.2 slots linked directly to the CPU: the ASRock Gaming i9 and the Gigabyte Aorus 7...
  15. Is that really such a big deal? As long as your mb supports enough PCIe lanes directly connected to the CPU, why not just use cheap M.2 -> PCIe adapter cards and get the full performance possible that way? Most modern mbs have so much stuff built in that there's hardly a need for the PCIe slots for most users other than for graphics cards. Just fill those empty slots with M.2 adapters.

    I have an ancient X58 mb with three M.2 -> PCIe adapters, all populated with Samsung NVMe drives: one 950 for booting and two 960s for software RAID 1. The performance is phenomenal -- better on this 7-year-old system than what is possible on new DMI-limited mbs. The secret sauce? X58 was a server-class chipset, like X99/X299/X399, and had more free PCIe lanes then than Intel-crippled 'consumer' chipsets do now.

    With any server-class chipset, you can have your cake and eat it too -- get the full x16 performance from one or even two GPUs, and still get the full x4 performance on several other slots populated with M.2 adapters. It will be some time before you can beat this configuration's cost and performance using on-board M.2 slots on a consumer mb (at least, an Intel consumer mb; AMD is less stingy and condescending about how many PCIe lanes a lowly enthusiast like you could possibly need).
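
    If you go the adapter route, it's worth sanity-checking that each drive actually trained at the expected link speed and width. A minimal Linux sketch, assuming the usual sysfs layout (on an X58 board you'd expect 5.0 GT/s, i.e. PCIe 2.0, at x4):

        # Print the negotiated PCIe link speed/width of every NVMe controller.
        import glob
        import os

        def attr(dev, name):
            with open(os.path.join(dev, name)) as f:
                return f.read().strip()

        for ctrl in sorted(glob.glob("/sys/class/nvme/nvme[0-9]*")):
            pci_dev = os.path.realpath(os.path.join(ctrl, "device"))
            print(f"{os.path.basename(ctrl)}: "
                  f"{attr(pci_dev, 'current_link_speed')} "
                  f"x{attr(pci_dev, 'current_link_width')} "
                  f"(max {attr(pci_dev, 'max_link_speed')} "
                  f"x{attr(pci_dev, 'max_link_width')})")
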
  16. mikenth said:
    Is that really such a big deal? As long as your mb supports enough PCIe lanes directly connected to the CPU, why not just use cheap M.2 -> PCIe adapter cards and get the full performance possible that way?


    Because I don't wish to waste PCIe slots that way, especially when those boards already come with three M.2 ports.
  17. JonDol said:
    Because I don't wish to waste PCIe slots that way, especially when those boards already come with three M.2 ports.


    1. You're not "wasting" a slot that would otherwise sit unused. On a full-size ATX mb, the vast majority of users won't have a need for all the slots; if you want maximum M.2 performance, you just found a need for them.

    2. An M.2 slot on the motherboard with poor performance is not categorically different from trying to connect a fast SSD through the motherboard's USB ports, or plugging an x16 graphics card into an x1 slot. Just don't do it if you care about performance.

    This generation of motherboards just doesn't have fast M.2 on-board slots, so forget about them. Pretend they aren't even there. You seem to be hung up on the idea of having your cake and eating it too with a currently available mb with full-speed, direct-to-CPU x4 M.2 slots. You can keep wishing they were plentiful, but that isn't going to solve your problem. Either wait for the next generation of mbs -- maybe X399 ones will have them? -- or go with the solution that is at hand today. Or suffer with substandard M.2 on-board performance. There is no fourth option, unless it is to whine about the state of affairs and do nothing.
  18. mikenth said:
    JonDol said:
    Because I don't wish to waste PCIe slots that way, especially when those boards already come with three M.2 ports.


    1. You're not "wasting" a slot that would otherwise sit unused. On a full-size ATX mb, the vast majority of users won't have a need for all the slots; if you want maximum M.2 performance, you just found a need for them.



    If it's not used today, that doesn't mean it will still be unused tomorrow. And too bad, I'm not part of that "vast majority of users".

    mikenth said:

    2. An M.2 slot on the motherboard with poor performance is not categorically different from trying to connect a fast SSD through the motherboard's USB ports, or plugging an x16 graphics card into an x1 slot. Just don't do it if you care about performance.


    I strongly doubt that the fastest SSD over USB will be as fast as the slowest M.2 SSD. And btw, one reason I read THW is to make sure I don't buy that kind of poorly performing hardware.

    mikenth said:

    This generation of motherboards just doesn't have fast M.2 on-board slots, so forget about them. Pretend they aren't even there.


    Oh really? What would it take to be a "fast M.2" slot? Are you dreaming of some magic super shortcut that would make them even faster? Why would they not be fast, since they are directly linked to the CPU just like the PCIe slots?
  19. mikenth said:
    ... I have an ancient X58 mb with three M.2 -> PCIe adapters, all populated with Samsung NVMe drives: one 950 for booting and two 960s for software RAID 1. The performance is phenomenal -- better on this 7-year-old system than what is possible on new DMI-limited mbs. ... With any server-class chipset, you can have your cake and eat it too -- get the full x16 performance from one or even two GPUs, and still get the full x4 performance on several other slots populated with M.2 adapters. ...


    I've been doing that a lot recently with X79 setups, typically with cheaper SM951/SM961 drives (excellent cache/scratch drives for editing, etc.), but also a 950 Pro or whatever for boot (there are a great many older ASUS boards for which modded BIOS files with NVMe boot support are now available). I bought a 1TB SM961 for my Z68/2700K setup; it works very well.

    There's one other advantage of using M.2 adapters in this way: on older mbds with limited native Intel SATA3 ports, it frees those ports up for more targeted use. In my case, a port is linked to a front-bay hot-swap so I can do live C-drive backups or access other drives for whatever reason (3rd-party SATA3 controllers generally suck).

    A caveat, though, albeit perhaps a minor issue: on some older mbds with lots of PCIe slots (mainly X58 and X79), some slots are routed via PCIe switches, which can add a little latency. There's usually at least one x4 or x8 slot, though, which goes straight to the CPU, as can be seen for example in the following diagram of the ASUS P9X79-E WS (in a 4960X system I recently built, the directly connected x8 slot holds a 512GB SM961, which belts along at almost 3.5 GB/sec):

    https://images.anandtech.com/doci/7613/Chipset%20Diagram.png

    In this case, as mikenth says, one can have several GPUs as well as a crazy-fast NVMe drive for boot or some other purpose. My 4960X system has a 256GB SM951 for the C-drive (it only cost 65 UKP) and two 780 Ti GPUs for CUDA, both of which run at x16.
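
    On the switch caveat above: a rough way to check on Linux whether a given drive sits behind a PCIe switch rather than on a root port is to count the bridge hops in its sysfs path (a sketch; the device name is hypothetical, and note this doesn't distinguish CPU root ports from chipset ones):

        # Each 0000:xx:yy.z component in the resolved sysfs path is one
        # PCIe hop; more than two (root port + endpoint) implies a switch.
        import os
        import re

        pci_path = os.path.realpath("/sys/class/nvme/nvme0/device")
        hops = re.findall(r"[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]", pci_path)
        print(" -> ".join(hops))
        print("behind a switch" if len(hops) > 2 else "direct to a root port")
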

    There's even a P55 board one can do something like this with (not quite to the same extent, but still surprisingly potent), namely the ASUS P7P55 WS Supercomputer, which via two PLX switches supports x8/x8/x8/x8. I have two of these P7P55 WS boards, great for CUDA crunching; I used one of them with three 980s in SLI to bag all of the 3DMark P55 records (except for the DX12/VR stuff, as I'm only using Win7).

    There's a lot of life in older tech where one has plenty of PCIe lanes/slots to throw around. I have an ASRock X58 Extreme6 I plan on experimenting with at some point; I've not gotten round to it yet.

    Oh, and btw, X79 boards usually support the dirt-cheap 10-core Xeon E5-2680 v2 (mine cost 165 UKP), a great way to get native PCIe 3.0 and a good performer for threaded tasks (it scores 15.45 in CB 11.5 and 1389 in CB R15).

    This of course is why Intel and mbd vendors do not want older products to support bootable NVMe: it would extend their life, which means fewer people upgrading. Thank grud for BIOS mods.

    Lastly, the 950 Pro is a wonderful drive because it has its own boot ROM, so it can be used even on older mbds that cannot be flashed with a modded BIOS for bootable NVMe.

    Ian.
  20. mapesdhs said:
    ... I've been doing that a lot recently with X79 setups ... There's a lot of life in older tech where one has plenty of PCIe lanes/slots to throw around. ...

    Ian.



    Thank you for all these details. Looks like you are crazy geeks with plenty of time to spare.

    Your examples gave me better insight into our colleague mikenth's previous comment.

    I know there is a lot of potential in older quality hardware. Not only do I personally try to squeeze the maximum performance from it for my needs, but I also encourage everyone to do so, and the more I read about the security flaws in IME and IPMI, or the backdoor crap in cars, the longer I wish to keep my older hardware.
  21. JonDol said:
    Thank you for all these details. Looks like you are crazy geeks with plenty of time to spare.

    I wish. :D A couple of projects I've been working on have stretched out to well over a year, due to two family bereavements and other stuff going on. I actually have very little time to meddle with such things (if only I did, I'd be running a lot more tests to provide useful data). I have hundreds of 3DMark results I've not yet had a chance to tabulate and write up.


    JonDol said:
    Your examples gave me better insight into our colleague mikenth's previous comment.

    Feel free to PM/email or post any other questions.


    JonDol said:
    ... the longer I wish to keep my older hardware.

    Understandable. There is of course a limit to how far one can go with this, especially if one wishes to take advantage of some aspect of newer tech that isn't possible on an older mbd, e.g. a board that just doesn't play ball with Win10, which means DX12 can't be used.

    Others might simply want to take advantage of the higher IPC of newer CPUs, or their far greater threaded performance despite the upfront cost. People do tend to overestimate how outdated older hw must be, but then it's not as if the industry as a whole encourages people to minimise their waste by getting the most out of what they have before upgrading. There's also the issue of warranty cover: I'm a great fan of exploiting used parts (just check my eBay history), but some will prefer having a proper warranty. Still, if current pricing of newer tech is a problem, intermediate steps such as Z68/X79/Z97/etc. can be very cost-effective, as my LE build shows.

    This is why I spent some time running tests with mixes of old and new hw, since they're more representative of the real-world decisions people face (though I've not been able to do much more of that this year). Is it worth putting a newer GPU on an old board? What would happen if I upgraded the platform but kept the GPU, i.e. is my CPU holding me back? Sometimes sites do roundups focusing on a particular product, e.g. Gamers Nexus has done revisits of the 2500K and 2600K, but they can't cover them all; there are too many possibilities. Neither can I, as I don't have the time, and more recently I've not been able to get any newer used GPUs because even 2nd-hand prices are just too high (I lack any later AMD GPUs, which is annoying).

    Btw, if anyone knows how to get the Call of Juarez benchmark working again, please get in touch.

    Ian.

    PS. For my text-only pages, make sure Page Style is set to "No Style" in the View menu when viewing in Firefox. Not sure how other browsers would handle it.