Advanced RAID Setup - Help! Bad Performance Results

Aug 9, 2018
I’m in need of advice on the struggling performance of a custom NAS I finally finished building. It took me a month to design and build, and the performance is awful. I could have bought three QNAP Thunderbolt NAS units for what this cost me to build, so I am really kicking myself right now.

With so much money spent, I’ll spend more to fix it properly.

Specs of the NAS
Thermaltake Tower 900
i7-8700K, 64GB RAM
Asus Prime Z370 mobo
Gigabyte Thunderbolt card
Dual 10GbE cards
OS drive -> WD Black NVMe
OS -> Windows 10
Storage -> 6x 1TB Samsung EVO SATA SSDs (internal)
Storage -> 8x Seagate IronWolf 8TB hard drives, connected via an Ableconn PEX-10 SATA PCIe card

The SSDs are in RAID 5 and performing fine, getting about 1,100 MB/s read and write out of that array.

The main problem is the IronWolf array. Eight drives in RAID 10 configured with Storage Spaces gave me 300 MB/s read and 180 MB/s write. For fun I also tried RAID 0, and it capped out at 600 MB/s read and 220 MB/s write.

That performance is dismal for 8 IronWolf drives.

By my calculations, I should be getting at least 900 MB/s read and 800 MB/s write in RAID 10.
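For reference, a rough back-of-envelope sketch of what an 8-drive RAID 10 of spinners could sustain (the ~180 MB/s per-drive figure is an assumed rate for a 7200 rpm IronWolf, not something I measured):

```python
# Rough sequential-throughput ceiling for an 8-drive RAID 10
# (4 mirrored pairs, striped). Per-drive speed is an assumption.
PER_DRIVE_MB_S = 180
DRIVES = 8
PAIRS = DRIVES // 2

# Reads can be served by either member of each mirror, so the ideal
# ceiling uses all drives; a conservative estimate uses one per pair.
read_ceiling = DRIVES * PER_DRIVE_MB_S      # ~1440 MB/s ideal
read_conservative = PAIRS * PER_DRIVE_MB_S  # ~720 MB/s worst case

# Every write lands on both members of a pair, so only the pairs count.
write_ceiling = PAIRS * PER_DRIVE_MB_S      # ~720 MB/s

print(f"RAID 10 read:  {read_conservative}-{read_ceiling} MB/s")
print(f"RAID 10 write: ~{write_ceiling} MB/s")
```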

What can I do?
 

popatim

Titan
Moderator
Get a better SATA adapter.
That's a PCIe Gen 2 adapter and it only uses two PCIe lanes, which limits your theoretical max speed to 1000 MB/s. Sadly, this card also has design issues, as you might know if you read the reviews. If you can return it, I would, and purchase a used server-grade card. For RAID 10 a used LSI 9220 is all you need, but don't even think about using RAID 5 on it (slow). I use an IBM M1015. Writes will be slower due to the lack of a battery backup, so if that is a priority for you, consider stepping up to something based on the LSI 9265. As a bonus it will also do RAID 5/6 much, much better than the 9220.
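Quick math on that ceiling (per-lane figures are the usual usable rates after encoding overhead; the x8 line is just for comparison against a typical server HBA slot):

```python
# Usable PCIe bandwidth per lane, per generation, after encoding overhead
# (Gen 1/2 use 8b/10b, Gen 3 uses 128b/130b).
USABLE_MB_S_PER_LANE = {1: 250, 2: 500, 3: 985}

def pcie_ceiling(gen, lanes):
    return USABLE_MB_S_PER_LANE[gen] * lanes

print(pcie_ceiling(2, 2))  # ~1000 MB/s - the Gen 2 x2 adapter's hard cap
print(pcie_ceiling(2, 8))  # ~4000 MB/s - a typical x8 server-grade HBA
```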
 
Aug 9, 2018
I’ll pick up the LSI card. I have a 4-port LSI and a few other cards that worked well, but I needed 10 ports on one card, as I didn’t want to waste a PCIe slot.

This isn’t for super-fast storage, though. I have an additional SSD RAID and NVMe RAID configured in Windows that perform incredibly well.

This one is acting as a file server over 10GbE. I figured the PCIe 2 card was OK, given that the transport is 10GbE. My goal was 800-900 MB/s for this one.
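Rough math on what the 10GbE link itself can carry (the ~10% protocol overhead is just an assumption for TCP/IP plus SMB framing, not a measured figure):

```python
# What a 10GbE link can carry before the array becomes the bottleneck.
LINK_GBPS = 10
raw_mb_s = LINK_GBPS * 1000 / 8      # 1250 MB/s on the wire
protocol_overhead = 0.10             # assumed TCP/IP + SMB overhead
usable_mb_s = raw_mb_s * (1 - protocol_overhead)

print(f"Raw 10GbE:     {raw_mb_s:.0f} MB/s")    # 1250 MB/s
print(f"Usable (est.): {usable_mb_s:.0f} MB/s")  # ~1125 MB/s
```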

An 8- or 10-port LSI, or dual 4-port cards, should be fine.

Thanks for the input!
 

g-unit1111

Titan
Moderator
Not to nitpick or anything but you don't really need an 8700K and 64GB of RAM for a NAS. You could seriously do one on a Ryzen 2200G and 4GB of RAM. In a NAS environment the storage does all the work and you don't really need a strong CPU or a ton of RAM to manage that many hard drives.
 

RealBeast

Titan
Moderator
Agree with all of the above, but if you need a RAID card with more than 8 internal ports, look at the Adaptec 71605: it supports 16 internal drives without an expander (4x 4-SATA outputs using the proper cables) and has good performance. The best part is that there are tons of used models on eBay that go for a little over $100 with cables.

As with all such cards, you need an x8 PCIe slot (so in practice an x16 slot running at x8 on a consumer-level motherboard).

With that many large drives, I would use RAID 6 (which is quite slow for writes) or at least RAID 5 with a global hot spare, due to the chance of a URE after one drive dies. Also follow good backup practices, as RAID is not a backup.
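To put a number on that URE risk, here is a back-of-envelope sketch assuming datasheet-style error rates (1 per 1e14 bits is a common consumer spec, 1 per 1e15 is common for NAS drives; check your drives' actual rating):

```python
import math

# Odds of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID 5 of 8x 8TB drives: the 7 surviving drives
# must be read end to end. URE rates below are assumed datasheet values.
SURVIVING_DRIVES = 7
DRIVE_TB = 8
bits_to_read = SURVIVING_DRIVES * DRIVE_TB * 1e12 * 8

for ure_rate in (1e-14, 1e-15):
    p_clean = math.exp(-bits_to_read * ure_rate)  # Poisson approximation
    print(f"URE rate {ure_rate:.0e}: ~{(1 - p_clean) * 100:.0f}% chance of "
          f"hitting a URE during the rebuild")
```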
 
Aug 9, 2018
Thanks for the additional info!

I know it is overkill; I’ll use it for other things as well. I built a new system and had this hardware left over, so I figured I’d repurpose it instead of selling it.

This RAID will only contain about 2TB of critical data. For the most part it is going to be used for fast recovery of several OSes, my music design files, and music files. I am using it because the last time I needed to restore an OS, restoring from both cloud and a network drive took almost a full day. I wanted something I could use for recovery in under an hour.
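For what it's worth, the rough restore-time math at a few illustrative sustained transfer rates (assumed rates, not measurements):

```python
# Approximate time to restore ~2 TB of data at assumed sustained rates.
DATA_TB = 2
data_mb = DATA_TB * 1e6

for label, mb_s in [("gigabit LAN (~110 MB/s)", 110),
                    ("current array (~300 MB/s)", 300),
                    ("target array (~900 MB/s)", 900)]:
    hours = data_mb / mb_s / 3600
    print(f"{label}: ~{hours:.1f} h")  # ~5.1 h, ~1.9 h, ~0.6 h
```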

This backs up to cloud, network storage, and rotating external storage that is moved offsite once per month.

Would I find much benefit from adding a few SSDs for caching? I have three or so 512GB SATA SSDs that are collecting dust at the moment.
 

RealBeast

Titan
Moderator
An SSD cache for a RAID array will improve write performance, probably more noticeably with RAID 6 than RAID 5, but you would need a RAID card that allows adding a cache drive or two, and those are a fair bit more expensive across all brands, usually around $200-300 more for any particular card. I really don't think that would be of much benefit to you.
 

USAFRet

Titan
Moderator


The issue seems to be not the speed of your NAS hardware, but rather your backup and recovery procedures.

Anything less than full-drive image backups, kept locally, will take forever to restore from.
My system has 5 drives. All SSD, totalling a little over 2TB. Each backed up individually, nightly. Either Full or Incremental images, all automated.
All going to a small Qnap NAS box w/ 4x 4TB Seagate Ironwolf, RAID 5.

Recovering any individual drive from "Last Night" is about 20 minutes.
The whole system can be completely up and running on all new drives in under 2 hours.

Read more here: http://www.tomshardware.com/forum/id-3383768/backup-situation-home.html
 
Solution

FireWire2

Distinguished
It bothers me to hear people keep saying that RAID 5 or RAID 6 is slow. Of course it's slow if you do not know what you are doing.
Here is my RAID 6 with twelve (12) SATA drives. Note this is WITHOUT tuning it up just yet.

Over 3,000 MB/s on RAID 6 with 12 drives

Just like everything, you have to dig in and learn how.

@thedrewnorth, your 6x 1TB Samsung array should be around 3,000 MB/s as well, and the 8x 8TB array should be around 2,200 MB/s, if you are using the right RAID card.
 
No one said RAID 5/6 itself was inherently slow. (Although it would be interesting to know how 10 effective spinning drives could achieve 3,000 MB/s for anything beyond a few small writes quickly dumped into a cache, as opposed to sustained throughput, given that most 7200 rpm NAS drives max out at approximately 175 MB/s each, based on CrystalDiskMark sequential 32K reads/writes.)
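A rough ceiling check, assuming ~175 MB/s sustained per drive (an assumption, not a measurement of those particular drives):

```python
# Sustained sequential ceiling for a 12-drive RAID 6: two drives' worth of
# capacity goes to parity, so ~10 data spindles contribute to throughput.
PER_DRIVE_MB_S = 175   # assumed 7200 rpm NAS drive sustained average
TOTAL_DRIVES = 12
PARITY_DRIVES = 2

sustained_ceiling = (TOTAL_DRIVES - PARITY_DRIVES) * PER_DRIVE_MB_S
print(f"Sustained sequential ceiling: ~{sustained_ceiling} MB/s")  # ~1750 MB/s
# Numbers far above that likely reflect the controller's DRAM cache.
```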

But I doubt you will find many folks *using Windows Storage Spaces' managed RAID 5* through an inexpensive host bus adapter exceeding the read/write speeds currently achieved...