SSDs for Lenovo server vs 10K drives

mikeseib

Prominent
Feb 23, 2017
4
0
510
We are using sixteen 10K drives in RAID 10. We need to upgrade the storage, and it has been suggested we use Samsung SSDs; they would be in the same configuration but 500GB each.

Has anyone done this? Any information about performance of the SSDs compared to the current setup would be appreciated.
 

mikeseib

Prominent
Feb 23, 2017
4
0
510





Could you give me the part number of the SSD you are using?
 

LordLuciendar

Distinguished
Mar 23, 2011
35
0
18,540


If your question is between the Evo and the Pro, then yes, there is a huge difference with regard to server loads. Both are consumer drives, designed for workstation workloads and workstation-level longevity, and (depending on your server's workload) you can easily chew through them in a server. It isn't really Samsung's fault when the drives start failing a few years in after you've written a petabyte to them. The PM863 and SM863 are the enterprise-grade drives, intended for servers. Longevity is measured in TB written (TBW).

Samsung 850 Evo 1TB: 150TBW (150TBW/TB)
Samsung 850 Pro 1TB: 300TBW (300TBW/TB)
Samsung 850 Pro 2TB: 450TBW (225TBW/TB)
Samsung 850 Pro 4TB: 600TBW (150TBW/TB)
Samsung PM863 960GB: 1400TBW (1458TBW/TB)
Samsung PM863 1.92TB: 2800TBW (1458TBW/TB)
Samsung PM863 3.84TB: 5600TBW (1458TBW/TB)
Samsung SM863 960GB: 6160TBW (6417TBW/TB)
Samsung SM863 1.92TB: 12320TBW (6417TBW/TB)

I absolutely vouch for the Samsung SSDs; nothing else compares in SSD technology and quality. We use them in all of the servers and workstations we build. Just do yourself a favor and go with the enterprise-grade drives.
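
For a rough sense of what those TBW ratings mean in practice, here's a quick back-of-the-envelope sketch; the 0.5TB/day figure is just an assumed write volume for illustration, so plug in your own:

```
# Hypothetical endurance comparison: years of life at an assumed write rate.
drives = {
    "850 Evo 1TB":  (1.00, 150),    # (capacity in TB, rated TBW)
    "850 Pro 1TB":  (1.00, 300),
    "PM863 960GB":  (0.96, 1400),
    "SM863 960GB":  (0.96, 6160),
}
daily_writes_tb = 0.5  # assumption: ~500GB written to the drive per day

for name, (capacity_tb, tbw) in drives.items():
    per_tb = tbw / capacity_tb
    years = tbw / daily_writes_tb / 365
    print(f"{name}: {per_tb:.0f} TBW/TB, ~{years:.1f} years at {daily_writes_tb}TB/day")
```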
 
Solution

LordLuciendar

Distinguished
Mar 23, 2011
35
0
18,540


I realize now you meant between Intel and Samsung. Intel is good in terms of reliability, but I find them behind in terms of technology and outright speed, and massively overpriced most of the time.

For comparison:
Intel 730 480GB: 128TBW (267TBW/TB)

I measure the TBW per TB because, ultimately, you're going to write twice as much data to a 1TB drive as you will to a 500GB drive if you actually fill it, so the extra TBW only compensates for the additional space if you use all of the drive. If you go overboard and use 1TB drives when you could get away with 500GB drives, you'll obviously extend the longevity considerably, since the same write volume is spread across a larger TBW budget.
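
To put numbers on that last point, here's a hypothetical sketch for a 16-drive RAID 10 (8 mirrored pairs); the TBW/TB rating and the 2TB/day array write volume are assumptions, not measurements:

```
# With a fixed daily write volume, doubling drive capacity at the same
# TBW/TB rating roughly doubles the array's endurance.
tbw_per_tb = 300            # assumed rating, 850 Pro class
array_daily_writes_tb = 2   # assumed logical writes hitting the array per day
mirrored_pairs = 8          # 16-drive RAID 10

for drive_tb in (0.5, 1.0):
    drive_tbw = tbw_per_tb * drive_tb
    # each logical write is striped across 8 pairs, so the array's logical
    # write budget is roughly 8x a single drive's TBW
    array_budget = drive_tbw * mirrored_pairs
    years = array_budget / array_daily_writes_tb / 365
    print(f"{drive_tb}TB drives: ~{array_budget:.0f}TB logical writes, ~{years:.1f} years")
```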
 

LordLuciendar

Distinguished
Mar 23, 2011
35
0
18,540
I'm going to take a wild stab and guess you're running a Lenovo x3650 M4 (based on your 16-drive statement). That would make your storage controller (assuming it's the one it shipped with) a Lenovo ServeRAID M5110, which is PCIe 3.0 x8 (for a total bandwidth cap of about 7.7GB/s). Assuming you go for SM863s or PM863s, that would put you at roughly 4.1GB/s theoretical maximum read bandwidth for a 16-drive RAID 10, or 8.1GB/s for RAID 0.
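
Here's the rough arithmetic behind those figures; the ~510MB/s per-drive sequential read is an assumption based on published PM863/SM863 specs, and RAID 10 reads are counted off one side of each mirror:

```
# Theoretical read bandwidth vs the controller's PCIe 3.0 x8 cap.
controller_cap_gbs = 7.7     # usable PCIe 3.0 x8 bandwidth, GB/s
drive_read_mbs = 510         # assumed sequential read per SSD
drives = 16

raid0 = drives * drive_read_mbs / 1000          # all 16 drives streaming
raid10 = (drives // 2) * drive_read_mbs / 1000  # one drive per mirrored pair

print(f"RAID 10 read: ~{raid10:.1f} GB/s")
print(f"RAID 0 read:  ~{raid0:.1f} GB/s, capped at {min(raid0, controller_cap_gbs)} GB/s by the controller")
```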

Even with SSDs, though, with almost any load you're going to hit IOPS limits first. You'll be lucky to get half the theoretical max on a typical file-server workload, and even less with a load like a database server. Not to mention the bottleneck of transport bandwidth: two aggregated 10Gbps links are still only 2.5GB/s, and you lose about 10% to overhead on even the most efficient network link, so call it 2.25GB/s.
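
And the transport side, for completeness (the 10% overhead figure is the same assumption as above):

```
# Usable throughput over 2x aggregated 10Gbps links.
links, link_gbps = 2, 10
raw_gbs = links * link_gbps / 8      # 2.5 GB/s in gigabytes
usable_gbs = raw_gbs * 0.9           # ~10% protocol overhead assumed
print(f"~{usable_gbs:.2f} GB/s usable over the wire")
```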

I keep telling my customers, though: it's worth using SSDs even if you bottleneck the theoretical bandwidth all the way down to the same speed as an HDD. Let's say you run 2x 1TB SATA 3Gbps HDDs and 2x 1TB 6Gbps SSDs in two identical servers, both with SATA 3Gbps controllers, and you access both sets of data over gigabit network connections (so a 125MB/s bottleneck, in the real world closer to 110MB/s on Intel NICs and 90MB/s on Realtek NICs). Your HDD-based RAID 0 array can hit a 200MB/s theoretical max (100MB/s per disk, limited by the platters), and your SSD-based array can hit 1GB/s (600MB/s limited by the 3Gbps SATA interface), but both are limited to the NIC transfer speed, so when transferring a single large file (let's say a 1TB VHD), the performance is nearly identical.

However, you can enumerate the files in the folder with that VHD instantly on the SSDs; on the HDDs it might take 10 seconds. When you add additional workloads (as will be present in just about any server environment), the HDDs' arm movement slows the disks down to a crawl (4K QD32 on the HDD array drops to about 3MB/s), whereas the SSD array is unfazed (4K QD32 on the SSD array is still way over the bottleneck point at 800MB/s+). Not to mention that the lack of moving parts lowers vibration in the chassis, helps the system run cooler, and drops power consumption, putting less stress on the power supply and the electric bill.
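
If it helps, here's the same comparison reduced to arithmetic; every figure is the assumption from the example above, not a benchmark:

```
# HDD vs SSD arrays behind a gigabit NIC: sequential copies are NIC-bound,
# mixed/random load is where they diverge.
nic_mbs = 110                  # real-world gigabit throughput, good NIC
hdd_seq, ssd_seq = 200, 600    # MB/s: 2-disk HDD RAID 0 vs SATA-3Gbps-capped SSDs
hdd_4k, ssd_4k = 3, 800        # MB/s at 4K QD32 under a mixed load
file_mb = 1_000_000            # the 1TB VHD

for name, seq in (("HDD", hdd_seq), ("SSD", ssd_seq)):
    hours = file_mb / min(nic_mbs, seq) / 3600
    print(f"{name} array, 1TB copy over gigabit: ~{hours:.1f} hours")

print(f"4K QD32: HDD ~{hdd_4k}MB/s vs SSD ~{ssd_4k}MB/s ({ssd_4k // hdd_4k}x)")
```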