SSD RAID 0 slower to boot than single drive

awg0681

Honorable
May 23, 2012
Alrighty, my quandary is that my 2 SSDs in RAID 0 are slower booting into Windows 7 64-bit than a couple of other systems that just have a single SSD. Unless I'm somehow mistaken, shouldn't 2 SSDs in RAID 0 boot up faster than a single drive? I'm not including POST times here; I'm talking about from when "Starting Windows" appears on the screen and the orbs start flying in to form the Windows logo. On the two systems with single SSDs, the orbs don't even finish forming the logo before the desktop appears. They take roughly 8-10 seconds to reach the desktop from "Starting Windows," whereas my RAID system takes more like 45 seconds.

I have other, larger storage drives in the machine, and it doesn't seem to matter whether they're connected or not; the boot time is the same. I recently upgraded the system and reinstalled Windows 7 with different stripe sizes: 32 KB, 64 KB, and 128 KB. 32 KB seemed slowest, while 64 KB and 128 KB were comparable, so I settled on 128 KB for now. I'm just wondering what I'm doing incorrectly to make my system take longer than the others to boot up. Any ideas? System specs below.

RAID System
i7 3770K
Asus P8Z77-V Deluxe
2 x OCZ Vertex 3 120 GB SSD (RAID 0 on Intel 6 Gbps controller, firmware 2.22 but also tried firmware 2.15)
LG Blu-Ray drive
1 TB Western Digital Black (Marvell 6 Gbps Controller)
2 TB Western Digital Black (Marvell 6 Gbps Controller)
Windows 7 64-bit Ultimate

System 1
i3 2100
ASRock H67M
OCZ Vertex 3 MAX IOPS 120 GB SSD (Intel 6 Gbps controller, firmware 2.15)
LG Blu-Ray Drive
2 TB Western Digital Black (forget which controller this is on)
Windows 7 64-bit Pro

System 2
i5 3570K
ASRock Z77 Pro 4M
Crucial m4 128 GB SSD (Intel 6 Gbps controller)
2 x DVD Burners
500 GB WD Green (Intel 3 Gbps controller)
1 TB WD Black (Intel 3 Gbps controller)
Windows 7 64-bit Pro
 

RealBeast

Titan
Moderator
It makes sense that RAID 0 might boot a bit slower, since the RAID controller allows time for detecting drives. My boot time dropped by about 10 seconds when I turned off the Marvell onboard controller since I don't use it -- you could move your WD drives to the Intel 3 Gb/s ports and do the same, since there is no performance improvement for HDDs on 6 Gb/s SATA.

Also, I don't think many folks here favor RAID for SSDs; it's better to use them as two separate drives in AHCI SATA mode. As a side benefit, your boot times will improve even more.
 

awg0681

Honorable
May 23, 2012
I can understand that taking longer during the POST and detection phase, which happens before Windows starts loading, but once you start to see "Starting Windows," shouldn't that portion take about the same amount of time as the single-drive systems, or less? That's the only portion I'm talking about. I understand that adding more controllers and other stuff makes POST and so on take longer, but I'm strictly talking about Windows loading.
 

Read his post. The drives had already been detected, and Windows had started loading. Also, I'm assuming that he is using his RAID controller (for his RAID 0 setup) and shouldn't turn it off. Have you timed your system when it is not set up in RAID?
 
Yes, and no.

When THG compared SATA II vs. SATA III SSDs, boot times were almost the same.

However, RAID 0 should "theoretically" be faster, if not twice as fast.

Here are a couple of things I see.

1.) The HDDs are on the Marvell ports. DITCH THEM! Put them on the native Intel SATA II ports; you'll get no benefit from an HDD on SATA III, and the drivers are cr@p! Disable the port in the BIOS. The add-on SATA controller is limited to 5.0 Gb/s anyway, and SSDs really suffer when they are put on it. An HDD will never saturate SATA II bandwidth, much less SATA III. Don't be fooled by a "SATA III" labeled HDD.

2.) Stripe size really won't matter much, but I did some research into this, and Intel recommends a stripe size of 16 KB for SSDs. I have a thread in here addressing that "issue." I got no real answer, but the topic is relevant. (There's a small sketch after this list showing how the stripe size maps requests across the two drives.)

3.) There's no mention of whether you are using the latest iRST (Intel Rapid Storage Technology) drivers for the chipset. Intel may have come out with a newer version than was on the mobo driver install disc. Check Intel's download site. The latest version is 10.8.0.1003 (11/11/11), so maybe not newer.
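
For what it's worth, here's a tiny sketch of what the stripe size actually does in RAID 0. It's purely an illustrative model in Python (the 128 KB figure is just the stripe size the OP settled on), not how any particular controller is implemented:

```python
# Illustrative RAID 0 mapping: which drive and stripe a logical offset lands on.
# Purely a model; real controllers add metadata, caching, and alignment on top.

STRIPE_SIZE = 128 * 1024   # bytes (the OP's current setting)
NUM_DRIVES = 2

def locate(offset):
    stripe_index = offset // STRIPE_SIZE
    drive = stripe_index % NUM_DRIVES
    offset_in_stripe = offset % STRIPE_SIZE
    return drive, stripe_index // NUM_DRIVES, offset_in_stripe

# A 4 KB read lands entirely on one drive; only requests larger than the
# stripe size get split across both drives, which is one reason stripe size
# has little effect on small random I/O like a Windows boot.
for logical_offset in (0, 4096, 128 * 1024, 300 * 1024):
    print(logical_offset, "->", locate(logical_offset))
```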

IMHO, I think you have a driver conflict slowing down the boot, probably the Marvell drivers.

Have you done any SSD tweaks? The startup routine could be bogged down with silly applications running during boot. I always point people with SSDs to these sites:

The SSD Review - The SSD Optimization Guide (they also have a Windows 7 Optimization Guide in there).

OCZ Blog - SSD Tips & Tweaks

You don't have to do them all, or any. They are just tips.

On all the systems I've had with 2x SSDs in RAID 0, it would go from orbs to ready in 2-4 seconds! It still does with just one larger SSD now!
 

RealBeast

Titan
Moderator
I suggest that you also read his post -- his Marvell controller is only on for two WD hard drives, which would run just as well on some of his empty Intel ports; then he can turn it off and save 10 seconds every startup.
 

awg0681

Honorable
May 23, 2012
I can easily move the WD hard drives and disable the Marvell controller. However, when I first installed Windows, the only drives in there were the 2 SSDs in RAID 0, and it still wasn't as fast as the single SSDs on the other systems. I had not specifically disabled the Marvell controller in the UEFI at that point, though, so I can't say for sure whether the Marvell controller is what's slowing down the "Starting Windows" load time. Saving a few seconds during the POST and detection phase isn't a huge concern to me, but it will be welcome.

foscooter - I'll be sure to check the iRST version when I get to that machine next, but I'm pretty sure it's the latest. I have also gone through those tips, tweaks, and optimization guides. I certainly had no illusions that putting the WD drives on the SATA III ports would make them perform any better; honestly, they were just ever so slightly easier to get to at the time. I actually came across your thread on stripe size before I updated my machine, because I too was trying to figure out the best stripe size. I ended up finding a fairly comprehensive article where they tested SSDs at various stripe sizes; their conclusion was that 64 KB or 128 KB was the best overall.

My main issue is finding out what I did wrong. Like foscooter mentioned, his RAID 0 SSD was ready in 2-4 seconds... why can't I get that?! =] I actually won't be back at that machine until Friday or Saturday and will put these suggestions into action then. I'll let you all know how it works out. In the meantime I'm open to any and all suggestions you guys can come up with.

BTW, I think it must be me. I previously tried RAID 0 with 2 WD 300 GB VelociRaptors (using an ASUS P5W-DH Deluxe and a C2D E6600) and couldn't tell any difference there either. RAID doesn't seem to agree with me! ;-)
 

Omi3D

Distinguished
Sep 30, 2011
Here's a suggestion, maybe applicable, maybe not.

If the BIOS is set to RAID and the array was created with an earlier version of iRST, you might try rebuilding the RAID 0 from scratch using the newer iRST software. A relative had this same issue after updating his iRST: it got slower. He recreated the RAID array using the newer version (it had been set up on the older version) and it got back to a normal boot-up speed of about 4 seconds. I never bothered to find out why.
 

awg0681

Honorable
May 23, 2012
Well, as suggested, I plugged the spinners into the Intel 3 Gbps SATA ports and disabled the Marvell ones altogether. It took a second or two off the POST/detection phase, but that was not my concern to begin with, and honestly I didn't expect this to make a difference in the Windows load time. It still takes longer to load Windows than the single-drive machines. The orbs are able to form the full logo and pulse a few times before the login screen comes up.

My iRST version is 11.0.0.1032 which is what came with the motherboard so it has not changed at all since I installed.

I'm at a loss. I have absolutely no clue why this machine is slower. I don't know how it performs with just one SSD; I'd have to wipe and reinstall onto a single drive to find out, and I'd prefer not to do that until I have an answer that will actually correct the issue. So I'm still open to suggestions or hypotheses!
 

mansoflaco

Honorable
Jun 28, 2012
When you put 2 SSDs in RAID, you may lose TRIM functionality, which in the long term will kill disk performance. It's not a good idea to put 2 SSDs in RAID unless you have a controller that supports TRIM over RAID. Otherwise, your SSDs' write performance has likely degraded by now.
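
A quick way to confirm whether Windows 7 is at least issuing TRIM commands (which says nothing about whether the RAID driver passes them through to the drives) is the built-in fsutil query. A minimal Python wrapper, assuming Python is installed and it's run from an elevated prompt on Windows:

```python
import subprocess

# Ask Windows whether delete notifications (TRIM) are enabled.
# "DisableDeleteNotify = 0" means Windows issues TRIM; whether the RAID
# driver forwards it to the SSDs is a separate question.
result = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True,
)
print(result.stdout.strip() or result.stderr.strip())
```

You can of course just run the fsutil command directly from a command prompt.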

Further information here:
http://en.wikipedia.org/wiki/TRIM

Hope it helps
 

mrussell

Honorable
Dec 17, 2013
There could be an issue with latency. In a hard disk array, the latency for seek operations is several orders of magnitude greater than the latency in an SSD array. If the latency introduced by the RAID BIOS is on the order of the latency of random reads in the SSD system, it could actually make things WORSE with multiple drives. This would not be particularly surprising, as the RAID BIOS was designed for hard disks. With the hard disk array, the bulk of the latency, on the order of 10 ms, is due to the physical disk seeking. Long write operations on a fragmented file can actually be much faster on a hard disk, as there are no blocks to erase. You can, for instance, record video to a hard disk at close to the maximum rated speed, filling the entire disk, then start at the beginning, overwrite that same data, and keep doing it all day in a 3 TB ring buffer. Defects will slow you down, but as long as you have a buffer and there aren't too many of them, there won't be a problem. On that 3 TB drive, if you're writing at 512 MB/s, you WOULD wear out an SSD in about a year (think of a piece of recording equipment that needs to record ultra-high-definition stereoscopic 3D digital video with minimal compression; the best you're going to get is stream compression like LZH, reducing the data by at best half).

But normally, a hard disk's performance is primarily determined by its latency, which is on the order of 10 ms, versus 0.1 ms for an SSD. If the SATA host adapter introduces a millisecond of latency and the RAID BIOS introduces another millisecond, then for hard disks it's still a major win: seek times halve and transfer rates double, so your seek goes to (10 + 1 + 1) / 2 = 6 ms, and your peak transfer rate is a GB/s. So two disks are almost twice as fast. But the SSD has a seek latency on the order of 0.1 ms, so your latency with a single drive is 1.1 ms, while with the RAID it's 2.1 ms / 2, or 1.05 ms, which is hardly better than one drive, although your transfer rate is doubled.

That's not the whole story, though, because with the RAID BIOS your writes have more overhead than your reads: say 2 ms instead of 1 ms. So let's say writes happen 25% of the time and reads 75% of the time. Then the hard disk array's average latency is still 6.25 ms, which is still a vast improvement, and the difference in overhead is hardly noticeable. Even doing all writes, it's 7 ms; you'll never notice that at all. But for the SSD, it's 1.05 ms average latency 75% of the time and 1.55 ms latency 25% of the time, or 1.05 * 0.75 + 1.55 * 0.25 = 1.175 ms. So the latency to access the disk, assuming our assumptions are met, INCREASES by around 7% compared to a single drive.

And if the system has to write a lot more than I assumed, like when it's booting up and rebuilding the page file, the prefetch folder, the SuperFetch cache, and so on, then it's going to be far worse. At 100% writes, the latency would increase to 1.55 ms, nearly 41% worse than a single drive. And we know that Windows performance is in fact impacted HEAVILY by write latency; just look at what a ReadyBoost cache on a USB stick can do for you. Even at the low data rate of a flash drive, the low latency can make a big difference.
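
To make that arithmetic explicit, here's a minimal Python sketch of the same back-of-the-envelope model. The 10 ms / 0.1 ms seek times and 1 ms / 2 ms controller overheads are the assumed figures from the paragraph above, not measurements:

```python
# Back-of-the-envelope latency model from the post above.
# All figures are assumptions, not measurements of any real hardware.

HDD_SEEK = 10.0            # ms, typical hard disk seek
SSD_SEEK = 0.1             # ms, typical SSD random access
HOST_OVERHEAD = 1.0        # ms, assumed SATA host adapter overhead
RAID_READ_OVERHEAD = 1.0   # ms, assumed RAID BIOS overhead per read
RAID_WRITE_OVERHEAD = 2.0  # ms, assumed RAID BIOS overhead per write

def raid0_latency(seek, raid_overhead):
    # Model: overheads add to the seek time, then striping across two
    # drives roughly halves the effective latency.
    return (seek + HOST_OVERHEAD + raid_overhead) / 2

single_ssd = SSD_SEEK + HOST_OVERHEAD                          # 1.1 ms
raid_ssd_read = raid0_latency(SSD_SEEK, RAID_READ_OVERHEAD)    # 1.05 ms
raid_ssd_write = raid0_latency(SSD_SEEK, RAID_WRITE_OVERHEAD)  # 1.55 ms
raid_hdd_read = raid0_latency(HDD_SEEK, RAID_READ_OVERHEAD)    # 6.0 ms

# 75% reads / 25% writes, as assumed in the post
mixed = 0.75 * raid_ssd_read + 0.25 * raid_ssd_write           # 1.175 ms

print(f"single SSD:             {single_ssd:.3f} ms")
print(f"RAID 0 SSD, reads:      {raid_ssd_read:.3f} ms")
print(f"RAID 0 SSD, 75/25 mix:  {mixed:.3f} ms "
      f"({(mixed / single_ssd - 1) * 100:.0f}% worse than a single SSD)")
print(f"RAID 0 SSD, all writes: {raid_ssd_write:.3f} ms "
      f"({(raid_ssd_write / single_ssd - 1) * 100:.0f}% worse)")
print(f"RAID 0 HDD, reads:      {raid_hdd_read:.3f} ms vs {HDD_SEEK:.1f} ms single")
```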

One telling sign of where the problem lies: is the system still faster than a hard disk? No matter what, it should still be much, much faster than a hard disk, as the host adapter and BIOS latencies should be far smaller than the seek times of a hard disk.

Another thing to do is to actually erase the whole disk. Break the RAID array, configure the drives as AHCI, and make sure that the TRIM command is supported. Then benchmark the disks (so you know it worked), and then get the special software your drive manufacturer provides to do a secure erase, assuming they provide one (generic secure erase utilities can make it worse). If not, get a program that can force a TRIM command to every sector on the disk, which does the same thing. Then initialize and benchmark again. If that was the problem, the performance should be restored.

Don't keep doing this, though; it's not good for the drive to keep erasing every single sector. If each sector is good for 1,000 writes, you just took a 1,000th of its life. If you write, say, 100 GB a day to a 250 GB disk (say you're editing large files, for instance 24 MP 48-bit images in Photoshop or GIMP, and using gigabytes of swap space at a time, most of which gets erased), figure you overwrite the entire disk about every 3 days (at which point, if the TRIM command does not work, your write performance is going to be trashed). At that rate you're good for about 3,000 days, or roughly 8 years, and that's under hard usage. But every time you do a secure erase you use up 3 days of your drive's life. Not something to worry about, but not something to keep doing either. ("Damn, it didn't work, let me set up this script to do it 10 times in a row... hmm, that didn't work either, let me update the BIOS and try the script again... that didn't work either, maybe if I change the drivers... nope... maybe if I reset the BIOS... nope... I heard Linux can do it...") You'd have to be just dumb enough not to realize what you're doing but smart enough to be dangerous; if you are, you could spend a couple of weeks trying to fix it and end up doing a hundred write cycles, 10% off your drive's life. So do what you need to, but don't be stupid.
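
As a rough sanity check on those numbers, here's a tiny Python sketch of the wear arithmetic. The 1,000 program/erase cycles and the 100 GB/day workload are the assumed figures from above, not the spec of any particular drive, and write amplification is ignored:

```python
import math

# Rough SSD wear estimate using the assumed figures from the post above.
PE_CYCLES = 1000        # assumed program/erase cycles per cell
CAPACITY_GB = 250       # drive capacity
DAILY_WRITES_GB = 100   # assumed heavy daily write workload
# Note: write amplification is ignored, so this is optimistic.

days_per_full_overwrite = math.ceil(CAPACITY_GB / DAILY_WRITES_GB)  # ~3 days
lifetime_days = PE_CYCLES * days_per_full_overwrite                 # ~3000 days
lifetime_years = lifetime_days / 365

# Each full secure erase (or forced TRIM of every sector) burns roughly
# one cycle, i.e. about one "days_per_full_overwrite" worth of life.
erases = 100
life_used = erases / PE_CYCLES

print(f"~{days_per_full_overwrite} days per full overwrite")
print(f"~{lifetime_days} days (~{lifetime_years:.0f} years) under this workload")
print(f"{erases} full erases would use ~{life_used:.0%} of the drive's life")
```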

The next thing to do is to get Windows working right on another drive. Turn off all the stuff like SuperFetch and Prefetch and set the page file to off, so it's not writing anything it doesn't have to. Now let it boot up and see what happens. If it comes up nice and fast, so far so good. Make sure TRIM is active; if not, fix it now. Benchmark the array: do you have a serious loss in write performance versus a single disk? Then it's latency. If it looks good, turn the page file and the rest back on and see if it still boots nice and fast, and if it keeps rebooting fast. If that fixes it, you're good to go; now that TRIM works, it should keep working. If it always boots slower, even on a fresh install with most of the writing turned off and TRIM working, or if it slows down the instant you turn the write-heavy stuff back on, then you know you have a latency problem. But you already knew that from the benchmark; you were just hoping you could get a net gain from the two drives. In that case, buy a RAID card made for solid state drives.
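
If you want a very rough way to compare random 4 KB access latency between the array and a single drive without installing anything, something like the following Python sketch works. It's only indicative: it goes through the filesystem and the OS cache rather than the raw device, so the absolute numbers will flatter the drive; compare relative results between the two configurations. The test file path is just an example:

```python
import os
import random
import time

TEST_FILE = r"C:\temp\latency_test.bin"   # example path on the drive under test
FILE_SIZE = 256 * 1024 * 1024             # 256 MB test file
BLOCK = 4096                              # 4 KB accesses
SAMPLES = 2000

# Create the test file once if it does not already exist.
if not os.path.exists(TEST_FILE):
    with open(TEST_FILE, "wb") as f:
        f.write(os.urandom(FILE_SIZE))

# Time random 4 KB reads at random offsets. Because the OS caches the file,
# rebooting first (or using a file much larger than RAM) gives more honest numbers.
latencies = []
with open(TEST_FILE, "rb", buffering=0) as f:
    for _ in range(SAMPLES):
        offset = random.randrange(0, FILE_SIZE - BLOCK)
        start = time.perf_counter()
        f.seek(offset)
        f.read(BLOCK)
        latencies.append(time.perf_counter() - start)

latencies.sort()
avg_ms = sum(latencies) / len(latencies) * 1000
p99_ms = latencies[int(len(latencies) * 0.99)] * 1000
print(f"avg 4 KB read latency: {avg_ms:.3f} ms, p99: {p99_ms:.3f} ms")
```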
 

HairySasquatchFL

Reputable
Nov 27, 2014
I'm having a similar issue. I've got a single, non-RAIDed SSD as a boot drive. My system was screaming along until I added 4 HDDs in a RAID 5 using Intel Rapid Storage Technology. It went from taking about 10 seconds to boot to over 2 minutes. I recently had a problem with the motherboard that degraded the RAID 5 array, and I had to delete it. When the drives were not part of the RAID array and were detected individually, I went back to the fast boot times. When I recreated the RAID 5, the boot times got longer again. Note that, as the OP said, it's not the POST time that's getting longer; it's the Windows load time.

I booted into Windows recovery mode and found that loading classpnp.sys seems to be adding all of the time. Does anyone know anything about this driver? Please let me know if you have any ideas on why adding a RAID volume would make it take so long to load on boot. I'm going to keep digging and will post again if I find anything out.