There could be an issue with latency. In a hard disk array, the latency for seek operations is several orders of magnitude greater than the latency in an SSD array. If the latency introduced by the RAID BIOS is on the order of the latency of random reads in the SSD system, it could make things WORSE with multiple drives. That would not be particularly surprising, since the RAID BIOS was designed for hard disks. With the hard disk array, the bulk of the latency, on the order of 10 ms, is due to the physical disk seeking. Long write operations on a fragmented file can actually be much faster on a hard disk, as there are no blocks to erase. You can, for instance, record video to a hard disk at close to its maximum rated speed, filling the entire drive, then start at the beginning, overwrite that same data, and keep doing it all day in a 3 TB ring buffer. Defects will slow you down, but as long as you have a buffer and there are not too many of them, there won't be a problem. On that 3 TB drive, if you're writing at 512 MB/s, you WOULD wear out an SSD in about a year. (Think of a piece of recording equipment that needs to record ultra-high-definition stereoscopic 3D digital video with minimal compression; the best you're going to get is stream compression along the lines of LZH, reducing the data by at best half.)
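To put a number on that wear-out claim, here is a back-of-the-envelope sketch in Python. The 5,000 program/erase cycle rating is an assumption picked for illustration (consumer drives are rated anywhere from roughly 1,000 to 10,000 cycles); substitute your own drive's rating.

```python
# Rough SSD wear estimate for the 3 TB ring-buffer recorder described above.
# All figures are illustrative assumptions, not specs for any real drive.

DRIVE_BYTES = 3e12      # 3 TB drive
WRITE_RATE = 512e6      # 512 MB/s sustained video stream
PE_CYCLES = 5000        # assumed program/erase cycles per cell

bytes_per_day = WRITE_RATE * 86400                  # ~44 TB/day
overwrites_per_day = bytes_per_day / DRIVE_BYTES    # ~15 full passes/day
days_to_wearout = PE_CYCLES / overwrites_per_day

print(f"{bytes_per_day / 1e12:.1f} TB written per day")
print(f"{overwrites_per_day:.1f} full-drive overwrites per day")
print(f"~{days_to_wearout:.0f} days (~{days_to_wearout / 365:.1f} years) to wear-out")
```

With those assumptions the drive lasts roughly 340 days, which is where the "about a year" figure comes from; a 1,000-cycle drive would last only a couple of months at that rate.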
But normally a hard disk's performance is primarily determined by its latency, which is on the order of 10 ms, versus 0.1 ms for the SSD. If the SATA host adapter introduces a millisecond of latency and the RAID BIOS introduces another, it's still a major win for hard disks: your seek times halve and transfer rates double. The seek goes from 10+1+1 = 12 ms to, halved across two drives, 6 ms, and your peak transfer rate is a GB/s. So two disks are almost twice as fast. But the SSD has a seek latency on the order of 0.1 ms, so your latency with a single drive is 0.1+1 = 1.1 ms. With the RAID it's 2.1 ms / 2 = 1.05 ms, which is hardly better than one drive, although your transfer rate is doubled.

That's not the whole story, because with the RAID BIOS your writes have more overhead than your reads; say 2 ms instead of 1. So let's say writes happen 25% of the time and reads 75%. The hard disk array's average latency is then 0.75×6 + 0.25×6.5 = 6.125 ms. Still a vast improvement, and the difference in overhead is hardly noticeable; even doing all writes it's 6.5 ms, and you'll never notice that at all. But for the SSD it's 1.05 ms average latency 75% of the time and 1.55 ms latency 25% of the time, or 1.05×0.75 + 1.55×0.25 = 1.175 ms. So the latency to access the disk, assuming our assumptions are met, INCREASES by around 7% over a single drive.

And if the system has to write a lot more than I assumed, like when it's booting up and rebuilding the page file, the prefetch folder, the Superfetch cache, and so on, it's going to be far worse. At 100% writes, the latency rises to 1.55 ms, an increase of nearly 41% over a single drive. And we know that Windows performance is in fact impacted HEAVILY by write latency. Just look at what a ReadyBoost cache on a USB stick can do for you: even at the low data rate of a flash drive, the low latency can make a big difference.
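The arithmetic above is easy to get lost in, so here is the same toy model in Python. The 1 ms host-adapter and 1 ms / 2 ms RAID-BIOS overheads are the assumptions from the paragraph, not measured values, and the model assumes striping simply splits the per-operation total across the drives.

```python
# Toy latency model for a two-drive stripe, using the assumed overheads above.

def striped_latency(device_ms, adapter_ms, bios_ms, drives=2):
    # Crude assumption: fixed overheads stack, then striping splits the total.
    return (device_ms + adapter_ms + bios_ms) / drives

ADAPTER, BIOS_READ, BIOS_WRITE = 1.0, 1.0, 2.0   # ms, assumed overheads
READ_FRACTION = 0.75                             # assumed 75/25 read/write mix

for name, seek_ms in (("HDD", 10.0), ("SSD", 0.1)):
    single = seek_ms + ADAPTER                   # one drive, no RAID BIOS
    read = striped_latency(seek_ms, ADAPTER, BIOS_READ)
    write = striped_latency(seek_ms, ADAPTER, BIOS_WRITE)
    mixed = READ_FRACTION * read + (1 - READ_FRACTION) * write
    print(f"{name}: single {single:.3f} ms, striped mix {mixed:.3f} ms "
          f"({mixed / single - 1:+.1%} vs single)")
```

Running it reproduces the numbers above: the hard-disk stripe drops from 11 ms to about 6.1 ms, while the SSD stripe goes from 1.1 ms up to about 1.18 ms, roughly a 7% increase.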
One telling sign of where the problem lies: is the system still faster than a hard disk? No matter what, it should still be much, much faster than a hard disk, since the host adapter and BIOS latencies should be far smaller than a hard disk's seek times.
Another thing to do is actually erase the whole disk. Break the RAID array, configure the drives as AHCI, and make sure the TRIM command is supported. Then benchmark a disk (so you know it worked), then get the special software your drive manufacturer provides to do a secure erase, assuming they provide one (generic secure-erase utilities can make it worse). If not, get a program that can force a TRIM command across every sector on the disk, which does the same thing. Then initialize and benchmark it again. If that was the problem, the performance should be restored.

Don't keep doing this; it's not good for the drive to keep erasing every single sector. If each sector is good for 1,000 writes, you just took a thousandth of its life. If you write, say, 100 GB a day to a 250 GB disk (say you're editing large files, for instance 24 MP 48-bit images in Photoshop or GIMP, using gigabytes of swap space at a time, most of which gets erased), figure you overwrite the entire disk about every two and a half days (at which point, if the TRIM command does not work, your write performance is going to be trashed). At that rate you're good for about 2,500 days, roughly seven years, and that's under hard usage. But every time you do a secure erase, you use up about two and a half days of your drive's life. Not something to worry about, but not something to keep doing. ("Damn, it didn't work, let me set up this script to do it 10 times in a row... hmm, that didn't work either, let me update the BIOS and try that script again... that didn't work either, maybe if I change the drivers... nope... maybe if I reset the BIOS... nope... I heard Linux can do it...") You'd have to be just dumb enough not to realize what you're doing, yet smart enough to be dangerous, but if you are, you could spend a couple of weeks trying to fix it and end up doing a hundred write cycles, 10% off your drive's life. So do what you need to, but don't be stupid.
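If you want to budget how much life those experiments cost, the same kind of arithmetic applies. A minimal sketch, using the 250 GB / 1,000-cycle / 100 GB-per-day assumptions from the paragraph above:

```python
# Wear budget for secure erases, using the assumed figures from above.

CAPACITY_GB = 250        # assumed drive size
PE_CYCLES = 1000         # assumed program/erase cycles per cell
DAILY_WRITES_GB = 100    # assumed heavy editing workload

days_per_overwrite = CAPACITY_GB / DAILY_WRITES_GB   # 2.5 days per full pass
lifetime_days = PE_CYCLES * days_per_overwrite       # 2,500 days

print(f"one full overwrite every {days_per_overwrite:.1f} days of normal use")
print(f"~{lifetime_days:.0f} days (~{lifetime_days / 365:.1f} years) of drive life")

# Each secure erase burns one full program/erase cycle across the drive:
print(f"each secure erase costs ~{days_per_overwrite:.1f} days of life")
print(f"100 secure erases burn {100 / PE_CYCLES:.0%} of the drive's life")
```

That last line is the two-weeks-of-tinkering scenario: a hundred full erase cycles is a tenth of a 1,000-cycle drive's endurance.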
The next thing to do is get Windows working right on another drive. Turn off all the stuff like Superfetch and Prefetch and set the swap file to off, so it's not writing anything it doesn't have to. Now let it boot up and see what happens. If it comes up nice and fast, so far so good. Make sure TRIM is active; if not, fix it now (see the sketch below). Benchmark the array. Do you have a serious loss on write operations versus a single disk? Then it's latency. If it's good, turn the swap and the rest back on and see if it still boots nice and fast, and whether it keeps rebooting fast. If that fixes it, you're good to go; now that TRIM works, it should keep working. If it always boots slower, even on a fresh drive with most of the writing turned off and TRIM working, or if it slows down the instant you turn on all the stuff with heavy writing, then you know you have a problem with latency. But you already knew that from the benchmark; you were just hoping you could get a net gain from the two drives. Buy a RAID card made for solid state drives.
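On the "make sure TRIM is active" step: Windows 7 and later ship the stock fsutil tool, which reports whether the OS is issuing TRIM. Here is a minimal sketch that shells out to it from Python (run from an elevated prompt). Note this only checks the OS side; the drive and controller still have to pass the command through, which RAID BIOSes of this era often don't.

```python
# Check whether Windows is issuing TRIM, via the stock fsutil tool.
import subprocess

out = subprocess.run(
    ["fsutil", "behavior", "query", "DisableDeleteNotify"],
    capture_output=True, text=True, check=True,
).stdout

print(out.strip())
# fsutil reports "DisableDeleteNotify = 0" when TRIM is enabled.
print("TRIM enabled" if "= 0" in out else "TRIM disabled or undetermined")
```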