Solution
Yeah, pick an Intel instead: the Intel X25-V 40GB or Intel X25-M 80GB. They offer the best value and quality for the money, and they are much faster than many other SSDs.

phoenix777


sub mesa

I think so, yes.

However, many consumers are confused because they often compare speeds based on sequential throughput (i.e. when copying large files). The sequential write speed of the Intel SSDs is on the low side. But really, that's the last thing you need on your system drive.

The Intel SSD is very fast at random writes, however. So in terms of performance it's one of the top SSDs, together with the Micron and SandForce-based SSDs. It also does excellent wear leveling and should be one of the most reliable SSDs both on paper and in reality, due to the large production volume of its controller, which means any bugs or inefficiencies would have been uncovered by now; such things did happen with some OCZ SSDs.

Intel will release a new generation of SSD controllers and NAND flash memory, probably by Christmas this year. That's still a long wait, however, and given the advantages and limited cost of an Intel 40GB drive, I feel it's an excellent investment.
 
It really depends on how price sensitive and performance hungry you are. Personally, I think anyone who buys a quad-core computer hoping to get good performance is crazy not to use an SSD for the system drive. No point in having all those cores hanging around waiting for the disk to deliver data.

But I can completely understand why other folks who don't use their systems as heavily as I do and who are more budget conscious might not want to jump in just yet.
 
Good Choice
sub mesa and sminlal have (as per normal) presented good reasons for SSDs.
One point on your question of RAID0 with 2 SSDs: while RAID0 improves sequential reads/writes, it does little for random 4K reads/writes. And currently you lose Windows 7 TRIM support. I think this will be fixed downstream; in the meantime you can manually run a program (i.e. the Intel SSD Toolbox) to duplicate the TRIM function. A rough sketch of what scales and what doesn't follows below.
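To make that concrete, here is a back-of-the-envelope model in Python. The throughput and latency figures are made-up illustrative numbers, not measurements of any particular drive:

# Rough model of what RAID0 does (and doesn't do) for two SSDs.
# All figures are illustrative assumptions, not measurements.
seq_read_mb_s = 250         # sequential read of one drive, MB/s (assumed)
random_4k_latency_ms = 0.1  # per-request 4K read latency, ms (assumed)
drives = 2

# Large sequential transfers span both drives, so bandwidth roughly doubles:
raid0_seq_mb_s = seq_read_mb_s * drives

# A single outstanding 4K request still lands on one drive and still waits
# the full access latency, so queue-depth-1 random IOPS barely change:
single_drive_iops = 1000 / random_4k_latency_ms  # ~10,000 IOPS
raid0_iops_qd1 = single_drive_iops               # essentially unchanged

print(f"sequential: {seq_read_mb_s} -> {raid0_seq_mb_s} MB/s")
print(f"4K random @ QD1: {single_drive_iops:.0f} -> {raid0_iops_qd1:.0f} IOPS")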
 

sub mesa


I think that's mainly true on the Windows platform, due to some RAID drivers always reading a full stripe even if only part of it was requested; a low-level inefficiency that can ruin random I/O performance. Also, RAID0 arrays built under XP were all unaligned, leaving you with single-disk random I/O performance (a quick alignment check is sketched below the links).

With proper alignment and a good RAID driver, striping should be able to scale well in random IOps, especially with SSDs. I invite you to check the article I'm in the process of writing - it's not complete yet, but it does show very nice gains in random I/O with normal HDDs and software RAID:
http://submesa.com/data/raid/geom_stripe
http://submesa.com/data/raid/geom_stripe/page2
(page2 is the easiest to compare, I guess)
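On the alignment point above, a minimal sketch: XP historically started the first partition at LBA 63 (63 x 512 bytes), which is not a multiple of common stripe sizes, while Vista/7 align at 1 MiB. The stripe size below is just an assumed example:

# Check whether a partition's start offset is a multiple of the stripe size.
def is_aligned(offset_bytes, stripe_bytes):
    return offset_bytes % stripe_bytes == 0

stripe = 128 * 1024          # 128 KiB stripe (assumed example)
xp_offset = 63 * 512         # 32,256 bytes: XP's classic start sector
win7_offset = 1024 * 1024    # 1 MiB: Vista/7 default alignment

print(is_aligned(xp_offset, stripe))    # False - small I/Os can straddle two disks
print(is_aligned(win7_offset, stripe))  # True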

However, depending on the quality of the RAID engine's driver, you may not get the same performance increases on the Windows platform. And the absence of TRIM is a great loss; you would have to sacrifice valuable storage capacity (leaving it unpartitioned and never writing to it) so the SSD keeps a pool of free blocks.
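As a rough illustration of that trade-off, a small sketch; the 15% spare figure is my own assumption, not a vendor recommendation:

# Manual over-provisioning when TRIM is unavailable: leave part of the
# drive unpartitioned so the controller always has clean blocks to use.
drive_gb = 40
spare_fraction = 0.15  # assumed figure; pick to taste

spare_gb = drive_gb * spare_fraction
usable_gb = drive_gb - spare_gb
print(f"partition {usable_gb:.0f} GB, leave {spare_gb:.0f} GB unpartitioned")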

As a side note, I may be upgrading my server system with new SSDs; I was thinking of 4 x Intel X25-V 40GB in RAID0 with ZFS. Before I take them into actual use I will benchmark them and see how SSDs scale in random IOps when being striped. I think the performance increase would be close to the maximum, given enough outstanding I/Os.
 


But random IOs/sec do not faster access times make.

Random IOs/sec is a good thing if you have a workload that has a queue depth greater than 1 (i.e., multiple concurrent I/Os have been issued to the drive). That's not a terribly common scenario in Windows.

At low queue depths, access times are what really matter for random I/O performance, and RAID 0 doesn't really help with them.
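One way to see this is Little's Law: throughput = concurrency / latency. A quick sketch with assumed latencies for a typical HDD and SSD of this era; the QD32 figure is an idealized ceiling that real drives saturate well below:

# Little's Law: IOPS = queue depth / per-request latency.
def iops(queue_depth, latency_ms):
    return queue_depth / (latency_ms / 1000.0)

print(iops(1, 12.0))  # HDD, ~12 ms access time: ~83 IOPS at QD1
print(iops(1, 0.1))   # SSD, ~0.1 ms read latency: ~10,000 IOPS at QD1
print(iops(32, 0.1))  # same SSD at QD32: 320,000 in theory; real drives
                      # saturate far earlier, but the scaling trend holds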
 

sub mesa

That is true; let's look at an Intel benchmark:

[AS SSD benchmark screenshot: Intel SSD in AHCI mode]

The 4K read benchmark is bottlenecked by the SSD's read latency. This score cannot be improved by RAID or multiple flash channels; it will likely only increase with faster NAND produced on smaller process nodes at newer fabs.

Once there are multiple queued I/Os, however, they can be processed in parallel and performance rises by up to a factor of 10; this can be seen in the 4K-64thread read benchmark. RAID0 can (theoretically) double this number.
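In numbers, using made-up AS SSD-style scores (illustrative, not measured):

# Illustrative AS SSD-style scores in MB/s (assumed, not measured).
single = {"4K read": 20, "4K-64thread read": 200}

raid0 = {
    "4K read": single["4K read"],                         # QD1 is latency-bound: no gain
    "4K-64thread read": single["4K-64thread read"] * 2,   # parallel I/Os split across drives
}
print(raid0)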

For writing, however, you do not need a higher queue depth. As my own RAID0 benchmarks confirm, write buffering in the drives themselves makes the queue depth play a much smaller role, and RAID0 random write scales without the queue depth going higher than 1.

Still, I am mainly interested in random reads. You could say that with RAID0, the 4K value will stay the same, while the 4K-64thread score should nearly double.

Now the real question probably is: how much will you actually gain from multi-queue read performance on Windows? I remember StorageReview had traces where the average queue depth was only 2 or 3, while some other sites used gaming traces that averaged 8. I think on Windows 7, in a proper configuration with modern games/apps, you should be able to benefit from additional queued I/Os.

I would like to see an in-depth article about this; perhaps things are improving in both apps and the OS to allow higher queue depths, and perhaps things are not as bad as they used to be. At least I hope so for the many Windows users: SSDs are excellent parallel I/O devices, and it would be a shame not to make use of that, just like your multicore CPUs.

For my own personal use, keeping them loaded with queued I/Os is no problem for ZFS. They will also serve requests from my workstations, as all the 'system disks' of my workstations are handled by the server instead, via iSCSI, which also supports up to 255 outstanding commands. The queue depth is something you can watch with a monitoring program called "gstat" (GEOM statistics). If I start the game WoW on my gaming PC, the queue depth is saturated (128 queued I/Os), though it fluctuates. At least for my usage, RAID0 with several Intel 40GB SSDs might make sense. I admit I'm also keen on benchmarking such a setup, just for fun and excitement. :)
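For reference, a typical gstat invocation on FreeBSD looks something like the line below; the disk-name pattern is just an example, and it's worth checking the man page since I'm quoting the flags from memory:

gstat -I 500ms -f 'ad[0-9]+'   # refresh every 500 ms, show only matching disks;
                               # the L(q) column is the current queue depth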
 

fish_86

How do you guys feel about some of the OCZ SSDs? I've pretty much been convinced that Intel has a great, reliable SSD, but how does OCZ compare? They seem to be very good when looking at the charts.
 

sub mesa

OCZ does great in sequential benchmarks, but less well in random I/O benchmarks, as they use controllers from JMicron, Samsung, Toshiba and Indilinx. They cannot use the Intel controller, since Intel doesn't hand that jewel over to anyone, though some select Kingston models did use it.

OCZ reliability is also below Intel's, I think; I've seen quite a few OCZ drives die without apparent cause, while I haven't heard many such stories about Intel SSDs, even though they are very popular and widely sold around the world.