Getting an Intel X25-M - Can I use it to its potential?

silenkiller

Distinguished
Jan 30, 2006
66
0
18,630
I want to make sure I'm not bottlenecking an Intel X25-M if I decide to get it. What dictates this? The southbridge? Anyway, here is my current motherboard and other various parts.

ABIT AB9 Pro LGA 775 Intel P965 Express ATX Intel Motherboard
North Bridge Intel P965 Express
South Bridge Intel ICH8R


Western Digital Caviar RE WD2500YS 250GB HDD

3 Gigs of Patriot Extreme Performance 240-Pin DDR2 SDRAM DDR2 800

BFG Tech BFGR76256GTOCE GeForce 7600GT 256MB 128-bit GDDR3


So anyway, should I (or do I need to) upgrade my motherboard before I buy this rather expensive flash drive? I want to extract its full performance.

Also, one other question: when loading games/the OS, is there any writing to the drive, or is it just reads? The Intel X25-M has a much slower write speed than its SLC brother, the Intel X25-E. I'd gladly buy the more expensive one if I'm not going to see gains with the MLC one.
 

sub mesa

Distinguished
Hardware RAID has a bottleneck in the number of IOps, so it won't scale indefinitely; even high-end Areca controllers are affected. Software RAID using the CPU and chipset-powered SATA ports will alleviate this bottleneck. But this shouldn't be an issue with just one disk. Just don't put it on a PCI bus. :)
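To put the "don't put it on a PCI bus" advice in numbers, here's a quick sanity check. The figures are my assumptions, not measurements: classic 32-bit/33MHz PCI tops out around 133MB/s shared across all devices on the bus, while the X25-M is rated around 250MB/s sequential read.

```python
# Rough sanity check: would a legacy PCI bus bottleneck an X25-M?
# All figures are approximate/assumed, not measured.
PCI_BUS_MBPS = 133       # classic 32-bit / 33 MHz PCI, shared by all devices
SATA2_PORT_MBPS = 300    # SATA II per-port ceiling (3 Gb/s ~ 300 MB/s)
X25M_READ_MBPS = 250     # Intel X25-M rated sequential read

def bottleneck(drive_mbps, link_mbps):
    """Effective throughput is capped by the slower of drive and link."""
    return min(drive_mbps, link_mbps)

print(bottleneck(X25M_READ_MBPS, PCI_BUS_MBPS))     # 133 -> PCI caps the SSD
print(bottleneck(X25M_READ_MBPS, SATA2_PORT_MBPS))  # 250 -> chipset SATA is fine
```

So a PCI-attached controller would throw away roughly half the drive's sequential read speed, while the chipset's native SATA ports leave headroom.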

I wouldn't buy the X25-E, as it's much more expensive and only attractive if you do a lot of random writing, such as in web/database servers and the like. You can always put two or more X25-Ms in RAID0 for the same price, and have more storage space available.

Games will mostly read, though not all games are well written to utilize the disk properly, or they have to do a lot of single-threaded CPU-bound processing. But games like World of Warcraft would see a significant gain from using an SSD. Check out some SSD reviews/benchmarks to get a general picture.
 

silenkiller

Distinguished
Jan 30, 2006
66
0
18,630
Thank you for the response. Very helpful. You suggest RAID0, but not through hardware. I'm not familiar with software RAID. So I can essentially plug both SSDs into the mobo with SATA II cables and create a RAID0 array?
 
Hardware RAID generally scales much better than software RAID, so I'm not sure why sub mesa is recommending the other way. Of course, a good RAID controller is quite expensive. Honestly, unless you need the capacity, I'd lean towards just a single SSD, as it will easily have adequate performance.
 

sub mesa

Distinguished
Hardware RAID does scale very well, but eventually its slower CPU would bottleneck an extremely fast setup of multiple SSDs in striping RAID. For random read IOps, the 800MHz Areca controllers are limited to about 70,000 IOps, which is a shame if your array can do triple that amount with software RAID. Again, this is only an issue in an extreme setup of more than 6 SSDs on an Areca controller; at that point, the IOP (the controller's I/O processor) becomes the bottleneck. The original 500MHz Areca controllers also have a sequential write bottleneck around 400MB/s, which is rather low. Reads go up to 800MB/s.

For performance, a single SSD without RAID is really fine; you won't notice much from SSDs in RAID0 on Windows, as the queue depth is not high enough to saturate the disks. Ultimately a software bottleneck. :)
But unlike HDDs, you should be able to use SSDs in RAID0 without much risk. So you can buy 2x 80GB for the same price as 1x 160GB and potentially get a faster disk, plus the ability to split the RAID and have 2 bare disks again for use in 2 systems.
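As a toy illustration of what RAID0 striping actually does (this is only a sketch of the addressing scheme, not any real RAID driver): logical blocks alternate across member disks in chunk-sized stripes, so a long sequential read pulls from both drives at once.

```python
# Toy model of RAID0 striping: logical blocks are distributed round-robin
# across member disks in fixed-size chunks. Illustration only, not real RAID code.
def raid0_map(logical_block, num_disks, chunk_blocks=8):
    """Map a logical block number to (disk index, block on that disk)."""
    stripe = logical_block // chunk_blocks      # which chunk-sized stripe
    offset = logical_block % chunk_blocks       # position inside the chunk
    disk = stripe % num_disks                   # stripes alternate over disks
    block_on_disk = (stripe // num_disks) * chunk_blocks + offset
    return disk, block_on_disk

# A sequential read of 32 blocks over 2 disks touches both disks evenly,
# which is why sequential throughput can nearly double.
reads = [raid0_map(b, num_disks=2) for b in range(32)]
print(sum(1 for d, _ in reads if d == 0), "blocks from disk 0")
print(sum(1 for d, _ in reads if d == 1), "blocks from disk 1")
```

The even split is the whole point: each drive only has to serve half the data, so the array's sequential rate approaches the sum of the members'.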
 
Software RAID is often limited to far less than 800MB/s and 70k IOps though, so I still think that if top-end storage speed is what you are after, hardware RAID is the way to go. I agree that a single SSD is the way to go for now, though. As for the "without much risk" comment, I would actually bet that current SSDs fail more often than HDDs (aside from certain problematic drives, such as the 7200.11 series). Hard drives are remarkably reliable and, from my experience at least, are not what usually fails first in an older computer.
 

sub mesa

Distinguished
Software RAID has no such cap; it's only limited by the interface speed, memory bandwidth and raw processing power. All three should be sufficient in ordinary systems, and speeds well in excess of 1GB/s are a piece of cake under Linux/BSD with software RAID and fast disks. Four SSDs may be all that's required to break the 1GB/s barrier.
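Back-of-the-envelope numbers for that last claim (the per-drive figure is my assumption, roughly the X25-M's rated sequential read, with ideal scaling and no controller in the way):

```python
# How many striped SSDs does software RAID need to pass 1 GB/s,
# assuming ~250 MB/s sequential read per drive (approximate X25-M spec)
# and ideal scaling with no controller bottleneck?
PER_DRIVE_MBPS = 250

def drives_needed(target_mbps, per_drive=PER_DRIVE_MBPS):
    # Ceiling division: smallest drive count whose combined rate meets the target
    return -(-target_mbps // per_drive)

print(drives_needed(1000))   # 4 drives for ~1 GB/s
print(4 * PER_DRIVE_MBPS)    # 1000 MB/s aggregate
```

Which lines up with the "four SSDs break 1GB/s" figure, provided nothing between the drives and memory caps the aggregate.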

cjl, I respect your experience with mechanical disks. However, my own experiences and those of others may vary. Even if the failure rate is just 1% per year (and it's higher according to manufacturers' specs!), that's too unreliable to trust your data to without redundancy or backups. An SSD should never fail, ever. If it exceeded its write cycles, it should still be readable, just not writable. Again, no data loss there.

I'm not saying SSDs on the market now are 100% reliable, but there is no inherent weakness in the drives, unlike with any mechanical piece of equipment. If it's mechanical it will fail; it's just a matter of time. That's different for non-moving electronics: when those fail, it's usually static electricity, a bad power supply or physical trauma.

The only equipment that has actually broken for me is power supplies and hard disks.