
Opinion 2 SSD in Raid 0 or Larger single drive

June 3, 2011 2:30:19 PM

Hello. I am doing some research about SSDs, and I was wondering what people's opinions are about whether it is better to use two 120 GB SSDs (probably the Corsair Force Series 3) in RAID 0 or just a single 240 GB SSD of the same series. If I do my shopping right, I can get the two smaller drives for a lot less than the single 240 GB drive, so that is part of why I am considering this.

Thanks.


June 3, 2011 3:58:39 PM

According to most of the SSD reviews I have read, larger drives in the same series are significantly faster than the smaller drives, because they essentially behave like an internal RAID 0: the larger drives have more memory channels running in parallel.

RAID0 might be a money-saving solution, but it increases the chance of a failure, and the chance that such a failure would leave the drives in a state where the data can't be recovered. So do frequent backups.
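The point about RAID 0 increasing failure risk can be shown with back-of-the-envelope math. This is a toy sketch that assumes independent drive failures and an invented per-drive failure rate, not a figure from any vendor:

```python
# Toy RAID 0 risk model: the array is lost if ANY member drive fails.
# Assumes independent failures; the 2% annual rate is made up for illustration.

def array_failure_prob(p_single: float, n_drives: int) -> float:
    """Probability that at least one of n drives fails, killing a RAID 0 array."""
    return 1 - (1 - p_single) ** n_drives

p = 0.02  # hypothetical 2% annual failure probability per drive
print(f"1 drive : {array_failure_prob(p, 1):.4f}")  # 0.0200
print(f"2 drives: {array_failure_prob(p, 2):.4f}")  # 0.0396 -- nearly double
```

Whatever the real per-drive rate is, striping two drives roughly doubles the chance of losing the whole volume, which is why the backup advice matters.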
June 3, 2011 4:18:11 PM

For typical home use and gaming, it is safe to say that a single large solid-state drive is better than two smaller SSDs in a RAID array. The problem with large-capacity drives is cost; that's why users purchase a 120 GB to 160 GB SSD instead of a larger 240 GB to 300 GB one. If you can afford it, then go for it.

June 3, 2011 5:17:26 PM

I'll add my vote for the "one large drive" option. When you use RAID you prevent the Windows 7 TRIM commands from reaching the drives, which means the drives can't learn which deleted blocks are free for garbage collection. Over the long haul, this can cause your write performance to degrade.
June 3, 2011 5:37:00 PM

+1 for the larger single drive. You will not notice much of a difference between RAID 0 SSDs and a single larger one. Plus, RAID 0 SSDs degrade in speed over time, as TRIM is not supported in RAID 0 SSD volumes yet.

Best solution

June 3, 2011 8:00:16 PM

I would like to present a third option, if cost is a factor for a large single drive: purchase two SSDs and run them as separate drives. You maintain the speed of a single drive but avoid the risk inherent in a RAID 0 setup. Install the OS and apps on one drive and games/storage on the other, or some combination of both alongside a standard hard drive.
June 3, 2011 11:41:30 PM

If you buy two 120GB SSDs, neither of the drives by themselves will be as fast as a single 240GB SSD (although it will be possible to perform I/O to both simultaneously). This means that activity confined to one drive (booting, for example) will take longer with one of the smaller drives than it would with a larger drive. It's not going to be a very big difference in practical terms, though.

There may be exceptions to this if you mix and match SSDs from different manufacturers or with different controllers, but within a given model line of SSDs the larger models are usually faster because they use the same controller with more flash chips, and with the additional chips they can perform reads and writes in parallel.
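The "more flash chips working in parallel" point can be sketched as simple throughput math. The per-die speed and controller cap below are invented for illustration; the shape of the result, not the numbers, is what matters:

```python
# Toy model of why a larger SSD in the same series is faster:
# same controller, more flash dies, more operations in flight at once.
# PER_DIE_MBPS and the controller cap are hypothetical numbers.

PER_DIE_MBPS = 40  # assumed sequential write speed of one flash die

def drive_throughput(n_dies: int, controller_limit_mbps: int = 500) -> int:
    """Aggregate write speed: dies add up until the controller/interface caps out."""
    return min(n_dies * PER_DIE_MBPS, controller_limit_mbps)

print(drive_throughput(8))   # smaller model, 8 dies  -> 320
print(drive_throughput(16))  # larger model, 16 dies -> 500 (capped by controller)
```

This is also why the scaling eventually flattens: once the controller or SATA link is saturated, doubling capacity again buys little extra speed.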
June 4, 2011 5:22:02 AM

Two smaller drives will always beat a single larger drive, because SSD performance is all about the channels used. With the smaller drives it's more about the IC configuration than the channels populated, and could be thought of as "lanes" used instead.

So the number of channels used is the biggest bottleneck (aside from the NAND chips' inherent limitations), followed by the memory IC density/chip count used to achieve the desired capacity. So, again: more channels used are better than more lanes used, and smaller RAIDed drives will gain where it matters, down low. Now you can split the pie among as many drives as you want all at once, and the pieces are much larger to go around.

And an FYI about these SandForce controllers: TRIM just allows GC to be utilized a bit quicker during "on the fly recovery" (which only kicks in during low fresh-block availability anyway), and GC is still pretty lazy, with TRIM-marked blocks simply being set aside for later use once GC has the right powered-on/low-activity conditions to make use of them. But if you idle on occasion for recovery, allow larger amounts of slack space, and don't benchmark or video-encode the things to death, TRIM is definitely NOT needed with this controller.

So, in a nutshell: back up your data (as you should anyway), set up a RAID 0, enjoy the speed, idle it, enjoy the speed. Pretty simple, actually, and very few go back to single drives after an SSD array.

Especially with those SandForce controllers, since six drives in RAID 0 can have the same latency as a single drive would. Plus, incompressible data's linear write speeds will be slightly higher, with much better small-file low-end grunt for increased multitasking ability.

Sandforce raid is awesome and shouldn't be doubted unless one is speaking from experience.
June 4, 2011 2:44:16 PM

groberts101 said:
TRIM just allows GC to be utilized a bit quicker during "on the fly recovery" (which only kicks in during low fresh-block availability anyway), and GC is still pretty lazy, with TRIM-marked blocks simply being set aside for later use once GC has the right powered-on/low-activity conditions to make use of them.
Without TRIM the SSD has no way to know that the blocks in a file that's been deleted are now free. If the SSD doesn't know that the blocks are free then it can't use them for garbage collection, period.

The SSD manufacturers provide utilities that can scan the file system and tell the SSD manually which blocks are free, but that's a manual process and I still haven't heard definitively whether they work any better with RAID controllers than TRIM does.
June 5, 2011 6:34:07 AM

Not true at all, or the drives would never be able to recover dirty blocks on older systems/OSes that were non-TRIM-compliant, or in RAIDs (Revo and IBIS included). These (and many others these days) DO have the ability to reclaim those blocks after a data-map comparison.

Heck, even my Indilinx array could be beaten down to a near-stuttering mess, and with just 10 hours of overnight logged-off idle be right back to near-fresh speeds on the very next series of test runs.

So, no.. firmware has evolved quite a bit in the past year and a half and does more than you realize these days.

And none of those utilities work to reset the SandForce controller's maps to make it think the blocks are clean and available (writing all 1s like AS Cleaner does). Only GC or a secure erase (SE) can do that.
June 5, 2011 4:46:21 PM

groberts101 said:
So, no.. firmware has evolved quite a bit in the past year and a half and does more than you realize these days.
I think you're confusing the tracking of used/unused pages on the flash memory chips, managed by the SSD controller, with the free vs. allocated blocks (logical block numbers, or LBNs) managed by the file system. The SSD has no knowledge of the file system and no knowledge of which LBNs are in use or free at that level. The only way it can tell that an LBN is used is if the host writes to it; from that point in time onward it must assume that the LBN contains data the host may need again at some point. It may shuffle that LBN around to different flash memory cells as part of its garbage collection, but it's required to keep the data in case the host asks for it again.

Over time, as more and more LBNs are written to, this leaves the controller with fewer and fewer unused LBNs to use in garbage collection. There are only two ways to tell the controller that an LBN is no longer in use: TRIM and a data security erase operation (which marks ALL LBNs as unused).
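That LBN bookkeeping can be illustrated with a toy flash translation layer. This is a deliberately simplified model to show the logic sminlal describes, not how any real firmware is structured:

```python
# Toy FTL model: once the host writes an LBN, the controller must preserve
# its data until it is told otherwise via TRIM (or the whole drive is
# secure-erased). Deleting a file at the OS level alone tells it nothing.

class ToyFTL:
    def __init__(self, total_lbns: int):
        self.total = total_lbns
        self.used = set()  # LBNs the controller believes hold live data

    def write(self, lbn: int):
        self.used.add(lbn)          # from now on, assumed to be live data

    def trim(self, lbn: int):
        self.used.discard(lbn)      # now reusable by garbage collection

    def free_lbns(self) -> int:
        return self.total - len(self.used)

ftl = ToyFTL(total_lbns=100)
for lbn in range(60):
    ftl.write(lbn)
# The OS deletes files covering LBNs 0-29; without TRIM the FTL never learns:
print(ftl.free_lbns())              # 40 -- deleted blocks still look "in use"
for lbn in range(30):
    ftl.trim(lbn)
print(ftl.free_lbns())              # 70 -- TRIM returned them to the free pool
```

Without the `trim()` calls, the controller's free pool only ever shrinks as more LBNs are touched, which is exactly the squeeze on garbage collection described above.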
June 5, 2011 5:07:52 PM

fmenton66 - What will you be doing with your PC? Professional use, hardcore gaming, or just typical use? It makes a difference.
June 5, 2011 5:57:36 PM

Second what Johnny said: "Use makes a diff."
I'm with sminlal: avoid RAID 0 until they fix the TRIM issue.
I pretty much go with tecmo34 (two separate drives, not in RAID 0), followed very closely by sminlal's choice of one large drive.

I think sometimes we tend to look at the benchmarks more than real-life day-to-day use. For example:
Booting: So you shave 5-10 seconds off. How often do you boot? Myself, once or twice a day; others leave the machine on 24/7. Time is only shaved loading the operating system, not the POST, and that's for going from a SATA II to a SATA III SSD. Using the same SSD, the savings between a large drive, a small drive, or a RAID 0 are not going to be greater, maybe on the order of 2-5 seconds.
Program load: Between clicking the shortcut and the program being available, if you cannot move your mouse there before the program is ready, it makes no difference. When I click a recent spreadsheet link, the program and spreadsheet are available before I can move my mouse to a cell to edit it, and that is a SATA II SSD.
Game playing: Yes, the program/maps/tables will load faster on an SSD vs. an HDD. But will that translate to a noticeable difference going from (a) two small SSDs to (b) one large SSD to (c) two SSDs in RAID 0 (all, say, Vertex 3)? If you cannot perform an operation in the difference of time, it's a moot point.


A second consideration: speed versus reliability/quality.
I get the feeling that OCZ is going for performance at the cost of quality. An SSD should be pretty much plug-and-play. The emphasis here is on the word PLAY, and that's not game play time; it's the possibility that you may PLAY at getting it to work. You can get a good feel for this reading the OCZ forums.

Case in point: I ordered two Agility 3s. Put one in my desktop (ASRock Z68 Extreme4) and it was as plug-and-play as it should be.
Received my new notebook (Samsung RF711-S, also Sandy Bridge). FORGET IT. Win 7 would bomb out when expanding files. Looked at the OCZ forum, so I downloaded the newest firmware + toolbox. Put the drive in my i5 desktop; their OWN program could not find the SSD. However, little old Win 7 could see it and initialize/partition/format the drive. Oh, NOW OCZ could find the SSD, but it already had the latest firmware.

Bottom line: the OCZ Vertex 3 may be the KING, but until OCZ gets their act together, I'll be looking a little less at performance and more at quality and customer service, i.e. fixing their toolbox and their firmware.
June 5, 2011 6:38:30 PM

Interesting option suggested by tecmo34 and Retired Chief: two separate SSDs, not in a RAID array. That is somewhat similar to one of the possible SSD configurations for use with Adobe Photoshop products: one SSD for Windows and Photoshop, a second SSD used as a scratch disk, and then a hard disk drive to store raw and finished images.
June 5, 2011 9:43:22 PM

sminlal said:
I think you're confusing the tracking of used/unused pages on the flash memory chips, managed by the SSD controller, with the free vs. allocated blocks (logical block numbers, or LBNs) managed by the file system. The SSD has no knowledge of the file system and no knowledge of which LBNs are in use or free at that level. The only way it can tell that an LBN is used is if the host writes to it; from that point in time onward it must assume that the LBN contains data the host may need again at some point. It may shuffle that LBN around to different flash memory cells as part of its garbage collection, but it's required to keep the data in case the host asks for it again.

Over time, as more and more LBNs are written to, this leaves the controller with fewer and fewer unused LBNs to use in garbage collection. There are only two ways to tell the controller that an LBN is no longer in use: TRIM and a data security erase operation (which marks ALL LBNs as unused).


All I can say to sum it up for you is: you're wrong about RAID's ability to recover. I've written over 50 terabytes of data testing the TRIM and GC recovery algorithms of AT LEAST 40 different drives using five different controllers, a solid 10+ of which were specific to SandForce throttling/recovery algorithms. Most of my testing was done in RAID.

Intel and Marvell had some issues not too long ago, but Indilinx and SandForce in particular have been going strong for more than a year now.

Let's try to drive it in a bit more here. Fully degrade an Indilinx-based RAIDed volume with all sorts of benchmark and temp-storage trash. Just hammer the piss out of it for good measure.

Now idle the machine to let GC work, and guess what happens? You can follow up with a subsequent write session up to the same amount of free space listed in Windows. You couldn't do that if your antiquated first-gen firmware picture were still true. No way, no how, and that was the initial problem with these things. Where do you think all the manual garbage-collection tweaks/TRIM tools came from? No actively present GC/recycling to keep degradation from setting in and staying put. Those days are gone, buddy, and no one could sell a drive that recovered that slowly. They'd sure as hell be on some huge "raiders blacklist" and common knowledge by now. Old news.

So, in a nutshell: put up the stats or test results after you turn TRIM off or run RAID for a few months. You could even accelerate the tests by filling 95% of the drive, then writing the P outta what's left, rinse and repeat with idle time in between. What happens? Yep, the drive keeps returning that trashed NAND to the free-block pool for future writes. That's what GC does for your drive these days. TRIM just accelerates the process and lets the drive save time by eliminating some of the map comparison it would otherwise need to rely on.

And an FYI here again: when it comes to SandForce, TRIM-marked blocks are NOT recycled on the fly or used immediately (unless immediately needed due to an empty free-block reserve); they are simply marked/mapped and set aside for later use when efficient GC conditions are met (low activity, powered on). Non-TRIM-compliant setups do just fine, and many have avoided reinstallations on heavily degraded drives just by implementing some heavy initial idle time followed by an occasional logoff idle to keep them on the right track. These drives DO recover without TRIM. It's a solid fact based on experience from many.
June 5, 2011 10:28:01 PM

groberts101 said:
These drives DO recover without TRIM.


Not according to this: http://www.anandtech.com/show/4346/ocz-agility-3-240gb-...

Edit: Well, to be fair, they DO recover, but not as much.

The other thing to bear in mind is that transfer rates are only part of the picture. Without TRIM, the controller's GC mechanisms don't have as many unused LBNs to work with, and as a result they require more work to scavenge write-ready flash pages. The net result is that the write amplification factor goes up and your drive's lifespan goes down.
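The write-amplification point can be made concrete with a toy calculation. The page counts below are invented purely for illustration; only the definition of the ratio is standard:

```python
# Write amplification factor (WAF) = NAND pages physically written
# divided by pages the host asked to write. With fewer free LBNs,
# GC must relocate more live pages per erase block, raising WAF.
# The page counts here are hypothetical.

def waf(host_pages_written: int, gc_pages_copied: int) -> float:
    """Total NAND writes (host + GC relocation) over host writes."""
    return (host_pages_written + gc_pages_copied) / host_pages_written

# Plenty of TRIMmed free space: GC rarely relocates live data.
print(waf(host_pages_written=1000, gc_pages_copied=100))   # 1.1
# Starved of free LBNs: GC copies many live pages per erase block.
print(waf(host_pages_written=1000, gc_pages_copied=1500))  # 2.5
```

Since NAND cells tolerate a limited number of program/erase cycles, a higher WAF directly translates into the shorter drive lifespan mentioned above.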
June 5, 2011 11:44:48 PM

Johnny,
I have the Patriot Pro 120 GB as a boot drive, stuck my Intel 80 GB G2 in as a scratch drive, and put all my most-often-used files on it. Works great.

However, yesterday I pulled it out to stick in my new notebook, since the NEW OCZ Agility 3 will NOT work!! I've played with it the better part of this afternoon; still no workee. Thinking about just returning it and getting one that does play nice with SB notebooks, maybe the Intel one.
June 6, 2011 12:10:12 AM

Retired Chief, we're getting off subject, but from what I read in articles and forums it appears laptops and notebooks have a harder time getting along with SSDs. This is especially true with the MacBook Pros. The Intel 320 seems to be the best bet.
June 6, 2011 12:59:45 AM

^ Yes, on getting off subject. I only pointed this out because it identifies a weakness and a failure by OCZ to address known problems, yet they are the most recommended SSD because of their performance. Reliability figures come only from the manufacturer, not the real world yet, and that kind of bothers me with RAID 0.
June 6, 2011 7:25:24 PM

I went with 2X 120 GB Vertex 2 drives. After 6 months in RAID mode, they're running absolutely great. Sure, the transfer rate is "only" ~370MB/s now as opposed to ~500MB/s when I first set the system up, but it still works great. It boots in about 16 seconds (the RAID card takes 8 seconds to initialize) and Rift and WoW loads are <3 seconds. Photoshop CS4 64-bit loads in 4 seconds. Most everything else loads as close to instantly as you can get.

TRIM isn't as important as you may think. I bet a dual-drive setup would work even better with dual C300 or M4 drives, as they aren't as dependent on compression routines, TRIM, and garbage collection.
June 7, 2011 3:46:49 PM

Thanks for all of the responses. A lot of good information. Just to let everyone know, the PC will be for mixed usage (documents, web browsing, email, and gaming); however, my main focus for this question is improving gaming performance. I think I may try the separate-drive solution: drive 1 for the OS and office apps, drive 2 for games. Finally, for documents, media files, and less critical apps I have a mechanical hard drive.
June 16, 2011 1:55:56 PM

Best answer selected by fmenton66.