This is possible. It may be a little more complicated if you are going to use the original SSD from the first computer as part of the RAID array in the second computer. If that is not the case, then the information will be written onto the RAID drives as if they were just one drive; the RAID controller will place the information onto the SSDs correctly in stripes with no problem. However, you cannot simply copy your entire OS onto the new drives.
You can read about it here. http://www.makeuseof.com/tag/move-full-operating-system...
You must make a "clone" of the OS.
If you are going to use the original SSD in the array, then I would suggest putting all the information you want to transfer onto a different hard drive or large disk and moving it over after a fresh install of the OS.
Yes, the two SSDs would add up to about 64GB (give or take). The performance increase will be around 50%. Although, most SSDs are fast and efficient and you usually don't need to mess with RAID0.
Hope this helped
The drive change will happen in the same computer (same mobo, CPU, etc). The only change will be swapping a single SSD with 2x SSDs in RAID0.
I will not be using the original SSD in the array. (I'll be keeping it to create backup images on).
Reinstalling the OS would defeat the main purpose - I want to retain some of the in-program settings and changes that will be on the drive. Some of these cannot be copy&pasted elsewhere; if I did a fresh OS install, I would need to make the setting changes and create the content from scratch.
So about 50%, hmm. I'd still like to know if it's possible.
Have you thought about a larger single SSD? Even 64GB is generally regarded as sub-optimal by today's standards, since it will be used up pretty quickly.
If you were going down a RAID route for your OS it's much better if you can install to RAID first, especially considering the small sizes you are talking about (32GB x 2).
With a backup image (e.g. Acronis or Ghost) I'm fairly sure it should be easy and fast to restore to a pre-built RAID later (my own Acronis restores have been to single drives in the past).
Edit: As Robohazard mentioned above, your source image needs to reside away from the RAID disks (e.g. a USB pen drive).
I'll only be needing space for win7 h.p. 64 (about 20GB), and a few other very small programs totaling less than 1GB... so 32GB of space should be enough. And.. I can upgrade to a larger size later.. just initially I need to minimize costs as much as possible.
I was thinking of that, creating the RAID0 right at the beginning. Again, it's a question of cost. I'm just seeing what my options are now.
...or I might go with RAID1 mirroring. I think I need to figure out more of what I want before I see if it's possible.
But it does seem that it is much easier to set up any type of RAID at the beginning.
1) RAID0 makes little sense for SSDs. It was great for HDDs that contained OS + programs + all your data, including large photos and videos.
a) RAID0 does increase sequential performance (improves read/write of large data files), but this is the LEAST important parameter for an OS + program drive. It DOES NOT decrease access time, and gives VERY little improvement in 4K random performance, which is what IS important for an OS + program drive.
b) A 128 gig SSD is faster than a 64 gig drive, and a 64 gig drive is faster than a 32 gig drive. This negates much of the performance gain of using two smaller drives in RAID0 vs buying a single larger drive - and there is very little cost difference, i.e. two 64 gig SSDs cost about the same as one 128 gig SSD.
On space that is USABLE, taking a 64 gig SSD as an example:
1) A 64 gig SSD's formatted size is approx 60 gigs (4 gigs lost right there).
2) The rule of thumb for HDDs was to leave a minimum of 10% unused. With SSDs that started out as 10%, then went to 15%, and the recommendation is now 20->25% UNUSED. Using the 20% number, that leaves the 64 gig drive at 64-4-12, or 48 gigs. This is the reason the recommended MIN size is 80 gigs, with a recommended size of 120/128 gigs.
PS: that 32 gig SSD only allows for approx 24 gigs of usable space.
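The usable-space arithmetic above can be sketched in a few lines (a rough rule of thumb, assuming the same ~6% formatting loss and 20% unused reserve used in the figures above):

```python
def usable_gb(advertised_gb, format_loss=0.0625, reserve=0.20):
    """Rough usable SSD space after formatting loss and the
    recommended unused/over-provisioning reserve (rule of thumb)."""
    formatted = advertised_gb * (1 - format_loss)   # e.g. 64 -> ~60
    return formatted * (1 - reserve)                # e.g. 60 -> ~48

for size in (32, 64, 128):
    print(f"{size} GB advertised -> ~{usable_gb(size):.0f} GB usable")
# 32 GB advertised -> ~24 GB usable
# 64 GB advertised -> ~48 GB usable
# 128 GB advertised -> ~96 GB usable
```

This matches the 48 gig and 24 gig figures above; the exact loss varies by drive and filesystem.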
1) For an OS + program drive, over half of the files are less than 32 K, while the typical stripe size for a RAID0 is 64 K. And these files are randomly placed - they would never be sequentially read during boot time or when a program loads.
2) Access time is determined by the device, NOT how it is connected (RAID0 or non-RAID0), and it is the access time + read/write time that determines random 4 K performance.
3) Sequential performance applies to a large file that is sequentially placed on the drive.
For example, a .VOB (DVD movie file) is up to 1000 MBs, and a large jpeg/bitmap photo is, say, 10 megs. Files are broken up into 4 K clusters, so a 10 meg photo = roughly 2500 clusters. If these clusters are arranged one after the other, then reads run at the high sequential performance. These types of files are the only ones that benefit from high sequential performance, and yes, RAID0 gives a good boost to these - but HOW MANY DVD movies or photos do you plan on putting on a SMALL SSD? PS: a single Blu-ray file is up to 35 gigs (that is for ONE file).
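The cluster count is simple arithmetic; a quick sketch (assuming 4 K clusters and 1 MB = 1024 KB, which is why the ~2500 figure above is an approximation):

```python
CLUSTER_KB = 4  # cluster size assumed in the post above

def cluster_count(file_mb):
    """How many 4 KB clusters a file of file_mb megabytes occupies."""
    return file_mb * 1024 // CLUSTER_KB

print(cluster_count(10))    # 10 MB photo  -> 2560 clusters (~2500)
print(cluster_count(1000))  # 1000 MB .VOB -> 256000 clusters
```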
If a person is doing a lot of photo editing or working with movies, then RAID0 would be beneficial - BUT THEY would use a pair of HDDs, not SSDs, unless they have DEEP pockets, or less brains than money - LOL.
Bottom line: for an OS + program drive, the performance gain would be noticeable when running a benchmark - BUT NOT when loading the OS or loading a program.
I am posting my evolving learnings on the other thread.
The two most relevant/helpful things so far: the first is that SSDs can process up to 10 I/Os in parallel, compared to the one-at-a-time of my current rotational drive.
Some SSDs only have 8 channels, and others have 16.
-This aspect alone would almost solve the disk queue length issue.
The other thing is the number of IOPS of SSDs compared to a 7200rpm drive. One test (involving 8kb sizes, sequential, random, and read and write ops) calculated 8100 IOPS for a SATA 3Gb/s SSD, compared to 184 IOPS for a 7200rpm drive.
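Putting those two quoted numbers side by side (no new data, just the ratio and the implied average per-I/O time):

```python
ssd_iops = 8100  # figure quoted above (SATA 3Gb/s SSD, 8 KB mixed test)
hdd_iops = 184   # same test, 7200 rpm drive

print(f"IOPS ratio: ~{ssd_iops / hdd_iops:.0f}x in the SSD's favor")
print(f"Avg time per I/O: SSD ~{1000 / ssd_iops:.2f} ms, HDD ~{1000 / hdd_iops:.2f} ms")
# IOPS ratio: ~44x in the SSD's favor
# Avg time per I/O: SSD ~0.12 ms, HDD ~5.43 ms
```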
What you pointed out could be one of the main limiting factors - access time.
What is the best way to reduce this?
chipset? CPU? memory speed?
I am already planning on a Z75 or Z77 board, with an i5 3570k at 3.4GHz or 4.0+GHz, and 1600 memory.
Access time is internal to the SSD; to change it, you would have to redesign and replace the internal controller. Not trying to be funny.
With mechanical HDDs, access time was a function of RPMs + platter magnetic domain density and the controller. As RPMs increased (5400->7200->10K), access time decreased. It was also a function of where the data was on the platter: data at the outer diameter of the platter had a much faster access time than data close to the center (since the platter spins at a constant angular velocity, the outer edge moves faster).
I had a pair of WD Black 640GB drives; access time was listed as 12.6 ms, and that is very close to what the benchmark showed. I placed these two drives in RAID0 - exact same access time. I then placed them in a RAID0 using what is called short stroking - this DID DECREASE access time, down to approx. 9.5 ms.
Short stroking is when, in setting up the array, you only use about 30% of the HDD; the remaining 70% is wasted. The reason this cuts access time is that it forces all files onto the outer 30% of the platters - simple rationale: as I already stated, for rotating platters the outer circumference is faster.
SSDs couldn't care less where the data is located. Think of it as saying "I want data at X, Y, and Z coordinates" (Z because the cells are multilayered). This is why access time is measured in tenths of a millisecond instead of 12.6 ms. With SSDs it is simply an address; with HDDs the drive has to A) physically move the head in/out and B) wait for the data to pass under the head.
You catch on fast. And yes, that is why large data centers use SSDs for high-I/O servers. They just can't be beat. And typically with SSDs, the more storage they offer, the higher the I/O, write, and read speeds (usually due to more channels).
If you were close I would let you experiment with a samsung 830 128gb I have.
Edit - I forgot to mention that write latencies are higher and vary greatly with the SSD. I've seen anywhere from .1ms to .99ms. (wow)
"And yes that is why large data centers use ssd's for high i/o servers."
That's not really true. Data centers don't use SSDs - not yet, at least. They are WAY too expensive and too immature a technology for data storage on an enterprise scale, even when you include the cost of powering and cooling HDDs. Maybe one day... but not for a long time.