I have a RAID 5 array running on a Sil3132 controller: five 1TB drives, and creating the array goes fine and dandy. But when it comes time to rebuild the array, it always seems to fail, giving me an error in the logs that reads...(Sense Data 70:00:04:00:80:000000....00 (ScsiError) ) and then goes on to say that a drive has dropped from the array. It goes through the rebuild process and the dropped drive is then converted to a spare.
I then delete the array and create a new one, which takes 2 1/2 days. Once it's up and running it works fine until something causes it to fail again. Originally it would try to rebuild on reboots, but I have been able to fix that problem. Lately it happens during the transfer of files to the array: a drive will drop, the rebuild process will start, and it never completes, always ending in a Sense Data ..... (ScsiError). I've tried switching cables as well as PCIe slots, but that only seems to be a temporary fix until the next time something happens and it tries to rebuild.
Also, any alternatives for storing mass amounts of data and protecting it from drive failures? I've got a lot of stuff I want to protect, and since every drive dies at some point, I'd like a way to keep it without having to use backup tapes.
If it is the same member of the array having the error, would it be logical to try plugging in a new drive to replace the one that keeps having the error and seeing if that helps?
Silicon Image is "shitRAID" - do not use it.
Its RAID5 support is especially bad; consider a Silicon Image RAID5 array to be more prone to failure than a single disk without RAID. Heck, I think you would be safer off running Silicon Image RAID0 instead.
But the real solution is to move away from FakeRAID with its terrible-quality RAID drivers, and move to something sexy and stable.
ZFS comes to mind here, but that would imply a NAS that shares files over the network, built in a dedicated computer with lots of memory and a multicore CPU. If you're interested in that, I'll give you some nice pointers.
If not, please focus on a backup instead. RAID can never replace a backup, RAID can only provide redundancy. RAID also introduces another layer to your storage chain that can fail, potentially making your storage setup much less reliable due to the RAID.
To do RAID5 properly on Windows you would need hardware RAID plus TLER/CCTL enterprise RAID disks. On anything other than Windows you can use software RAID and normal (non-RAID edition) disks.
I would very much appreciate some advice on ZFS; I've never heard of it or messed with it. I've got a tower with a Core 2 Quad Q6600 and 6GB of RAM running the RAID setup now, which may be able to do the job. I'd love to put together another rig, though.
What's the best way to back up mass amounts of data? It's around 1.65TB and growing. Split it up onto 1TB drives and keep the drives in a safe place?
Any recommendations on a RAID5 controller that is not crap, as another option?
Oh, and I'm working on a web interface like FreeNAS for FreeBSD, so once installed you could do (mostly) everything through the web interface. If you don't know FreeNAS, look at it as well; it's an OS totally dedicated to serving as a NAS.
ZFS is both a filesystem and a RAID engine; ZFS does RAID0, RAID1, RAID5, RAID6 and even a triple-parity "RAID7" (actually modified versions of these, called RAID-Z1/Z2/Z3).
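To give a concrete feel for how little ceremony this takes, here is a sketch of building a RAID-Z pool on FreeBSD. The pool name `tank` and the device names `ada1`–`ada5` are placeholders; your actual disk names will differ, and these commands only make sense on a machine that has ZFS and the disks attached.

```shell
# Single-parity RAID-Z (roughly RAID5) across five 1TB disks
# (device names ada1..ada5 are assumptions -- check yours with: camcontrol devlist)
zpool create tank raidz1 ada1 ada2 ada3 ada4 ada5

# Double parity (roughly RAID6) would instead be:
#   zpool create tank raidz2 ada1 ada2 ada3 ada4 ada5

# Check pool health, and scrub periodically to detect silent corruption:
zpool status tank
zpool scrub tank
```

The scrub is the part FakeRAID has no real equivalent of: ZFS checksums every block, so a scrub can tell you exactly which files (if any) are damaged instead of silently dropping a drive.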
ZFS is something special, and not for just anyone. It doesn't run on Windows. It runs great on FreeBSD and OpenSolaris, two server operating systems that casual Windows users won't be able to work with.
The idea is that you build a home server, install FreeBSD on it, configure it to act as a NAS, then connect from your Windows PC to the FreeBSD machine, and you get a drive letter like D: with access to your files.
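The sharing step above is typically done with Samba. A minimal sketch, assuming a pool named `tank` and Samba installed from ports (the share name, server name, and config path are illustrative; the exact config filename depends on your Samba version):

```shell
# Create a ZFS filesystem to share ("tank" is an assumed pool name):
zfs create tank/share

# Minimal share definition in Samba's config
# (e.g. /usr/local/etc/smb4.conf on recent FreeBSD Samba ports):
#   [share]
#      path = /tank/share
#      read only = no

# Enable and start Samba:
sysrc samba_server_enable=YES
service samba_server start
```

On the Windows side you would then map it to a drive letter, e.g. `net use D: \\yourserver\share`, which gives you exactly the "drive letter like D:" workflow described above.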
If you require a DAS (Direct Attached Storage) - then you need something that works with Windows. Generally Windows and RAID5 for the casual user means problems, problems and data-loss. I recommend against it; but ZFS is great.
As for backups, easiest is to just copy directories yourself. So if your 1.6TB is in directories A B C D E, you would copy A B C to disk1 and D E to disk2; the manual method. It's the least that can go wrong; no RAID and stuff in between.
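The manual split really is just a couple of copy commands. A self-contained sketch (the directories here are temporary stand-ins; in practice `src` would be your array and `disk1`/`disk2` the mounted 1TB backup drives):

```shell
# Stand-ins for the array and the two backup disks (placeholders so the
# example runs anywhere; substitute your real mount points in practice).
src=$(mktemp -d); disk1=$(mktemp -d); disk2=$(mktemp -d)
mkdir -p "$src/A" "$src/B" "$src/C" "$src/D" "$src/E"
echo "important stuff" > "$src/A/file.txt"

# Copy A B C to the first disk, D E to the second:
for d in A B C; do cp -a "$src/$d" "$disk1/"; done
for d in D E;   do cp -a "$src/$d" "$disk2/"; done
```

`cp -a` preserves permissions and timestamps; for repeat backups you could swap in `rsync -a` so only changed files get copied.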
I read the articles and I think I am going to try to switch over. It seems pretty awesome. Do you have any recommended tutorials on how to set this up (as far as details go)? I got the gist of everything through the articles, and I have experience with command-line interfaces, so I'm not worried about that. Just a good tutorial to follow as a guideline. I found several through Google searches but didn't know if there were any that stood out from the rest. Seems like most are going to say pretty much the same things.
Thanks again for this recommendation; it looks like this is going to solve my problems in the long run. (I'm disregarding the speculation that ZFS won't be well maintained after the buyout. It seems quite efficient and reliable as is, without too many changes or additions being needed.) Hopefully I can abandon this FakeRAID if I can get ZFS working properly. Going to take some time and some $$, but it sounds well worth it.