OK, so first: Hello Tom's Hardware Forums! Second: sorry if this issue has come up before, my Google-fu is weak with computer problems. Right, onto the issue at hand...
My desktop has two 128GB OCZ Petrol SSDs in RAID0. It worked fine for 1.5 months, then sat switched off for 2 months while I was at university, and when I came back the array wasn't working (one drive had become a non-member disk, so the array was missing a drive). We tried to save the data but eventually just recreated the array and reinstalled Windows. All was well for a week, then the trouble started: freezing, failing to find an OS, Windows persistently running Startup Repair (fruitlessly), drives disappearing, one BSOD. Both drives show "Error Occurred(0)" on startup (when they're not missing entirely), although the RAID volume claims to be "Normal" and "bootable".
I should probably mention that as this is purely a gaming machine, I'm not concerned with saving the data; it's just the annoyance of reinstalling everything.
I'm not really sure what to do. My options as I see them are:
1) Do some tech-magic suggested by you good people to fix it. (Preferable, if possible...)
2) Use OCZ's warranty to get the drives fixed or replaced. (Worried that OCZ will give me some equally low-quality replacements.)
3) Ask the retailer for a refund, then do (4). (Possibly against their policies, I think, because the faults developed after delivery.)
4) Buy a single, larger SSD, pay for a reliable brand and not use RAID.
Any advice or comments would be greatly appreciated...
Wow, I never noticed how out-of-date my BIOS was (F6 to the current F10). Both of those updates seem to require a working system to start with (certainly the BIOS), so I'm going to attach an old 60GB HDD to the machine, install Windows onto that (again!), and apply the updates.
I assume I'll have to destroy the RAID array to have access to the individual SSDs to apply their updates?
Running them singly would help with reliability, and I agree that SSDs are fast enough without RAID, but I didn't want to deal with two separate drives: I think my Steam folder could get bigger than 128GB by itself, and I'm not sure how to cope with that.
The reason I went RAID in the first place is because 2x128GB drives was cheaper than 1x256GB. My advice to anyone thinking of upgrading to SSDs cheaply: get one. Don't do RAID like I did, even if it's cheaper.
Still trying with the BIOS update... it's proving harder than expected. (I think one of my HDDs didn't work, but the other did; the update exe isn't 64-bit compatible, and QFlash can't find the BIOS files that came with the exe... Argh!)
PS: Thanks for such quick replies guys! I'll come back when I've managed to apply the BIOS update.
Well, one week later and here I am! It appears that one week was a good guess, because problems returned yesterday: the machine can't find the OS to boot from. One of the drives briefly disappeared too, but they aren't reporting any errors.
I'm looking into getting OCZ to check them, or getting them replaced. Until then I'll try running them without RAID, see if that works. Feels more like a workaround than a solution, so I'm still open to suggestions.
I've got an OCZ Petrol 128GB and have been having problems with it. I use it as my OS/boot drive (Windows 7 Professional).
First of all it would stop being recognised by the OS after boot after about 6 weeks use, and then it would stop being recognised by the BIOS on boot.
The internet is awash with similar stories.
I was going to return it and went out to buy a replacement (I needed to fix it ASAP) but couldn't find the Intel one I wanted at the right price in the stores I visited, so I thought I'd have a tinker before ordering something on the internet in the evening.
I pulled it out of the computer, installed Windows on an old spinning-platter HDD, and then tried to see if it could be accessed when plugged in over USB (i.e. via a caddy with a SATA-to-USB interface).
I then ran chkdsk on it, but it kept getting stuck.
I plugged it back into the PC, still booting from the old HDD, and now the PC could see it. I updated the firmware from 3.12 to 3.15 (note that 3.15 is not necessarily a destructive update; it's only destructive if you're updating from anything earlier than 3.12) and then had chkdsk run on boot. It picked up problems and fixed them.
I reset it as the boot drive and it worked fine for a month.
Yesterday I got a BSOD with a BAD_POOL_HEADER STOP code, and then the drive wasn't being picked up by the BIOS. Out of the PC again, into the USB caddy again: recognised. Chkdsk stalled, so back into the PC, booting from the HDD, and the SSD was found again. I re-flashed from 3.15 to 3.15 (don't know if that did anything), chkdsk ran on boot, errors were found and fixed, and after resetting it as the boot drive it's now running fine.
I suspect the firmware update isn't necessary. I'd like to speculate that the following is happening: a dodgy sector is found on the drive, the firmware fails to handle it, goes berserk, and is unable to mark that the drive needs a chkdsk. This somehow corrupts the drive's state so that it can't be recognised by the BIOS, but it can be recognised through plug-and-play via USB. Chkdsk may stall, but it manages to mark the volume as needing a full check on boot, and this allows the drive to be recognised on boot and fixed.
That could be a load of old crap, but it's worked twice now, and I'd be interested to know if anyone can replicate my "fix".
It's possible we have the same problem. Mine is complicated by RAID though... Should I try running chkdsk while the disks are in a RAID volume?
Further developments at my end:
This is very odd. The drive isn't fine; it alternates between "Normal", "non-member disk", and invisible, without me rebooting. I'm also getting long pauses during startup (I think these happen just before it displays something to do with the drive). I tried deleting the RAID volume but somehow it's still there.
I may try switching the PCH SATA Control Mode from RAID(XHD) to AHCI without deleting the volume first. I think that should disable all the RAID stuff, so I can try the drives separately.
After some googling, it appears that running chkdsk on RAIDed drives is not recommended. I suppose it's just one of those things that doesn't work with RAID!
It seems the retailer is going to get them repaired and is already organising a return, but they say it could take 4-6 weeks. This means I won't have anything to contribute to the thread for a long time, so I'll probably let this thread sink. Thanks for all the help guys!
"After some googling, it appears that running chkdsk on RAIDed drives is not recommended."
Simply absurd. This recommendation only works for people who are emotionally attached to a piece of hardware and don't want to see evidence that it is faulty. It sounds silly, but it happens all the time. Google for people who create pages of forum posts on how they have tried dozens of "fixes" and spent hours on end trying to get a hard disk (or other hardware) to work. Eventually the weaseling comes out in words like "driver", "firmware", "controller" or "software problem".
It is impossible for a home user to determine which subsystem of a hard disk is at fault without hardware debuggers, SATA compliance-certification hardware, and intricate technical knowledge of the disk. For instance, a firmware update could just be covering up a hardware fault, so even a successful firmware update doesn't necessarily mean the problem was caused by the firmware.
Take that rubbish back to the shop and demand a refund.
I was going to let this thread sink, but since you've brought it back up...
The retailer has just given me a refund for both drives (at the price paid too, which is ~£30 more than they're selling for now!), so I'll be replacing them with a single ~256GB drive. Given that they were going to send them to OCZ to repair I assume OCZ has declared the fault unfixable. So, to anyone in the future reading this: If you have problems similar to mine (SSDs in RAID, drives disappearing from arrays, etc.) I recommend you send them back to the retailer/manufacturer, it's probably not a problem you can fix.
Reading over this, a couple of comments:
1) On chkdsk: both defrag (which you should NEVER run on an SSD) and chkdsk's surface scan are aimed at HDDs, which use magnetic recording media. Chkdsk tries to find defective sectors, mark them bad, and relocate the sector/cluster. SSDs use blocks instead, and I believe the controller has its own internal algorithm for finding bad cells.
Bottom line: I would not run chkdsk.
2) On RAID0: RAID0 only improves sequential performance, the least important parameter. For an OS + program drive, it's the small-file 4K random reads/writes that matter. Three major disadvantages of RAID0:
A) As you found out, a single drive failure kills data retrieval from both drives and requires reloading everything.
B) Currently TRIM is NOT passed to member drives of a RAID array. While SSDs do have a garbage-collection algorithm, it works a He** of a lot better when TRIM is passed to the drive.
C) If RAID0 is required, make dam* sure you buy the most reliable drives, which adds to cost.
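Point A is inherent to how striping works. Here's a minimal toy sketch in Python (my own illustration, not how any real RAID controller is implemented) of why losing one member of a RAID0 pair leaves only useless fragments:

```python
# Toy model of RAID0 striping: data is chopped into fixed-size stripes
# written alternately to two member drives. (Illustrative only; real
# controllers stripe at the block level, typically 64-128 KB.)

STRIPE_SIZE = 4  # bytes per stripe in this toy example

def raid0_write(data: bytes):
    """Split data into stripes, alternating between two member drives."""
    drives = [bytearray(), bytearray()]
    for i in range(0, len(data), STRIPE_SIZE):
        drives[(i // STRIPE_SIZE) % 2] += data[i:i + STRIPE_SIZE]
    return drives

def raid0_read(drives):
    """Reassemble the data by interleaving stripes from both drives."""
    stripes = [[d[i:i + STRIPE_SIZE] for i in range(0, len(d), STRIPE_SIZE)]
               for d in drives]
    out = bytearray()
    for pair in zip(*stripes):  # simplistic: assumes equal stripe counts
        for s in pair:
            out += s
    return bytes(out)

data = b"ABCDEFGHIJKLMNOP"
d0, d1 = raid0_write(data)
assert raid0_read([d0, d1]) == data  # both drives present: data intact

# One drive "disappears" (like the non-member disk above): the array is
# missing every other stripe, so nothing is recoverable.
print(bytes(d0))  # b'ABCDIJKL'
```

Every stripe boundary cuts straight through every file larger than the stripe size, which is why a failed member drive doesn't just lose half your files; it loses half of each file.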