Lately I've been monkeying around with my system, trying out some hard drives, and I've noticed a lot of errors/problems with them. I've had three hard drives go bad in the past few months. Most of them were old drives I had lying around (WD 160-500 GB), so I'm not sure if the drives were simply at the end of their life, or if my motherboard is killing them.
Here's what I'm seeing with HD Tune Pro on one of the drives that's been acting up. These errors/problems just started popping up recently.
I have the P55-UD3R rev 1 board running the F6 BIOS and have tried using a few different SATA cables to rule that out. I'm wondering if there's any way I can test to see if there's a problem with the actual SATA ports on the board. Anyone know of any way I can test those to see if the mobo is the culprit?
Have you confirmed you are using the correct and latest chipset drivers from Gigabyte's website?
Have you checked whether there is a newer BIOS that addresses a hard drive problem, and updated to it if so?
Have you run your tests using ALL of the ports on the board? Time-consuming, but it might confirm that one or more are bad.
Have you checked the Western Digital website to see if maybe your drives have issues which could be fixed by a firmware update? Seagate went through this last year on a series of drives.
You will also want to check your RAM, check all other connections to the motherboard (remove and re-seat everything firmly), and check the power output from your power supply (use the Gigabyte utilities to display the voltage readings and see if you are getting power drop-outs).
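One more thing you can do to separate a dying drive from a bad port/cable: install smartmontools and look at the drive's SMART attributes (`smartctl -A /dev/sdX` on Linux, or similar tools on Windows). Reallocated and pending sector counts grow with genuine media damage, and they follow the drive no matter which port it's plugged into. Here's a rough sketch of the idea in Python; the sample output below is made up for illustration, not from a real drive:

```python
# Sketch: flag worrying SMART attributes from captured `smartctl -A` output.
# The sample text here is invented for illustration only.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   140    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0012   200   200   000    Old_age   Always       -       8
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       0
"""

# Nonzero raw values on any of these usually mean real media trouble,
# regardless of cable or SATA port.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def worrying_attributes(smartctl_text):
    """Return {attribute_name: raw_value} for watched attributes with raw > 0."""
    bad = {}
    for line in smartctl_text.splitlines():
        parts = line.split()
        # Attribute rows have at least 10 columns; column 10 is the raw value.
        if len(parts) >= 10 and parts[1] in WATCH:
            raw = int(parts[9])
            if raw > 0:
                bad[parts[1]] = raw
    return bad

print(worrying_attributes(SAMPLE))
# → {'Reallocated_Sector_Ct': 12, 'Current_Pending_Sector': 8}
```

If those counters climb on a drive no matter which port or cable you use, the drive itself is the problem; if they stay at zero while HD Tune still shows errors, I'd start suspecting the board or cables.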
I had a motherboard a long time ago that corrupted my drives when I installed Windows (actually right after it was installed and I logged in for the first time - BAM!) and did indeed have to get rid of it.
Frankly, I've failed to see any difference, but I've only tested superficially, and I've been told my file mix is incorrect... (The comical thing is that I've been told my file sizes are both too big and too small, by different people!) There is a fairly comprehensive piece here:
The main benefit I see is that I get 'hot-plug' capability on my AHCI eSATA port, but, as little as I actually use it, it wouldn't be a big deal to reboot when connecting something... Some people report odd-ball problems using AHCI on various chipsets, but I've never tried a comprehensive search to see whether there are any commonalities - too lazy, and 'hands-full-enough', I guess.
I've been looking into this, because I'm experiencing the same problem.
We (our company) have 25 Maxdata Favorit 300 XS Core 2 Duo systems.
They've been running for 4 years and never had any problems.
Here are the events in chronological order:
* In one week, 10 of them started to break down (physical hard drive problems, bad sectors, disk missing, etc.).
All 10 systems broke down within a period of 3 weeks. It's strange that so many failed at the same time, but acceptable; they run all day, and total power-on time is around 10,000 hours.
* I started replacing hard drives (we had 3 second-hand drives lying around, and one new one), cloned the operating system from a working machine, and put them back in service.
* 2 of them broke down again within a day: bad sectors or an unreadable disk.
* An error check of the new drive turned up a lot of bad sectors. Strange for a new disk, but no worries, it can happen.
* I ordered 8 new hard drives, started cloning again, and put them back.
In the meantime I also returned the drive I mentioned and got a replacement for it.
* One day later, 5 came back with the same problems: all brand-new disks, broken.
I'm still waiting to see if a few more come back... but I do know one thing:
I can't explain why they are failing.
So far I've tried 4 different brands, a total of around 15 replacements (yes, I know, the numbers above don't add up).
I've been considering virus-like activity, mistreatment by the users (the machines are in a public library, where the public can use them),
faulty power adapters, dirty electricity...
I really don't know.
For now, I understand that I should not attempt to rescue those systems, because the only result will be more shredded hard disks.
What I do want to know is: why does this happen?
PS: We have those systems at 3 locations in the building. The failing systems are all from the same location; however, that location also still has 5 machines that work perfectly.