I was an intern at Maxtor about a year ago, and I was going to work for them until they announced being bought by Seagate and my job there dried up (right when I was graduating with my BSEE).
I can tell you there is a reason for the change in performance since the old hard drives. It's all due to density. You'd think that the more bits per second you pass under the head, the faster you could read, right? So higher density would give you better performance... but here's the issue: they don't write plain 1's and 0's to hard drives anymore. They USED to write 1's and 0's, using an up- or down-oriented magnetic field and peak detection (the voltage induced in the head is the negative derivative of the flux, so as a bit passes by the head you get a spike in voltage; the field on the disk is DC, and the only AC part of the effect comes from the disk moving past the head). Now they use what is called PRML, and they're switching to another form of a similar thing. It's rather complicated, so let me start explaining...
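To see how simple the old scheme was, here's a toy Python sketch of peak detection (entirely my own illustration; the samples-per-bit, pulse shape, and threshold are made-up numbers). A 1 is written as a mid-cell flip of the magnetization, so it reads back as a spike; a 0 reads back as basically nothing, and the detector is just a threshold compare:

import numpy as np

rng = np.random.default_rng(0)
SPB = 16  # analog samples per bit cell (made-up oversampling factor)

def write_waveform(bits, noise=0.05):
    # NRZI-style toy: a data 1 flips the magnetization mid-cell, a 0 leaves it.
    # The head voltage is ~ the derivative of the flux, so each flip reads
    # back as an isolated spike and 0's read back as (almost) nothing.
    level, mag = 1.0, []
    for b in bits:
        mag += [level] * (SPB // 2)
        if b:
            level = -level                         # magnetic transition = data 1
        mag += [level] * (SPB // 2)
    spikes = np.diff(mag, prepend=mag[0])
    pulse = np.exp(-np.linspace(-2, 2, SPB) ** 2)  # isolated-pulse shape
    return np.convolve(spikes, pulse, "same") + noise * rng.standard_normal(len(mag))

def peak_detect(signal):
    # Classic detector: any bit cell whose peak clears half the nominal
    # spike height is a 1; no math beyond a threshold compare.
    thresh = 0.5 * np.max(np.abs(signal))
    return [int(np.max(np.abs(cell)) > thresh) for cell in signal.reshape(-1, SPB)]

data = [1, 0, 1, 1, 0, 0, 1, 0]
print(peak_detect(write_waveform(data)) == data)  # True at low densities

At low density that threshold test is all the electronics had to do, which is why the old drives could decode with almost no logic.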
PRML: This is an ingenious idea by the HD guys for reading data off a disk where it's smashed too close together, but it causes lots of performance hits. When density started increasing on hard drives, they had issues with the fields of adjacent bits interfering with each other. If you wrote 111 you'd be fine, but if you wrote 101 you wouldn't get a nicely separated +1 spike, -1 spike, +1 spike; you might get a +0.5 spike, -0.3 spike, +0.5 spike (the spikes are no longer large enough to register under peak detection). So they said... hey, we have faster and faster channel chips (the main big chip on the hard drive that controls the reading/writing, for the most part). So... why not oversample the signal? Like refreshing your computer screen, except that because of the mathematical principle called the Nyquist rate, you have to sample at AT LEAST 2x the frequency at which the bits are passing under the head. So you oversample the signal, and to lock the sampling in you use a phase-locked loop at the start of each sector... then you sample the data, and it looks like the 1's and 0's are interfering with each other.

Now here's the part that takes A LOT of processing power: you run the samples through what is called a Viterbi tree. It's a logic tree that chooses which branch to take based on the MSE (mean squared error, a probability-based number that tells you the error of the read value). The tree models the interference between the 1's and 0's (which were written to the disk "messed up," because at that density that's the only way you can do it), compares the error of what was actually read against each candidate bit pattern, and works through the tree until the MSE is as small as possible, then hands over that value as the "data." In other words, it GUESSED at your data using a fairly mathematically sound method... works most of the time, 'cuz your hard drive works, right? Yep... but that takes a lot of processing power and time... and lots of floating-point or Q-format fixed-point calculations...
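If you want the flavor of it, here's a toy Python sketch of the Viterbi idea (entirely my own illustration, not anything from a real channel chip, which does this in dedicated silicon at GHz rates; the partial-response target and noise level are just example numbers). The "channel" smears each bit into its two neighbors in a known way, and the detector keeps, for every possible recent-bit history, the path with the smallest accumulated squared error:

import numpy as np
from itertools import product

TARGET = np.array([1.0, 0.0, -1.0])  # a classic PR4 partial-response target

def channel(bits, noise=0.1, rng=np.random.default_rng(1)):
    # ISI channel: each sample is a known mix of the current bit and the two
    # before it (convolution with TARGET), the way tightly packed bits blur.
    nrz = np.concatenate(([1.0, 1.0], np.where(np.array(bits) > 0, 1.0, -1.0)))
    clean = np.convolve(nrz, TARGET)[2 : 2 + len(bits)]  # skip the known preamble
    return clean + noise * rng.standard_normal(len(bits))

def viterbi(samples):
    # For every possible (bit[k-2], bit[k-1]) state, keep only the bit history
    # with the smallest accumulated squared error ("MSE") so far.
    states = list(product([-1.0, 1.0], repeat=2))
    cost = {s: (0.0 if s == (1.0, 1.0) else float("inf")) for s in states}
    path = {s: [] for s in states}
    for y in samples:
        new_cost, new_path = {}, {}
        for (b2, b1), b0 in product(states, [-1.0, 1.0]):
            expected = TARGET[0] * b0 + TARGET[1] * b1 + TARGET[2] * b2
            c = cost[(b2, b1)] + (y - expected) ** 2  # branch metric = squared error
            if c < new_cost.get((b1, b0), float("inf")):
                new_cost[(b1, b0)] = c
                new_path[(b1, b0)] = path[(b2, b1)] + [b0]
        cost, path = new_cost, new_path
    return [int(b > 0) for b in path[min(cost, key=cost.get)]]

data = list(np.random.default_rng(2).integers(0, 2, 32))
print(viterbi(channel(data)) == data)  # True: it "guessed" every bit right

Every sample forces a squared-error evaluation on every branch of every state, which is exactly where all that processing power goes.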
The 620 GB drive Maxtor should have out by now had a data frequency of 1 gigabit per second at the outside of the disk, and the channel ran at 2 gigahertz to meet the Nyquist rate (don't worry, sampling right at the zero points is prevented by the PLL). The channel almost had more transistors in it than the processors in computers of that time (when I was working on it).
The next problem is this: when your data is packed so close together and each bit's magnetic field is so small, it's easy to disturb it accidentally, and the higher the density, the worse this problem gets. When you read an adjacent track on the hard drive, and especially when you write, you can damage the tracks next to it. So before a write to a track you have to read the adjacent tracks, then do your write, then re-write the adjacent tracks as well, and the higher your density the more you have to do this... causing more performance issues. (Usually the read degradation is small and they just ignore it.) There is also ECC (error-correcting code) in each sector that can correct a large data fault (I can't really tell you this number as it's a trade secret for each company, but I can tell you that it's upwards of half the bits in the sector).
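Roughly, in code, the write amplification looks like this (a hypothetical Python sketch, nothing like real firmware; the function and track layout are invented just to show where the extra I/O comes from):

def write_track(disk, n, data):
    # Writing track n can weaken its neighbors, so buffer them first and
    # restore them afterwards (hypothetical firmware-style sketch).
    neighbors = [t for t in (n - 1, n + 1) if 0 <= t < len(disk)]
    saved = {t: disk[t] for t in neighbors}  # extra reads before the write
    disk[n] = data                           # the one write the host asked for
    for t in neighbors:
        disk[t] = saved[t]                   # extra writes to heal the damage
    # Net cost: 1 host write turned into ~2 reads + ~3 writes.

disk = ["t0-data", "t1-data", "t2-data", "t3-data"]
write_track(disk, 2, "new-t2-data")
print(disk)  # ['t0-data', 't1-data', 'new-t2-data', 't3-data']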
Some of the hard drives have changed from PRML to a new "modified" version of PRML. It's different in that instead of using a Viterbi tree it uses a node-based forward- and back-propagation method, with propagation between nodes determined by a modified MSE. I can't give out too much info on it... but it's supposed to be more accurate, and at times can take less time to calculate, though it can also take longer, depending on how much effort you want to spend getting your "scrambled" data out. Not to mention that the data literally is scrambled when it's written to the disk: it's multiplied by a pseudo-random value (mathematically chosen to prevent long runs of adjacent 1's or 0's, like 111111111, etc.) and then unscrambled when read.
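The scrambling part is actually easy to show. One common way to do it is to XOR the data against a repeatable pseudo-random bit stream from an LFSR (linear-feedback shift register); here's a tiny Python sketch (my own toy example; the register length, taps, and seed are made up, since the real polynomials are part of each vendor's secret sauce):

def lfsr_stream(seed=0b1010011, taps=(7, 1), nbits=7):
    # Tiny linear-feedback shift register: a repeatable pseudo-random bit
    # stream (toy 7-bit register with a maximal-length tap choice; real
    # channels use longer ones).
    state = seed
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        yield state & 1
        state = ((state >> 1) | (fb << (nbits - 1))) & ((1 << nbits) - 1)

def scramble(bits, seed=0b1010011):
    # XOR with the LFSR stream. XOR-ing twice undoes itself, so the same
    # function scrambles on write and descrambles on read.
    return [b ^ p for b, p in zip(bits, lfsr_stream(seed))]

data = [1] * 16                   # worst case: a long run of identical bits
on_disk = scramble(data)          # what actually hits the platter
print(on_disk)                    # pseudo-random looking, run broken up
print(scramble(on_disk) == data)  # True: the reader gets the original back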
The good news: new recording methods can improve performance and bit density (HD size) while reducing processing overhead... welcome to perpendicular recording! This technology has been worked on for years, and Seagate finally released it (notice, AFTER they merged with Maxtor)... hehehe... I'm not biased, I swear... Anyway: normally data is written in a horizontal fashion on the disk; HOWEVER, this new method allows the data to be written in a vertical fashion, so the field is oriented vertically through the disk. The center of the disk is a material that allows magnetic flux through it, so you can still write to both sides of the disk... Because of the way the field flux works out, you no longer have as much bit interference, and you can pack more data together... and get this: when you write a 1 you might write (+1 for, say, field pointing up and -1 for field pointing down) +1+1+1+1+1, and for a 0, -1-1-1-1-1... each time the head passes over that series of field chunks you get a spike, and if you filter all those spikes together you get a nice smooth signal... welcome back to 1's and 0's!! So with perpendicular recording you get increased disk size, a return to writing plain 1's and 0's, and better data integrity, with increased performance due to less overhead (no PRML). YAY!
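Here's a toy Python illustration of that last point, taking the +1+1+1+1+1 description literally (my own sketch; the chunk count, noise level, and filter are invented numbers): repeat each bit across several field chunks, add noise, low-pass filter, and the chunks merge into one solid level per bit that a plain threshold can slice:

import numpy as np

CHUNKS = 5   # field chunks written per bit, per the +1+1+1+1+1 description

def read_perpendicular(bits, noise=0.2, rng=np.random.default_rng(3)):
    # Each chunk contributes a small pulse of the bit's polarity; summed
    # and low-pass filtered, five weak pulses become one solid level.
    levels = np.repeat(np.where(np.array(bits) > 0, 1.0, -1.0), CHUNKS)
    spikes = levels + noise * rng.standard_normal(levels.size)
    kernel = np.ones(CHUNKS) / CHUNKS            # simple moving-average filter
    return np.convolve(spikes, kernel, "same")

def slice_bits(signal):
    # Back to plain threshold detection: sample each bit cell's center.
    centers = signal.reshape(-1, CHUNKS)[:, CHUNKS // 2]
    return [int(v > 0) for v in centers]

data = [1, 0, 0, 1, 1, 0, 1, 0]
print(slice_bits(read_perpendicular(data)) == data)  # True: no Viterbi needed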
You guys should have tested one of Seagate's perpendicular recording drives in the lineup... I'd like to see what it will do, and I look forward to what they can do in the future.
Never think that technology has reached its end... humans always somehow come up with a new way of getting what they want done... for whatever reason they want it.