When one drive has a 3.0 ms seek time and another has a 0.1 ms seek time, that's a 30x difference, but both are tiny fractions of a second. To me that seems like basically the same thing. How many times does the hard drive actually use that seek time advantage, on average?
It's just such a small window of time... Does seek time only count when the hard drive has to access a new file? Or does it come into play every time the read head has to move?
On an FFXIV benchmark, my VelociRaptor had a 10200 load time. I was looking at other results and didn't find anyone with as low a load time... most people were between 12000 and 30000. Is that because I keep the drive lean and healthy with maintenance programs?
I'm just trying to understand why seek time is so important. When a program has to load 100 files, the difference between 3.0 and 0.1 still only adds up to about a second, right?
Seek time is the time required to find a file. When you're reading or writing just one file, or when you're reading or writing very large files, seek time is only a small proportion of the total time and isn't that significant.
But if you have to read a lot of small files, then seek time becomes very important. That's the sort of thing that's going on when you boot your system or start programs: we're talking about hundreds to thousands of files.
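To see why the file count matters more than the total amount of data, here's a rough sketch. The figures are assumptions for illustration (10 ms per seek, 100 MB/s sustained transfer on a spinning drive), not measurements from any specific drive:

```python
# Seek time as a share of total read time: one big file vs. many small files.
# Assumed figures: 10 ms per seek, 100 MB/s sustained transfer (illustrative only).
SEEK_S = 0.010            # seconds per seek
TRANSFER_MB_PER_S = 100.0  # sequential read speed

def read_time_s(num_files, total_mb):
    """Seconds to read total_mb spread across num_files files:
    one seek per file, plus the sequential transfer time."""
    return num_files * SEEK_S + total_mb / TRANSFER_MB_PER_S

one_big_file = read_time_s(1, 1000)        # one 1000 MB file
many_small_files = read_time_s(1000, 1000)  # a thousand 1 MB files
print(one_big_file, many_small_files)
```

With the same 1000 MB of data, the single big file spends about 0.1% of its time seeking, while the thousand small files spend about half their time seeking. That's why boot and program startup, which touch lots of small files, are so sensitive to seek time.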
A typical hard drive has a seek time of around 10 ms, which means it can find 100 files per second, or 500 files in 5 seconds. An SSD's seek time is roughly 100x faster: it finds those same 500 files in about 0.05 seconds. That's almost a 5 second difference, which is pretty significant.
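The arithmetic above is just multiplication, and it can be written out directly (the 10 ms HDD figure and the "100X faster" SSD figure are the rough numbers from this post, not specs for any particular drive):

```python
# Back-of-envelope seek arithmetic from the post: total time spent
# just *locating* files, ignoring transfer time entirely.
HDD_SEEK_S = 0.010              # ~10 ms per seek, typical hard drive
SSD_SEEK_S = HDD_SEEK_S / 100   # "100X faster" SSD assumption

def total_seek_time(num_files, seek_s):
    """Seconds spent seeking when opening num_files files."""
    return num_files * seek_s

print(total_seek_time(500, HDD_SEEK_S))  # about 5 seconds on the HDD
print(total_seek_time(500, SSD_SEEK_S))  # about 0.05 seconds on the SSD
```

Scale that to the thousands of files touched during boot or a big program launch and the gap grows from "almost 5 seconds" to tens of seconds.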
There are some things I do in Visual Studio that used to take 10-15 seconds that are now pretty much instant with my SSD.