Solved

Why 50K IO/s during Random processes?

March 25, 2011 7:29:23 PM

There seems to be quite a bit of emphasis on IO/s during random read and write processes. I don't know why.

I have looked at quite a few benchmarks over the past couple of months. The ones that are supposed to reflect real-world applications all indicate throughput peaks at about 4,000 IO/s during certain specific processes, and less for others. I have yet to see any real-world application benchmark go over 5,000 IO/s.

Anand over at AnandTech suggested 20K IO/s may be more than necessary based on current usage models.

My search led me over to the mainstream/professional side and I found similar information. It has been suggested that SSD development has reached the point where the human eye and brain can no longer differentiate performance between some of the newer SSDs.

Intel and Crucial must know something too. It is reflected in their newest SSDs.


March 25, 2011 7:38:55 PM

I'm not having any problems while limited to 120 IO/s (depending on block size) from a mechanical HDD. I also don't know why 50K IO/s would be needed for a consumer gaming machine; in a server environment I can see why, but not for consumers.

Best solution

March 25, 2011 9:12:11 PM

On a desktop system there aren't that many workloads that burden a hard drive with concurrent I/Os. Most desktop applications basically feed I/Os to the storage subsystem one after another. That limits the maximum rate at which the SSD will have to respond to requests, and it means the ability to do extremely high I/O rates is often overkill. But during system startup when a lot of different processes are initializing themselves, the queue length of outstanding I/O requests can get pretty significant - and it's at times like that where SSD performance becomes more of a differentiator.
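To put a rough number on that: if requests are issued strictly one after another (queue depth 1), the achievable rate is bounded by per-request latency, since IO/s ≈ queue depth ÷ average latency. The latency figures below are purely illustrative assumptions, not measurements from any particular drive.

```python
# Rough illustration: with serial (queue depth 1) I/O, the IO/s rate is capped
# by per-request latency. The latency value is an assumed figure for illustration.
def max_iops(queue_depth: int, avg_latency_s: float) -> float:
    """Upper bound on I/O operations per second for a given queue depth."""
    return queue_depth / avg_latency_s

# Assumed ~0.1 ms per 4K random read, one request outstanding at a time:
print(max_iops(1, 0.0001))   # ~10,000 IO/s ceiling at queue depth 1
# The same assumed latency with 32 outstanding requests (ignoring controller limits):
print(max_iops(32, 0.0001))  # ~320,000 IO/s in principle
```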

I suspect this will change over time as application designers start to get smarter (or use smarter tools) to extract more parallelism out of modern multi-core systems. I've written some homebrew utilities which process multiple files using an approach where I just spawn off threads to open and process as many files in parallel as possible - they just hum when run against an SSD. But they tend to perform worse on hard drives where the thrashing of the heads costs more than the ability to process the data in parallel.
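A minimal sketch of that kind of multi-threaded file processing is below. It is my own illustration rather than the poster's actual utilities; the directory path and the per-file work (hashing) are placeholder assumptions.

```python
# Sketch: process many files in parallel so the drive sees several outstanding reads.
# The directory and the per-file "work" (SHA-256 hashing) are placeholders.
import hashlib
import pathlib
from concurrent.futures import ThreadPoolExecutor

def process_file(path: pathlib.Path) -> str:
    """Read one file and return its SHA-256 digest (stand-in for real work)."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def process_all(directory: str, workers: int = 32) -> dict:
    files = [p for p in pathlib.Path(directory).iterdir() if p.is_file()]
    # Many threads blocked in reads keep the I/O queue deep, which an SSD
    # services well but which makes a mechanical drive's heads thrash.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(files, pool.map(process_file, files)))

if __name__ == "__main__":
    results = process_all(".")  # placeholder directory
    print(f"processed {len(results)} files")
```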

Server environments, of course, are another matter altogether.
April 3, 2011 3:04:38 PM

Best answer selected by JohnnyLucky.