Short stroking: what to put first, the working dir or the OS?

Hi everybody,

In another thread I read about short stroking. I looked around a bit on the web and saw some diagrams of speed vs. location on the hard drive, and in many cases it can make a 30-40% difference.

Maybe it's a stupid question, but I want to ask it anyway.

What is better to put on the fast partition of the drive: the operating system, or the programs and working directories? (For a Linux installation the swap partition comes first, which makes sense, but what about the rest?)

For me it's not a problem to wait a few seconds longer for booting if I can speed up my simulations. So is it better to put the working directory of the simulations before the operating system?

Is the operating system reading and writing a lot after it has been loaded, or do the programs do that on their own?

And I would like to add a second question: is the gain in speed only there when the volume itself is split, or also with normal partitions within a volume?
In my case I made a RAID 0 of two hard drives, and I can create at most two volumes in the RAID config. I can then split those into partitions again in the installers.

I was just wondering about the best order.

Any suggestions?

Thanks in advance!
  1. Really it's more a question of: do you want to access your files faster, or do you want your boot-up to be faster and Windows in general to do everything faster?
  2. In a true short stroke there is only one volume. You use only 30% of the available disk space; the remaining 70% is unused. If you create directories in this space, the order in which the directories come would probably make only a slight (probably not noticeable) performance difference.
  3. In a true short stroke, only 30% of the drive space is used; the remaining 70% is unused. It is doubtful that you would notice any difference in real-life performance based on which directory within that 30% is placed first.
  4. OK, so then I'm speaking about a modified short stroke.
    I think the name is not really that important.

    Also, the 30% is a bit arbitrary; you can take any percentage you want. There is no discrete performance change, it's gradual.

    At least that is how I understood it from other things I read about this.

    The first diagram makes the point best (it also happens to be the hard drive I'm using).

    Since the curve shows differences of about 50% between the beginning and the end of the drive, I think the performance difference is noticeable, especially for long simulations on many cores that last weeks; any percentage is a win.

    The main idea of the question is what to put first: the OS on the faster part, or on the slower part? Even if I only use the first 30%, which I can repartition again, the question remains.

    If there are people who have really tried this or have experience with it, testing both on the same system, or who have enough theoretical background, I would really appreciate a discussion.

    Thanks in advance
  5. PS: I've tried the "modified" short stroke with different-sized stripes and different cluster sizes: 2 x 640 GB WD Blacks, OS and programs on the first volume and my files on the second volume. Windows 7 will treat this as two separate HDDs (HDD0 and HDD1), even though it is a single array. Typical access time for the WD 640 Blacks is around 12.5 ms; tested on HDD0, short stroking cut access time to about 9.5 ms.

    RAID 0 by itself does not improve small-file (random 4K) performance, nor does it decrease access time, unless short stroking is used. What RAID 0 does is improve large-file performance with data that is placed sequentially on the drive.

    Just as with an SSD, the gain is only in read/write performance and does nothing to improve compute-bound program performance. So in your case it would probably be better to place the files you most often access on the outer edge of the platters (HDD0) and the OS on the second volume (HDD1).

    I used to be a big user of RAID 0; I stopped when SSDs came out.
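As a rough sanity check on the numbers in the reply above, a simple model treats the random-I/O rate as the inverse of the average access time; this is why shaving milliseconds off seeks matters while striping alone does not. A minimal sketch, using the 12.5 ms and 9.5 ms figures quoted above (the model itself is an assumption that ignores queuing and caching):

```python
# Rough model: random-I/O operations per second ~= 1000 / access time in ms.
# Access times are the ones quoted above (WD 640 Black, full drive vs.
# short-stroked first volume). Simplified: ignores queuing and caching.

def random_iops(access_ms: float) -> float:
    """Estimate random 4K operations per second from average access time."""
    return 1000.0 / access_ms

full_stroke = random_iops(12.5)    # whole drive:  ~80 IOPS
short_stroke = random_iops(9.5)    # short-stroked: ~105 IOPS

gain = (short_stroke / full_stroke - 1) * 100
print(f"full stroke:  {full_stroke:.0f} IOPS")
print(f"short stroke: {short_stroke:.0f} IOPS")
print(f"gain: ~{gain:.0f}%")  # ~32% more random ops just from shorter seeks
```

A second RAID 0 drive doubles sequential throughput in this model but leaves the access time, and hence the random-I/O rate, unchanged.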
  6. Thanks,
    those kinds of numbers are the ones I am interested in.
    The second part of the question, how much program performance has to do with read and write times, probably depends too much on the application.

    I'm wondering, and will try to find out, why some programs use their own kind of temp/scratch file when the RAM is full, while others get loaded into the swap part of virtual memory. From one question comes the next.

    Also, I was stupid yesterday; I thought it all worked. I split up my RAID 50/50, so in the partition manager I saw two almost equal drives.

    But in the end those were just the two separate hard drives, not the logical volumes. I'm having some problems seeing the volumes I made in the Intel embedded RAID config; in the partition manager I just see the two separate drives.

    Is that normal? Well, I will start another question.

  7. When I set up my RAID 0 (2 x 640 GB WD Blacks) using the RAID BIOS, I created the first volume using about 30% of the space and the default stripe size of 64 KB; the remainder of the drive was set up as a second volume using a 128 KB stripe.

    I installed Windows 7 on the first volume (Windows saw this as Drive 0) using the default cluster size (4 KB). After the Windows installation was completed, I went into the Windows Disk Manager, which saw two HDDs (HDD0 and HDD1). I partitioned HDD1 into two partitions. The first was a small partition to hold most of my small files (Word docs, Excel spreadsheets, etc.), so I used the default cluster size of 4 KB. The second partition I formatted with a 32 KB cluster size, because it would hold large file structures such as DVD movie files (.VOBs are normally around 1 GB) and Blu-ray files, which run from 10 GB to 35 GB for a single file. I also put my photos there, as JPEGs can be 2-5 GB (the equivalent bitmaps are larger).

    On cluster size and large files: take a typical 1 GB DVD file. With 4 KB clusters, 1 GB comes to 250,000 clusters, whereas with a 32 KB cluster size the same 1 GB file is only 31,250 clusters. That reduces the size of the file allocation table and also the chance of fragmentation (HDD only; fragmentation is not a problem with SSDs).

    If you are truly looking for performance, I would suggest looking into SSDs for the OS and for directories that would normally be <256 GB. Use HDDs only for larger directories and/or to hold large file structures (RAID 0 or just a normal setup).

    SSDs are typically 40-70 times faster than an HDD. They are also less prone to failure than a RAID 0 HDD setup. I've seen reviews where a single SSD beat a RAID 0 setup using 10 HDDs. For random 4K, there is no way a drive with a 12 ms access time can even come close to one with a 0.1 ms access time.

    My systems now use a 128 GB SSD for the OS, a 250 GB SSD for my small file structures, and a single 1 TB HDD for backup and my large data files.
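The cluster arithmetic in the reply above can be checked quickly. A minimal sketch, assuming decimal units (1 GB = 1,000,000 KB) to match the quoted figures:

```python
# Cluster-count arithmetic from the reply above: a 1 GB file at two
# different cluster sizes. Decimal units (1 GB = 1,000,000 KB) are
# assumed, matching the quoted figures of 250,000 and 31,250 clusters.

def cluster_count(file_kb: int, cluster_kb: int) -> float:
    """Number of clusters a file of file_kb kilobytes occupies."""
    return file_kb / cluster_kb

ONE_GB_KB = 1_000_000  # 1 GB in KB (decimal)

print(cluster_count(ONE_GB_KB, 4))   # 250000.0 clusters at 4 KB
print(cluster_count(ONE_GB_KB, 32))  # 31250.0 clusters at 32 KB
```

Fewer clusters per file means fewer allocation-table entries to track and fewer places for a large file to fragment, at the cost of more wasted slack space for small files.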
  8. :-)

    Yes, performance costs. That's a bit of a problem here: our budget is not big, but we want computation power that is as efficient as possible at low cost.
    We have 8 CPUs at 3 GHz, 16 GB RAM, an 800 MHz FSB, and 2 x 500 GB HDDs, and we are still under 400 euros. We also got a little Samsung 830 64 GB SSD, but we can't always use it for the server. Still, I want to test it.

    So then I was wondering what the best option is:
    putting the OS on the SSD, or the swap partition (or both), or the working directory of the programs (NASTRAN and LS-DYNA write a lot to disk during our simulations, and I don't know what the bottleneck is).

    So besides the order on the hard disk, I am also curious where to put what in the case of the additional little SSD.