stripe size for SATA RAID 0 video editing system

i've got a mid-level grasp of this whole raid issue after reading so much online, but does anybody have some input on stripe size for a video editing system? I will be using the drives for other media as well, but i'd like to have it set up best for editing.

i've read an article saying a larger stripe size is more appropriate for this. is 32kb too small?

i've got two 320gb Barracuda 7200 16mb cache drives.
raid 0 on an ASUS K8N-e Deluxe Mobo.
using Sil Image 3114 SATARaid controller.

any input is appreciated.
  1. what size are the files that you will be editing?
  2. Small stripe sizes conserve drive space and perform best when working with smaller files; larger stripe sizes consume more space but are preferable when transferring larger files. 64K would be a decent size to start at; 128K will serve well too.
  3. I would recommend either 64K or 128K as well. These are ideal sizes for systems dealing with larger files, like video files.

    By the way, there is no concern over conservation of drive space here. Stripe size is not the same thing as cluster size (also called allocation size). Larger stripe sizes do not waste any drive space. Larger stripe sizes are more efficient when dealing with large files, reducing the RAID system overhead, and allowing the attached physical drives to make better use of their on-board caches. The disadvantage is that files that are smaller than the stripe size will be written to one drive only, and will not get a transfer rate benefit.
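To illustrate the point above, here is a rough sketch (purely illustrative; not the SiI 3114's actual firmware logic) of how RAID 0 maps a logical offset to a member drive, and why a file smaller than the stripe size only ever touches one drive:

```python
# Rough sketch of RAID 0 address mapping (illustrative assumptions:
# 128K stripe, two drives, 4KB I/O granularity).
STRIPE_SIZE = 128 * 1024   # 128K stripe size
NUM_DRIVES = 2             # two-drive RAID 0

def drive_for_offset(offset, stripe_size=STRIPE_SIZE, num_drives=NUM_DRIVES):
    """Return the index of the drive holding the byte at this logical offset."""
    stripe_index = offset // stripe_size
    return stripe_index % num_drives

# A 64KB file starting at offset 0 fits inside a single 128K stripe,
# so every 4KB chunk of it lands on the same drive -- no striping benefit.
small_file = {drive_for_offset(o) for o in range(0, 64 * 1024, 4096)}

# A 1MB file spans eight 128K stripes that alternate between the drives,
# so large sequential reads stream from both spindles at once.
large_file = {drive_for_offset(o) for o in range(0, 1024 * 1024, 4096)}
```

This is why the large-file workloads discussed in this thread favor the bigger stripe: small files read at single-drive speed either way.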
  4. 128k is best for video editing.
  5. the file sizes will vary widely but could commonly range from 200mb-1gb. the programs will be continuously accessing them as playback/editing goes on.

    i've got it set at 64k right now and am hesitant to put anything on the drives until i figure this out. i'm debating putting it to 128k.

    i posted another issue on drive speeds b/c i think mine are running slowly and i'm not experienced enough with RAID/SATA to figure out the controller issue. sorry for repasting, but it applies.

    my HD Tach speeds are as such:
    Long (32MB zones):
    Random Access: 12.9ms
    CPU Utilization: 10% +/- 2%
    Average Read: 101.3 MB/s

    Short (8MB zones):
    Random Access: 12.9ms
    CPU Utilization: 0% +/- 2%
    Average Read: 102.3 MB/s

    128k short (8mb) segments:
    128k long (32mb) segments:

    under SCSI/RAID Controllers:
    Nvidia nforce ATA Raid Class controller
    SCSI/Raid Host Controller
    Silicon Image SiI 3114 SATARaid Controller

    2 320gb Barracuda 7200rpm 16mb SATA-II (operating at sata-I)
    asus k8n-e deluxe (with raid SATA-I support onboard)
    1.5gb ram
    amd athlon 64 3000+

    bios settings:
    Silicon Image Mode: Raid (vs SATA/Disabled)
    Internal SATA IDE Interface: Enabled
    Raid Option Rom: Enabled
    Primary Master as Raid: Enabled
    Secondary Master as Raid: Enabled

    could i have possibly set up/installed the controller wrong to make the drives operate less efficiently? or is this not possible?
  6. try 128k and run the long test again, but keep in mind that not all hardware performs the same.
  7. i've rerun the test at 128k with little to no difference.
    posted hdtach speeds are in the first message above.

    am i wasting my time here? can i get this any quicker?
    thanks for all the help, everyone (despite the repetitive postings)
  8. with 200MB-1GB files go with 128k.
  9. thanks...still no luck on speeds :(
  10. You're not going to get much higher. Basically, you're taking the average read of your Seagates (which is around 47MB/s) and multiplying by the number of drives, so 100MB/s is about right for that setup.

    I have 3 15K U320s and my read is in the 240MB/s range (because the drives I use are rated around 90MB/s), but my burst is about 480MB/s. There's not much wiggle room with what you have; you've maximized performance.
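The arithmetic above can be sketched quickly (the per-drive rates below are illustrative round numbers taken from the thread, not measured specs):

```python
# Back-of-envelope: RAID 0 sequential read scales roughly with the
# per-drive sustained rate times the number of drives.
def raid0_expected_read(per_drive_mb_s, num_drives):
    """Ideal aggregate sequential read for a RAID 0 array, in MB/s."""
    return per_drive_mb_s * num_drives

# Two ~50MB/s SATA Barracudas -> ~100MB/s, matching the HD Tach result.
sata_setup = raid0_expected_read(50, 2)

# Three ~90MB/s 15K U320 SCSI drives -> ~270MB/s, in the same ballpark
# as the 240MB/s figure quoted above (real arrays lose some overhead).
scsi_setup = raid0_expected_read(90, 3)
```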
  11. well, even though that's not the best news, it makes sense.
    you get what you pay for.
    appreciate the input.
  12. If the Silicon Image RAID controller is implemented via the PCI bus on the motherboard (which it probably is), your transfer rates will be capped to the maximum that the PCI implementation on that motherboard can handle.

    PCI can do a maximum of 132MB/sec theoretically, but with overhead and implementation differences, top speeds of 90MB - 110MB/sec are common.

    I would wager no matter what you do, you're hitting the PCI transfer limit, and 100MB/sec is all you're going to get.
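The PCI ceiling quoted above is just bus width times clock rate; a quick sketch of the arithmetic:

```python
# Theoretical peak of conventional 32-bit/33MHz PCI: bus width in bytes
# times clock rate (nominal 33MHz used here; the spec clock is 33.33MHz).
pci_width_bytes = 32 // 8         # 32-bit bus = 4 bytes per transfer
pci_clock_hz = 33_000_000         # 33 MHz
pci_peak_mb_s = pci_width_bytes * pci_clock_hz / 1_000_000

# Protocol overhead and chipset implementation typically cut the
# real-world ceiling to roughly 90-110 MB/s, which is why the array
# plateaus near 100 MB/s regardless of stripe size.
```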
  13. Thanks Joe... I was getting ready to explain controller architectures. Also I'm not sure if your drives are rated for NCQ.