Hello, I just want to know if anyone has tried a RAID 0 + SATA + NCQ setup? (The HDD, the controller, and the driver software all have to support NCQ.)

I'm just wondering: can NCQ and RAID 0 be used at the same time? I'm just a bit worried about the following scenario:

If you have two SATA NCQ HDDs, you can run them in RAID 0 but not with NCQ enabled; in other words, to use NCQ, you would have to give up RAID 0.

Or am I worrying too much? Thanks! :P
  1. Greetings,

    Here's a well-written, empirical analysis
    of RAID in various configurations:

    This article may answer your question directly:
    they have concluded that NCQ will only benefit
    file servers and not single-user workstations.

    We recently assigned C: to a single WD 74GB Raptor
    on an ASUS P4C800-E Deluxe, and never looked back.

    It works marvelously fast with a 2.8GHz P4 512K L2 cache
    and 800MHz FSB (Northwood core).

    We got hit pretty hard by a virus last year
    on an aging Windows 98/SE machine.

    We now depend a LOT on Drive Image 7 to create and
    restore "image" files of our C: partition on the
    new ASUS motherboard with Windows XP/Pro.

    This is the fastest way we know of recovering from
    a destructive virus or worm.

    This software (since acquired by Symantec and renamed
    "Ghost") does not appear to work if C: is on a RAID 0.

    We're planning right now to build an experimental
    machine which will also have a single HD for C:,
    plus a RAID 0 with 2 x SATA HDs @ 40GB each
    (80GB total "striped").

    On this special-purpose RAID 0, we plan to
    store ONLY the Internet Explorer cache, and
    possibly also the Windows swap file.

    Because the IE cache tends to grow as we browse
    the Internet, Drive Image files of C: grow
    larger accordingly.

    By moving the IE cache to a different drive,
    we keep C: quite static.

    Moreover, the Windows swap file is volatile and
    does not need to be saved between shutdown
    and startup. So, it too can be assigned
    to such a RAID 0.

    And, for our database, we will go with a
    single large 300GB PATA Maxtor with 16MB cache
    (which we just bought at Office Depot at 50% discount)
    and possibly add future SATA drives of similar size.

    Another way of ensuring "snappy" program launch
    speeds is to make sure you have extra RAM,
    which reduces the need for swap file I/O
    in the first place.

    I hope this helps.

    Sincerely yours,
    /s/ Paul Andrew Mitchell
    Webmaster, Supreme Law Library
  2. Yes, that has answered my question, thanks.
    So SATA + RAID 0 + NCQ do all work together; it is just a matter of how much performance is gained, depending on how the system is used (as a server or a single-user workstation).
    Even if the gain is only marginal, I would still like to think that SATA + RAID 0 + NCQ would run faster than SATA + RAID 0 alone.
  3. Yes, and that's the main reason why it has already
    been developed and implemented for file servers
    using SCSI subsystems, e.g. Ultra160 and Ultra320 SCSI.

    Head movement can be very time-consuming, particularly
    if it happens a lot. If a controller can keep a
    read/write head positioned over the same or nearby
    tracks, the cumulative amount of time being spent
    moving the head from one track position to another
    can be reduced considerably.

    Several years ago, we prepared a presentation for a
    minicomputer user group, in which we documented
    these important statistics:

    (1) rotational speed and rotational latency
    (the latter averages half the time required for one rotation);

    (2) time to read or write one sector (which is
    derived from sector density and rotational speed);

    (3) access time, or the time required to re-position
    the read/write head in worst case, average, and
    best case situations.
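
    The statistics above reduce to quick arithmetic. Here is a small
    Python sketch (our own illustration, not taken from that
    presentation) computing average rotational latency, i.e. half
    the time of one full rotation:

```python
def avg_rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency: half the time of one full rotation."""
    ms_per_rotation = 60_000.0 / rpm   # 60,000 ms per minute
    return ms_per_rotation / 2

# Compare a 7,200 rpm drive against a 10,000 rpm Raptor:
for rpm in (7200, 10000):
    print(f"{rpm} rpm: {avg_rotational_latency_ms(rpm):.2f} ms average latency")
```

    At 10,000 rpm the average rotational latency drops to 3.00 ms,
    versus roughly 4.17 ms at 7,200 rpm.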

    As CPU speeds have climbed into the stratosphere,
    the microelectronics used in IDE controllers have
    improved apace.

    You can see how Native Command Queuing is ONLY
    going to help if there are a LOT of disk I/O
    requests pending, and those requests are spread
    across disparate cylinders that require a
    lot of head movement to process the entire
    set of such I/O requests.
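
    To make the head-movement argument concrete, here is a toy
    Python sketch (the track numbers and the greedy nearest-track
    scheduler are our own illustration, not actual NCQ firmware)
    comparing total head travel with and without reordering:

```python
def head_travel(start: int, order: list[int]) -> int:
    """Total number of tracks traversed servicing requests in the given order."""
    travel, pos = 0, start
    for track in order:
        travel += abs(track - pos)
        pos = track
    return travel

def reorder_nearest_first(start: int, pending: list[int]) -> list[int]:
    """Greedy nearest-track-first schedule, similar in spirit to what
    a queue-aware controller can do with its pending requests."""
    remaining, pos, order = list(pending), start, []
    while remaining:
        nxt = min(remaining, key=lambda t: abs(t - pos))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

pending = [98, 183, 37, 122, 14, 124, 65, 67]  # hypothetical track requests
print(head_travel(53, pending))                             # arrival (FIFO) order
print(head_travel(53, reorder_nearest_first(53, pending)))  # reordered
```

    With eight requests pending, reordering cuts head travel from
    640 tracks to 236; with only one or two requests pending, as on
    a lightly loaded workstation, there is little to reorder.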

    Now, we can appreciate how server environments would see
    this condition quite often, but workstation settings
    would not see it as frequently.

    What we want to test here is the benefit, if any,
    that may be obtained from assigning the Internet
    Explorer cache to a RAID 0 of some kind, most
    probably using 2 x SATA drives.

    The only way to evaluate RAID is to do a scientific
    comparison of a single Raptor, for example, with
    multiple identical Raptors in a RAID 0 setup.

    When a single Raptor is compared to two 7,200 rpm SATA
    drives, there are (at least) two factors which explain
    the outcome: differing rotational speeds, and
    RAID vs. no RAID.

    This is not good scientific method.

    Don't forget: PATA/133 also has a lower maximum
    interface speed of 133 MB/s, compared to SATA's
    150 MB/s.

    It is better to design experiments which
    measure the effect that a single factor has on
    the outcome variable, i.e. data throughput.

    Clearly, the Raptor is superior not only because
    of its faster rotational speed (10,000 rpm vs. 7,200 rpm);
    Western Digital also found that the load on the
    bearings would have been much greater had they used
    the same platter size as on their 7,200 rpm
    IDE ATA/100 drives.

    So, they reduced the Raptor's overall capacity
    by significantly reducing the platter diameter.

    This, in turn, reduced the average time required
    to re-position the read/write head from one track
    to the other: obviously, the worst case -- from
    inside track to outside track -- would automatically
    be much lower solely because the platter diameter
    is smaller.

    Now, setting aside NCQ for the moment, one 74GB Raptor
    is going to perform differently from two identical
    74GB Raptors in a RAID 0 setup: the latter is capable
    of a much higher throughput, necessarily.
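
    The throughput gain comes from striping: consecutive logical
    blocks alternate between the member disks, so both spindles can
    work at once. A minimal Python sketch (a stripe unit of one
    block is assumed for simplicity):

```python
def raid0_locate(logical_block: int, num_disks: int) -> tuple[int, int]:
    """Map a logical block number to (disk index, block offset on that disk)."""
    return logical_block % num_disks, logical_block // num_disks

# With 2 disks, consecutive logical blocks alternate between them,
# so a sequential read keeps both drives busy:
for lb in range(4):
    print(lb, "->", raid0_locate(lb, 2))
```

    A sequential read of blocks 0 through 3 touches disk 0 and
    disk 1 twice each, which is where the (roughly) doubled
    throughput comes from.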

    Of course, other factors will also affect the outcome
    variable, such as whether the controller is doing
    computations in hardware, firmware, or software.

    Nevertheless, CPUs have become so fast that the
    penalty paid for doing these calculations in
    software now appears to be negligible.

    Our hypothesis (as yet untested) is that the Internet
    Explorer cache, on average, exhibits the very same
    or similar conditions which benefit greatly from
    NCQ on file servers with multiple pending I/O requests.

    If you look at complex web pages which display a lot
    of separate graphic images, each subregion of that
    web page will map into a different Windows data file.
    And, a web "page" is often much larger than a single
    CRT "screen".

    Even if a single data file is allocated sequentially
    on a hard disk, it is very UNlikely that multiple
    data files will be allocated sequentially on that
    same hard disk. In this instance, we are referring
    specifically to the disk subsystem which hosts the
    Internet Explorer cache, and possibly also the
    Windows swap file.

    We'll let you know what we find: I am not completely
    convinced that NCQ will NEVER benefit a high-performance
    workstation which accesses the Internet frequently
    and which is storing upwards of 20GB of files
    at any given moment of time in its browser cache.

    Sincerely yours,
    /s/ Paul Andrew Mitchell
    Webmaster, Supreme Law Library
  4. Tom's Hardware evaluates NCQ with Serial ATA here:

    Sincerely yours,
    /s/ Paul Andrew Mitchell
    Webmaster, Supreme Law Library