VelociRaptor or SSD?

July 9, 2009 12:50:26 AM

What is everyone's opinion on upgrading my hard drive, given the new SSDs now in play and the fact that Windows 7 is optimized for SSDs? Should I just go inexpensive and get one or a few VelociRaptors, or is it a really good idea to wait and save for an SSD?


July 9, 2009 1:32:39 AM

IMO, SSDs aren't worth the money yet. A VelociRaptor for the OS and programs, paired with a Caviar Black for storage, is the best performance without wasting money.
July 10, 2009 9:55:04 AM

What's the heat/loudness and lifetime of the Velociraptor? Would I be sacrificing in those areas for a bit of extra speed?
July 10, 2009 12:39:46 PM

505090 said:
IMO, SSDs aren't worth the money yet. A VelociRaptor for the OS and programs, paired with a Caviar Black for storage, is the best performance without wasting money.


I would still say go for two Western Digital Black 640GB drives; they would be plenty fast enough.
July 10, 2009 12:40:09 PM

Neither heat nor noise is an issue.
July 10, 2009 3:02:12 PM

Well, I would say heat is an issue. That's why Western Digital doesn't want to sell these VelociRaptors to retail channels without the 3.5" heatsink caddy. Only server markets, like the blade market, get bare disks, since those enclosures are cooled properly and the size of the heatsink is an issue there.

Its power draw isn't that high, only about 4W at idle, but 4W in a tiny spot without cooling adds up and could overheat the drive. That's why they come with the heatsink. Regular 2.5" disks use only about 0.7W at idle. They are quite silent, though.
July 10, 2009 4:57:30 PM

Actually, they come with a "heatsink" because they are 2.5" drives and desktops typically use 3.5" bays, so a mounting bracket was required. If heat were an issue, they wouldn't be used in server environments. Further, the VelociRaptor uses less power and generates less heat than a Caviar Black, which is one of the two most recommended drives, the other being the Seagate 7200.12.
July 10, 2009 9:11:01 PM

Why would somebody get the 640 Black instead of the VelociRaptor? The Black is only 7200 RPM.
July 10, 2009 9:19:19 PM

I just bought new parts for a new computer, and I have a 1TB Caviar Black. But I was planning on using that for storage and putting the OS on another 250GB drive that I already have. Will I see much better performance if I get a VelociRaptor or Caviar Black? Or even... an SSD???
July 10, 2009 9:48:08 PM

Think of it this way:

In graphics, you can spend very little and get the job done, you can spend a "midrange" price and get good performance for the money, or you can spend an outrageous amount for that little extra performance over the midrange (slightly exaggerated in the grand scheme, but you get the point).

So, you can get a 7200 RPM Caviar Black and get the job done fine, you can opt for the VelociRaptor and get better performance for more money, or you can go all out and get an SSD; it's your choice. However, always think about what you need more: speed or capacity. You can't really have both. The VelociRaptor is the balance, the SSD is high performance at low capacity, and the 640 Caviar Black is high capacity at "low" performance, although you won't see a huge difference between a Caviar Black and a VelociRaptor. Honestly, either a single 640 Caviar Black, or two of them, is your best price/performance option.
July 10, 2009 10:27:16 PM


The best combo for the price is a VelociRaptor for the OS and programs and a Caviar Black for storage.
July 10, 2009 10:44:55 PM

Just remember, all reviews show the V-Raptor just a few ms (milliseconds) faster than a 640 Black or 640 AAKS.

A ms is a blink of an eye, so blink your eyes 8-10 times; that's how much quicker a V-Raptor is than a Black or AAKS.

1 second = 1000 ms
July 10, 2009 11:42:59 PM

djaabrams said:
Just remember, all reviews show the V-Raptor just a few ms (milliseconds) faster than a 640 Black or 640 AAKS.

A ms is a blink of an eye, so blink your eyes 8-10 times; that's how much quicker a V-Raptor is than a Black or AAKS.

1 second = 1000 ms

First, an eye blink is a lot slower than a millisecond (from what I can find, a typical blink is 300ms or so). Second, the VelociRaptors have roughly a 7ms seek time compared to the Caviar Black's 12ms. That is only 5ms, but keep in mind that your hard drive performs thousands of operations for even a relatively simple task. So the better way to look at it is that the VelociRaptor will handle more than 1.5 times as many operations per second if they are of a somewhat random nature (such as loading an OS). It is a definitely noticeable difference. On the other hand, large sequential transfers (such as moving a single large file) will not be noticeably faster. This is also why SSDs can feel so fast and responsive: they do not have significantly higher transfer rates than a typical high-performance hard drive setup, but they have access times of under 1ms compared to a VelociRaptor's 7ms.
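To put rough numbers on that (a back-of-the-envelope Python sketch; the drive access times are the ones cited above, and the SSD figure is just an assumed example):

def random_ops_per_second(access_time_ms):
    # Assumes each random operation costs one full access and ignores
    # transfer time, caching, and queuing effects.
    return 1000.0 / access_time_ms

for name, ms in [("Caviar Black", 12.0), ("VelociRaptor", 7.0), ("SSD (assumed)", 0.2)]:
    print("%-14s ~%5.0f random ops/s" % (name, random_ops_per_second(ms)))

# Caviar Black   ~   83 random ops/s
# VelociRaptor   ~  143 random ops/s
# SSD (assumed)  ~ 5000 random ops/s

143 vs. 83 is where the "more than 1.5 times" figure above comes from.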
July 12, 2009 3:16:27 AM

klezmer41 said:
Why would somebody get the 640 Black instead of the VelociRaptor? The Black is only 7200 RPM.

One thing I'd be very interested to see is a performance comparison of a pair of RAID-1 640GB Blacks vs. a single VelociRaptor. It seems to me you might be able to get similar, or possibly even better, performance for a cheaper price by taking advantage of motherboard RAID.
July 12, 2009 6:56:38 PM

It all comes down to how much money you want to spend. You could get four SSDs and put them in RAID 0; 128GB SSDs are pretty reasonably priced nowadays. The Intel drives are still crazy expensive for the little extra you get over other makes. With four drives, capacity shouldn't be an issue. You should still keep a couple of internal or external 1TB drives for storage.
July 12, 2009 8:26:56 PM

To save some money for now, buy the Caviar Blacks or RE3 series and do some cost-effective optimizations:

Even with lots of RAM, Windows XP will still swap out a program, e.g. when the minimize button is clicked on an open window.

To shorten an HDD's armature strokes as much as possible, XP's pagefile.sys should be assigned to the lowest-numbered sectors on its own hard drive, NOT on the C: system partition.

This can be accomplished by running the Contig program on a newly formatted primary partition; remember also to switch OFF XP's Indexing Service on that dedicated partition, to achieve a completely contiguous swap file:

contig -v -n D:\pagefile.sys 2048000000    (creates a contiguous ~2GB file)

attrib D:\pagefile.sys +A +S +H    (marks it archive, system, and hidden, as Windows expects)

Then, move the swap file off C: using the proper sequence in "My Computer".

The Contig software is freeware from Sysinternals, available on the Internet.


A related freeware program is PageDefrag, but it does not always defrag pagefile.sys (for reasons explained in the online documentation); hence the need to use Contig instead of, or in addition to, PageDefrag.


Another very cost-effective optimization is to download and install RamDisk Plus 9.0.4.0 from www.superspeed.com.

This version allows ramdisks to be created in unmanaged Windows memory with the 32-bit version of XP Pro.

After creating one or more ramdisks, move your browser cache(s) to that memory-resident partition and enjoy the incredible speed-up that results.

Off-loading your spinning platters will also help them last longer, particularly laptop HDDs spinning at 5,400 rpm.


In the long run, it makes more sense to conserve money now, because SATA/6G is the newest standard, and flash SSDs are the only file-system devices that come even close to saturating a SATA/3G interface (with the exception of SDRAM-based storage).

That's because even the fastest spinning platter still moves data past the read/write heads at no more than about 150MB/second, e.g. SAS HDDs spinning at 15,000 rpm with perpendicular magnetic recording ("PMR").

Look for SSD storage that will exceed SATA/3G bandwidth -- probably in 3-6 months, when SATA/6G motherboards and RAID controllers become more widely available.

The SSD manufacturers are in a heated race for market supremacy right now, and the next leaders will be those who offer real throughput exceeding 300MB/second over SATA/6G channels.

Similarly, the USB 3.0 standard is 5 Gbps, which is right behind SATA/6G in raw speed.

To catch a glimpse of this future:

Google i-RAM +4X +"RAID 0"


MRFS
July 12, 2009 8:35:20 PM

sminlal said:
One thing I'd be very interested to see is a performance comparison of a pair of RAID-1 640GB Blacks vs. a single VelociRaptor.


Do you mean RAID 0?
July 12, 2009 9:11:25 PM

How much space do you need, and for what type of data?

For the OS and program drive, you need 32GB plus room for programs: 64GB-80GB as a starter, and up to 160GB. For this you want primarily fast random reads and writes of small blocks, since that is much of what the OS does.

The SSD is matchless on the read side, but most SSDs struggle when overloaded with random writes. Among the MLC drives (much cheaper than SLC), the Intel X25-M is the only one that seems to be issue-free. That is changing, as cache is added to buffer writes and SSDs become cheaper; we might see major advances by the end of the year. Waiting on an SSD would be good for the value-conscious. As an early adopter, I have two X25-Ms in RAID-0, and they are performing well. I initially got one, but it was not big enough. I tried to sell it on eBay and go back to my VelociRaptor, but could not get my price. Then I came across a second one at a good price, so I bought it, primarily so I could get a single 160GB image for my OS, applications, and data. At $300 each, they are pricey.

The VelociRaptor is a very good drive. From experience, it is quiet, cool, and fast. Here is a link to some benchmarks comparing it to other drives, including 15K server drives; it is second only to a fast SSD:
http://www.storagereview.com/php/benchmark/bench_sort.p...
It is priced at about $200.

The WD Caviar Black 1TB drive is a very good solution as well. The fastest 10% of the drive is close to the VelociRaptor in performance. At about $100 it is a bargain performance drive, particularly if you only use that fastest 10% for high-performance needs.

What to get?
If your OS and all your live data will fit on a single fast drive, go that way if you can. I find managing multiple drives to be a pain, except for backup. If you will have large amounts of data, like video files, then get a fast OS drive and one or more 1TB drives for storage.

There is generally no real-world performance advantage (as opposed to synthetic transfer-rate benchmarks) to RAID of any kind. Go to www.storagereview.com at this link: http://faq.storagereview.com/tiki-index.php?page=Single...
There are some specific applications that will benefit, but gaming is not one of them. Even if you have an application that reads one input file sequentially and writes it out, you will do about as well by putting the input on one drive and the output on the other.

If you have the funds, remember a saying:
"the bitterness of the product is remembered long after the sweetness of the price is forgotten"

Hope this helps.
July 12, 2009 9:30:02 PM

sminlal said:
One thing I'd be very interested to see is a performance comparison of a pair of RAID-1 640GB Blacks vs. a single VelociRaptor.


08nwsula said:
do you mean raid 0?

No, I mean RAID-1. I wouldn't be interested in the reliability tradeoff you have with RAID 0.

Both RAID-0 and RAID-1 benefit from being able to handle nearly twice as many read I/O requests per second, which is the most important thing for shortening load times.
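A toy model of that claim (a sketch; the single-drive IOPS figure is an assumed example, and real scaling lands a bit below the ideal 2x):

def raid1_read_iops(single_drive_iops, mirrors=2):
    # Reads can be distributed across the mirrors, so they scale.
    return single_drive_iops * mirrors

def raid1_write_iops(single_drive_iops):
    # Every write must hit every mirror, so writes do not scale.
    return single_drive_iops

print(raid1_read_iops(85))   # ~170 read IOPS from a pair of Blacks (ideal)
print(raid1_write_iops(85))  # still ~85 write IOPS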
July 12, 2009 9:32:46 PM

Even though this factor may sound overly simplistic, we also pay attention to the ratio of cost to warranty. In most instances we have examined, that ratio is LOWER for HDDs with 5-year warranties.

If an HDD with a 3-year warranty fails in year 4, that problem is quite different from a failed HDD that enjoys a 5-year warranty.


MRFS
July 12, 2009 9:37:46 PM

sminlal said:
No, I mean RAID-1. I wouldn't be interested in the reliability tradeoff you have with RAID 0.

Both RAID-0 and RAID-1 benefit from being able to handle nearly twice as many read I/O requests per second, which is the most important thing for shortening load times.

To get some of the benefit of multiple concurrent reads with RAID, you will need a separate hardware-based RAID controller, not the mobo type.

Fortunately, hard drives do not fail often; mean time to failure is claimed to be on the order of 1,000,000 hours (over 100 years). And RAID-1 does not protect you from other types of loss, such as viruses, software errors, RAID controller failure, operator error, fire, etc. For that, you need EXTERNAL backup. If you have external backup and can afford some recovery time, then you don't need RAID-1, and you don't have to worry about the increased risk of RAID-0.
July 12, 2009 11:04:00 PM

geofelt said:
To get some of the benefit of multiple concurrent reads with RAID, you will need a separate hardware-based RAID controller, not the mobo type.

I keep seeing references to motherboard RAID not being "hardware". This is something I've been trying to research and understand, but I haven't found what I consider a definitive answer yet.

I did find this page which suggests the ICH10R chipset itself doesn't perform the RAID functions but merely acts as a repository for configuration information which is then used by the ICH10R OS drivers to do the actual work. The fact that motherboard RAID is not widely available in most Unix flavours supports this hypothesis. But it would be really nice to see a good, technical description of the ICH10R RAID implementation...

But that having been said, even a fully OS-level implementation of RAID is certainly capable of issuing simultaneous I/O requests to multiple drives. There's nothing in "software RAID" to preclude this. Indeed, Windows can issue multiple simultaneous I/O requests not only to multiple drives, but to each individual drive; this is why the "queue length" of a busy disk climbs above 1.
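As a trivial demonstration of that last point (a minimal Python sketch; the file paths are hypothetical, and this is ordinary user code, not a RAID driver):

import threading

def drain(path):
    # A plain sequential read; the OS services both threads at once,
    # which is how a disk's queue length climbs above 1.
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass

# Hypothetical large files on two different physical drives.
threads = [threading.Thread(target=drain, args=(p,))
           for p in (r"D:\big1.bin", r"E:\big2.bin")]
for t in threads:
    t.start()
for t in threads:
    t.join()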
July 12, 2009 11:12:47 PM

RAID parity computations are performed more efficiently on dedicated hardware-based RAID controllers, e.g. Areca's, or Intel's IOP348.

With the dawn of multi-core CPUs, however, the idle cores end up doing the same work at almost identical speed, and without frequent scheduling interrupts.

Some of the newest RAID controllers have dual cores too:

http://www.intel.com/design/iio/iop348.htm

Two Intel XScale® processors for optimized performance

* High-performance RAID system-on-a-chip with an integrated 3 Gb/s SAS/SATA II controller
* Fourth generation Intel XScale® processor with core speeds up to 1200 MHz and 512 KB L2 cache
* 8 port, 3 Gb/s SAS/SATA engine supporting industry standard SSP, STP, SMP, and direct attached SATA
* Hardware RAID 6 acceleration with near RAID 5 performance
* Pin compatibility with Intel® IOP341 I/O processor, Intel® IOP342 I/O processor, Intel® IOC340 I/O Controller, Emulex IOP 504 I/O processor*, Emulex IOP 502M I/O processor*, and Emulex IOC 504 I/O Controller*
* Emulex's Service Level Interface (SLI*) technology providing a driver compatible API
* Multi-ported 400/533 MHz DDR2 memory controller supporting up to 2 GB of 64-bit ECC protected memory
* Three application DMA units with XOR, RAID 6 P+Q, CRC32C
* Dual- or single-interface PCI-X* and PCI-Express* host bus interface options
* Dual 128-bit/400 MHz internal buses, providing over 12 GB/s internal bandwidth



MRFS
July 13, 2009 12:41:57 PM

Yes, but these IOP processors aren't really comparable to general-purpose CPUs like the usual AMD/Intel chips.

Everybody thinks calculating parity is slow, but even a $25 single-core CPU can do about 3-5GB/s of parity calculations. So that argument is not very valid:

1) The host CPU is much more powerful than the IOP in an Areca hardware controller. With good software, there is no way for the Areca to beat the host system, because the host system is A LOT faster.

2) Although the IOP isn't as fast, it is dedicated to its task and runs smart firmware tuned for performance.

3) Since no advanced RAID functionality exists for Windows, you are basically stuck with hardware RAID there.

4) There is nothing hardware RAID can do, performance-wise, that software RAID can't.

In theory, software RAID is even superior, because it's more flexible and 100% hardware-independent.
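For what it's worth, the parity math really is trivial; here it is reduced to a toy Python sketch (real drivers do the same XOR over large buffers in optimized C, which is why it runs at memory speed):

# RAID5 parity is just the XOR of the data blocks in a stripe.
blocks = [bytes([0x11, 0x22, 0x33]),
          bytes([0xAA, 0xBB, 0xCC]),
          bytes([0x01, 0x02, 0x03])]

parity = bytes(a ^ b ^ c for a, b, c in zip(*blocks))

# Lose any one block, and XOR-ing the survivors with parity rebuilds it.
rebuilt = bytes(p ^ b ^ c for p, b, c in zip(parity, blocks[1], blocks[2]))
assert rebuilt == blocks[0]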
July 13, 2009 12:48:47 PM

sminlal said:
I keep seeing references to motherboard RAID not being "hardware". This is something I've been trying to research and understand, but I haven't found what I consider to be a definitive answer yet.
Onboard RAID = driver RAID = fake RAID.

There is nothing in the hardware that accelerates RAID at all. A motherboard RAID controller is just an ordinary IDE/SATA controller with bootstrap firmware; all the RAID-specific processing is done by Windows-only drivers. That's why, if you boot Linux, it just sees a normal controller and bare disks, whereas on Windows the bare disks are hidden and only the array is visible -- but that is the driver software's doing!

Aside from this, there is nothing wrong with software RAID. It's just a pain that software RAID only has really powerful implementations on Linux and Unix, not on Windows. On Windows you get poor man's RAID, for three reasons:

1) Windows does not offer advanced RAID functionality itself;
2) Windows XP and older create misaligned partitions on RAID arrays, causing lower performance;
3) Windows does not have any advanced filesystems that allow an advanced write-back mode (where writing the first 2000MB goes at RAM speed).

Of course, for every disadvantage there is a substitute. The Intel ICHxR RAID drivers are among the best available on Windows, with a RAM write-back mode if you enable the 'write caching' feature, and the misalignment is fixed by using Vista or Win7. But you can't use XFS/ZFS/JFS/ReiserFS or any other advanced filesystem on Windows; you're stuck with good ol' NTFS, which is becoming very dated with regard to protection against corruption and damage.

July 13, 2009 1:53:18 PM

> In theory, software RAID is even superior, because it's more flexible and 100% hardware-independent.

You make a lot of interesting points, which are even more salient in this era of multi-core CPUs and hyperthreading.

There are other factors that will affect the performance of software RAID implementations:

(1) efficiency of the coding, which can vary enormously depending on the knowledge and skill of the programmer(s);

(2) bus latencies: the results of an LGA775 CPU's calculations must travel through the FSB, then from the Northbridge to the Southbridge, before the I/O requests can reach the storage device;

(3) the fastest "hardware" RAID calculations occur on a dedicated processor situated at the other end of the PCI-E bus, read: "closer to the storage device";

(4) processes executing on a general-purpose CPU are subject to more frequent interrupts and OS scheduling decisions.


Your points suggest even more potential for software RAID on the Core i7 architecture, which easily achieves a memory bandwidth of 25,000 MB/second -- even higher when the RAM is overclocked.

The memory controller, as in recent AMD CPUs, has moved from the Northbridge onto the CPU, allowing an enormous increase in overall computational efficiency.

I'd be interested to see a controlled comparison of software RAID on a high-end Core i7 machine with, say, one of Areca's or HighPoint's most powerful hardware RAID controllers, all other things being equal.

With an overclocked Core i7 CPU and RAM, and with the addition of hyperthreading in that CPU, well-coded software RAID should compete quite well with the fastest available hardware RAID controllers.


MRFS
July 13, 2009 2:03:31 PM

> 3) Windows does not have any advanced filesystems that allow an advanced write-back mode (where writing the first 2000MB goes at RAM speed).

There are third-party solutions, like those developed and sold by SuperSpeed LLC of Sudbury, Massachusetts:

http://www.superspeed.com

e.g. RamDisk Plus, SuperCache, and SuperVolume.


MRFS
July 13, 2009 6:13:28 PM

Thanks, guys, for your comments regarding the ICH10R RAID implementation. I think I understand exactly what's going on with it now. It's essentially a software RAID implementation that includes BIOS and hardware support for configuration and booting, so that (unlike pure software RAID) it can boot from a redundant volume set in which one of the member drives has failed.

The CPU load for this would be pretty trivial with modern processors and conventional disks -- mechanical disks are so slow compared to the CPU that there's really no cause for concern. It is most certainly NOT the same issue as PIO mode, where the CPU needs to execute entire instruction sequences to transfer every BYTE of data.

SSDs could perhaps be more of a concern if you were using a large RAID-5 set, though. But it seems to me it would be a bit of an oxymoron to put SSDs in a RAID-5 set (high-performance drives in a low-performance RAID configuration), and if you can afford to do that, then you can afford a dedicated controller card...
July 26, 2009 1:21:14 PM

Hey guys, sorry I never got back to this interesting thread. Also, sorry my previous message was improperly quoted; unfortunately, a bug prevents me from editing my messages on these forums.

MRFS said:
> In theory, software RAID is even superior, because it's more flexible and 100% hardware-independent.

You make a lot of interesting points, which are even more salient in this era of multi-core CPUs and hyperthreading.

There are other factors that will affect the performance of software RAID implementations:

(1) efficiency of the coding, which can vary enormously depending on the knowledge and skill of the programmer(s);

Indeed. Look at the crappy RAID5 implementations that onboard, Windows-only RAID drivers give you. Intel's might be decent, but nothing more than decent. The write-caching function is also dangerous to use, because NTFS journaling offers no protection for that large RAM write-back buffer.

Now look at what Linux and BSD offer: top-notch RAID drivers with excellent performance, scaling, and features. BSD especially deserves praise here; I had never seen software RAID5 outperform an Areca in sequential write, but its RAID5 engine can, as I tested myself with 8 disks. The CPU load was rather high, and without 10GbE (10-gigabit Ethernet) it won't be very useful. But the question was whether software RAID can be superior to hardware RAID, and I must say yes: there is nothing hardware RAID can do, in terms of performance, that software RAID can't. The host has a powerful CPU, lots of memory bandwidth, and full-speed I/O if you use the chipset connectors.

However, as hardware RAID controllers are designed for server operation, IOps matter a great deal to them. Because Microsoft doesn't have the wisdom or competence to do I/O properly in its operating systems, hardware RAID controllers began to use low-level optimizations: their own read-ahead, their own caching (even though the host's RAM is much larger and the more logical place for it), and request reordering -- not serial order, but something like NCQ in the controller firmware itself, which can group clusters of I/O. For example, if you write with three applications, you have three I/O streams. If the HDD head has to switch between three positions on the disk for every request, you get sub-10MB/s speeds. But if you cluster the I/O so that you write 100MB here and then 100MB there, instead of switching on every 128KiB request, it is much faster -- close to the maximum sequential read/write rate of the volume.

So, to make a long story short, hardware RAID like Areca can still deliver more IOps, because it is tuned very well, with experience from benchmarking in the server sector. There, vendors can't just focus on sequential speeds as they do in the consumer sector, where MB/s is all that 99.9% of the population knows; in the server sector they are judged by hard IOps figures that tell you quite well how a device will perform under a given workload. In my testing, the Areca did twice as many random IOps but a little less sequential read/write. So it's still a good controller, and faster for many tasks. Yet if all you want is to store large files over the network, then software RAID5 on Linux/BSD is a perfect solution, as sketched below. It saves you a lot of money by not requiring a good RAID controller, and it lets you add SATA ports with non-RAID SATA controllers on PCI-Express x1, for example. Those go for about $25, and while they may ship with Windows RAID drivers, to BSD and Linux they are just SATA controllers, because that's all the hardware really is.
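(For reference, a minimal sketch of what that looks like with Linux's mdadm; the device names and filesystem choice are assumptions, and BSD has its own tools for the same job:)

# Create a 4-disk software RAID5 array out of example devices
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Then format and mount it like any single disk
mkfs.xfs /dev/md0
mount /dev/md0 /storage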

Quote:
(2) bus latencies: the results of an LGA775 CPU's calculations must travel through the FSB, then from the Northbridge to the Southbridge, before the I/O requests can reach the storage device;

Yeah, Intel chips were really slow when you look at true SMP performance. They didn't scale as well, because inter-core communication still ran over an old bus architecture. AMD switched to HyperTransport for on-die communication and an on-die memory controller (IMC) long ago, with the K8, so it isn't affected by this problem. I would also say AMD is the CPU of choice for a NAS, as they have chips that are extremely low-power at idle while still being fast and cheap. The new 45nm Athlon II X2 250 is a nice one, and runs at 3.0GHz.

Quote:
(3) the fastest "hardware" RAID calculations occur on a dedicated processor situated at the other end of the PCI-E bus, read: "closer to the storage device";

Well, since all data has to pass through the controller, which has its own latency, you could just as well call the controller a latency bottleneck. It takes longer before the actual data comes back to the host system, and that time is the real performance of the storage subsystem: I send a request; how long do I wait, either for an acknowledgement (when writing) or for the data I requested (when reading)? If the storage subsystem were infinitely fast, the service time would always be zero.

One good argument would be that the host CPU may be overloaded with other tasks, hampering I/O. But in reality it's often the other way around: the host CPU sits idle because it's waiting on I/O. In Linux you can see this graphically in some monitors as the 'iowait' percentage -- the share of "wasted" CPU time spent waiting on I/O. If you start an application, you generally see 2% real CPU usage and 98% iowait, and the application takes about 15 seconds to start. Now close it, wait, and start it again: since everything is cached, the HDD isn't touched anymore, and we see 100% CPU usage for about one second before the application starts. In that case the CPU actually could do some work, because it wasn't left waiting while the storage ship took a month to return with the goods, if you follow my analogy. :)

Quote:
(4) processes executing on a general-purpose CPU are subject to more frequent interrupts and OS scheduling decisions.

Ah, I should have read ahead, as I'd already discussed that above. Also notice that quad-core is getting more common, and single-core is practically not sold anymore in desktops except for thin clients. And FreeBSD's scheduler (ULE) is a lot better at SMP than traditional schedulers; I know both the Linux kernel and the MySQL database benefited from the FreeBSD threading system, though I'm not aware of the details.

In either case, I think the host system is powerful enough to do I/O. Even though it's general-purpose and shared, it has SMP to cut its processing latency, and when it processes, it is a lot faster than the Intel IOP chips on an Areca controller. XORing goes at memory speed; it's not CPU-intensive at all. With the advanced RAID5 driver on BSD, the real CPU usage appears to be splitting and combining requests, which involves a lot of memory copies; the XOR itself is only a fraction of that.

Quote:
The memory controller, as in recent AMD CPUs, has moved from the Northbridge onto the CPU, allowing an enormous increase in overall computational efficiency.

I'd be interested to see a controlled comparison of software RAID on a high-end Core i7 machine with, say, one of Areca's or HighPoint's most powerful hardware RAID controllers, all other things being equal.

I've done such things in the past. Generally, software RAID0/1 scales 100% with the number of disks. The RAID5 driver does use some serious CPU power, but I was only using a dual-core chip and still got 450MB/s write, which I consider outstanding. It's just too bad FreeBSD doesn't support other filesystems like XFS, JFS, ReiserFS and the like, or has only read support for them. Maybe I'll do a nice comparison sometime and post it here or elsewhere.

I don't think the raw speed of the hardware is a concern for software RAID. It's more about the design: you have to be smart to compensate for slow HDDs. Seek times kill performance, so if you can avoid seeks with tricks in the software, that is a clear win. It doesn't always cost meaningful extra CPU time, either; it's mostly about being smart.

For example, the nVidia RAID5 driver is known for its poor performance. But if you compensate for all alignment issues and write in a scenario where all the stars in the universe align just right, you can write at 400MB/s+ with that driver. That's because, in such a configuration, the data written is exactly one full stripe block. If you have 4 disks in RAID5 with a 128KiB stripe size, the full stripe block is (4 - 1) * 128KiB = 384KiB. If you send write requests of exactly that size at the right offset, writes are very fast. Of course, that is not a practical requirement to put on applications, so the FreeBSD RAID5 driver does it automatically: it combines I/O into blocks of exactly this "magical" full-stripe size, which is how it achieves such high performance. Areca does just the same, by the way; any high-performance RAID5 driver has to. So in this case it's about being smart, and a lot of the potential in software is going unused; the real key to performance is no longer in the hardware, it's in the software.
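The "magical size" arithmetic generalizes easily (a quick Python sketch of the formula above):

def full_stripe_kib(disks, stripe_kib):
    # RAID5 devotes one stripe unit per stripe to parity, so a write of
    # exactly (disks - 1) * stripe_size avoids read-modify-write cycles.
    return (disks - 1) * stripe_kib

print(full_stripe_kib(4, 128))  # 384 KiB, the example above
print(full_stripe_kib(8, 128))  # 896 KiB for an 8-disk array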
July 26, 2009 7:13:19 PM

I'd wait. Just get a 500-750GB WD Caviar Black now and wait a couple of months for the new SSDs to come down in price. I'm building a new i7 desktop, and that's what I decided to do in the end.
October 17, 2009 1:27:19 AM

bmxjumperc said:
What is everyone's opinion on upgrading my hard drive, given the new SSDs now in play and the fact that Windows 7 is optimized for SSDs? Should I just go inexpensive and get one or a few VelociRaptors, or is it a really good idea to wait and save for an SSD?


I've upgraded to an SSD from a VelociRaptor and would not go back! (Actually, the VelociRaptor now stores my videos from Media Center.)

Boot times are crazy, and whatever the benchmarks do or don't show, actually using an SSD as an OS drive (mine is 128GB) is a totally out-of-this-world experience compared to spindles.

No chkdsk, no defrag... antivirus that scans without you even noticing. Just make sure you have an Indilinx- or Samsung-controller-based drive. I guess Intel is fine too, but they are more expensive.

Gaming? Office work? Both are much faster with an SSD. I just received a few C2D laptops with SSDs, and the users literally fight over them... they don't know what makes those laptops so fast, but they are ready to ditch their C2Q desktops any day to get one!

If you say it's not worth it, you just haven't tried it.
October 17, 2009 1:37:56 AM

klezmer41 said:
I just bought new parts for a new computer, and I have a 1TB Caviar Black. But I was planning on using that for storage and putting the OS on another 250GB drive that I already have. Will I see much better performance if I get a VelociRaptor or Caviar Black? Or even... an SSD???


The SSD will totally beat every other setup.

October 22, 2009 10:07:31 PM

I second snotling on the fact that once you go SSD, you can't go back. Boot times are insane... the whole feel of a computer can completely change with an SSD connected. It brings new life into systems. I put an Intel X25-M G2 in my mom's old IBM ThinkPad, and it's like the system was blessed with a gift from God or something. XP boots in seconds rather than minutes... it's crazy.

They are pricey, but if you can afford it, do it. A Saturn Ion and an Aston Martin can both get you home... but which one gets you home faster? You have to pay for that faster time. The same goes for SSDs!