4-6 disk RAID 0 for HD editing: is Areca worth the premium price?

Kirika

Distinguished
Aug 24, 2006
16
0
18,510
I just built a dual Woodcrest machine for video editing. However, the deal for the RAID card seems to have fallen through, and the system is over the original budget, so I'm looking to save a few bucks. For HD capture I currently have a 4-disk RAID 0 array using 320 GB Seagate 7200.10 SATA II drives.

The onboard RAID isn't all that hot right now, so I need a discrete controller. For RAID 0, I was wondering if the Areca 1120 is worth the price premium over something like a HighPoint 1820A. The Areca is $479.99 at Newegg while the HighPoint is $207, over double the price :(.
 

PCcashCow

Distinguished
Jun 19, 2002
1,091
0
19,280
Just a suggestion: if you're attempting to put that many disks into a RAID 0 just for capture, you're asking for a headache. If you're going to do live captures you'll need the space, but keeping data like that on a RAID 0 is bonkers: one drive craps the bed and you're done. You may want to at least isolate your OS, keep a RAID 0 for capture, and then use a RAID 1, 0+1, or 5 for storage. You can get an empty NAS enclosure to populate with those drives for the same price as that Areca. You'll gain a lot of goodies that way, along with redundancy.
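For a rough feel of the tradeoff with four 320 GB drives, here's a quick back-of-envelope sketch (a simplified model that ignores filesystem and controller metadata overhead; the 0+1 figure assumes an even drive count):

def usable_capacity_gb(level, n_drives, drive_gb):
    # RAID 0: striping only, no redundancy
    if level == "RAID 0":
        return n_drives * drive_gb
    # RAID 1 and 0+1: half the raw space goes to mirroring
    if level in ("RAID 1", "RAID 0+1"):
        return n_drives * drive_gb // 2
    # RAID 5: one drive's worth of space goes to parity
    if level == "RAID 5":
        return (n_drives - 1) * drive_gb
    raise ValueError(level)

for level in ("RAID 0", "RAID 1", "RAID 0+1", "RAID 5"):
    print(level, usable_capacity_gb(level, 4, 320), "GB usable")
# RAID 0 gives 1280 GB but dies with any single drive failure;
# RAID 5 gives 960 GB and survives one drive failure.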
 

michaelahess

Distinguished
Jan 30, 2006
1,711
0
19,780
Just make a Ghost image of the original setup you put on it. The likelihood of a crash is almost nil if you treat them right! RAID 0 doesn't need a standalone card nearly as badly as RAID 5 would. I've got a HighPoint 2320 in RAID 5 that is quite fast, and you don't even need the XOR engine on it. The 1820A would be more than adequate, possibly even overkill.

Anyone who says RAID 0 is not reliable:

A. Has never used a lot of them

B. Treats their drives to 100+ degree temps regularly

C. Buys the cheapest crappiest drives possible and expects too much from them (cough*maxtor*cough)

When it comes to real-time video capture, NOTHING will beat a RAID 0 array. How do you think ILM does it? Massive RAID 0 Fibre Channel arrays. But of course they are smart enough to do backups :)
 

Kirika

Distinguished
Aug 24, 2006
16
0
18,510
I know to back it up; I have a 400 GB USB drive that I'll back the capture array up to. I don't see myself filling it.

The OS is on a separate 150 GB Raptor, and I have separate data and work drives.

Thing is, can a NAS sustain 178 MB/sec?
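(For context on where a rate in that ballpark comes from: uncompressed capture bandwidth is just width x height x bytes-per-pixel x frame rate. The formats below are only illustrative examples, not a statement of the exact format behind that 178 MB/s figure.)

# Uncompressed HD capture rates (decimal MB/s) for a few example formats.
def capture_rate_mb_s(width, height, bytes_per_pixel, fps):
    return width * height * bytes_per_pixel * fps / 1e6

print(capture_rate_mb_s(1280, 720, 2, 59.94))    # ~110 MB/s, 8-bit 4:2:2 720p60
print(capture_rate_mb_s(1920, 1080, 2, 29.97))   # ~124 MB/s, 8-bit 4:2:2 1080/30
print(capture_rate_mb_s(1920, 1080, 2.5, 29.97)) # ~155 MB/s, 10-bit 4:2:2 1080/30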
 

michaelahess

Distinguished
Jan 30, 2006
1,711
0
19,780
Sure, if you're willing to pay for it :)

I'd say you're set with the plan you already have; just check some reviews for RAID 0 performance and choose a card. As a generalized point of reference, I get sustained 120-140 reads and 110-130 writes out of 2 drives on a RocketRAID 1640, depending on which benchmark you use.
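If you want to sanity-check numbers like that on your own array without a dedicated tool, a crude sequential-throughput test looks something like the sketch below (rough only: OS caching and other activity will skew it, and the file path is just a placeholder for somewhere on the capture array).

import os, time

TEST_FILE = "E:/raid_test.bin"   # placeholder path on the array under test
CHUNK = 8 * 1024 * 1024          # 8 MiB per write/read
CHUNKS = 256                     # 2 GiB total

buf = os.urandom(CHUNK)
start = time.time()
with open(TEST_FILE, "wb", buffering=0) as f:
    for _ in range(CHUNKS):
        f.write(buf)
    os.fsync(f.fileno())         # push everything to the disks before timing stops
write_mb_s = CHUNK * CHUNKS / (time.time() - start) / 1e6

start = time.time()
with open(TEST_FILE, "rb", buffering=0) as f:
    while f.read(CHUNK):         # sequential read back of the same file
        pass
read_mb_s = CHUNK * CHUNKS / (time.time() - start) / 1e6

print("write ~%.0f MB/s, read ~%.0f MB/s" % (write_mb_s, read_mb_s))
os.remove(TEST_FILE)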
 

PCcashCow

Distinguished
Jun 19, 2002
1,091
0
19,280
Anyone who says RAID 0 is not reliable:

A. Has never used a lot of them

B. Treats their drives to 100+ degree temps regularly

C. Buys the cheapest crappiest drives possible and expects too much from them (cough*maxtor*cough)

When it comes to real-time video capture, NOTHING will beat a RAID 0 array. How do you think ILM does it? Massive RAID 0 Fibre Channel arrays. But of course they are smart enough to do backups :)

So while only a small percentage of the world is doing HD capture, you still lump the rest of us into your pompous opinion about the majority who say RAID 0 is not reliable? I made a suggestion and said RAID 0 was good for the rip; it was just the storage side that fell flat and needed parity, redundancy, or both.
 

michaelahess

Distinguished
Jan 30, 2006
1,711
0
19,780
I'm just saying RAID 0 is perfectly reliable if used right. Too many people say it's crap. I don't want people getting turned off from the technology just because a couple of people have had problems with it.

It's like saying Jaguar sucks at making cars and telling people not to buy them because they had a poor repair history.

Never been called pompous before, thanks! :D
 

PCcashCow

Distinguished
Jun 19, 2002
1,091
0
19,280
Sorry for the comment, but yours were crappy too. I love RAID 0. It's all I use, but I have a NAS and Backup Exec 10d to cover my ass. I just like for people here who are not that familiar with RAID to know as much as possible about it. No doubt that it's great, but I feel that some of the misinformation (like the kind that turns people off, as you said) is the direct cause of some of the kids on here posting about broken arrays, or arrays disappearing completely. Anyway, didn't mean to step on your toes.
 

sp6yd6er6

Distinguished
Jun 4, 2006
79
0
18,630
Very good analogy. I have not yet attempted RAID 0, but I agree with you that you should NOT have any serious issues if you buy quality products and know how to use them correctly.
 

jjw

Distinguished
Mar 29, 2006
232
0
18,680
Sure, if you're willing to pay for it :)

I get sustained 120-140 reads and 110-130 writes out of 2 drives on a RocketRAID 1640

Those I/O read/write figures are local rates, not network MB/s; 178 MB/s would be about 42% faster than the theoretical maximum of gigabit Ethernet. A NAS that could support that rate would have to have a 10 Gbit interface.
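Spelling the arithmetic out (using the usual decimal convention of 1 Gbit/s = 125 MB/s, before any protocol overhead):

# Gigabit Ethernet ceiling vs. the 178 MB/s target, ignoring protocol overhead.
gigabit_mb_s = 1000 / 8            # 1000 Mbit/s -> 125 MB/s
print(178 / gigabit_mb_s)          # ~1.42, i.e. about 42% over the line rate
print(10 * gigabit_mb_s)           # a 10 GbE link raises the ceiling to 1250 MB/s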

As for which one to buy, I haven't owned either, but you could save some money by going with a 4-port RAID card (if you only need/have 4 drives).
 

nanoprobs

Distinguished
Dec 19, 2005
52
0
18,630
RAID 0 is awesome if you want speed and you run hard-drive-intensive applications. I run 4x 80 GB SATA drives in RAID 0 using a PCI card on a PIII 1 GHz system; I can create a new IFO from *.vob files and burn DVDs at the same time, and neither slows down at all. You've got to love the speed of RAID 0, although if one drive craps out then you're done.

I also found that if I run a single SATA drive with Windows on it, the drive tends to be hot to the touch (in summer, running drive-intensive applications), but if I use four SATA drives in RAID 0 they tend to be cooler to the touch. Maybe it's because the load is shared between four drives, so they don't have to work as hard as one drive doing everything?
 

michaelahess

Distinguished
Jan 30, 2006
1,711
0
19,780
I didn't mean to say my speeds were NAS-like or vice versa. You can get speeds in the hundreds of megabytes per second over networks, but you need 10 Gb connections and enough adapters/drives to saturate them. Like I said, it's not cheap stuff, but it is possible. I'm sure the military and other government agencies use it somewhere :)

Edit: Here's something that gets 400 MB/s over four 1 Gb connections, just as an example:

http://www.sgi.com/pdfs/3859.pdf
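For rough scale, four aggregated gigabit links top out around 500 MB/s before overhead, so a sustained 400 MB/s is roughly 80% of line rate:

# Link-aggregation ceiling vs. the quoted 400 MB/s, ignoring protocol overhead.
per_link_mb_s = 1000 / 8                 # 1 Gbit/s ~= 125 MB/s
ceiling_mb_s = 4 * per_link_mb_s         # 500 MB/s raw ceiling across four links
print(ceiling_mb_s, 400 / ceiling_mb_s)  # 500.0 and 0.8 -> ~80% of line rate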
 

Madwand

Distinguished
Mar 6, 2006
382
0
18,780
I got around 230 MB/s average read on a 4-drive NVRAID 0 setup using 16 KiB stripes (across 1.2 TB, measured using HDTach). IIRC, writes weren't any/much slower, but I don't have records.

What's the on-board RAID implementation that you've tried? What sort of transfer rates are you seeing? Have you tried different stripe sizes?
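As a toy illustration of what the stripe size changes (a simplified model of striping, not a claim about any particular controller): a large sequential capture write spans many small stripes so all members stay busy, while a huge stripe can leave a single request sitting on one drive.

# Toy model of RAID 0 striping: which member drives a single request touches.
def drives_touched(offset_kib, length_kib, stripe_kib, n_drives):
    first_stripe = offset_kib // stripe_kib
    last_stripe = (offset_kib + length_kib - 1) // stripe_kib
    return {s % n_drives for s in range(first_stripe, last_stripe + 1)}

# A 1 MiB sequential write on a 4-drive array:
print(drives_touched(0, 1024, 16, 4))    # 16 KiB stripes -> {0, 1, 2, 3}, all four busy
print(drives_touched(0, 1024, 1024, 4))  # 1 MiB stripes  -> {0}, one drive does it all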
 

croc

Distinguished
BANNED
Sep 14, 2005
3,038
1
20,810
Sure, if you're willing to pay for it :)

I get sustained 120-140 reads and 110-130 writes out of 2 drives on a RocketRAID 1640

Those I/O read/write figures are local rates, not network MB/s; 178 MB/s would be about 42% faster than the theoretical maximum of gigabit Ethernet. A NAS that could support that rate would have to have a 10 Gbit interface.

As for which one to buy, I haven't owned either, but you could save some money by going with a 4-port RAID card (if you only need/have 4 drives).

Huh? 178 mb/s is 10 times faster than 1000 mb/s? Must be the new math...
 

croc

Distinguished
BANNED
Sep 14, 2005
3,038
1
20,810
I think I've got an HP MSA 1000 coming up spare... I'd be able to let you have it cheap, but it'd be bare bones, i.e., no drives...

And the FC card is not part of the deal.

But it IS fast...
 

croc

Distinguished
BANNED
Sep 14, 2005
3,038
1
20,810
And to get back to you... Unless you are using ATM @ OC-12, getting to that speed BETWEEN chassis is most difficult. WITHIN a chassis, 4 Gb speeds are doable as we speak. Check out the controller I 'offered' to the original poster...

10 Gb NICs would lead to (calculates on the back of an envelope) about 1 GB/s of total throughput before one ran into a collision domain. That wouldn't support a StorageTek with one arm and 8 drives if it was in 4x compression. (Ours has two arms, 16 drives online, and our BU subnet is constantly saturated...)

End of rant.
 
