Raid 0 / Jbod?

ikarasu

Distinguished
May 24, 2011
2
0
18,510
I have 4 hard drives I want to combine into 1 big drive (multimedia storage). The only thing that scares me is that if I lose 1 drive, I lose them all. I don't want to lose 8 TB of backed-up files and have to re-back them up if 1 fails.

I've heard that with JBOD, it just uses 1 disk, then when it's full it continues on to the second disk, so if 1 disk fails... you only lose the data on that one disk.

Now, some websites say JBOD is the term for this... others say JBOD just means "just a bunch of disks" and doesn't mean combining them. That makes googling/trying to look up the information a bit hard.


I bought a RAID controller that supports RAID 0, 1, 5, and 10. It's a cheap one, and I'm pretty sure it's just a software RAID solution. I'm looking to set it up to combine the disks in a way that if 1 fails, they don't all fail.

From what I gather... if I select Concatenated for the configuration, will it do what I'm attempting to do: create a combined disk without the risk of losing all the data if 1 drive fails? If not... is there any way to do it that way? (I'm using Windows, if it matters.)

Thanks for any help!
 
OK, there's a misunderstanding of how RAID levels work here.

JBOD -> one drive per logical unit, aka "pass-through" mode. This presents the configured drive as a logical drive unit to the host OS; no form of abstraction is done.

RAID 1 -> Mirror, we all know this

RAID 0 -> has two common modes of operation: span and stripe. Striping is where the single logical drive is broken into even stripes across all member disks. This ensures maximum performance, as files are spread across all disks. Spanning is the opposite: each disk is appended to the previous disk, so Disk 1 won't be used until Disk 0 is filled up, and so forth. This has poor performance, as you will rarely read data off multiple disks, but it can allow you to recover the data on the surviving members if one disk in the array breaks.
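The stripe-vs-span difference above boils down to how a logical block number maps to a physical disk. A minimal sketch (the disk count and blocks-per-disk figures are made up for illustration):

```python
# Hypothetical address mapping for RAID 0 striping vs. spanning.
# Disk count and capacity are arbitrary example values.

NUM_DISKS = 4
BLOCKS_PER_DISK = 1000

def striped(lba):
    """Striping: consecutive logical blocks rotate across all members."""
    return (lba % NUM_DISKS, lba // NUM_DISKS)

def spanned(lba):
    """Spanning: disk 0 fills completely before disk 1 is touched."""
    return (lba // BLOCKS_PER_DISK, lba % BLOCKS_PER_DISK)

# Consecutive logical blocks 0..3:
#   striped -> disks 0, 1, 2, 3 (parallel I/O possible)
#   spanned -> disk 0 only (single-disk speed)
for lba in range(4):
    print(lba, striped(lba), spanned(lba))
```

This is why striped reads of a large file hit every spindle at once, while a spanned array mostly behaves like whichever single disk holds the data.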

RAID 5 -> distributed-parity RAID. The disks are striped just like RAID 0, except an extra parity block is added, and the parity is rotated so that it's distributed across all member disks. If one disk is lost, the parity data is used to reconstruct the data that disk held. The pro is that you can lose an entire physical disk and your data is still safe. The cons are that you lose one disk's worth of capacity and that your write performance takes a severe hit (with software RAID). Read performance will be similar to that of RAID 0.
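The parity trick described above is just XOR: the parity block is the XOR of the data blocks in the stripe, so any one lost block can be rebuilt from the survivors. A small sketch with made-up block contents:

```python
# Sketch of RAID 5 parity: parity = XOR of the data blocks, so any
# single missing block is recoverable. Block contents are arbitrary
# example bytes, not a real on-disk layout.

from functools import reduce

def xor_blocks(blocks):
    """XOR same-sized byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]   # blocks on three data disks
parity = xor_blocks(data)                        # block on the parity disk

# Simulate losing disk 1 and rebuilding it from the survivors + parity:
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```

Because A ^ B ^ C ^ (A ^ B ^ C) cancels to zero, XOR'ing the remaining blocks with the parity always yields the missing one.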

OK, let's talk about software RAID 5. While modern CPUs are more than powerful enough to do the XOR operations to generate the parity, the bus traffic and delay involved severely impact write performance. Every block of data must have a full parity set generated whenever it's written, and when writing large amounts of data this creates lots of I/Os to the CPU, causing the drives to experience a severe performance hit. On a four-member RAID 5 array, each member being 1 TB and 7200 RPM, I had a sustained write speed of 40 MB/s. The same array in RAID 0 had a sustained write speed of 128 MB/s. The HBA was a SiI3124 on a PCI card, so I was really hitting the ceiling of the bus (132 MB/s). And while the CPU itself never went over 16% utilization, the data I/O was causing the drives to lag. I experienced similar performance on a PCIe SATA RAID card, so I know it wasn't the PCI bus causing the issue.
 
To clarify JBOD: it can be configured with one or multiple physical hard drives. The intent of JBOD is to provide an easy means to take many smaller physical drives and combine them all into one large logical drive that is presented to the OS. Example: 3 physical drives, an 8 GB, a 13 GB, and a 64 GB drive, configured as JBOD, will be presented to the OS as 1 logical drive with a total of 85 GB of space.
 
CM, what you referenced is known as concatenating disks, a flavor of RAID 0 that is mostly integrated into spanning these days, although some adapters make a distinction.

JBOD is exactly that, just a bunch of disks. It tells the HBA to present the physical disks as logical disks without any logic. JBOD with those disks would present exactly three disks to the host OS, one being 8 GB, one 13 GB, and one 64 GB in size. It won't do any concatenation or other form of abstraction.

For example, suppose you had six 146 GB SAS disks; you could do the following:
1x JBOD
5x RAID5

The host OS would see two disks, one being the 146 GB pass-through drive and the other being the parity-striped RAID 5 array.
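The capacities the host OS would see in that six-disk example work out like this (a quick arithmetic sketch; the RAID 5 usable size loses one member's worth to parity):

```python
# Capacity arithmetic for the six 146 GB SAS disk example:
# one disk passed through as JBOD, the other five in RAID 5.

DISK_GB = 146

jbod_gb = DISK_GB                          # pass-through: full single disk
raid5_members = 5
raid5_gb = (raid5_members - 1) * DISK_GB   # one disk's capacity lost to parity

print(jbod_gb)    # 146
print(raid5_gb)   # 584
```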
 

r0ck3tm@n

Distinguished
Sep 27, 2009
136
0
18,690


ikarasu, I think what you want is a RAID 5 array or a RAID 10 array. Look those up on Wikipedia for clarification. I recommend RAID 10 unless you have a hardware RAID controller. Your storage space will be cut in half with RAID 10, but it should work well and keep working if any one drive fails.

My RAID controller, an Adaptec RAID 2405, is combining 4 disk drives into three RAID arrays: a RAID 0 array for my operating system and two RAID 10 arrays for storage. I could have chosen one big RAID 10 array had I wanted to.
 
Nope, sorry... JBOD'ing physical drives of varying sizes into one large logical volume is NOT any flavor of RAID 0.

JBOD is a non-standard RAID configuration that is sometimes used to turn several odd-sized drives into one larger useful drive, which cannot be done with RAID 0. For example, you could combine 3 GB, 15 GB, 5.5 GB, and 12 GB drives into a 35.5 GB logical drive, which is often more useful than the individual drives separately.

JBOD does concatenate the drives; however, making a JBOD array from 3 physical drives of varying sizes DOES NOT present 3 disks to the host OS. It presents one large volume that equals the total capacity of the individual drives.
 
Use RAID 5. You'll need 1 more disk, assuming you have four 2 TB disks. The formula for RAID 5 capacity is (N-1) x S(min), where N is the number of drives and S(min) is the size of the smallest drive. Five 2 TB disks should give you a final size of 8 TB. Should a disk fail, you can then just replace that disk. This array will tolerate only 1 drive failure before you lose everything, so make sure to replace a failed disk immediately.
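The (N-1) x S(min) formula above, as a small helper (the five-2 TB-disk list matches the example; sizes are in TB):

```python
# Usable RAID 5 capacity per the (N - 1) x S(min) formula above.
# Sizes are in TB; mixed-size members waste space above the smallest.

def raid5_capacity(sizes):
    """Usable capacity: (number of drives - 1) * size of smallest drive."""
    return (len(sizes) - 1) * min(sizes)

print(raid5_capacity([2, 2, 2, 2, 2]))  # 8 TB usable from five 2 TB disks
```

Note that with mixed sizes the formula still uses the smallest member, so a 1 TB drive in an otherwise 2 TB array caps every member at 1 TB.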

Keep in mind that RAID is designed for redundancy: the ability for an array to stay operational in the event of a failure. It is NOT a backup solution. The data can still be corrupted or attacked by a virus or the like, and in either event the data is lost.
 



Except you can't array JBOD, period. JBOD stands for Just a Bunch of Disks; it's a way for an adapter to present disks directly to the host OS without any abstraction.

What you just described is RAID 0 spanning. Spanning does not require that the member disks of an array be the same size. Some adapters require that a spanning RAID 0 array be defined as same-sized or differently-sized, hence the difference between "spanning" and "concatenation"; this is not part of the standard, though, and is left entirely up to the manufacturers. The takeaway is that JBOD is not an array of any type at all; it's a direct mapping from target drive to host OS. This is defined and standardized; you can't reinterpret it.
 
CM,

I believe we are both correct here, depending on what point of view one takes. After plugging away, I found that there is a discrepancy in what to call concatenated disks. Earlier RAID HBAs required all members of a spanned RAID 0 array to be the same size; this requirement is not part of the standard RAID modes but was a limitation of the hardware at the time. Later HBAs came out that could concatenate member disks of different sizes, but to distinguish this from the older method they made it a new option on a RAID 0 array (span vs. concatenate). Again, the RAID 0 standard has no requirement that the member disks be the same size, so both are just RAID 0 spanning.

Now here is where it gets funny: somehow the definition of JBOD has come to include RAID 0 arrays whose members are of different sizes. This is confusing, as that case is already covered by RAID 0. It becomes apparent if you ever work with UNIX, especially Solaris (what I predominantly work on). Sun (Oracle) still refers to concatenated disk slices as RAID 0 spanning, with no need for same-sized member slices; JBOD is just disks that aren't abstracted. The same goes for the Sun StorEdge series of disk enclosures/SANs, and for EMC CLARiiON SAN equipment as well. But when I wander into the Windows and consumer hardware world, I see the term "JBOD" being used for concatenating disks and defined as a non-RAID disk architecture. I'm beginning to suspect the widespread use of software RAID is behind this discrepancy.

In short, we're both right depending on what you're working with.

To the OP,
You have to determine how much space/performance/safety you want. I'll rate them as best I can.

RAID0 Spanning
Performance: Best
Space: Best
Safety: Worst (depending on reliability of disks)

RAID 1+0, striping with mirroring
Performance: Middle
Space: Worst
Safety: Best

RAID5 Software
Performance: Worst (better than a single drive, but I/Os to the CPU will cause the disks to lag during writes)
Space: Middle (3 out of 4 disks worth for you)
Safety: Middle, near to that of RAID1+0

How much space do you actually need? Don't look at the total space on the disks, just what you need for storage; that will drive which mode you want to operate in. My caution on RAID 5 is that with software RAID, the I/Os to and from the CPU across the bus, without any form of controller caching, will kill write performance. It'll be higher than with a single disk, but nowhere near what RAID 0/1+0 can achieve. It tends to be the happy medium between the performance of RAID 0 and the reliability of RAID 10.
 
RAID 0 originally had two defined modes: RAID 0 stripe and RAID 0 spanning. Later, manufacturers kind of did their own thing. You can still see both if you load up an older LSI Logic MegaRAID/Dell PERC adapter or look at the Solaris RAID implementation (originally Veritas Volume Manager).
 

ikarasu

Distinguished
May 24, 2011
2
0
18,510
Didn't know this would get so much discussion... ;P

Thanks for all the replies/help.

I found a simple way to do what I want... (which is to combine all the disks into 1 drive so that if a drive fails, I only lose 2 TB of data, not all 8 TB).

I switched my SATA card from RAID mode to plain SATA ports... then downloaded a program called Drive Bender.

It's pretty much a replacement for Windows Home Server's Drive Extender.

It creates a virtual drive and a folder on each hard drive, and it stores 1 file on 1 HD, 1 file on the next, and so forth, taking up equal space among them all and combining them into virtually 1 hard drive.

And if 1 HD fails... you just lose whatever files are on that hard drive. You can add/remove hard drives whenever you want.

It sorts the files on the HD in a weird way... but through the virtual drive it looks normal. So far it does everything I want... and it's "free", since it's in beta. (The end of the beta is supposed to be June, though.)