Hi-Rely RAIDFrame: External Disk-To-Disk Backup Via eSATA

Impressions

Keeping in mind that the RAIDFrame was designed for disk-to-disk backups, I tried to look at it from a hardware enthusiast’s perspective. With that said, the whitebox-like design of the chassis is appreciated. Opening the case and seeing how simple the system is gives you the impression that the RAIDFrame is easily supportable by whoever buys one. In fact, you could probably build a device similar to the RAIDFrame yourself, but given the effort already invested in the design and assembly of the RAIDFrame and RAIDPacs, getting something as portable and redundant as the RAIDPacs would be difficult.

Overall, I like the device. The one thing I would have liked even more is a tool-less design that makes assembly and disassembly of the unit easier. It only takes three screws to remove the top cover; however, it would be nice not to have to lift a screwdriver to get inside the case. Accessing the disk drives inside a RAIDPac, on the other hand, requires the removal of twelve screws (six on top and six on the bottom) that attach the RAIDPac’s outer sleeve to the plastic inner core. Removing all twelve screws is a must if you need to replace a disk drive. Simply be careful not to lose a screw or over-torque the screwdriver and strip the plastic threads holding the case in place. Once the sleeve is removed, you can access the drive cartridges from the front of the RAIDCore’s plastic case. Also take care not to damage the SATA cable linking the drives to the external connector; I realized this as I stripped some insulation off of one of the data cables while re-attaching the rear face plate on one RAIDPac.

Still, for all the screws used to hold the all-metal RAIDFrame chassis and its components together, the device’s overall construction is sturdy. From the metal levers used to release the RAIDPacs from the chassis to the snugness of the RAIDFrame’s railed slots, we're impressed with the manufacturing standards Highly Reliable follows in sourcing the parts for this product.

To remove a RAIDPac, you need to push the black release lever located above each of the docking bays on the RAIDFrame.

With regard to physical security, the RAIDFrame provides tiny eyelets on the levers used to remove the RAIDPacs from the chassis. The holes are small, and you need to provide your own locks to secure the RAIDPacs in their slots. A lock threaded through the eyelet prevents removal by blocking the lever from being pushed into the RAIDFrame. This is a simple, sensible implementation, but you need to find the right-sized locks, and as of this writing, these are not available through Highly Reliable.

Somewhat related to security, a feature that I wasn’t too keen on was the use of a “key” to remove the drive trays from the RAIDPac’s plastic shell. The provided key is a cylindrical pin that inserts into the front panel of the exposed RAIDCore; once inserted, the pin releases the locking mechanism that holds the disk cartridges in place. I’m not sure how much additional security this unlocking measure can provide when a jumbo-sized paperclip will get the drive out just as easily. After all, you just had to remove twelve screws to get this far.

In order to access the hard drives inside the RAIDPac, you need to insert the provided “key/pin” into the designated hole and push up. When needed, I found a paper clip was just as useful.

One small drawback that I ran into when I first started up the RAIDFrame was a power-related problem caused by shaking during shipping. Every time I inserted the second RAIDPac, the first RAIDPac would shut off. As it turns out, the Molex power connectors on the included power supplies came a little loose as the system traveled from Reno to Los Angeles. After a quick email exchange with tech support, it was a simple matter to unscrew and lift off the RAIDFrame’s top cover and firmly re-seat the loosened connector in the plastic receptacle that sits behind the RAIDPac bays. For a do-it-yourself type of user, this shouldn’t be a big deal. For a tech novice, or someone who doesn’t expect to get their hands a little dirty after buying a storage system, it could be a problem. Still, once the connector was re-seated, I haven’t had to re-open the RAIDFrame enclosure.

Molex and SATA connections plugged into the RAIDPac docking boards from within the rear compartment of the RAIDFrame. Though the connections are in place, we had one Molex connector that was loosened during shipping.

In the time that I had the RAIDFrame in my possession, I also experienced a drive failure. One of the Seagate Barracuda drives included with a RAIDPac stopped working at start-up. The drive would spin up, but its cartridge’s indicator light would go red as the RAIDPac beeped continuously. The good news is that the test data was still accessible on the RAIDPac’s remaining two drives, thanks to its RAID 5 configuration. After several attempts to restart, reinitialize, and rearrange the drives, I confirmed the drive was not coming back by any simple means. Suspecting that the bad drive could be one of the 1TB models that caused so much commotion earlier this year, I called Seagate support, and they directed me to the serial number verification page. After punching in the drive’s serial number, I was forwarded to the firmware update page for this particular model. Even after downloading the .iso and booting the test system from the burned CD, the firmware updater still could not find the drive. To keep things going, Highly Reliable sent me an entirely new RAIDPac.

During the testing process, the center hard drive in RAIDPac #2 failed to come up.
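
If you’d rather script this sort of drive triage than watch for red indicator lights, a minimal health-check sketch along these lines works. It assumes smartmontools is installed and that the individual drives are visible to the host (they may not be when sitting behind a RAID controller); the device paths are illustrative:

    import subprocess

    # Hypothetical device paths; adjust to however the drives enumerate on your host.
    DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]

    def smart_health(device):
        """Return the drive's overall SMART health line, or a short error note."""
        try:
            result = subprocess.run(["smartctl", "-H", device],
                                    capture_output=True, text=True, timeout=30)
        except FileNotFoundError:
            return "smartctl not found (install smartmontools)"
        for line in result.stdout.splitlines():
            # ATA drives report "overall-health"; SCSI drives report "SMART Health Status".
            if "overall-health" in line or "SMART Health Status" in line:
                return line.strip()
        return "no health status reported (drive may not be responding)"

    for drive in DRIVES:
        print(drive, "->", smart_health(drive))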

As mentioned earlier, one of the nice pluses I give the RAIDFrame Backup System is the Field Service Access (FSA) feature, which lets you take any RAIDPac and plug it directly into your computer just as you would a second hard drive. I went ahead and shut down my test machine, disconnected all drives except for my boot drive, removed the metal sleeve and rear panel from one of the RAIDPacs, and plugged an available Molex power connector and SATA connector into the exposed RAIDCore. After booting up the test system, the drives in the RAIDCore spun up, beeped, and were accessible via Windows Explorer. Since the RAIDPac was configured for this machine, the assigned drive letter came up according to the HRDM2.ini file. As a quick confirmation, I was able to create a directory on the newly attached drive and add a text file to confirm write functionality.

One of the RAIDPacs was set up and connected directly to our test PC’s motherboard. Field Service Access (FSA) would be a handy feature to have if you don’t have access to the rack-mountable RAIDFrame.
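
For anyone who wants to script that last sanity check instead of clicking through Explorer, a quick sketch like the following does the job. The drive letter is a placeholder; substitute whatever letter the HRDM2.ini mapping assigns on your machine:

    from pathlib import Path

    # Placeholder drive letter; use the one HRDM2.ini assigned on your machine.
    test_dir = Path("F:/") / "fsa_write_test"
    test_file = test_dir / "probe.txt"

    try:
        test_dir.mkdir(exist_ok=True)          # create a directory on the drive
        test_file.write_text("write test")     # add a small text file
        ok = test_file.read_text() == "write test"
        print("writable" if ok else "read-back mismatch")
    except OSError as err:
        print("write test failed:", err)
    finally:
        test_file.unlink(missing_ok=True)      # best-effort cleanup
        if test_dir.exists():
            test_dir.rmdir()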

Comments from the forums
  • paxiam
    A good, solid, well-written article. Keep up the good work.
  • yyrkoon
    Perhaps I missed something, but I am curious as to how they got three drives per port on the port multiplier. I am familiar with port multipliers, but apparently not with the technology they use here.

    Also, on a side note, I think this device would be more useful if it allowed you to run RAID 10, but with two out ports. Perhaps even multilane would be in order in this case?
  • yyrkoon
    Let me clarify what I said above. I feel the device would be "better" if they allowed it to be configured to run RAID 10 using two RAIDPacs.
  • jeffunit
    Nice hardware, but a bit spendy. My software-based RAID 5 array has higher performance over Gigabit Ethernet, uses commodity parts, and is much cheaper. I am sure my array costs well under $1,000 with eight 750GB drives. Since I am using Supermicro hot-swap SATA drive cages, all I have to do is press a button and the drive comes out. Four screws, and the drive is removed. It takes well under five minutes to remove a drive and put in a new one, and it just takes a Phillips screwdriver.

    And why are there two 450W power supplies? Even if the box is full, that is 30 watts per drive, which is a crazy amount of power. If they staggered the drive spin-up, they would never need more than 225W tops.
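
    For anyone who wants to sanity-check that power math, here is a back-of-the-envelope sketch. The per-drive wattages are typical 3.5" SATA estimates, not measured figures from this unit:

        # Rough power budget for a fully loaded 5-bay RAIDFrame (estimates only).
        DRIVES = 5 * 3        # 5 RAIDPacs x 3 drives each
        SPINUP_W = 25.0       # assumed peak draw per drive while spinning up
        ACTIVE_W = 10.0       # assumed steady-state draw per drive

        all_at_once = DRIVES * SPINUP_W                    # worst case: every drive at once
        staggered = SPINUP_W + (DRIVES - 1) * ACTIVE_W     # one drive spins up at a time

        print("simultaneous spin-up:", all_at_once, "W")
        print("staggered spin-up:", staggered, "W")
        print("per-drive share of one 450 W supply:", round(450 / DRIVES), "W")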
  • climber
    This should be a 6Gb/s SATA 3.0 design with port multipliers. If you wanted to back up serious amounts of data to this thing, you'd never finish; it would always be in backup mode.
  • micky_lund
    wow...this is really cheap stuff
  • ZakTheEvil
    Highly Reliable Systems? The company name is Highly Reliable Systems???
  • Anonymous
    So if one of the three drives should fail inside a RAIDPac, you have to eject the whole RAIDPac to replace that drive? That takes the RAID offline ... a RAID 5 should allow hot-swapping a failed disk.

    And two PSUs, but not redundant? ... doesn't seem very HIGHLY RELIABLE.
  • thehighrelyguy
    Although some of these points were mentioned in the article, they weren't on the front page, so a few readers may have missed them. These important features may answer some of your questions.

    1. Each RAIDPac has an integrated RAID 5 / RAID 0 controller. This means the RAIDPacs can operate completely standalone, without the addition of any special controllers or driver software. Thus, left with nothing but a RAIDPac, you could connect it via eSATA to your motherboard and restore the data.

    2. The RAIDPacs use standard SATA hard drives. At present, 2.0TB drives are available, making the available capacities 4TB in RAID 5 and 6TB in RAID 0 per RAIDPac (the arithmetic is sketched just below this list).

    3. There is also a 1-bay RAIDFrame available, which can use RAIDPacs interchangeably with the 5-bay if necessary. The 1-bay has both eSATA and USB connections for portability and ease of connection, and it is substantially cheaper than the 5-bay.

    4. The dual ATX power supplies were chosen over specialty redundant power supplies because they are the most widely available power supplies in the world, making field service for this unit in the dead of night practical. The 1-bay has this same feature. And by using two instead of one, if a power supply dies, you're not dead; the unit is still usable, although some bays may not function.
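
    The capacity arithmetic in point 2 above is easy to verify; here is a quick sketch, assuming three drives per RAIDPac:

        # Usable capacity for a 3-drive RAIDPac at a given drive size.
        drive_tb = 2.0                    # e.g. today's 2.0TB SATA drives
        n = 3                             # drives per RAIDPac

        raid5 = (n - 1) * drive_tb        # parity costs one drive's worth of space
        raid0 = n * drive_tb              # striping uses every drive, no redundancy

        print("RAID 5:", raid5, "TB usable; survives one drive failure")
        print("RAID 0:", raid0, "TB usable; no fault tolerance")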
  • thehighrelyguy
    yyrkoon: Perhaps I missed something, but I am curious as to how they got three drives per port on the port multiplier.

    The RAIDFrame is not a single RAID system; the RAIDFrame 5-bay is five RAID systems. Each hot-swappable RAIDPac is a volume, and port multipliers allow up to five volumes (drives) on one SATA channel. Hence one SATA channel, five RAIDPacs.
  • thehighrelyguy
    jeffunit: Nice hardware, but a bit spendy. My software-based RAID 5 array has higher performance over Gigabit Ethernet, uses commodity parts, and is much cheaper. I am sure my array costs well under $1,000 with eight 750GB drives. Since I am using Supermicro hot-swap SATA drive cages, all I have to do is press a button and the drive comes out. Four screws, and the drive is removed.

    The RAIDFrame 5-bay is not a single RAID system. It is five RAID systems. Each RAIDPac is self-contained and needs no extra hardware or software to connect to another computer system's SATA port in emergency situations. Standalone drives from a RAID system like yours are not this way; they still require your RAID system's hardware and software in order to be accepted by a host.
  • jeffunit
    thehighrelyguy: Standalone drives from a RAID system like yours are not this way; they still require your RAID system's hardware and software in order to be accepted by a host.

    Perhaps you misunderstand. I was referring to my published article on Tom's Hardware: http://www.tomshardware.com/reviews/build-file-server,2358.html In it, I build a *software* RAID system. If the computer takes a dive, you can put the drives in another box, install Linux and the RAID software, and you have all of your data. It takes about an hour.

    And my software solution does 204MB/s write and 320MB/s read on the local disks, which is roughly three times faster than this $4,000+ system. Also, I can saturate Gigabit Ethernet with reads or writes from remote systems, which is faster than the RAIDFrame, and it can be hooked up to many computers, versus one with the RAIDFrame's eSATA port. All for well under $1,000.

    Doing RAID 5 on three disks is really silly, as your overhead is 1/3. Also, my system can serve files to Linux, Windows, or Mac, all at the same time, unlike this system.
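
    To put those throughput figures in perspective, here is a rough transfer-time sketch. The link speeds are nominal line rates (sustained real-world rates will be lower), and the 4TB payload is just an example:

        # How long a 4TB payload takes at various nominal rates (estimates only).
        payload_mb = 4.0 * 1_000_000      # 4 TB expressed in MB (decimal units)

        rates_mb_s = {
            "Gigabit Ethernet (~118 MB/s wire limit)": 118,
            "jeffunit's local RAID write (204 MB/s)": 204,
            "SATA II / eSATA (300 MB/s nominal)": 300,
        }

        for link, rate in rates_mb_s.items():
            print(link, "->", round(payload_mb / rate / 3600, 1), "hours")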
  • thehighrelyguy
    jeffunit: I build a *software* RAID system. If the computer takes a dive, you can put the drives in another box, install Linux and the RAID software, and you have all of your data. It takes about an hour.

    Jeff, I do understand. However, I'm not sure you understand what I'm saying. As you point out, you put your drives in another box; that box, I assume, is a computer, and you install Linux on it, consuming that machine. This takes over a whole computer in order to read your disks. Our RAIDPacs simply plug into any computer motherboard with SATA ports, like any other SATA hard drive. There is no software to install whatsoever, nor any other hardware required. In other words, a RAIDPac is just another physical hard drive as far as a computer is concerned. Our system is DAS (Direct Attached Storage); your solution is a NAS (Network Attached Storage). You're comparing apples (not the computers) and oranges. BTW, I do believe your solution is fine if someone requires a NAS. I've been a big Linux fan since 1992; see my early Linux promo products (http://linux.techass.com/products/).
  • sublifer
    thehighrelyguy: http://linux.techass.com/products/

    Revive Phoenix! Now is a really good time for it :) You've got a lot more to work with these days.
  • jeffunit
    thehighrelyguy: Jeff, I do understand. However, I'm not sure you understand what I'm saying.

    I understand just fine. I know you have a DAS system. However, when my NAS system is faster, cheaper, and more versatile, the DAS system isn't so desirable. You can get a motherboard, CPU, memory, case, and power supply for $200-$300. Just add the hard drives or cages, and you are good to go.

    I can use RAID 5 or RAID 6. I can use RAID 10. I can support multiple operating systems. I can add as many drives as I want. I can build multiple arrays. If I wanted to, I could team my Gigabit network ports or use a 10Gb/s card. I can hot-swap my drives.

    If your system were faster or cheaper, then it would offer something significant. Why is your DAS so slow? It should be able to saturate a SATA connector, and for so much money you should have SATA III and be able to saturate that too. Can you hot swap? For $4K for a starter configuration, I expect hot-swap power supplies and a whole lot more flexibility and performance.
  • thehighrelyguy
    jeffunit: I understand just fine. I know you have a DAS system. However, when my NAS system is faster, cheaper, and more versatile, the DAS system isn't so desirable...


    Jeff,

    Highly Reliable Systems (http://www.high-rely.com) makes a NAS also (http://www.high-rely.com/HR3/includes/BNAS/BNAS-HRS201.php). I'm quite aware of the benefits and drawbacks of NAS units. In fact, here is our own whitepaper explaining some of them (6th article down: http://www.high-rely.com/HR3/includes/whitepapers.php). And yes, each RAIDPac is hot-pluggable. I assume your NAS is also, although I wouldn't consider plugging and unplugging the AC cord and network connector to be in the same league.

    And that's my point. Yes, your NAS is faster, and so is our NAS, but again, you're trying to compare two totally different things. Comparing your NAS to the RAIDFrame is like comparing a sports car to a truck.

    Also, our system is FIVE independent RAID 5 systems; yours is one. You would have to build four more and then add up the costs. Yes, still less expensive, but again, not the same thing.

    Our RAIDPacs are designed to be swapped by office workers, not IT people, so a lot of design effort went into making RAIDPacs simple to change. RAIDPacs have a special water-resistant transport case that has been certified by Iron Mountain, and RAIDPacs are used by film crews to move footage from the field to the studio.

    Does your NAS attach directly to someone's $12,000 Windows 2008 server and fall under the security provisions of that server's operating system and configuration, without any administration changes or added security risks? Not likely; it has to have a network connection and be configured as a Samba network share, or maybe iSCSI if possible. Does your NAS store 6TB of data, complete with access hardware and no software drivers required, in a rugged, portable, hot-pluggable, interchangeable package slightly larger than three 3.5" drives? Not likely. If it does, I can almost guarantee you the performance will not be what you're experiencing currently, and the cost will definitely be more.

    I do believe you understand the difference between DAS and NAS, but I don't believe you understand who our customers are. They are the US military, medical centers, universities, city governments, large companies with large data centers, and small companies with large backup requirements and heavy security needs.

    They also want a company that will warranty their system for up to 5 years. They want a technical support department that is responsive and available.

    The RAIDFrame has been shipping for almost two years, and the hardware platform has remained stable. Try buying the exact same consumer-grade motherboard just six months after you bought the last one; good luck finding it. When companies invest heavily in their IT infrastructure, they don't want their replacement parts to be unavailable in six months, forcing them to upgrade or migrate.

    All of these things, whether they are part of the hardware or not, are costs and benefits that our customers need and buy with our systems. I'm sorry if the RAIDFrame doesn't meet your expectations, but I think your priorities are different.
  • Anonymous
    “If it’s not off-site, it’s not a backup.” I assume you don’t tuck your full-blown Linux box with RAID arrays under your arm and head out the door every night? Your NAS is a great storage system and provides a great local copy of your data, which protects against lots of types of failure, but it isn’t a backup unless the data is taken off-site regularly. RAIDPacs are intended for daily plug/unplug and transportable backup scenarios. They aren’t intended for general-purpose storage (although they can be used for that). Hope this helps visualize what we’re doing.
  • zeki
    Is the price for real??? It's so expensive. Like he stated, this is for people who have no clue what they are doing (office people) and not for IT people. It's sad to see our tax money go to buy these for our universities and military when there are cheaper alternatives with more features and more space for the price.