Hi-Rely RAIDFrame: External Disk-To-Disk Backup Via eSATA

Impressions

Keeping in mind that the RAIDFrame was designed for disk-to-disk backups, I tried to look at it from a hardware enthusiast's perspective. With that said, the whitebox-like design of the chassis is appreciated. Being able to open the case and see how simple the system is gives you the impression that the RAIDFrame is easily supportable by whoever buys one. In fact, you could probably build a device similar to the RAIDFrame yourself, but without the effort already invested in the design and assembly of the RAIDFrame and RAIDPac devices, getting something as portable and redundant as the RAIDPacs would be difficult.

Overall, I like the device. The one thing I would have liked even more is a tool-less design that makes assembly and disassembly of the unit easier. It only takes three screws to remove the top cover; however, it would be nice not to have to lift a screwdriver to get inside the case. Accessing the disk drives inside a RAIDPac, on the other hand, requires the removal of twelve screws (six on top and six on the bottom) that attach the RAIDPac's outer sleeve to the plastic inner core. Removing all twelve screws is a must if you need to replace a disk drive. Just be careful not to lose a screw or over-torque the screwdriver and strip the plastic threads holding the case together. Once the sleeve is off, you can access the drive cartridges from the front of the RAIDCore's plastic case. Also take care not to damage the SATA cables linking the drives to the external connector; I learned this the hard way when I stripped some insulation off one of the data cables while re-attaching the rear face plate on one RAIDPac.

Still, even with all the screws used to hold the all-metal RAIDFrame chassis and its components together, the overall construction of the device is sturdy. From the metal levers used to release the RAIDPacs from the chassis to the snugness of the RAIDFrame's railed slots, we're impressed with the manufacturing standards Highly Reliable follows in sourcing the parts for this product.

To remove a RAIDPac, you need to push the black release lever located above each of the docking bays on the RAIDFrame.

With regard to physical security, the RAIDFrame provides tiny eyelets on the levers used to remove the RAIDPacs from the chassis. The holes are small, and you need to provide your own locks to secure the RAIDPacs in their slots. Simple as it is, the design works: a lock through the eyelet keeps the lever from being pushed into the RAIDFrame, which prevents the RAIDPac from being removed. This is a sensible implementation, but you need to find locks of the right size, and as of this writing they are not available through Highly Reliable.

Somewhat related to security, a feature that I wasn't too keen on was the use of a "key" to remove the drive trays from the RAIDPac's plastic shell. The provided key is a cylindrical pin that inserts into the front panel of the exposed RAIDCore. Once inserted, the pin releases the locking mechanism that holds the disk cartridges in place. I'm not sure how much additional security this unlocking measure provides, especially when a jumbo-sized paperclip is all you need to get a drive out. After all, you just had to remove twelve screws to get this far.

To access the hard drives inside the RAIDPac, you need to insert the provided "key"/pin into the designated hole and push up. In a pinch, I found a paper clip worked just as well.

One small drawback that I ran into when I first started up the RAIDFrame was a power-related problem caused by jostling during shipping. Every time I inserted the second RAIDPac, the first RAIDPac would shut off. As it turns out, the Molex power connectors that come with the included power supplies worked a little loose as the system was shipped from Reno to Los Angeles. After a quick email exchange with tech support, it was a simple matter to unscrew and lift the top cover off of the RAIDFrame and firmly re-attach the loosened connector to the plastic receptacle that sits behind the RAIDPac bays. For a do-it-yourself type of user, this shouldn't be a big deal. For a tech novice, or someone who doesn't expect to get their hands a little dirty after buying a storage system, it could be a problem. Still, since re-seating the connector, I haven't had to re-open the RAIDFrame enclosure.

Molex and SATA connections plug into the RAIDPac docking boards from within the rear compartment of the RAIDFrame. Though the connections are shown in place here, we had one Molex connector that came loose during shipping.

In the time that I had the RAIDFrame in my possession, I also experienced a drive failure. One of the Seagate Barracuda drives included with the system stopped working at start-up. The drive would spin up, but its cartridge's indicator light would turn red and the RAIDPac would beep continuously. The good news is that the test data was still accessible on the RAIDPac's remaining two drives, thanks to its RAID 5 configuration. After several attempts to restart, reinitialize, and rearrange the drives, I confirmed the drive was not coming back by any simple means. Suspecting that the bad drive could be one of the 1TB drives that caused so much commotion earlier this year, I called Seagate support, and they directed me to the serial number verification page. After punching in the drive's serial number, I was forwarded to the firmware update page for this particular model (a quick way to pull model and serial numbers for the drives your system can still see is sketched below). After downloading the .iso and booting the test system from the burned CD, the firmware updater still could not find the drive. To keep things going, Highly Reliable sent me an entirely new RAIDPac.

During the testing process, the center hard drive in RAIDPac #2 failed to come up.
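
Since the remaining two drives in that RAIDPac were the same Barracuda model, it seemed worth checking their serial numbers against Seagate's verification page as well. The snippet below is a hypothetical helper for doing that legwork, not part of the Hi-Rely or Seagate tooling: it assumes a Windows test box with the built-in wmic utility and simply lists the model and serial number of every physical disk the system can see. A drive the system no longer detects, like the failed Barracuda, won't appear here; its serial number has to come off the drive label.

    # Hypothetical helper (not part of the Hi-Rely or Seagate tooling): list the
    # model and serial number of every physical disk Windows can see, so the
    # numbers can be checked against Seagate's serial number verification page.
    import csv
    import subprocess

    def list_physical_disks():
        """Return (model, serial) pairs reported by Windows WMI via wmic."""
        # /format:csv keeps wmic's output easy to parse.
        output = subprocess.check_output(
            ["wmic", "diskdrive", "get", "Model,SerialNumber", "/format:csv"],
            text=True,
        )
        rows = [line.strip() for line in output.splitlines() if line.strip()]
        disks = []
        for row in csv.DictReader(rows):
            model = (row.get("Model") or "").strip()
            serial = (row.get("SerialNumber") or "").strip()
            if model:
                disks.append((model, serial))
        return disks

    if __name__ == "__main__":
        for model, serial in list_physical_disks():
            print(model, serial)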

As mentioned earlier, one of the pluses I give the RAIDFrame Backup System is the Field Service Access feature, which lets you take any RAIDPac and plug it directly into your computer just as you would a second hard drive. I went ahead and shut down my test machine, disconnected all drives except my boot drive, removed the metal sleeve and rear panel from one of the RAIDPacs, and plugged an available Molex power connector and SATA connector into the exposed RAIDCore. After booting up the test system, the drives in the RAIDCore spun up, beeped, and were accessible via Windows Explorer. Since the RAIDPac was configured for this machine, the assigned drive letter came up according to the HRDM2.ini file. As a quick confirmation, I created a directory on the newly attached drive and added a text file to verify write functionality (a minimal script for the same check is sketched below).

One of the RAIDPacs was set up and connected directly to our test PC’s motherboard. Field Service Access (FSA) would be a handy feature to have if you don’t have access to the rack-mountable RAIDFrame.
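
For anyone who would rather script the sanity check described above, here is a minimal sketch: it creates a directory on the attached RAIDPac volume, writes a small text file into it, and reads the file back. The drive letter and folder names are assumptions for illustration only; substitute whatever letter HRDM2.ini assigns on your machine.

    # Minimal sketch of the write check described above. The drive letter and
    # folder name are hypothetical; substitute the letter assigned by HRDM2.ini.
    import os
    from datetime import datetime

    RAIDPAC_ROOT = "F:\\"  # assumed drive letter of the attached RAIDPac
    TEST_DIR = os.path.join(RAIDPAC_ROOT, "fsa_write_test")

    def verify_write_access():
        """Create a folder and a timestamped text file to confirm the volume is writable."""
        os.makedirs(TEST_DIR, exist_ok=True)
        marker = os.path.join(TEST_DIR, "write_test.txt")
        with open(marker, "w") as f:
            f.write("RAIDPac write test at %s\n" % datetime.now().isoformat())
        # Read the file back so a silently failed write doesn't go unnoticed.
        with open(marker) as f:
            print(f.read().strip())

    if __name__ == "__main__":
        verify_write_access()
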
  • paxiam
    A good solid well written article. Keep up the good work.
  • yyrkoon
    Perhaps I missed something, But I am curious as to how they got 3 drives per port on the port multiplier. I am familiar with port multipliers, but not the technology they use here apparently.

    Also, on a side note, I think this device would be more useful if it allowed you to run RAID 10, but with two out ports. Perhaps even multilane would be in order in this case ?
  • yyrkoon
    Let me clarify what I said above. I feel the device would be "better" if they allowed it to be configured to run RAID 10 using two RAIDPacks.
  • jeffunit
    Nice hardware, but a bit spendy. My software based raid 5 array has higher performance over gigabit ethernet, uses commodity parts, and is much cheaper. I am sure my array costs well under $1000 with 8 750gb drives. Since I am using supermicro hot swap sata drive cages, all I have to do is press a button and the drive comes out. 4 screws, and the drive is removed. Takes well under 5 minutes to remove a drive and put in a new one, and it just takes a philips screwdriver.

    And why are there two 450w power supplies? Even if the box is full, that is 30 watts per drive, which is a crazy amount of power. If they stagger the drive spin up, they would never need more than 225w tops.
  • climber
    This should be a 6Gb/s SATA 3.0 design with port multipliers. If you wanted to back up serious amounts of data to this thing you'd never finish, it would always be in backup mode.
  • micky_lund
    wow...this is really cheap stuff
  • ZakTheEvil
    Highly Reliable Systems? The company name is Highly Reliable Systems???

  • so if one of the three drives should fail inside a RAIDpac, you have to eject the whole RAIDpac to replace that drive?
    That takes the RAID offline ... a RAID 5 should allow hot-swapping a failed disk.

    And two PSUs but not redundant? ... doesn't seem very HIGHLY RELIABLE
  • thehighrelyguy
    Although some of these points were mentioned in the article, they weren't on the front page, so a few readers may have missed these important features. They may also answer some of your questions.

    1. Each RAID pack has an integrated RAID 5 / RAID 0 controller. This means the RAIDPACs can operate completely standalone without the addition of any special controllers or driver software. Thus, left with nothing but a RAIDPac, you could connect it via eSATA to your motherboard and restore the data.

    2. The RAIDpacs use standard SATA hard drives. At present, 2.0TB drives are available making the available capacities 4TB in RAID 5 and 6TB in RAID 0 per RAID pack.

    3. There is also a 1 bay RAIDFrame available which can use RAIDPacs interchangeably with the 5 Bay if necessary. The one bay has both ESATA and USB connections for portability and ease of connection. The one bay is substantially cheaper than the 5 bay.

    4. The dual ATX power supplies were chosen over specialty redundant power supplies because they are the most widely available power supplies in the world, making field service for this unit in the dead of the night practical. The 1 bay has this same feature. By using two instead of one, if a power supply dies you're not dead in the water; the unit is still usable, although some bays may not function.

  • thehighrelyguy
    yyrkoon: "Perhaps I missed something, But I am curious as to how they got 3 drives per port on the port multiplier." The RAIDFrame is not a single RAID system. The RAIDFrame 5 bay is 5 RAID systems. Each hot-swappable RAIDPac is a volume. Port multipliers allow up to 5 volumes (drives) on one SATA channel. Hence one SATA channel, 5 RAIDPacs.