Adaptec 5xxx Series RAID + Edge/Fanout Expanders

jwoollis

Distinguished
Oct 11, 2007
Has anyone built their own professional/business-class NAS based on an Adaptec 5xxx Series RAID controller plus edge/fanout expanders, without using pre-built 1-4U rackmount 4-16 drive units?

I'm talking about connecting large numbers of drives, up to the maximum per controller connector (SFF-8087 or SFF-8088), using expanders with SFF-8087 and SFF-8088 connections to attach up to around 128 drives per controller connection.

Granted, there would be a performance bottleneck were you to use a significant number of these drives in a single RAID array; however, it should be possible to combine the drives in sets of around 16, giving up to 8 RAID arrays across 128 drives. Most of the time it should be possible to get respectable performance without issue, provided significant sustained access is not required.
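As a rough sanity check on that bottleneck, here is a minimal Python sketch; the ~100 MB/s per SATA drive and the single 4-lane 3Gb/s wide port behind the expander are assumptions for illustration, not measurements.

# Rough estimate of how quickly drives behind one expander
# outrun a single 4-lane 3Gb/s SAS wide port (assumed figures).

DRIVE_MBPS = 100          # assumed sustained throughput per SATA drive, MB/s
LANES = 4                 # SFF-8087/8088 wide port = 4 lanes
LANE_GBPS = 3             # 3Gb/s SAS/SATA generation
# 8b/10b encoding: a 3Gb/s line rate carries roughly 300 MB/s of data per lane
PORT_MBPS = LANES * LANE_GBPS * 1000 / 10

for drives in (4, 16, 64, 128):
    demand = drives * DRIVE_MBPS
    print(f"{drives:>3} drives: ~{demand} MB/s demand vs ~{PORT_MBPS:.0f} MB/s port limit"
          f" -> {'bottlenecked' if demand > PORT_MBPS else 'ok'}")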

I'm not suggesting such a substantially sized NAS need be created, but I would like to learn from those who have used expanders: what issues they have faced in using more than 4 drives per port, and how they built and configured the NAS.

Please note I am not interested in NAS built purely from motherboard-connected drives, NAS based on SFF-808x to SAS/SATA fanout cables, or performance-oriented RAID using costly SSDs exclusively.

For such a project I might set a budget of, say, around £1000 for a 12 or 16 drive system (excluding drives), consisting of:
-Basic No Frills Computer (excluding RAID controller) (£250)
-External Case and SATA/SAS Hotswap bays for 12 or 16 drives (£200)
-Adaptec 5xxx Series RAID Controller (£250)
-Edge or Fanout Expander (£300)

12-16 low-cost 1TB or 1.5TB SATA drives might be added for a further £600-£750 approximately, though one might consider adding drives in smaller multiples of 3-5, perhaps creating up to 3 RAID arrays of 5 drives each with one hot spare. These drive sets might be systematically replaced with larger drives as capacity needs rise and the cost of larger capacities drops.
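To put rough numbers on that, here is a small Python sketch assuming three 5-drive RAID 5 sets, each with one hot spare and 1TB drives; the layout and figures are just assumptions to show the arithmetic.

# Usable capacity of the proposed sets, assuming 5-drive RAID 5 arrays
# (one drive's worth of capacity lost to parity) plus one hot spare per set.

DRIVE_TB = 1.0      # assumed drive size
SETS = 3            # number of RAID 5 sets
DRIVES_PER_SET = 5  # drives in each array (excluding the hot spare)
HOT_SPARES = 1      # spare per set, contributes no capacity

usable_per_set = (DRIVES_PER_SET - 1) * DRIVE_TB   # RAID 5 loses one drive to parity
total_drives = SETS * (DRIVES_PER_SET + HOT_SPARES)
total_usable = SETS * usable_per_set

print(f"{total_drives} drives installed, ~{total_usable:.0f} TB usable"
      f" ({usable_per_set:.0f} TB per set)")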

I would typically expect such a solution to be used both as a general fileserver and as a form of network service provider (FTP, HTTP, SMTP, virtual machines, etc.), and with iSCSI to provide storage to other computers, potentially replacing internal drives on other computers for OS and database applications. Consider the potential to have an almost limitless number of OS images bootable onto a client PC, and to be able to easily back up and restore a cleanly built OS.

For reference, a typical rackmounted solution for 12-16 drives costs between £8,000 and £16,000 excluding the computer and RAID controller, with limited storage from small SAS drives.

If no one has tried, then perhaps Tom's Hardware might consider this a challenge.
 

wuzy

Distinguished
Jun 1, 2009
We're building a similar NAS to what you've described here.
Feel free to customise the specs of the platform according to your budget or needs, such as replacing the cheap Dell PERC 5/i with your proposed Adaptec 58xx controller.
 

jwoollis

Distinguished
Oct 11, 2007
Thanks for the comments, guys. I will look into these suggestions shortly; I'm a bit busy at the moment, but here's a quick comment!

Recent articles on next-generation controllers supporting 6Gb/s rather than 3Gb/s suggest that it might be beneficial to wait for a controller in this class to become available. I preferred Adaptec, though I have yet to find out why the Adaptec controllers underperformed in those tests.

I'm not entirely happy with those articles - I fail to understand why:
A> Expanders were not used, to allow more disks to be used per card within bandwidth tolerances
B> More capable cards in the Adaptec range were not selected
C> Windows software RAID 0 was used to stripe together two 8-SSD RAID 0 arrays. Seems a bit pointless...
-- Each controller card may, depending on model, support up to 128 or 256 disks
-- Potentially the driver and operating software of such controller cards might be configured to join multiple controller cards.

-- It is unrealistic to expect a system to ever require a perpetual sustained transfer rate in a live system - it would therefore be useful to know how such controller cards work when using large numbers of disks

James
 

wuzy

Distinguished
Jun 1, 2009
You want to build a NAS, right? Not a DAS or SAN for a workstation or multiple high-load servers.
NAS means you're limited to the bandwidth of a single GbE link, unless you link-aggregate (802.3ad) them. A striped (with parity) array made up of more than four drives will saturate GbE easily, so there's no point in chasing the maximum speed possible. For a NAS, data integrity and platform flexibility are more important.
 

jwoollis

Distinguished
Oct 11, 2007
Yes, I do want to build a NAS ... The NAS would provide some DAS- or SAN-type services such as iSCSI; however, I don't see any point in buying older technologies or limiting my options.
My NAS would be used with a number of small computers of various specifications, from netbooks to high-spec games machines or basic servers. The flexibility of the NAS allows for a number of computers to share it for all storage requirements, having little or no independent storage of their own. The computers would each have a discrete purpose: perhaps one for network security and internet access, one for web services, one for databases, one as a media centre, one for general use, one for training, one for virtual machines. I have a number of ancient PCs and a few newer computers which could be used for this purpose and be rehoused in low-profile cases to save space.

I expect to build a NAS with a collection of drives, starting small and expanding perhaps to around 16-24 drives eventually, but these drives will be split into sets of 4-6 disks, each set having different characteristics (a rough capacity/fault-tolerance sketch follows this list):
-RAID 0 sets for performance
-RAID 1 might be used but only in exceptional cases - e.g. 2xSSD RAID1.
-RAID 5 or 6 sets for security and backup
-small SSD sets for high performance bootable Filesystems using iSCSI
-SATA sets for inexpensive moderate performance
-SAS sets for moderate to high performance Filesystems for databases and Virtual Machines.
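As mentioned above the list, here is a minimal Python sketch of how usable capacity and drive-failure tolerance work out per RAID level for one small set; the 5-drive, 1TB figures are assumptions purely for illustration.

# Usable capacity and drive-failure tolerance per RAID level,
# for an assumed set of 5 drives of 1TB each.

def raid_summary(level, n, size_tb):
    if level == 0:
        return n * size_tb, 0          # striping: full capacity, no redundancy
    if level == 1:
        return size_tb, n - 1          # mirroring: one drive's capacity
    if level == 5:
        return (n - 1) * size_tb, 1    # one drive's worth of parity
    if level == 6:
        return (n - 2) * size_tb, 2    # two drives' worth of parity
    raise ValueError(f"unsupported RAID level {level}")

for level in (0, 1, 5, 6):
    usable, tolerance = raid_summary(level, n=5, size_tb=1.0)
    print(f"RAID {level}: {usable:.0f} TB usable, survives {tolerance} drive failure(s)")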

Most motherboards come with one GbE port, some mid-range boards have two, and you must also consider that each port may be operated through switches in full duplex, giving 1Gb/s upstream and 1Gb/s downstream per port.

Each set of disks, at least initially, would be sized based on performance to meet but not exceed the network limitations. For example, SSDs typically have a max read/write of 200 Mb/s, giving a maximum of 5 data drives plus additional drives for parity as required. SAS drives may have similar performance and limitations; SATA drives, being slower, might allow for 6-10 data drives per set.

Since each set falls within the bandwidth constraints of a single network connection, creative routing might allow distribution of loads, perhaps with half the sets using one port and half using the other. Bottlenecks may still appear when two or more sets on one port are accessed at maximum speed; however, this would not happen frequently, and since this is a home setup and I am a hobbyist/consultant, I can live with a solution like this...
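To illustrate that sort of routing, here is a small Python sketch that splits some hypothetical disk sets across two GbE ports and flags when the combined demand on a port exceeds the link; the set names and throughput figures are made-up assumptions.

# Map hypothetical disk sets to GbE ports and check per-port demand
# against a single link's practical limit (~100 MB/s without jumbo frames).

GBE_LIMIT_MBPS = 100  # assumed real-world GbE throughput per port, MB/s

# port -> list of (set name, assumed peak demand in MB/s)
assignments = {
    "eth0": [("ssd-boot", 90), ("sas-db", 80)],
    "eth1": [("sata-media", 60), ("sata-backup", 40)],
}

for port, sets in assignments.items():
    demand = sum(mbps for _, mbps in sets)
    status = "may bottleneck" if demand > GBE_LIMIT_MBPS else "within limit"
    print(f"{port}: {demand} MB/s combined demand -> {status}")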

It doesn't have to be perfect or crazy fast by today's standards; however, I might wish to grow it for future needs, and having the option to use 6Gb/s enterprise SSD/SAS drives, and perhaps one day to upgrade the NAS system unit with faster components and add more drives in external units, is helpful and extends the useful life of the system.

The initial system may be quite small, to fit the budget, but it will be very scalable and have a long potential life.
 

wuzy

Distinguished
Jun 1, 2009
You're confusing b (for bit) and B (for byte). 1Gbps for GbE = 125MBps maximum, theoretically. A little under 100MBps is the real-world limit without the use of jumbo frames, a number easily surpassed by RAID striping.
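A minimal Python sketch of that conversion, with an assumed ~75 MB/s per spinning drive, shows how quickly even a small stripe passes the link:

# Gigabit Ethernet in bytes, versus the throughput of a small striped array.

GBE_BITS_PER_S = 1_000_000_000
theoretical_mbps = GBE_BITS_PER_S / 8 / 1_000_000   # 125 MB/s before overheads
realworld_mbps = 100                                 # rough limit without jumbo frames

DRIVE_MBPS = 75   # assumed sustained rate of one SATA drive
for drives in (2, 4, 8):
    stripe = drives * DRIVE_MBPS
    print(f"{drives}-drive stripe ~{stripe} MB/s vs GbE ~{realworld_mbps} MB/s "
          f"(theoretical {theoretical_mbps:.0f} MB/s)")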

The level of performance you're demanding is on the level of a SAN passing through 8Gbps FC or 10GbE. That's my cup of coffee at work and not something I'd be able to afford myself at home.

As mentioned before, link-aggregating 2 (or more) GbE links is one way to get more than 1Gbps, but it requires an expensive switch. Still a lot cheaper than 10GbE or FC though.
The other method is running VMs and multiple GbE ports, where each VM gets its own GbE link to the switch. Total throughput from the speedy RAID array(s) is distributed among the VMs. The limitation with using VMs is that each VM is again limited to 1Gbps.
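Here is a small Python sketch of that second approach under assumed numbers: a fast array shared by several VMs, each VM capped by its own GbE link.

# Distribute an array's assumed throughput among VMs that each sit
# behind their own GbE link (per-VM cap ~100 MB/s real-world).

ARRAY_MBPS = 600      # assumed aggregate throughput of the RAID array(s)
GBE_MBPS = 100        # per-VM cap imposed by its own GbE link
VMS = 4

fair_share = ARRAY_MBPS / VMS
per_vm = min(fair_share, GBE_MBPS)          # each VM is capped by its link
total_delivered = per_vm * VMS

print(f"Each VM gets ~{per_vm:.0f} MB/s; {total_delivered:.0f} MB/s of "
      f"{ARRAY_MBPS} MB/s array throughput is actually deliverable")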
 

jwoollis

Distinguished
Oct 11, 2007
Yes, sorry, I do appear to have got some of the sums wrong; however...

Don't forget that I'm not using this for an enterprise solution, and just because the RAIDed sets in the NAS may be capable of far higher performance does not mean that I am interested in using every RAID set at once and/or at peak levels. I am far more interested in capacity (for storing large volumes of video, or large objects like databases and virtual machines and the backups that result) and integrity, without the significant cost that RAID 1 mirroring might require. Scalability is also important: being able to support more disks than a traditional standalone NAS matters, as it can be quite limiting to only ever be able to RAID together 3-5 disks and to have to pay £700-£1000, depending on model and features, with no expandability bar the USB/eSATA sockets meant for backup of the NAS.

I considered this project because I realise that building a NAS would be significantly cheaper than buying one, and that the features of a self-built NAS are less limited, allowing features such as iSCSI that are useful for sharing capacity with other machines without having to install disks in them. I'd have a number of diskless computers as clients, which makes the clients cheaper to build and operate and easier to replace. It would also make backups easier, since backups and certain other operations would be done internally within the NAS and not be subject to the same performance issues.

You haven't mentioned the bandwidth restrictions of controller cards, each connection on the card, and the PCI-E bus, version 1 having I think a maximum of 4Gbps and version 2 having 8Gbps. That of course would limit the number of disks that can be RAIDed together before the controller port or PCI-E bus itself became a bottleneck.

In the end I do realise there will be problems, but you must understand that I do not intend to throw all these disks together into one insanely huge RAID array for the sake of performance; I don't intend to paint go-faster stripes on the side to fool myself and others into thinking it is faster or slicker than everyone else's.

I will consider buying additional network ports to increase bandwidth should that become an issue. Perhaps bonding several network connections together might be necessary in certain cases; however, it might be possible to map disk sets to network connections, eventually having 1 or 2 connections per set, and use routing to dictate which clients or network services use which network connections in order to mitigate the bottleneck and performance issues. Or, as costs drop and technology improves, I might be able to buy a 10Gbps adapter, or in some cases consider an entirely different method of connection such as eSATA or USB3 for some PC-to-NAS connections.

Another thing that is important to me is that I will be using the power-saving features of these technologies to reduce operational costs, and if I organise the disk sets suitably I may also be able to periodically switch off certain sets completely when not required.

For me the design of the NAS is in a constant state of flux until I am ready to build it; there may be some technological or cost issues which mean I have to compromise on something now, but the NAS should perform well enough for my needs. The NAS will start out small but will grow not just in capacity but in capabilities as time passes and I learn how to get new facilities working, either internally or through a client.

If others want to arrange their NAS differently then so be it - as they say, "each to their own". I would certainly like to learn how others set theirs up, the reasons for it, what problems they experienced, and how they overcame those problems.

Many thanks for your feedback; it provokes thought, raises my awareness, and helps me to consider issues I might not have thought about previously.
 

wuzy

Distinguished
Jun 1, 2009
jwoollis said:
You haven't mentioned the bandwidth restrictions of controller cards, each connection on the card, and the PCI-E bus, version 1 having I think a maximum of 4Gbps and version 2 having 8Gbps. That of course would limit the number of disks that can be RAIDed together before the controller port or PCI-E bus itself became a bottleneck.
You're referring to PCIe x1, a single lane. All the current SAS RAID cards are at least x4, and most are x8 (4GBps duplex on PCIe v2.0).
PCIe is far from being a bottleneck for today's controllers, unlike PCI-X.
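For reference, a quick Python sketch of the per-lane arithmetic behind that correction, using the standard per-lane data rates (roughly 250 MB/s for PCIe 1.x and 500 MB/s for PCIe 2.0 after 8b/10b encoding overhead):

# Per-direction PCIe bandwidth by generation and lane count.
# PCIe 1.x runs at 2.5GT/s and PCIe 2.0 at 5GT/s, both with 8b/10b encoding,
# giving roughly 250 MB/s and 500 MB/s of data per lane respectively.

PER_LANE_MBPS = {"PCIe 1.x": 250, "PCIe 2.0": 500}

for gen, lane_mbps in PER_LANE_MBPS.items():
    for lanes in (1, 4, 8):
        print(f"{gen} x{lanes}: ~{lane_mbps * lanes / 1000:.1f} GB/s per direction")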

jwoollis said:
I expect to build a NAS with a collection of drives, starting small and expanding perhaps to around 16-24 drives eventually, but these drives will be split into sets of 4-6 disks, each set having different characteristics:
-RAID 0 sets for performance
-RAID 1 might be used but only in exceptional cases - e.g. 2xSSD RAID1.
-RAID 5 or 6 sets for security and backup
-small SSD sets for high performance bootable Filesystems using iSCSI
-SATA sets for inexpensive moderate performance
-SAS sets for moderate to high performance Filesystems for databases and Virtual Machines.
Looking at your requirements again and taking the comments above... I still haven't figured out what you're going to use the NAS for.
The keywords I've picked up so far are: database (you just need one SLC-based SSD for that), backup, web services? (server), media/fileserver, and VMs running on clients for training. Any more?

If you want mine for example, here's the list:
-Trixbox VoIP (Linux)
-Proxy, firewall and caching (Linux)
-TV tuner server (Windows)
-Active Directory and associated services (testing out stuff for work)
-typical file & streaming server
-DC++ hub
... that's all it runs for now, until I build the bigger NAS outlined in the post I linked to.
 

jwoollis

Distinguished
Oct 11, 2007
Okay, to clarify what I anticipate my solution will look like, this is a description of the NAS and the other PCs, though please understand this may take many months or even a few years for me to accomplish.

The NAS itself is expected to do the following in no particular order or priority:

External Services:
-Samba/NFS File Services
-iSCSI Services
-PXE and equivalent Network Boot/Installation Services
Internal Services:
-Backup and Integrity checks.

The NAS will at no time have to perform any other functions or services.

Any other features or services I require will be performed by separate computers; those computers will utilise space from the NAS as follows.

-Any new computer with an internal disk may optionally be installed with an operating system via PXE or an equivalent service, or run without an internal OS.
-Any new or existing computer (disk/diskless) may be booted via a preinstalled operating system image present on the NAS.
-The NAS will provide general file storage services to any and all computers as required.
-Any space over and above that required for the operating system will either be mounted via Samba/NFS/equivalent or be provided via iSCSI from the NAS, appearing to the system as a locally installed drive to be used as needed.
-Space allocated to a computer via iSCSI may be used for any or all of: operating system, application space, general storage, database space, storage of live or backup virtual machine images, or other.

Where possible and practical, applications may actually be installed on one computer, in space allocated from the NAS, and then mounted onto other computers - allowing one copy of software to service many computers while saving space and reducing redundancy.

Any system running on one client computer may be migrated to another comparable client computer in the case of hardware failure.

Most clients would be built in small low-profile cases and would either be inexpensive low-to-moderate-wattage systems for general applications, or, for a small proportion, high-performance 4/6/8-core and optionally dual/quad-socket systems such as AMD Opteron or Intel Xeon.

Clients each would have discrete functions such as:
-WAN/Internet Connectivity and Security functions such as Firewalls/DNS/DHCP/NAT
-WAN/Load Balancing Services where there is more than one web server.
-WAN/Internet Services such as Web Services including but not limited to http/https Services
-WAN/Mail services
-WAN/voip services
-WAN/media streaming services

-Media PC with integrated Tuners and Scheduling/Recording Functions
-Media Jukebox

-General Security including OS and Database User Authentication
-Database Services

and so on...

I expect to have to learn a lot as I go, so virtualisation may be used to prototype platforms for specific functions before they are eventually put onto dedicated hardware; this will allow me to run much of this on a small number of computers initially.

For my purposes, choosing a RAID controller which is capable of supporting 128 or 256 disks by using expanders means I can start very small with perhaps 4 drives and add a few disks periodically, either expanding one set or creating a new set.

I hope this is a sufficient description, though I am not sure; as I have said, this is all at a protracted planning stage for me until relevant components can be found and I can secure sufficient funds.