
Anyone who has ever dealt with server and network administration in a corporate environment has encountered the need to smoothly add storage capacity using existing infrastructure. Although the resulting solutions usually do the job, they tend to be rather expensive and are very rarely flexible.
19" systems often don't offer enough space to accommodate additional hard drives. That leaves only one alternative: hooking storage modules up to the servers in a 19" rack via SCSI or Fibre Channel connections. However, it is not always advisable to mix critical file-server functions with other jobs - even tasks as simple as storing data.
Big tower cases could hold additional hard drives, and even extra controllers could be added to the system. Again, however, this would entail the mixture of server and storage tasks mentioned above, plus the work involved in the installation.
For many companies, the ideal storage solution needs to offer great flexibility. It should be easy to implement, usable in several locations and in conjunction with various systems, and, finally, scalable. Of course, performance should not suffer appreciably either. The answer to this quest for the holy grail of storage is called iSCSI - Internet SCSI. It embeds the SCSI protocol in TCP/IP packets, making it possible to run the most common enterprise storage interface over existing network infrastructure. It also enables a consolidation of storage subsystems.

Source: Adaptec
The diagram above details the working principle behind iSCSI: the storage systems use the network infrastructure independently of the servers. The consolidation of storage subsystems mentioned above simply means that a single storage system can be accessed by several servers with minimal management complexity. Alternatively, unused storage space in existing systems can be offered to the network via iSCSI.
The advantages of this approach are many, and mostly pretty obvious. Many businesses already have an efficient network infrastructure in place, usually consisting of mature and reliable technology such as Ethernet. No new technologies need to be introduced, tested and validated to incorporate iSCSI systems or other systems such as SANs (Storage Area Networks). Additionally, there is usually no need to hire expensive specialists for the implementation.
This means any network administrator can manage iSCSI clients and servers with very little training, since they integrate into the existing environment and rely on well-established technology. Also, iSCSI can be considered a high-availability solution, since iSCSI servers can be connected to several switches or network segments. Finally, the architecture is scalable by design, thanks to Ethernet switching technology.
In principle, an iSCSI server (target) can be realized either in software or in hardware. However, due to the high CPU load of software implementations, it works best with dedicated hardware. The main workload of the iSCSI server consists of embedding SCSI packets in TCP/IP packets, as mentioned earlier, which must be done in real time. A software solution incorporates the CPU(s) into the iSCSI system, while a hardware solution can rely on a TCP/IP-offload unit as well as a SCSI-offload unit.
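The embedding work described above happens at the level of iSCSI PDUs (protocol data units), each of which begins with a fixed 48-byte Basic Header Segment. The sketch below packs only the fields common to all PDUs and is purely illustrative, not a working initiator; the opcode value 0x01 denotes a SCSI Command.

```python
import struct

def build_bhs(opcode: int, data_segment_length: int,
              initiator_task_tag: int, lun: int = 0) -> bytes:
    """Pack a minimal 48-byte iSCSI Basic Header Segment.

    Only the fields common to all PDUs are filled in; the
    opcode-specific flag bytes are left at zero. Illustrative
    sketch only, not a complete implementation."""
    bhs = bytearray(48)
    bhs[0] = opcode & 0x3F                             # opcode (I-bit cleared)
    bhs[4] = 0                                         # TotalAHSLength (no AHS)
    bhs[5:8] = data_segment_length.to_bytes(3, "big")  # DataSegmentLength
    struct.pack_into(">Q", bhs, 8, lun)                # LUN / opcode-specific
    struct.pack_into(">I", bhs, 16, initiator_task_tag)
    return bytes(bhs)

# Header for a SCSI Command PDU carrying 512 bytes of data
hdr = build_bhs(opcode=0x01, data_segment_length=512, initiator_task_tag=1)
assert len(hdr) == 48
```

A software target or initiator has to build and parse one such header, plus the TCP/IP framing around it, for every PDU - which is exactly the work a hardware solution offloads.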
Through an iSCSI client (initiator), the storage resources on the iSCSI server can be integrated into the client system as a device that behaves like a local drive. The great advantage over a classic network share lies in security: iSCSI puts great emphasis on proper authentication, and the iSCSI packets can be transported through the network encrypted.
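The authentication mentioned above is typically negotiated during the iSCSI login phase using CHAP, where the initiator proves knowledge of a shared secret by returning an MD5 digest over an identifier, the secret and a challenge sent by the target. A minimal sketch (the secret and challenge values below are invented for illustration):

```python
import hashlib

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """Compute a CHAP response (RFC 1994 style, as used by iSCSI login):
    MD5 over the one-byte identifier, the shared secret and the
    challenge sent by the peer."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Placeholder values - not credentials from any real setup
resp = chap_response(0x27, b"shared-secret", bytes.fromhex("a1b2c3d4"))
print(resp.hex())  # 16-byte digest the initiator returns to the target
```

Because only the digest crosses the wire, the secret itself is never transmitted - which is why a simple network share offers nothing comparable.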
Of course, the attainable performance will be slightly lower than that of a local SCSI system due to the network's higher latency. Still, today's networks, with bandwidths of up to 1 Gbit/s (125 MB/s), offer plenty of capacity, much of which typically goes unused.
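The raw line rate overstates what iSCSI traffic can actually use, since every TCP segment carries Ethernet, IP and TCP overhead. A rough ceiling, assuming a standard 1500-byte MTU:

```python
# Rough ceiling for TCP payload throughput on Gigabit Ethernet,
# ignoring iSCSI's own header overhead and network latency.
LINK_BPS = 1_000_000_000          # Gigabit Ethernet line rate
MSS = 1460                        # TCP payload per 1500-byte MTU frame
# Per-frame overhead: 20 B IP + 20 B TCP + 14 B Ethernet header
# + 4 B FCS + 8 B preamble + 12 B inter-frame gap
FRAME_ON_WIRE = MSS + 20 + 20 + 14 + 4 + 8 + 12   # 1538 bytes on the wire

efficiency = MSS / FRAME_ON_WIRE
payload_mb_per_s = LINK_BPS / 8 * efficiency / 1e6
print(f"{payload_mb_per_s:.1f} MB/s")  # prints "118.7 MB/s"
```

Even after this overhead, roughly 118 MB/s of usable bandwidth remains - comfortably more than a single hard drive of the day can deliver.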
Every iSCSI node possesses its own name of up to 255 bytes, as well as an alias, both of which are independent of its IP address. This way, a storage array on the network can still be located even after it has been moved to another subnet.
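These node names commonly follow the IQN (iSCSI Qualified Name) convention: the literal prefix "iqn.", a year-month date at which the naming authority owned its domain, the domain in reversed order, and an optional identifier. A small sketch (the domain and identifier below are placeholders, not names from our test setup):

```python
def make_iqn(domain: str, year_month: str, identifier: str) -> str:
    """Build an iSCSI Qualified Name (IQN):
    iqn.<yyyy-mm>.<reversed-domain>:<identifier>"""
    reversed_domain = ".".join(reversed(domain.split(".")))
    return f"iqn.{year_month}.{reversed_domain}:{identifier}"

# "example.com" and "storage.array1" are placeholder values
print(make_iqn("example.com", "2004-01", "storage.array1"))
# prints "iqn.2004-01.com.example:storage.array1"
```

Since the name encodes an organization rather than a location, it survives any change of IP address or subnet.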

Source: Adaptec
Aside from requiring a network, the primary prerequisite for implementing iSCSI is an iSCSI server. We tested two solutions here: one software, and the other hardware. The software-based solution goes by the name of SANMelody, and comes from a company called DataCore. This software can be downloaded from the web as a 21-day trial version. To evaluate the hardware approach, we took a look at Adaptec's Storage Array iSA 1500, which qualifies as a full-fledged SAN appliance.
Both solutions fulfill all requirements of iSCSI, making the storage space on the host system available to the client systems through iSCSI. A client system can be fitted with an Adaptec adapter, reducing the CPU load on the system (for example in workstations).
In principle, iSCSI can be used on a 100Mbit network, but there will be a noticeable slowdown compared to local drives. Gigabit Ethernet is the more sensible choice here, since it is unlikely to become a bottleneck even if a multiple-drive RAID 5 array is used. Bandwidth could become an issue with RAID 0 arrays, but it is rare that such a fast storage area would be accessed and fed over a network.
On the client side, an iSCSI initiator is needed. These are available for practically every operating system. A Google search with the keywords "Microsoft", "iSCSI" and "Initiator" should quickly yield a number of relevant results.
The next step is now to log on to the server using the server's IP address from within the initiator software. This can be configured to occur automatically upon system startup. The allocated drives on the server will then be available on that computer as physical drives; they can be used like local drives, complete with a drive letter in Windows Explorer and "My Computer".
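Before attempting the logon, it can be useful to verify that the target is actually listening; iSCSI targets accept connections on TCP port 3260 by default. A small reachability check (the IP address below is a placeholder, not one from our test network):

```python
import socket

def target_reachable(host: str, port: int = 3260, timeout: float = 2.0) -> bool:
    """Check whether an iSCSI target is listening on its default
    TCP port (3260) before attempting a login from the initiator."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# "192.168.0.10" is a placeholder address
print(target_reachable("192.168.0.10"))
```

If this check fails, the problem lies with the network or the target configuration rather than with the initiator software.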
The iSCSI protocol also allows for a packet encryption scheme based on IPsec, although this is not compulsory. Within intranets, for example, there is not always a need for encryption, while the security concerns surrounding WAN connections make the option much more important there.
iSCSI also lends itself to backup tasks, as information can easily be replicated to another drive, even if the target volume is in a neighboring building or another subsidiary connected through a broadband line. For example, this can be done using the volume shadow copy functionality built into Windows. iSCSI can even be operated over an ordinary DSL line, though here bandwidth may be the limiting factor, depending on the application.
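Whether such replication is practical depends entirely on the available bandwidth. A back-of-the-envelope comparison (the 10 GB nightly delta and the 1 Mbit/s DSL uplink are assumed figures for illustration):

```python
# Rough transfer times for a nightly 10 GB replication job,
# comparing Gigabit Ethernet with a typical DSL uplink of the era.
def transfer_hours(gigabytes: float, megabits_per_s: float) -> float:
    # Decimal units: 1 GB = 8000 Mbit; protocol overhead is ignored.
    return gigabytes * 8 * 1000 / megabits_per_s / 3600

for name, mbps in [("Gigabit Ethernet", 1000), ("DSL uplink (1 Mbit/s)", 1)]:
    print(f"{name}: {transfer_hours(10, mbps):.2f} h")
# prints:
# Gigabit Ethernet: 0.02 h
# DSL uplink (1 Mbit/s): 22.22 h
```

Over the LAN the job is trivial; over DSL it barely fits into a night, which is why bandwidth, not iSCSI itself, is the limiting factor on WAN links.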
The great advantage of iSCSI is that classical backups are now no longer limited to one location - an advantage that should not be underestimated. For example, devices such as tape libraries could be installed at any place in the company network. Even if a worst-case scenario does come to pass, the backup data that was saved via iSCSI can be replicated within a very short time frame.
Bearing The iSCSI Workload
If an iSCSI solution is implemented in software, the network adapter is hit by massive amounts of data. This workload is then delegated to the CPU courtesy of the NIC driver, since ordinary network adapters don't usually offer acceleration functions. SCSI is a block-oriented protocol, while Ethernet is packet-oriented. Given the large amount of information that Gigabit Ethernet can move, the embedding and extraction of blocks to and from TCP/IP packets can consume most or even all of the CPU cycles of a modern system.
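The mismatch between blocks and packets can be quantified: a single SCSI transfer must be split across dozens of TCP segments, each of which the host CPU has to checksum and frame. A quick illustration, assuming a standard 1500-byte MTU:

```python
import math

# Why software iSCSI loads the CPU: every SCSI block transfer is
# split across many TCP segments, each individually processed.
MSS = 1460                       # TCP payload per standard Ethernet frame
transfer = 64 * 1024             # a typical 64 KB SCSI read/write

segments = math.ceil(transfer / MSS)
print(f"{segments} segments per 64 KB transfer")  # prints "45 segments per 64 KB transfer"
# At full Gigabit wire speed, that adds up to on the order of
# 80,000 segments per second for the CPU to process.
```

It is precisely this per-segment work that offload engines take over, as described next.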
To ameliorate this problem, special TOEs (TCP/IP Offload Engines) were developed, which handle all of these complex operations right on the iSCSI network adapter. This takes the load off of the system's CPU, so the users and the server system can continue with their normal work.
Anyone planning to set up a high-performance iSCSI environment should either steer clear of software solutions or ensure a decent system environment. Adaptec offers a possible solution with its ASA-7211C and the Storage Array iSA 1500, which we will present on the following pages.

The Storage Array is 24" (60 cm) long, and requires a rack that is correspondingly deep.
Adaptec's iSA 1500 Storage Array is a 1U rack system that houses four Serial ATA (SATA) drives, and can be connected to local networks or SANs using the iSCSI interface. In this case, the choice of SATA devices makes a lot of sense, as the drives in question are MaXLine-II models made by Maxtor, designed with constant operation in mind. Choosing SCSI components would only have driven up the cost of the system without providing any tangible added value. SCSI hardware only pays off if the application calls not only for constant operation, but also constant heavy loads and maximum performance.
The system connects with the network using two Intel 82546EB Gigabit network ports. The load created by TCP/IP packaging operations is handled by a fast Intel Xeon processor on a Supermicro X5DPR-IG2+ board. This board is based on Intel's E7501 server chipset, and comes equipped with 1 GB of ECC memory in the standard configuration.
The system boots from flash memory attached via UltraATA/100, which contains a complete Linux installation. An IP address can then be assigned to the system using a Linux console. Once that is done, the device can be remotely configured and administered through the Adaptec Storage Manager software, accessed via the system's third network adapter, an Intel Pro/1000 card. Note that Adaptec differentiates between the network port used for configuration and the two other ports used for data.

The 1U module houses an entire Xeon system. The upper network port is used for remote configuration, while the other two, on the bottom right, are for data.

Unfortunately, the radial fans aren't exactly quiet. The PSU's fans are literally drowned out by their noise.

The slide-in systems for the four 3.5" SATA drives are robust and easy to use.

The four hard drives are controlled by an Adaptec AHA-2410SA. On production units, the card's Intel RISC processor operates at 100 MHz instead of the 66 MHz of our test version.


The first step in making an Adaptec iSCSI device operational is creating a so-called agent...

... which will be reachable under the name and IP address of the host system.

In Adaptec's lingo, "storage pools" are usable, configured storage areas, such as a RAID 5 array.

The configuration itself is straightforward and proceeds step by step. Anyone who has ever set up a RAID controller should have no trouble creating a pool.


At this point, the drives that will belong to the storage pool are selected. In our case, the number of drives is limited to four by the iSA 1500's space constraints. The Adaptec controller couldn't address any more drives anyway, being matched to the Storage Array - the software, on the other hand, could.

Done! The RAID 5 array we created first had to be initialized. In the status display, Adaptec does not differentiate between an initialization and a rebuild.

Only an iSCSI Target can be accessed by an Initiator. So let's create one.

A Target can be found by its alias on the network, even if the iSCSI application has been moved to another subnet.


A Target can be as big as the entire storage space available in the desired pool. For our test, we selected a size of 100 GB.

Creating our test Target took less than a minute.


The status/information window answers all questions.

Very nice: the Target's size can be changed at any time, and the process doesn't take long.

Adaptec supports the creation of snapshots, and includes a rollback feature that restores a Target using a snapshot. This requires free space on the Storage Array, which is then reserved for the snapshots in a similar manner as for a pool.


Done: The snapshot is an exact copy of our Target.

The 7211C is a PCI-X adapter for iSCSI and sports both a TCP/IP and an iSCSI offload engine. A Marvell chip handles the physical connection to the network at speeds of up to 1 Gbit/s. The adapter uses a 64-bit interface that can operate at up to 66 MHz - more than enough for its intended purpose.
When examining this card, the large number of memory chips immediately catches the eye; SDRAM can be found even on its back. All this memory serves as buffers for the individual offloading functions, which speeds up processing immensely.




The Initiator is a network adapter to begin with, and therefore needs to be configured. Of course, DHCP can also be used.

To be able to access an iSCSI Target, its IP address needs to be known. Port 3260 is used by default.

The Target overview shows what options are available. Clicking on "Edit/Logon" opens a connection.

Here we can create a connection to the Target. We can also configure the connection to be re-established after every reboot.

Voila!

After the installation of the SANMelody program, a SANMelody Plugin is available in the management console under "Drives." Here, free partitions can be converted into virtual volumes.
DataCore, founded in 1998, specializes in storage management, virtualization and data replication. Just under a year ago, the company unveiled the SANMelody suite, a software SAN solution that allows any drive partition to be integrated into Windows systems as an iSCSI Target. Although SANMelody costs at least $1,200, it offers the advantage that it can be used on nearly any current system with at least 512 MB of RAM. Another plus is that DataCore doesn't require the use of a Windows Server - Windows XP can be used just as easily. The only prerequisite is that the .NET Framework be installed.
Partitions that are created using the Disk Manager have to be completely blank - they cannot be formatted, and can't have a drive letter assigned to them. In SANMelody's configuration plug-in, these free partitions are then used to create so-called "virtual volumes," which are in turn assigned to an application server. Application servers are machines that are supposed to have access to these storage areas.
DataCore offers four base packages, which differ in price and feature set. The smallest version supports only one processor, eight hard drives and two network adapters, while the bigger configurations offer quite a bit more - up to 320 hard drives, 16 network adapters and four processors! In addition, they offer IP replication, snapshot functionality and auto-failover as options. All versions except the smallest support Fibre Channel connections.
You can find more detailed information on the feature sets of the various versions on DataCore's website.

Different SANMelody systems can be managed through the menu item "Storage Server". This is also where all virtual volumes are listed.

Here, we are integrating an application server that is intended to have access to the virtual volumes.

Next we choose the network interface (channel) we want to use.


Finally, the virtual volume needs to be assigned to a channel.

First we have to define a Target Portal or an iSCSI Server.


Next, we choose a Target on the iSCSI Server and log on.

Like Adaptec, Microsoft offers the option of reestablishing the connection to the Target after every reboot.

Done: The iSCSI connection is now active.


The result is as desired: a new drive has been integrated on our test bed system via iSCSI.
Client For Adaptec iSA 1500
| Processor | |
|---|---|
| Socket 604 | Dual Intel Pentium 4 Xeon, 2.8 GHz, 512 kB Cache, FSB533 |
| System Components | |
| DDR SDRAM | 2x 512 MB PC3200 Samsung, ECC, Registered |
| Motherboard | Asus PP-DLW, Rev. 1.03, Intel E7505 chipset |
| Graphics Card | Matrox G450, 32 MB |
| Hard drives | System drive: Western Digital WD800JB, 80 GB, 7200 rpm, 8 MB cache; Test drives: 4x Maxtor MaXLine Plus II, 250 GB, 7200 rpm, 8 MB cache |
| iSCSI Controller | Adaptec iSCSI Host Bus Adapter 7211C |
| Software | |
| Chipset | Intel Chipset Installation Utility 5.1.1.1002, Intel Application Accelerator RAID Edition Ver. 3.53 |
| DirectX | 9.0b |
| OS | Windows Server 2003 Enterprise Edition |
| Benchmarks & Settings | |
| Transfer Performance | c't h2benchw Ver. 3.6 |
| Data transfer diagram | Winbench 99 2.0, Disk Inspection Test |
| I/O performance | IOMeter 2003.05.10: Fileserver, Webserver, Database, Workstation and Throughput benchmark patterns |
| Application performance | Winbench 99 2.0: Disk Winmarks, Disk Inspection |
Server For DataCore SANMelody
| Processor | |
|---|---|
| Socket 478 | Intel Pentium 4, 3.4 GHz, 512 kB Cache, FSB800 |
| System Components | |
| DDR SDRAM | 2x 512 MB PC3200 Micron |
| Motherboard | AOpen AX4SPE Max II, Intel i875 chipset |
| Graphics card | Matrox G450, 32 MB |
| Hard drives | System drive: Western Digital WD800JB; Test drives: 1x Western Digital WD800JB, 80 GB, 7,200 rpm, 8 MB cache; 1x Seagate Cheetah ST336732LW, 73 GB, 15,000 rpm, 16 MB cache |
| Network | Intel Pro 1000/MT, Gigabit Ethernet |
| Software | |
| Chipset | Intel Chipset Installation Utility 5.1.1.1002 |
| DirectX | 9.0b |
| OS | Windows XP Professional Build 2600 Service Pack 1 |
| Benchmarks & Settings | |
| Transfer Performance | c't h2benchw Ver. 3.6 |
| Data transfer diagram | Winbench 99 2.0, Disk Inspection Test |
| I/O performance | IOMeter 2003.05.10: Fileserver, Webserver, Database, Workstation and Throughput benchmark patterns |
| Application performance | Winbench 99 2.0: Disk Winmarks, Disk Inspection |






The basic function of iSCSI, namely utilizing existing technologies to implement inexpensive SANs, works like a charm. In our tests, both the software-based iSCSI Server from DataCore and the hardware-based iSA 1500 Storage Array from Adaptec offered good performance paired with easy handling.
DataCore aims its SANMelody suite more at companies that wish to retrofit existing servers for iSCSI use, or that are looking to enter the iSCSI world for as little cost as possible. The price hierarchy of the various packages is organized by feature set, allowing even smaller companies to create flexible iSCSI storage solutions. A 21-day trial version lets you test out the software before you buy, and SANMelody Lite, which is an even more pared-down version costing only $199, makes the decision even easier.
Adaptec, on the other hand, chooses to go the hardware route - unsurprising, considering that this company supported and pushed iSCSI technology from the beginning. The iSCSI-to-PCI-X adapter AHA-7211C serves to connect SANs or storage appliances to existing servers. To this end, the card sports a TCP/IP offload engine as well as an iSCSI offload engine. As a result, the host system has to shoulder an even lower workload than with directly attached low-complexity storage (DAS - Direct Attached Storage). While the 7211C isn't exactly cheap at $600, it is one of the few well-designed iSCSI adapters available at the moment.
Finally, there is Adaptec's Storage Array iSA 1500 - a potent 1U server with a compact, task-oriented operating system and the ability to act as an iSCSI server in a SAN. Two Gigabit Ethernet ports ensure very fast network connections, while a third port is dedicated to configuration and administration. The server itself uses four 3.5" Serial ATA hard drives, which helps keep costs down - all of the bigger manufacturers now offer ATA drives that have been designed for constant operation in so-called near-line environments. Current drives offer capacities of up to 400 GB, allowing for a maximum nominal capacity of 1.6 TB in a 1U case. Costing several thousand dollars, such a storage module is obviously outside of the financial reach of Joe Average, but for mid-size companies, it is a safe investment. This is especially true given that the storage capacity can be increased quite easily as soon as larger drives become available.