Fast, scalable and reliable, Serial Attached SCSI's day has come. Our comprehensive look at hard drives, host adapters and storage applications shows what you can expect out of SAS for enterprise storage applications.
"Existing Gigabit Ethernet or Fibre Channel network architecture is used to run a slightly modified SCSI protocol, which is encapsulated into Ethernet frames (iSCSI - Internet SCSI)"
AFAIK, in a SAN, iSCSI (Internet SCSI) is only used for... guess what, iSCSI SANs (Ethernet-based), not for FC networks. iSCSI is a relatively new protocol (it was ratified by the IETF in 2003, I believe), while FC has been around a lot longer...
Although Ethernet is the primary target for iSCSI, technically any topology that supports IP will work: Token Ring, FDDI, anything. IP is layer 3, after all. Ethernet is the logical choice, however, because it scales so easily and is very affordable. Even Gigabit Ethernet today costs a fraction of what it did a few years ago.
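The layering described above can be sketched in a toy way. This is only an illustration of the encapsulation principle, not a real iSCSI implementation: the header sizes are nominal (a real iSCSI PDU has a 48-byte Basic Header Segment, and real stacks add options, digests, and so on).

```python
# Toy illustration of iSCSI encapsulation: a SCSI command is wrapped
# in an iSCSI PDU, carried over TCP/IP, which can ride on any link
# layer (Ethernet, Token Ring, FDDI, ...). Header sizes are nominal.

def wrap(payload: bytes, header: bytes) -> bytes:
    """Prepend a header, mimicking one layer of encapsulation."""
    return header + payload

scsi_cdb  = bytes(16)                    # 16-byte SCSI command descriptor block
iscsi_pdu = wrap(scsi_cdb, bytes(48))    # 48-byte iSCSI Basic Header Segment
tcp_seg   = wrap(iscsi_pdu, bytes(20))   # minimal TCP header
ip_pkt    = wrap(tcp_seg, bytes(20))     # minimal IPv4 header
eth_frame = wrap(ip_pkt, bytes(14))      # Ethernet II header

print(len(eth_frame))  # 16 + 48 + 20 + 20 + 14 = 118
```

Because everything above the IP layer is unchanged, the same iSCSI PDU travels identically over any layer-2 technology underneath.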
Anyway, embrace SAS. It marries wonderfully with iSCSI because of its ability to support both SAS and SATA drives. This way you can carve out LUNs for your Exchange server using 15K SAS drives and, at the same time, carve out an array composed solely of affordable, large SATA drives for your backup solution.
In the section on the Adaptec SANbloc S50 JBOD the article states:
"At the beginning of our tests, we reached burst transfer speeds of only 610 MB/s. After changing the interface cable between the S50 and the Adaptec host controller, we finally managed to hit 760 MB/s. We found seven hard drives to saturate the system in RAID 0. Adding more drives would not allow for higher throughput."
Was that limit reached with SATA drives, SAS drives, or a mixture of both? In the case of SATA disks, the expander bridge (STP to SATA) may not disconnect between the read command and the start of the data transfer back to the controller. Once the data transfer commences, a SATA disk will occupy a SAS path (one quarter of the available bandwidth) until the transfer is finished. SAS disks should be more frugal with their bandwidth usage.
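Some back-of-envelope arithmetic puts the article's 760 MB/s plateau in context. This assumes a four-lane (x4) wide port of first-generation 3 Gb/s SAS links with 8b/10b encoding (10 bits on the wire per payload byte); the exact link configuration of the S50 setup is not stated in the quote.

```python
# Rough ceiling for an assumed x4 wide port of 3 Gb/s SAS links.
line_rate_gbps = 3.0
lanes = 4

# 8b/10b encoding: 10 line bits carry 8 data bits -> /10 gives MB/s.
per_lane_mb_s = line_rate_gbps * 1000 / 10   # 300 MB/s payload per lane
raw_limit = per_lane_mb_s * lanes            # 1200 MB/s theoretical ceiling

observed = 760.0                             # figure quoted in the article
efficiency = observed / raw_limit            # fraction of the raw x4 limit

print(per_lane_mb_s, raw_limit, round(efficiency, 2))
```

If SATA disks really do hold a lane for the whole transfer as described above, that kind of path occupancy (plus protocol overhead) would help explain why the observed throughput sits well below the raw x4 figure.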
Something I've thought about but haven't researched yet: with the advent of the SAS standard and 2.5" drives, are the manufacturers planning to migrate these drives to perpendicular recording, or stay with longitudinal? Regardless of how they compare otherwise, that would make the 2.5" units even more attractive for 1U systems. Just a thought/question I wanted to share.
I just purchased a Promise VTrak J300s and an Adaptec 4800SAS controller. I filled the JBOD with 12 WD5000YS drives.
I'm using the latest drivers and firmware on a PowerEdge 2650 with BIOS revision A20.
Something definitely isn't right here, because I'm getting nowhere near the performance this article talks about. For a simple link bandwidth test, I set up five drives as simple volumes and created five managers in Iometer, each manager with five workers, starting out with a queue depth of 128 and 100% sequential writes at a 64 KB block size. All volumes were formatted NTFS with a 64 KB cluster size, and each manager and its workers were assigned one volume. In my tests, I've hit a hard limit that I'm looking into further. Testing with five or more drives at any queue depth, I get less than 20 MB/s per drive. If I remove a manager (and thus a drive) and test with a queue depth of, say, 64, I get about 40-43 MB/s per drive. Does anyone else have this controller and use it with SATA II drives? Does anyone else have the VTrak J300s?
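For what it's worth, quick arithmetic on the rounded figures reported above suggests the ceiling isn't a simple shared-bandwidth cap: the aggregate actually *drops* when the fifth drive is added, which looks more like queueing or contention on the controller or expander than a flat link limit. (This is just a sketch using the numbers as quoted.)

```python
# Aggregate throughput implied by the per-drive figures quoted above.
five_drives = 5 * 20    # <20 MB/s each with 5 drives  -> ~100 MB/s total
four_drives = 4 * 42    # ~40-43 MB/s each with 4 drives -> ~168 MB/s total

# A shared-bandwidth cap would keep the aggregate roughly constant as
# drives are added; a falling aggregate points at contention instead.
print(five_drives, four_drives)
```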
Performance seemed pretty solid once I got the 4800SAS working, though I didn't run any tests, so I can't give you an exact number. Quite frankly, this controller is going to get thrown in the bin. Nothing but trouble.
6 × 150 GB Raptor drives in RAID 5
I started off using it in conjunction with Adaptec's own enclosures (I don't have the model number handy; it's the four-drive enclosure that fits in three 5.25" bays). With the drives connected this way, the controller BIOS took literally minutes to load. And when it did load, the array would degrade randomly, which made it near impossible to load an OS; in fact, it corrupted its fair share of installs. After flashing the controller's BIOS and following what little advice Adaptec support had to offer, I ditched the enclosures and connected the drives directly.
This solved the slow boot times and allowed me to get the machine loaded. At this point it seemed I was in the clear.
Mind you, this took three weekends of troubleshooting and backing up and restoring data (I was trying to upgrade an existing array), so naturally it made me none too happy.
Fast forward a month. I have a drive that is failing and being rebuilt every couple of hours. Not good. I pick up a new drive.
I shut down the machine because I'd noticed the cables that come with the controller are an accident waiting to happen. The cable end for the hard drive has connectors for both power and data, and you plug the power into the back of it. The problem is that this makes it all relatively heavy, and without any sort of locking mechanism it is very easy to pull the connector out of the back of a drive just by working on other cables in the case.
Anyhow, I replaced the drive and booted the machine back up. The BIOS wouldn't rebuild the array automatically with the new drive. After a couple of reboots I tried to go in and specify the new drive as a hot spare. Before entering the BIOS setup itself, the status screen showed the degraded array; once inside the setup, however, the firmware refused to acknowledge there was any array definition at all. Which was a problem, because the only way to set up a hot spare is through the maintain-array menu item, which it wouldn't let me into since it didn't think I had an array to maintain.
Several reboots and futile changes later (mostly swapping drives around and trying different ports), the controller lost its configuration altogether. Not even anything about the array on the status screen while booting.
I lost a day of data and a *lot* of sleep in the last three days.
Maybe I'm just unlucky and got the one oddball card, but I think this card isn't fully baked yet. They've had a lot of BIOS revisions due to compatibility issues, and that could be part of my problem. But I'm done with Adaptec support; they were completely unhelpful, often suggesting things I had already told them I'd tried.
I've spent the weekend once again moving everything to a new array, but this time it's not an Adaptec controller.
Yeah, I don't know. I returned EVERYTHING except the drives. I even bought an LSI controller, the 3800 HBA, and returned that too! Dude, I'm getting a Dell!
We opted to go with a shiny 2950 server and a Dell MD1000 with a PERC 5/E controller. I figured I'd just not waste any more time and get what I know works. I'll post some performance numbers when it arrives and after I test. I have a bunch of numbers from the previous fiasco if anyone is remotely interested.
As a budding homebrew U320 SCSI fanatic :-) I really enjoyed this article: while the SATA 2.5 specification is pretty clear in the big picture (if not the gory details), SAS has always been a closed book!!
In my current homebrew setup I have a number of SATA hard drives hanging off a multilink SATA connector via a SATA host controller. The host controller is based on the SiI3124 chipset, and I'm using its port-multiplier feature to address multiple drives.
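For anyone curious how port-multiplier addressing works at the wire level: each frame (FIS) the host sends carries a 4-bit "PM Port" field that selects which device behind the multiplier is the target (port 15 addresses the multiplier's own control registers). A minimal sketch, building only the first dword of a Register Host-to-Device FIS rather than a full command:

```python
# Sketch of SATA port-multiplier addressing: the low nibble of the
# FIS's second byte is the PM Port field; bit 7 is the command (C) bit.
FIS_REG_H2D = 0x27   # Register Host-to-Device FIS type

def fis_first_dword(pm_port: int, command: int, is_command: bool = True) -> bytes:
    """Build the first 4 bytes of a Register H2D FIS for a given PM port."""
    assert 0 <= pm_port <= 0xF, "PM Port is a 4-bit field"
    byte1 = pm_port | (0x80 if is_command else 0)  # C bit + PM Port nibble
    return bytes([FIS_REG_H2D, byte1, command, 0])

# e.g. issue IDENTIFY DEVICE (0xEC) to the drive on multiplier port 2
print(fis_first_dword(2, 0xEC).hex())
```

Because the field is 4 bits wide (with 15 reserved for control), a SATA port multiplier tops out at 15 drives per host port, which is why the SAS expander figure of 128 devices sounds so generous by comparison.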
I didn't even realize that SAS supported multiple drives per host controller port, as I have never seen any products for sale that expand SAS ports. 128 devices per host controller (or is it per port?) sounds like a lot!!
My question to anyone out there, or to the author: can you buy the SAS equivalent of a SATA port multiplier card on its own?
Quite apart from their cost, external server-grade (enterprise) drive enclosures run the drives too hot... Just out of interest...