iSCSI has been around a while, although I have to admit I didn't read up on it much until just now. Frankly, I'm feeling a bit ambivalent about it, mostly because I'm lukewarm - skeptical, even - about this ridiculous rush over the past few years to put TCP/IP (with all that overhead) on everything from toothbrushes to toilet bowls. Then again, sadly, nobody's ever accused me of being a world-class visionary. Oh well.
It does have some fun possibilities. Poor man's cluster was one of the first to come to mind. Too bad 10 Gig is still so expensive - GigE can be completely saturated by just one of today's better consumer-class SATA hard drives.
FWIW, I won't even send email to people with gmail accounts ;-).
-Brad
The thing is, if you read the mini speed-testing analysis I did on mashies Forums, you'll note that the TCP/IP overhead is a bit lower than you would expect compared to a different protocol (Samba or FTP, for example).
Also, in my testing I typically noticed about 0.1 ms of overhead on random accesses, and in the case of using an image file vs. direct HDD use, I actually gained 5.x ms in random access time. Granted, I lost some throughput as well...
One of the most interesting uses I've personally thought about is the ability to, say, RAID across multiple targets (to one initiator) using very large RAM disks. Suddenly you have a HUGE RAIDed RAM disk with very low access times and potentially huge throughput capabilities. The usage here should be obvious. Back that with software RAID 1 on the target side, and you have a very large RAM disk with redundancy.
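On Linux, that striped-RAM-disk idea could be sketched roughly like this with open-iscsi and mdadm (the IQNs, portal addresses, and device names below are made-up examples, and the exact /dev/sdX names depend on what the kernel assigns at login):

```shell
# Log in to two iSCSI targets, each exporting a large RAM disk
# (hypothetical IQNs and portal addresses).
iscsiadm -m node -T iqn.2006-06.example:ramdisk0 -p 192.168.1.10 --login
iscsiadm -m node -T iqn.2006-06.example:ramdisk1 -p 192.168.1.11 --login

# Suppose the new disks show up as /dev/sdb and /dev/sdc; stripe them
# (RAID 0) into one big, fast RAM-backed block device.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on it and mount.
mkfs.ext3 /dev/md0
mount /dev/md0 /mnt/ramraid
```

The redundancy variant would swap --level=0 for --level=1 (or run RAID 1 on the target side, as described above) at the cost of half the capacity.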
Forget the notion a lot of people have that software RAID 1 is slow; it is *not*. It may be a little slower than when implemented in hardware, but not enough to worry about. The only caveat I can think of is CPU usage.
Also keep in mind that, if you read my mini analysis, I used common everyday equipment, including older hardware and a PCI GbE adapter on the target side (I expect this was one of the limiting factors in my tests). If I used an Intel Pro/1000 PCI-E card (which I definitely plan on buying) and a good adapter with TOE on the initiator side, performance would certainly improve.
If you're interested, keep an eye on that forum. I plan on writing my own real-world usage benchmarking application - to, say, copy a DVD from one HDD to another - and on implementing a Linux iSCSI target using Dapper 6.06 Server for further tests. In the future, with the possibility of PCI-E 2.0 direct peer-to-peer communication, this technology will really start to show its true colors.
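A crude first cut of that copy benchmark could be just a few lines of shell - time a big sequential copy and report MB/s (the paths you pass in are up to you; this is only a sketch, not the application itself):

```shell
#!/bin/sh
# copy_bench: time a large sequential copy (e.g. a DVD image) from one
# disk to another and report rough throughput in MB/s.
copy_bench() {
    src=$1
    dst=$2
    size=$(wc -c < "$src")      # bytes to copy
    start=$(date +%s)
    dd if="$src" of="$dst" bs=1M 2>/dev/null
    sync                        # flush caches so the timing is honest
    end=$(date +%s)
    elapsed=$((end - start))
    [ "$elapsed" -eq 0 ] && elapsed=1   # avoid divide-by-zero on tiny files
    echo "Copied $size bytes in ${elapsed}s ($((size / elapsed / 1048576)) MB/s)"
}
```

Running it against a source on a local disk and a destination on the iSCSI-mounted disk (and then the reverse) would give a simple read/write comparison.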
Where iSCSI really shines, however, is its ability to work with any network provided it implements TCP/IP; it is not limited to any one piece of proprietary equipment. This also means you do NOT have to use SCSI devices: you can use ATA, SATA (shown in my tests), SCSI, FC SCSI, and any other interface the target has the ability to use - which would even include MFM (although WHY you would want to would be reason to question one's sanity...).
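As one illustration of that flexibility, the iSCSI Enterprise Target package lets you export a plain file image as a LUN with a couple of lines of configuration (the IQN and path below are made-up; check the ietd.conf man page for your version, since backing-store options vary between releases):

```
# /etc/ietd.conf - example: export a file image as an iSCSI LUN
Target iqn.2006-06.com.example:storage.disk1
    Lun 0 Path=/srv/iscsi/disk1.img,Type=fileio
```

The same Path= line can point at a raw block device (an ATA or SATA disk, a RAM disk, an md array) just as easily as at a file.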