Intel's RAID 5 NAS Makes the Grade

May 19, 2006 11:10:51 AM

Intel has decided it wants to play in the SOHO/SMB NAS market and has produced a first product that should get it into the game.
May 19, 2006 1:29:31 PM

hey Tim

How come this NAS wasn't tested with the usual method? Kind of makes it hard to compare.

How come every time a gigabit device is released, it never comes anywhere close to taking advantage of the available bandwidth? Makes me mad.

Someone should do an article to see what you can really expect to achieve from a RAID 0 configuration with gigabit Ethernet. Basically a best-case scenario.

Get an oldish computer, 2 GHz or so (or perhaps a dual core), whack four drives in it on a good-quality RAID card. Install Linux and a good gigabit Ethernet card.

Then test it with another computer, possibly using some kind of a RAM drive so that the PC definitely doesn't limit the transfer. I wonder what you could expect? Surely the Ethernet connection would only start limiting above 80 (or so) MB/s?

Someone enlighten me.
May 19, 2006 4:15:38 PM

Quote:
hey Tim
Get an oldish computer, 2 GHz or so (or perhaps a dual core), whack four drives in it on a good-quality RAID card. Install Linux and a good gigabit Ethernet card.

Then test it with another computer, possibly using some kind of a RAM drive so that the PC definitely doesn't limit the transfer. I wonder what you could expect? Surely the Ethernet connection would only start limiting above 80 (or so) MB/s?

I second this! It would be nice to know what the best possible NAS speed is using gigabit. Throwing the best possible hardware at it would be enlightening. Wouldn't it be funny if a top-of-the-line $2000 screaming RAID PC could still only get about 15MB/s?
May 19, 2006 6:04:12 PM

That was a very interesting article, but I have a few observations. Firstly, Intel's crib sheet for the SS4000-E talks of drive capacity being 500 GB, yet the article says that the current generation of 750 GB drives will fit. Who is right? Secondly, although this is network storage and speeds might not be anywhere near as high as disks within a PC, are there any of these mini NAS units that take advantage of SATA II? Furthermore, will SATA II drives work within the SS4000?

Finally, I'm very tempted by a networked storage area for home and office use and feel it'd be great for storing various documents and media, but I was wondering whether the SS4000 gives the best bang for the buck. I'm interested in something that will have RAID 1 and will allow for future expansion. What, in your opinion, is the best device to look at?

Thanks all,

Sam
May 19, 2006 8:00:32 PM

Quote:
hey Tim

How come this NAS wasn't tested with the usual method? Kind of makes it hard to compare.


Yes, I know. It was tested in our labs in Germany, which use a different test suite. Sorry about that. We're working to change this, as well as the way we use iozone.

Quote:
How come every time a gigabit device is released, it never comes anywhere close to taking advantage of the available bandwidth? Makes me mad.

Someone should do an article to see what you can really expect to achieve from a RAID 0 configuration with gigabit Ethernet. Basically a best-case scenario.

Get an oldish computer, 2 GHz or so (or perhaps a dual core), whack four drives in it on a good-quality RAID card. Install Linux and a good gigabit Ethernet card.


Part of the performance issue is the added overhead of TCP/IP and SMB/CIFS.
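
To put a rough number on the raw link itself, here's a back-of-the-envelope sketch (assuming a standard 1500-byte MTU and plain TCP/IPv4; SMB/CIFS or NFS add further overhead on top of this):

# Rough ceiling for TCP payload throughput over gigabit Ethernet
link_bytes_per_sec = 1_000_000_000 / 8      # 125 MB/s raw line rate
wire_per_frame = 1500 + 38                  # MTU plus Ethernet preamble, header, FCS, inter-frame gap
payload_per_frame = 1500 - 40               # MTU minus IPv4 and TCP headers
ceiling = link_bytes_per_sec / wire_per_frame * payload_per_frame
print(round(ceiling / 1_000_000), "MB/s")   # roughly 119 MB/s before any file-protocol overhead

So the wire itself doesn't become the limit until well past the 80 MB/s figure mentioned above.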

But part is also the use of embedded processors and relatively small amounts of memory in these products to keep cost/price down.

Interesting idea for the article though, Kevin. Want to write it? I can help you out with getting iozone configured. iozone would work nicely since it doesn't hit the drive of the machine it runs on much at all.
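
For anyone who wants to try it at home, a typical iozone run against a mapped NAS share looks something like the line below (just a sketch; the drive letter, file size and record size are placeholders you'd adjust):

iozone -i 0 -i 1 -s 512m -r 64k -c -e -f Z:\iozone.tmp -R -b results.xls

-i 0 and -i 1 select the write and read tests, -s and -r set the file and record size, -c and -e include close() and flush times so client-side caching doesn't inflate the numbers, and -R -b writes an Excel-style report.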
May 23, 2006 9:41:06 AM

I'm having problems understanding why some of these NAS manufacturers aren't implementing iSCSI as a block protocol on their boxes. It just seems that it would add some needed utility and make the arrays more effective as backup devices. Entry-level SAN, anyone?
May 24, 2006 3:51:18 PM

I agree. I run a Linux desktop and laptop but occasionally serve files over my (admittedly cheap and old) 100 Mbps router to Windows machines. Linux will push as much data through the NIC when you use NFS as the NIC can support and the host and client disks can handle. I saturate the link between the desktop and laptop when I do backups; that's about 12 MB/s throughput from a 5,400 rpm notebook drive to a single 74 GB Raptor. My laptop doesn't have a gigabit connector, so I am not sure how fast I can get NFS to go.

Serving to SMB clients is generally a lot slower; if I average 75% of full speed I am generally happy. I somehow doubt that their disks are that much slower than my laptop's and limit the TX speed; even a crappy 4,200 rpm notebook drive can sustain 20 MB/s reliably. I proved that when I switched from serving the Windows clients with Samba to a little LAN HTTP server (kpf), and was again pushing the full ~12 MB/s. I don't blame Tridgell and the rest of the Samba guys for this; they do a good job making Windows and Linux computers talk, considering that they had to sue to get any documentation and what they did get sucked (and they're back in court in the EU to get better docs).

I guess you could deploy an NFS client on the Windows machines, which should let the SS4000-E serve files a lot faster. But the only program I know of that will do this is Microsoft's Services for UNIX, and it is not very good. SFU basically says, "If you switched the server to a Windows server, you'd be happier."
May 24, 2006 5:54:21 PM

If I see another anaemic NAS box with inadequate hard drive cooling I think I am going to be sick... :-)

Why not just run that NAS Linux distro on a box... At least it won't fall over if you try to access the disk drives in parallel from different machines!

This is one area where a gigabit interface is useful (obviously not applicable if you run all the drives in a single RAID 5/6 array), given that even a maxed-out hard disk can't sustain more than about 80 MB/s (unless you run RAID 0, which is risky, or U320/SAS, which is very expensive).

So let's see Tom's buy a tower for $500 that has a PCI Express motherboard and at least four SATA ports... OK, no RAID 5, but at least you can upgrade the thing! Then stick FreeNAS on it, booted from a USB stick, and give the others a run for their money.

Give me a Lian-Li case any day... at least then you have the option to water-cool or air-cool your hard drives! No point having RAID 5 if your drives are going to die prematurely because they are running above 30C.

Just my $0.02

Bob Wya
June 4, 2006 4:59:03 AM

Quote:

It would be nice to know what the best possible NAS speed is using gigabit. Throwing the best possible hardware at it would be enlightening. Wouldn't it be funny if a top-of-the-line $2000 screaming RAID PC could still only get about 15MB/s?


Why would you expect it to sustain only 15 MB/s when you can get around 30 MB/s from single IDE to single IDE over consumer gigabit?

For this reason, I also wonder why the review was titled "makes the grade" when it obviously fails even a basic performance test compared to desktop hardware and, as far as I saw, gives no real measurements of noise or temperature regarding cooling.

Cute box, little thinking = big money?

The "best possible NAS speed" wouldn't be the best possible gigabit network throughput speed, because the file transfer protocols are more complex and less efficient, but they should be at least well above 30 MB/s, and probably closer to 60 MB/s at least.
June 4, 2006 5:05:22 PM

Quote:
Why would you expect it to sustain only 15 MB/s when you can get around 30 MB/s from single IDE to single IDE over consumer gigabit?


How did you measure that performance and was it with SMB/CIFS or NFS?

Quote:
For this reason, I also wonder why the review was titled "makes the grade" when it obviously fails even a basic performance test compared to desktop hardware and, as far as I saw, gives no real measurements of noise or temperature regarding cooling.

The "best possible NAS speed" wouldn't be the best possible gigabit network throughput, because the file transfer protocols are more complex and less efficient, but it should still be well above 30 MB/s, and probably closer to 60 MB/s.


As you say yourself, you can't expect any NAS or shared drive to approach the filesystem performance of directly attached drives.

I've yet to see any NAS we've tested even approach 30 MB/s, let alone 60!
June 4, 2006 5:26:18 PM

Quote:
Why would you expect it to sustain only 15 MB/s when you can get around 30 MB/s from single IDE to single IDE over consumer gigabit?


How did you measure that performance and was it with SMB/CIFS or NFS?



It was Windows to Windows, so MS SMB on NTFS drives. (I haven't done the same test using Linux, but don't expect to find material differences; other tests are generally positive. I'd use CIFS because smbfs is deprecated. I'll be happy to re-run the tests and provide any desired results, including Linux on one side, when I have the time.)

These low speeds, 30 MB/s, are well within the capability of modern desktop gigabit, so the networking per se is not a big factor in my experience -- the drive speeds dominate until you get higher. Moreover, when I took same/similar drives and mounted them in the same computer, I got similar results copying from drive to drive.

I used xcopy to transfer the files, and timed it within the command script, typically using very large files.
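
A timed copy like that only takes a few lines to script; here's a rough Python equivalent of the idea (the paths are made up for illustration, not the actual script used):

import os, time, shutil

src = r"C:\bench\bigfile.bin"   # large local test file (hypothetical path)
dst = r"Z:\bigfile.bin"         # NAS share mapped as Z: (hypothetical)

start = time.time()
shutil.copyfile(src, dst)       # copy the file across the network
elapsed = time.time() - start

mb = os.path.getsize(src) / 1_000_000
print("%.0f MB in %.1f s = %.1f MB/s" % (mb, elapsed, mb / elapsed))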

Inexpensive NASes don't have the sorts of processors that we typically use on the desktop, so they might be suffering for that reason.
June 6, 2006 11:31:01 PM

Tim, did you guys test out whether a USB hub could allow more than two external drives attached? I know the Infrant arrays support that.

Any idea if there are plans to supplement the firmware with more features (like SlimServer, print server, etc.)?

It sucks that the German lab didn't use the same benchmarks. I'm curious how all of the NASes stack up: TeraStation, Yellow Machine, Thecus, Infrant, and now Intel.
June 7, 2006 1:26:52 PM

Quote:
Tim, did you guys test out whether a USB hub could allow more than two external drives attached? I know the Infrant arrays support that.

Unfortunately, no.

Quote:
Any idea if there are plans to supplement the firmware with more features (like SlimServer, print server, etc.)?

I'm not sure, but I doubt it, since that's not the market Intel is after with this product.

Quote:
It sucks that the German lab didn't use the same benchmarks. I'm curious how all of the NASes stack up: TeraStation, Yellow Machine, Thecus, Infrant, and now Intel.

I apologize for that. We're trying to standardize on iozone, but will be changing the way we present the data.
June 20, 2006 11:26:28 PM

Tom,

I'm pretty much torn on which way to go, Intel or Infrant ReadyNAS NV. I'm in the market to purchase one ASAP. The price difference is too small to affect my decision. Which product would you choose? I would definitely use RAID 5 and have a gigabit network.

Thanks :) 
July 24, 2006 9:13:27 AM

I have two of these but have experienced nothing but trouble. I am working with Intel tech support to get the issues resolved, but so far it just looks like the firmware/software installed on the units is very buggy. If we didn't have two systems that show identical problems, I would think that my hardware was defective. I have of course used the latest firmware and software as of today.

There are problems such as:

- Frequently hangs on large file transfers (100 MB+) and then crashes, freezes, or very rarely continues the transfer after 10 minutes (both via Windows and FTP).
- Windows often throws an error saying the system is no longer available on the network.
- The web interface frequently stops responding and the unit is no longer reachable (including via Intel's Storage System Console) and needs to be rebooted.
- When turning it off via the front button, it blinks forever and only power cycling helps (which in turn may lead to rebuilding the drives, which takes 40 hours or more, or it will just take forever to start up again).
- Frequently takes 15 minutes or more to start up (the web interface won't let you log in during that time and the system is not available on the network; HD3 is very active).
- All data seems to be written first to HD3 and not to the complete RAID 5 array (buffered on HD3?).
- Relatively slow network speed using gigabit (5800 kb/s); when jumbo frames are enabled on the NAS and the computer, the web interface won't be reachable and jumbo frames on the computer have to be disabled to use the web interface (while the rest of the network works fine).

Please note that all of this has been repeatedly reproduced with two different systems on different networks with different PCs just within two days.

We will try to return these units, but I would be interested to see if there are any other users experiencing problems, or if there is anyone who is happy with the system. I've had only one moment where I thought the problems were solved: that was when I could transfer about 100 GB of backup data overnight to the system without any problem. However, the problems described above returned the next day, and I performed some tests on both systems to make sure it's not just a defective unit…
August 2, 2006 1:34:07 AM

Did I understand correctly from the review that the drives can only be formatted in FAT32? This was a surprise; I would have thought that since the system runs Linux, it would support ext3 etc.

Isn't FAT32 antiquated and problematic with very large files? This would be a serious downside.
August 5, 2006 9:55:45 AM

Suzanne,

I fully agree with you; this system is very buggy. I have the same problems. I use both Windows and Linux clients to access the SS4000. Sometimes NFS hangs, sometimes SMB. There is no way to transfer large files to the system via NFS or SMB. The only reliable protocol is FTP, but the transfer rates are very low, despite the gigabit switch and interfaces...

So I opened a case with Intel regarding transfer rates, and I got this statement:
"The performance you reported seems to be within the normal range for the Intel(R) Entry Storage System SS4000-E."
So I wonder why they built two gigabit interfaces into this system...

What else I found out is that they have disabled all hard-drive-related caching (controller and hard drive). I didn't manage to turn it back on. Maybe this could explain why the system is so slow...
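
For comparison, on a Linux box where you do have shell access, drive write caching is normally checked and toggled with hdparm, something like the lines below (the device path is just an example, and whether the SS4000-E exposes anything equivalent is another question):

hdparm -W /dev/sda     # query the drive's write-cache setting
hdparm -W1 /dev/sda    # enable it (-W0 disables it)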
September 3, 2006 11:00:33 PM

I have one of these units using four Seagate 250 GB SATA II drives. All I can say is: don't buy it. I have had the system become unusable twice and lost all the data on it twice (once with RAID 5, once with RAID 10).

RAID 5 - the system initialized the disks instead of rebuilding.
RAID 10 - it was completely fubar'd.

Unfortunately I can't take it back anymore :(. If anyone knows of a way I could get this hunk of $650 junk to work again, I am all ears.
October 19, 2006 4:17:15 AM

I don't know much about technology so forgive me but I have a question.

Everyone seems to be unsatisfied with the network performance, so my question is: can the dual network ports be configured as an EtherChannel (link aggregation)? If not, does anyone know of a device that is capable, and what are the results from that configuration? Is this idea out to lunch?
October 19, 2006 1:07:48 PM

Quote:
I don't know much about technology so forgive me but I have a question.

Everyone seems to be unsatisfied with the network performance, so my question is: can the dual network ports be configured as an EtherChannel (link aggregation)? If not, does anyone know of a device that is capable, and what are the results from that configuration? Is this idea out to lunch?


I don't think they can be configured in that way.
October 21, 2006 11:09:27 PM

Quote:
I have one of these units using four Seagate 250 GB SATA II drives. All I can say is: don't buy it. I have had the system become unusable twice and lost all the data on it twice (once with RAID 5, once with RAID 10).

RAID 5 - the system initialized the disks instead of rebuilding.
RAID 10 - it was completely fubar'd.

Unfortunately I can't take it back anymore :(. If anyone knows of a way I could get this hunk of $650 junk to work again, I am all ears.


I too just lost a complete RAID 10 configuration for no reason. Because I have had nothing but trouble so far, I decided to use it only for backing up data of lower importance, but not even this works without problems!

It was fine a couple of days ago; it was shut down, and then when I turned it back on yesterday, the GUI showed that all my drives were new and I would need to set up a new RAID. Now I thought, damn, all right, let's build a RAID 0 then, since it really doesn't matter now; I just need something to store some data on, and we paid for it so I can't just throw it away. It started building, 10%, 15%, but then this morning, when I checked the status, it again showed the same message that I had lost all my drives (all marked as new).

I'll remove the drives straight away and trash that piece of junk immediately; I've had enough. We actually have two units; the other one showed the same problems, but since we parked it in the store room (powered down and unused) there has been no drive failure yet. Using it now makes no sense at all.

I'll re-use the drives with something else; we have a test unit of the Synology CS-406 and it has worked without problems for two weeks now with multi-user access.

I can only agree with the others here: stay away from this product. It's not just 'not worth the money'; it simply doesn't work, and I could not have imagined before that a company such as Intel could sell something like this with the simple excuse "but it's an entry-level device, you may want to look at our (more expensive and) more professional units".
October 28, 2006 5:45:51 PM

I've got a bit of an update.

I basically bitched at Intel's tech support for about two weeks, and when I couldn't even enter debug mode they told me it was either a backplane or firmware issue. (Gee, why am I not surprised that Intel makes crap?) They offered to send me a replacement unit after I send them the broken one, and oh, by the way, I have to pay for shipping. Which was not cheap, since I had to send it to the States from Canada (costing a minimum of $50). So if their replacement doesn't fix it, I will be scrapping it (maybe seeing if FreeNAS will install on it) or just building a decent NAS box from a computer, and eating the now $800 I have spent on this. If anyone is looking at this product, run for the hills; it's complete and utter garbage, and as a result I will now stay away from ALL Intel products.
October 31, 2006 3:45:37 PM

Does it support 750 GB drives, and how? I have four 750 GB drives, and when I try to configure them I get "the amount of diskspace cannot exceed 2tb". If anyone knows anything about this, please help.
October 31, 2006 7:20:58 PM

I think at the moment it cannot handle more than 2 TB. You could possibly try either making a RAID 5, or two 1.4 TB RAID 0 volumes?
November 1, 2006 4:53:05 AM

Well, RAID 5 won't work because the three data disks come to about 2.1 TB. Making two 1.4 TB RAID 0 volumes? How is that possible with one NAS? I am deeply frustrated about this, and I want to hunt down the guy who said that the 750 GB drives work.
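
For what it's worth, that figure does line up with the usual RAID 5 arithmetic if the 2 TB limit is counted in binary terabytes (that last part is an assumption on my side):

drives, drive_gb = 4, 750
usable_gb = (drives - 1) * drive_gb     # RAID 5 keeps n-1 drives' worth of data
usable_tib = usable_gb * 1e9 / 2**40    # decimal GB converted to binary TiB
print(usable_gb, "GB usable =", round(usable_tib, 2), "TiB")   # 2250 GB, about 2.05 TiB, over a 2 TB cap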
November 1, 2006 10:05:57 AM

Maybe making one volume work with NFS and the other with SMB? Not sure on this one.