NAS vs SAN (iSCSI) performance hit for clients on a network

daleos

Distinguished
Apr 29, 2010
26
0
18,530
Scenario: Web/Video design team.
1x Windows Server 2012 (Dell T110 II so not a beast)
20x PCs (mostly 2D design but 2x are doing 3d animation, 2x are doing TV/Video)
5x Macs (all 2D design)

Okay, this may look like a storage question but I think it's a more of a networking one.

Let's say I've just bought a good NAS (like the SYNOLOGY DS1813+) to use as the main file store for everyone on the network.

I could either connect it as a NAS or use iSCSI and connect it to the server and use it as a SAN

Now to me, the flexibility of the SAN is really appealing as it will mean I can format to NTFS, use Active Directory for file security, use whatever backup / security system I choose and best of all I can implement data deduplication.

...BUT...

The server will only really be used to share folders/files to clients over the network, so bandwidth between the server and the SAN isn't really the issue (it would be if I were running VMs on it, for example). But since clients will now need to access the files via the server, what are the throughput penalties likely to be? Won't using the server as an intermediary cause throughput to suffer? If so, does anyone know by how much?

Would it be better to totally isolate the SAN from the main switch and put it on a direct link to the server, or can I just put it on the main subnet and let the server see it that way? Again, what throughput penalties might occur here?

Does the quality of the network cards make any difference in this scenario? There's a lot of incidental traffic, and due to the video/animation work, quite a few large file transfers several times a day.
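To put rough numbers on the "intermediary" worry, here's a back-of-envelope sketch. The link speeds and the shared-NIC assumption are illustrative, not measurements from this setup:

```python
# Back-of-envelope model of the extra hop (hypothetical figures, not benchmarks).
# With iSCSI behind the server, every client read is: SAN -> server -> client.
# If the server's SAN-facing and client-facing traffic share one 1 GbE NIC,
# usable throughput is roughly halved; with a dedicated SAN link it is
# bounded by the slower leg instead.

GBE = 125.0  # 1 GbE expressed in MB/s (theoretical; real-world SMB is lower)

def effective_throughput(san_link_mbs, client_link_mbs, shared_nic=False):
    """Rough ceiling for a single client streaming through the server."""
    if shared_nic:
        # SAN traffic and client traffic contend for the same wire
        return min(san_link_mbs, client_link_mbs) / 2
    return min(san_link_mbs, client_link_mbs)

print(effective_throughput(GBE, GBE, shared_nic=True))   # one shared NIC: ~62.5 MB/s ceiling
print(effective_throughput(GBE, GBE, shared_nic=False))  # dedicated SAN link: ~125 MB/s ceiling
```

This is only the wire-level ceiling; protocol overhead and disk speed will pull real numbers below it, but it shows why a direct server-to-SAN link matters.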

 
It depends on the speed of your network and how much load is already on it. If you have a fast network with plenty of free bandwidth, putting the NAS on it may not be a terrible thing. If you have a slow network, or you're already using a lot of bandwidth, then you probably want to isolate the NAS on its own network to avoid saturating the main one.

Also, there are NAS devices that let you create shares right on the NAS itself, so you don't have to connect it to an intermediary server. Just attach the NAS to the network, create the shares, set permissions (many support LDAP connections) and you're done.
 

daleos

As I mentioned, I'd rather not go the pure NAS route because of its inflexibility. This is a busy office with quite complex data needs. It's not the only data store on the system, and there's a fair bit of Active Directory group policy stuff that doesn't translate well to *nix boxes. The two styles of security are not the same and in many cases aren't compatible (e.g. file encryption). I also want to centralise all management on one system, and the backup/versioning/archiving/rollback needs to be extremely flexible, something our current NAS boxes can't come close to managing.

It's going to need to be either a proper fileserver or an iSCSI SAN to manage the complexity of what we need it to do. I'm leaning towards a proper fileserver, but if there isn't a massive performance penalty with iSCSI, I'm willing to give it a try.

We've got a 100Mb fibre internet link, Cat6 cabling and a semi-managed 1Gb switch (with link aggregation), so we're already a step above most SOHO gear, and we're prepared to upgrade the switch to an enterprise-grade one if we have to.
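For workloads like the video transfers mentioned above, it's worth doing the transfer-time arithmetic for that 1Gb link. A quick sketch (the file size and the ~70% real-world efficiency figure are assumptions, not measurements):

```python
# Rough transfer-time math for large video files (illustrative numbers only).
def transfer_seconds(file_gb, link_gbps, efficiency=0.7):
    """Seconds to move file_gb over a link, assuming ~70% real-world SMB efficiency."""
    bits = file_gb * 8 * 1000**3
    return bits / (link_gbps * 1000**3 * efficiency)

print(round(transfer_seconds(10, 1.0)))  # 10 GB over a single 1 GbE link -> ~114 s
print(round(transfer_seconds(10, 2.0)))  # best case with 2x 1 GbE aggregated -> ~57 s
```

Note that link aggregation usually splits traffic per-flow, so a single client transfer often still sees only one link's worth of bandwidth; the aggregated figure is a best case across multiple simultaneous clients.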
 

pdohara

Honorable
Nov 5, 2013
2
0
10,510
Of course there will be overhead: added latency and reduced bandwidth from going through the server. Asking "how much" suggests you don't yet have a clear idea of how this is going to work (hence the question :).
Every transaction will incur some latency and overhead. This is a combination of the time needed for security checks and the time the server takes to "read" data from the network and then "write" it to the client. This part fits the "how much" question better. The latency will be measured in milliseconds (I'm assuming you don't care about this) and there is little that can be done to improve it: it takes time to review user credentials, etc. This assumes a fairly low number of users on the network, so their credentials are already available in memory. There may be as much as 10% overhead for reading control blocks (from files) and such, but I'd be surprised if it was that high most of the time. Obviously, if the server is also doing other things, that will impact the number. A better NIC helps here.
The bandwidth metric is not a percentage but a limiter. The server has the bandwidth that it has: a GigE card cannot do more than a gigabit. I know this seems obvious, but it always surprises me when people overlook this point. For file servers, network saturation can be a real issue. If it becomes a problem, you can add an additional NIC to the server and subdivide your network. One option a SAN offers that a NAS does not is that two file servers can access the same SAN, though for a small network (fewer than 50 devices) that's not likely to be needed.
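The "limiter, not a percentage" point can be made concrete with a quick saturation check. The per-client demand figures below are assumed for illustration; plug in your own:

```python
# Sketch of bandwidth as a hard limit (assumed workload numbers, not measurements).
NIC_CAPACITY_MBS = 125.0  # one 1 GbE port, theoretical max in MB/s

def is_saturated(active_clients, avg_demand_mbs, capacity_mbs=NIC_CAPACITY_MBS):
    """True if simultaneous client demand exceeds what the server NIC can supply."""
    return active_clients * avg_demand_mbs > capacity_mbs

print(is_saturated(5, 20))  # 5 designers pulling 20 MB/s each = 100 MB/s: fits
print(is_saturated(4, 40))  # 4 video edits at 40 MB/s each = 160 MB/s: saturated
```

Once the second case is common, the fixes are exactly the ones described above: another NIC, or splitting the network.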
I know there's a fair amount of hand-waving in that answer, but I hope it helps you understand the nature of the issue better.

Pat O
 

daleos

Thanks guys. In the end there were far fewer potential gotchas going the big-box fileserver route. Fortunately the company had enough budget for me to do this fairly well, so now we have a 10-drive fileserver (2x 500GB RAID1 for the OS, 8x 4TB RAID10) with a dual-NIC setup. On a quiet day, users don't see much difference between it and our old QNAP TS-469Pro, but during busy times the QNAP's performance used to drop like a stone. The new fileserver never misses a beat.