New 10GbE networking setup

aromakat

Honorable
Apr 30, 2012
18
0
10,510
I work in video post-production, so we need fast, shared access to a NAS. I had the office rewired for Cat6, with 12 drops running behind the walls to wall sockets. It's a small office, and the server room is at most 30 feet away, not including running the cable up the wall, through the ceiling in the attic, etc.

We just ordered a QNAP TS-870U-RP and are now trying to figure out how to best handle the interface cards, both for the QNAP and for our workstations.

When looking at the options on newegg, I see that there are 2 cat6 interface options: 'RJ45' and 'RJ45 Network'. What is the difference?

The NAS feature list says (Expandable up to 6 x 1 Gb LAN or 4 x 10 Gb + 2 x 1 Gb LAN by installing optional dual-port 10 Gb and 1 Gb network cards). With two of us primarily using it, would we get better performance if we each used our own card, or can we both connect to the same dual-port card in the NAS without a difference in speed?

There seems to be a huge range in price and terminology for interface cards... What should I be looking for? What should I watch out for?
 
Cat6 is not rated to run 10G. You would need Cat6a or Cat7 cable. You can run Cat6 patch cords for short distances at 10G, but there is no standard that covers running 10G on Cat6 that has keystone (i.e. RJ45) jacks installed and patch cords plugged into those.

What this means is that you may be able to get it to work on the Cat6 cable in your office the way you installed it, but the official standard says it won't. To be really sure you would have to have it tested.

I suspect it all doesn't matter. Sure, you can put the 4 x 10Gb interface in the NAS. The larger problem is where you are going to get a 10G interface to put in your PC. Even if you could, where are you going to find a PC that can exceed even 1G? In most cases you have to run huge blade servers to be able to use 10G... and that is really a fake-out in many cases, since the blades hook to an internal 1G switch that just consolidates them onto the 10G port.

I am going to say your only option is going to be 1G ports. You may want a dual-NIC PC. Then you could get 1G dedicated to just the NAS network. Still, to get even 1G out of a PC you have to do all kinds of things like use jumbo frames and such so the IP overhead does not get in the way.
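Rough back-of-envelope sketch (Python) of where the overhead goes, assuming plain IPv4/TCP headers and standard Ethernet framing; real numbers vary with options and offloads:

# Rough payload efficiency of a TCP stream at a given MTU.
# Assumes plain IPv4 + TCP (40 bytes of headers) and standard Ethernet
# framing overhead (preamble 8 + header 14 + FCS 4 + inter-frame gap 12 = 38).
def payload_efficiency(mtu):
    headers = 40                      # IPv4 (20) + TCP (20), no options
    framing = 38                      # preamble + Ethernet header + FCS + IFG
    return (mtu - headers) / (mtu + framing)

for mtu in (1500, 9000):              # standard frame vs. jumbo frame
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} of wire rate is payload")

# Prints roughly 94.9% for MTU 1500 and 99.1% for MTU 9000 -- jumbo frames
# mostly help by cutting per-packet CPU work rather than raw wire efficiency.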

 

aromakat

Honorable
Apr 30, 2012
18
0
10,510


All of the research I did showed that 10GbE is supported on Cat6. The wire we bought even says 10GbE on it. Maybe it's Cat6a cable, as you noted.

10g interfaces are available. A link to a page of options is in the original post.

Why are you referring to 1G ports? I'm not planning to run this over the 1G ports... I am asking about the cards...



 
That is strange... a 550 MHz rating is what makes it Cat6a cable. You would think they would call it that. The average consumer generally does not know the MHz details or what makes the difference.

Most of those cards are actually for server-class boxes. I guess if you are going to run a server as your desktop machine you could use those. To save some money you would want to find one that has copper ports built in rather than using SFP. An SFP is nice if you don't know what type of fiber or copper you are going to use, but when you know it's only copper, built-in ports will save you a bit.

But with only 2 machines the easy way to do it is to cable them directly to the NAS. You would then use a second NIC in each PC to access the internet and such. Any storage network, even at gig speeds, should be run with things like jumbo frames, and it is better not to mix this with normal IP traffic. If you needed more than 2 PCs you would likely save money by buying fewer ports on the NAS and putting in a switch with 10G ports. The PCs would connect to the switch and share a 10G interface to the NAS. But with only 2 it's simpler to just cable directly.
I would suspect you can share a single dual-port 10G card. To see if two cards would benefit you, you would first have to see if the NAS can even exceed 10G, and then you would have to see how those interface cards are connected internally. This will take a lot of digging. I would try a dual-port card and, if you have issues, buy another one later.

Actually getting an application to use 10G is going to be quite a challenge. On most machines you max out the CPU or memory or something else well before you even hit 1G. There are servers that are designed to get around these restrictions, but you then must have applications that can run in those configurations. This is similar to the problem of multicore CPUs: you can quote all kinds of fancy processing power, but if the application can only use 1 core it really doesn't matter that you have 7 more it could use.

I have never seen someone use 10G the way you are proposing. Most of the time I have seen this done, it was for massive database machines that had the application spread out over what appear to be multiple servers in the same chassis.

 

choucove

Distinguished
May 13, 2011
756
0
19,360
The first thing you have to do is find out whether your QNAP unit is running 10GbE Ethernet on copper RJ45 ports or fiber SFP+ ports, as this will determine everything about your cabling needs as well as the endpoint network cards for your workstations.

Typically 10GbE is not used to connect workstations like this, and the cheapest way it could be done would be to run Cat6a connections directly from one port on a dual-port RJ45 PCI-Express 10GbE card to one port on the add-on card for the QNAP. This is a completely isolated network, meaning you can't use this connection to get out to the internet or to any other devices; it only gets you data to and from the server. You would need another network connection, on a completely separate network range, connected to a switch and the rest of your router and internet infrastructure to share data inside your LAN with other computers or printers, as well as to get to the internet.

Of course this is pretty limited. You are bound to only the number of 10GbE ports you can have on your QNAP. If you need more than that, or you don't want the hassle of multiple networks, then you're going to have to run everything through a 10GbE switch, which is very expensive as well. This also requires knowing whether you are using 10GbE RJ45 copper connections (which are not very common on switches) or fiber SFP+ transceivers (which are a little more common but still rather rare and very expensive).

Between the cost of the actual PCI-Express 10GbE NIC (be it SFP+ or RJ45) and network switch infrastructure, cabling, and fiber optic transceiver modules (if needed) you're looking at between $1,000 and $2,000 per computer to set it up on 10 GbE.
 

aromakat

Honorable
Apr 30, 2012
18
0
10,510
Thanks choucove, bill.

I know it's unorthodox in general, but it's actually fairly common for high-end video production workflows. 10GbE is still very new within the industry and is slowly being adopted, but I have seen it done at 2 other shops so far and tested it myself when one of them first rolled it out. Because we need a network storage solution to collaboratively edit uncompressed 4K video files in real time, my options are either 10GbE or fiber.

Fiber has been the most common method used in post-production houses for years, but the cost-effectiveness of 10GbE over fiber is starting to take hold for those of us getting into such an infrastructure upgrade nowadays. I know switches, etc. are expensive compared to most networking equipment, but the cost of getting even a used fiber system deployed to 4 workstations would start at $25-30k. Not only is 10GbE cost-effective, I would much prefer having a NAS over a SAN.

The first thing you have to do is find out whether your QNAP unit is running 10GbE Ethernet on copper RJ45 ports or fiber SFP+ ports, as this will determine everything about your cabling needs as well as the endpoint network cards for your workstations.

The QNAP didn't come with a 10GbE card, but it has 2 slots to support up to 2 dual-port 10GbE cards. So with that, I think I have the option to purchase whatever cards work best for us.

Considering we already wired Cat6, the specs of the wire we used seem to match the Cat6a specifications, it just isn't labeled that way. The wire we used also said that it supports 10GbE up to 33 yards, which we are well under. I think we are safe there, so RJ45 would be the most ideal so we don't have to undo anything already installed.

But going back to a question that I had before: on Newegg there are 2 interface types that I don't know the difference between, 'RJ45' and 'RJ45 Network'. Do you have any idea what the difference could be?

Typically 10GbE is not used to connect workstations like this, and the cheapest way it could be done would be to run Cat6a connections directly from one port on a dual-port RJ45 PCI-Express 10GbE card to one port on the add-on card for the QNAP. This is a completely isolated network, meaning you can't use this connection to get out to the internet or to any other devices; it only gets you data to and from the server. You would need another network connection, on a completely separate network range, connected to a switch and the rest of your router and internet infrastructure to share data inside your LAN with other computers or printers, as well as to get to the internet.

This was our plan, exactly. No 10gbe switch, direct connection to the QNAP. There are a total of 4 workstations, so it works out perfectly. If I need to expand beyond that, I will just pick up a switch.

All workstations will still be on our current 1gbe network via the onboard LAN.


@choucove, what is a "fiber optic transceiver" and why would I need one if I'm using 100% Cat copper cable?


Thanks again, guys!





 

choucove

Distinguished
May 13, 2011
756
0
19,360
It sounds like we are narrowing this down a little more for you, and while I've not done any 10GbE networking personally, I've done some research into it. Basically, there are two interface types available for running 10GbE. The first is standard RJ45, if you're going to be using copper UTP like Cat6 cable, which is the most economical choice. This works basically the same as any other network gear, just at a higher throughput. The second option is SFP+, which is actually just a port connector type on your NIC. This type of port requires adding a fiber optic transceiver, which converts electrical signals into light pulses, and can then connect over fiber. The type of fiber optic transceiver you need depends upon the throughput, wavelength, distance, single-mode or multi-mode, etc. There are a lot of factors, and it's quite expensive.

Oftentimes with NAS devices, when they state that they support optional add-in cards, you have to be very careful to get an add-in card that is supported on that unit, and usually it's a specific component sold by the original manufacturer. Buying a standard Intel dual-port 10GbE RJ45 network card and sticking it in there might not work if the NAS doesn't have built-in driver support for it. It's not like a Windows machine where you can just go online, download the latest driver from the manufacturer, and run an install wizard. As for your workstation computers, though, all you would need is a single-port 10GbE NIC with RJ45, though they do also sell dual-port controllers for only a little more, depending upon your needs.

When Newegg lists two different interface types, RJ45 and RJ45 Network, they are the same thing. It's just Newegg's inconsistency in the descriptions, which can happen. RJ45 is RJ45; it's just the type of port connector.

http://www.newegg.com/Product/Product.aspx?Item=N82E16833106043
 


That makes it even more strange. The main thing that makes Cat6a cable different from Cat6 cable is that it can run at more than 500 MHz; Cat6 is only rated to 250 MHz. Cat6a cable can go 100m at 10G. Cat6 cable I have seen listed at 55m, but normally this does not include any allowance for loss due to patching; it normally means an end-to-end run. I suspect what you have will work. We have accidentally used Cat5e patch cables (like 20 ft) and they worked fine until we found them. The box saying 550 MHz implies it can go the full 100m.

I only know of 2 kinds of 10G copper interfaces: ones where you insert an SFP module that is 10GBASE-T, and boards that are 10GBASE-T only. In both cases the physical connectors are RJ45... or, to be very technical, IEC 60603-7 8P8C. There is only 1 allowed connection configuration for 10G on twisted-pair copper, so I would suspect they are the same.

 

PatanjaliS

Honorable
Dec 31, 2013
10
0
10,510
I know this is an older thread, but I just want to answer a query about why you would use 10GbE if there is not the data throughput to justify it.

Well, for real-time data like audio or video, having a transfer rate much higher than the actual data rate allows for lower latency. Ignoring interface servicing overhead, the remaining latency when transferring over 10GbE is theoretically one tenth that of GbE.

The sample buffers for audio data are set for the worst-case latency expected, so a 10GbE LAN could handle many 192 kHz streams at low latency. A studio could then run an acoustically isolated, virtualised DAW farm, keeping the noise level low in the studio and control rooms.


The other side of 10GbE is that it starts making remote HD computer monitors feasible, and it would provide a scalable and routable alternative to the limited, localised daisy-chain Thunderbolt topology.


With managed switches and QoS, 10GbE is poised to allow server virtualisation to be used for building-wide realtime multimedia production, utilising low-cost off-the-shelf distribution hardware, while still allowing you to start with just a single system.
 



You are going to have to explain your point a little better, I suspect. What exact latency are you talking about?

There are only 2 types of delay when it comes to cables. The first is propagation delay, i.e. the time it takes the signal to go from one end to the other. This is some fraction of the speed of light. It is not related to the speed you run, and the difference between a Cat5e cable and a Cat7 cable rated to run 10G is less than 2 nanoseconds. So if you were to run Cat7 cable, your first bit would arrive 0.000002 milliseconds sooner than over Cat5e. I am not even sure that is completely true, since it is based on the worst case allowed between the cables.

The second delay related to the cable is serialization delay. This is the time it takes to convert the packet from a memory buffer into a bit stream on the wire. Using the maximum standard packet size, it is about 12 microseconds at 1G and 1.2 microseconds at 10G. So you save roughly 10 microseconds at the very most per packet using 10G.
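Quick sanity check of both numbers (a rough Python sketch; the ~0.7c propagation velocity, the ~30 ft run, and the 1500-byte frame are assumptions):

# Back-of-envelope cable delays for roughly a 30 ft (~9 m) office run.
C = 299_792_458            # speed of light, m/s
velocity = 0.7 * C         # typical propagation velocity in twisted pair (assumption)
run_m = 9.0                # roughly a 30-foot run

prop_delay_us = run_m / velocity * 1e6
print(f"propagation delay: {prop_delay_us:.3f} us")   # ~0.043 us, same at 1G and 10G

frame_bits = 1500 * 8      # maximum standard Ethernet payload-sized frame (assumption)
for rate_bps, name in ((1e9, "1GbE"), (10e9, "10GbE")):
    print(f"{name} serialization: {frame_bits / rate_bps * 1e6:.1f} us")

# ~12 us at 1GbE vs ~1.2 us at 10GbE: the ~10 us saved per frame is dwarfed
# by host-side TCP/IP and disk processing time.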

I can't see how either of these delays is significant. The CPU time it takes to pull the data from, say, disk, wrap TCP/IP headers around it, and calculate all the checksums and everything else is many, many times greater.

 

PatanjaliS

Honorable
Dec 31, 2013
10
0
10,510
@bill001g

Digital multimedia is not truly real-time, but it gives the impression of real time if EVERY data packet representing the audio/video is received within a specified MAXIMUM time, and that is what is known as latency here: the worst-case maximum delivery time. It is not set by the system characteristics per se, but is empirically derived (or a best pre-emptive guess) and set manually to create a reliable installation that seems real-time.

Now, while there are theoretically 1,000,000,000 / (192,000 x 24) ≈ 217 channels of 192 kHz/24-bit audio over a 1Gb synchronous channel, in practice a system like Dante, using the Focusrite RedNet PCIe card, gets 64 such channels at sub-ms latencies over GbE. That means a dedicated GbE channel for one system can handle a lot. Add in, say, multiple bi-directional video communication feeds and some general IP control traffic, and that would be knocked down.
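As a rough sketch of that theoretical ceiling (packet, clocking and protocol overhead completely ignored, which is part of why real systems like Dante land far lower):

# Theoretical uncompressed audio channel count on a link, ignoring all
# packet/protocol overhead.
def max_channels(link_bps, sample_rate_hz=192_000, bit_depth=24):
    per_channel_bps = sample_rate_hz * bit_depth      # ~4.6 Mb/s per channel
    return link_bps // per_channel_bps

print(max_channels(1_000_000_000))    # ~217 channels on GbE
print(max_channels(10_000_000_000))   # ~2170 channels on 10GbE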

But in a multi-studio situation, using managed switches with VLANs for routing, it would not take many simultaneous sessions to saturate GbE to the point of causing audio dropouts and dropped video frames. Obviously, 10GbE backbones would allow a lot more headroom and wider gaps into which multimedia packets could be slotted and still arrive within the required latency. That is what I was getting at.

Sorry, but I was really only hinting at the above with my original comments.
 
In general, the jitter buffers in the end equipment and the buffers in the switches will keep any burst-type traffic in check rather than drop it. They may delay it a few fractions of a ms, yes, but if a burst is big enough to delay traffic enough to blow a jitter buffer, you have a general capacity issue. In general you do not even see packets queued in a switch until you start getting to, say, 80% utilization. If you're planning to run that high then there is an obvious need for more than 1G interfaces. The only way you are going to get into trouble is if you had a lot of machines all sending data and somehow they managed to exactly sync their transmissions. Still, the application would have to be many-to-many, not single-machine-to-many or single-machine-to-single-machine. Any buffering and bottleneck would be in the end machines.

I can see why in certain instances people might use more than 1G. I know the large disk farms we have in our data centers are all 10G, but they are also not copper or IP based because of the possible slowdowns.

The problem is that on this forum most people are coming here talking about hooking up a box that costs maybe $500 and wondering if they should spend money on cable that can run 10G, when their $500 box would be lucky to put out 200 MB/sec.
 

PatanjaliS

Honorable
Dec 31, 2013
10
0
10,510
The issues you describe with jitter buffers are exactly the issues that occur with audio buffers. It is to cater for all these possibilities that buffers have to be manually set for each particular installation, as the worst case is too difficult to predict.


I concur about the mismatched appreciation some have of their equipment's capabilities, though I can understand it in many cases because it is sometimes difficult to get hard facts on that essential information.

For example, while RAID0 may be supremely advantageous for a single or very few multimedia streams, many in digital audio recording want to use SSDs in RAID0 to get higher transfer rates for their software sample instruments. The problem, I suspect, is that because heavy use of samples involves round-robin reads of many small buffers, a full stripe may never be read, which bypasses the advantages of RAID0 altogether. Many large-capacity SSDs already use RAID0-style striping internally.
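To illustrate the concern (a hypothetical sketch; the 128 KB chunk size and the read sizes are assumptions, not figures from any particular product):

# How many member drives does a single read touch in a 2-disk RAID0?
# If the read fits inside one chunk, striping gives no parallelism at all.
import math

def drives_touched(read_bytes, chunk_bytes=128 * 1024, drives=2):
    # Assumes the read starts on a chunk boundary; a read spanning N chunks
    # hits min(N, drives) distinct drives.
    chunks = math.ceil(read_bytes / chunk_bytes)
    return min(chunks, drives)

print(drives_touched(32 * 1024))        # 32 KB sample buffer -> 1 drive, no RAID0 gain
print(drives_touched(1 * 1024 * 1024))  # 1 MB sequential video read -> both drives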

However, try finding some relevant tests that make it clear one way or the other, and there are none. Even though a major maker of such instruments recommends SSDs for their larger libraries, they publish no quantitative data about how well SSDs help or whether RAID0 helps.

Ignorance is NOT bliss, especially when it comes to deciding how to spend one's time and money!
 

Dennis Wood

Reputable
Mar 12, 2014
5
0
4,510
I wrote a rather lengthy 6-part series on this subject. We have been video editing quite happily over a 10GbE network with a shared workflow for over a year now. The 10GbE is running flawlessly over Cat5e, so until you test, don't assume that you need to rewire. Speeds have topped out at 1500 MB/s when testing MS SMB3 speeds over two ports. The series is published in a few spots, but you can find it at Cinevate's site here: http://www.cinevate.com/blog/confessions-of-a-10-gbe-newbie-part-6-breaking-the-10gb-data-barrier/
 

PatanjaliS

Honorable
Dec 31, 2013
10
0
10,510

Hello Dennis. I have just read your whole series and found it refreshing to be given so much relevant information, so concisely presented. Thank you. As a technical writer of 20 years, I much appreciate it.

Audio streams present a lot of issues, as latency preferably needs to be in the <2-3 ms range to avoid audible lag. The problem with most published equipment specs is that they are trying to impress 'speed freaks' who are after maximum transfer rates but typically don't care about latency. For audio, minimum latency is king, but it is given little consideration elsewhere.

A case in point: Samsung provides RAPID mode for their later SSDs, and it does boost throughput. However, it also increases the sub-ms latency of the plain drive to a possible maximum of 30 ms. For audio, which requires catering to the worst-case scenario to prevent audible glitches, that is a non-starter.

Even typical drive speed tests are of dubious value, because they are all running their queues at maximum, thereby guaranteeing maximum latency. To illustrate: you go into a bank. Would you prefer a full queue, where you have to wait for everybody before you to get served, or free tellers waiting to serve you immediately? The latter, of course. But the banks don't want that, because they are paying those 'free' tellers to stand around being basically unproductive. And if getting rapid service required banks to up their fees substantially, most customers would choose the queues.

Audio is very much like a time-precious bank customer -- the less wait the better -- but the banks/drive-oems prefer deeper queues.


The same goes for network connections. If they are run at their maximum, every packet is delayed by the time it spends waiting in the queue. For a saturated channel it does not matter whether it is 100Mb, 1GbE or 10GbE; the delay is at its maximum. However, running a channel at 1/10th of its capacity, a 10GbE connection will transfer packets in 1/10th the time of a 1GbE one, allowing lower-latency buffer settings for a given bandwidth usage. We are not talking about average MB per second, but about guaranteeing n kB in every ms. This is what I was getting at in previous posts.
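Putting rough numbers on that (a sketch; the 64-channel 192 kHz/24-bit payload is just an illustrative figure, and packet overhead is ignored):

# How much of each 1 ms window does it take to clock out a fixed audio payload?
def window_fraction(payload_kb_per_ms, link_bps):
    bits = payload_kb_per_ms * 1000 * 8
    tx_time_s = bits / link_bps
    return tx_time_s / 0.001          # fraction of the 1 ms deadline spent transmitting

payload = 64 * 192 * 3 / 1000         # 64 ch of 192 kHz / 24-bit ~= 36.9 kB per ms
for link_bps, name in ((1e9, "1GbE"), (10e9, "10GbE")):
    print(f"{name}: {window_fraction(payload, link_bps):.0%} of each ms spent on the wire")

# ~29% of every millisecond at 1GbE vs ~3% at 10GbE -- the faster link leaves
# far more headroom for other traffic before deadlines start slipping.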


As I alluded to previously, audio production is undergoing a big change as far as networking goes. Up until recently it has pretty much been a solo-PC effort, but with the increasing size of sampler libraries and the CPU power required to drive them, many are running them on separate PCs, slaved to the master DAW over 1GbE. FX are now getting their own PCs as well, so total DAW network bandwidth is increasing. To keep latency low, a DAW may benefit from a 10GbE connection to a switch that connects to the slaves at 1GbE.

Meanwhile, audio interfaces have traditionally been installed in, or connected directly to, one PC. However, audio is now getting the benefit of being routable, allowing the interfaces to be near the sources but the computing to be centralised. Big studios could run 10GbE backbones and hubs supplying 1GbE PoE connections to performers via their own audio interfaces, taking input from their mics/instruments and outputting foldback via headphones.

This is a revolution that only 10GbE can enable, but latency still needs to be controlled. Certainly, QoS in switches can ensure that no single connection suffers excessive queuing.
 

PatanjaliS

Honorable
Dec 31, 2013
10
0
10,510

For audio, because of real-time latency considerations, the need is not for average MB per second, but for guaranteeing n kB in every millisecond.

For substantially unsaturated channels, a faster link guarantees smaller delays, allowing lower-latency buffer settings.

Therefore, for audio production, if already substantially using a 1GbE connection's bandwidth, a 10GbE connection will provide immediate benefits.

 

Dennis Wood

Reputable
Mar 12, 2014
5
0
4,510
Pat, first of all, thanks for the kind words :) I spent a few months building up the 10GbE systems, so the blog series really was a summary of many experiments over that time. In part 6 of the series I include test results using Adobe benchmark tests which time actual transfers, renders, etc. using real video/audio files. During these tests, if the Adobe apps are optimized, all cores are typically at 100%. Aside from the raw disk speed tests, you can see my editing-over-10GbE tests show virtually no difference in render and encode output times when compared to local SSD (Samsung with RAPID) times. In other words, my video editing and graphics guys are happy!

SMB3 multichannel in Windows 8.1 and Server 2012 allows up to four TCP/IP connections per session, load-balanced over available cores, and this I believe is a game changer for "pedestrian" 10GbE networks like the one we built. Importing media from the 10GbE network is lightning quick.
With 4K and RAW becoming a reality for many editors, the need to work from a 20TB (or larger) array is becoming more of an issue. Having all of our media (photos and video) instantly accessible to anyone on the team is very much part of our collaborative process, particularly as we now also integrate all of our SolidWorks and rapid prototyping workflow in the same way.

The NAS units not only host several VMs over 10GbE, but they also serve as backup targets for the servers over a separate 10GbE port. The 10GbE link between server and NAS allows daily backup and versioning of the media server, which completes very quickly. Additionally, where full workstation images (done nightly) used to take hours, the 10GbE workstations are done in 15 minutes or so and automatically shut down. There are likely some power savings there...
 

PatanjaliS

Honorable
Dec 31, 2013
10
0
10,510

Hello Dennis,

At lower transfer speeds, say those that would normally be within the capability of 1GbE, are the CPU overheads with 10GbE higher than, or the same as, with 1GbE?

Patanjali.
 

Dennis Wood

Reputable
Mar 12, 2014
5
0
4,510
If you look at the graphic here: http://www.cinevate.com/blog/confessions-of-a-10-gbe-network-newbie-part-1-basics/

You'll see I've taken a screenshot of CPU activity during a 1.48 GB/s transfer (11,840 Mb/s) using Windows 8.1 SMB3 multichannel (so two 10GbE connections). The Intel 540 NICs do have quite a few options to offload work from the CPU... most are turned on.

Keep in mind that with SMB3 in Windows Server 2012 you can choose which cores to balance over in the Intel driver properties. You can do this to some degree in Windows 8.1, but the best bet is to ensure you're using a processor with at least 4 logical cores to take advantage of MS SMB3 multichannel. SMB3 in Samba (Linux) does not have this feature... which is why I suspect Windows SMB3 was consistently faster in my testing.

It's important to monitor CPU activity during performance testing as I found bottlenecks almost right away with certain Antivirus programs etc. Bitdefender had to be uninstalled completely. More on the software toolkit I used for testing here: http://www.cinevate.com/blog/confessions-of-a-10-gbe-network-newbie-part-2-test-toolkit/

Cheers,
Dennis
 

PatanjaliS

Honorable
Dec 31, 2013
10
0
10,510
Unfortunately, Windows file transfers are not consistent enough over time to give a reliable idea of the ongoing CPU overheads likely to be encountered when using multiple high-priority streams.

This is because the Windows file-transfer algorithm binges for a few seconds, then throttles for the rest. While the algorithm has been tweaked with each new OS version, I think it is designed to have minimal effect upon the user's use of their system.

That is, when a user first kicks off a transfer, they are not immediately concerned with anything else, so the system can maximise that distraction time by bingeing the first few GBs. However, if it calculates that the total transfer will take longer, it then slows it down so that the system is able to be used while the remainder of the transfer is done in the background.

With 10GbE speeds, many transfers may now complete in minimum time at the maximum rate throughout. But try a 100GB to 1TB transfer, and the bulk may well be transferred at more pedestrian speeds.

A more reliable way to test CPU usage at 1GbE-type speeds, like 10-100 MB/s, is to run a program that completely controls the transfers as steady streams. That way the OS is less likely to make too many of its own 'optimisations' to the transfer speed.
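Something along these lines would do it (a hypothetical, untuned Python sketch that paces plain TCP sends to a fixed MB/s; the address, port and rates below are placeholders, the far end just needs any socket that accepts and discards the data, and you watch CPU in Task Manager or perfmon while it runs):

# Push a steady, rate-limited TCP stream so CPU overhead can be observed
# at a controlled transfer rate, independent of Explorer's copy heuristics.
import socket, time

def steady_stream(host, port, rate_mb_s=50, chunk_kb=64, seconds=60):
    chunk = bytes(chunk_kb * 1024)                       # zero-filled payload
    interval = len(chunk) / (rate_mb_s * 1024 * 1024)    # seconds per chunk at target rate
    with socket.create_connection((host, port)) as sock:
        deadline = time.monotonic() + seconds
        next_send = time.monotonic()
        while time.monotonic() < deadline:
            sock.sendall(chunk)
            next_send += interval
            sleep_for = next_send - time.monotonic()
            if sleep_for > 0:                            # pace to the target rate
                time.sleep(sleep_for)

# steady_stream("192.168.10.2", 5001, rate_mb_s=100)     # hypothetical address/port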
 

Kewlx25

Distinguished
I've only tried 10GB files on my 1Gb network. I can understand what you mean about really high-speed transfers starting fast, but it doesn't happen at 1Gb speeds with simple transfers.
 

Dennis Wood

Reputable
Mar 12, 2014
5
0
4,510


Pat, if you look at the graphic (and a really nice feature of Windows 8.1), you'll see I was transferring 5GB between RAM disks on two machines. I observed no throttling during these transfers. It's very difficult to test raw performance at these speeds, as it is beyond most disk systems. The max I could typically test between RAM disks on two machines was 8GB at a time.

The reality of a server like ours is that the editing workstations are often limited by their own local disks (even with SSD) rather than by the server's disk array or 10GbE network speeds. Our server tops out at about 1000 MB/s with 8 x 4TB in RAID5, so with two editing workstations hitting it simultaneously (where older SSD write speeds are under 200 MB/s), it would still take 5 of these workstations to saturate the server's throughput.

We do multiple-workstation backup/imaging at the end of the day (using StorageCraft), where multiple workstations hit the server simultaneously over 10GbE. In that case, we have a max of 2-3 workstations with >100GB SSDs pushing encrypted images to the server at the same time. There is no throttling evident there. Each workstation typically pushes data at slightly under the max read speed of the SSD being imaged over the entire backup. When I did observe changes in the linearity of large transfers, it was typically a problem with either the disk systems, antivirus, or high background CPU activity.

Where I did see dramatic hits/throttling in terms of disk performance was in experiments with Windows Storage Spaces where large file transfers would start off impressively from cache, but then sink to unusable levels for video work. I ended up just using a dedicated RAID controller.

One of the reasons I tested extensively using a variety of tools was to establish "real world" performance. In part 6: http://www.cinevate.com/blog/confessions-of-a-10-gbe-newbie-part-6-breaking-the-10gb-data-barrier/ , I used two Adobe "benchmarks" which use Adobe Premiere Pro (we use Adobe CC) and large video files to render, encode and transcode. The output is a series of benchmark numbers which directly reflect real work timing. In other words, these tests are as real-world as it gets... and they allow you to compare to a fairly large database of results.

Keep in mind too that we generate a lot of data every time we do a video and/or photo shoot. We're sitting on about 30TB of online media storage over 10GbE. We of course work on audio within the projects, but Adobe creates local cache files etc. as one renders the timeline, and locally of course we're using 100 to 500GB SSDs. So in that sense we're taking advantage of SSD low latency, while keeping all the large video files on the LAN.