USB 3.0/3.1 Add-on PCI Card - Power vs Reliability vs Speed

milkygirl

Distinguished
Jun 12, 2013
71
0
18,630
Hi guys,

I'm currently using a Z77 Pro3 and it only has 2x USB 3.0 ports.

Will using a USB 3.0/3.1 add-on PCI card be as good as the native solution in terms of power requirements, reliability and speed? (I have an external hard drive in RAID 0 that disconnects if other hard drives are connected at the same time, even though I already gave it 2 USB connectors.)

I probably won't get a front-panel hub as I've had 2 bad experiences so far - the speed, power and reliability just won't cut it.
 
Solution

Mark RM

Admirable
An add-on card can often be better. For servers I've been buying the HighPoint RocketU 1144D; with its x4 PCIe connection and increased electrical feed, each (individual) port on the card gets all the amps the specification can deliver (900mA), even when all four are occupied. The only way to get more is to use so-called charging ports that deliver up to 1.5A.
 
Speed and power from an add-in card should be much the same as the onboard USB 3.0 ports.
If you actually want any level of performance or reliability from the drive though, you would be better off with SATA drives.
External USB drives use SATA internally, but the USB controller adds another layer that reduces performance.
USB 3.0 is technically capable of 60 MB/s, but I've never seen more than 40 MB/s and the latency is relatively high.
Modern SATA drives are capable of over 100 MB/s.
 

Mark RM

Admirable


Everything stated here regarding the performance of USB 3 and SATA-attached USB is wrong.

Max performance of USB 3 is 625 MB/s. With USB 3 external enclosures supporting UASP, drives easily max out. A 100 MB/s hard disk would be considered slow by new-drive standards. I've attached backup disks to USB 3 that max out the drive (150-200 MB/s depending on where the heads are) in exactly the same fashion as native SATA.

In point of fact, I've also maxed out SATA SSDs over USB 3.0 with a UASP-enabled dock.
 

InvalidError

Titan
Moderator

USB2 is 480Mbps half-duplex, which yields about 35MB/s of actual usable speed. USB3 is 5Gbps dual-simplex, which bumps the theoretical maximum closer to 500MB/s in each direction, fast enough to handle SATA SSDs without losing much performance. USB3.1 has the possibility of using two data lanes in each direction, bumping the speed to 10Gbps, which is potentially up to 1GB/s usable.
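
If you want to sanity-check those figures, here's a quick back-of-envelope sketch in Python. The efficiency factors are rough assumptions on my part (encoding plus protocol overhead lumped together), not numbers from the specs:

```python
# Back-of-envelope USB throughput estimates. The efficiency factors are
# rough illustrative assumptions, not values taken from the USB specs.

def usable_mb_per_s(line_rate_gbps, efficiency):
    """Raw line rate (Gbps) times an assumed overall efficiency, in MB/s."""
    return line_rate_gbps * 1e9 * efficiency / 8 / 1e6

print(f"USB2   (480 Mbps, half-duplex): ~{usable_mb_per_s(0.48, 0.58):.0f} MB/s")
print(f"USB3   (5 Gbps,  8b/10b):       ~{usable_mb_per_s(5.0, 0.80):.0f} MB/s per direction")
print(f"USB3.1 (10 Gbps, 128b/132b):    ~{usable_mb_per_s(10.0, 0.85):.0f} MB/s per direction")
```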
 

milkygirl

Distinguished
Jun 12, 2013
71
0
18,630
Yeah, the funny problem is that even with the native USB 3.0 ports, I am getting reduced speeds when 2 high-speed drives are plugged in at the same time, even though they aren't working at the same time.

Example:
Port 1: Lexar 150 MB/s SD card - transfer speed to my local SSD drops to 40 MB/s when port 2 has my 4TB RAID drive connected.
Port 2: RAID drive - similarly, its speed is reduced even though the two ports are working on unrelated tasks.

I would like recommendations for an add-on card that can provide sufficient power, speed and reliability - and no, I don't need charging ports.
 

Sorry, USB 2.0 is up to 480 Mbps, or 60 MB/s, although I'm not sure that accounts for encoding overhead.
USB 3.0 is up to 5 Gbps, or 625 MB/s.

Nevertheless, I've never seen an external USB 3 drive match a SATA drive for performance.
The USB controller does add latency and reduce performance compared to the native SATA interface of the drive.
I've never seen UASP devices, but I'm guessing you will only find this in devices targeted at the server market.
The USB controller used in a typical Western Digital or Seagate external drive enclosure is not going to come close to an internal SATA drive.
 

Mark RM

Admirable


I have a USB 3 dock here on my desk that will max out an OCZ enterprise SATA SSD, specifically an Intrepid 3700 (I have multiple sizes). UASP further decreases the latency penalty of USB 3. So yes, it can match native SATA throughput, since the drive itself is the bottleneck. It doesn't just come close, it matches it. And I'd consider that typical when someone can buy a UASP dock for 40 dollars.

@milkygirl, the card I mentioned has four individual hubs, one port per hub, so plugging devices into them won't cause amperage drops or performance drops on the other hubs/ports on the card (rough sketch below). It's expensive, but it does solve issues like that. Alternatively, you can buy a simple Rosewill add-in card, power it from the PSU and split chores between your built-in USB ports and the add-in card. There's very little performance difference, if any at all.
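
Here's a rough sketch of why the one-hub-per-port design matters for power. The device draws are hypothetical numbers I made up for illustration; 900mA is the USB3 per-port spec limit:

```python
# Hypothetical power budget: four-hub card vs. ports sharing one hub.
PORT_LIMIT_MA = 900  # USB3 spec limit per configured port
draws_ma = {"RAID enclosure": 850, "SD reader": 300}  # made-up device draws

# One hub per port: every device gets the full 900 mA budget to itself.
for name, draw in draws_ma.items():
    ok = draw <= PORT_LIMIT_MA
    print(f"dedicated hub, {name}: {'OK' if ok else 'disconnects'}")

# Ports sharing a single hub's budget: the draws stack up.
total = sum(draws_ma.values())
verdict = "OK" if total <= PORT_LIMIT_MA else "over budget -> drive drops out"
print(f"shared hub, total {total} mA: {verdict}")
```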

 
Solution
The other thing to consider is the DMI speed on the motherboard.
All your SATA ports, USB ports and PCI-E lanes are connected via the Z77 chipset, with the exception of the PCI-E 3.0 x16 slot which is directly connected to the CPU.
If you want to copy data from one drive to another, this needs to pass through the DMI into RAM, and then back through the DMI to the second drive.
The total bandwidth of the DMI is 2GB/s for this board (DMI 2.0).
 

InvalidError

Titan
Moderator

60MB/s is the raw theoretical absolute maximum data rate on USB2. From that, you need to subtract USB2 protocol overhead and the overhead of whatever device protocols are layered on top of it. USB2 is also half-duplex: whenever the host is sending commands or data to a device, the device cannot send anything back, and devices need to wait for the host to poll them to declare interrupts. USB1/2 therefore waste tons of time waiting to get polled by the host and switching between receive and transmit, since they only have a single wire pair to both receive and send data on.

Due to all of the wasted time and overhead, you will rarely manage to extract more than 35MB/s out of USB2.

USB3 provides dedicated RX and TX pairs (two in each direction for USB3.1 on Type-C), which eliminates the need to share bandwidth on a single wire pair and waste so much time waiting and polling.
 

InvalidError

Titan
Moderator

There is no problem there if all you have is individual USB3 and SATA3 drives: the DMI bus is symmetrical, which makes it 2GB/s in + 2GB/s out. Even if you use 500MB/s each way, you still have 1.5GB/s to spare.

DMI becomes a potentially significant bottleneck only for NVMe and SSD arrays, and not many home users are going to copy massive amounts of data from NVMe to a RAID array of SSDs at 2GB/s while doing other stuff on some other drives.
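
As a worked example (my numbers, assuming DMI 2.0 at roughly 2GB/s per direction):

```python
# DMI 2.0 headroom during a drive-to-drive copy (illustrative numbers).
DMI_GB_S = 2.0   # per direction; DMI 2.0 is roughly PCIe 2.0 x4
COPY_GB_S = 0.5  # a fast USB3/SATA3 copy stream

# A copy reads inbound over one direction and writes outbound over the
# other, so each direction only loses COPY_GB_S, not 2x that amount.
for direction in ("inbound", "outbound"):
    print(f"{direction}: {DMI_GB_S - COPY_GB_S:.1f} GB/s to spare")
```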
 

You're right. I didn't realise each PCI-E lane was actually made up of two signalling paths for full duplex transfer.
The DMI 2.0 interface is equivalent to 4 x PCI-E 2.0 lanes, so 2GB/s full duplex.
It seems less of a worry unless you have other add-in cards using a lot of this bandwidth.
A single PCI-E 2.0 x4 card could saturate the DMI. A good example would be a graphics card in the second PCI-E x16 slot.
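
The lane arithmetic, for reference - the encoding efficiencies are the standard PCIe values, the rest is just multiplication:

```python
# Approximate usable PCIe bandwidth per direction.
def lane_gb_s(gt_per_s, encoding_eff):
    """GB/s per lane per direction: transfer rate times encoding efficiency."""
    return gt_per_s * encoding_eff / 8

pcie2_lane = lane_gb_s(5.0, 8 / 10)     # 0.5 GB/s (8b/10b encoding)
pcie3_lane = lane_gb_s(8.0, 128 / 130)  # ~0.985 GB/s (128b/130b encoding)

print(f"PCIe 2.0 x4  (= DMI 2.0): {4 * pcie2_lane:.1f} GB/s per direction")
print(f"PCIe 3.0 x16 (CPU slot) : {16 * pcie3_lane:.1f} GB/s per direction")
```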

 

Mark RM

Admirable


That last part is not how a standard dual-GPU setup behaves; instead, the two slots fall back to x8 speed at PCI-E 3.0, serviced by the CPU.
 

InvalidError

Titan
Moderator

Setups where one GPU is connected to the CPU's x16 PCIe interface while the other is connected to an x4 interface on the chipset are not (officially) supported by AMD and Nvidia. Motherboards with SLI/CF support split the CPU's x16 interface into a pair of x8 ones for the two physical x16 "GPU" slots. (Or x8x4x4 for motherboards that support triple-CF/SLI on mainstream i5/i7.)
 

Motherboards optimised for multi-gpu setups can split the CPU PCI-E lanes to allow two slots to operate at x8 speed.
Most motherboards don't, including the Z77 Pro3:
http://www.asrock.com/mb/Intel/Z77%20Pro3/?cat=Specifications

The PCI-E lanes from the CPU are not split, and are dedicated to the first PCI-E x16 slot.
The second PCI-E x16 slot operates at x4 speed and uses lanes from the Z77 chipset.

There is actually a typo on the spec page as well:
- 1 x PCI Express 3.0 x16 slot (PCIE2: x16 mode)
- 1 x PCI Express 2.0 x16 slot (PCIE3: x4 mode)

This should be:
- 1 x PCI Express 3.0 x16 slot (PCIE3: x16 mode)
- 1 x PCI Express 2.0 x16 slot (PCIE2: x4 mode)
 

Crossfire is supported on these boards, although it would be inadvisable with the second card running at PCI-E 2.0 x4 speed.
It is only Nvidia that limits support to x8 speed or greater for SLI.
This even applies to high-end boards with three PCI-E 3.0 x16 slots that can run in x8/x4/x4 mode: they support 3-way CrossFire but not 3-way SLI.
 

InvalidError

Titan
Moderator

Working (mainly by the omission of an arbitrary lock, like the one Nvidia uses to prevent people from setting themselves up for disappointment) is not the same as being officially supported. PCIe itself is bandwidth- and latency-agnostic: everything is supposed to work regardless of how it is connected, albeit slower than intended.

With newer AMD GPUs using XDMA for CF, more than half of PCIe 2.0 x4's and DMI 2.0's bandwidth can potentially be consumed by frame buffer transfers between GPUs alone. To that, you need to add the fact that HDDs, SSDs and everything else connected to the chipset will be sharing the DMI link to the CPU, subtracting that much from what the 2.0x4 GPU can predictably use. I doubt this is a scenario AMD could be bothered to officially support: too many variables beyond AMD's and software developers' control.
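
To put numbers on that - a rough sketch assuming one full 32-bit frame buffer crosses the link per displayed frame; real AFR traffic patterns differ, so treat these as order-of-magnitude figures:

```python
# Estimated XDMA frame-transfer traffic vs. a PCIe 2.0 x4 link (rough model).
LINK_GB_S = 2.0  # PCIe 2.0 x4, also the DMI 2.0 per-direction limit

for w, h, fps in ((1920, 1080, 60), (2560, 1440, 120), (3840, 2160, 60)):
    gb_s = w * h * 4 * fps / 1e9  # 4 bytes per pixel, one copy per frame
    print(f"{w}x{h}@{fps}: {gb_s:.2f} GB/s, {100 * gb_s / LINK_GB_S:.0f}% of the link")
```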

AMD is simply not going out of its way to prevent you from experimenting.
 


Any motherboard with multiple PCI-E x16 slots will list Crossfire support.
How is this not "officially supported"?
Nvidia does not support these configurations, which is a credit to them, since the bandwidth limitation on the second card could actually reduce performance compared to a single-card configuration.
 

InvalidError

Titan
Moderator

They support CrossFireX - the old kind that uses bridges to transfer frame-buffer and synchronization data between GPUs.