SilverStone Announces ECS03 PCI-E SATA Controller


Symple

Reputable
Jan 26, 2016
4
0
4,510
Well, hello 2003! Oh wait.....

Am I missing the entire point of this card, or which niche exactly are they looking to cover with a 2-port card that bottlenecks long before even the first SATA channel is saturated? First of all, these controllers have been available for well over a decade, so what's the point of launching yet another one? And secondly, why produce something so clearly underpowered that even the conventional HDDs of today (in RAID 0) will be hampered by it?
 

Vatharian

Distinguished
May 22, 2009
90
0
18,630


Duuuuude, it's Silverstone! It must be glorious! I also bet it will be overpriced. Silverstone has the balls to put out some serious hardware, like the slim DVD-RW drive (recently). I understand that slot-in drives are rare, but they charge $150 for it.
 


While I agree it is mostly a pointless product right now, where are you getting your information about bottlenecks? PCIe 2.0 x2 gives 1000MB/s of data bandwidth, while one SATA port uses up to 600MB/s. With 1GB/s you can fully saturate the first port and still run the second at up to 67%. Even SSDs almost never saturate SATA 6Gb/s except in synthetic benchmarks, so this is unlikely to ever be a bottleneck, especially if you're copying data between the two drives. With hard drives, there is no way you'd saturate one port's bandwidth, let alone two, let alone two lanes of PCIe when there are only two ports for one hard drive each.
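
To sanity-check that claim, here's a quick Python sketch using the effective (post-encoding) rates cited in this post; the 500MB/s-per-lane and 600MB/s-per-port figures get hashed out further down the thread:

# Effective (post-8b/10b) bandwidth figures cited in this thread.
PCIE2_LANE_MBPS = 500   # PCIe 2.0, per lane, per direction
SATA3_PORT_MBPS = 600   # SATA 6Gb/s, per port

bus_mbps = 2 * PCIE2_LANE_MBPS          # PCIe 2.0 x2 -> 1000 MB/s
leftover = bus_mbps - SATA3_PORT_MBPS   # after fully feeding port 1
print(f"x2 bus: {bus_mbps} MB/s of data bandwidth")
print(f"port 2 can still run at {leftover / SATA3_PORT_MBPS:.0%} of SATA III")  # 67%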

Also, it should be noted that these controllers have NOT been available in PCIe 2.0 x2. Most of them are either cheap PCIe 2.0 x1, which is a bottleneck, or PCIe 2.0 x4, which is more expensive. PCIe 3.0 is also only available on systems that have no need for a two-port SATA adapter, so PCIe 2.0 makes sense all considered. The only use for this is upgrading somewhat older systems with something better than a craptastic single-lane adapter, while staying cheaper than a PCIe x4 adapter, to decently handle SATA 6Gb/s SSDs.

If I felt like buying (or already had) a good used LGA 1366 system, since they offer huge performance for the money nowadays, then this would definitely go in that computer for good SSD performance. With an overclocked hex-core LGA 1366 i7 or Xeon at around $300 to $400, if you look around for the right prices, you can get a machine that fights a Skylake quad-core i7 in multithreaded performance, with single-threaded performance still much better than AMD's, for the price of only the Skylake i7.
 

firefoxx04

Distinguished
Jan 23, 2009
1,371
1
19,660
If it is cheap, it is exactly what I have been looking for. Stupid ESXi has to pass through entire SATA controllers, meaning if I want to pass drives to a VM, I need somewhere else to install the ESXi OS.
 

nradtke

Reputable
Jan 26, 2016
2
0
4,510
I'm not sure where the bandwidth numbers in this article are coming from.

PCI-E 2.0 spec indicates a max of 500MB/s per lane (note the big B there for bytes). PCI-E 2.0 x2 would use two lanes and have 1000MB/s of bandwidth. Even with the overhead from the 8b/10b encoding scheme, it's still something like 4 gigabits per second per lane, giving 8 gigabits per second for this card.

SATA III spec indicates a max of 6Gbps (note the little b there for bits). This translates to 750 MB/s.

Given that, this card has more than enough theoretical bandwidth to saturate the bus for one drive, while leaving about 30% of the max SATA III spec throughput for the other. In the real world, an HDD will almost never saturate this bus, so it will likely be more than enough for both drives.
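
For anyone who wants to redo that arithmetic, a rough Python sketch of the raw-rate figures used in this post (the reply below refines them with encoding overhead):

# Raw line rates, before encoding overhead.
pcie2_lane_raw_gbps = 5.0   # PCIe 2.0 signals at 5 GT/s per lane
sata3_raw_gbps = 6.0        # SATA III line rate

# 8b/10b encoding: 8 data bits ride in 10 line bits (80% efficiency).
pcie2_lane_eff_gbps = pcie2_lane_raw_gbps * 8 / 10   # ~4 Gb/s per lane
card_eff_gbps = 2 * pcie2_lane_eff_gbps              # ~8 Gb/s for x2

print(f"SATA III raw: {sata3_raw_gbps / 8 * 1000:.0f} MB/s")   # 750 MB/s
print(f"card effective: {card_eff_gbps:.0f} Gb/s total")       # 8 Gb/s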
 


SATA can only transmit 600MB/s of data because the other 150MB/s is overhead from 8b/10b encoding. PCIe 2.0 x2 is enough for up to 167% of a SATA 6Gb/s port. That's also assuming we're going one way, because while SATA is simplex, PCIe is duplex, so we can read from one drive and write to the other, both at full SATA 6Gb/s speed, with plenty of PCIe bandwidth to spare on both paths.
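
To make the duplex point concrete, a minimal sketch (same effective per-direction figures as above):

# PCIe is full duplex: an x2 link has ~1000 MB/s of data bandwidth each way.
PCIE2_X2_PER_DIRECTION_MBPS = 1000
SATA3_EFFECTIVE_MBPS = 600

# Copying drive A -> drive B: reads use one PCIe direction, writes the other.
utilization = SATA3_EFFECTIVE_MBPS / PCIE2_X2_PER_DIRECTION_MBPS
print(f"each PCIe direction is only {utilization:.0%} utilized")   # 60%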
 

nradtke

Reputable
Jan 26, 2016
2
0
4,510


Yeah, I may have omitted some details, but I knew PCI-E 2.0 x2 was plenty of bandwidth. The part that confused me was the article claiming PCI-E 2.0 maxes out at 4Gbps (well below a single SATA III connection).
 


Yeah, the article is wrong about that. PCIe 2.0 x2 has two 500MB/s lanes. That's 5Gb/s each (including their 8b/10b overhead) for 10Gb/s total. They might have made a typo in saying 5Gb/s and been referring to it per port, in which case it is a slight bottleneck, but the PCIe bandwidth isn't arranged like that. Both ports are connected to the PCIe 2.0 x2 bus, so you can run them both at around 5Gb/s, or one at around 6Gb/s and the other at 4Gb/s, or anything like that, so long as you don't exceed SATA 6Gb/s' capability per port and 10Gb/s total.

Of course, those are all theoretical numbers, and real-world numbers will be somewhat lower. You won't get quite the full 6Gb/s out of a SATA port, and you won't get quite the full 5Gb/s out of a PCIe lane.
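
A small sketch of that allocation constraint, assuming the numbers in this post (a 10Gb/s shared x2 bus and a 6Gb/s cap per SATA port):

def split_ok(port_a_gbps, port_b_gbps):
    """Check a raw-rate split against the per-port and shared-bus caps."""
    SATA_PORT_CAP_GBPS = 6.0   # SATA 6Gb/s per port
    PCIE_X2_CAP_GBPS = 10.0    # two 5 GT/s lanes shared by both ports
    return (port_a_gbps <= SATA_PORT_CAP_GBPS
            and port_b_gbps <= SATA_PORT_CAP_GBPS
            and port_a_gbps + port_b_gbps <= PCIE_X2_CAP_GBPS)

print(split_ok(5, 5))   # True  -- both around 5Gb/s
print(split_ok(6, 4))   # True  -- one full, one throttled
print(split_ok(6, 6))   # False -- exceeds the shared x2 bus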
 

c0rr0sive

Reputable
Mar 17, 2015
75
0
4,660
If it is cheap, it is exactly what I have been looking for. Stupid ESXi has to pass through entire SATA controllers, meaning if I want to pass drives to a VM, I need somewhere else to install the ESXi OS.

Get a proper SAS controller instead? An Adaptec 9650SE can be had for under $100 these days. That aside, there's no point in having ESXi installed on an HDD in the system; it's a waste of a whole disk, to be fairly honest.
 

eriko

Distinguished
Mar 12, 2008
212
0
18,690
Well, hello 2003! Oh wait.....

Am I missing the entire point of this card, or which niche exactly are they looking to cover with a 2-port card that bottlenecks long before even the first SATA channel is saturated? First of all, these controllers have been available for well over a decade, so what's the point of launching yet another one? And secondly, why produce something so clearly underpowered that even the conventional HDDs of today (in RAID 0) will be hampered by it?


Stumped me too.

Zero effs given.
 
I'm not sure how you guys are missing the bottleneck.

A single PCI-E 2.0 lane is capable of up to 4 Gbps (500 MB/s) of bandwidth.
A single SATA-III connection is capable of 6 Gbps (750 MB/s) of bandwidth.

So if this card uses two PCI-E 2.0 lanes totaling 8 Gbps or 1000 MB/s of bandwidth, it is insufficient to feed both SATA-III ports that need a combined total of 12 Gbps or 1500 MB/s of bandwidth.

If you factor in overhead, the situation gets worse. SATA-III drives essentially top out at around 600 MB/s of transfer speed, but you can't say that a drive needs only 600 MB/s of bandwidth. That other 150 MB/s is lost as overhead, so that data is still being transferred through the SATA-III interface.

If you then factor in the 20 percent overhead that PCI-E 2.0 has, that drops it down further to 800 MB/s of total bandwidth to feed two SSDs requiring 1500 MB/s of bandwidth to operate at peak performance. In other words, the PCI-E 2.0 x2 interface bottlenecks the two drives in simultaneous operation to the point that they are both running at about half-speed. Either that or one is operating at full speed, while the other is operating at less than 50 MB/s.
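
Spelled out, the arithmetic as this post presents it (note that the replies below dispute stacking the extra 20 percent overhead on top of the 500 MB/s figure):

# Figures exactly as this post states them; the follow-up replies dispute them.
lane_mbps = 500                      # per PCIe 2.0 lane
bus_mbps = 2 * lane_mbps             # 1000 MB/s for x2
bus_after_overhead = bus_mbps * 0.8  # a further 20% overhead -> 800 MB/s
demand_mbps = 2 * 750                # two SATA III ports at 750 MB/s raw
print(f"{bus_after_overhead:.0f} MB/s available vs {demand_mbps} MB/s demanded")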
 


SATA III does not actually transmit 6Gb/s of data, only 600MB/s. The other 150MB/s is overhead, which does not get transferred over the PCIe bus. Yes, it gets transferred over the SATA bus, but it stops there at the controller. Only the data is passed through the PCIe bus.

PCIe's overhead is already factored in at 500MB/s. The interface is actually 5Gb/s, not 4Gb/s, if we count the overhead of PCIe 2.0. You have 1GB/s of PCIe data bandwidth with PCIe 2.0 x2 and the SSDs can't pull more than 1.2GB/s of data bandwidth between both SATA III ports. That is a fairly small bottleneck, especially since SSDs typically aren't fast enough to even come close to saturating SATA III in almost all real-world workloads and even when they are, there is rarely a noticeable improvement in real-world performance because most programs either don't need that performance or weren't made to take proper advantage of it.

PCIe 2.0 x2 is adequate for almost everything that two SATA III ports can do and is only a moderately small bottleneck even in the worst case scenario, such as very high performance RAID 0 SSDs in workloads that scale well.
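
The corrected comparison, with data rates (not raw line rates) on both sides, as a quick sketch:

# Effective (post-encoding) data rates on both sides of the controller.
PCIE2_X2_DATA_MBPS = 1000   # 2 lanes x 500 MB/s
SATA3_DATA_MBPS = 600       # per port, after 8b/10b

worst_case_demand = 2 * SATA3_DATA_MBPS           # both ports flat out: 1200 MB/s
shortfall = worst_case_demand - PCIE2_X2_DATA_MBPS
print(f"worst-case shortfall: {shortfall} MB/s "
      f"({shortfall / worst_case_demand:.0%} of demand)")   # 200 MB/s, ~17%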
 
No, you are wrong. The maximum theoretical bandwidth of PCI-E 2.0 per lane is 4 Gbps. You lose roughly 20 percent of that in overhead.
http://www.xilinx.com/support/documentation/white_papers/wp350.pdf

So the bandwidth deficit is greater than you're letting on.

The numbers I used in response to your comments and in the article are accurate, and my statement that "This might cause a bottleneck issue for users if they connect two SSDs to the controller and access them simultaneously, transferring large amounts of data, but it shouldn't cause a serious problem when accessing drives one at a time or with conventional HDD storage devices" is also accurate. Even your initial comment acknowledged that a PCI-E 2.0 x2 connection cannot run two SATA-III ports at 100% at the same time. I'm sure many users would be displeased with losing roughly 100 MB/s of data throughput when transferring files or using a RAID configuration because of a PCI-E bottleneck, and it was worth mentioning in the article.
 


According to the guys who make PCIe, I am correct. This is their website:
http://pcisig.com/faq?field_category_value%5B%5D=pci_express_2.0&keys=

Notice how it refers to PCIe 1.x as PCIe 2.5GT/s and PCIe 2.0 as PCIe 5GT/s. PCIe 1.x is 2.5GT/s with 8b/10b encoding, which causes 20% overhead and brings the effective data rate down to 2Gb/s, or 250MB/s. PCIe 2.x is double that at 5GT/s, with 8b/10b encoding and 4Gb/s of effective bandwidth, or 500MB/s. PCIe 3.x is 8GT/s with 128b/130b encoding, giving it slightly under 8Gb/s, or about 1GB/s. Few if any users would even notice such a bandwidth bottleneck, because it is already within the margin where SSDs don't fully saturate SATA anyway.
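
Those per-generation figures, computed from the raw rates and encoding schemes (they match the numbers in the Xilinx paper quoted below):

# (generation, raw GT/s, data bits, line bits)
GENERATIONS = [
    ("PCIe 1.x", 2.5, 8, 10),     # 8b/10b encoding
    ("PCIe 2.x", 5.0, 8, 10),     # 8b/10b encoding
    ("PCIe 3.x", 8.0, 128, 130),  # 128b/130b encoding
]
for name, raw_gts, data_bits, line_bits in GENERATIONS:
    eff_gbps = raw_gts * data_bits / line_bits
    print(f"{name}: {raw_gts} GT/s raw -> {eff_gbps:.2f} Gb/s "
          f"({eff_gbps / 8 * 1000:.0f} MB/s) per lane, per direction")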

EDIT: Also, I quote the second page, first paragraph of the pdf you linked:

The specified maximum transfer rate of Generation 1 (Gen 1) PCI Express systems is 2.5 Gb/s; Generation 2 (Gen 2) PCI Express systems, 5.0 Gb/s; and Generation 3 (Gen 3) PCI Express systems, 8.0 Gb/s. These rates specify the raw bit transfer rate per lane in a single direction and not the rate at which data is transferred through the system. Effective data transfer rate or performance is lower due to overhead and other system design trade-offs.

I quote the third page, third paragraph:
To summarize this section: Although Gen 1 transmission lines can operate at 2.5 Gb/s, 8B/10B encoding reduces effective bandwidth to 2.0 Gb/s per direction per lane. Similarly, Gen 3 can operate at 8.0 Gb/s using 128B/130B encoding, which reduces the theoretical bandwidth to 7.9 Gb/s. The actual system bandwidth is slightly lower due to packet and traffic overhead, producing an overall system efficiency of 7.8 Gb/s per direction per lane.

Your link agrees with me.
 