Is your SSD turning in lower benchmark numbers than what its spec sheet promises? It's possible that your motherboard's chipset or an add-in storage controller is to blame. But do those results really mean anything in an enthusiast's desktop PC?
The real-world performance of an SSD doesn't just depend on the drive itself, but also on the computer you're dropping it into. Which chipset does your motherboard employ? Is it older, and limited to 3 Gb/s transfers, or does it sport 6 Gb/s connectivity? And even if your storage controller supports the very latest standards, is it as fast as controllers with similar specifications from competing vendors?
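Whether a given port negotiated 3 Gb/s or 6 Gb/s can be checked without opening the case. Here's a minimal sketch, assuming a Linux host (the kernel exposes each link's negotiated speed under /sys/class/ata_link); the `sysfs_root` parameter is our own addition so the helper can be pointed at test data:

```python
import glob
import os

def sata_link_speeds(sysfs_root="/sys/class/ata_link"):
    """Return {link name: negotiated SATA speed} for each link.

    On Linux, /sys/class/ata_link/link*/sata_spd contains strings
    like "3.0 Gbps" or "6.0 Gbps". Links that are down or ports that
    don't expose the file are skipped.
    """
    speeds = {}
    for link in sorted(glob.glob(os.path.join(sysfs_root, "link*"))):
        spd_file = os.path.join(link, "sata_spd")
        try:
            with open(spd_file) as f:
                speeds[os.path.basename(link)] = f.read().strip()
        except OSError:
            continue  # no sata_spd entry for this link
    return speeds
```

On a board with both 3 Gb/s chipset ports and a 6 Gb/s add-in controller, this makes it obvious which physical port the SSD actually landed on.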

We've spent plenty of time digging into the performance attributes of SSDs. Now, we want to have a look at how different south bridges and standalone controllers affect storage performance. We gathered an impressive array of motherboards and add-in cards from around the lab. The boards represent a veritable who's who of chipsets from 2008 to 2013, including AMD SB750, AMD A75, AMD SB950, Intel Z87, Intel P55, Intel ICH10R, and Intel Z77. The cards include ASMedia's ASM1061, Marvell's 88SE9123-NAA2, Marvell's 88SE9125-NAA2, Marvell's 88SE9128-NAA2, and Marvell's 88SE9130-NAA2 controllers.
In an SSD review, you'd see us use a bunch of different drives as comparison points. Here, though, we want to stick with one and have that serve as a stake in the ground, facilitating consistent throughput to our various controllers. Samsung's 256 GB 840 Pro is one of the highest-end SSDs we have in the lab, and the company sent out one to each of our editors for use as Tom's Hardware's reference through 2013.
If you want more information on the 840 Pro, check out our launch coverage: Samsung 840 Pro SSD: More Speed, Less Power, And Toggle-Mode 2.0, along with our last two experiments with these things: Is A SATA 3Gb/s Platform Still Worth Upgrading With An SSD? and One SSD Vs. Two In RAID: Which Is Better?
- Twelve SATA Controllers, Benchmarked
- Chipsets, SATA Controllers, And The Test Platforms
- Results: Sequential Read And Write Performance
- Results: 4 KB Random Reads/Writes (AS-SSD And Iometer)
- Results: Access Time And I/O Performance
- Results: PCMark Vantage And Tom's Hardware Storage Bench
- Results: AS-SSD Copy Performance Test And Overall Results
- Match A Modern SSD Up To A 6 Gb/s Controller
As a side note, when can we see a USB 3.0 controller comparison with those new AMD and Intel chipsets?
The one thing the article didn't say, which it should, is that Marvell controllers
are garbage. Notice how often the P55 matches or beats one of the Marvell
6 Gb/s controllers. The PCIe x1 link issue is bad enough, but sometimes even
having a proper connection doesn't help their performance.
Also not mentioned is SSD reliability. The only time I've ever had problems
with an SSD was when it was connected to a Marvell controller (e.g., a failed
firmware update; move the SSD to an Intel port, and the update then works fine).
Ian.
Most embedded (or external) chipsets carry a bridge between SATA and PCI Express. The CPU accepts PCI Express connections, not SATA, so a conversion must be made, and the SATA controller handles it. Each PCI Express 2.0 lane carries roughly 5 Gb/s of raw bandwidth (about 500 MB/s of usable throughput after 8b/10b encoding), while a PCI Express 3.0 lane carries roughly 8 Gb/s (about 985 MB/s).
Here's the problem I have seen with external expansion cards. They connect four SATA ports to a single PCI Express 2.0 lane. So potentially, four connected SATA 6 Gb/s drives, or 24 Gb/s of total I/O throughput, are being funneled into a single 5 Gb/s link to the CPU. I don't care how good the SATA controller is at processing and prioritizing I/O, you are going to have a bottleneck. Even four SATA 3 Gb/s drives generate up to 12 Gb/s of aggregate throughput, more than a single PCI Express 2.0 lane can handle. A single fast SSD can already saturate a 3 Gb/s link, so this is not a theoretical bottleneck; it is a very real limitation.
So, going back to the article: at most, I have seen four SATA ports connected to a single PCI Express 2.0 lane. I have seen six or eight connected to either two discrete lanes or an x2 link (or an x4 link in the case of SAS), which carries approximately 10 Gb/s of total throughput. So, depending on how the chipset is wired up on the motherboard, it may be the PCI Express lanes limiting your throughput rather than the SATA controller itself. Different ports may be connected to different x1 lanes or to an x2 link, giving you either two discrete paths to the CPU, maximizing throughput, or a larger pipeline to the CPU, which is better than an x1 lane but not nearly as good as discrete pathways.
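The oversubscription arithmetic in that comment is easy to sanity-check. A minimal sketch, assuming 8b/10b encoding (80% efficiency) for both SATA and PCI Express 2.0, and ignoring protocol overhead beyond line encoding; the helper name and figures are illustrative, not taken from the article:

```python
def effective_mbps(raw_gbps, encoding_efficiency):
    """Usable payload bandwidth in MB/s from a raw line rate in Gb/s."""
    return raw_gbps * 1000 / 8 * encoding_efficiency

# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> ~500 MB/s usable
pcie2_lane = effective_mbps(5.0, 0.8)

# SATA 6 Gb/s: also 8b/10b encoded -> ~600 MB/s usable per port
sata6_port = effective_mbps(6.0, 0.8)

# Four SATA 6 Gb/s ports funneled through one PCIe 2.0 x1 lane:
aggregate = 4 * sata6_port              # ~2400 MB/s offered
oversubscription = aggregate / pcie2_lane  # ~4.8x over the link's capacity
```

Even one SATA 6 Gb/s SSD (~600 MB/s) outruns a single PCIe 2.0 lane (~500 MB/s), which is consistent with the x1-link complaints about some add-in 6 Gb/s controllers.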
I have an external PCI Express controller with a few drives on my main system, and when transferring files from drives on the internal (motherboard) chipset to drives on the connected card, there is a noticeable throughput difference.
No, OF COURSE and OBVIOUSLY you plug devices into the built-in southbridge-connected SATA ports. Anyone who even thinks about installing his own SSD will AUTOMATICALLY do that, not go out and buy a separate SATA controller!
This would have been helpful when I was shopping for PCIe controller cards, although I didn't buy mine to use with an SSD.
Regarding the results, I guess I might get a bit of performance boost moving from an AMD 790X to AMD 990X board, which is what I plan to do.