Solidigm's New Synergy 2.0 SSD Driver Claims up to 170% Speed Up
Top-down optimization with SSD toolkit and driver.
Solidigm's new Synergy 2.0 SSD driver and software are designed to deliver up to a 120% increase in some types of 4K random read workloads and up to a 170% increase in 4K sequential tasks. The goal is leading-edge game loading and system boot performance, achieved by using smart algorithms to prioritize the data you use most frequently.
Solidigm, an SK hynix venture incorporating elements of Intel's old SSD business, took its name from the combination of 'solid state storage' and the word 'paradigm.' It's only fitting, then, that the company has taken steps to separate from the pack by changing the SSD paradigm from the top down. This strategy has led to the release of Solidigm's Synergy 2.0 software, which works above the SSD hardware and firmware layers.
Solidigm's overall approach is two-pronged: one side is the Synergy Driver, which directly improves the user's experience, and the other is the Synergy Toolkit, an SSD management application. Together, these software components help get more out of Solidigm SSDs via targeted real-world optimizations.
The first prong of Solidigm's software strategy is the driver, known as the Solidigm Synergy Driver. It includes three prominent performance features: Smart Prefetch, Dynamic Queue Assignment, and Fast Lane.
The most-touted feature is Fast Lane, previously known as Host Managed Caching (HMC). It uses SLC read caching to improve boot and application load times by identifying the most frequently used (MFU) user data. Under ideal circumstances, namely 4KB random reads on a 50% full drive, this can improve read performance by up to 120%. SSDs perform worse as they fill from the fresh-out-of-box (FOB) state, and the dynamic SLC cache shrinks as drive utilization grows. The feature is therefore most effective between 25% and 75% drive usage, with 50% being the ideal target.
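Solidigm has not published how Fast Lane actually decides what to cache, but the general idea of promoting the most frequently used blocks into a small, fast region can be sketched as follows. Everything here (class name, slot count, the frequency-counting policy) is a hypothetical illustration, not Solidigm's implementation:

```python
from collections import Counter

class MFUReadCache:
    """Illustrative sketch of frequency-based read caching in the spirit
    of Fast Lane (hypothetical logic: the real algorithm is not public).
    The blocks read most often are promoted into a small, fast
    (SLC-like) cache region."""

    def __init__(self, cache_slots=4):
        self.read_counts = Counter()   # per-block read frequency
        self.cache = set()             # blocks currently in the fast cache
        self.cache_slots = cache_slots

    def on_read(self, block):
        self.read_counts[block] += 1
        # Promote the most frequently used blocks into the fast cache.
        top = {b for b, _ in self.read_counts.most_common(self.cache_slots)}
        self.cache = top
        return block in top            # True -> served from fast cache

cache = MFUReadCache(cache_slots=2)
for b in [1, 1, 1, 2, 2, 3]:
    cache.on_read(b)
print(sorted(cache.cache))  # prints [1, 2] -- the two hottest blocks
```

A real drive would also have to demote blocks as the SLC region shrinks with drive utilization, which is why the sweet spot Solidigm quotes sits around 50% full.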
The Smart Prefetch feature identifies predictable read streams, typically sequential reads with a queue depth of one, to prepare data before it is needed. Gaming is a typical such workload; 4KB I/O is the most common size and has the most performance to gain, although the feature handles chunks up to 128KB across up to eight 512KB streams. Solidigm demonstrated an up to 170% speedup for QD1 4KB sequential reads, but in practice, this should only improve load times by single digits.
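The core of any such feature is recognizing that consecutive reads form a sequential stream. A minimal sketch of that detection logic might look like the following, assuming a simple "prefetch once two reads in a row are sequential" policy (the thresholds and structure are illustrative, not Solidigm's):

```python
class PrefetchDetector:
    """Minimal sketch of sequential-stream detection for prefetching
    (assumed logic; not Solidigm's implementation). If consecutive reads
    advance by exactly one chunk, the next chunk is prefetched."""

    def __init__(self, chunk_size=4096, max_streams=8):
        self.chunk_size = chunk_size
        self.streams = {}          # next expected offset -> run length
        self.max_streams = max_streams

    def on_read(self, offset):
        # A read at an expected offset extends an existing stream.
        run = self.streams.pop(offset, 0) + 1
        next_offset = offset + self.chunk_size
        if len(self.streams) < self.max_streams:
            self.streams[next_offset] = run
        # Prefetch once the stream looks established (2+ sequential reads).
        return next_offset if run >= 2 else None

d = PrefetchDetector()
assert d.on_read(0) is None        # first read: no pattern yet
assert d.on_read(4096) == 8192     # sequential -> prefetch next chunk
```

At QD1 the drive is otherwise idle between requests, which is exactly why this pattern has headroom to exploit.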
Dynamic Queue Assignment works by assigning I/O queues to less-utilized CPU cores, which usually isn't an issue but can become a bottleneck with certain workloads. Solidigm says this improves QD32 4K random write performance by up to 20%, and it should also benefit QD32 4K random reads. In general, the feature is designed for high queue depths, particularly with smaller I/O sizes, which has potential uses in some content creation workloads.
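The underlying policy is a load-balancing problem: pick the idlest core for each queue. A toy sketch of that idea, with a made-up per-queue load increment (the real driver's heuristics are not public), could be:

```python
def assign_queues(queues, core_loads):
    """Sketch of the idea behind Dynamic Queue Assignment (hypothetical
    policy; not Solidigm's actual heuristic). Each I/O queue is pinned
    to the currently least-utilized CPU core."""
    assignment = {}
    loads = dict(core_loads)               # core id -> utilization (0..1)
    for q in queues:
        core = min(loads, key=loads.get)   # pick the idlest core
        assignment[q] = core
        loads[core] += 0.5                 # assume servicing the queue adds load
    return assignment

# Cores 0 and 1 are busy; the queues land on the idler cores.
print(assign_queues(["q0", "q1"], {0: 0.9, 1: 0.8, 2: 0.1, 3: 0.2}))
# prints {'q0': 2, 'q1': 3}
```

Spreading queues away from saturated cores matters most at high queue depths with small I/O, where per-request CPU overhead dominates.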
The second prong of Solidigm's software strategy is an SSD toolkit, the Solidigm Synergy Toolkit. This toolkit is compatible with all SSDs, including those of competitors. Universal features include real-time health monitoring with S.M.A.R.T. data, drive information, diagnostics, and secure erase. Drive information includes firmware and driver versions, and firmware may be updated for Solidigm drives through this application. Also shown are the host memory buffer (HMB) status and any partitioning. The write cache can also be evicted on the P41 Plus, which does impact the Fast Lane feature.
You will need a Solidigm P41 Plus SSD to explore the new 2.0 driver, as it is currently the only SSD that supports the complete feature set. The Solidigm P44 Pro, Intel 665p, and Intel 670p are also supported by the driver but lack the Fast Lane feature. Solidigm intends to add these features to future drives. The company claims the gap is a firmware limitation, but it may come down to needing something like the P41 Plus's unique SLC cache configuration. The Intel 660p is not officially supported at all, despite using the same controller as the 665p.
It’s true that software is often an afterthought with SSD design, although Microsoft’s DirectStorage API has encouraged some interest, and Solidigm’s driver fully supports it. Solidigm is also excited about the Synergy 2.0 software improvements, assuring us it has real-world benefits that may not always show up on synthetic benchmarks. The long-term intention is to improve this software over time while developing better hardware products. As such, additions to the Toolkit are forthcoming and the driver will see further optimization.
Getting more out of your device is always a good thing, so we are excited to see what Solidigm's new software brings to the table. You can download Solidigm’s 2.0 software on its website and begin using it today. Meanwhile, we're working on our own series of tests to characterize performance. Stay tuned.
Shane Downing is a Freelance Reviewer for Tom’s Hardware US, covering consumer storage hardware.
-
DMW888 I just installed a fresh Win 11 on my new P44 Pro 1TB then had a look at their software.
Synergy seemed rather useless considering their Storage tool app has nearly the same functionality, albeit without the fancy UI and 'Fast Lane', 'Clear Write Cache' setting options.
Both these options were unavailable to me however for whatever reason so I just kept Storage tool.
Message was "This operation can't be performed on this drive" -
bit_user Hmm, I like the host-managed caching. It would be nice if it let you "pin" certain files and directories to the cache, which would essentially prioritize them (but, they'd still have to get demoted to slower storage, as the drive's capacity nears full).
As for other stuff like smart-prefetch and posting writes from less-utilized CPU cores, those are optimizations I'd rather see the OS do. That's easier for the latter, but smart-prefetch fits in with host-managed caching because you need to store additional usage data to support it, which could be tricky if it's not built right into the filesystem.
Thinking about it some more, it seems like host-managed caching could be a standard NVMe thing. It wouldn't be supported across all drives, but it seems like you could add some advisory bits in the NVMe protocol that would enable cross-vendor implementations. That would enable the host management part to be handled by the OS, as well. -
thestryker A lot of what they're doing, and trying to do (gleaned from the L1T interview) is stuff that could be baked into the OS and/or be part of NVMe standards. It seemed like when the team worked at Intel there wasn't any interest from management to take any swings at improving things from the software side (if anyone isn't aware this team is the Intel NAND/storage team). I think a lot of optimizations will be very important as the industry tries to move beyond QLC to drive their costs down further.
Here's the L1T interview for anyone who didn't come from the other storage thread: https://www.youtube.com/watch?v=8YBeriMsDS0 -
bit_user
thestryker said: I think a lot of optimizations will be very important as the industry tries to move beyond QLC to drive their costs down further.
As you're probably aware, the capacity yielded by packing more bits per cell increases as 1/x. So, you only get 25% more capacity by going from 4 to 5 bits per cell, yet you're cutting in half the voltage difference between two states, making it much more prone to noise and dramatically reducing power-off data retention time. Plus, you need more error-correction overhead, which should cut down that 25% figure a little further.
In other words, I sure hope the industry doesn't go to 5 bits, but that will probably happen for at least bottom-tier consumer flash. The only good thing about it is the improvements in cell design that enable PLC should also benefit lower-density NAND.