Diskeeper’s Controversial SSD Defrager

Status
Not open for further replies.
Guest
Diskeeper is like a manufacturer of sleeves for floppy drives: Obsolete. Meanwhile on Denial Street, Seagate is still claiming that SSDs will probably never catch on.
 
Guest
Actually, the link to the OCZ forum shows that HyperFast did result in some performance improvements, but nowhere near as much as Diskeeper is claiming.
 

pbrigido
Anyone who buys into the idea of an SSD needing to be defragged needs to study the technology a bit more. But in the computer industry, if there is a buck to be made, someone will try.
 

sidewinderdt
To add fuel to the fire, straight from an Intel SSD engineer:
"Q. Do SSDs need to be defragmented?

A. Unfortunately this answer isn't exactly straightforward. Solid state drives generally do not organize data the way that HDDs do, or the way the operating system is expecting them to. This is done to overcome the limitations of flash memory (e.g., wear leveling). For that reason, standard defrag tools will not make correct decisions about how to optimize the file system layout on an SSD. I would cautiously suggest that anyone using any SSD should disable automatic defrag on that drive and not bother running it manually. SSDs are exceedingly fast at seeking, so fetching a seemingly scattered file is going to be nearly as fast as fetching a file that is written sequentially. A traditional HDD will fetch that same scattered file drastically slower, which was the original motivation for defragmentation.


That said, there certainly are best and worst case layouts for data organization on SSD media. Currently the firmware is responsible for keeping everything as organized as possible. There might be a new opportunity for tools to be developed that will "defragment" an SSD, but they may need inside knowledge of how each SSD works. The magnitude of the fragmentation problem is reduced though, because the performance difference between an optimal layout and worst case isn't nearly as crippling as with a HDD."

http://enthusiast.hardocp.com/article.html?art=MTYxMSwyLCxoZW50aHVzaWFzdA==

And BTW, defrag is totally different from wear leveling. The two have nothing to do with each other: wear leveling is the reorganization of data at the physical block (cell) level of the MLC SSD, while defrag is the organization of data at the logical (NTFS) level of the operating system.
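
To make that distinction concrete, here is a minimal, purely illustrative Python sketch (not any vendor's actual FTL code, and the page counts are made up): the OS and any defragger only ever see and reorder logical cluster numbers, while the controller's translation layer privately decides which physical page each write lands on.

# Toy illustration only: a flash translation layer (FTL) keeps a private
# logical-to-physical map. Defrag reorders *logical* cluster numbers;
# wear leveling decides the *physical* page for each write.
import random

class ToyFTL:
    def __init__(self, physical_pages=16):
        self.free = list(range(physical_pages))  # unused physical pages
        random.shuffle(self.free)                # stand-in for wear-aware placement
        self.map = {}                            # logical cluster -> physical page

    def write(self, logical_cluster, data):
        phys = self.free.pop()                   # controller picks the location
        self.map[logical_cluster] = phys
        print(f"logical {logical_cluster:2d} -> physical {phys:2d}: {data}")

ftl = ToyFTL()
# A "defragmented" file: logical clusters 0,1,2,3 look contiguous to the OS...
for cluster in range(4):
    ftl.write(cluster, f"file-part-{cluster}")
# ...but the physical pages are wherever the controller chose to put them.
print("physical layout:", [ftl.map[c] for c in range(4)])
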
Check out the comments section here:
http://www.diskeeperblog.com/archives/2008/12/hyperfast_is_al.html
 

SparkyTheWonderDog
My thesis is on the optimization of wear-leveling algorithms under operating-system file-system usage. I directly monitor the embedded NAND flash component connected to the embedded controller (the NAND device is laser-shaved and its bond wires connected to a dedicated FPGA-based tester), recording every NAND block-erasure count for various controller commands and extending the test over the life of the product (some up to hundreds of TB of data). In addition, I have custom hardware to drive the SSD/SD/MMC device directly (emulating the host interface), with complete control of all parameters: static and dynamic areas, data pattern, sector size, transfer length, and many others. I have data showing that there are significant advantages to large transfer sizes relative to the amount of dynamic area available. In fact, totally random write access with small transfer sizes is extremely slow (the worst case) due to wear-leveling effects such as write amplification and block erasures, and it shortens the life of the product.
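
To illustrate that effect with a stripped-down model (the block/page sizes and the worst-case assumption are mine, not figures from the thesis data):

# Illustrative worst-case model only (real controllers are smarter):
# assume an update smaller than an erase block forces the controller to
# read the block, erase it, and program the whole block back.
BLOCK = 128 * 1024        # erase-block size in bytes (hypothetical)
PAGE = 4 * 1024           # program-page size in bytes (hypothetical)
TOTAL = 64 * 1024 * 1024  # 64 MB of host writes

def flash_bytes_programmed(transfer_size):
    """Bytes actually programmed to NAND for TOTAL bytes of host writes,
    when every transfer lands in a different erase block (worst case)."""
    writes = TOTAL // transfer_size
    cost_per_write = BLOCK if transfer_size < BLOCK else transfer_size
    return writes * cost_per_write

for size in (PAGE, 32 * 1024, BLOCK):
    programmed = flash_bytes_programmed(size)
    print(f"{size // 1024:3d} KB transfers -> write amplification "
          f"{programmed / TOTAL:.1f}x")
# 4 KB scattered writes amplify 32x in this toy model; block-sized
# sequential writes amplify 1x, which is the kind of gap described above.
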

I can see the point of their research, but it will require the operating system's API to be optimized to take real advantage of the controller's wear-leveling algorithm, and these algorithms vary by controller and product family. There are no 'standards' yet, as this is an emerging technology used in many different applications.

My research aims to link the operating system's file management to the controller and improve both access and wear performance. New NAND technologies are emerging as well (caching, striping, multiple dies and channels, etc.). I read an article saying that Windows 7 has worked on this.

Defragmenting an SSD/SD/MMC device so that larger transfers can be written without causing the wear-leveling algorithm to internally move data sectors around (and erase blocks) requires insight into the controller's algorithm, which would directly affect write performance. A 'generic' method may find it difficult to show any performance gain across different devices, which is probably why no significant improvement was observed using a traditional file-system approach with this product.

Anyway, I believe what will happen is that controllers and NAND technology, coupled with new embedded commands linking the file-system API to the hardware, will leapfrog performance in the very near future. This is only my opinion; hopefully, my thesis results will show this promise.
 

shqtth
Flash uses pages.

So if a file's data were laid out sequentially within pages, fetching it would require fewer pages and would provide faster access.

Also, for writing data, writing a full page is faster than reading a page, inserting data into a section of it, and then writing the page back to flash.

Memory devices allow access to a specific address, but flash/EEPROM etc. use pages, and when writing data a whole page has to be written. When fetching data, depending on the device, it may be possible to fetch a specific piece of data within a page (by offset), or the whole page may have to be fetched. When a full page has to be fetched just to get one piece of data, efficiency is lost.

It is up to the controller to hide this. A controller will use RAM to shadow/cache pages and will write a page back when it is best to do so. When writing some data to a page, the controller will fetch the page, insert the data, then write the page back. If a page is accessed a lot (reading and writing), it's best to cache it to reduce the time needed to write and/or to close the page when the data is already in cache. All in all, the RAM helps reduce wear and tear, increases throughput, and ultimately saves energy, as constant writes use more power than reads.
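
A rough sketch of that read-modify-write behaviour and the benefit of a RAM page cache (toy model only; the page size and cache policy are assumptions, not any real controller's design):

# Toy page cache illustrating read-modify-write at page granularity.
PAGE_SIZE = 2048  # bytes per flash page (hypothetical)

class ToyPageCache:
    def __init__(self):
        self.flash = {}          # page number -> bytes (simulated NAND)
        self.cache = {}          # page number -> dirty bytearray held in RAM
        self.reads = self.programs = 0

    def write_bytes(self, page_no, offset, data):
        if page_no not in self.cache:
            # Partial update: must fetch the whole page first (read-modify-write).
            self.reads += 1
            self.cache[page_no] = bytearray(
                self.flash.get(page_no, bytes(PAGE_SIZE)))
        self.cache[page_no][offset:offset + len(data)] = data

    def flush(self):
        # Program each dirty page back to flash exactly once.
        for page_no, buf in self.cache.items():
            self.flash[page_no] = bytes(buf)
            self.programs += 1
        self.cache.clear()

pc = ToyPageCache()
for i in range(100):                 # 100 small updates to the same page...
    pc.write_bytes(page_no=7, offset=i, data=b"x")
pc.flush()
print(pc.reads, "page read(s),", pc.programs, "page program(s)")
# -> 1 page read, 1 page program: the cache turned 100 tiny writes into one.
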

So the way flash is accessed is not that different from hard drives; what is different is that opening a new page is quick, with no seek time.


So imagine this: you have a file that would use 5 pages if it were sequential, but since it is fragmented it touches 8 pages.

Well, the data that now has to be fetched is 8 pages * page size, and the efficiency is 5/8, so effective bandwidth drops to about 62% of the sequential case.
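
Worked out with an assumed 4 KB page size, the arithmetic looks like this:

# Toy arithmetic for the example above (page size is an assumption).
PAGE_SIZE_KB = 4
pages_sequential = 5   # pages the file would span if laid out contiguously
pages_fragmented = 8   # pages it actually touches when fragmented

data_fetched_kb = pages_fragmented * PAGE_SIZE_KB
efficiency = pages_sequential / pages_fragmented
print(f"fetched {data_fetched_kb} KB for a ~{pages_sequential * PAGE_SIZE_KB} KB file, "
      f"efficiency {efficiency:.1%}")   # -> efficiency 62.5%
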

One reason flash is so cheap is that it uses pages/rows for reading and writing. Other types of flash that allow reading and writing at individual addresses are more expensive, as they use more complex circuitry.
 

mmaterie
Hi David, thanks for writing a post on HyperFast. I'm the Director of Product Management at Diskeeper and the author of the blog that has been referred to a few times in this thread. As one of the other comments already noted, the testing done at the OCZ forums DOES show HyperFast provides benefit. The wording used by the person testing at OCZ (and they did a great job) was "no notable" change. And I would agree that, when compared to the caching programs, it does not offer the same numbers. HyperFast does offer 100% data safety, as it performs atomic transactions using Windows FSCTLs (often collectively called the MoveFile API) that we originally co-wrote with Microsoft in the '90s, so data is never sitting in a log/cache.

All these SSD optimization technologies are essentially solving the same issue: fragmented free space in the file system causing sequential write I/O to be split into what is then passed to the disk subsystem as random I/O. Expanded write-caching programs can also consolidate standard random writes from background Windows operations into sequential writes. The testing we presented in the HyperFast papers is based on a Vista PC after 6 months of typical business use. The more you use Windows on an SSD, the more fragmented the free space gets and the worse general write performance on the SSD becomes. To that effect, the worse the starting environment, the bigger the improvement you'd see running HyperFast.
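
A hedged sketch of the failure mode being described (an illustrative toy allocator, not Diskeeper's or NTFS's actual allocation logic): when free space consists of small scattered runs, a single large sequential write has to be split into several extents, each of which reaches the device as a separate I/O.

# Illustrative only: an allocator that fills free runs in order.
# Free space is a list of (start_cluster, length) runs.
def allocate(free_runs, clusters_needed):
    """Return the list of extents a single file write gets split into."""
    extents = []
    for start, length in free_runs:
        if clusters_needed == 0:
            break
        take = min(length, clusters_needed)
        extents.append((start, take))
        clusters_needed -= take
    return extents

contiguous_free = [(1000, 64)]                              # one 64-cluster run
fragmented_free = [(10, 8), (90, 16), (300, 8), (512, 32)]  # chopped-up space

print("contiguous free space ->", len(allocate(contiguous_free, 64)), "write extent(s)")
print("fragmented free space ->", len(allocate(fragmented_free, 64)), "write extent(s)")
# One sequential 64-cluster write becomes 4 separate I/Os on the fragmented map.
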

SSD wear-leveling is handled by the SSD and is very much proprietary to the manufacturer.

HyperFast is just one of several SSD performance technologies we have. Give us a call and we can go through some technical discussions, demos, and talk about new things in the works.

Thanks,
Michael Materie
 
Guest
How would this benefit a Mac or even a Linux machine? Surely this is for XP machines only?
 

mmaterie
HyperFast is Windows-only at present. We are actually doing some field tests on Mac, though more related to HDD performance. Based on empirical observations, OS X does a fairly decent job consolidating file fragments, but it fragments the "free space" on HFS+ more than what we've seen on NTFS over the years. That chopped-up space then similarly impedes write performance (depending on the SSD make, of course). The same "should" hold true for the various Linux file systems, which also all fragment to varying degrees. I stress "should," as I'd need to test that theory before stating it as fact.
 

Keatah
This is all very amusing. SSDs do not require defragmentation of any sort at the OS level. NONE. A correctly designed controller on the SSD will perform garbage collection and consolidate free space as necessary, and it will do it in the background when the drive is idle.

The controller will also be re-mapping blocks and 'sectors' and 'clusters' without the OS's knowledge.

If the OS, or any 3rd-party "optimization program," says to put a file at such-and-such a location on an SSD, the controller will oblige up front. But in reality the controller will put the data where it sees fit based on wear-leveling requirements and other factors in its software. The controller will keep a list, a table, a chart, of where that data is ACTUALLY stored. And when the OS requests the data, the controller will consult that table and THEN go get the data.

So all a 'defrag' program is doing is re-writing that table and making it look pretty on one side. Remember, this is a simple look-up table. The drive puts the data where it wants. And come hell or high water, NOTHING on the OS side of things is going to say otherwise.

Any point on that look-up table takes a specified amount of time to get to. And going to any other point on that table (the equivalent of a mechanical drive head moving from track to track) takes a specified time as well. The times we are talking about here are microseconds and nanoseconds. The drive can easily get to data anywhere in its storage array of NAND chips, put the data on the bus, then be at the next location before the previous data has cleared the bus.

Defragging a mechanical hard disk makes sense, as the access times involve rotational latency, head-positioning latency, and settling times. You want the important files close together, single-file, up front, and in order, so the data just whips on by and the mechanical drive's controller doesn't have to do a lot of seeking work. This problem does not exist on an SSD.

Defrag software was designed to solve a specific problem of the mechanical drive: the delays caused by moving parts. Now that those components are eliminated, of what use is defrag software? Remember, an SSD can get to 'cluster' 14 as fast as it can to 'cluster' 103972673. And this speed of traversing clusters/sectors/blocks/tracks/banks, or what have you, is faster than the data bus can transfer data!
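
Rough, assumed latency figures make the comparison concrete (order-of-magnitude numbers, not measurements of any particular drive; the same transfer rate is used for both so only the per-fragment overhead differs):

# Back-of-the-envelope comparison; all latencies are rough, assumed figures.
HDD_SEEK_MS = 9.0          # average seek
HDD_ROTATION_MS = 4.2      # average rotational latency (7200 rpm class)
SSD_ACCESS_MS = 0.05       # flash random-access overhead (~50 microseconds)
TRANSFER_MS_PER_MB = 10.0  # same sustained transfer rate assumed for both

def fetch_time_ms(fragments, file_mb, per_fragment_overhead_ms):
    return fragments * per_fragment_overhead_ms + file_mb * TRANSFER_MS_PER_MB

for fragments in (1, 50):
    hdd = fetch_time_ms(fragments, 10, HDD_SEEK_MS + HDD_ROTATION_MS)
    ssd = fetch_time_ms(fragments, 10, SSD_ACCESS_MS)
    print(f"{fragments:2d} fragment(s): HDD ~{hdd:6.1f} ms, SSD ~{ssd:6.1f} ms")
# Going from 1 to 50 fragments adds ~650 ms on the HDD but under 3 ms on the SSD.
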

Anyone stating you need to defrag an SSD is simply unknowledgeable or trying to sell you some software. Remember the RAM defragmentation and optimizer programs of the mid-'90s? Well, we have the same situation here: a company trying to make a fast buck.
 

buknoy
Great comments, Keatah. You seem to know a lot about the complexities of reading and writing data to a hard drive.

But none of what you say explains why my SSD performs measurably and perceptibly slower than it did when I first installed it brand new and loaded Win7 x64. So does this mean my non-mechanical SSD is getting "tired"? Is the Indilinx controller in my drive behaving any differently than it did when it was brand new? Or maybe it's just that SSD technology hasn't evolved enough to be practical? Is Win7 x64 just crappy?

I have taken some programming courses and I do remember programming the actual registers, AX, BX, CX... and so on. So who says you can't put data on whatever sector or cluster of a drive you want it to go to? I think that most operating systems are written in assembly code since they've gotta be quick. And being quick is certainly something measurable and can therefore be proven, which is what nerdy computer scientists do all day: they write programs and can mathematically prove that their algorithms are optimal for a certain situation.

The problem is there are several levels that make the whole scenario of "being more efficient" very complex. There's the programming that goes into the Indilinx controller for my Corsair SSD, the programming that goes into creating the operating system environment (this is of course a huge component, not to mention the hardware it runs on), and the programming of the actual software I use daily, which is how I can certainly tell my system is progressively getting slower at reading and writing. And then there's the programming of the defrag software that takes all the other programming environment variables into account and tries to magically regulate them in such a way that everything works at an optimal level -- so that things run quicker -- and stays that way. That's the bottom line. Whether I'm being sold snake oil or not, I can certainly tell if reads and writes to a drive are taking longer than they usually do.

So before writing off defrag software as snake oil, do consider the bottom line that computers cannot think for themselves to be more efficient. Oh sure, there are optimization programs out there that can tell if algorithms are inefficient. But how to solve that inefficiency is still the programmer's job.

 

Keatah
Older SSDs have to have their free space consolidated. THAT is why your drive is slowing down.

The Indilinx controller isn't getting tired; it is being swamped with half-written blocks that need to be reclaimed and erased! An SSD needs consolidated ("defragged") free space to maintain performance. The new SandForce controllers do that automatically. Keep in mind that an SSD can only erase a large block of data at a time, even if you only need to erase and re-write a tiny amount. No can do.

You can re-format and re-partition the drive and get back some performance. But using a regular defragger will do absolutely nothing for you. Does your drive support TRIM?
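
A toy model of why consolidated free space and TRIM matter here (illustrative only, not Indilinx or SandForce firmware): to reclaim an erase block, the controller must first copy any still-valid pages elsewhere, so the fewer live pages a block holds -- and the more of them TRIM has already marked dead -- the cheaper garbage collection becomes.

# Toy garbage-collection cost model: pages copied per block reclaimed.
PAGES_PER_BLOCK = 64  # hypothetical

def gc_cost(valid_pages, trimmed_pages):
    """Pages the controller must copy before it can erase one block."""
    still_live = valid_pages - trimmed_pages   # TRIM marks pages as dead
    return max(still_live, 0)

# Drive with lots of consolidated free space: blocks are mostly empty.
print("mostly-empty block  :", gc_cost(valid_pages=8,  trimmed_pages=0),  "pages copied")
# Well-used drive without TRIM: nearly full blocks must be relocated wholesale.
print("nearly-full, no TRIM:", gc_cost(valid_pages=60, trimmed_pages=0),  "pages copied")
# Same block when the OS has TRIMmed the deleted files' pages.
print("nearly-full + TRIM  :", gc_cost(valid_pages=60, trimmed_pages=45), "pages copied")
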


Visit AnandTech's site to see two most excellent tutorials on SSD performance.

But, yes, SSDs are not really ready for prime-time consumer commodity use just yet. Soon, though.
 

Keatah
Diskeeper is a joke. It is best to let the newest SSDs manage themselves. Anything else is extra overhead.
 