SSD 102: The Ins And Outs Of Solid State Storage
How SSDs Work
With the internal SSDs we're discussing today, flash memory and a controller are installed onto a printed circuit board (PCB) and packaged into a small enclosure. This housing typically comes in one of the 1.8”, 2.5”, or 3.5” form factors that we all know and love from conventional hard drives, and it can be mounted in PCs, laptops, or certain rackmount server environments. Indeed, flash SSDs look and largely behave like hard drives, except that they have no moving parts and weigh less. In addition, modern SSDs require very little cooling. Most SSDs employ a 2.5” housing and utilize 3 or 6 Gb/s interface speeds.
MLC and SLC NAND Flash
Internally, all flash SSD products store data on either single-level cell (SLC) or multi-level cell (MLC) NAND memory, able to store a single bit or multiple bits per cell, respectively. SLC cells offer less capacity per transistor than MLC, but higher write performance and better data durability.
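To put some quick numbers behind the bits-per-cell distinction, here is a minimal sketch of the underlying arithmetic; the function and figures below are our illustration, not something taken from the drives themselves:

```python
# Each extra bit per cell doubles the number of charge states the cell must
# distinguish, which is why MLC gains density but gives up write speed and
# endurance compared to SLC.
def states_per_cell(bits_per_cell: int) -> int:
    """Distinct charge levels a NAND cell must resolve for a given bit count."""
    return 2 ** bits_per_cell

for name, bits in [("SLC", 1), ("MLC (2-bit)", 2)]:
    print(f"{name}: {bits} bit(s) per cell -> {states_per_cell(bits)} charge states")
# SLC: 1 bit(s) per cell -> 2 charge states
# MLC (2-bit): 2 bit(s) per cell -> 4 charge states
```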
Modern Controller Architectures
All SSD designs are based on flash controllers that drive the storage circuits and connect to the host system via Serial ATA. Modern designs utilize the controller "brain" to tackle various needs. For example, data durability is addressed through wear leveling algorithms, ensuring that flash memory cell usage distribution is as even as possible to maximize the device’s life span. Performance is optimized through multiple flash memory channels, load balancing, and different methods of caching. Some controllers have an integrated cache, others work with a separate DRAM memory chip, and other designs utilize a part of the flash memory across multiple channels for data reorganization. Please read the article Tom’s Hardware’s Summer Guide: 17 SSDs Rounded Up for more details on architectures and specific products.
The absence of visible cache memory doesn't necessarily mean bad performance. SandForce controllers utilize flash memory for data reorganization instead of a DRAM cache.
Many controller designs have their own integrated cache memory.
Utilizing a separate cache memory chip is a flexible solution, and a common one these days.
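To make the wear-leveling idea from the controller discussion above concrete, here is a highly simplified Python sketch that always writes to the least-erased free block. Real controllers combine dynamic and static wear leveling and track far more state; every name and number below is purely illustrative.

```python
# Simplified dynamic wear leveling: pick the free block with the fewest
# erase cycles so that wear spreads evenly across the flash array.
class FlashBlock:
    def __init__(self, block_id: int):
        self.block_id = block_id
        self.erase_count = 0
        self.free = True

class WearLeveler:
    def __init__(self, num_blocks: int):
        self.blocks = [FlashBlock(i) for i in range(num_blocks)]

    def allocate(self) -> FlashBlock:
        """Return the free block with the lowest erase count."""
        candidates = [b for b in self.blocks if b.free]
        if not candidates:
            raise RuntimeError("no free blocks; garbage collection needed")
        victim = min(candidates, key=lambda b: b.erase_count)
        victim.free = False
        return victim

    def erase(self, block: FlashBlock) -> None:
        """Erase a block and record the wear it accumulates."""
        block.erase_count += 1
        block.free = True

# Usage: repeatedly allocating and erasing keeps erase counts nearly equal.
wl = WearLeveler(num_blocks=4)
for _ in range(8):
    blk = wl.allocate()
    wl.erase(blk)
print([b.erase_count for b in wl.blocks])  # [2, 2, 2, 2]
```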
Trend: Toggle DDR NAND Flash
Samsung introduced Toggle DDR NAND flash memory a few months ago. This is a flash memory design that transfers data on both the rising and falling edges of the signal, much like DDR DRAM. This approach debuted in the enterprise segment but will soon also be available in consumer SSDs. The main benefit of Toggle DDR is its increased bandwidth of 66 to 133 MB/s per channel, as opposed to 40 MB/s for conventional NAND. Drives using the new approach probably won't expose the faster peak bandwidth directly, but will instead try to maximize performance on 3 Gb/s SATA interfaces while further lowering power consumption. We'll explain in a bit why this is important.
Source: Samsung
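A rough back-of-the-envelope calculation shows why the extra per-channel bandwidth mostly goes toward saturating the SATA link rather than exceeding it; the eight-channel controller below is our assumption for illustration:

```python
# Back-of-the-envelope: aggregate flash bandwidth vs. the SATA 3 Gb/s link.
# The 8-channel controller and per-channel MB/s figures are illustrative assumptions.
SATA_3G_USABLE_MBPS = 300          # ~3 Gb/s minus 8b/10b encoding overhead
CHANNELS = 8                       # typical mid-range controller (assumption)

conventional = CHANNELS * 40       # conventional NAND, ~40 MB/s per channel
toggle_ddr_low = CHANNELS * 66     # Toggle DDR, low end of 66-133 MB/s
toggle_ddr_high = CHANNELS * 133   # Toggle DDR, high end

print(f"Conventional NAND: {conventional} MB/s aggregate")    # 320 MB/s
print(f"Toggle DDR (66):   {toggle_ddr_low} MB/s aggregate")  # 528 MB/s
print(f"Toggle DDR (133):  {toggle_ddr_high} MB/s aggregate") # 1064 MB/s
print(f"SATA 3 Gb/s cap:   {SATA_3G_USABLE_MBPS} MB/s")
```

In other words, the aggregate flash bandwidth already exceeds what a 3 Gb/s interface can carry, so the extra headroom can be spent on sustaining that ceiling with fewer or more slowly clocked active channels, which is where the power savings come in.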
If data is lost on an SSD, can we still recover the files if they're fragmented, especially on an SSD that has never been defragmented, as Tom's Hardware has recommended?
Most SSDs will perform this process themselves when idle for extended periods, but it happens at a slow rate. This is what most manufacturers refer to when they talk about Garbage Collection.
Thanks in advance!!
= Alvin =
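As a rough illustration of the idle-time garbage collection mentioned in the reply above, here is a minimal Python sketch under simplified assumptions; the page/block structure and names are illustrative, not any vendor's implementation:

```python
# Simplified idle-time garbage collection: pick the block with the most stale
# pages, copy its still-valid pages elsewhere, then erase it so it can accept
# new writes.
class Block:
    def __init__(self):
        self.valid = []   # payloads of still-valid pages
        self.stale = 0    # pages whose data was overwritten elsewhere

def garbage_collect(blocks, spare):
    """Reclaim the dirtiest block by migrating its valid pages to the spare."""
    victim = max(blocks, key=lambda b: b.stale)
    if victim.stale == 0:
        return  # nothing to reclaim
    spare.valid.extend(victim.valid)    # copy valid data out
    victim.valid, victim.stale = [], 0  # erase the victim (writable again)

# Usage: two partially stale blocks and one spare block.
a, b, spare = Block(), Block(), Block()
a.valid, a.stale = ["x"], 3
b.valid, b.stale = ["y", "z"], 2
garbage_collect([a, b], spare)
print(len(a.valid), a.stale, spare.valid)  # 0 0 ['x']
```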
I am using the same configuration on a desktop. What I have noticed is that performance is actually much better than I expected. That is probably because of cache memory. If you have drives with a big cache, then in a RAID stripe configuration those caches logically combine. With a good desktop drive you can easily have a 64 MB cache. BTW, I looked at the SSD drives' caches - wow, now I know where the performance comes from.
I think SSDs are overrated right now. They have to be 4x cheaper; otherwise it makes no sense. Next year they will be 2x cheaper, and after one more year they will be 2x cheaper again. So the technology still needs about two years to be practical.
My recommendation: stick with SATA hard drives and RAID and save the money. If you only need a little storage and maximum comfort, then use an SSD.
You save a lot of money with SSDs, simply because their power consumption is really low. So, in the long term (say, a year) you will save enough money to probably get one of those Hitachi 7200K drives for free.
Energy efficiency is the key factor with SSDs.
The power consumption difference of a single drive is negligible for the purposes of generating any tangible savings on the electric bill. Let's assume the average power consumption difference between HDD and SSD is 5 W, and the system that employs the drive is up 24/7/365. Also, let's assume that your electricity cost is 14 cents per kWh (that's what I'm paying on average; your mileage may vary). Thus 0.005 kW * 24 h * 365 d * $0.14 = $6.132 - that's your annual savings (to be clear, that's six dollars and some change, not six thousand). Surely, if you employ hundreds upon hundreds of drives, the savings will add up, but in the end the up-front premium for an SSD is unlikely to pay for itself within the drive's lifetime, let alone generate any net savings.
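The commenter's arithmetic checks out and can be reproduced directly; the 5 W delta and $0.14/kWh rate are the commenter's own assumptions:

```python
# Reproducing the commenter's estimate of annual electricity savings from an
# SSD drawing ~5 W less than a hard drive, running 24/7 at $0.14 per kWh.
power_delta_kw = 0.005     # 5 W difference, expressed in kW (commenter's assumption)
hours_per_year = 24 * 365
rate_per_kwh = 0.14        # USD per kWh (commenter's assumption)

annual_savings = power_delta_kw * hours_per_year * rate_per_kwh
print(f"${annual_savings:.2f} per drive per year")  # $6.13
```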
On a separate note, I do believe that longevity of drives is one of the major factors that affects the purchase decision. For enterprise use, if the drive is constantly hammered by writes (say, a database file is stored on it), the rate of wearing out re-writable flash is likely to be higher than the rate of failure of magnetic drives (certain 10K RPM IDE drives notwithstanding).
... if only SSDs were more affordable! But, perhaps, the rumored adoption of 2Xnm technology for NAND by Intel by the end of this year will finally put enough pressure on the market to bring prices down to the realm of affordability. One can only hope.
Why is the block size so large?
What makes a 4KB or even 256B block a bad idea?
Is it that there's a large per-block component that can't be shrunk?
Is it that blocks need to be insulated from each other so that high-voltage operations (perhaps erase) don't leak?
Those are purely guesses.
Going from 5.5 watts to 1.7 watts is not "1/3 reduced" as per the label - it is "2/3 reduced", or "reduced to 1/3".