TH: So the risk, then, is that I won't be able to write to it. But there's a linear relationship between endurance and capacity, right? As you double the capacity, you double the expected lifespan?
TC: Yes. Data endurance numbers double for the same amount of data read and written per day.
TH: Given that, how many years are we expecting from the current crop of drives?
LK: Well, that's the hard part. You almost have a sliding scale running from client usage models to server usage models. They're very different. The worst kind of writes that you can apply to an SSD are random; you will wear a drive out quicker that way. If all the writes are sequential, that's the best-case scenario for an SSD. A typical client workload is probably a mixture of those—not all random, not all sequential.
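A rough sketch of why small random writes wear a drive faster than sequential ones, using a deliberately simplified flash model (page size, block size, and the worst-case assumption that every small write dirties a different erase block are all illustrative, not figures from the interview):

```python
# Illustrative only: NAND erases whole blocks, but hosts write small pages.
# Sequential writes fill blocks neatly; scattered random writes can force a
# read-modify-erase-write of a whole block per small write (worst case).

PAGE_KB = 4          # smallest writable unit (assumed)
BLOCK_PAGES = 128    # pages per erase block -> 512 KB block (assumed)

def erases_for_workload(total_kb, io_kb):
    """Worst-case erase-block count to absorb total_kb of host writes
    issued in io_kb-sized chunks, with no free pages available."""
    block_kb = PAGE_KB * BLOCK_PAGES
    if io_kb >= block_kb:
        # Sequential: large chunks fill whole blocks, one erase per block.
        return total_kb // block_kb
    # Random worst case: each small write lands in a different block,
    # so every chunk costs one full block erase.
    return total_kb // io_kb

host_kb = 512 * 1024  # 512 MB of host writes
seq = erases_for_workload(host_kb, 512 * 1024)  # big sequential chunks
rnd = erases_for_workload(host_kb, 4)           # scattered 4 KB writes
print(seq, rnd)  # the random workload erases far more blocks
```

Real controllers reduce this gap with over-provisioning and garbage collection, but the asymmetry is why random-heavy workloads age a drive faster.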
TC: For example, with our new V+ SSD, the projected life works like this: usually, we say the effective read/write duty is about 20% of the power-on hours. With this, normal operation is 8,760 hours per year, and this allows you to read and write 20 gigabytes per day in operation. With these numbers, we know that an SSD's expected product life is actually much better than a traditional hard drive's.
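To put that duty cycle in perspective, here's a back-of-envelope lifetime estimate. The capacity, rated P/E cycle count, and write-amplification factor are assumptions for illustration, not Kingston's figures; only the 20 GB/day comes from the interview:

```python
# Back-of-envelope SSD lifetime, assuming wear leveling spreads writes
# evenly across all cells. All inputs except the 20 GB/day are assumed.

capacity_gb = 128          # hypothetical drive size
pe_cycles = 5_000          # assumed rated program/erase cycles per cell
host_writes_gb_day = 20    # the 20 GB/day duty cycle quoted above
waf = 2.0                  # assumed write amplification factor

# Total host data the drive can absorb before cells reach their rating.
total_writable_gb = capacity_gb * pe_cycles / waf
years = total_writable_gb / (host_writes_gb_day * 365)
print(round(years, 1))  # -> 43.8
```

Note that doubling `capacity_gb` doubles `years`, which is the linear capacity-to-endurance relationship mentioned at the top of the interview.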
TH: Let’s circle back to that difference in endurance between sequential and random writes. If the drive controller dictates how every bit gets written to the drive and wears the memory evenly, why is there a difference between sequential and random?
TC: Let's put it this way. Most of the time with your hard drive, you need an operating system. The random reads and sequential reads have a major effect on the system’s behavior. When you boot up your computer, you are doing a sequential read. Same with hibernating and application loads. But a lot of times, you also need to access your information, the user data, and that's a random read because your data is spread out everywhere. Now, with a hard drive, the arm has to move. With an SSD, there are no moving parts, so there is no chance of mechanical failure. So, compared to a hard drive, an SSD provides better performance.
TH: Does multi-bit MLC enter into this discussion? Is having three—or later, four—bits per cell going to change the endurance dynamic, particularly when weighed against SLC?
TC: Three bits per cell is already in the market, but you also have to look at density—32nm or 25nm. That provides the density for more stacking. The use of two or three bits per cell is just a current trend. The NAND semiconductor industry is more interested in how it can provide maximum data per square inch. Typically, we are looking for more cell density rather than more bits per cell at this point.
TH: But will 3-bit have an impact on endurance? With more electrons being pushed through each floating gate, does that erode the oxide layer more quickly?
TC: No, actually, because right now all NAND has ECC correction. Without it, you’re talking about losing data endurance entirely, so you cannot recover it. You’re talking about the voltage levels shifting in the cell.
LT: But Tony, if we were to implement today's 3-bit-per-cell NAND on an SSD, would the endurance of that be less compared to a current MLC product?
TC: I would say that would be partially true. With data endurance there are two different issues. One is how long it will last, and one is how to correct errors if they appear.


I'm not sure if you were trying to be dramatic, or if you just accidentally wrote the same thought twice. Just pointing it out.
The ideal thing for booting up fast would be to go back to using core memory :-P. RAM that doesn't lose its contents when you turn it off is pretty cool. Low power, low heat, and it would impress people when you say "Oh, that? It's my core memory array." You'd get dates for sure. Can't say what they'd look like, or if they'd be sane. Or even female
Still, I'd buy it. Cache handles most reads anyway, and I'm too old fashioned to feel something is a computer without some form of magnetic storage in it.
I like how good they are at dodging the tough questions.
What value is there in Kingston's Intel-based SSDs vs. Intel's originals?
Well, they helped Kingston launch a very strong product.
It runs Linux, with a compressed kernel image.
Looks like real mode disk access, registry hives, antivirus and such do slow Windows boot times.
I would prefer to see the product benchmarked and compared on price..and then let us decide how we are going to spend our money.
Keep them coming. =)
Now I have the urge to go buy a 256GB SLC drive and play flaming baseball with it... I probably shouldn't...
1. App loading is NOT sequential; it has a high amount of random reads. This is why SSDs are so much faster than hard drives at it. You can see this in PCMark Vantage, where hard drives get 4-10MB/s in app loading, and SSDs go from 80-160MB/s.
2. Booting from an SSD over USB 3.0 is wasteful. Most SSDs support NCQ and get 3-8x higher random read IOPS when NCQ is active, and this is noticeable in everyday use. USB 3.0 does NOT support NCQ.
3. You say Windows 7 requires a minimum of 16GB to install, which is true. The PARTITION must be a minimum of 16GB for the installer to allow it to be selected; however, you can reduce the space Windows needs by a lot. My Windows folder is 13.5GB, and even with 20+ apps installed (MS and Open Office suites included) I still use less than 20GB on my C: partition.
The need for a pagefile is inversely proportional to your amount of RAM; if you have 4GB or more of RAM, you can safely deactivate it for normal general computer usage and save a lot of space.
I think you mean "migration" software. Although mitigation software could be really useful for resolving hardware errors. ;-)
The Kingston videos are fun. Start here: http://www.youtube.com/watch?v=udJ8TzvJne8
This dude IS right. And that old NT filesystem isn't helping either.
If you optimize the X startup and use a different kernel start-up event manager, you can get below a 10-second startup time on a netbook.
True... it looks like they avoided answering the question and they just talked about the difference in speed (again!).