A-DATA to OCZ: 64MB Cache on SSD? Easy

Not to be outdone by the impending release of OCZ’s high-performance Vertex series of solid-state drives, A-DATA has announced a new 256 GB SSD that features the same 64 MB of internal cache as OCZ’s product. The company was showing off the SATAII SSD 300 Plus, as well as its mammoth 512 GB XPG SSD, at this year’s CeBIT show in Hanover, Germany. The smaller capacity isn’t such a bad deal if you look at it like a runner shedding weight: A-DATA is boasting speeds of up to 250 MB/sec and 160 MB/sec for the drive’s sustained reads and writes, respectively. That’s 20 MB/sec faster on reads than the company’s XPG SSD.

You can thank the included 64 MB SDRAM buffer for the speed boost. At double the size of more typical SSD caches, the extra memory, alongside an alleged new controller design, gives the drive more room to store frequently accessed data and thus better performance. Although solid-state drives are known for fast data access, it’s still quicker to pull information straight out of the drive’s cache than from its internal memory cells. For what it’s worth, the OCZ Vertex currently wins the tale of the tape, as that drive allegedly delivers 80 MB/sec greater sustained writes than A-DATA’s SATAII SSD 300 Plus.
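To illustrate the principle (and only the principle), here is a minimal read-cache sketch in Python. It is not A-DATA’s controller logic; the line count, the LRU eviction policy, and the block naming are invented assumptions for the example.

    # Toy model of a DRAM read cache sitting in front of slower flash cells.
    # Illustration only: sizes and data are made up, not taken from A-DATA's design.
    from collections import OrderedDict

    CACHE_LINES = 16  # stands in for the 64 MB DRAM buffer

    class CachedDrive:
        def __init__(self):
            self.cache = OrderedDict()  # block address -> data, kept in LRU order

        def read(self, lba):
            if lba in self.cache:               # hit: served from fast DRAM
                self.cache.move_to_end(lba)
                return self.cache[lba]
            data = self._read_from_flash(lba)   # miss: slower NAND access
            self.cache[lba] = data
            if len(self.cache) > CACHE_LINES:
                self.cache.popitem(last=False)  # evict the least recently used block
            return data

        def _read_from_flash(self, lba):
            return f"block {lba}"               # placeholder for a real NAND read

The bigger the buffer, the more of the working set stays on the fast path, which is where the extra 32 MB is supposed to pay off.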

There’s no word on the price or estimated street date for the new A-DATA drives.  Nor do we know the specifics of the controller in the drive, including how many memory channels it might be able to handle at once. But the company has said that it plans to launch the 300 Plus in four different storage capacities: 32 GB, 64 GB, 128 GB, and 256 GB.

  • baov
    Why are we talking about pulling reads out of the cache? The reason it's there on an SSD is to buffer those slow writes, not to serve reads.
  • jacobdrj
    As cheap as DDR-style memory is, why not have a gig of built-in cache? These drives cost upwards of $500 anyway; what's another $5? Intel still has a leg up, but they are pushing them with innovation. For you geeks out there, it is the brute-force Stargazer approach rather than the finesse Excelsior one.
  • Grims
    jacobdrj: "As cheap as DDR-style memory is, why not have a gig of built-in cache? These drives cost upwards of $500 anyway; what's another $5? Intel still has a leg up, but they are pushing them with innovation. For you geeks out there, it is the brute-force Stargazer approach rather than the finesse Excelsior one."
    Why stop there? Why not just make a 64 GB cache drive and be done with it? :P
  • MikePHD
    I don't know which Vertex you are talking about, but my Vertex fresh out of the box gets 120 MB/s sustained writes and 240 MB/s reads.
  • mavroxur
    jacobdrj: "As cheap as DDR-style memory is, why not have a gig of built-in cache? These drives cost upwards of $500 anyway; what's another $5? Intel still has a leg up, but they are pushing them with innovation. For you geeks out there, it is the brute-force Stargazer approach rather than the finesse Excelsior one."

    Because putting that much data in a volatile RAM cache is asking for problems unless you add a battery backup on the drive, like most high-end RAID cards with a lot of on-board cache do. Not to mention that flushing a 1 GB cache would bog things down, especially if it flushes during a Windows shutdown or during a latency-sensitive operation.
  • jacobdrj
    Grims: "Why stop there? Why not just make a 64 GB cache drive and be done with it?"
    Because 1 GB costs 5 dollars, while 64 GB costs 500. If you put in enough cache that the law of diminishing returns isn't likely to catch up with it, at minimal cost (especially relative to the overall cost), it might be worth it.
  • jacobdrj
    mavroxur: "Because putting that much data in a volatile RAM cache is asking for problems unless you add a battery backup on the drive, like most high-end RAID cards with a lot of on-board cache do. Not to mention that flushing a 1 GB cache would bog things down, especially if it flushes during a Windows shutdown or during a latency-sensitive operation."
    Interesting, but if that means they are effectively engineering for failure, you could argue that using an SSD in the first place is a bad choice, since data from a dead drive is unrecoverable. Also, at 64 MB you would still lose a tremendous amount of data if power were interrupted. It is a buffer. By that logic, no buffer should be used at all for data-sensitive applications, and the hit in performance would be justified. I would imagine that with proper engineering of the controller, you could at least mitigate these problems for consumer-grade drives.
  • mavroxur
    jacobdrj: "Interesting, but if that means they are effectively engineering for failure, you could argue that using an SSD in the first place is a bad choice, since data from a dead drive is unrecoverable. Also, at 64 MB you would still lose a tremendous amount of data if power were interrupted. It is a buffer. By that logic, no buffer should be used at all for data-sensitive applications, and the hit in performance would be justified. I would imagine that with proper engineering of the controller, you could at least mitigate these problems for consumer-grade drives."

    I never said SSDs were a bad choice. And 64 MB isn't all that far off from the 32 MB caches on typical hard drives, but when you start using cache sizes over 1 GB like you mentioned, there's a lot more data sitting in a volatile state, and it takes longer to flush. It's usually not an issue when you're sporting 32 MB of cache, because your buffer-to-disk speed can flush that quickly; with 1 GB it's much more pronounced (see the back-of-envelope flush math after this thread). I know it's hard to picture the size difference between 64 MB and 1 GB, but there is a substantial difference there.
  • jacobdrj
    mavroxur: "I never said SSDs were a bad choice. And 64 MB isn't all that far off from the 32 MB caches on typical hard drives, but when you start using cache sizes over 1 GB like you mentioned, there's a lot more data sitting in a volatile state, and it takes longer to flush. It's usually not an issue when you're sporting 32 MB of cache, because your buffer-to-disk speed can flush that quickly; with 1 GB it's much more pronounced. I know it's hard to picture the size difference between 64 MB and 1 GB, but there is a substantial difference there."
    I am not saying you did. It just seems like the logical conclusion if failure were the only reason for not increasing the cache size more drastically.
    To your point about the size difference being significant: as a non-EE/CE, I don't have a concept of what is truly "large," so I'll take your word for it. What would you say would be a good size for the cache?
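For a rough sense of the flush-time gap mavroxur describes, here is a back-of-envelope sketch. It assumes, purely for illustration, that the buffer drains at the drive's quoted 160 MB/sec sustained write speed; the real figure depends on the controller and workload.

    # Rough flush-time estimate: time to empty a volatile write buffer at a
    # fixed buffer-to-disk speed. 160 MB/s is A-DATA's quoted sustained write
    # speed; the cache sizes are the ones debated in the thread above.
    WRITE_SPEED_MB_PER_S = 160

    for cache_mb in (32, 64, 1024):
        seconds = cache_mb / WRITE_SPEED_MB_PER_S
        print(f"{cache_mb:>4} MB cache -> ~{seconds:.1f} s to flush")

    # Prints roughly 0.2 s for 32 MB, 0.4 s for 64 MB, and 6.4 s for 1 GB,
    # which is why a multi-gigabyte volatile cache gets risky without a
    # battery backup.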