
Adaptec to Demo 6600 MB/s Bandwidth Over PCIe 3.0

Source: Adaptec | 15 comments

Adaptec today said that it will be demonstrating an interface capable of reaching a sustained data rate of 6600 MB/s.

The technology will leverage PMC Sierra's 24-port SRCv RAID-on-Chip with a PCI Express 3.0 interface. According to Adaptec, the new PMC RoC can support twice the bandwidth of a previous generation RoC, with twice the bandwidth on the PCIe interface and three times the bandwidth on the SAS interfaces.
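As a rough sanity check on the headline number, the raw capacity of a PCIe 3.0 link can be worked out from the per-lane signaling rate. The article does not state the link width; an x8 link, common for RAID controllers, is assumed here:

```python
# Sanity check: can a PCIe 3.0 link carry 6600 MB/s?
# PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding.
RAW_GT_PER_S = 8.0        # gigatransfers/s per lane (1 bit per transfer)
ENCODING = 128 / 130      # 128b/130b line-coding efficiency
LANES = 8                 # assumed link width (not stated in the article)

per_lane_mbps = RAW_GT_PER_S * ENCODING * 1000 / 8  # Gb/s -> MB/s
link_mbps = per_lane_mbps * LANES
print(round(per_lane_mbps), round(link_mbps))  # 985 7877
```

At roughly 985 MB/s per lane, an x8 link tops out near 7.9 GB/s, so a sustained 6600 MB/s fits comfortably within PCIe 3.0 x8 but would exceed what PCIe 2.0 x8 (about half the rate) could deliver.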

There was no information on which server platform the demonstration will be based, or whether it will include hard drives or solid state drives. However, it appears that Adaptec will become the first company to offer a 6Gb/s SAS RAID controller based on PCIe 3.0.

Adaptec will be showcasing the technology in the Storage Pavilion at CeBIT, which will be held from March 6 to 10 in Hannover, Germany.

  • 5
    nforce4max , March 1, 2012 2:32 PM
    Now that is what I need, tired of sluggish onboard sata raid controllers that produce lag on top of already lagging mechanical drives while I am editing videos or multi boxing wow X.X

    Small pocket book/Budget = Sad Face
  • 4
    JasonAkkerman , March 1, 2012 2:32 PM
    It would have to be SSDs. How else could you get 6600 MB/s from 24 drives? You need to get ~275 MB/s from each drive, and you can't get that from a HDD.
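The per-drive figure in the comment above is simple to verify, assuming the load is spread evenly across all 24 ports:

```python
# Per-drive throughput needed to saturate the controller's 6600 MB/s
# if the load is spread evenly across all 24 ports.
total_mbps = 6600
drives = 24
per_drive_mbps = total_mbps / drives
print(per_drive_mbps)  # 275.0
```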
  • 5
    kaisellgren , March 1, 2012 2:41 PM
    Doesn't this mean we might some day have PCIe 3.0 SSDs delivering 6600 MBps?
  • -4
    josejones , March 1, 2012 3:20 PM
    How long until Intel incorporates this new standard into their SSDs?

    It's also good to see new PCIe 3 stuff coming out. PCIe 2 and USB 2 will soon be old, dated and eventually obsolete.
  • 3
    stuckintexas , March 1, 2012 4:10 PM
    JasonAkkerman: It would have to be SSDs. How else could you get 6600 MB/s from 24 drives? You need to get ~275 MB/s from each drive, and you can't get that from a HDD.

    SAS Expanders? Depends on the architecture and how they are getting 24 ports (is it 8 native with expanders?). LSI can do 3400MB/s with 24 15K SAS drives using expanders on their 8 port cards. Seagate Savvio drives are rated at 200MB/s on the outer rims.
  • 7
    STravis , March 1, 2012 5:41 PM
    josejones: PCIe 2 and USB 2 will soon be old, dated and eventually obsolete.

    Genius! Who else would ever predict technology getting obsolete.
  • 2
    warmon6 , March 1, 2012 7:43 PM
    6.6GB/s.... Darn....

    I was hoping it was going to be 9001MB/s......
  • 2
    A Bad Day , March 1, 2012 7:45 PM
    josejones: How long until Intel incorporates this new standard into their SSDs? It's also good to see new PCIe 3 stuff coming out. PCIe 2 and USB 2 will soon be old, dated and eventually obsolete.

    PCI Conventional is still hanging around, and there are some manufacturers that still produce PCI-C cards instead of PCI 2.1 1x.
  • 1
    daneren2005 , March 1, 2012 7:45 PM
    josejones: It's also good to see new PCIe 3 stuff coming out. PCIe 2 and USB 2 will soon be old, dated and eventually obsolete.

    PCIe 2 won't be obsolete anywhere but in servers for quite a while. It's only people with more money than they know what to do with who can afford the kind of hardware that actually saturates the bus. And USB 2 was obsolete before it even came out; it was just the best thing out at the time, so people put up with the slow speeds.
  • 0
    blazorthon , March 1, 2012 8:01 PM
    kaisellgren: Doesn't this mean we might some day have PCIe 3.0 SSDs delivering 6600 MBps?

    We already have PCIe SSDs faster than that; OCZ has a 7200MB/s PCIe 3.0 SSD.

    JasonAkkerman: It would have to be SSDs. How else could you get 6600 MB/s from 24 drives? You need to get ~275 MB/s from each drive, and you can't get that from a HDD.

    We have 48 3.5" bay server chassis right now, two HDDs per port (SAS isn't SATA; it supports MANY drives per port) and no problem. Some high-end hard drives can go in excess of 200MB/s on the outer edges of the disks, specifically some 15K RPM drives. I bet a 7200 RPM 4TB drive could too, since the 5400RPM 4TB drives can go in excess of 160MB/s; it's just a matter of building a 4TB 7200 drive.
  • 1
    jaber2 , March 1, 2012 8:23 PM
    I was thinking the same: 6600MB/s should be about 6GB/s, not 6Gb/s.
  • -4
    dalethepcman , March 1, 2012 8:39 PM
    JasonAkkerman: It would have to be SSDs. How else could you get 6600 MB/s from 24 drives? You need to get ~275 MB/s from each drive, and you can't get that from a HDD.

    Considering that your 5400 RPM 1TB can do 100MB/s, it is hardly a stretch of the imagination to assume modern 15K SAS drives can do 300MB/s.
  • 0
    blazorthon , March 1, 2012 9:49 PM
    Quote:
    Considering that your 5400 RPM 1TB can do 100MB/s, it is hardly a stretch of the imagination to assume modern 15K SAS drives can do 300MB/s.


    I keep up somewhat with the top hard drives and I don't think that any hard drive on the market can do 300MB/s. Around 200MB/s, maybe 210MB/s is probably about the fastest we have and they only hit that speed on the outer edge of the disks. Remember, the closer to the center of a hard drive, the slower it is. Closest to the center of each disk is probably less than half of the outer rim's performance.

    15K drives don't come in 1TB capacities. They probably top out at 300GB, 450GB, maybe some 600GB, but I don't think they go above that. 10K drives can go higher; I've seen 900GB 10Ks and I think I saw a 1.2TB, although it might not have hit the market yet. Also, increasing density does not linearly increase performance.

    Increases are more like a square root curve, because density increases both by fitting more data into each track of a disk (which increases performance) and by increasing the number of tracks (which can help latency very slightly when moving from one track to a nearby one, but not MB/s). That means that capacity needs to increase by percentage rather than linearly in order to improve performance linearly. Perhaps a better way to say it than square root would be logarithmic.

    A drive with lower density is slower than one with higher density if the platter count is the same and the RPM is the same. Going for a lower density means you need to have a much higher RPM to increase speed. High RPM drives have low densities because although they have 3.5" containers, they actually have 2.5" disks instead of 3.5" disks like that 1TB drive.

    This means that their platter density is still pretty high. Since every track completes one revolution in the same time, the outer edge of a disk moves faster than the inner tracks, so the farther the read/write head is from the center, the more data passes under it per second.

    There are simply so many factors that go into hard drive performance, but we can't really make a true 3.5" disk spin at 15K RPM; it spins too fast for such large platters on current motors. 15K 2.5" disks are still faster than 7200RPM 3.5" drives, but that is why the difference isn't as great as it would otherwise be. A 1TB 15K drive could probably hover around 240MB/s, if I were to hazard a guess.

    Since we are stuck with 2.5" disks, we can't get as high capacities as 5400, 5900 and 7200 RPM 3.5" drives.
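The inner-versus-outer-rim point in the comment above can be illustrated with a rough constant-RPM model. The radii below are assumed values for a 3.5" platter, and real drives use zoned recording, so this is only an approximation:

```python
# At constant RPM, linear velocity (and hence sequential throughput at a
# given bit density) scales with track radius.
outer_radius_in = 1.75   # platter edge of a 3.5" disk (assumed)
inner_radius_in = 0.75   # innermost data track (assumed)
outer_mbps = 200         # fast HDD outer-rim rate cited in the thread

inner_mbps = outer_mbps * inner_radius_in / outer_radius_in
print(round(inner_mbps))  # ~86 MB/s, under half the outer-rim rate
```

With those radii, the innermost tracks deliver well under half the outer-rim rate, consistent with the comment's "less than half" estimate.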
  • 0
    blazorthon , March 3, 2012 1:35 PM
    4TB 7200 can do what, exactly?

    A 4TB 7200RPM drive would, at best, be around 220-230MB/s (I did the math this time instead of guessing) at the outer rim of the disk, if it is similar to the current 4TB 5400 drives in every other respect. It would be the fastest until a 15K or 10K beat it, but it would still need 48 drives, not 24, to fill the bandwidth of a 6.6GB/s controller: drives don't stay near the outer rim in an enterprise setting, so once they are more than about 60% full it takes around 48 of them to keep 6.6GB/s fed.
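The comment's estimate can be cross-checked by scaling the 160MB/s figure cited earlier for 4TB 5400RPM drives linearly with spindle speed. This naive model ignores any density differences between the drives, which is likely why it lands slightly below the 220-230MB/s figure above:

```python
# Naive cross-check: scale the cited 160 MB/s outer-rim rate of a
# 4TB 5400 RPM drive linearly with spindle speed to 7200 RPM.
base_mbps = 160.0
scaled_mbps = base_mbps * 7200 / 5400
print(round(scaled_mbps))  # 213
```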
  • 0
    blazorthon , March 4, 2012 1:05 PM
    Considering that this is for servers, I should probably include RAID in the math. Instead of just 48 drives, it might need something more like 72 drives (RAID 5) or 96 drives (RAID 10/0+1), or more if more redundancy is required. That could take a rack instead of a single chassis.
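The drive counts above follow from simple overhead multipliers applied to the 48-drive raw figure. The ~50% RAID 5 overhead is the comment's own rough estimate (covering parity and write penalty), not a derived value:

```python
# Drive counts with RAID overhead, using the comment's own multipliers:
# ~50% extra for RAID 5, 2x for RAID 10 / 0+1 mirroring.
raw_drives = 48                      # drives to sustain 6.6 GB/s raw
raid5_drives = raw_drives * 3 // 2   # +50% (comment's rough estimate)
raid10_drives = raw_drives * 2       # full mirror
print(raid5_drives, raid10_drives)   # 72 96
```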