Seagate 8TB Archive HDD Review

We create more data every time we snap a picture with our phones, record a video or post something to a social media site. The budding Internet of Things (IoT) means that our thermostats, vehicles and (conceivably) toasters only add to this unending stream of data. The Internet reaches the far-flung corners of the globe, and the number of users streaming data, seemingly just into the nether, continues to grow every day.

Read The Review On Tom's IT Pro

The only problem is that the data isn't actually disappearing into the nether. We expect to be able to retrieve a family picture in mere milliseconds from Facebook even if we uploaded it (and forgot about it) five years ago. We increasingly (and blindly) trust that our social media services will safely store our data and memories, well, forever.

The picture from your latest snap has to land somewhere, and that "somewhere" is a server that churns away day and night. This influx of data creates a tremendous challenge at the other end of the pipe: datacenter operators are tasked with keeping this exponentially increasing burden of data safe, and the key is to do it in a cost-effective manner.

HDD vendors have responded to the need for more storage with clever engineering that increases the amount of storage they can provide per device, as well as per dollar. The newest innovation comes in the form of SMR (Shingled Magnetic Recording), which overlays data tracks to increase the amount of available storage capacity.

The Seagate SMR datacenter offering lowers the cost to an unheard-of three cents per gigabyte of storage (and that is at retail pricing). This low cost has made the drives increasingly popular with consumers for bulk data storage, a use case at which the drive excels.
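As a rough sanity check on that figure, here is the math, assuming the roughly $250 street price cited in the comments below (not an official list price):

```python
# Rough cost-per-gigabyte check for an 8TB drive.
# The ~$250 price is the street price cited in the comments below,
# not an official figure; adjust to whatever you actually pay.
street_price_usd = 250.0
capacity_gb = 8000  # 8 TB in decimal (drive-maker) gigabytes

cost_per_gb = street_price_usd / capacity_gb
print(f"${cost_per_gb:.3f} per GB")  # ~$0.031, i.e. about three cents
```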

However, SMR is not without its tradeoffs. For those who aren't familiar with the limitations of SMR technology, it can make for a frustrating experience. Never fear, we've got it all figured out. Come along as we explore all things SMR in our Seagate 8TB Archive HDD Review at Tom's IT Pro.

MORE: How We Test Enterprise HDDs
MORE: SMR (Shingled Magnetic Recording) 101

Paul Alcorn is a Contributing Editor for Tom's IT Pro, covering Storage. Follow him on Twitter and on Google+.

Follow Tom's IT Pro on Twitter, Facebook, LinkedIn and Google+.

  • Wow, 8TB for about $250, seems great! I've used SMR drives. They are awesome if you write few/ read many. Also if your writes aren't that massive. But if you write a lot, after a while things crawl from 200 MB/s to 30 MB/s, and the head starts to move around every so many seconds from the buffer to a shingle and back.

    In a couple of years, SSDs will come with 10TB+, but meanwhile this is a very good deal.

    Also, power loss can lead to corrupted sectors with SMR, so you have to be more careful.

    Five of these in RAID-6 give 24 TB of storage space, very awesome.
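For anyone double-checking that figure, the usual RAID-6 capacity math (two drives' worth of space lost to dual parity) works out as in this quick, generic sketch; it says nothing about any particular controller:

```python
# Usable capacity of a RAID-6 array: two drives' worth of space
# goes to dual parity, the remaining (N - 2) drives hold data.
def raid6_usable_tb(drive_count: int, drive_tb: float) -> float:
    if drive_count < 4:
        raise ValueError("RAID-6 needs at least four drives")
    return (drive_count - 2) * drive_tb

print(raid6_usable_tb(5, 8))  # 24 TB, matching the figure above
```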
  • Achoo22
    WTF, this is just a teaser article? Time to remove Tom's from my RSS reader?
  • PaulAlcorn
    Quote:
    Wow, 8TB for about $250, seems great! I've used SMR drives. They are awesome if you write few/ read many. Also if your writes aren't that massive. But if you write a lot, after a while things crawl from 200 MB/s to 30 MB/s, and the head starts to move around every so many seconds from the buffer to a shingle and back. In a couple of years, SSD's will come with 10TB+, but meanwhile this is a very good deal. Also, power loss can lead to corrupted sectors with SMR, so you have to be more careful. Five of these in raid-6 give 24 TB of storage space, very awesome.


    Actually, Seagate uses a section of the platter to back up the volatile cache, so these drives are less likely to experience data loss than a typical desktop HDD.
  • Quote:
    Quote:
    Wow, 8TB for about $250, seems great! I've used SMR drives. They are awesome if you write few/ read many. Also if your writes aren't that massive. But if you write a lot, after a while things crawl from 200 MB/s to 30 MB/s, and the head starts to move around every so many seconds from the buffer to a shingle and back. In a couple of years, SSD's will come with 10TB+, but meanwhile this is a very good deal. Also, power loss can lead to corrupted sectors with SMR, so you have to be more careful. Five of these in raid-6 give 24 TB of storage space, very awesome.
    Actually, Seagate uses a section of the platter to back up the volatile cache, so these drives are less likely to experience data loss than a typical desktop HDD.


    Yeah, except I've experienced it. It happens when data is being transferred from the buffer (the buffer track on the drive) to a shingled area and you get a power loss. It shows up as corrupted sectors, but they aren't really bad sectors. After you use the drive for a while, the corrupted sector count drops back to 0.

    From some engineers:

    Quote:
    1) A power loss when running with the write cache enabled (i.e. normally - see hdparm for a description) can leave a full track (~2MB, or about 500 sectors) of bad sectors - the old data was damaged when the overlapping track was written, but the new data hadn't been written yet. Those sectors will stay bad until you re-write them with valid data.
    2) I've heard from the guy we work with at Seagate that they were worried about how long the startup code could take under certain failure-recovery situations, risking drive timeouts like the one you saw.
  • PaulAlcorn
    To my understanding, the data is still held in the cache section of the platter until it is committed to the new band in the home location. This is why the sectors show up temporarily as corrupted; the corrupted sectors are in the home location. However, a copy of the data still exists in the media cache, and during idle time, or when the drive basically gets around to it, it will re-write the affected band, copying the valid data back from the media cache, thus 'repairing' the corrupted sectors. Did you experience permanent data loss, or just temporary? Of course, things don't always happen as they 'should' in real life.
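To make the media-cache flow described here a bit more concrete, here is a rough, purely illustrative Python sketch of the behavior: writes land in a persistent cache region first, the shingled "home" band can temporarily read back as bad after an interrupted rewrite, and an idle-time pass folds the cached data back in. The class and method names are invented for this example and do not reflect Seagate's actual firmware.

```python
# Purely illustrative model (not real firmware) of the media-cache flow
# described above.
class SmrBandModel:
    def __init__(self, band_id: int):
        self.band_id = band_id
        self.home_data = {}      # LBA -> data committed in the shingled band
        self.media_cache = {}    # LBA -> data staged in the persistent media cache
        self.home_dirty = False  # True while a band rewrite is in flight

    def write(self, lba: int, data: bytes) -> None:
        # New writes land in the media cache rather than in place.
        self.media_cache[lba] = data

    def read(self, lba: int) -> bytes:
        # Prefer the cached copy; after an interrupted rewrite the home
        # band can temporarily read back as "bad" sectors.
        if lba in self.media_cache:
            return self.media_cache[lba]
        if self.home_dirty:
            raise IOError("home band mid-rewrite; sector reads as bad until repaired")
        return self.home_data[lba]

    def idle_rewrite(self) -> None:
        # During idle time the drive folds cached data back into the band,
        # "repairing" sectors that appeared corrupted after a power loss.
        self.home_dirty = True
        self.home_data.update(self.media_cache)
        self.home_dirty = False
        self.media_cache.clear()
```

A real drive manages thousands of bands and keeps the cache on the platter itself, which is why a copy of the data can survive a power loss even while the home band reads as corrupted.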
  • Quote:
    To my understanding, the data is still held in the cache section of the platter until it is committed to the new band in the home location. This is why the sectors show up temporarily as corrupted; the corrupted sectors are in the home location. However, a copy of the data still exists in the media cache, and during idle time, or when the drive basically gets around to it, it will re-write the affected band, copying the valid data back from the media cache, thus 'repairing' the corrupted sectors. Did you experience permanent data loss, or just temporary? Of course, things don't always happen as they 'should' in real life.


    In my case, it was permanent. 4,728 sectors were flagged as bad on a 5TB model. Eventually, the sectors were reclassified as OK, but the data loss and corruption were permanent. I have overall data redundancy and checksums, so I didn't actually lose any data because of this.

    The drive is marketed as "Archive HDD", and in that context, it's excellent. Most people don't write that much non-sequential data for extended periods of time. For most consumer use, this drive is a very good choice.

    The persistent cache is about 20-25 GB, so you can actually write a fairly large amount of random data without any performance degradation.
  • MidnightDistort
    And just when you think they couldn't do more than 5TB they do an 8TB, pretty impressive.
  • JPNpower
    Quote:
    And just when you think they couldn't do more than 5TB they do an 8TB, pretty impressive.


    Remember, this is "kinda gotcha" 8TB. Not the real deal. Only through sketchy SMR or witchcraft helium.
  • billyboy999
    Interesting - from the article: Seagate does not recommend utilizing these drives in RAID or NAS environments.
  • Eggz
    Are these reliable simply for offloading system drives, and then keeping in a safe until the next offload (maybe monthly)? I'm seeing reliability concerns, but I'm not sure whether they are tied to constant operation.

    I just want to back up my RAID periodically to something that's kept offsite except during backups and, in the event I need to pull from the archive, during recovery.

    I don't mind spending more on a helium drive from HGST if it means the data is safer, but is it?
  • JPNpower
    1406980 said:
    Are these reliable simply for offloading system drives, and then keeping in a safe until the next offload (maybe monthly)? I'm seeing reliability concerns, but I'm not sure whether they are tied to constant operation. I just want to back up my RAID periodically to something that's kept offsite except during backups and, in the event I need to pull from the archive, during recovery. I don't mind spending more on a helium drive from HGST if it means the data is safer, but is it?


    Meh, it has flaws. Less active usage may be better for reliability, but then again, constant operation constantly checks for errors so it would find errors more quickly. So as usual, the answer is to use multiple drives/services. Including onsite, offsite, online, etc.
  • Eggz
    1335260 said:
    Meh, it has flaws. Less active usage may be better for reliability, but then again, constant operation constantly checks for errors so it would find errors more quickly. So as usual, the answer is to use multiple drives/services. Including onsite, offsite, online, etc.


    That can't be the end answer. There'd be an infinite regress of infinite backups: backup the backup, and then back that up, and then back that up. . . . and back that up too . . . .

    What's a good stopping point? It seems like backing up working files is sufficient. What would warrant backing up archives outside enterprise applications?
  • MidnightDistort
    The way I back up my files: I already have a 3TB drive (which is my main backup), the 2TB is what I use when I need the most common files, and I use 40-250GB drives as my main drives. They are older drives, but at least if any of them die it won't be a huge loss. Doing a drive error check may be in order if you are concerned about data loss, and backup drives should be, well, used as backups, meaning they should only be running when backing up data. At least that's how I see it.

    I had already lost a 120GB drive (it died), and my 160GB drive no longer works properly, so I can't count on that one for everyday use; eventually I will end up needing the 2TB for everyday use. That means I'd like to get another high-capacity drive, which would be ideal since several 80GB drives are all I am using right now, and with more TV shows on my 2TB drive I will need additional drives. I like the easy access, no need to run the DVD.
  • JPNpower
    1406980 said:
    1335260 said:
    Meh, it has flaws. Less active usage may be better for reliability, but then again, constant operation constantly checks for errors so it would find errors more quickly. So as usual, the answer is to use multiple drives/services. Including onsite, offsite, online, etc.
    That can't be the end answer. There'd be an infinite regress of infinite backups: backup the backup, and then back that up, and then back that up. . . . and back that up too . . . . What's a good stopping point? It seems like backing up working files is sufficient. What would warrant backing up archives outside enterprise applications?


    It's your choice. Drives are inherently unreliable. If your data isn't really important, 1 or no backups is probably enough. If you care, add more backups. It's just statistics at this point. There is no perfect system.

    What do I do? Two active sites (work/home) for important recent data, along with USB stick(s) and possibly cloud. Two passive sites (external HDDs) for important old data, and one passive site for old, less important stuff. Some long-term data, like photos, is uploaded to Backblaze online as well.
  • Eggz
    I think I'll impose a "two-copy" rule.

    I'll just back up my RAID until it's full and I need to offload. Initially, one backup will be enough. But once all 4 TB of my RAID 1 is full, I'll have to delete things from the RAID to make space, which will leave only the archived copy unless I back that up.

    So for now,

    System SSD + Data RAID + RAID Backup

    But then once I need to offload and make space on the RAID, it will be:

    System SSD + Data RAID + RAID Backup + Emergency Recovery Archive

    System SSD = 750 GB

    Data RAID = 2 x 4 TB in RAID 1 (for single drive fault tolerance)

    RAID Backup = Single 4 TB Drive

    Emergency Disaster Recovery Archive = An 8 TB drive to hold twice as much as the Data RAID

    That should keep things pretty safe if I keep the Emergency Disaster Recovery Archive in a fireproof safe. There's nothing worth backing up on the SSD.
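Summing the raw capacities in that plan (a rough check under the stated assumptions, ignoring filesystem overhead, not an endorsement of the layout) lands near the 20 TB figure mentioned later in the thread:

```python
# Raw capacity in the plan above (decimal TB, ignoring filesystem overhead).
drives_tb = {
    "System SSD": 0.75,
    "Data RAID (2 x 4 TB, RAID 1)": 8.0,  # raw; usable is 4 TB after mirroring
    "RAID Backup": 4.0,
    "Emergency Disaster Recovery Archive": 8.0,
}
print(sum(drives_tb.values()))  # 20.75 TB raw, roughly the "20 TB of drives" cited below
```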
  • JPNpower
    1406980 said:
    I think I'll impose a "two-copy" rule. I'll just back up my RAID until it's full and I need to offload. Initially, one backup will be enough. But once all 4 TB of my RAID 1 is full, I'll have to delete things from the RAID to make space, which will leave only the archived copy unless I back that up. So for now, System SSD + Data RAID + RAID Backup But then once I need to offload and make space on the RAID, it will be: System SSD + Data RAID + RAID Backup + Emergency Recovery Archive System SSD = 750 GB Data RAID = 2 x 4 TB in RAID 1 (for single drive fault tolerance) RAID Backup = Single 4 TB Drive Emergency Disaster Recovery Archive = An 8 TB drive to hold twice as much as the Data RAID That should keep things pretty safe if I keep the Emergency Disaster Recovery Archive in a fireproof safe. There's nothing worth backing up on the SSD.


    Do you commute to work or something similar? Because if you do, I suggest only doing the "RAID Backup" once a month or so, and bringing that to a secure place at work to have a good "offsite" backup. Keep the EDR archive in a fireproof safe at home or whatever. In between the monthly or so backups, rely on USB flash drive(s).
  • Eggz
    I think that will work, except I'm not sure any USB flash will have nearly enough space. My data source for backing up is 4TB, which is about half full.
  • JPNpower
    1406980 said:
    I think that will work, except I'm not sure any USB flash will have nearly enough space. My data source for backing up is 4TB, which is about half full.


    Go surf Amazon. 256gig monster thumb drives are being sold for peanuts. Do you generate more than 256 gigs of new data every few weeks to a month? I doubt it. But if so, use those 1tb portable HDDs which are also being sold for (relative) peanuts.
  • Eggz
    1335260 said:
    Go surf Amazon. 256gig monster thumb drives are being sold for peanuts. Do you generate more than 256 gigs of new data every few weeks to a month? I doubt it. But if so, use those 1tb portable HDDs which are also being sold for (relative) peanuts.


    I definitely could, but having 20 TB of drives should be fine, and I like the simplicity of having only 4 storage devices for the entire solution. I can see the USBs turning into a messy drawer where I'll probably lose drives.
  • JPNpower
    1406980 said:
    1335260 said:
    Go surf Amazon. 256gig monster thumb drives are being sold for peanuts. Do you generate more than 256 gigs of new data every few weeks to a month? I doubt it. But if so, use those 1tb portable HDDs which are also being sold for (relative) peanuts.
    I definitely could, but having 20 TB of drives should be fine, and I like the simplicity of having only 4 storage devices for the entire solution. I can see the USBs turning into a messy drawer where I'll probably lose drives.


    It's less secure though. The system you have protects only against random hard drive failure/corruption. Your immediate data is not safe from house catastrophe or power surges etc. All you have is that "fireproof" archive and no offsite.
  • Eggz
    Yeah, I think the data will be pretty secure with all we've discussed. :)
  • JPNpower
    1406980 said:
    Yeah, I think the data will be pretty secure with all we've discussed. :)


    Just watch out for the issues I listed...