
Google Picks Gigabyte for Efficiency/Reliability

Source: Tom's Hardware US | 12 comments

Google this week released details of how it builds servers at its major data centers. Not surprisingly, Google focused on efficiency and power, two key criteria for running an always-on operation.


Google engineer Ben Jai revealed that Google uses large shipping containers, each of which can hold up to 1,160 servers. Each server occupies a 2U chassis and is backed up by battery power in case of outages. The biggest difference from conventional designs, however, is that Google builds this battery backup into each individual server.

Typically, data centers use large uninterruptible power supplies (UPS) that provide backup power to whole racks of servers. Jai, however, said that this approach only achieves up to 95-percent power efficiency. Google's servers, custom-designed in-house by Jai and his team, use an individual battery for each server, and Jai said Google is thereby able to achieve greater than 99.9-percent efficiency.
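To put those two percentages in perspective, here is a rough back-of-the-envelope calculation. Only the 1,160-server container size and the two efficiency figures come from the article; the per-server wattage is an illustrative assumption, not a number from Google:

```python
# Back-of-the-envelope comparison of backup-power conversion losses at the
# two efficiency figures quoted in the article (95% vs. 99.9%).
SERVERS_PER_CONTAINER = 1160   # from the article
WATTS_PER_SERVER = 250         # assumed average draw per server (hypothetical)
HOURS_PER_YEAR = 24 * 365

def yearly_loss_kwh(efficiency: float) -> float:
    """kWh lost per container-year to power conversion at the given efficiency."""
    load_w = SERVERS_PER_CONTAINER * WATTS_PER_SERVER
    input_w = load_w / efficiency          # power drawn to deliver load_w
    return (input_w - load_w) * HOURS_PER_YEAR / 1000.0

for eff in (0.95, 0.999):
    print(f"{eff:.1%}: ~{yearly_loss_kwh(eff):,.0f} kWh lost per container-year")
```

Under these assumed loads, the 95-percent-efficient rack UPS wastes roughly 130,000 kWh per container per year, versus under 3,000 kWh at the 99.9-percent figure, which is why the gap matters at Google's scale.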

Using this method, Google can supply backup power only to the servers that require it rather than to an entire rack. It also allows Google to track power consumption and efficiency at a very granular, per-server level.
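As a loose sketch of what per-server power tracking might look like (the field names and readings below are invented for illustration; this is not Google's actual tooling):

```python
from dataclasses import dataclass

@dataclass
class ServerPowerReading:
    """One sampled power reading for one server; all fields are hypothetical."""
    server_id: str
    input_watts: float    # power drawn from the feed
    output_watts: float   # power actually delivered to the board

    @property
    def efficiency(self) -> float:
        return self.output_watts / self.input_watts

# With a reading per server, inefficiency shows up machine by machine
# instead of being averaged away inside one rack-level UPS figure.
readings = [
    ServerPowerReading("container7-rack3-slot17", 251.0, 250.7),
    ServerPowerReading("container7-rack3-slot18", 263.4, 262.1),
]
for r in readings:
    print(f"{r.server_id}: {r.efficiency:.2%} efficient")
```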

The individual servers themselves use motherboards from Gigabyte. For the past two years, Gigabyte has positioned its motherboards as top performers in power utilization and thermal efficiency. Each server also contains two CPUs, two hard drives, and a standard complement of memory. Jai mentioned that Google uses CPUs from both AMD and Intel.

In this particular server, Google uses a Gigabyte GA-9IVDP, which is not available to the general public.

Image courtesy of Stephen Shankland/CNET

This thread is closed for comments.
  • Shadow703793, April 3, 2009 1:28 AM
    Good news for Gigabyte. More specs would have helped, especially regarding the HDDs (10K or 15K RPM, etc.) and the motherboard model(s).
  • Flameout, April 3, 2009 2:17 AM
    I'll be looking for a 3-year warranty on parts I buy for my next compy. I know Asus has it, but does Gigabyte?
  • my_name_is_earl, April 3, 2009 2:23 AM
    That's a pretty nice open case. Very useful for switching out components. I need one of those for my computer store, as we regularly fix PCs and the like. Too busy to build one myself :( Anyway, I build all my computers using Gigabyte motherboards, so I know where this article is coming from and agree with Google's decision.
  • ravenware, April 3, 2009 4:33 AM
    Cool. Gigabyte has really stepped up their game as of late.

    I have been eyeballing MSI too. They slipped for a while but now seem to be offering good competitive boards with nice layouts.

  • scimanal, April 3, 2009 7:27 AM
    7.2K RPM drives. I have some of those same drives sitting on my desk: SATA Hitachi Deskstars, not SAS (no need, since these are commonly used in massively parallel database operations, i.e. no VMware, etc.). The massive storage would be in separate SAN units, much like IBM's storage units.

    I am curious about the mobo model as well, but I assume it varies; just look at Gigabyte's most recent server mobos. They are, across the board, very efficient. I suspect a lot of the efficiency lies in
  • ilikesoup, April 3, 2009 2:00 PM
    The GA-9IVDP has been rumored since 2006. I found a thread where a guy said he got a system built around a GA-9IVDP as a signing bonus from Google.
  • hellwig, April 3, 2009 9:12 PM
    Shipping containers? Those big solid metal ones? I wonder if Google counts the power it takes to circulate air through one of those. They aren't designed to be very open to airflow, not when they travel across the ocean on an open-deck ship.
  • Anonymous, April 3, 2009 9:28 PM
    Perhaps they forgot to mention that separate batteries often have a lower efficiency than one large one?
    The 95% vs. 99% efficiency advantage quickly gets lost if you consider that if one battery has an efficiency of 95%, 16 smaller batteries will have a total efficiency of 90.31% (each doubling of the battery count adds half of the previous loss: 95% → 92.5% → 91.25% → 90.625% → 90.31%).
    In other words, they'd be better off with a large-capacity battery and controllers that redirect power where needed, as opposed to separate cells for each rack server.

    Either way, Google must know something Yahoo and the others don't, because their search engine is just amazing!
  • Milleman, April 4, 2009 8:23 AM
    A battery on each server...? Maybe they are right regarding efficiency, but those batteries only last around two years; then each one has to be changed. How's that for cost and efficiency?
  • Milleman, April 4, 2009 9:44 AM
    Are they using Linux?
  • smokinjoe, April 4, 2009 9:55 PM
    They are using Windows Server 2008 Datacenter edition, 64-bit, and for each container they cough up $2-9 million to Microsoft! What a deal; they could be wasting billions per year if they ran Linux! You know the TCO is less on Windows, right?
  • Anonymous, April 5, 2009 12:12 AM
    Wonder why they don't use one large power supply and one large battery backup.

    As others have said, many small batteries are not more efficient. I don't know where they are getting their 99.9% efficiency numbers, but that's completely bogus. The battery itself is less efficient than that. Unless they are only talking about transmission losses, but I really can't see them losing very much using 20 feet of wire vs. 2 feet. One large supply with controllers makes way more sense, especially since you could redirect power to keep specific servers up way longer than you can if each server has its own battery.

    As far as one large power supply goes, same thing. Power supplies have an efficiency curve based on how much you draw from them. It would be far easier to tune for high efficiency with one large power supply than with many small ones. I'm sure you could get the efficiency up into the 95% range.

    It's also much easier to cool the entire thing if you can isolate the power supplies and backups from the servers. Pulling the power supply out of the 'server room' can reduce the cooling requirements significantly.

    I mean, they have hundreds of thousands of servers, so I'm sure they know what they are doing. But in this case it looks like they chose the off-the-shelf method rather than the best method.