
Power Cycling

Power Supply Reference: Consumption, Savings, And More

Should you turn off a system when it is not in use? To answer this frequent question, you should understand some facts about electrical components and what makes them fail. Combine this knowledge with information on power consumption, cost, and safety to come to your own conclusion. Because circumstances can vary, the best answer for your own situation might be different from the answer for others, depending on your particular needs and applications.

Frequently powering a system on and off does cause deterioration and damage to the components. This seems logical, but the simple reason is not obvious to most people. Many believe that flipping system power on and off frequently is harmful because it electrically “shocks” the system. The real problem, however, is temperature or thermal shock. As the system warms up, the components expand; as it cools off, the components contract. In addition, various materials in the system have different thermal expansion coefficients, so they expand and contract at different rates. Over time, thermal shock causes deterioration in many areas of a system.

From a pure system-reliability viewpoint, you should insulate the system from thermal shock as much as possible. When a system is turned on, the components go from ambient (room) temperature to as high as 185°F (85°C) within 30 minutes or less. When you turn off the system, the same thing happens in reverse, and the components cool back to ambient temperature in a short period.

Thermal expansion and contraction remains the single largest cause of component failure. Chip cases can split, allowing moisture to enter and contaminate them. Delicate internal wires and contacts can break, and circuit boards can develop stress cracks. Surface-mounted components expand and contract at rates different from the circuit boards on which they are mounted, causing enormous stress at the solder joints. Solder joints can fail due to the metal hardening from the repeated stress, resulting in cracks in the joint. Components that use heatsinks, such as processors, transistors, or voltage regulators, can overheat and fail because the thermal cycling causes heatsink adhesives to deteriorate and break the thermally conductive bond between the device and the heatsink. Thermal cycling also causes socketed devices and connections to loosen, or creep, which can cause a variety of intermittent contact failures.

Thermal expansion and contraction affect not only chips and circuit boards, but also things such as hard disk drives. Most hard drives today have sophisticated thermal compensation routines that make adjustments in head position relative to the expanding and contracting platters. Most drives perform this thermal compensation routine once every five minutes for the first 30 minutes the drive is running and then every 30 minutes thereafter. In older drives, this procedure can be heard as a rapid “tick-tick-tick-tick” sound.

In essence, anything you can do to keep the system at a constant temperature prolongs the life of the system, and the best way to accomplish this is to leave the system either permanently on or permanently off. Of course, if the system is never turned on in the first place, it should last a long time indeed!

Now, I am not saying that you should leave all systems fully powered on 24 hours a day. A system powered on when not necessary can waste a tremendous amount of power. An unattended system that is fully powered on can also be a fire hazard. (I have witnessed at least two CRT monitors spontaneously catch fire—luckily, I was there at the time.)

The biggest problem with keeping systems on 24/7 is the wasted energy. A typical rate is 10 cents per kilowatt-hour of electricity. Using this figure, combined with information about what a typical PC might consume, we can determine how much it costs to run the system annually and what effect we can have on the operating cost by judiciously powering off or taking advantage of the various ACPI Sleep modes that are available. ACPI is described in more detail later in this chapter.

A typical desktop-style PC consumes anywhere from 75 W to 300 W when idling and from 150 W to 600 W under a load, depending on the configuration, age, and design of the system. This does not include the monitor: LCDs range from 25 W to 50 W while active, whereas CRTs range from 75 W to 150 W or more. One PC and LCD display combination I tested consumed an average of 250 W (0.25 kilowatts) of electricity during normal operation. The same system drew 200 W when in ACPI S1 Sleep mode, only 8 W while in ACPI S3 Sleep mode, and 7 W while either turned off or hibernating (ACPI S4 mode).

Using those figures, here are some calculations for annual power costs:

Electricity Cost:         $0.10 per kWh
PC/Display Power:         0.250 kW avg. while running
PC/Display Power:         0.200 kW avg. while in ACPI S1 Sleep
PC/Display Power:         0.008 kW avg. while in ACPI S3 Sleep
PC/Display Power:         0.007 kW avg. while in ACPI S4 Sleep
PC/Display Power:         0.007 kW avg. while OFF
Work Hours:                2080 per year
Non-Work Hours:            6656 per year
Total Hours:               8736 per year
-------------------------------------------------------------------
Annual Operating Cost:  $218.40 Left ON continuously
Annual Operating Cost:  $185.12 In S1 Sleep during non-work hours
Annual Operating Cost:   $57.32 In S3 Sleep during non-work hours
Annual Operating Cost:   $56.66 In S4 Sleep during non-work hours
Annual Operating Cost:   $56.66 Turned OFF during non-work hours
-------------------------------------------------------------------
Annual Savings:           $0.00 Left ON continuously
Annual Savings:          $33.28 In S1 Sleep during non-work hours
Annual Savings:         $161.08 In S3 Sleep during non-work hours
Annual Savings:         $161.74 In S4 Sleep during non-work hours
Annual Savings:         $161.74 Turned OFF during non-work hours


This means it would cost more than $218 annually to run the system if it were left on continuously. However, if it were turned off during nonwork hours, the annual operating cost would drop to less than $57, for an annual savings of more than $161! As you can see, turning systems off when they are not in use can add up to substantial savings over time.
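If you want to run the same arithmetic with your own utility rate and measured wattages, here is a minimal Python sketch that reproduces the table above. The rate, wattages, and hours are the assumptions from this example system, not universal figures, so substitute your own measurements.

# Reproduces the annual cost/savings table above. All figures are the example
# assumptions: $0.10/kWh, a PC-plus-LCD combination averaging 250 W while in
# use, a 2080-hour work year, and 6656 non-work hours.
RATE_PER_KWH = 0.10
WORK_HOURS = 2080
NONWORK_HOURS = 6656
RUNNING_KW = 0.250

# Average draw (kW) during non-work hours under each power policy.
nonwork_kw = {
    "Left ON continuously": 0.250,
    "S1 Sleep during non-work hours": 0.200,
    "S3 Sleep during non-work hours": 0.008,
    "S4 Hibernate during non-work hours": 0.007,
    "Turned OFF during non-work hours": 0.007,
}

always_on_cost = (WORK_HOURS + NONWORK_HOURS) * RUNNING_KW * RATE_PER_KWH

for policy, kw in nonwork_kw.items():
    cost = (WORK_HOURS * RUNNING_KW + NONWORK_HOURS * kw) * RATE_PER_KWH
    print(f"{policy:<36} ${cost:7.2f}/yr   saves ${always_on_cost - cost:7.2f}")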

But even more interesting is that you don’t have to turn a system all the way off to achieve this type of savings. When properly configured, most PCs will enter ACPI S3 Sleep mode either manually or after a preset period of inactivity, dropping to a power consumption level of 8 W or less. In other words, if you configure the PC to enter S3 Sleep mode when it’s not active, you can achieve nearly the same savings as if you were to turn it off completely. In the preceding example, it would cost only an additional $0.66 per year to keep the system in S3 Sleep (Stand By) mode during nonwork hours versus turning it completely off, still resulting in an annual savings of more than $161.

With the improved power management capabilities of modern hardware, combined with the stability and control features built into modern OSs, systems can Sleep and Resume almost instantly, without having to go through the lengthy shutdown and cold boot startup procedures over and over again. I’m frankly surprised at how few people I see taking advantage of this because it offers both cost savings and convenience.

Many people perform a full shutdown procedure when turning off their computer, closing all open applications, shutting down the OS and system completely. Then when powering back on, they do a cold boot and reload the OS, drivers, and applications from scratch.

There is a much better alternative: instead of shutting down completely, put the system to Sleep. When in Sleep mode, the system saves the full system context (state of the system, contents of RAM, and so on) in RAM before powering off everything but the RAM. Unfortunately, many systems aren’t configured to take advantage of Sleep mode, especially older ones. Note that Sleep was called Standby (or Stand by) in Windows XP and earlier.

The key is in the system configuration, starting with one important setting in the BIOS Setup. The setting is called ACPI suspend mode, and ideally you want it set so that the system enters what is called the S3 state. S3 is sometimes called STR, for Suspend to RAM. That has traditionally been the default setting for laptops; however, many if not most desktops unfortunately have ACPI suspend mode set to the S1 state by default. ACPI S1 is sometimes called POS, for Power on Suspend, a state in which the screen blanks and the CPU throttles down; however, almost everything else remains fully powered on. As an example, a system and LCD display that consume 250 W will generally drop to about 200 W while in S1 Sleep, whereas the same system will drop to only 8 W of power consumption in the S3 (Suspend to RAM) state.

When the system is set to suspend in the S3 state, upon entering Sleep (either automatically or manually), the current system context is saved in RAM and all the system hardware (CPU, motherboard, fans, display, and so on) except RAM is powered off. In this mode, the system looks as if it is off and consumes virtually the same amount of power as if it were truly off. To resume, you merely press the power button just as if you were turning the system on normally. You can configure most systems to resume on a key press or mouse click as well. Then, instead of performing a normal cold boot and full restart, the system almost instantly powers on and resumes from Sleep, restoring the previously saved context. Your OS, drivers, all open applications, and so on, appear fully loaded just as they were when you “powered off.”

As mentioned, many people have been using this capability on laptops, but few seem to be aware that you can use it on desktop systems also. To enable this deeper sleep capability, there are only two main steps (a quick verification sketch follows the list):

  1. Enter the BIOS Setup, select the Power menu, locate the ACPI suspend setting, and set it to enter the S3 state (sometimes called STR for Suspend to RAM). Save, exit, and restart.
  2. In Windows, open the Power Options tool in the Control Panel, locate the setting for the Power button and change it to Sleep or Stand by.
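Before and after making these changes, it is worth confirming which sleep states the hardware, firmware, and drivers actually expose. The built-in Windows powercfg utility reports this with powercfg /a; the following minimal Python sketch simply shells out to it (it assumes it is being run on Windows, where powercfg is available):

# Minimal sketch: list the ACPI sleep states this Windows system supports.
# Assumes Windows with the built-in powercfg.exe available on the PATH.
import subprocess

# "powercfg /a" prints which sleep states are available (for example S1, S3,
# Hibernate, Hybrid Sleep) and explains why unavailable states are disabled.
result = subprocess.run(["powercfg", "/a"], capture_output=True, text=True)
print(result.stdout)

If S3 still does not appear after changing the BIOS setting, the output normally names the driver or firmware component that is blocking it.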

You can also take advantage of hibernation, which uses the ACPI S4 (STD, Suspend to Disk) state in addition to S3. ACPI S4 is a lot like S3, except that the system context is saved to disk (in a file called hiberfil.sys) instead of RAM, after which the system enters the G2/S5 state. The G2/S5 state is also known as Soft-Off, which is exactly the same as if the system were powered off normally. When you power on from Hibernation (S4), the system still cold boots; however, rather than reloading everything from scratch, Windows restores the system context from disk (hiberfil.sys) instead of rebooting normally. Although hibernating isn’t nearly as fast as S3 (Suspend to RAM), it is still much faster than a full shutdown and restart, and it works even if the system loses power completely while suspended.

Windows XP and earlier allow you to place a system in Standby (Sleep) or Hibernate mode, whereas Windows Vista and later have Sleep, Hibernate, and Hybrid Sleep modes. Hybrid Sleep is a combination of Sleep and Hibernate, in which the system state is saved both in RAM and to the hard disk as a backup. Hybrid Sleep is the default Sleep setting for desktop systems, and because of the extra time needed to create the hiberfil.sys file, it unfortunately makes entering Sleep take just as long as entering Hibernate. To speed up Sleep in Windows 7/Vista, you can disable Hybrid Sleep.
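If you prefer to script that last change rather than dig through the Power Options dialogs, powercfg can also adjust individual plan settings. The sketch below is hedged: the SUB_SLEEP and HYBRIDSLEEP setting aliases are assumptions that should be verified against the output of powercfg /aliases on your own build, and the commands require an elevated (administrator) prompt on Windows Vista/7 or later.

# Hedged sketch: turn off Hybrid Sleep for the current power plan (on AC power)
# so that entering Sleep uses plain S3 instead of also writing hiberfil.sys.
# The SUB_SLEEP and HYBRIDSLEEP alias names are assumptions; confirm them with
# "powercfg /aliases" before running. Requires administrator rights.
import subprocess

subprocess.run(
    ["powercfg", "/setacvalueindex", "SCHEME_CURRENT", "SUB_SLEEP", "HYBRIDSLEEP", "0"],
    check=True,
)
# Reapply the current scheme so the changed setting takes effect immediately.
subprocess.run(["powercfg", "/setactive", "SCHEME_CURRENT"], check=True)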

Finally, to make the system Sleep automatically, you can change the Windows Power Scheme settings to put the system in Sleep mode after a preset period of inactivity of your choice (I usually set it for 30 minutes to an hour).
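The same inactivity timeout can be set from a script instead of the Power Scheme dialog. A minimal sketch, assuming Windows 7 or later with the built-in powercfg utility (the 30-minute value is just the example from above):

# Minimal sketch: have the active power plan enter Sleep after 30 minutes of
# inactivity while on AC power. Assumes Windows 7 or later with powercfg.
import subprocess

subprocess.run(["powercfg", "/change", "standby-timeout-ac", "30"], check=True)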

By using S3 Sleep mode, you can effectively leave the system running all the time yet still achieve nearly the same savings as if you turned it off completely. Servers, of course, should be left on continuously; however, if you set the system to Wake on LAN (WOL) in both the BIOS Setup and in Windows, the system can automatically wake up anytime it is being accessed. The bottom line is that taking advantage of Sleep mode can save a significant amount of energy (and money) over time.
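Wake on LAN works by having another machine send a "magic packet" (six 0xFF bytes followed by the target adapter's MAC address repeated sixteen times) as a UDP broadcast. Below is a minimal Python sketch of that packet format; the MAC address shown is only a placeholder, and both the BIOS Setup and the network adapter driver must have WOL enabled for the target to respond.

# Minimal Wake-on-LAN sketch: build the standard magic packet and broadcast it
# over UDP. Replace the placeholder MAC address with the sleeping machine's
# real adapter address; ports 7 and 9 are the conventional choices.
import socket

def wake_on_lan(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    packet = b"\xff" * 6 + mac_bytes * 16
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(packet, (broadcast, port))

wake_on_lan("00:11:22:33:44:55")  # placeholder MAC address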

Comments

    de5_Roy, January 11, 2012 4:07 AM
    very informative!

    palladin9479, January 11, 2012 4:47 AM
    Holy cow. Thanks for that Asus PSU link. I now know what's causing my system instability.

    AMD Phenom II x4 980BE OC'd
    4 x 4GB DDR3-1600 memory
    2x NVidia GTX-580 SLI'd
    4x SATA HDD's
    1x SATA DVDRW
    7x FANs (Water cooled system)

    Comes to 1150W recommended. I have a Corsair HX-1000 1000W PSU.

    sincreator, January 11, 2012 4:57 AM
    Still running a Thermaltake 750 W Toughpower here. Been 5/6 years now. Man, this PSU has seen some upgrades. lol. I'll probably buy another Toughpower/Corsair sometime in the near future. (If this one ever dies. lol)

    Dacatak, January 11, 2012 5:41 AM
    Still using the same Enermax Liberty 500W from 2006 for my new Sandy Bridge upgrade with GTX 560Ti.
    The only reason you'd need more than 500W is if you need to power more than one GPU.

    Of course, as stated in the article, not all 500W PSUs are equal. The Enermax Liberty was among the best 500W PSUs in its day, and its quality is still exceptional even by today's standards.
    It has dual 12V rails with 22A on each with a combined output of 32A total. Most of the dual-rail 500W PSUs sold nowadays max out at 18A per rail.

    The Enermax was definitely ahead of its time, and in general, PSUs sold directly by their manufacturer (OEMs such as Enermax, FSP, Kingwin, Seasonic) tend to be of superior quality to those sold by third-party rebranders (Antec, OCZ, Thermaltake, Corsair, etc.).

    cumi2k4, January 11, 2012 8:22 AM
    Was wondering about power cycling and thermal shock... The article said that thermal shock from powering on & off can cause deterioration in a system. You suggest S3 (Suspend to Ram), but does this also cause thermal shock to the system when resuming from sleep mode?

    lordvj, January 11, 2012 11:59 AM
    ^ this. was wondering the same thing

    jaquith, January 11, 2012 2:37 PM
    Great article and thanks, it'll 'hopefully' make my job easier in the Forum and stop the silly arguments I have when recommending PSUs. I really wish folks would stop skimping on the PSUs in otherwise nice systems.

    Another important point that folks have a tendency to forget is 'electrolytic capacitor aging,' which over a year or so can reduce a once-650 W unit to an effective 500 W~520 W.

    Great PSU Sizer -> http://www.thermaltake.outervision.com/
    Peak:
    100% CPU Utilization (TDP)
    100% System Load
    30%~35% for Capacitor Aging

    zak_mckraken, January 11, 2012 2:40 PM
    @cumi2k4 and lordvj : We can only assume it does cause a thermal shock, since only the RAM retains power in S3 mode. The other unpowered components thus cool down during stand by mode, like a regular shutdown.

    Very informative article by the way!

    TeraMedia, January 11, 2012 2:40 PM
    @palladin9479:

    Yeah, me too! I had significantly underbudgeted power for fans (9), ODD/HDDs (8) and USB devices (3), and was going nuts trying to figure out why the system was unstable at times. I thought I had a bad MoBo, or HDDs, or GPU, or ??!?!@#$? Now I know.

    xenol, January 11, 2012 3:04 PM
    I'm kind of suspicious of the ASUS power supply link. It tells me I should get a 600 W power supply for my old system, but I ran a 500 W unit on it for years without a problem.

    Onus, January 11, 2012 3:35 PM
    The statement about third-party rebranders depends on who the OEM is. If Seasonic or Delta makes it (e.g., most Antec units), it is going to be a good PSU. Many Corsair and XFX units are made by Seasonic too. Channel Well, Sirtec, and some others have some units that aren't so great.

    I found the article of some interest (and will revisit the sleep settings on my own system), but some of it was also years out of date. That's probably hard to avoid on a writing project of this magnitude.

    chaz_music, January 11, 2012 3:38 PM
    Good collection of interesting PSU topics. I especially liked the ACPI information. I have several comments and suggestions to change in the article though. I work in the PSU industry and can shed some light on a few issues.

    On efficiency, most people leave out the fact that we tend to use air conditioning here in the USA a good part of the year. Here in the mid-Atlantic, we tend to use A/C for about seven months annually. This adds a thermal penalty to any heat that you dump into the office/home air during those months. With most A/C systems, the cost to remove 1 W of heat is an additional 0.5 W of A/C power (50% overhead). Taking the above numbers and some rounding, I use an overhead rating of 30% total for any heat dumped into my home/office. So take your power loss numbers and multiply by 1.30 to get the total cost impact to your wallet. This also should be done for CFL and LED lighting. They are not allowed to use A/C cost in their advertising, so the public does not get to see the true possible savings.

    There are several types of UPS systems that you should write about. The one you outlined is called a double-conversion unit, which is always processing the power to give a clean, regulated sine wave output. These are the least efficient and most expensive, though. Double conversion is always taking the AC input, making DC, and using a PWM inverter to make regulated AC again for the output. Double-conversion efficiencies are typically around 88-90%, so this can impact your total system efficiency and operational costs. A cheaper UPS is the standby type, which allows the raw utility power to go straight to the load with some light-duty surge clamping in between. When the input voltage goes out of bounds, there is a switchover that is usually around 4-8 ms, which is faster than the PSU hold-up time of 20 ms. Since normal operation is straight pass-through, the usual efficiency is close to 100% (minus the UPS internal power needs and charging). Note, though, that some UPS systems are crap and can use upwards of 100 W just being plugged in.

    I did not follow your discussion on the alarm buzzer indicating overcharge, which should never happen in any UPS. Most modern UPS systems implement a battery test to make sure that the battery capacity and internal resistance are able to hold up the load. If the battery fails, they set off the buzzer. In almost all UPS systems, a buzzer alarm is critical: something is wrong. Some UPS systems also monitor ground feed continuity and will alarm if the input feed ground starts to float, making the UPS and the load unsafe to touch.

    The UPS output waveforms are not all sine wave. Often the double-conversion types are sine wave, adding to their cost. Standby UPS systems are usually step wave, also called quasi-sine, which is a marketing term for step wave (to confuse the buyer). Most PC loads and monitors work fine with step wave (and are even more efficient on step wave!), although some PFC PSUs have problems. Magnetic loads (motors, transformers) can have real heartburn with step wave due to high losses and non-sinusoidal voltage waveform effects.

    Ferroresonant transformers are good voltage regulators, but the way they work is very lossy. A good ferro will only run around 90% efficiency. If your load is attached to a ferro, you are adding another power loss to your system. In my opinion, you are better off spending a few more dollars and getting a UPS (and there are still ferro-type UPSs out there as well).

    There is also no mention of oversizing your PSU. Many HTPC and SOHO/home server systems are on 24/7, so power usage and efficiency are paramount to the cost of use/ownership. If you install an oversized PSU, you are taking an efficiency hit (for most brands) that increases your energy usage. The 80 Plus standards do not test below 20% load, and the efficiency of most PSU designs drops off quickly below 20% load. I have seen several that are below 50% efficiency at 10% loading. A good analogy on oversizing that I have used before is car engines: you cannot get a V8 engine to run as efficiently as a 4-cylinder due to the physics (more friction/mass, etc.). The same effect occurs in a PSU: larger magnetics, power devices, and other overhead lower the efficiency at low power. Proper sizing can save a good bit of money. Just don't go too small, either, especially considering system startup (HDD spin-up, fans, CPU local PSUs ramping up, etc.).

    Your comment on thermal shock is great, but there are many other factors to consider in reliability. Spinning down any HDD and fan loads reduces bearing wear for those mechanical parts. But keeping the main motherboard PCB powered, with some operation continuing, also helps with reliability. The minor amount of heat that is generated helps keep the PCB dry (PCB material is hygroscopic!); moisture in the high-voltage area is one major reason a PSU fails after long storage (like right after purchase), causing a DOA. And as others pointed out in the comments, allowing the system to go into a sleep state will also cause a cool-down thermal shock. The biggest problem with thermal shock is that it breaks solder joints and helps break bond wires/connections in ICs. It also speeds up electrolytic cap leakage and shortens their life. Does anyone remember the motherboard cap failures from a few years ago?

    The absolute largest cause of computer failures is ESD damage. The data from companies that keep statistics on this unanimously show it as fact, but the PC enthusiast industry does not do a good job of educating end users about it. In the electronics industry as a whole, ESD accounts for nearly 55-60% of all failures! This includes component suppliers, etc. So if you want a great topic for a future article, tackle ESD. It is real, and it is very costly when ignored. Ever had a PC part that was DOA, i.e., that just "did not work at all" when powered up the first time? Good chance it was ESD.

    Thanks for the article.

    george21546, January 11, 2012 3:52 PM
    Buy a power meter; a Kill A Watt comes to mind. It costs $15-20 and will tell you amps, watts, power factor, and cycles per second. Best of all, it will measure watts over time so you can check how much your system is using in each of its states. I like to oversize power supplies by 25% unless upgrades are planned.

    chris maple, January 11, 2012 6:48 PM
    The low ends of the ranges shown are too high. Discrete video cards are available that use less than 10 watts, same for hard drives. Motherboards rarely exceed 25 watts.
    My system has an Intel Core i7-870, a discrete video card, 2x2GB RAM, two 1 TB hard drives, an SSD, and a DVD burner. It usually runs at 70 watts and has never exceeded 200 watts when driven hard.

    ethaniel, January 11, 2012 8:49 PM
    I thought it was only -5% tolerance for the -/+12v rail. Good data.

    BlackHawk91, January 11, 2012 10:22 PM
    Would enabling the S3 sleep mode interfere with OC settings and/or performance?

    hardcore_gamer, January 12, 2012 12:12 PM
    Quoting palladin9479: "Holy cow. Thanks for that Asus PSU link. I now know what's causing my system instability. AMD Phenom II x4 980BE OC'd / 4 x 4GB DDR3-1600 memory / 2x NVidia GTX-580 SLI'd / 4x SATA ......"


    You have a serious bottleneck there bro ;) . Time to upgrade the CPU.

    g-unit1111, January 12, 2012 10:50 PM
    Quoting palladin9479: "Holy cow. Thanks for that Asus PSU link. I now know what's causing my system instability. AMD Phenom II x4 980BE OC'd / 4 x 4GB DDR3-1600 memory / 2x NVidia GTX-580 SLI'd / 4x SATA HDD's / 1x SATA DVDRW / 7x FANs (Water cooled system) / Comes to 1150W recommended. I have a Corsair HX-1000 1000W PSU."


    Yeah... that floored me as well, mine is 900 minimum.

    1 x AMD Phenom II X6 1055T OC'd
    2 x Geforce GTX 550TI
    4 x 4GB DDR3
    1 x SSD
    2 x HD
    2 x DVD-RW
    5 x CPU fans (double heat sink)

    I know now that what's causing most of my heat issues is that I'm running an underpowered PSU (Corsair 750). I will definitely make this my next upgrade.

    And that thing about putting systems to sleep, I'll do that more often.

    palladin9479, January 12, 2012 11:57 PM
    Remember that the ASUS link is calculating the approximate maximum power draw possible on your system, basically with everything going full blast, which doesn't happen too often.

    PSUs in general start to get stressed once they're over 80% of their rated output. Prolonged stress can cause components to wear out much earlier than they otherwise would. This is why a PSU may be fine for a while but then start to have random issues six months or more after installation. I just didn't think I was burning that much juice, but now it seems I am.

    A Bad Day, January 13, 2012 2:51 AM
    Just a question: is it worth watercooling a PSU? I know it would boost efficiency and allow it to put out more wattage than specified, but is it worth it?