on or off

Archived from groups: microsoft.public.windowsxp.basics

The more I read the posts, the more uncertain I am now about the on-or-off
question. Power use? Power surges? Cycling? More dust build-up? Damage to
parts? Wear on the hard drive? Quick availability?
What do most of you out there actually do?
Perhaps a reply with just the word 'on' or 'off' would give us a flavour
of the general feeling.
  1.

    always on for mine
    off when not in use for my son's
  2.

    On when I use it. Off when I don't.

    Ginger or Mary Ann?
    Mary Ann.
    Coke or Pepsi?
    Pepsi.
    Hip Hop or Rock?
    Rock.
    Automatic or Manual?
    Manual.

    "Happy" <happy@trial.ca> wrote in message
    news:L32rd.187800$Np3.7656800@ursa-nb00s0.nbnet.nb.ca...
    > The more I read the posts, the more uncertain I am now about the on or off
    > question.

    <snip>
  3.

    On most of the day. Off at night and when I leave home.
    Gene K

    Kevin wrote:
    > On when I use it. Off when I don't.

    <snip>
  4.

    It just doesn't matter.
  5.

    How to separate the responses: Do they provide numbers? Do
    they cite information from manufacturer datasheets? Do they
    explain the science behind the reasoning? Do they apply the
    principles you were taught in junior high school science - a
    claim must have both theoretical reasoning (the principles)
    AND experimental evidence (the actual numbers from industry
    experiments and datasheets)?

    A power switch has a life expectancy of (typically) 100,000
    cycles. Clearly, power cycling a switch is far more
    destructive than leaving it on. Let's see: power cycling
    seven times every day would exhaust that rating in ... 39
    years.

    Another device with a particularly small 'power cycle' life
    expectancy is one particular IBM hard drive - 40,000 cycles.
    That is seven times every day for ... 15 years.
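
    (A minimal sketch of that arithmetic, in Python. The cycle
    ratings and the seven-cycles-per-day rate are the figures
    quoted above; everything else is just division.)

        # Years to exhaust a power-cycle rating at a steady daily rate.
        # The ratings and the 7-cycles/day rate are the figures quoted above.
        CYCLES_PER_DAY = 7
        DAYS_PER_YEAR = 365

        def years_until_wearout(rated_cycles: int) -> float:
            """Years until the cycle rating is used up."""
            return rated_cycles / (CYCLES_PER_DAY * DAYS_PER_YEAR)

        print(f"power switch, 100,000 cycles: {years_until_wearout(100_000):.1f} years")
        print(f"IBM hard drive, 40,000 cycles: {years_until_wearout(40_000):.1f} years")
        # -> roughly 39.1 and 15.7 years: the '39' and '15' cited above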

    The idea that power cycling shortens life expectancy is
    correct - until we apply engineering numbers and put those
    numbers into perspective. Then power-cycling worries belong
    in the myth category. Some devices, such as that power
    switch and that disk drive, may have a shorter life
    expectancy. But who cares? Once the numbers are applied,
    reality takes on a whole different perspective.

    Some components, such as the CPU, are power cycled most
    severely during normal operation. Did they forget to mention
    that? If power cycling were so destructive to a computer, it
    would be just as destructive to a TV. And if power cycling
    shortened a computer's life expectancy by a factor of ten,
    well, who cares, if the computer would still be working 150
    years from now?

    Those who say 'leave it on' never meet the criteria for a
    scientific response. A glaring missing detail: they post no
    numbers. That alone says the post has no credibility - no
    numbers suggests junk-science reasoning. When done, turn it
    off or put it to sleep. Clearly the best solution is to
    filter out those who post only their personal speculations,
    not tempered by the numbers.

    Too much 'general feeling' comes from those who don't have
    the numbers, and too many eyes glaze over when the numbers
    are provided. But the answer to your question is found in
    the numbers - the numbers that junk scientists fear.

    We speculate when we don't have numbers. So get the
    numbers, so that we have knowledge. Then start filtering out
    the posts that only speculate. Your final answer will become
    obvious.

    Happy wrote:
    > The more I read the posts, the more uncertain I am now about the on
    > or off question.

    <snip>
  6.

    In news:41AE169F.C88F9D29@hotmail.com,
    w_tom <w_tom1@hotmail.com> typed:

    > How to separate the responses.

    <snip>

    Oh my goodness! I can't believe there is still life in this
    thread, in this particular newsgroup, given that the topic
    has been beaten well past death all over the Internet for
    years.

    The "*ANSWER*" was given a long time ago, as I recall:

    These days? You're *far* more likely to replace a machine
    because it's obsolete than you are to replace any individual
    component of the *same* machine that failed due to any kind
    of "power" issue.

    (E-Latrines and unprotected lightning strikes excepted.)

    And please don't ask me to cite. :-)
  7.

    w_tom <w_tom1@hotmail.com> wrote:

    > How to separate the responses.

    <snip>

    The last definitive numbers that I saw on this were from the
    mid-1980s and were based on a study of computers at a
    university. These were IBM AT (80286 CPU) models. One group
    of the computers was installed in a computer lab, where they
    were turned on at the beginning of each one-hour class and
    turned off at the end of that class. The other group was
    installed in administration and faculty offices, where they
    were switched on at the beginning of each work day and off
    at the end of the day.

    The computer lab machines began to encounter high rates of
    hardware failures (hard drives, RAM chips, motherboards,
    etc.) after 18 months of use, while the admin and faculty
    office machines were pretty well failure-free after 3 years
    of use.

    Of course, hardware reliability has improved by at least one
    order of magnitude since the mid-1980s, but I believe the
    factors identified in that study are still relevant. These
    include:

    1. Hard drives contain electric motors, and like all
    electric motors they are under the greatest load, and
    therefore the most stress, when first powered up. The vast
    majority of electric motor failures of all kinds, including
    refrigerator compressors, washing machine pumps, etc., occur
    when the machine is powered up, not while it is actually
    running.

    2. Electronic components are composed of different layers of
    materials. When power is applied to these components they
    heat up, and with this heating there is expansion. However,
    the different materials have different rates of expansion,
    so when they expand there are stresses placed on the joints
    between these materials. And when the power is turned off,
    the materials contract and the stresses are relieved.
    Repeatedly stressing and unstressing an object at the same
    point gives rise to a condition known as "metal fatigue",
    and the stressed item is likely to break or crack at some
    point because of this. Such a breakage or crack within an
    electronic component would, of course, most likely result in
    the total failure of that component. (A rough worked example
    of this differential expansion follows below.)
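
    (A rough sketch of the differential-expansion arithmetic in
    point 2. The expansion coefficients are typical handbook
    values for silicon and copper; the joint length and the
    40-kelvin swing are assumed, illustrative figures, not
    numbers from this thread.)

        # Differential thermal expansion between two bonded materials.
        # Coefficients are typical handbook values; the 40 K swing and
        # 10 mm joint length are assumed figures for illustration only.
        ALPHA_SILICON = 2.6e-6  # linear expansion per kelvin (typical)
        ALPHA_COPPER = 17e-6    # linear expansion per kelvin (typical)

        def mismatch_microns(length_mm: float, delta_t_k: float) -> float:
            """Extra expansion of the copper side over a temperature swing."""
            delta_mm = (ALPHA_COPPER - ALPHA_SILICON) * delta_t_k * length_mm
            return delta_mm * 1000.0  # millimetres -> microns

        # A 10 mm joint warming by 40 K mismatches by about 5.8 microns,
        # a stress that repeats on every warm-up/cool-down cycle:
        print(f"{mismatch_microns(10.0, 40.0):.1f} microns")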

    But as "Bill" pointed out in his response the improvements in hardware
    reliability means that computers will be disposed of due to
    obsolescence long before these hardware effects reach any sort of
    critical level.


    Ron Martell Duncan B.C. Canada
    --
    Microsoft MVP
    On-Line Help Computer Service
    http://onlinehelp.bc.ca

    "The reason computer chips are so small is computers don't eat much."
  8.

    Bill said the following on 01/12/2004 20:13:
    > These days? You're *far* more likely to replace a machine because it's
    > obsolete than you are to replace any individual component ...

    <snip>
    Therefore it depends entirely on who pays the energy bill!

    Roy
  9.

    Constant operation causes heat-sensitive components to wear
    or oxidize while powered. This destructive wear from too
    many hours of operation makes hardware tend to fail on
    power-up. Others then blame the power-up rather than the
    hours of operation.

    Yes, power cycling is destructive. And then we apply the
    numbers: 15 and 39 years. These numbers are also higher
    today than the numbers for the first small disk drives in
    1980. The numbers demonstrate that power-cycling 'worries'
    are a classic urban myth. Too many observations not tempered
    by the technical details - fundamental principles and the
    numbers - create urban myths.

    In the meantime, from personal experience with something
    under 100 computers: almost all problems were with computers
    left powered 24/7. Like the university experiment, that
    tells us nothing useful unless information such as what
    failed, and why, is included. Autopsies are performed at the
    IC level to learn why failures happen. Summary observations
    are not sufficient, and they can create myths.

    The Challenger exploded. Does that prove that God does not
    want man in space? Without details and underlying theory,
    that too could be proposed as a valid conclusion. Those who
    suggest power cycling is destructive likewise cannot provide
    those details - and the numbers.

    Again, a most damning example: if power cycling is so
    destructive to a computer, then it is equally destructive to
    all expensive radios and TVs. Why power down those other
    appliances? Either leave all radios, TVs, and computers on,
    or power them all off when done. Consistency - one cannot
    have it both ways.

    Disk drives wear most, and therefore fail, due to hours of
    operation - a spec that almost every component manufacturer
    provides, because hours of operation is the most relevant
    number for failure. A disk drive with too many hours of
    operation will wear, and will therefore most often fail
    during power-up. Those without the underlying knowledge
    wildly speculate that the power-up did the damage when, in
    reality, the damage was due to too many hours of operation.
    Since they never learn the details, the naive just wildly
    assume power-up did the damage. This is how classic urban
    myths are invented.
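
    (To put hours of operation in perspective, a hedged sketch:
    the 40,000-hour power-on rating below is a hypothetical
    example figure, not a number quoted anywhere in this
    thread.)

        # How fast power-on hours accumulate under two usage patterns.
        # The 40,000-hour rating is a hypothetical example figure, not
        # a datasheet number quoted in this thread.
        HOURS_PER_YEAR_24_7 = 24 * 365     # 8,760 hours/year always on
        HOURS_PER_YEAR_8_HR_DAY = 8 * 365  # 2,920 hours/year, off nightly

        RATED_POWER_ON_HOURS = 40_000      # hypothetical drive rating

        print(f"24/7:      {RATED_POWER_ON_HOURS / HOURS_PER_YEAR_24_7:.1f} years to reach rating")
        print(f"8 hrs/day: {RATED_POWER_ON_HOURS / HOURS_PER_YEAR_8_HR_DAY:.1f} years to reach rating")
        # -> about 4.6 years always-on versus about 13.7 years when
        #    powered down after use: the hours-of-operation point above.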

    The numbers say something completely different. Damage from
    power cycling becomes totally irrelevant once the numbers
    put the problem in perspective. Power it down or put it to
    sleep to maximize the value from that computer. After too
    many hours of operation, a computer is most likely to fail
    on power-up. The power-up did not cause the failure; too
    many hours did the wear and the damage.

    If power cycling damaged semiconductors, we would power
    semiconductors off when not in use. Digital semiconductors
    power cycle constantly: early Pentiums even went from less
    than 1 amp to more than 10 amps in microseconds - far more
    stressful than an AC power-on. Even more nonsensical is the
    claim of massive expansion and contraction from thermal
    cycling. Please show me one IC that failed because the
    substrate cracked. Damage occurs during switching - during
    normal operation. One example is electromigration, and AC
    power cycling does not cause electromigration.

    Thermal cycling during manufacturing is many hundreds of
    degrees, cycled many times, and yet semiconductors are not
    damaged by it. Now we are told that tens of degrees cause
    damage that hundreds of degrees do not? Bull. Again, apply
    the numbers. It is more wild speculation that power cycling
    causes damage - eliminated as soon as we apply a new
    perspective: the numbers. If expansion and contraction
    caused transistor failure, it would occur when expansion and
    contraction are greatest - during manufacturing. That
    failure, from ten times more degrees, just does not happen.

    If thermal cycling were so destructive, it would do its
    damage during normal operation, when temperature changes
    occur fastest - during those many less-than-1-amp to
    more-than-10-amp demands for current. To avoid such damage,
    then, don't leave the computer on 24/7.

    Those who claim power-up causes failures don't provide the
    supporting numbers. No numbers means junk-science reasoning.
    Turn it off or put it to sleep when done - to maximize the
    computer's value. Too much of what is posted about 24/7
    advantages comes without numbers - also called wild
    speculation, or myth.

    The quick sound-bite conclusion is that power cycling does
    damage. Reality requires a longer post - one that provides
    the underlying principles and the numbers. Turn it off or
    put it to sleep when done: that is the answer, once we
    replace myth with posts grounded in science principles and
    experience.

    Ron Martell wrote:
    > The last definitive numbers that I saw on this were from the
    > mid-1980s and were based on a study of computers at a University.

    <snip>