Any HDD _NOT_ using RAM uCode?

Anonymous
May 28, 2005 3:33:01 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage

I'd like to find a hard drive (almost anything over 1G) that
stores most or all of its uCode data in flash ROM, as opposed to
in the minus / service tracks as done by most drives today.

I'm completely frustrated by the fact that a drive can be
physically fine, but all the data "lost" just from having the
drive run wild and write over this sensitive area with gibberish.

I know drive makers use the current technique to cut costs, as
flash ROM is more expensive than a few tracks of disk space. I
don't care if the drive costs a lot more, however, if it makes
the data easier to recover.

I know that I could spend $8,000 and get one of the special
controllers that permits you to ignore HDD error codes and
reflash the critical area of the hard drive, but that's a bit
spendy for my blood.

I know I could get data security by other means (RAID, mirroring,
backups), but I'd like to start off with making the underlying
drive itself more reliable.

Thanks in advance.


Anonymous
May 29, 2005 4:50:08 AM


"lasitter" <cl@ncdm.com> wrote in message news:1117305181.196455.242350@g14g2000cwa.googlegroups.com
> I'd like to find a hard drive (almost anything over 1G) that
> stores most or all of its uCode data in flash ROM,

> as opposed to in the minus / service tracks as done by most drives today.

You got that backwards.

>
> I'm completely frustrated by the fact that a drive can be
> physically fine, but all the data "lost" just from having the
> drive run wild and write over this sensitive area with gibberish.
>
> I know drive makers use,

may have used

> the current technique to cut costs as flash ROM is more expensive
> than a few tracks of disk space.

I found that the later IBM SCSI drives (e.g. DMVSs) have flash ROMs
of the same size or bigger than the firmware image. They also display
the same Vendor/Product/revision ID whether spun up or not, whereas
previously this might differ, signalling different firmware revs in
flash and on platter.
Not anymore. Flash ROM has become bigger and cheaper over the years.
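For anyone wanting to repeat that check: the Vendor/Product/revision ID comes back in the standard SCSI INQUIRY response, as space-padded ASCII in bytes 8-15 (vendor), 16-31 (product), and 32-35 (revision). A minimal Python sketch of the decoding, run against a made-up buffer (the model string below is invented, not a real DMVS ID):

```python
def inquiry_ids(data):
    """Pull the three ID fields out of a standard 36-byte INQUIRY reply."""
    vendor = data[8:16].decode("ascii").strip()
    product = data[16:32].decode("ascii").strip()
    revision = data[32:36].decode("ascii").strip()
    return vendor, product, revision

# Synthetic 36-byte INQUIRY response; real data would come from the drive.
resp = bytes(8) + b"IBM     " + b"DMVS18V         " + b"AB01"
print(inquiry_ids(resp))  # → ('IBM', 'DMVS18V', 'AB01')
```

Comparing the revision string reported spun-down against the one reported spun-up is exactly the test described above: if they always match, the full firmware is most likely in flash.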

> I don't care if the drive costs a lot more, however, if it makes the
> data easier to recover.

That remains to be seen.
There is a lot more in the reserved area that can still go wrong.

>
> I know that I could spend $8,000 and get one of the special
> controllers that permits you to ignore HDD error codes and
> reflash the critical area of the hard drive, but that's a bit
> spendy for my blood.
>
> I know I could get data security by other means (RAID, mirroring,
> backups), but I'd like to start off with making the underlying
> drive itself more reliable.
>
> Thanks in advance.
Anonymous
May 29, 2005 12:55:25 PM


Folkert Rienstra wrote:

> "lasitter" <cl@ncdm.com> wrote in message
> news:1117305181.196455.242350@g14g2000cwa.googlegroups.com

>> I'd like to find a hard drive (almost anything over 1G) that
>> stores most or all of its uCode data in flash ROM,

>> as opposed to in the minus / service tracks as done by most
>> drives today.

> You got that backwards.

I'm certain that modern drives make extensive use of service
tracks. There are so many older technologies out there that I
can't say about older drives. If you are in a position to
iterate which drives use what technique, or to point me to a URL
which does this, I would be very grateful for the information.

>> I'm completely frustrated by the fact that a drive can be
>> physically fine, but all the data "lost" just from having the
>> drive run wild and write over this sensitive area with gibberish.

>> I know drive makers use,

> may have used

Again, I know everything depends on the specific drive, but if
you have information on specific drives (below) then that would
be wonderful.

>> the current technique to cut costs as flash ROM is more expensive
>> than a few tracks of disk space.

> I found that the later IBM SCSI drives (e.g. DMVSs) have flash
> roms of same size or bigger than the firmware image. They also
> display the same Vendor/Product/revision ID whether spun-up or
> not, whereas previously this might differ, signalling different
> firmware rev in flash and on platter.

This is very useful. The BIOS would "see" these drives, as would
the IBM / Hitachi Drive Fitness Test. If the DFT can talk to
the drive thru the controller, then all sorts of things become
possible.

My problem is when the electronics package on the drive throws up
a brick wall between the drive hardware and the system
controller. Then it becomes very difficult to get anywhere
without some very expensive equipment.

> That remains to be seen. There is a lot more in the reserved area
> that can still go wrong.

True. But DFT or PowerMax (Maxtor) or Data LifeGuard at least
have a chance if they can talk to the drive.
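For reference, the model and firmware-revision strings those utilities read first live in the 512-byte ATA IDENTIFY DEVICE block (firmware revision in words 23-26, model number in words 27-46), with the two ASCII bytes of each 16-bit word swapped. A rough Python sketch of the decoding, exercised against a synthetic buffer rather than a real drive (the model and revision strings are invented):

```python
def ata_string(raw, first_word, last_word):
    """Decode an ATA string field (inclusive 16-bit word range) from a raw IDENTIFY block."""
    chunk = raw[first_word * 2:(last_word + 1) * 2]
    # ATA stores the earlier character in the high byte, so swap each byte pair.
    swapped = bytearray()
    for i in range(0, len(chunk), 2):
        swapped += bytes([chunk[i + 1], chunk[i]])
    return swapped.decode("ascii", errors="replace").strip()

def put_ata_string(raw, first_word, text, nwords):
    """Write text into raw using the same swapped-byte convention (for the demo)."""
    text = text.ljust(nwords * 2)
    for i in range(nwords):
        raw[(first_word + i) * 2] = ord(text[2 * i + 1])
        raw[(first_word + i) * 2 + 1] = ord(text[2 * i])

# Build a fake 512-byte IDENTIFY block instead of issuing the command to hardware.
identify = bytearray(512)
put_ata_string(identify, 23, "AC34", 4)                 # words 23-26: firmware revision
put_ata_string(identify, 27, "EXAMPLE DRIVE 20G", 20)   # words 27-46: model number

print(ata_string(identify, 23, 26))   # → AC34
print(ata_string(identify, 27, 46))   # → EXAMPLE DRIVE 20G
```

If the drive answers IDENTIFY at all, these fields come back; when it doesn't, you're in exactly the "brick wall" situation described below.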
Anonymous
May 29, 2005 5:31:36 PM


>> I'm certain that modern drives make extensive use of service tracks.

> Thats a separate issue to which drives keep essential info on
> those tracks and have a problem starting if access to the tracks
> isnt available tho.

I have asked before, and I ask again: If you know how specific
drives perform in relation to specific fault conditions, I would
like to hear it. Commentary without specifics does not help me
in terms of preparing for recoverability before loss of data in
the service tracks.

> Doesnt necessarily mean that you can do anything useful about the
> data on the drive tho.

I thought I laid out my target scenario before, but if not, here
it is again. Drive spins. No head crash has occurred. Media in
service / minus tracks is good, but data in that area is
corrupted.

Any information you have regarding drives that would be more
friendly towards the process of regenerating the service tracks
is relevant to my question.

> Nope. Not if it cant see the platters anymore and if say the
> drive isnt even spinning up anymore.

Take note of the scenario I've just laid out.

> Yes, older drives can often have the logic card swapped if the
> logic card develops a fault. There arent many current model
> drives that can do that, or even swap the logic card between two
> identical brand new drives successfully.

Partially because of how the bad track information may be stored,
or because the BIOS versions are different, or for other reasons.

> What matters much more is which drives can have the logic card
> swapped between drives. Failure of a logic card is much more
> likely and much more readily fixable with those drives than if
> you cant get access to the tracks on the platter anymore.

The last two instances of data loss were on Maxtor 20G drives
that had not crashed, and that would spin up just fine.

> Unfortunately if you insist on using drives which can have the
> logic card swapped, you're stuck with drives with a performance
> that is well down on current drives.

The person I'm assisting is tired of losing their creative
writing efforts due to hardware failure, viruses and the like.
She now wants a computer only running M$ Office 97, not even
connected to a network or the internet. For this application, a
slower / smaller drive will never be noticed.

> Makes more sense to take another approach and use a redundant
> RAID system to protect yourself against drive failure.

A RAID is not a silver bullet. The only guarantee is that
you'll be running more drives with more noise, more heat and more
power used, and you won't be spinning your drives up and down.

I like RAID fine, but it's not for everyone.
Anonymous
May 29, 2005 6:30:24 PM


Previously lasitter <cl@ncdm.com> wrote:
> I'd like to find a hard drive (almost anything over 1G) that
> stores most or all of its uCode data in flash ROM, as opposed to
> in the minus / service tracks as done by most drives today.

> I'm completely frustrated by the fact that a drive can be
> physically fine, but all the data "lost" just from having the
> drive run wild and write over this sensitive area with gibberish.

> I know drive makers use the current technique to cut costs, as
> flash ROM is more expensive than a few tracks of disk space. I
> don't care if the drive costs a lot more, however, if it makes
> the data easier to recover.

> I know that I could spend $8,000 and get one of the special
> controllers that permits you to ignore HDD error codes and
> reflash the critical area of the hard drive, but that's a bit
> spendy for my blood.

> I know I could get data security by other means (RAID, mirroring,
> backups), but I'd like to start off with making the underlying
> drive itself more reliable.

Apart from your approach being bogus, since the incident type
you are afraid of almost never happens and other serious
problems are much more likely, here is a suggestion:

Get a flash-drive. It has other problems, such as limited
overwrites, but it does not have the problem you are afraid of.

Arno
Anonymous
May 30, 2005 1:47:36 AM


lasitter wrote:

> Folkert Rienstra wrote:
>
>> "lasitter" <cl@ncdm.com> wrote in message
>> news:1117305181.196455.242350@g14g2000cwa.googlegroups.com
>
>>> I'd like to find a hard drive (almost anything over 1G) that
>>> stores most or all of its uCode data in flash ROM,
>
>>> as opposed to in the minus / service tracks as done by most
>>> drives today.
>
>> You got that backwards.
>
> I'm certain that modern drives make extensive use of service
> tracks. There are so many older technologies out there that I
> can't say about older drives. If you are in a position to
> iterate which drives use what technique, or to point me to a URL
> which does this, I would be very grateful for the information.
>
>>> I'm completely frustrated by the fact that a drive can be
>>> physically fine, but all the data "lost" just from having the
>>> drive run wild and write over this sensitive area with gibberish.
>
>>> I know drive makers use,
>
>> may have used
>
> Again, I know everything depends on the specific drive, but if
> you have information on specific drives (below) then that would
> be wonderful.
>
>>> the current technique to cut costs as flash ROM is more expensive
>>> than a few tracks of disk space.
>
>> I found that the later IBM SCSI drives (e.g. DMVSs) have flash
>> roms of same size or bigger than the firmware image. They also
>> display the same Vendor/Product/revision ID whether spun-up or
>> not, whereas previously this might differ, signalling different
>> firmware rev in flash and on platter.
>
> This is very useful. The BIOS would "see" these drives, as would
> the IBM / Hitachi Drive Fitness Test. If the DFT can talk to
> the drive thru the controller, then all sorts of things become
> possible.
>
> My problem is when the electronics package on the drive throws up
> a brick wall between the drive hardware and the system
> controller. Then it becomes very difficult to get anywhere
> without some very expensive equipment.

There is no "system controller" with ATA drives unless you're talking about
hardware RAID or the like. There is a host bus adapter and the controller
itself is located on the drive. At one time that host bus adapter was just
a paddleboard that carried the signals from the AT bus to the connector for
the drive cable, but with PCI supplanting the AT bus and demands for
improved performance it has gotten more complex, but it is still basically
just a signal converter. What goes over the cable to the drive is commands
to the drive, messages from the drive, and data, all as bits. The actual
details of generating the analog signal to the head and moving the head and
translating from logical to physical geometry and the like are handled by
the hardware on the drive.

The only device in your machine that can generate the signals needed to
write data on the platter, or interpret the signals generated by the
read-write head, is the data separator circuit on the drive itself. There
is thus no reasonable way the drive could be designed to allow low-level
access to a device external to the drive, unless you want to completely
abandon the onboard controller and go back to the separate controller
boards that generate analog signals passed via relatively long cables to
the heads on the drive, and that's not going to happen.

Now, that controller on the drive might be designed in such a way as to
facilitate efforts at data recovery from a trashed drive, but it would seem
to me that one would have to have considerable expertise with the specific
drive model before one was able to recover in that fashion, and I do not
believe that there are any codes defined in either the ATA or the SCSI
specification that would support more than rudimentary efforts in that
direction. It would not surprise me at all if there was a "factory" mode
on many drives that allows this sort of access using proprietary codes but
that would be handled by the manufacturer as a trade secret and good luck
getting the docs to use it.

>> That remains to be seen. There is a lot more in the reserved area
>> that can still go wrong.
>
> True. But DFT or PowerMax (Maxtor) or Data LifeGuard, at least
> have a chance if they can talk to the drive.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
May 30, 2005 3:34:09 AM


> The backwards part is that no old drives had full firmware in
> flash, because of that cost aspect involved. It is *now* that it
> is affordable to put the full firmware in ROM

Thanks. I now know what I was wrong about.

>> There are so many older technologies out there that I can't say
>> about older drives.

> Oh? Try me.

http://webpages.charter.net/dperr/diskguid.txt
http://www.ata-atapi.com/hiwfnf.htm#T10
http://members.iweb.net.au/~pstorr/pcbook/book4/hdinter...

> It's either in ROM or on the platters.

Yes.

>> If you are in a position to iterate which drives use what
>> technique, or to point me to a URL which does this, I would be
>> very greatful for the information.

> That info is usually not available.

Completely true. But I figured that someone in the field
(former Seagate engineer?) lurking here might step forward ...

> Sorry. Only once checked that for my own drive when someone made
> a claim like yours and I disagreed with that:

> http://groups.google.com/groups?as_umsgid=2nitmkF1ceo.....

Was unable to load this link. A TinyUrl perhaps?

> And I could only refute it by the fact that I could lookup the
> specs of the Flashrom used on that drive and the size of a
> firmware update plus the fact that I knew how to unpack it and
> know the real size of it.

>>> I found that the later IBM SCSI drives (e.g. DMVSs) have flash
>>> roms of same size or bigger than the firmware image. They also
>>> display the same Vendor/Product/revision ID whether spun-up or
>>> not, whereas previously this might differ, signalling different
>>> firmware rev in flash and on platter.

>> This is very useful. The BIOS would "see" these drives, as would
>> the IBM / Hitachi Drive Fitness Test. If the DFT can talk to
>> the drive thru the controller, then all sorts of things become
>> possible.

> But the drive may well still refuse to work when other things are
> wrong.

Yes. No one fix solves all problems.

>> Then it becomes very difficult to get anywhere without some
>> very expensive equipment.

>> Same goes for if the programmer decided that it is all that
>> there is going to happen. There will be nothing that you can do
>> about that.

> Obviously, to be able to load the firmware from the platters the
> drive has the ability to do at least the minimum basics of
> recognizing and accessing tracks.

> And it should be relatively easy for those apps to download
> the platter firmware to drive RAM and rescue the data in the
> event of corrupted (platter) firmware. But no one does that.

http://tinyurl.com/dt383

So that is apparently the extra step taken by the folks at ACE
Laboratory, makers of the PC-3000 controller. It's interesting to see
in this document how they go about getting the data to / from a
drive with problems.

In the case of the DMVS drives, which "report" even before
spinning up, I wonder if they could be re-flashed even though the
RAM uCode on the platter was corrupted? Or is there any info
even being loaded from the service tracks of DMVS drives other
than bad sector info?
Anonymous
May 30, 2005 3:43:39 AM


We see things differently, and you obviously are not interested in
answering the question(s) I have posed, so:

Could you possibly watch some other thread in this group?

--------------------------

We're rubbing your nose in the fact that you are obsessing about
the wrong detail.

We dont plan to keep track of which drives do what you demand,
because it isnt something that matters.

What matters is how likely it is that JUST that data is
corrupted.

But you keep asking the WRONG question.

No thanks, what matters is how likely that 'scenario' is in the
real world.

Bad sector, actually. And yes, that is a major downside with what
you are demanding, drives that keep that list in nvram.

Doesnt explain why the logic card cant be swapped with two
identical brand new drives.

Waffle.

Yes, there is some evidence that some maxtor model lines do have
that problem.

Thats a good reason to avoid those model lines tho, not for
demanding a list of drives that keep the data in nvram.

Plenty of much more viable ways of avoiding that happening than
getting obsessed about the detail of how the drive is
implemented.

ALL hard drives can die and the only thing that makes any sense
at all is to backup what you care about to a different medium
often.

And it costs peanuts to do that now with thumb drives and DVD etc
if the volume is too great for thumb drives.

Still makes a lot more sense to use a thumb drive or a DVD burner
for real backup, used frequently, than to be insisting on a hard
drive that keeps its data in nvram when failure of that logic
card will ensure that the data cant be got back from that drive
once thats happened except at a very high cost.

Neither is a hard drive that keeps its maintenance data in nvram.

If you dont like those disadvantages, you dont get them with a
thumb drive.

And insisting on a hard drive which keeps its maintenance data in
nvram is a useless way of ensuring that nothing will get lost.
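The "back up often to another medium" advice above is easy to automate. A rough sketch (the paths and names here are made up for illustration) of a timestamped zip backup that could just as well target a mounted thumb drive:

```python
# Sketch of frequent document backup to removable media; demo paths are throwaway.
import shutil
import tempfile
import time
from pathlib import Path

def backup_documents(src, dest_dir):
    """Zip src into dest_dir under a timestamped name; returns the archive path."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    base = Path(dest_dir) / f"docs-backup-{stamp}"
    return shutil.make_archive(str(base), "zip", src)

# Demo on temporary directories; a real run would point src at the documents
# folder and dest_dir at the thumb drive's mount point.
src = tempfile.mkdtemp()
dest = tempfile.mkdtemp()
(Path(src) / "novel.txt").write_text("chapter one")
archive = backup_documents(src, dest)
print(Path(archive).name)  # e.g. docs-backup-20050530-113626.zip
```

Run from a scheduler (or by hand before every writing session), this protects against exactly the failure mode argued about in this thread, regardless of where the drive keeps its microcode.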
Anonymous
May 30, 2005 4:04:03 AM


> The only device in your machine that can generate the signals
> needed to write data on the platter or interpret the signals
> generated by the read-write head is the data separator circuit on
> the drive itself, and there is thus no reasonable way that drive
> could be designed to allow low-level access to a device external
> to the drive unless you want to completely abandon the onboard
> controller and go back to the separate controller boards that
> generate analog signals that are passed via relatively long
> cables to the heads on the drive, and that's not going to happen.

The people that make this data recovery controller:

http://www.acelab.ru/products/pc-en/

still use the electronics package on the drive to do much of the
low level access that you've described above. But somehow their
controller returns a lot more than the "HDD Error" I see on
bootup from a bad 20G maxtor drive.

They apparently develop very specific information on drive
geometry, the location of the service tracks, etc ...

http://tinyurl.com/dt383

and can use this info to do a lot more with an otherwise
unresponsive drive than any regular user could.

> Now, that controller on the drive might be designed in such a way
> as to facilitate efforts at data recovery from a trashed drive,
> but it would seem to me that one would have to have considerable
> expertise with the specific drive model before one was able to
> recover in that fashion, and I do not believe that there are any
> codes defined in either the ATA or the SCSI specification that
> would support more than rudimentary efforts in that direction.

I would think that doing so might help the utilities listed here:

http://www.tacktech.com/display.cfm?ttid=287

Do a better job with recovery.
Anonymous
May 30, 2005 4:58:43 AM


"lasitter" <cl@ncdm.com> wrote in message news:1117382125.742977.70810@g43g2000cwa.googlegroups.com
> Folkert Rienstra wrote:
>
> > "lasitter" <cl@ncdm.com> wrote in message news:1117305181.196455.242350@g14g2000cwa.googlegroups.com
>
> > > I'd like to find a hard drive (almost anything over 1G) that
> > > stores most or all of its uCode data in flash ROM,
>
> > > as opposed to in the minus / service tracks as done by most
> > > drives today.
>
> > You got that backwards.
>
> I'm certain that modern drives make extensive use of service tracks.

Yup, like I say at the bottom of the post.
The backwards part is that no old drives had full firmware in flash,
because of that cost aspect involved. It is *now* that it is affordable
to put the full firmware in ROM.

> There are so many

Oh? Try me.

> older technologies out there that I can't say about older drives.

It's either in ROM or on the platters.

> If you are in a position to iterate which drives use what technique,
> or to point me to a URL which does this, I would be very grateful
> for the information.

That info is usually not available.

>
> > > I'm completely frustrated by the fact that a drive can be
> > > physically fine, but all the data "lost" just from having the
> > > drive run wild and write over this sensitive area with gibberish.
>
> > > I know drive makers use,
>
> > may have used
>
> Again, I know everything depends on the specific drive, but if
> you have information on specific drives (below) then that would
> be wonderful.

Sorry. Only once checked that for my own drive when someone made
a claim like yours and I disagreed with that:
http://groups.google.com/groups?as_umsgid=2nitmkF1ceoeU...

And I could only refute it by the fact that I could lookup the specs of
the Flashrom used on that drive and the size of a firmware update plus
the fact that I knew how to unpack it and know the real size of it.

>
> > > the current technique to cut costs as flash ROM is more expensive
> > > than a few tracks of disk space.
>
> > I found that the later IBM SCSI drives (e.g. DMVSs) have flash
> > roms of same size or bigger than the firmware image. They also
> > display the same Vendor/Product/revision ID whether spun-up or
> > not, whereas previously this might differ, signalling different
> > firmware rev in flash and on platter.
>
> This is very useful. The BIOS would "see" these drives, as would
> the IBM / Hitachi Drive Fitness Test. If the DFT can talk to
> the drive thru the controller, then all sorts of things become possible.

But the drive may well still refuse to work when other things are wrong.

>
> My problem is when the electronics package on the drive throws up
> a brick wall between the drive hardware and the system controller.

That may well be a programmers decision.

> Then it becomes very difficult to get anywhere without some very
> expensive equipment.

Same goes for if the programmer decided that it is all that there is
going to happen. There will be nothing that you can do about that.

>
> > That remains to be seen. There is a lot more in the reserved area
> > that can still go wrong.
>
> True. But DFT or PowerMax (Maxtor) or Data LifeGuard,

> at least have a chance if they can talk to the drive.

They always had.
Obviously, to be able to load the firmware from the platters the
drive has the ability to do at least the minimum basics of recognizing
and accessing tracks. And it should be relatively easy for those apps
to download the platter firmware to drive RAM and rescue the data
in the event of corrupted (platter) firmware. But no one does that.
Anonymous
May 30, 2005 8:18:32 AM


lasitter <cl@ncdm.com> wrote in message
news:1117382125.742977.70810@g43g2000cwa.googlegroups.com...
> Folkert Rienstra wrote:
>> lasitter <cl@ncdm.com> wrote

>>> I'd like to find a hard drive (almost anything over 1G)
>>> that stores most or all of its uCode data in flash ROM,
>>> as opposed to in the minus / service tracks as done
>>> by most drives today.

>> You got that backwards.

> I'm certain that modern drives make extensive use of service tracks.

Thats a separate issue to which drives keep essential info on those
tracks and have a problem starting if access to the tracks isnt available tho.

> There are so many older technologies out there that I can't
> say about older drives. If you are in a position to iterate
> which drives use what technique, or to point me to a URL
> which does this, I would be very grateful for the information.

>>> I'm completely frustrated by the fact that a drive can be
>>> physically fine, but all the data "lost" just from having the
>>> drive run wild and write over this sensitive area with gibberish.

>>> I know drive makers use,

>> may have used

> Again, I know everything depends on the specific
> drive, but if you have information on specific drives
> (below) then that would be wonderful.

>>> the current technique to cut costs as flash ROM is
>>> more expensive than a few tracks of disk space.

>> I found that the later IBM SCSI drives (e.g. DMVSs) have flash
>> roms of same size or bigger than the firmware image. They also
>> display the same Vendor/Product/revision ID whether spun-up or
>> not, whereas previously this might differ, signalling different
>> firmware rev in flash and on platter.

> This is very useful. The BIOS would "see" these
> drives, as would the IBM / Hitachi Drive Fitness Test.

Doesnt necessarily mean that you can do
anything useful about the data on the drive tho.

> If the DFT can talk to the drive thru the controller,
> then all sorts of things become possible.

Nope. Not if it cant see the platters anymore and
if say the drive isnt even spinning up anymore.

> My problem is when the electronics package on the drive
> throws up a brick wall between the drive hardware and
> the system controller. Then it becomes very difficult to
> get anywhere without some very expensive equipment.

Yes, older drives can often have the logic card swapped if
the logic card develops a fault. There arent many current
model drives that can do that, or even swap the logic card
between two identical brand new drives successfully.

>> That remains to be seen. There is a lot more
>> in the reserved area that can still go wrong.

> True. But DFT or PowerMax (Maxtor) or Data LifeGuard,
> at least have a chance if they can talk to the drive.

They cant do anything useful if the drive doesnt spin up tho.

I cant help feeling you are asking the wrong question.

What matters much more is which drives can have
the logic card swapped between drives. Failure of
a logic card is much more likely and much more
readily fixable with those drives than if you cant
get access to the tracks on the platter anymore.

Unfortunately if you insist on using drives which can have
the logic card swapped, you're stuck with drives with a
performance that is well down on current drives. Makes
more sense to take another approach and use a redundant
RAID system to protect yourself against drive failure.
Anonymous
May 30, 2005 11:23:07 AM


lasitter wrote:

>> The only device in your machine that can generate the signals
>> needed to write data on the platter or interpret the signals
>> generated by the read-write head is the data separator circuit on
>> the drive itself, and there is thus no reasonable way that drive
>> could be designed to allow low-level access to a device external
>> to the drive unless you want to completely abandon the onboard
>> controller and go back to the separate controller boards that
>> generate analog signals that are passed via relatively long
>> cables to the heads on the drive, and that's not going to happen.
>
> The people that make this data recovery controller:
>
> http://www.acelab.ru/products/pc-en/
>
> still use the electronics package on the drive to do much of the
> low level access that you've described above. But somehow their
> controller returns a lot more than the "HDD Error" I see on
> bootup from a bad 20G maxtor drive.
>
> They apparently develop very specific information on drive
> geometry, the location of the service tracks, etc ...
>
> http://tinyurl.com/dt383
>
> and can use this info to do a lot more with an otherwise
> unresponsive drive than any regular user could.

They appear to have reverse engineered the drive manufacturers' "factory"
modes--this can in principle be done by identifying the processor on the
onboard controller and then disassembling the ROM, but that's a lot more
work than I for one want to go through to save a drive that costs less than
a tank of gas. Note that there is a specific list of drives they support,
if yours is not on the list don't bet on their device being able to do
anything with it.

>> Now, that controller on the drive might be designed in such a way
>> as to facilitate efforts at data recovery from a trashed drive,
>> but it would seem to me that one would have to have considerable
>> expertise with the specific drive model before one was able to
>> recover in that fashion, and I do not believe that there are any
>> codes defined in either the ATA or the SCSI specification that
>> would support more than rudimentary efforts in that direction.
>
> I would think that doing so might help the utilities listed here:
>
> http://www.tacktech.com/display.cfm?ttid=287
>
> Do a better job with recovery.

Those are just the various drive manufacturers' end-user diagnostics, which
you can download by going directly to the manufacturer's web site. In
general they just read the diagnostic codes for the purpose of letting tech
support decide whether to issue an RMA.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
May 30, 2005 11:36:26 AM


lasitter <cl@ncdm.com> wrote in message
news:1117398696.478729.105700@g47g2000cwa.googlegroups.com...

>>> I'm certain that modern drives make extensive use of service tracks.

>> Thats a separate issue to which drives keep
>> essential info on those tracks and have a problem
>> starting if access to the tracks isnt available tho.

> I have asked before, and I ask again: If you know
> how specific drives perform in relation to specific fault
> conditions, I would like to hear it. Commentary without
> specifics does not help me in terms of preparing for
> recoverability before loss of data in the service tracks.

We're rubbing your nose in the fact that
you are obsessing about the wrong detail.

We dont plan to keep track of which drives do what
you demand, because it isnt something that matters.

>> Doesnt necessarily mean that you can do
>> anything useful about the data on the drive tho.

> I thought I laid out my target scenario before, but if not, here
> it is again. Drive spins. No head crash has occurred. Media in
> service / minus tracks is good, but data in that area is corrupted.

What matters is how likely it is that JUST that data is corrupted.

> Any information you have regarding drives that would
> be more friendly towards the process of regenerating
> the service tracks is relevant to my question.

But you keep asking the WRONG question.

>>> If the DFT can talk to the drive thru the controller,
>>> then all sorts of things become possible.

>> Nope. Not if it cant see the platters anymore and
>> if say the drive isnt even spinning up anymore.

> Take note of the scenario I've just laid out.

No thanks, what matters is how likely
that 'scenario' is in the real world.

>>> My problem is when the electronics package on the drive
>>> throws up a brick wall between the drive hardware and
>>> the system controller. Then it becomes very difficult to
>>> get anywhere without some very expensive equipment.

>> Yes, older drives can often have the logic card swapped
>> if the logic card develops a fault. There arent many current
>> model drives that can do that, or even swap the logic card
>> between two identical brand new drives successfully.

> Partially because of how the bad track information may be stored,

Bad sector, actually. And yes, that is a major downside with
what you are demanding, drives that keep that list in nvram.

> or because the BIOS versions are different,

Doesnt explain why the logic card cant be
swapped with two identical brand new drives.

> or for other reasons.

Waffle.

>> What matters much more is which drives can have the logic card
>> swapped between drives. Failure of a logic card is much more
>> likely and much more readily fixable with those drives than if
>> you cant get access to the tracks on the platter anymore.

> The last two instances of data loss were on Maxtor 20G
> drives that had not crashed, and that would spin up just fine.

Yes, there is some evidence that some
Maxtor model lines do have that problem.

Thats a good reason to avoid those model lines tho, not
for demanding a list of drives that keep the data in nvram.

>> Unfortunately if you insist on using drives which can have
>> the logic card swapped, you're stuck with drives with a
>> performance that is well down on current drives.

> The person I'm assisting is tired of losing their creative
> writing efforts due to hardware failure, viruses and the like.

Plenty of much more viable ways of avoiding that happening than
getting obsessed about the detail of how the drive is implemented.

ALL hard drives can die and the only thing that makes any sense
at all is to backup what you care about to a different medium often.

And it costs peanuts to do that now with thumb drives
and DVD etc if the volume is too great for thumb drives.

> She now wants a computer only running M$ Office 97,
> not even connected to a network or the internet. For this
> application, a slower / smaller drive will never be noticed.

Still makes a lot more sense to use a thumb drive or a DVD
burner for real backup, used frequently, than to be insisting
on a hard drive that keeps its data in nvram when failure of
that logic card will ensure that the data cant be got back
from that drive once thats happened except at a very high cost.

>> Makes more sense to take another approach and use a
>> redundant RAID system to protect yourself against drive failure.

> A RAID is not a silver bullet.

Neither is a hard drive that keeps its maintenance data in nvram.

> The only guarantee is that you'll be running more drives
> with more noise, more heat and more power used, and
> you won't be spinning your drives up and down.

If you dont like those disadvantages, you dont get them with a thumb drive.

> I like RAID fine, but it's not for everyone.

And insisting on a hard drive which keeps its maintenance data
in nvram is a useless way of ensuring that nothing will get lost.
May 30, 2005 12:55:06 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

> They appear to have reverse engineered the drive manufacturers'
> "factory" modes -- this can in principle be done by identifying
> the processor on the onboard controller and then disassembling
> the ROM, but that's a lot more work than I for one want to go
> through to save a drive that costs less than a tank of gas.

I agree with you, but it's almost never the drive that people
care about. Family pictures, important presentations and the
like are what spur them to part with large sums of money for a
recovery effort.

> Note that there is a specific list of drives they support, if
> yours is not on the list don't bet on their device being able to
> do anything with it.

Especially newer drives over 128G. They have a new PCI
controller version coming out which will do those (for another
$8,000?), and they will NOT be supplying those routines for the
ISA controller.

http://www.acelab.ru/products/pc-en/pc3000.list.html

Aside from the drives they list explicitly, their sales pitch
suggests that they would work beyond what's in the list ...

http://www.acelab.ru/products/pc-en/pc3000.html#Support

Question: Can PC3000 repair and revive HDDs that are not on the
list of drives provided?

Answer: "PC-3000AT" and "PC-DEFECTOSCOPE" consist of a
collection of universal utilities, which allow the
diagnose and recovery of virtually any IDE drive.
However, the specialized tools provide greater
functionality, thus their effectiveness of recovery is
much higher.

But getting back to your point, it would also appear that
specific adapters are needed to operate a number of drives in
"special technological interface" (factory) mode ...

Question: What is the purpose of the PC-Kalok, PC-Seagate,
PC-Conner, PC-Quantum, PC-Teac adapters?

Answer: The adapters are essential for the accurate diagnosis
and successful recovery of HDDs via a special
technological interface (TSO).

>> I would think that doing so might help the utilities listed here:
>> http://www.tacktech.com/display.cfm?ttid=287

> Those are just the various drive manufacturers' end-user
> diagnostics, which you can download by going directly to the
> manufacturer's web site. In general they just read the
> diagnostic codes for the purpose of letting tech support decide
> whether to issue an RMA.

One could always hope for more, but this is what I expected.

I think the net of all this is that it would take a great deal of
looking to find a drive today that would permit the reflashing of
the minus / service tracks when those tracks were not already in
readable condition.

This is unfortunate, because in some ways it creates the setting
for a perfect virus: Identify the type of drive in the system
and then run a program to flash the service tracks with bad
information, leaving the user helpless to restore the drive to a
usable state.

Happily, this is unlikely due to the variety of hard drives out
there, and for other technical limitations, I'm sure, but it
makes me uncomfortable knowing that this would be possible under
any circumstances.
May 30, 2005 8:13:17 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

lasitter wrote:

> [snip: PC3000 tools discussion, see previous post]
>
> I think the net of all this is that it would take a great deal of
> looking to find a drive today that would permit the reflashing of
> the minus / service tracks when those tracks were not already in
> readable condition.
>
> This is unfortunate, because in some ways it creates the setting
> for a perfect virus: Identify the type of drive in the system
> and then run a program to flash the service tracks with bad
> information, leaving the user helpless to restore the drive to a
> usable state.

You're insufficiently creative. I can think of several ways that such a
virus could destroy the contents of the drive beyond even the NSA's ability
to recover, or even destroy it physically.

> Happily, this is unlikely due to the variety of hard drives out
> there, and for other technical limitations, I'm sure, but it
> makes me uncomfortable knowing that this would be possible under
> any circumstances.

There is no substitute for maintaining good backups. That's the bottom
line. If you didn't back it up and you lose it as a result, then you
screwed the pooch.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
May 30, 2005 10:11:24 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

> There is no substitute for maintaining good backups. That's the bottom
> line. If you didn't back it up and you lose it as a result, then you
> screwed the pooch.

Routine backups are a great idea. They just routinely fail to happen.

Backups become more tedious as the amount of backed up material
increases. What are your favorite methods for backing up hundreds of
gigabytes?

I've just been backing up from one hard drive to the next with
something like xcopy. I used to back up to DAT tape, but with the
passage of time bit rot is claiming many of those ...
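The drive-to-drive copy described above can be sketched in Python; this is a minimal one-way mirror, roughly what `xcopy /d /e /y` does (paths here are hypothetical, and this is a sketch rather than a complete backup tool):

```python
import shutil
from pathlib import Path

def mirror(src: str, dst: str) -> int:
    """One-way mirror: copy files from src that are missing in dst
    or newer than the copy in dst (roughly xcopy /d behavior).
    Returns the number of files copied."""
    copied = 0
    src_root, dst_root = Path(src), Path(dst)
    for src_file in src_root.rglob("*"):
        if not src_file.is_file():
            continue
        dst_file = dst_root / src_file.relative_to(src_root)
        if (not dst_file.exists()
                or src_file.stat().st_mtime > dst_file.stat().st_mtime):
            dst_file.parent.mkdir(parents=True, exist_ok=True)
            # copy2 preserves timestamps, so unchanged files are
            # skipped on the next run
            shutil.copy2(src_file, dst_file)
            copied += 1
    return copied
```

Because `shutil.copy2` preserves modification times, re-running the mirror only copies files that changed since the last run, which keeps repeated backups cheap.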
May 31, 2005 1:25:27 AM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

> Backups become more tedious as the amount of backed up material
> increases. What are your favorite methods for backing up hundreds of
> gigabytes?

That depends on how frequently you want to back up, how valuable
your data is and how much of it changes between backups.
Also, whether there is a need to archive it from time to time.

Does your friend, whom you mentioned before, have that need?
"The person I'm assisting is tired of losing their creative
writing efforts due to hardware failure, viruses and the like.
She now wants a computer only running M$ Office 97, not even
connected to a network or the internet. For this application, a
slower / smaller drive will never be noticed."
May 31, 2005 3:37:30 AM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

>> Backups become more tedious as the amount of backed up material
>> increases. What are your favorite methods for backing up hundreds of
>> gigabytes?

> That depends on how frequently you want to back up, how valuable
> your data is and how much of it changes between backups.
> Also, whether there is a need to archive it from time to time.

All valid factors that help set bounds for the problem.

> Does your friend, whom you mentioned before, have that need?

Specifically, no. Almost anything I could put together with two
drives and xcopy would work fine.

Part of my problem is how many in the general computing public relate
to their computers.

They want their computer to be like any other appliance. To them
it's so when it croaks, toss it out. But a toaster doesn't hold all
the data valuables as does a hard drive. By the time they
realize the difference, the data is gone.

Ideally, you'd give them a short course on the benefits of full
and incremental backups, snapshots for archiving, off-site
storage, etc. You'd have them evaluate the importance of
different types of data, determine the frequency of backup
needed, the amount of data to be backed up, the different backup
media, speed, software ease of use, cost and so on.

After sorting out all the means and ends you'd come up with just
the right parameters and the solution would be obvious. But:

The toaster never demanded nearly as much from them in the past,
and now you're going to pile TRAINING on top of that? Getting
them to sit down and learn a new program?

Talk about eyes glazing over!

Try to explain this to many an average user before disaster
strikes and you're just a salesman. Spend $100 to avert a
disaster that MIGHT happen? They'd rather ignore the potential
for disaster and hope they don't have to spend $1,000 later to
recover from it.

The same people put off warnings from the plumbers until the
toilet backs up, and then something tangible happens that they
can immediately understand and cannot ignore: Water everywhere
and a stench that'll keep you up at night.

So this "human nature" factor is what got me started on this
thread. Of course I want a good backup on the end of everything,
but I also like to identify which kinds of drives are less likely
to fail, and while failing are more amenable to recovery
efforts.

That way if the backup process becomes too difficult or intrusive
for the user, and they disable it (been there!), you might still
have generally reliable drives that are inclined to cooperate
when called upon.

And thank you for trying to provide relevant answers without
flaming!
May 31, 2005 6:58:39 AM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

"lasitter" <cl@ncdm.com> wrote in message
news:1117468506.226311.16680@g14g2000cwa.googlegroups.com...
> [snip: PC3000 tools discussion, quoted in full earlier in the thread]
>
> This is unfortunate, because in some ways it creates the setting
> for a perfect virus: Identify the type of drive in the system
> and then run a program to flash the service tracks with bad
> information, leaving the user helpless to restore the drive to a
> usable state.
>
> Happily, this is unlikely due to the variety of hard drives out
> there, and for other technical limitations, I'm sure, but it
> makes me uncomfortable knowing that this would be possible under
> any circumstances.

None of which would matter if you have proper backups, as you have
to do for the situation where say the drive doesnt even spin up at all.
May 31, 2005 1:59:37 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

> > That depends on how frequently you want to backup, how valuable
> > is your data and how much of that data changes between backups.
> > Also, if there is a need to archive it from time to time or not.
>
> All valid factors that help set bounds for the problem.

What a deep thought ;-)

> Part of my problem is how many in the general computing public relate
> to their computers.
>
> They want their computer to be like any other appliance. To them
> it's so when it croaks, toss it out. But a toaster doesn't hold all
> the data valuables as does a hard drive. By the time they
> realize the difference, the data is gone.

Yeah, it takes a while. But most do not throw out their old wallets
before transferring its contents to a new one.

> Ideally, you'd give them a short course on the benefits of full
> and incremental backups, snapshots for archiving, off-site
> storage, etc. You'd have them evaluate the importance of
> different types of data, determine the frequency of backup
> needed, the amount of data to be backed up, the different backup
> media, speed, software ease of use, cost and so on.
>
> After sorting out all the means and ends you'd come up with just
> the right parameters and the solution would be obvious. But:
>
> The toaster never demanded nearly as much from them in the past,
> and now you're going to pile TRAINING on top of that? Getting
> them to sit down and learn a new program?

No. You make a solution for them. Explain what to watch for,
but backup happens by itself. You automate it for them.
Like a good antivirus software. Automatic updates, automatic
scan. Limit their ability to turn it off.
Check their machines on a regular basis.
That is a help/service you provide.

Obviously recommend or use good quality products.
This approach didn't change much and everyone understands
that.
May 31, 2005 4:32:29 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

Peter wrote:

>> [snip: earlier exchange about appliance users and backup training]
>
> No. You make a solution for them. Explain what to watch for,
> but backup happens by itself. You automate it for them.
> Like a good antivirus software. Automatic updates, automatic
> scan. Limit their ability to turn it off.
> Check their machines on a regular basis.
> That is a help/service you provide.

The problem with this is that good backup requires that something be
physically removed from the machine and placed in a different location,
otherwise whatever kills the machine may kill the backup.

> Obviously recommend or use good quality products.
> This approach didn't change much and everyone understands
> that.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
May 31, 2005 4:45:59 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

lasitter <cl@ncdm.com> wrote in message
news:1117501884.030869.138760@z14g2000cwz.googlegroups.com...

>> There is no substitute for maintaining good backups.
>> That's the bottom line. If you didn't back it up and you
>> lose it as a result, then you screwed the pooch.

> Routine backups are a great idea.
> They just routinely fail to happen.

Thats the thing you should be fixing rather than
obsessing about the internals of hard drive design.

Its completely trivial to completely automate backup now.

> Backups become more tedious as the
> amount of backed up material increases.

Wrong. Some approaches to backup are independent of
the amount backed up, most obviously with unattended
backup when the PC is idle to an external hard drive.

> What are your favorite methods for
> backing up hundreds of gigabytes?

To another hard drive, automated to happen when you
arent using the PC. With decent reporting of problems.

Using RAID if the data is very high turnover
and you cant afford to lose hours of work.

> I've just been backing up from one hard
> drive to the next with something like xcopy.

There are plenty of much more effective
tools than xcopy for backup.

> I used to back up to DAT tape, but with the
> passage of time bit rot is claiming many of those ...

Yeah, well past its use-by date.
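The unattended copy-to-another-drive approach described above, "with decent reporting of problems", might look something like this sketch, meant to be run on a schedule (cron, Task Scheduler); the paths and log format are hypothetical:

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_with_report(src: str, dst: str, log: str) -> list:
    """Copy every file under src to dst, collecting per-file errors
    instead of aborting on the first one, and append a one-line
    summary to a log file so problems are actually reported."""
    errors = []
    copied = 0
    src_root, dst_root = Path(src), Path(dst)
    for f in src_root.rglob("*"):
        if not f.is_file():
            continue
        target = dst_root / f.relative_to(src_root)
        try:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)
            copied += 1
        except OSError as e:
            errors.append(f"{f}: {e}")
    with open(log, "a") as fh:
        fh.write(f"{datetime.now().isoformat()} copied={copied} "
                 f"errors={len(errors)}\n")
    return errors
```

The point of collecting errors rather than stopping is that an unattended job should finish the rest of the backup even when one file is locked or unreadable, and the log line gives the user something to check occasionally.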
May 31, 2005 6:08:12 PM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

> The problem with this is that good backup requires that something be
> physically removed from the machine and placed in a different location,
> otherwise whatever kills the machine may kill the backup.

While "removable" backup has its indisputable advantages,
for the sole purpose of increasing the reliability of a system,
duplicating data to a second device might be sufficient
in most cases.

It is most probable that "whatever kills the machine" causes
a single disk failure, while a second one is still perfectly OK.

I have never experienced a simultaneous disk crash of
two (or more) disks, but I have heard about power surges or
fires that destroyed multiple disks at the same time.

I think single disk failures are much more common than
disasters in which all connected disks are gone at the same
time.
June 1, 2005 9:11:12 AM

Archived from groups: comp.sys.ibm.pc.hardware.storage (More info?)

Peter <peterfoxghost@yahoo.ca> wrote in message
news:cK1ne.8638$yG4.578847@news20.bellglobal.com...

>> The problem with this is that good backup requires that something
>> be physically removed from the machine and placed in a different
>> location, otherwise whatever kills the machine may kill the backup.

It shouldnt be that hard to deal with that with DVDs.

> While "removable" backup has its indisputable advantages,
> for the sole purpose of increasing the reliability of a system,
> duplicating data to a second device might be sufficient
> in most cases.

> It is most probable that "whatever kills the machine" causes
> a single disk failure, while a second one is still perfectly OK.

> I have never experienced a simultaneous disk crash of
> two (or more) disks, but I have heard about power surges or
> fires that destroyed multiple disks at the same time.

Power surges are easy enough to avoid for more money.

Yes, fire and flood or just theft of the physical PC is a real
problem that only removable media can protect against, and
even then you really need to remove it from the building etc too.

> I think single disk failures are much more common than
> disasters in which all connected disks are gone at the same
> time.

Course they are, but house fires and theft still happen.