Have the bugs been worked out of the T5 yet?

Anonymous
December 14, 2004 1:22:54 AM

Archived from groups: comp.sys.palmtops.pilot

I would like to get the T5, but have been put off by the various bugs
and problems. Have those been worked out yet?

bblackmoor
2004-12-12

Anonymous
December 14, 2004 6:16:34 PM

On Mon, 13 Dec 2004 22:22:54 -0500, Brandon Blackmoor had this to say...


> I would like to get the T5, but have been put off by the various bugs
> and problems. Have those been worked out yet?
>
> bblackmoor
> 2004-12-12
>

What bugs?

--
Hope this helps.
Jim Anderson
( 8(|) To email me just pull my_finger
Anonymous
December 15, 2004 1:13:22 AM

"Jim Anderson" <fro2750@frontiernet.my_finger.net> wrote in message
news:MPG.1c28bfa077a7524a98994b@news.frontiernet.net...
> On Mon, 13 Dec 2004 22:22:54 -0500, Brandon Blackmoor had this to say...
>
>
>> I would like to get the T5, but have been put off by the various bugs
>> and problems. Have those been worked out yet?
>>
>> bblackmoor
>> 2004-12-12
>>
>
> What bugs?

Termites.

>
> --
> Hope this helps.
> Jim Anderson
> ( 8(|) To email me just pull my_finger
Anonymous
December 15, 2004 8:55:34 PM

Necron 99 wrote:
>>> I would like to get the T5, but have been put off by the
>>> various bugs and problems. Have those been worked out yet?
>>
>> What bugs?
>
> Termites.

(sigh)

Okay. I'll take that as a "no".

bblackmoor
2004-12-15
Anonymous
December 16, 2004 3:42:09 AM

On 2004-12-15, Brandon Blackmoor <bblackmoor@blackgate.net> wrote:

> (sigh)

Hehe, it's annoying when that happens ;-)

What bugs do you mean, though? Perhaps if you name the ones you know,
any users of the T5 present might be prompted to respond.

--
For every expert, there is an equal but opposite expert
Anonymous
December 16, 2004 8:44:46 AM

Ian Rawlings wrote:
> What bugs do you mean, though? Perhaps if you name the ones you know,
> any users of the T5 present might be prompted to respond.

These are the ones I know of:

1. PDB records take 512 bytes minimum, even if they only store 1 byte
of information, so your data might be 10 times as big as you
expect from how it was stored on other devices.

2. Applications that improperly used DmQueryRecord() where they
should've used DmGetRecord() will possibly cause databases to
get corrupted (whereas previously they only screwed up
hotsyncing), since the T5 uses this info to determine whether
to write data back to flash.

3. DB Cache is limited to 10 MB, so if you have a PRC bigger than
10 MB, it cannot ever be opened. And other similar limitations
exist when you hit the 10 MB barrier in various ways.

4. If two databases have names differing only in uppercase vs.
lowercase, this is supposed to be legal, but one overwrites
the other on the T5 because the T5 uses the FAT filesystem
(which is case insensitive) to store databases when it writes
them to flash.

5. I haven't verified this one fully yet, but it seems that if
you use sample code from PalmSource for handling screen
resizing, then it can cause the T5 to go into an infinite loop
in some cases when an alert (little warning or error window)
pops up.

There may be others, but those are the ones I'm aware of.

- Logan
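
A quick illustration of the API distinction behind item 2 above: read-only
access goes through DmQueryRecord(), while anything that modifies a record
must take it with DmGetRecord() and release it with the dirty flag set so
the OS knows to write it back. The sketch below uses the standard Palm OS
Data Manager calls, but the helper functions and error handling are
illustrative only.

    #include <PalmOS.h>

    /* Write path: DmGetRecord() marks the record busy; DmWrite()
     * performs the store into the storage heap; releasing with
     * dirty = true tells the OS the record changed and (on a T5)
     * must be written back to flash. */
    static Err UpdateRecord(DmOpenRef db, UInt16 index,
                            const void *src, UInt32 len)
    {
        MemHandle h = DmGetRecord(db, index);
        void *p;
        Err err;

        if (h == NULL)
            return DmGetLastErr();

        p = MemHandleLock(h);
        err = DmWrite(p, 0, src, len);
        MemHandleUnlock(h);
        DmReleaseRecord(db, index, true);
        return err;
    }

    /* Read path: DmQueryRecord() does no busy/dirty bookkeeping.
     * Writing through a handle obtained this way is exactly the
     * bug described in item 2. */
    static MemHandle ReadRecord(DmOpenRef db, UInt16 index)
    {
        return DmQueryRecord(db, index);
    }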
Anonymous
December 17, 2004 11:51:15 PM

Ian Rawlings wrote:
>
> What bugs do you mean, though? Perhaps if
> you name the ones you know, any users of
> the T5 present might be prompted to respond.

I don't own a T5, so I can only tell you the ones I recall, but the two
that come to mind are the slow memory (really, really slow memory, like
taking ten or fifteen seconds to start an app slow) and the calendar
crashing when you look at a particular view.

bblackmoor
2004-12-17
Anonymous
December 18, 2004 5:13:54 AM

<< Ian Rawlings wrote:
>
> What bugs do you mean, though? Perhaps if
> you name the ones you know, any users of
> the T5 present might be prompted to respond.

I don't own a T5, so I can only tell you the ones I recall, but the two
that come to mind are the slow memory (really, really slow memory, like
taking ten or fifteen seconds to start an app slow) and the calendar
crashing when you look at a particular view.
>>

And the fact that a file occupies a MINIMUM of 512 KB, regardless of the file's
actual size.

Dennis B. Swaney
remove .zz to reply
Anonymous
December 18, 2004 5:13:55 AM

ROMAD wrote:
>
> And the fact that a file occupies a MINIMUM
> of 512 KB, regardless of the file's actual size.

I hadn't heard that one. That seems pretty serious on a device with
memory measured in MB.

bblackmoor
2004-12-17
December 18, 2004 5:59:42 AM

Brandon Blackmoor wrote:

> ROMAD wrote:
>
>>
>> And the fact that a file occupies a MINIMUM
>> of 512 KB, regardless of the file's actual size.
>
> I hadn't heard that one. That seems pretty serious on a device with
> memory measured in MB.
>
> bblackmoor
> 2004-12-17

Look back up in this thread for Logan Shaw's posting. His first item is
a 512-byte (half-KiloByte) minimum "cluster" size (to use FAT fs terms)
for PDB records, which makes much more sense.

--
ROC
Anonymous
December 18, 2004 11:51:41 AM

On 2004-12-18, ROC <NoSpam@for.me> wrote:

> Look back up in this thread for Logan Shaw's posting. His first
> item is a 512-byte (half-KiloByte) minimum "cluster" size (to use
> FAT fs terms) for PDB records, which makes much more sense.

Now I'm confused. If a file takes a minimum of 512 bytes then that's
totally standard and isn't to be worried about; however, if a PDB
RECORD takes a minimum of 512 bytes then that's more of a problem
because a single file could have thousands of PDB records in it (I
think?) each only being a few bytes long. And of course if an entire
file takes up a minimum of 512 kilobytes then that's not too good
either.

I think there's a lot of confusion on this one.

--
For every expert, there is an equal but opposite expert
Anonymous
December 18, 2004 1:45:06 PM

Ian Rawlings wrote:
> On 2004-12-18, ROC <NoSpam@for.me> wrote:
>>Look back up in this thread for Logan Shaw's posting. His first
>>item is a 512-byte (half-KiloByte) minimum "cluster" size (to use
>>FAT fs terms) for PDB records, which makes much more sense.

> Now I'm confused. If a file takes a minimum of 512 bytes then that's
> totally standard and isn't to be worried about; however, if a PDB
> RECORD takes a minimum of 512 bytes then that's more of a problem
> because a single file could have thousands of PDB records in it (I
> think?) each only being a few bytes long.

I can see where that would be confusing.

First of all, you're right that you can have thousands of records
in a single PDB. For instance, every appointment in the datebook
is a separate record, all within one PDB.

The story I have heard on the T5 (and Treo 650) from Palm is this:
each *record* uses multiples of 512 bytes of flash for performance
reasons. Based on what I've inferred from their comments, this
doesn't really have anything much to do with cluster sizes directly.
Instead, it has to do with the type of flash they're using. It's
called NAND flash, and unlike some other memory, it doesn't allow
you to write only a few bytes at a time; instead, you must write a
whole 512-byte block even if all you want to change is a single byte.

So, they responded to this limitation by ensuring that no two PDB
records ever share one of these 512-byte blocks. If they did share,
then in order to modify a record (without unintentionally modifying
other records adjacent to it), you'd need to read the 512-byte block
into RAM, modify only part of it, and then write the 512-byte block
back out to flash. But if two PDB records never share a 512-byte
block, then when writing out a modified PDB record, you can simply
look at your PDB record's contents and create a new 512-byte block
and blast it out there without worrying about preserving anything
that was in that block before. So, very roughly speaking, the
method Palm chose is twice as fast as the obvious alternative.

Things are even more complicated and confusing when you realize
one other property of PDBs: you can change the size of a record
(or even delete a record) in the "middle" of the PDB. If you
have records 0, 1, 2, 3, and 4, you can change the size of
record 2 from 123 bytes to 6543 bytes without affecting the
others. This is not how regular files on regular computers work.
So, in order to make that happen in flash, it would probably
make things a bit easier if you restrict everything to 512-byte
boundaries. So that may be another part of the motivation.

If all that was too complicated, then here are the practical
implications:

(1) Palm's decision wasn't totally lame-brained; there is a
good technical reason to do it how they did.
(2) If you have a PDB with 10000 records in it, and they
each contain 10 bytes of data, the minimum theoretical
size it could be is 100,000 bytes. On regular Palms,
the per-record overhead is I think 14 bytes, so it would
be 240,000 bytes. On a Tungsten|T5 or Treo 650, it would
be at least 5,120,000 bytes. In other words, 20 times
as big.
(3) The 512-byte-per-record thing only has a negative effect
for PDBs that have a large number of small records.
If you had a PDB with 100 records of 10 KB apiece,
the T5 and Treo 650 would perform no worse than any other
device.

Hope that helps.

- Logan
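
Logan's numbers in item (2) are easy to reproduce. This small C program is
a back-of-envelope check only; the 14-byte per-record overhead and the
512-byte block size are the figures from the post above, not SDK constants.

    #include <stdio.h>

    int main(void)
    {
        const long records  = 10000;
        const long payload  = 10;    /* bytes of data per record     */
        const long overhead = 14;    /* classic per-record overhead  */
        const long block    = 512;   /* NAND write block on the T5   */

        long ideal   = records * payload;
        long classic = records * (payload + overhead);
        /* On the T5, each record is rounded up to whole 512-byte blocks. */
        long t5 = records * (((payload + block - 1) / block) * block);

        printf("theoretical minimum: %ld bytes\n", ideal);   /*  100000 */
        printf("classic Palm (RAM):  %ld bytes\n", classic); /*  240000 */
        printf("T5/Treo 650 (NAND):  %ld bytes\n", t5);      /* 5120000 */
        return 0;
    }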
Anonymous
December 19, 2004 4:01:46 AM

Ian Rawlings wrote:
> increase during the product lifetime. NAND requires forward error
> correction (not sure on overhead, article states 1-4 bits of
> correction data but not whether this relates to each page which is
> unlikely or each byte which would be nasty)

1-4 bits per byte isn't really that bad at all. That's about the
normal level of redundancy you'd expect if you will be needing to
both detect and correct errors. It's just built into the process
of manufacturing it, so that when you want to make a chip with
1 megabit of useful storage, you shoot for 1.5 megabits of actual
storage and call the end product "1 megabit with error correction".

> It sounds like they aren't loading pdbs into RAM before accessing them
> for speed purposes, as this would do away with the 512-byte minimum
> block limitation on records,

That is correct. According to the PalmOne documentation, they
only load the individual records as they are used. This makes
sense because Palm OS has two calls that let the application
tell the system that it wants to access a PDB record: DmGetRecord()
and DmQueryRecord(). One tells the system you want read-only access
and the other tells it you want read-write access to the record.
Then the system writes back the data when you close the database,
when the RAM is running low and it needs to free some up by shuffling
some stuff out to flash, or when the power is turned off.

> with so much available space I don't
> think this is too bad an issue but it does sound inefficient if a
> single telephone number in a contacts database is stored as an
> individual record (taking 512 bytes) but not so bad if an entire
> contact including all addresses, telephone numbers etc is stored as a
> single record.

In the regular Palm OS address book, I believe each contact is
stored along with all its information in a single record. So
it's really not that bad for contacts. But for items on the
to-do list, it's really not that great since they might actually
be only 10 bytes. For example, I do my grocery list as individual
to-do items under a category called "Shopping", and just putting
"onions" or "soap" in there doesn't take a whole lot of space.
So a fair amount will be wasted with the 512-byte minimum.

> This is certainly the worst case scenario, but having tiny records is
> inefficient in any database due to the overheads associated with
> tracking record location and allocation so such implementations would
> be bad on any platform, just even worse on a NAND-based platform.

Yes, it is kinda bad in general, but on traditional Palms, the overhead
didn't really need to be that bad and wasn't that bad. On a traditional
Palm, everything is stored in battery-backed memory, so if you need
to store (say) 22 bytes, you can do so without much waste. That's sort
of part of the charm of the Palm device -- that there was a relatively
easy, lightweight way to store small amounts of data
without having to create a file for everything or define a special
file format and mess with reading the file and locating records
within it.

By the way, I can't say that the 512-byte "bug" (really a design
decision) will never be fixed. There are ways to address the
problems that led to this decision. One way is, every time you
read a 512-byte block, even if you only use 16 of the bytes,
keep the entire 512-byte block around in RAM *somewhere*. If
you need to change the bytes, modify the 512-byte block in that
place in RAM and then write it out again. This avoids
the extra write that they were trying to avoid by doing everything
in 512-byte increments. And yes, it wastes RAM, but you can
limit the amount of RAM it wastes pretty easily: in the worst
case, you simply toss out these blocks that you are keeping
around and re-read from flash. That's exactly what you're
trying to avoid, but if you are smart about how you manage just
which blocks you keep and which you discard, it should still be
able to give you really good performance without wasting space
on the flash.

- Logan
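
The fix Logan outlines in that last paragraph amounts to a write-back block
cache. Here is a minimal C sketch of the idea under those assumptions; the
flash_read()/flash_write() stubs, the naive eviction policy, and all names
are invented for the example.

    #include <stdio.h>
    #include <string.h>

    #define BLKSZ   512
    #define NCACHED 4

    struct cached_block {
        int blkno;                 /* flash block number, -1 if empty */
        int dirty;                 /* modified since it was read?     */
        unsigned char data[BLKSZ];
    };

    static struct cached_block cache[NCACHED];

    static void flash_read(int blkno, unsigned char *buf)
    { memset(buf, 0, BLKSZ); (void)blkno; }           /* stub */
    static void flash_write(int blkno, const unsigned char *buf)
    { (void)blkno; (void)buf; }                       /* stub */

    /* Return the cached copy of a block, loading it if necessary.
     * An evicted block is flushed first only if it is dirty, so a
     * clean block costs nothing to discard and re-read later. */
    static struct cached_block *get_block(int blkno)
    {
        int free_slot = -1;
        for (int i = 0; i < NCACHED; i++) {
            if (cache[i].blkno == blkno)
                return &cache[i];
            if (cache[i].blkno < 0 && free_slot < 0)
                free_slot = i;
        }
        struct cached_block *victim =
            &cache[free_slot >= 0 ? free_slot : 0];   /* naive choice */
        if (victim->blkno >= 0 && victim->dirty)
            flash_write(victim->blkno, victim->data);
        flash_read(blkno, victim->data);
        victim->blkno = blkno;
        victim->dirty = 0;
        return victim;
    }

    int main(void)
    {
        for (int i = 0; i < NCACHED; i++)
            cache[i].blkno = -1;

        /* Change one byte with no immediate flash write; the block
         * is flushed only when it is eventually evicted. */
        struct cached_block *b = get_block(7);
        b->data[3] = 0xAB;
        b->dirty = 1;
        printf("block %d cached, dirty = %d\n", b->blkno, b->dirty);
        return 0;
    }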
Anonymous
December 19, 2004 11:30:16 AM

On 2004-12-19, Logan Shaw <lshaw-usenet@austin.rr.com> wrote:

> 1-4 bits per byte isn't really that bad at all. That's about the
> normal level of redundancy you'd expect if you will be needing to
> both detect and correct errors.

Yes, I'm not sure if they are actually correcting or just detecting,
and am also not sure if the error correction/detection is built into
the NAND chips or must be implemented by the developer. If they are
just detecting (so they can simply perform a re-read until it gets it
right) then you could get away with simple parity bits per byte
depending on the level of bit-flipping errors and the likelihood of
two bits being flipped in the same byte, or perhaps four bits for
every three bytes.

Of course the bit-flipping errors might be so infrequent that they
*can* get away with 1-4 bits per 512-byte block. Not enough data for
us to guess on this one ;-)

> By the way, I can't say that the 512-byte "bug" (really a design
> decision) will never be fixed. There are ways to address the
> problems that led to this decision. One way is, every time you read
> a 512-byte block, even if you only use 16 of the bytes, keep the
> entire 512-byte block around in RAM *somewhere*. If you need to
> change the bytes, modify the 512-byte block in that place in RAM
> and then write it out again. This avoids the extra write that
> they were trying to avoid by doing everything in 512-byte
> increments. And yes, it wastes RAM, but you can limit the amount of
> RAM it wastes pretty easily: in the worst case, you simply toss out
> these blocks that you are keeping around and re-read from flash.
> That's exactly what you're trying to avoid, but if you are smart
> about how you manage just which blocks you keep and which you
> discard, it should still be able to give you really good performance
> without wasting space on the flash.

I'm not sure that what you were trying to say above came across very well,
but I suspect you mean something like the way that modern operating
systems do disc cacheing on writes, perhaps flushing the write cache
when the device power is turned off or once every 10 seconds or so.
This brings in a host of other problems though if you start splitting
multiple records over 512-byte boundaries as enlarging a record can
have a snowball effect so perhaps they're best off leaving it as it
is, especially as they're apparently moving to a Linux kernel-based
system. I'm not sure how much of the kernel they will use but there
is already work being done on NAND-flash filing systems by the Linux
developers (JFFS, I think) so perhaps Palm just slapped this together
as a temporary solution ;-)

--
For every expert, there is an equal but opposite expert
December 19, 2004 3:33:09 PM

On Sun, 19 Dec 2004 08:30:16 +0000, Ian Rawlings wrote:

> > 1-4 bits per byte isn't really that bad at all. That's about the
> > normal level of redundancy you'd expect if you will be needing to
> > both detect and correct errors.
>
> Yes, I'm not sure if they are actually correcting or just detecting,
> and am also not sure if the error correction/detection is built into
> the NAND chips or must be implemented by the developer.

One bit per byte would be enough to detect errors. As I recall
back in the days of the 486 and Pentium computers, non-parity SIMMs
were 32-bit wide memory, and parity SIMMs had 36 bits, ie, 1 extra
bit per byte. I don't recall how wide the ECC SIMMs were, but I'm
pretty sure that they didn't need anything like 4 bits/byte. I think
2 bits/byte would be sufficient, which is still a lot of memory (an
extra 25%). By 'developer' I assume that you mean the designer of
the hardware? It certainly wouldn't be implemented by application
developers, which is what usually comes to mind when someone says
'developer'.
Anonymous
December 19, 2004 4:25:33 PM

On 2004-12-19, BillB <rainbose@earthlink.newt> wrote:

> One bit per byte would be enough to detect errors.

It depends on the error rate; it's possible for two bitflips in a byte
to preserve the same parity, so the parity bit would still match the
byte despite two bits having been flipped.

At any rate we don't know enough about it to really speculate without
the right information (error rates, acceptable losses). The error
rate with these things is much higher than with the ECC SIMMs you talk
about later in your post.

> By 'developer' I assume that you mean the designer of the hardware?
> It certainly wouldn't be implemented by application developers,

It would either be implemented by the hardware developer or the
*operating system* developer. Error correction/detection is not
always needed, e.g. for storing audio or video data, so the
developer would not necessarily want to eat up valuable space with
needless error correction. The hardware manufacturer can leave it up
to the purchaser to decide how to best do it.

As an example, an audio CD can store more audio data than a data CD
because it carries less error correction, as the human ear doesn't
notice the occasional errors; a data CD loses a bit less than 100 megs
of usable space through the forward error correction used to compensate
for the bit error rates of the CD media. The loss of space is only
partially down to the formatting of the filesystem, contrary to what
some people believe.

So if you think that error-prone storage is a strange concept, you've
been using it for years ;-)

--
For every expert, there is an equal but opposite expert
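
Ian's point about double flips is easy to demonstrate. This illustrative C
program computes even parity over a byte and shows that a single flip is
caught while a pair of flips cancels out.

    #include <stdio.h>

    static int even_parity(unsigned char b)
    {
        int ones = 0;
        for (int i = 0; i < 8; i++)
            ones += (b >> i) & 1;
        return ones & 1;
    }

    int main(void)
    {
        unsigned char original = 0x5A;             /* parity 0 */
        unsigned char one_flip = original ^ 0x01;  /* parity 1 */
        unsigned char two_flip = original ^ 0x03;  /* parity 0 */

        printf("original: parity %d\n", even_parity(original));
        printf("1 flip:   parity %d (detected)\n", even_parity(one_flip));
        printf("2 flips:  parity %d (silently wrong)\n",
               even_parity(two_flip));
        return 0;
    }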
Anonymous
December 19, 2004 7:21:25 PM

Ian Rawlings wrote:

> On 2004-12-19, BillB <rainbose@earthlink.newt> wrote:
>
>> One bit per byte would be enough to detect errors.
>
> It depends on the error rate; it's possible for two bitflips in a byte
> to preserve the same parity, so the parity bit would still match the
> byte despite two bits having been flipped.

A parity bit will detect single-bit errors, but it can't correct them, so in
something as error-prone as what's being described, you'd want more than that.

> At any rate we don't know enough about it to really speculate without
> the right information (error rates, acceptable losses). The error
> rate with these things is much higher than with the ECC SIMMs you talk
> about later in your post.

One of the previous posters mentioned having 1.5x as much storage for error
detection and recovery. A Hamming code takes 4 check bits per byte, and can
detect and recover from 1-bit errors in a byte; an extended version with one
more bit can also detect (but not recover from) 2-bit errors in a byte.

4 extra bits per byte would be exactly 1.5x the memory, so that might be
what's being used.

--
ZZzz |\ _,,,---,,_ Travis S. Casey <efindel@earthlink.net>
/,`.-'`' -. ;-;;,_ No one agrees with me. Not even me.
|,4- ) )-,_..;\ ( `'-'
'---''(_/--' `-'\_)
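
For the curious, here is an illustrative Hamming(12,8) sketch in C: 8 data
bits plus 4 check bits at the power-of-two positions, where the syndrome of
a corrupted word is the position of the flipped bit.

    #include <stdio.h>

    /* Scatter 8 data bits into positions 3,5,6,7,9,10,11,12 and
     * compute check bits at positions 1,2,4,8. */
    static unsigned encode(unsigned char data)
    {
        unsigned word = 0;
        int d = 0;
        for (int pos = 1; pos <= 12; pos++) {
            if ((pos & (pos - 1)) == 0)     /* power of two: check bit */
                continue;
            if ((data >> d++) & 1)
                word |= 1u << pos;
        }
        for (int p = 1; p <= 8; p <<= 1) {
            int parity = 0;
            for (int pos = 1; pos <= 12; pos++)
                if ((pos & p) && ((word >> pos) & 1))
                    parity ^= 1;
            if (parity)
                word |= 1u << p;
        }
        return word;
    }

    /* XOR of the positions of all set bits: 0 for a clean word,
     * otherwise the position of the single flipped bit. */
    static int syndrome(unsigned word)
    {
        int s = 0;
        for (int pos = 1; pos <= 12; pos++)
            if ((word >> pos) & 1)
                s ^= pos;
        return s;
    }

    int main(void)
    {
        unsigned word = encode(0xA7);
        unsigned hit  = word ^ (1u << 6);   /* flip one bit */
        int s = syndrome(hit);

        printf("syndrome %d -> flipping bit %d back\n", s, s);
        hit ^= 1u << s;
        printf("repaired word matches: %s\n", hit == word ? "yes" : "no");
        return 0;
    }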
December 19, 2004 8:11:38 PM

On Sun, 19 Dec 2004 13:25:33 +0000, Ian Rawlings wrote:

> > One bit per byte would be enough to detect errors.
>
> It depends on the error rate; it's possible for two bitflips in a byte
> to preserve the same parity, so the parity bit would still match the
> byte despite two bits having been flipped.

Right, but with the high error rate hinted at for NAND chips I
doubt that simple byte parity is used.


> At any rate we don't know enough about it to really speculate without
> the right information (error rates, acceptable losses). The error
> rate with these things is much higher than with the ECC SIMMs you talk
> about later in your post.

We may not have enough information to *know* what's what, but this
is the internet, aka Speculation Central. :) 


>> By 'developer' I assume that you mean the designer of the hardware?
>> It certainly wouldn't be implemented by application developers,
>
> It would either be implemented by the hardware developer or the
> *operating system* developer. Error correction/detection is not
> always needed, e.g. for the use of storing audio or video data so the
> developer would not necessarily want to eat up valuable space with
> needless error correction. The hardware manufacturer can leave it up
> to the purchaser to decide how to best do it.

In devices not used exclusively for audio or video, data integrity
is paramount, and so I have a strong suspicion that the T5 uses ECC,
since it's apparently much more susceptible to errors than other
computers and PDAs, where a single bit error can be sufficient
to crash a program or OS.


> So if you think that error-prone storage is a strange concept, you've
> been using it for years ;-)

Next you'll be telling me that I've been using an error-prone
operating system for years. And since I use more than PalmOS you'd
be right!
Anonymous
December 19, 2004 10:29:48 PM

On 2004-12-19, BillB <rainbose@earthlink.newt> wrote:

> In devices not used exclusively for audio or video, data integrity
> is paramount, and so I have a strong suspicion that the T5 uses ECC,

Yes, it certainly will use ECC; however, the statement I was answering
wasn't about the T5, but about the memory in general. Usenet is not
only speculation central, it's nit-pick central too!

> Next you'll be telling me that I've been using an error-prone
> operation system for years. And since I use more than PalmOS you'd
> be right!

Alas if only they'd implement forward error correction for an entire
computer...

--
For every expert, there is an equal but opposite expert
Anonymous
December 20, 2004 2:05:38 AM

Ian Rawlings wrote:

> On 2004-12-19, BillB <rainbose@earthlink.newt> wrote:
>
>
>>One bit per byte would be enough to detect errors.

> It depends on the error rate; it's possible for two bitflips in a byte
> to preserve the same parity, so the parity bit would still match the
> byte despite two bits having been flipped.

It's ALWAYS possible for any error correction scheme to fail, no
matter how many bits you use. Even if you store an entire redundant
copy of the 8 bits, it's still possible that the exact same 5 bits
out of each of the two copies of the 8 bits might all spontaneously
flip together. It's astronomically unlikely, but it's always possible.

- Logan
Anonymous
December 20, 2004 3:07:00 AM

On 2004-12-19, Logan Shaw <lshaw-usenet@austin.rr.com> wrote:

> Even if you store an entire redundant copy of the 8 bits, it's still
> possible that the exact same 5 bits out of each of the two copies of
> the 8 bits might all spontaneously flip together. It's
> astronomically unlikely, but it's always possible.

Plus of course if you store two copies of the data and they don't
match, how do you tell which one is wrong without adding even more
checksumming ;-)

This whole error correction stuff is quite fascinating, I did the
theory many moons ago at University and it was one of the more
interesting parts of the course.

--
For every expert, there is an equal but opposite expert
December 20, 2004 3:44:21 AM

On Mon, 20 Dec 2004 00:07:00 +0000, Ian Rawlings wrote:

> Plus of course if you store two copies of the data and they don't
> match, how do you tell which one is wrong without adding even more
> checksumming ;-)

That's easy. Everybody knows that it's a mistake to store two
copies of the data. Three copies are needed, and with luck, two of
the data sets will match. Then, extending the principles of
democracy to disk drives, majority rules. Unfortunately, just as in
real life, the majority may sometimes get it wrong and the truth
would be found only on the outlier. In such cases, when you no
longer have faith in the accuracy of data being fetched and
presented by MS's code, it may be time to consult with an Oracle.
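
The three-copy scheme being joked about here is real (it's called triple
modular redundancy), and the majority vote can be done bitwise: each output
bit is whatever at least two of the three copies agree on. A tiny
illustrative C example:

    #include <stdio.h>

    static unsigned char majority(unsigned char a, unsigned char b,
                                  unsigned char c)
    {
        return (a & b) | (a & c) | (b & c);
    }

    int main(void)
    {
        unsigned char good = 0xC3;
        unsigned char bad  = good ^ 0x18;   /* one corrupted copy */

        printf("voted: 0x%02X (expected 0xC3)\n",
               majority(good, bad, good));
        return 0;
    }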
December 20, 2004 3:58:46 AM

On Sun, 19 Dec 2004 19:29:48 +0000, Ian Rawlings wrote:

> Yes, it certainly will use ECC; however, the statement I was answering
> wasn't about the T5, but about the memory in general. Usenet is not
> only speculation central, it's nit-pick central too!

Always has been, but one eventually gets used to it, and then the
usenet fauna can be appreciated for the fineness of their fur, even
if the skin is occasionally a mite too thin. :) 


> Alas if only they'd implement forward error correction for an entire
> computer...

They're working on it, but the word is out that it will first
appear in a computer built exclusively with fuzzy logic circuits
constructed with fast switching B & M gates.
Anonymous
December 20, 2004 5:10:50 AM

On 2004-12-20, BillB <rainbose@earthlink.newt> wrote:

> They're working on it, but the word is out that it will first appear
> in a computer built exclusively with fuzzy logic circuits
> constructed with fast switching B & M gates.

I don't think computers will become fault-free until they become
sentient, then you can threaten the bleeders and they'll listen for a
change!

--
For every expert, there is an equal but opposite expert
Anonymous
December 20, 2004 5:14:07 AM

On 2004-12-20, BillB <rainbose@earthlink.newt> wrote:

> That's easy. Everybody knows that it's a mistake to store two
> copies of the data. Three copies are needed, and with luck, two of
> the data sets will match.

That's a bit of a brute-force method though; it can be done much more
efficiently with some clever maths, for example hashing blocks of data
using an algorithm along the lines of MD5, which can still suffer from
hash collisions, but that's not likely to happen.

As an aside, a fair few people I've met think that RAID 5 protects you
against things like disc errors --- it doesn't; it only protects you
against total disc failure. If a disc gives you bad data it won't help
in the slightest!

Anyway, I ought to try to get some sleep again, can't drift off
tonight for some reason...

--
For every expert, there is an equal but opposite expert
Anonymous
December 20, 2004 5:14:08 AM

In message <slrncscdbf.oqh.news05@gate-int.tarcus.org.uk> Ian Rawlings
<news05@tarcus.org.uk> wrote:

>As an aside, a fair few people I've met think that RAID 5 protects you
>against things like disc errors --- it doesn't; it only protects you
>against total disc failure. If a disc gives you bad data it won't help
>in the slightest!

Sure it will, but not in real time. You'll still get bad data when you
read the drive, but when you discover the problem you can still recover
the original valid data (you need to identify the disc with errors and
fail it, then let the RAID array rebuild it to a spare)


--
I don't approve of political jokes...
I've seen too many of them get elected.
Anonymous
December 20, 2004 11:22:11 AM

On 2004-12-20, DevilsPGD <devilspgd@crazyhat.net> wrote:

> (you need to identify the disc with errors and fail it, then let the
> RAID array rebuild it to a spare)

I wouldn't like to use this method to try and recover a database after
it's been sat there reading rubbish and writing rubbish for a while..

A friend works for a data backup firm; they don't actually back up
data in the conventional sense: they copy it across the internet onto
mirrored discs. One of their scripts went wrong and deleted all their
copies for a few customers, and of course they couldn't recover the
deletions from the mirror on account of it being a mirror...

Oops!

--
For every expert, there is an equal but opposite expert
Anonymous
December 20, 2004 4:21:25 PM

Ian Rawlings wrote:
> On 2004-12-20, DevilsPGD <devilspgd@crazyhat.net> wrote:

>> (you need to identify the disc with errors and fail it, then let the
>> RAID array rebuild it to a spare)
>
> I wouldn't like to use this method to try and recover a database after
> it's been sat there reading rubbish and writing rubbish for a while..

The thing is, on a real RAID 5, it won't have, unless two disks are failing,
and probably not even then. RAID 5 reads and writes the parity information
*every time*. Further, the parity is done so that there's a parity bit
corresponding to each bit on a drive. So, unless *two* drives happen to
give a bad read in a way such that the parity comes up the same, when a
drive gives bad information, the RAID system will notice.

Now, if the drive "doesn't realize" that it's giving bad data, then the RAID
system will have no way to know which drive is bad -- but it will know that
one of them is bad, and will try to re-read until it either (a) gets a read
that is correct according to the parity information or (b) reaches its
limit on retries, gives up, and sends an error upstream.

Unless two drives happen to fail in a way such that both "cancel out" for
parity, your database won't be reading rubbish.

A similar thing holds true for a bad write -- the parity information is
written with every write. Thus, even if one drive writes something
different from what you told it to, the parity can still enable you to
recover once you've figured out which drive is bad. Only if *two* drives
write the wrong information will it be impossible to recover.

> A friend works for a data backup firm; they don't actually back up
> data in the conventional sense: they copy it across the internet onto
> mirrored discs. One of their scripts went wrong and deleted all their
> copies for a few customers, and of course they couldn't recover the
> deletions from the mirror on account of it being a mirror...

Then they're not really a backup firm; a mirror is not a backup. Any good
backup administrator should be able to tell you that the most common cause
of needing to restore something from backup is that a user accidentally
deleted something, or deleted it on purpose, then realized (usually a month
or so later...) that they actually still needed it.

--
ZZzz |\ _,,,---,,_ Travis S. Casey <efindel@earthlink.net>
/,`.-'`' -. ;-;;,_ No one agrees with me. Not even me.
|,4- ) )-,_..;\ ( `'-'
'---''(_/--' `-'\_)
Anonymous
December 20, 2004 8:06:19 PM

On 2004-12-20, Travis Casey <efindel@earthlink.net> wrote:

> Then they're not really a backup firm; a mirror is not a backup.

Yes I know, but I don't think they want their customers to know that ;-)

Buyer beware!

--
For every expert, there is an equal but opposite expert
December 21, 2004 6:38:09 PM

On Sat, 18 Dec 2004 12:35:21 +0000, Ian Rawlings wrote:

> So the NAND memory is cheaper, can't be used for execute-in-place (so
> code needs to be loaded into RAM before it can run which is I think
> what palmos has always done anyway), and also suffers from bad blocks
> although it seems these are fixed in place at manufacture and don't
> increase during the product lifetime.

That last may not be true (about bad blocks not increasing during
the product's lifetime). I just read a manual for a Fuji camera
that uses xD-Picture Cards, which are smaller than SD cards BTW.
They note that the xD cards use NAND-type flash memory, but
ominously add:

> The xD Picture Card will work well for long time, but will sooner
> or later lose its ability to store and play back images or movies. If
> this happens replace it with a new xD-Picture Card.

I'm aware that something like this was a problem with flash memory,
but several years ago improvements made this no longer much of a
concern. Has the problem returned because NAND-type flash memory is
more susceptible to "wearing out" than the flash memory used in CF,
SD and MMC cards? If they are all comparable, then Fuji is to be
commended for even mentioning a potential problem that most of their
customers are unlikely to ever experience. But if NAND-type memory
is much more susceptible to wearing out, then how might that affect
the T5, whose NAND memory presumably isn't contained in an easily
replaced xD card?
Anonymous
December 21, 2004 8:44:13 PM

On 2004-12-21, BillB <rainbose@earthlink.newt> wrote:

> That last may not be true (about bad blocks not increasing during
> the product's lifetime).

I don't know why I typed that; I know it's not true because, like other
types of flash memory, NAND flash has a limited number of writes
before the memory cells expire, sommat like 100,000 writes I think.
The filesystem is supposed to rectify this by never writing the data
back to the same block it reads it from, so it reads the block in,
writes modified data out to a new block then adds the original block
to the end of its free blocks list.

> I'm aware that something like this was a problem with flash memory,
> but several years ago improvements made this no longer much of a
> concern.

It's still a problem; older memory used to be limited to 1,000 or so
writes, but the limit is much higher now. It still raises the question
of where they store things like the file system indexes and free block
lists though ;-)

> But if NAND-type memory is much more susceptible to wearing out,
> then how might that effect the T5, whose NAND memory presumably
> isn't contained in an easily replaced xD card?

Presumably an expired block gets marked as "bad" and doesn't get used
any more. Of course, if they implement round-robin writes as suggested
above and as used in JFFS (IIRC), then once one block goes you'll
start getting more and more bad ones. How long that would take to
start happening would depend on usage patterns. I wonder if they'd
cover that under warranty or would regard it as "wear and tear"....

If they're using round-robin writes then it would ease matters
considerably, for example if you had 1,000 free blocks and you
repeatedly read and wrote one block of data, each free block would be
written to once in every 1,000 writes or so.

--
For every expert, there is an equal but opposite expert
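
The round-robin recycling Ian describes can be sketched with a FIFO free
list: a modified block is always written to the block at the head of the
queue, and the old physical block joins the tail. A toy C model (all
constants invented) showing the writes spreading evenly even when one
logical block is hammered:

    #include <stdio.h>
    #include <string.h>

    #define NBLOCKS 8
    #define BLKSZ   512

    static unsigned char flash[NBLOCKS][BLKSZ];
    static int write_count[NBLOCKS];
    static int free_list[NBLOCKS];      /* FIFO queue of free blocks */
    static int head, tail;

    static void free_push(int blk)
    {
        free_list[tail] = blk;
        tail = (tail + 1) % NBLOCKS;
    }

    static int free_pop(void)           /* caller ensures non-empty */
    {
        int blk = free_list[head];
        head = (head + 1) % NBLOCKS;
        return blk;
    }

    /* Rewrite the data currently at physical block 'old': the new
     * copy goes to the block at the head of the free list and the
     * old block is recycled to the tail, so writes rotate through
     * every block instead of wearing out one spot. */
    static int rewrite_block(int old, const unsigned char *data)
    {
        int new_blk = free_pop();
        memcpy(flash[new_blk], data, BLKSZ);
        write_count[new_blk]++;
        free_push(old);
        return new_blk;
    }

    int main(void)
    {
        unsigned char buf[BLKSZ] = {0};
        int blk = 0;

        for (int b = 1; b < NBLOCKS; b++)
            free_push(b);               /* block 0 holds the data */

        for (int i = 0; i < 1000; i++)
            blk = rewrite_block(blk, buf);

        for (int b = 0; b < NBLOCKS; b++)
            printf("block %d written %d times\n", b, write_count[b]);
        return 0;
    }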
December 22, 2004 12:00:26 AM

On Tue, 21 Dec 2004 17:44:13 +0000, Ian Rawlings wrote:

> NAND flash has a limited number of writes
> before the memory cells expire, sommat like 100,000 writes I think.
> The filesystem is supposed to rectify this by never writing the data
> back to the same block it reads it from, so it reads the block in,
> writes modified data out to a new block then adds the original block
> to the end of its free blocks list.

What had me thinking that xD's NAND might have a shorter lifetime
than other flash RAM is that, the way most people take pictures, the
xD cards shouldn't fail during the owner's lifetime. 100,000
pictures is equivalent to several thousand rolls of film, and while
it would take an awfully long time to take this number of pictures,
the memory cells would only experience 100,000 writes by the time
100,000 pictures were taken if they were all written to the same
part of the xD card (take a single picture, send it to the computer,
erase from card, repeat 100,000 times).

But as most people fill a card with dozens or even hundreds of
pictures before transferring them to a computer, the cards will most
likely have to store many millions of pictures before any individual
cells will have been written to 100,000 times. Some cells will be
written to more frequently than others, but even the ones most
frequently used should still have been written to fewer than 100,000
times by the time the camera has taken a couple of million pictures.

How long would this be? Anyone taking pictures at the rate of
two rolls per day (40 pictures) each day for the rest of their life
would die of old age long before they reached a couple of million
pictures. And it's highly unlikely that Fuji's 2004 cameras and xD
cards would still be used in the year 2140, so for an xD card to
wear out, even if only for a very small number of xD card users,
would seem to imply that the NAND technology used in xD cards
permits far fewer than 100,000 writes per cell. Otherwise, why
would Fuji feel the need to mention the fact that xD cards wear out?

It may be that Fuji's version of NAND cells sacrifices longevity
for something else, perhaps speed, and the NAND cells used by the T5
are much more durable. But even using the filesystem techniques to
distribute writes that you mentioned, if the T5's NAND cells are good
for only 100,000 write cycles, the T5 may by design have a much shorter
useful life than any other PDA on the market, regardless of the
type of batteries used. With luck, my old TRGPro might still be
working 20 or 30 years from now (whether I like it or not). The
same may be true for early T's and Clies, but I have a hunch that
some T3s will still be working when the last T5 bites the dust.
Anonymous
December 22, 2004 12:28:44 AM

On 2004-12-21, BillB <rainbose@earthlink.newt> wrote:

> But as most people fill a card with dozens or even hundreds of
> pictures before transferring them to a computer, the cards will most
> likely have to store many millions of pictures before any individual
> cells will have been written to 100,000 times.

It depends on how the camera filesystem copes with metadata; the data
written to a card isn't just pictures, it's also indexes of data
blocks, write times, directory structures, etc. You've probably heard
of the FAT (file allocation table); on an MS-DOS based filesystem,
which almost all cameras use, the FAT will be updated for each and
every chunk of data written to the card and so will be the first up
against the wall. If the locations of parts of the FAT are fixed (and
I believe they are) then after a while the computer will be unable to
update the FAT as the blocks holding vital parts of it will no longer
be writeable, and the card will fail. I'm no expert on the features
of the MS-DOS based filesystems, but I don't believe they have any way
to cope with this.

This is the reason why flash-specific filesystems are
available: they work around the problem of continuously updating
the same blocks of filesystem metadata. A simple camera with
effectively disposable cards probably isn't going to worry about it,
but a more advanced device with fixed memory can work around it using
specific filesystems designed to cope.

However, everything I've seen regarding the T5 in particular so far
states that it uses VFAT, so it'll probably suffer from this metadata
problem unless they've figured out a way around it.. It may be that
it uses a flash-specific low-level driver or sommat so only the
higher-level parts of the filesystem are VFAT-compatible, or it could
just be that Palm thought "sod it" and just used a stock filesystem
that doesn't take the limited-write nature of flash memory into
account, so they'll all screw up after the FAT has been written to
more than 100,000 or whatever times.... That could be a problem!!

Really we need someone who knows more about the T5 to clear this up; I
certainly don't and am not interested in the T5 as it's a step down
from the T3, so I'm not going to research it further.

> But even using the filesystem techniques to distribute writes that
> you mentioned, if the T5's NAND cells are good for only 100,000 write
> cycles, the T5 may by design have a much shorter useful life than
> any other PDA on the market, regardless of the type of batteries
> used.

I'm not sure that the T5 uses a distributed-write filesystem; it ought
to, but if I led you to believe that the T5 uses it then that's my
fault for mixing my speculation up ;-) If it does use a
distributed-write flash-specific filesystem then I don't think that
the memory will expire within the normally-expected lifetime of the
product, but if it doesn't use flash-specific filesystems then 100,000
or so updates of the filesystem could be enough to kill it....

> With luck, my old TRGPro might still be working 20 or 30 years from
> now (whether I like it or not). The same may be true for early T's
> and Clies, but I have a hunch that some T3s will still be working
> when the last T5 bites the dust.

I've not played with a T5 yet but from what others have said it seems
to have been built as cheaply as possible, so I suspect you might be
right. I tend to replace my PDA once every year or so anyway so if
they last more than a year then I might still go for any future
machine based on a similar design.

--
For every expert, there is an equal but opposite expert
December 22, 2004 5:13:42 PM

On Tue, 21 Dec 2004 21:28:44 +0000, Ian Rawlings wrote:

> It depends on how the camera filesystem copes with metadata; the data
> written to a card isn't just pictures, it's also indexes of data
> blocks, write times, directory structures, etc. You've probably heard
> of the FAT (file allocation table); on an MS-DOS based filesystem,
> which almost all cameras use, the FAT will be updated for each and
> every chunk of data written to the card and so will be the first up
> against the wall.

Probably, but it would be nice if FAT and directories were cached
and only updated when necessary or just prior to the camera being
powered down. I had some of that in a custom CP/M BIOS long before
MSDOS hatched. :)  Good point though about the indexes and directory
updates worsening the situation for NAND longevity. That may well
be the reason for Fuji's warning.


> . . . or it could
> just be that Palm thought "sod it" and just used a stock filesystem
> that doesn't take the limited-write nature of flash memory into
> account, so they'll all screw up after the FAT has been written to
> more than 100,000 or whatever times.... That could be a problem!!

For Palm owners, but some Palm execs probably look at it as just
another route to planned obsolescence, and for them it's a solution.


> I've not played with a T5 yet but from what others have said it seems
> to have been built as cheaply as possible, so I suspect you might be
> right. I tend to replace my PDA once every year or so anyway so if
> they last more than a year then I might still go for any future
> machine based on a similar design.

I like it that some old devices not only remain useful, but
sometimes outperform modern ones. Examples include some nearly 50
year old Zenith portable transistor radios that are more sensitive,
go much farther on a set of batteries, and sound better than most
modern radios. Of course they lack a few features found on modern
radios, such as FM, and even the Conelrad dial markers don't quite
compensate for that. :) 
Anonymous
December 23, 2004 8:07:36 AM

BillB wrote:

> On Tue, 21 Dec 2004 21:28:44 +0000, Ian Rawlings wrote:

>>. . . or it could
>>just be that Palm thought "sod it" and just used a stock filesystem
>>that doesn't take the limited-write nature of flash memory into
>>account, so they'll all screw up after the FAT has been written to
>>more than 100,000 or whatever times.... That could be a problem!!

> For Palm owners, but some Palm execs probably look at it as just
> another route to planned obsolescence, and for them it's a solution.

It's also possible that the controller (whether on the chip or in
the hardware) takes care of bad block remapping. That way you don't
have to change the way everything works that uses the flash. It's
much cleaner if the flash does all the magic internally and just
presents a reliable device to the operating system.

By the way, can't remember where I saw this (could've even been
here), but somewhere they were discussing the longevity of flash,
and someone did the math on the limited number of write cycles.
The conclusion was that if you have a system that constantly
circulates things around so that the write cycles are spread
equally across the memory, then based on the maximum transfer speed
to and from the flash, you'd have to be updating the filesystem
24 hours a day for a number of years before you actually do wear
out the flash. Unfortunately, I can't remember the specific
device that was being discussed, so take that with a grain of
salt, but the point is that it might not be any less reliable
than, say, a hard disk, and could even be more reliable.
Especially on a device like a Palm which is usually used very
lightly by computer standards.

- Logan
Anonymous
December 23, 2004 11:38:30 AM

On 2004-12-23, Logan Shaw <lshaw-usenet@austin.rr.com> wrote:

> It's also possible that the controller (whether on the chip or in
> the hardware) takes care of bad block remapping. That way you don't
> have to change the way everything works that uses the flash. It's
> much cleaner if the flash does all the magic internally and just
> presents a reliable device to the operating system.

Entirely possible. I know that the page I looked at featured NAND
flash that didn't, because not everyone needs it; I don't know if
there's flash on the market that implements this. Remember that the
idea behind using this stuff in the first place is that it's
physically small and packs more memory into the space than other types
of memory, so taking up a load of space with complex correction that
could be done by the processor would probably be less likely than just
leaving it to the processor.

> The conclusion was that if you have a system that constantly
> circulates things around so that the write cycles are spread equally
> across the memory, then based on the maximum transfer speed to and
> from the flash, you'd have to be updating the filesystem 24 hours a
> day for a number of years before you actually do wear out the flash.

I can entirely believe it, which is why I posted another thread asking
if anyone knows if the T5 actually does this; it's possible that it
doesn't and just uses the memory as normal memory, in which case the
file allocation tables are likely to go much sooner. If you updated
your filesystem 1,000 times per day, then they could die after just 100
days; at 100 times a day they'd go after 1,000 days (which is OK).

No answers in the other thread though :-(

--
For every expert, there is an equal but opposite expert
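
The lifetime arithmetic is worth checking mechanically. This illustrative
snippet uses the 100,000-cycle endurance figure quoted earlier in the
thread; everything else is made up for the example.

    #include <stdio.h>

    int main(void)
    {
        const long endurance = 100000;       /* write cycles per block */
        const long rates[]   = {10, 100, 1000};  /* fs updates per day */

        for (int i = 0; i < 3; i++)
            printf("%5ld updates/day: a fixed FAT block dies in %ld days\n",
                   rates[i], endurance / rates[i]);

        /* Spreading the same load round-robin over 1,000 blocks
         * multiplies the lifetime by 1,000. */
        printf("wear-leveled over 1000 blocks at 1000/day: %ld days\n",
               endurance * 1000L / 1000);
        return 0;
    }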
Anonymous
December 23, 2004 2:27:35 PM

In article news:<slrncsh5cb.8j4.news05@gate-int.tarcus.org.uk>, Ian
Rawlings wrote:
> If the locations of parts of the FAT are fixed (and
> I believe they are) then after a while the computer will be unable to
> update the FAT as the blocks holding vital parts of it will no longer
> be writeable, and the card will fail.

It is my understanding -- though I may be wrong -- that the process whereby
the card chooses the least-recently-used sector every time a write occurs
is implemented in a logical/physical address mapping layer *beneath* the
filesystem. That is, the way that physical addresses on the card are
translated to logical addresses on the 'disk' is not linear or fixed.

So, when a FAT sector is written it will always have the same logical
address, but may have any physical address on the card.

The worst-case scenario is for a card that is mostly full of data that do
not change, the rest of whose capacity is updated often. The card's LRU
algorithm will then only be able to cycle through the small number of
physical blocks that are being written frequently, and these few sectors
may become worn out while those used for the fixed data will still be
almost as new. This suggests that it is a good idea to erase all data from
a card periodically and reload it, as that gives the LRU algorithm the
chance to use all the blocks more evenly. This is typical usage for a
camera's flash card, but not for a PDA's.

Note that the card does not 'fail' as a whole. When a particular block on
the card can no longer be written reliably the card should map out that
block and continue to function with reduced capacity, so the process of
failure is quite gradual and can be detected by observing the diminishing
capacity of the 'disk'.

Cheers,
Daniel.
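
Daniel's translation layer can be sketched in a few lines: the filesystem
keeps writing the same logical sector, but a mapping table points it at the
least-recently-written free physical block each time. A purely illustrative
C toy (real controllers are far more elaborate); note how the untouched
sectors stay pinned to their physical blocks, which is exactly the
worst-case behaviour Daniel describes.

    #include <stdio.h>

    #define SECTORS 4           /* logical sectors exposed to the fs */
    #define BLOCKS  6           /* physical blocks on the card       */

    static int  map[SECTORS];        /* logical -> physical          */
    static long last_write[BLOCKS];  /* pseudo-timestamp per block   */
    static long now = 1;

    /* Pick the unmapped physical block written longest ago. */
    static int lru_free_block(void)
    {
        int best = -1;
        for (int b = 0; b < BLOCKS; b++) {
            int in_use = 0;
            for (int s = 0; s < SECTORS; s++)
                if (map[s] == b)
                    in_use = 1;
            if (!in_use && (best < 0 || last_write[b] < last_write[best]))
                best = b;
        }
        return best;
    }

    static void write_sector(int sector)
    {
        int blk = lru_free_block();
        map[sector] = blk;      /* remap; the old block becomes free */
        last_write[blk] = now++;
    }

    int main(void)
    {
        for (int s = 0; s < SECTORS; s++)
            map[s] = s;

        for (int i = 0; i < 5; i++) {
            write_sector(0);    /* same logical "FAT" sector each time */
            printf("FAT sector now in physical block %d\n", map[0]);
        }
        return 0;
    }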
Anonymous
December 23, 2004 2:27:35 PM

In article news:<sosyd.3336$wD4.656@fe1.texas.rr.com>, Logan Shaw wrote:
> ... somewhere they were discussing the longevity of flash,
> and someone did the math on the limited number of write cycles.
> The conclusion was that if you have a system that constantly
> circulates things around so that the write cycles are spread
> equally across the memory, then based on the maximum transfer speed
> to and from the flash, you'd have to be updating the filesystem
> 24 hours a day for a number of years before you actually do wear
> out the flash.

I've seen a similar discussion about the feasibility of using a flash card
for a swap partition on a Linux PDA.

Googling for [zaurus linux swap flash] gives 927 hits, but the first is
this: http://www.linuxjournal.com/article/5902, which says:

| However, while the NAND-type Flash used in most cards is usually
| specified at 100,000 erase/write cycles, this is only the minimum
| specification. Moreover, the Flash card controller adds ECC bytes
| to each 512-byte block and does wear leveling.
|
| We can assume a continuous writing on a swap area at 100KB per
| second. Thanks to the wear leveling, the controller will cycle
| though the sectors, so if we have an 8MB swap area, it will do
| one erase/write per sector every 80 seconds. A real smart controller
| might even distribute them over the whole card.
|
| The ECC algorithms will increase the erase/write cycles to at least
| 1M cycles per sector. That will give you two and a half years before
| seeing any failing sectors, and at that time the controller will
| start using spare sectors.
|
| Therefore, if used properly, a Flash card will last for the lifetime
| of the system and definitely beat the MTBF (mean time before failure)
| of any hard disk even if you put your swap partition on it.

Cheers,
Daniel.
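
The article's arithmetic checks out; here is an illustrative snippet that
reproduces the 80-second and two-and-a-half-year figures.

    #include <stdio.h>

    int main(void)
    {
        double rate   = 100e3;    /* bytes/second written to swap */
        double area   = 8e6;      /* 8 MB swap area               */
        double cycle  = area / rate;              /* 80 seconds   */
        double writes = 1e6;      /* cycles/sector with ECC       */
        double years  = writes * cycle / (3600.0 * 24 * 365);

        printf("one erase/write per sector every %.0f s\n", cycle);
        printf("first failing sectors after about %.1f years\n", years);
        return 0;
    }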