# How many mp's to match resolution of 4x5 neg film cam?

Anonymous

April 29, 2005 2:55:37 AM

Archived from groups: rec.photo.digital

I'm sure this has been asked before, but if anyone knows the hard data,

can you please answer this question? I'm just curious to know the

answer. Approximately how many megapixels would a digital camera need

to match the resolution of a 4x5 negative film camera?

Anonymous

April 29, 2005 3:15:20 AM

Codex wrote:

>

> I'm sure this has been asked before

right.

codex, meet google.com

google.com, meet codex

<http://www.google.com/search?hl=en&q=4x5+film+megapixel...>

> but if any one knows the hard data

> can you please answer this question, I'm just curious to know the

> answer. Approximately how many mp's would a DC need to match the

> resolution of a 4x5 neg film camera?

Anonymous

April 29, 2005 4:39:42 AM

On Thu, 28 Apr 2005 23:15:20 -0700, Crownfield <Crownfield@cox.net>

wrote:

>right.

>

>codex, meet google.com

>google.com, meet codex

Yes, I've met google.com before but it's more fun asking on usenet.

Just making conversation, you know?

><http://www.google.com/search?hl=en&q=4x5+film+megapixel...>

Now you've spoiled my fun. The answer is more than 200 MP. This

thread is done. Thanks. :-)

Anonymous

April 29, 2005 4:42:49 AM

"Codex" <no@email.here> wrote in message

news:kuo37192egr02v1hc515los1c8o9sk8hmo@4ax.com...

> On Thu, 28 Apr 2005 23:15:20 -0700, Crownfield <Crownfield@cox.net>

> wrote:

>

>

>>right.

>>

>>codex, meet google.com

>>google.com, meet codex

>

> Yes, I've met google.com before but it's more fun asking on usenet.

> Just making conversation, you know?

>

>><http://www.google.com/search?hl=en&q=4x5+film+megapixel...>

>

> Now you've spoiled my fun. The answer is more than 200mp's. This

> thread is done. Thanks. :-)

Ya. -According to SOMEONE...

The answer is entirely debatable.

Anonymous

April 29, 2005 7:40:32 AM

Codex wrote:

> I'm sure this has been asked before but if any one knows the hard

data

> can you please answer this question, I'm just curious to know the

> answer. Approximately how many mp's would a DC need to match the

> resolution of a 4x5 neg film camera?

In French, this kind of question is called a "troll" (a Scandinavian

name for a kind of nagging spirit); is there a similar word in

English?

But I'll answer you on the basis of Norman Koren's data (see

http://www.normankoren.com/Tutorials/MTF7.html ), who states that it

takes around 100 pixels per mm (i.e. 3600*2400 px for 24*36 mm) to have

the same resolution as a scanned Provia 100, which I find realistic

(even pessimistic, because it only considers resolution, not noise).

Applied to 4*5" = 100*125 mm, that rule of thumb would make

10,000*12,500 = 125 Mpix. Yes, it's quite a lot.

Given the low noise in a big imaging chip (I assume you don't want a

pocket camera???), you could decrease the pixel count a bit, but the

answer would still be "no way under 60 Mpix". Still a lot.
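That rule of thumb works out as follows (a quick sketch; the 100 px/mm figure and the rounded 100*125 mm sheet size are the ones quoted above):

```python
PX_PER_MM = 100          # Koren's rule of thumb for scanned Provia 100
SHEET_MM = (100, 125)    # rounded usable area of a 4x5" sheet

# Pixels needed = (width in px) * (height in px)
pixels = (SHEET_MM[0] * PX_PER_MM) * (SHEET_MM[1] * PX_PER_MM)
print(pixels / 1e6)      # 125.0 megapixels
```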

Did the troll have enough exercise?

Greetings from France

Nicolas

Anonymous

April 29, 2005 11:00:05 AM

nikojorj_jaimepaslapub@yahoo.Fr wrote:

> Codex wrote:

>

>>I'm sure this has been asked before but if any one knows the hard

>

> data

>

>>can you please answer this question, I'm just curious to know the

>>answer. Approximately how many mp's would a DC need to match the

>>resolution of a 4x5 neg film camera?

>

>

> In french, this kind of question is called a "troll" (a scandinavian

> name for, somehow, nagging spirits), is there a similar word in

> english?

>

> But I'll answer you on the basis of Norman Koren data (see

> http://www.normankoren.com/Tutorials/MTF7.html ), who states that it

> takes around 100pixel by mm (ie 3600*2400px for 24*36) to have the same

> resolution as a scanned Provia100, that I find realistic (even

> pessimistic, because it does only consider resolution, not noise).

>

> Applied to 4*5" = 100*125mm, that rule of thumb would make

> 10.000*12.500 = 125Mpix. Yes, it's quite a lot.

> Given the low loise in a big imaging chip (I assume you don't want a

> pocket camera???), you could decrease pixel count a bit, but the answer

> would still be "no way under 60Mpix". Still a lot.

The signal-to-noise issue is a good one. In noise testing as well

as perception, the signal-to-noise of a DSLR allows improvement

of spatial resolution by on the order of 2x, e.g. by

Richardson-Lucy deconvolution, see:

http://clarkvision.com/imagedetail/image-restoration1
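Richardson-Lucy deconvolution itself is a short multiplicative iteration; here is a toy 1-D NumPy sketch of the idea (an illustration only, not the actual processing chain used on that page):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iterations=50):
    """Basic Richardson-Lucy iteration for a 1-D signal."""
    psf_mirror = psf[::-1]  # correlation = convolution with flipped PSF
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)  # avoid divide-by-zero
        estimate = estimate * np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy scene: two sharp lines, blurred by a small normalized PSF.
truth = np.zeros(64)
truth[20], truth[40] = 1.0, 0.5
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(blurred, psf, iterations=200)
```

On noiseless data like this the iteration sharpens the blurred lines back toward the original; on real scans the gain is limited by noise, which is why the high signal-to-noise of a DSLR matters.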

Film megapixel versus digital is film dependent. Fujichrome

Velvia 4x5 is around 200 megapixels, see:

http://www.clarkvision.com/imagedetail/scandetail.html

Here are charts of film versus digital for many films

and formats:

http://www.clarkvision.com/imagedetail/film.vs.digital....

There are other articles about signal-to-noise, dynamic range,

etc on the site. The effects of signal-to-noise on megapixel

equivalent are discussed in the summary page:

http://www.clarkvision.com/imagedetail/film.vs.digital....

Roger

Anonymous

April 29, 2005 4:18:09 PM

On 29 Apr 2005 03:40:32 -0700, nikojorj_jaimepaslapub@yahoo.Fr wrote:

>Did the troll have enough exercise?

>Greetings from France

>Nicolas

It wasn't a troll. A real troll said on a forum that 9mp matches 4x5

neg and I called him a bullshitter and thus I turned up here to get

the low down. Is that a troll? No.

James

April 29, 2005 9:47:32 PM

In article <msi371t8s1g8esn3pjes8s6q9fe8no38vi@4ax.com>,

Codex <no@email.here> wrote:

>

>

>I'm sure this has been asked before but if any one knows the hard data

>can you please answer this question, I'm just curious to know the

>answer. Approximately how many mp's would a DC need to match the

>resolution of a 4x5 neg film camera?

Don't put the Crown Graphic or the Linhof in the yard sale just yet.

Anonymous

April 29, 2005 9:47:33 PM

On Fri, 29 Apr 2005 17:47:32 GMT, fishbowl@conservatory.com (james)

wrote:

>Don't put the Crown Graphic or the Linhof in the yard sale just yet.

>

>

I don't own one but would like to. Processing costs are expensive

though. I own a Nikon 35mm and an Olympus DC. I don't even have my

darkroom equipment anymore. I used to have a Beseler enlarger with

Nikkor lens that could do 35mm or medium format negs. My kitchen was

my darkroom. :-)

Anonymous

April 29, 2005 9:47:34 PM

Ask those that make them!

www.betterlight.com

www.phaseone.com

---

Honestly, the way to do it is to ask what resolution you expect to

image on a 4x5 neg. If it's 50 lp/mm (a very decent value for MF), then

you'll get:

4x5" = 102.8mm x 128.5mm at 50 lp/mm = 5140 x 6425 pixels digital

equivalent (approximate; ignores Nyquist, analog vs. digital, etc.) = 33

megapixels

Of course, the above two companies do sell high-end MF cameras that top

100 megapixels, so you'll easily find something that'll match your

current analog camera for resolution.

Anonymous

April 29, 2005 11:38:31 PM

I've still got mine.

mike

"Codex" <no@email.here> wrote in message

news:i62571d3sbent8guafqa4eb4kuspev0bt8@4ax.com...

> On Fri, 29 Apr 2005 17:47:32 GMT, fishbowl@conservatory.com (james)

> wrote:

>

> I used to have a Beseler enlarger with

> Nikkor lense that could do 35mm or medium format negs.

Anonymous

April 30, 2005 3:32:00 AM

In article <d4u85q$b33$1@news.service.uci.edu>,

David Chien <chiendh@uci.edu> wrote:

>

>Honestly, the way to do it is to ask what resolution do you expect to

>image on a 4x5 neg? If it's 50lp/mm (a very decent value for MF), then

>you'll get:

>4x5" = 102.8mm x 128.5mm by 50lp/mm = 5140 x 6425 pixels digital

>equivalent (approximate; ignores nyquest, analog vs. digital, etc.) = 33

>Megapixels

Ok, there are a couple of mistakes here. Firstly, a 4x5 sheet is actually

only 120*96mm. Secondly, you can't represent a line *pair* by a single

pixel. If we go for the "perfect world" figure of 2 pixels per line-pair,

you get:

(100 px/mm * 120 mm) * (100 px/mm * 96 mm)

...which gives us about 115 million pixels.

In reality, you aren't going to get two pixels per line-pair in the digital

world, so the actual *resolving power* required is going to be higher than

this, but that will be compensated for by digital pixels (assuming we're

talking about a DSLR) being "cleaner", so in terms of overall image quality,

that 115 megapixel figure is probably somewhere in the right ballpark. In

terms of actual extinction resolution, the film should do a fair bit better.

If we do the same calculation for 35mm, we get about 8 megapixels, which is

a good match for the results people get from scanned 35mm slides, so it's

probably not a bad estimate.
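The corrected figures work out as follows (a quick sketch using the numbers above: a 120*96 mm usable area, 50 lp/mm, and 2 pixels per line pair):

```python
LP_PER_MM = 50   # assumed film resolution
PX_PER_LP = 2    # "perfect world" sampling: 2 pixels per line pair
px_per_mm = LP_PER_MM * PX_PER_LP  # 100 px/mm

def megapixels(width_mm, height_mm):
    """Digital pixel count equivalent for a film frame of the given size."""
    return (width_mm * px_per_mm) * (height_mm * px_per_mm) / 1e6

four_by_five = megapixels(120, 96)  # ~115 MP, as above
small_format = megapixels(36, 24)   # ~8.6 MP for a 35mm frame
```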

Anonymous

April 30, 2005 4:57:28 AM

[A complimentary Cc of this posting was sent to

Codex

<no@email.here>], who wrote in article <msi371t8s1g8esn3pjes8s6q9fe8no38vi@4ax.com>:

> I'm sure this has been asked before but if any one knows the hard data

> can you please answer this question, I'm just curious to know the

> answer. Approximately how many mp's would a DC need to match the

> resolution of a 4x5 neg film camera?

If you mean "best possible resolution", then you already got many

estimates. However, note that most large-format shots are not made at

the "best resolution" f-stop. Thus *the particular shots* may be

equivalent to a much more modest megapixel count.

Somewhat extreme example: starting from f/45, you may be able to get

results which are not much worse even with current 4/3'' digicams (at

low ISO sensitivity); at these f-stops the only thing which is

stressed is the noise level of the film. And digicams have much lower

noise level than film (even with much smaller sensor area).

[Well, this estimate assumes that numbers for film noise on Roger Clark

site are relevant. He still did not answer my queries about these

numbers, so I would take the estimate above with a grain of salt.]

Hope this helps,

Ilya

Anonymous

April 30, 2005 4:57:29 AM

Ilya Zakharevich wrote:

> [A complimentary Cc of this posting was sent to

> Codex

> <no@email.here>], who wrote in article <msi371t8s1g8esn3pjes8s6q9fe8no38vi@4ax.com>:

>

>>I'm sure this has been asked before but if any one knows the hard data

>>can you please answer this question, I'm just curious to know the

>>answer. Approximately how many mp's would a DC need to match the

>>resolution of a 4x5 neg film camera?

>

>

> If you mean "best possible resolution", then you already got many

> estimates. However, note that most large-format shots are not made at

> the "best resolution" f-stop. Thus *the particular shots* may be

> equivalent to much more modest megapixel count.

>

> Somewhat extreme example: starting from f/45, you may be able to get

> results which are not much worse even with current 4/3'' digicams (at

> low ISO sensitivity); at these f-stops the only thing which is

> stressed is the noise level of the film. And digicams have much lower

> noise level than film (even with much smaller sensor area).

Is this another theoretical result?

Take a look at these tests on a 4x5 image at f/45:

http://www.clarkvision.com/imagedetail/scandetail.html

Have you ever seen a 30x40 inch digital print from a drum scanned

velvia 4x5 image? You can walk right up and examine it at

a few inches away and see extremely fine detail. Not even

current high-end pro DSLRs can match the image detail of

a 4x5 Velvia, even at f/45.

> [Well, this estimate assumes that numbers for film noise on Roger Clark

> site are relevant. He still did not answer my queries about these

> numbers, so I would take the estimage above with a grain of salt.]

Why do you continue to attack me? What is it you have against me?

I have tried to help you multiple times and you turn around and

attack me. I have answered dozens of questions, and

recently answered your questions on film noise, after answering

previous questions on film noise. I don't understand your agenda.

My numbers for film noise:

http://clarkvision.com/imagedetail/digital.signal.to.no...

I measure velvia to have a maximum S/N of 70, and the Canon

1D Mark II DSLR 228. Ilya, what is your problem with these values,

and do you have any actual data to prove a different result?

Roger

Anonymous

April 30, 2005 11:09:52 AM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <4272FA30.8070904@qwest.net>:

> > [Well, this estimate assumes that numbers for film noise on Roger Clark

> > site are relevant. He still did not answer my queries about these

> > numbers, so I would take the estimage above with a grain of salt.]

> Why do you continue to attack me? What is it you have against me?

I'm sorry if you consider this as a personal attack. I asked you a

question and did not get an answer (until today, when you did answer).

You have absolutely no obligation to answer my emails. However,

without the answer, the numbers on your site did not have an exact

interpretation. Now that I know the answer, they do.

A lot of thanks. [And I hope you update your web page so that other

people get the same advantage as I did. ;-]

> My numbers for film noise:

> http://clarkvision.com/imagedetail/digital.signal.to.no...

> I measure velvia to have a maximum S/N of 70, and the Canon

> 1D Mark II DSLR 228. Ilya, what is your problem with these values,

> and do you have any actual data to prove a different result?

I told you what my problems with these numbers are, and from your

answers I understood that you understand them well.

So why this question now?

For the benefit of other people (until Roger writes this on his web

page): the "film" numbers are for a square window with side 6.3

microns; the "film" numbers are for density noise, not for luminance

noise. [This is not the same as for digital sensors.]

So the numbers for luminance S/N in an 8.2 micron window into the film

(the one used by the other curves in the graph) should be about

gamma*(8.2/6.3) times higher. Taking a film gamma of 1.5 (is it a

good value?), one gets 1.95x higher values of S/N for film (using the

same units as for the digital sensor).

[However, on other pages Roger shows that one *needs* to use smaller

window into the film to get comparable resolution to one of the

digital sensors. So it is quite probable that using a different

window size for film *is* justified. However, IMO, comparing density

noise of film with luminance noise of digital is not helpful.]
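The proposed correction factor is simple arithmetic (a quick check; the gamma of 1.5 is the assumed value from above, and is itself in question):

```python
window_sensor = 8.2  # microns, window used for the digital-sensor curves
window_film = 6.3    # microns, window used for the film curve
gamma = 1.5          # assumed film gamma (the questionable input)

# Proposed luminance S/N correction for the film numbers
factor = gamma * (window_sensor / window_film)
print(round(factor, 2))  # 1.95
```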

Again: a lot of thanks for your web pages and your answers to my questions,

Ilya

Anonymous

April 30, 2005 11:28:33 AM

Ilya Zakharevich wrote:

> [A complimentary Cc of this posting was sent to

> Roger N. Clark (change username to rnclark)

> <username@qwest.net>], who wrote in article <4272FA30.8070904@qwest.net>:

>

>

>>>[Well, this estimate assumes that numbers for film noise on Roger Clark

>>>site are relevant. He still did not answer my queries about these

>>>numbers, so I would take the estimage above with a grain of salt.]

>

>

>>Why do you continue to attack me? What is it you have against me?

>

>

> I'm sorry if you consider this as a personal attack. I asked you a

> question, and did not get an anwer (until today, when you did answer).

You constantly take what I say and twist to being negative.

What you do below is another example of this.

> You have absolutely no obligation to answer my emails. However,

> without the answer the numbers on your site do not have an exact

> interpretation. Now, when I know the answer, they did.

>

> A lot of thanks. [And I hope you update your web page so that other

> people get the same advantage as I did. ;-]

>

>

>>My numbers for film noise:

>>http://clarkvision.com/imagedetail/digital.signal.to.no...

>>I measure velvia to have a maximum S/N of 70, and the Canon

>>1D Mark II DSLR 228. Ilya, what is your problem with these values,

>>and do you have any actual data to prove a different result?

>

> I told you what are my problems with these numbers, and from your

> answers I understood that you understand well what are my problems.

> So why this question now?

>

> For the benefits of other people (until Roger writes this on his web

> page): the "film" numbers are for a square window with side 6.3

> microns; the "film" numbers are for density noise, not for luminance

> noise. [This is not the same as for digital sensors.]

Here again, you take what I said and reinterpret it in a negative way.

Given your other statements that you have not yet made the switch

from film, it seems this negative twisting of digital results

is your way of not accepting the advancements of digital.

Here are portions of the private email exchange:

Ilya> And another question: was this noise for density, or for initial

luminance?

Roger> I calibrated the intensities using linear detectors, independently

of the film. This way I calibrated the film's transfer curve.

Example transfer curves are on my web site.

So now you assume that the linear detectors are the film scanner

sensor. How then did I derive the film's transfer curve?

The film's transfer curve is clearly shown on my web site.

No, the linear detector was not the scanner, but a digital camera.

This is described on the above page at:

http://clarkvision.com/imagedetail/dynamicrange2

>

> So the numbers for luminance S/N in a 8.2 microns window into the film

> (one used by other curves in the graph) should be about

> gamma*(8.2/6.3) times higher. Taking gamma for film of 1.5 (is it a

> good value?), one gets 1.95x higher values of S/N of film (using the

> same units as for digital sensor).

Film does not have a single gamma value, so this assumption is

wrong in principle. While it may have close to a constant gamma

value over the mid portion of the characteristic curve,

it still deviates from that value as you move closer to the

ends (shoulder and toe). My measured curves of digital and

film are shown at:

http://clarkvision.com/imagedetail/dynamicrange2

See Figure 8.

If your equation gamma*(8.2/6.3) were applied to Velvia, Gamma~2

in the mid point, we would get a factor of 2.6. My values for

Velvia, at http://clarkvision.com/imagedetail/digital.signal.to.no...

are about S/N = 40 in the mid range (3 stops down from maximum

signal), thus you would say the film should be 40*2.6 ~ 100.

At that same level, the 1D Mark II camera at ISO 100 produces

a S/N ~135, just barely higher than the film. Then at higher

intensities, the film would do better than the 1D mark II.

Well, that is not what is observed in the real world: the

1D Mark II images have much less noise than film. Here is

an example of Velvia (scan = 6 micron pixels) versus a

Canon D60 (6 micron pixels) (the D60 has significantly higher

noise than the 1D II):

http://www.clarkvision.com/imagedetail/film.vs.6mpxl.di...

Note the noise, prominent in the bright areas, of the film,

but the D60 is much smoother.

> However, IMO, comparing density

> noise of film with luminance noise of digital is not helpful.]

But isn't that what one actually sees in the final image that you

view? It is the final image that is important, not how you get there

(film or digital).

Roger

Anonymous

April 30, 2005 1:28:50 PM

In article <4272FA30.8070904@qwest.net>,

Roger N. Clark (change username to rnclark) <username@qwest.net> wrote:

>Ilya Zakharevich wrote:

>

>> [Well, this estimate assumes that numbers for film noise on Roger Clark

>> site are relevant. He still did not answer my queries about these

>> numbers, so I would take the estimage above with a grain of salt.]

>

>Why do you continue to attack me? What is it you have against me?

>I have tried to help you multiple times and you turn around and

>attack me. I have answered dozens of questions, and

>recently answered your questions on film noise, after answering

>previous questions on film noise. I don't understand your agenda.

You inject reality into his make-believe world, by pointing out that

real 4x5 film actually captures far more detail than the pessimistic

estimates he needs it to deliver so that his magic sensor and lens made

out of unobtanium can match it, and he sees that as rude.

Just a guess. ;-)

Anonymous

May 1, 2005 12:01:44 PM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <42738801.7020907@qwest.net>:

> > I'm sorry if you consider this as a personal attack. I asked you a

> > question, and did not get an anwer (until today, when you did answer).

> You constantly take what I say and twist to being negative.

I'm absolutely lost here... What is "negative"? My apologies for not

being able to write in such a way that you do not consider it

personal? Or what?

> > For the benefits of other people (until Roger writes this on his web

> > page): the "film" numbers are for a square window with side 6.3

> > microns; the "film" numbers are for density noise, not for luminance

> > noise. [This is not the same as for digital sensors.]

> Here again, you take what I said and reinterpret it in a negative way.

Roger, maybe it is not entirely my fault that what you wrote is not

interpreted the way you intended it? [I still have no idea what is

"negative" here...]

> Ilya> And another question: was this noise for density, or for initial

> luminance?

>

> Roger> I calibrated the intensities using linear detectors, independently

> of the film. This way I calibrated the film's transfer curve.

> Example transfer curves are on my web site.

> So now you assume that the Linear detectors are the film scanner

> sensor.

So I did; was I wrong? So far, you have not answered... (And I still do

not know what "This way I calibrated the film's transfer curve"

means.)

> How then did I derive the film's transfer curve?

Sorry, I'm lost again: how is this related to the question at hand?

> No, the linear detector was not the scanner, but a digital camera.

> This is described on the above page at:

> http://clarkvision.com/imagedetail/dynamicrange2

And note that I still have no idea whether your numbers for film are

for density, or for original luminance...

> If your equation gamma*(8.2/6.3) were applied to Velvia, Gamma~2

> in the mid point, we would get a factor of 2.6. My values for

> Velvia, at http://clarkvision.com/imagedetail/digital.signal.to.no...

> are about S/N = 40 in the mid range (3 stops down from maximum

> signal)

Are we looking at the same image? What I see on

http://clarkvision.com/imagedetail/digital.signal.to.no...

is the maximum at about 51e3 DN; I assume it is white (100% gray); I

would not go down 3 stops (to 12.5% gray), but to 18% gray, or 9e3

DN. The S/N value I see is 16. At 12.5% gray it is closer to 8.

Where did you get S/N = 40 from?

> > However, IMO, comparing density

> > noise of film with luminance noise of digital is not helpful.]

> But isn't that what one actually sees in the final image that you

> view? It is the final image that is important, not how you get there

> (film or digital).

Let me do it slowly: you want to say that you want to compare a digital

image at the correct gamma with a film image scanned at the wrong gamma?

I'm completely lost again at what kind of digital workflow you

consider for your slide scans...

I hope there is some confusion and we are talking about something

different...

Thanks,

Ilya

Anonymous

May 1, 2005 6:05:52 PM

Ilya Zakharevich wrote:

> [A complimentary Cc of this posting was sent to

> Roger N. Clark (change username to rnclark)

> <username@qwest.net>], who wrote in article <42738801.7020907@qwest.net>:

>

>

>>>I'm sorry if you consider this as a personal attack. I asked you a

>>>question, and did not get an anwer (until today, when you did answer).

>

>>You constantly take what I say and twist to being negative.

>

> I'm absolutely lost here... What is "negative"? My apology for not

> being able to write in such a way that you do not consider it as

> personal? Or what?

Ilya,

This newsgroup is commonly disrupted by trolls. One thing they

do is personal attacks, telling people they are idiots, stupid,

etc. Often this occurs in film versus digital wars. Then

others tune out, or tell others to stop feeding the troll.

I usually tune out, but I have been in long battles with

at least one troll. Some of your responses have been troll-like,

but I sincerely do not believe you are a troll. I believe

you have a different point of view which I think I can learn

from, and I hope vice versa. The internet can be quite

impersonal, and people sometimes say things they would never

say face to face. Here are some examples from your

emails (in this newsgroup and personal emails to me):

"Yes, I suspected that you would answer something like this; until some

numeric "pseudo-scientific" explanation is available..."

"What planet are you on?"

"Where did you learn your excellent arguing technique? What I blame

you is that your data is almost an order of magnitude off."

And in the current exchange:

"Well, this estimate assumes that numbers for film noise on Roger Clark

site are relevant. He still did not answer my queries about these

numbers, so I would take the estimage above with a grain of salt."

In this last exchange you use the fact that I haven't answered an email

to imply the results are wrong. I do have a life beyond this newsgroup.

Besides a heavy workload, I did take a 2+ week vacation to

Australia and New Zealand, and did not answer any emails while

I was away (that was great-no computers!). Another way to reword

to something like the last statement above would be:

The numbers for the S/N of Velvia on Roger Clark's web site imply

xyz, but does anyone know of similar data that agrees with this?

I am unclear if Roger's numbers refer to density or original scene

intensity.

See the difference?

> Roger, maybe it is not entirely my fault that what you wrote is not

> interpreted the way you intended it?

I agree with this! The problem with my web site is that I do not have enough

time to really do it right. But I try to take constructive

criticisms and comments and improve the web pages. Many people in

this newsgroup have asked questions, and a few have pointed out mistakes

(fortunately minor ones so far).

That is why, instead of responding to tear down, you should respond with

a more detailed question or, like in this thread, say "I don't

understand this." Keep the discussion on the technical issues at hand

and not on the personal side. Everyone makes mistakes, and no one is

perfect. By keeping a civil discussion everyone can learn.

>>Ilya> And another question: was this noise for density, or for initial

>>luminance?

>>

>>Roger> I calibrated the intensities using linear detectors, independently

>>of the film. This way I calibrated the film's transfer curve.

>>Example transfer curves are on my web site.

>

>>So now you assume that the Linear detectors are the film scanner

>>sensor.

>

> So I did; was I wrong? So far, you did not answer... (And I still do

> not know what "This way I calibrated the film's transfer curve"

> means.)

I did answer:

"No, the linear detector was not the scanner, but a digital camera."

which you quote below.

> Sorry, I'm lost again: how this is related to the question at hand?

This is a good response. It is not personal and it puts the

burden on me to clarify what I have said. I'll do so below.

>>No, the linear detector was not the scanner, but a digital camera.

>>This is described on the above page at:

>>http://clarkvision.com/imagedetail/dynamicrange2

>

> And note that I still have no idea whether your numbers for film are

> for density, or for original luminance...

This page:

http://clarkvision.com/imagedetail/dynamicrange2

Figure 8:

http://clarkvision.com/imagedetail/dynamicrange2/dynami...

Shows the curve shape for Velvia. This is a log-log

plot and the slope of the curve is the gamma. So if film had a

constant gamma, the transfer curve would be a straight line.

If you did a google groups search, you would see a few months ago

when I was trying to come up with a mathematical relationship

for these curves. Several made suggestions, but none were good enough

in my opinion. I even tried a many-term polynomial. In the end

I used piecewise local functions over narrow regions and did calculations

by hand to convert film scan data to linear original scene

intensity. Notice the horizontal axis on this plot is labeled

"Scene Intensity." This is the same scale as the plot you

refer to below:

http://clarkvision.com/imagedetail/digital.signal.to.no...

that is labeled "Linear Intensity." Both are the original scene

intensity in linear units. This is well described on the

http://clarkvision.com/imagedetail/dynamicrange2

(in fact people in this newsgroup had a lot of input improving that

very discussion.) Using the transfer curve from density to

original scene intensity, I converted the film scan values to

original scene intensity. I did this only for a few points on the

plot because it is so laborious.

>>If your equation gamma*(8.2/6.3) were applied to Velvia, Gamma~2

>>in the mid point, we would get a factor of 2.6. My values for

>>Velvia, at http://clarkvision.com/imagedetail/digital.signal.to.no...

>>are about S/N = 40 in the mid range (3 stops down from maximum

>>signal)

>

> Are we looking at the same image? What I see on

>

> http://clarkvision.com/imagedetail/digital.signal.to.no...

>

> is the maximum at about 51e3 DN; I assume it is white (100% gray); I

> would not go down 3 stops (to 12.5% gray), but to 18% gray, or 9e3

> DN. The S/N value I see is 16. At 12.5% gray it is closer to 8.

>

> Where did you take S/N = 40 from?

The maximum signal is ~65500 (the film was exposed so that the brightest

white paper was as close to the limit as I could get).

18% of 65500 ~ 11800. Read that on the horizontal axis on

the digital-s-to-n.v1.gif plot. The Velvia line is between S/N 20 and 24

on the vertical axis.
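A quick check of the two reference levels being compared (65500 is the full-scale value quoted above; 18% gray is one reference point, 3 stops down from maximum is the other):

```python
full_scale = 65500                    # maximum recorded signal in the test
mid_gray = 0.18 * full_scale          # 18% gray reference level
three_stops_down = full_scale / 2**3  # 3 stops below maximum (12.5%)

print(round(mid_gray), three_stops_down)  # 11790 8187.5
```

The two levels differ by about half a stop, which accounts for part of the gap between the S/N values each side reads off the plot.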

>>>However, IMO, comparing density

>>>noise of film with luminance noise of digital is not helpful.]

>

>>But isn't that what one actually sees in the final image that you

>>view? It is the final image that is important, not how you get there

>>(film or digital).

>

> Let me do it slowly: you want to say that you want to compare digital

> image at a correct gamma with a film image scanned at wrong gamma?

> I'm completely lost again at what kind of digital workflow you

> consider for your slide scans...

I hope it is clear now that the original scene intensity was used and

that film density was converted correctly. But I personally am

not convinced this is the correct way to do things, as we do not view

images converted to original scene intensity. We view them after this

complex function is applied to the original scene intensity. This

gets down to how people perceive images and noise in images.

But that is a future research topic, and much more subjective

than the mathematical linear scene intensity way.

Does this clear things up enough on the topic?

Roger

Anonymous

May 1, 2005 6:05:53 PM

Top-posted since this is the only comment...

Roger,

I commend you on your patience and civility.

Far beyond what I could muster were I in your place (in this discussion).

Well done.

-Mark

"Roger N. Clark (change username to rnclark)" <username@qwest.net> wrote in

message news:427536A0.7080308@qwest.net...

> Ilya Zakharevich wrote:

>> [A complimentary Cc of this posting was sent to

>> Roger N. Clark (change username to rnclark)

>> <username@qwest.net>], who wrote in article <42738801.7020907@qwest.net>:

>>

>>

>>>>I'm sorry if you consider this as a personal attack. I asked you a

>>>>question, and did not get an answer (until today, when you did answer).

>>

>>>You constantly take what I say and twist to being negative.

>>

>> I'm absolutely lost here... What is "negative"? My apology for not

>> being able to write in such a way that you do not consider it as

>> personal? Or what?

>

> Ilya,

> This newsgroup is commonly disrupted by trolls. One thing they

> do is personal attacks, telling people they are idiots, stupid,

> etc. Often this occurs in film versus digital wars. Then

> others tune out, or tell others to stop feeding the troll.

> I usually tune out, but I have been in long battles with

> at least one troll. Some of your responses have been troll like,

> but I sincerely do not believe you are a troll. I believe

> you have a different point of view which I think I can learn

> from, and I hope vice versa. The internet can be quite

> impersonal, and people sometimes say things they would never

> say face to face. Here are some examples from your

> emails (in this newsgroup and personal emails to me):

>

> "Yes, I suspected that you would answer something like this; until some

> numeric "pseudo-scientific" explanation is available..."

>

> "What planet are you on?"

>

> "Where did you learn your excellent arguing technique? What I blame

> you is that your data is almost an order of magnitude off."

>

> And in the current exchange:

> "Well, this estimate assumes that numbers for film noise on Roger Clark

> site are relevant. He still did not answer my queries about these

> numbers, so I would take the estimate above with a grain of salt."

>

> In this last exchange you use the fact that I haven't answered an email

> to imply the results are wrong. I do have a life beyond this newsgroup.

> Besides a heavy workload, I did take a 2+ week vacation to

> Australia and New Zealand, and did not answer any emails while

> I was away (that was great-no computers!). Another way to reword

> to something like the last statement above would be:

>

> The numbers for the S/N of Velvia on Roger Clark's web site imply

> xyz, but does anyone know of similar data that agrees with this?

> I am unclear if Roger's numbers refer to density or original scene

> intensity.

>

> See the difference?

>

>> Roger, maybe it is not entirely my fault that what you wrote is not

>> interpreted the way you intended it?

>

> I agree with this! The problem with my web site is that I do not have enough

> time to really do it right. But I try to take constructive

> criticisms and comments and improve the web pages. Many people in

> this newsgroup have asked questions, and a few have pointed out mistakes

> (fortunately minor ones so far).

>

> That is why, instead of responding to tear down, you should respond with

> a more detailed question, or, as in this thread, say "I don't

> understand this." Keep the discussion on the technical issues at hand

> and not on the personal side. Everyone makes mistakes, and no one is

> perfect. By keeping a civil discussion everyone can learn.

>

>>>Ilya> And another question: was this noise for density, or for initial

>>>luminance?

>>>

>>>Roger> I calibrated the intensities using linear detectors, independently

>>>of the film. This way I calibrated the film's transfer curve.

>>>Example transfer curves are on my web site.

>>

>>>So now you assume that the Linear detectors are the film scanner

>>>sensor.

>>

>> So I did; was I wrong? So far, you did not answer... (And I still do

>> not know what "This way I calibrated the film's transfer curve"

>> means.)

>

> I did answer:

> "No, the linear detector was not the scanner, but a digital camera."

> which you quote below.

>

>

>> Sorry, I'm lost again: how this is related to the question at hand?

>

> This is a good response. It is not personal and it puts the

> burden on me to clarify what I have said. I'll do so below.

>

>>>No, the linear detector was not the scanner, but a digital camera.

>>>This is described on the above page at:

>>>http://clarkvision.com/imagedetail/dynamicrange2

>>

>> And note that I still have no idea whether your numbers for film are

>> for density, or for original luminance...

>

> This page:

> http://clarkvision.com/imagedetail/dynamicrange2

> Figure 8:

> http://clarkvision.com/imagedetail/dynamicrange2/dynami...

>

> Shows the curve shape for Velvia. This is a log-log

> plot and the slope of the curve is the gamma. So if film had a

> constant gamma, the transfer curve would be a straight line.

> If you did a google groups search, you would see a few months ago

> when I was trying to come up with a mathematical relationship

> for these curves. Several made suggestions, but none were good enough

> in my opinion. I even tried a many term polynomial. In the end

> I used piecewise local functions over narrow regions and did calculations

> by hand to convert film scan data to linear original scene

> intensity. Notice the horizontal axis on this plot is labeled

> "Scene Intensity." This is the same scale as the plot you

> refer to below:

> http://clarkvision.com/imagedetail/digital.signal.to.no...

> that is labeled "Linear Intensity." Both are the original scene

> intensity in linear units. This is well described on the

> http://clarkvision.com/imagedetail/dynamicrange2

> (in fact people in this newsgroup had a lot of input improving that

> very discussion.) Using the transfer curve from density to

> original scene intensity, I converted the film scan values to

> original scene intensity. I did this only for a few points on the

> plot because it is so laborious.

>

>

>>>If your equation gamma*(8.2/6.3) were applied to Velvia, Gamma~2

>>>in the mid point, we would get a factor of 2.6. My values for

>>>Velvia, at http://clarkvision.com/imagedetail/digital.signal.to.no...

>>>are about S/N = 40 in the mid range (3 stops down from maximum

>>>signal)

>>

>> Are we looking at the same image? What I see on

>>

>>

>> http://clarkvision.com/imagedetail/digital.signal.to.no...

>>

>> is the maximum at about 51e3 DN; I assume it is white (100% gray); I

>> would not go down 3 stops (to 12.5% gray), but to 18% gray, or 9e3

>> DN. The S/N value I see is 16. At 12.5% gray it is closer to 8.

>>

>> Where did you take S/N = 40 from?

>

> The maximum signal is ~65500 (the film was exposed so that the brightest

> white paper was as close to the limit as I could get).

> 18% of 65500 ~ 11800. Read that on the horizontal axis on

> the digital-s-to-n.v1.gif plot. The Velvia line is between S/N 20 and 24

> on the vertical axis.

>

>>>>However, IMO, comparing density

>>>>noise of film with luminance noise of digital is not helpful.]

>>

>>>But isn't that what one actually sees in the final image that you

>>>view? It is the final image that is important, not how you get there

>>>(film or digital).

>>

>> Let me do it slowly: you want to say that you want to compare digital

>> image at a correct gamma with a film image scanned at wrong gamma?

>> I'm completely lost again at what kind of digital workflow you

>> consider for your slide scans...

>

> I hope it is clear now that the original scene intensity was used and

> that film density was converted correctly. But I personally am

> not convinced this is the correct way to do things, as we do not view

> images converted to original scene intensity. We view them after this

> complex function is applied to the original scene intensity. This

> gets down to how people perceive images and noise in images.

> But that is a future research topic, and much more subjective

> than the mathematical linear scene intensity way.

>

> Does this clear things up enough on the topic?

>

> Roger
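The piecewise conversion Roger describes above, from scanned film density back to linear scene intensity via local segments of the transfer curve, can be sketched in a few lines. The calibration points below are purely illustrative stand-ins, not Roger's actual Velvia data, and `scan_to_linear` is a hypothetical helper name:

```python
import numpy as np

# Hypothetical transfer-curve calibration points: log10 of relative
# scene intensity vs. log10 of the corresponding scan value.
# Illustrative numbers only -- NOT Roger's measured data.
log_scene = np.array([-3.0, -2.0, -1.0, -0.5, 0.0])
log_scan = np.array([0.5, 1.5, 3.2, 4.1, 4.8])

def scan_to_linear(scan_dn):
    """Convert a film-scan value back to linear scene intensity by
    piecewise-linear interpolation in log-log space; the local slope
    of each segment plays the role of the local gamma."""
    return 10.0 ** np.interp(np.log10(scan_dn), log_scan, log_scene)
```

With the sample points above, a scan value of 10^1.5 maps back to a relative scene intensity of 0.01; a real calibration would use many measured points per film.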

Anonymous

May 2, 2005 2:18:04 AM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <4272FA30.8070904@qwest.net>:

> Is this another theoretical result?

Do you have anything against theoretical results (as far as they are

correct ;-)?

The question I discussed was very easy for theoretical analysis (since

performance of a lens at f/45 should not depend on lens design). Given

the data on MTF and the spectral curve for the film noise, one can make

very good estimates of equivalent digital (read: low noise) sensor.

[Of course, this assumes that film granularity can be modelled by

Gaussian noise with certain spectral density; and as I read the

data on your website, this is a very good approximation.]

All that I can add to this praise of "theoretical analysis" is that

the one *I* did was a clear goof! ;-) :-(

Sorry for this. The only excuse is the time I posted on; however,

usually I perform better at this time of "day"...

So what I said was: with f/45 on 4x5in one gets equivalent resolution

close to 8MP. Obviously, this is not so even with 6x6cm: the pixel

size of 8MP 6x6cm sensor is 21 microns; the sensor can take frequencies

up to 24 lp/mm. At these frequencies the MTF of a good film is close to 100%

(Velvia 50), so only MTF of the lens matters: the digital sensor

has MTF of 2/pi at the Nyquist frequency, so it will behave much

worse than film. At f/45 the cutoff frequency is close to 40 lp/mm.

So the MTF of the lens is close to 28%.

Assuming that the density noise of film is 0.9% in a 48-micron-diameter circle

(Velvia 50 again), and taking gamma=2, the equivalent luminosity noise

is 0.45% in this circle. Rescaling (in approximation of white noise; it

should work well, according to your results for a small window) to a 21

micron square, one gets a luminosity noise of 0.9% for the "pixel" on film.

What information is lost if the Nyquist cutoff happens where MTF is about

28% on media with noise 0.9%? The common wisdom is that the "features

which matter" have contrast close to 1:1.5; this corresponds to 20%

modulation (80:120 = 1:1.5). With 28% MTF you get 5.6% modulation in

the image. This gives an S/N of more than 6. So the "critical" features

at the cutoff frequency are very much above the noise level.

Conclusion: even with 6x6cm, a *lot* of information is lost if one uses

8MP sensor. What is a "reasonable minimum"? Take S/N ratio of

"important" features 3; this leads to MTF of 14%; this happens close to

29 lp/mm; so 12MP sensor is the "practical minimum" for 6x6cm image at

f/45.

The "practical maximum" is the Nyquist cutoff at the frequency where

S/N for 100% modulation becomes 1. (100% modulation is quite often in

practice: consider tree branches on the background of sky.) This is

MTF of about 1%; with Velvia 50 and f/45 it is achieved at 38 lp/mm.

Since 38 is very close to the "actual lens cutoff" frequency 40 lp/mm,

one can use the latter instead (and lose NO information contained in

the image provided by the lens; even the information which may be

restored only by a priori knowledge via "spread spectrum" techniques).

These result in 21MP and 23MP respectively.

The counts for 4x5in are proportionally larger; 42MP, 75MP, and 83MP.

Of course, this assumes that demosaicer can extract information up to

Nyquist frequency, which is manifestly wrong. So with Bayer filter

sensors, one needs to increase this by (at least!) 25%.

On the other hand, gamma=2 is not what the Fuji curve says: it is

closer to 1.7 at density=1.0. So the noise is slightly larger, and

the megapixel counts should be slightly lower.
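Ilya's arithmetic above can be reproduced with a short script. This is a sketch using the standard diffraction-limited lens MTF formula, taking the ~40 lp/mm f/45 cutoff from the post as given; both function names are mine:

```python
import math

def diffraction_mtf(nu, nu_cutoff):
    """MTF of an ideal (diffraction-limited) lens at spatial
    frequency nu, given its cutoff frequency (both in lp/mm)."""
    x = nu / nu_cutoff
    if x >= 1.0:
        return 0.0
    return (2.0 / math.pi) * (math.acos(x) - x * math.sqrt(1.0 - x * x))

def megapixels(nyquist_lp_mm, width_mm, height_mm):
    """Megapixel count of a sensor whose pixel pitch puts the
    Nyquist frequency at nyquist_lp_mm."""
    pitch_mm = 1.0 / (2.0 * nyquist_lp_mm)
    return (width_mm / pitch_mm) * (height_mm / pitch_mm) / 1e6

print(round(diffraction_mtf(24, 40), 2))  # lens MTF at the 8MP/6x6cm Nyquist -> 0.28
print(round(megapixels(24, 60, 60), 1))   # 6x6 cm at 24 lp/mm -> 8.3 MP
print(round(megapixels(29, 60, 60), 1))   # "practical minimum" -> 12.1 MP
print(round(megapixels(38, 101.6, 127)))  # 4x5 in at 38 lp/mm -> 75 MP
```

The printed values match the 28%, 8MP, 12MP, and 75MP figures in the post.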

Anonymous

May 2, 2005 2:18:05 AM

Ilya Zakharevich wrote:

> The "practical maximum" is the Nyquist cutoff at the frequency where

> S/N for 100% modulation becomes 1.

>

> The counts for 4x5in are proportionally larger; 42MP, 75MP, and 83MP.

> Of course, this assumes that demosaicer can extract information up to

> Nyquist frequency, which is manifestly wrong. So with Bayer filter

> sensors, one needs to increase this by (at least!) 25%.

Hey, we've made some progress. You were at 8 mpixel; now,

if I understand correctly, 83 mp, and larger with Bayer filter

sensors. This is getting into the ballpark of what

comparisons of real images are showing.

Another factor is sampling. You are citing the Nyquist as

some maximum cutoff. Nyquist sampling applies to sampling

that is in phase with the information being sampled.

Image detail is not all lined up with scanner pixels or digital

camera pixels. Thus Nyquist sampling does NOT apply, and one

actually must sample higher than Nyquist sampling in order

to record the detail. Here is a page that illustrates this:

http://www.clarkvision.com/imagedetail/sampling1.html

Roger

Anonymous

May 2, 2005 2:30:20 AM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <427536A0.7080308@qwest.net>:

> "Well, this estimate assumes that numbers for film noise on Roger Clark

> site are relevant. He still did not answer my queries about these

> numbers, so I would take the estimate above with a grain of salt."

> Another way to reword

> to something like the last statement above would be:

>

> The numbers for the S/N of Velvia on Roger Clark's web site imply

> xyz, but does anyone know of similar data that agrees with this?

> I am unclear if Roger's numbers refer to density or original scene

> intensity.

> See the difference?

Let me tell you what *I* see: I use your numbers in my calculations, I

have no way to know the "units" used for your numbers, so I write what

I think about reliability of *my calculations*.

Compare this what you write; just google for what you wrote about me!

I do not expect any apology from people who fiercely attack my

estimates and show little clue about physics; but apparently, you

*have* a clue. Hint hint...

Yours,

Ilya

Anonymous

May 2, 2005 2:59:09 AM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <427536A0.7080308@qwest.net>:

> > So I did; was I wrong? So far, you did not answer... (And I still do

> > not know what "This way I calibrated the film's transfer curve"

> > means.)

> I did answer:

> "No, the linear detector was not the scanner, but a digital camera."

> which you quote below.

This is correct English. However, it is not an answer to my simple

question (density noise vs luminance noise). All I can do is guess

what you did mean here, since my questions are about film only; no

digital camera in the context...

> intensity. Notice the horizontal axis on this plot is labeled

> "Scene Intensity." This is the same scale as the plot you

> refer to below:

> http://clarkvision.com/imagedetail/digital.signal.to.no...

> that is labeled "Linear Intensity." Both are the original scene

> intensity in linear units. This is well described on the

> http://clarkvision.com/imagedetail/dynamicrange2

This is all very well. However, my question is about S/N, so it is

not about horizontal axis...

> > Are we looking at the same image? What I see on

> >

> > http://clarkvision.com/imagedetail/digital.signal.to.no...

> >

> > is the maximum at about 51e3 DN; I assume it is white (100% gray); I

> > would not go down 3 stops (to 12.5% gray), but to 18% gray, or 9e3

> > DN. The S/N value I see is 16. At 12.5% gray it is closer to 8.

> >

> > Where did you take S/N = 40 from?

>

> The maximum signal is ~65500 (the film was exposed so that the brightest

> white paper was as close to the limit as I could get).

???? again. What I see is the right edge of the graph is at 60K. The

rightmost point on the graph for film is at 51K.

> 18% of 65500 ~ 11800. Read that on the horizontal axis on

> the digital-s-to-n.v1.gif plot. The Velvia line is between S/N 20 and 24

> on the vertical axis.

18% of 51K is about 9K; and I see a circle over 9K; it is below 16.

> > Let me do it slowly: you want to say that you want to compare digital

> > image at a correct gamma with a film image scanned at wrong gamma?

> > I'm completely lost again at what kind of digital workflow you

> > consider for your slide scans...

>

> I hope it is clear now that the original scene intensity was used and

> that film density was converted correctly.

But my question was not about this... You again discuss horizontal

axis, while my question is about vertical one.

Just trying to make things a little bit more clear: this is how I

would do it (if I had equipment, time and patience); this is not to

show any "methodology". Just to show *what* I want to be measured.

Consider one point on the graph: 18% gray. Take "correct" exposure on

film. Develop as normal. Scan area. (Maybe one needs an extra step:

calibrate gamma of the scanner so that you know it emits the density,

and not something else.) Now calculate the standard deviation of the

numbers.

What you got is the density noise. Multiply it by gamma (1.7 at

density 1, if I read the graph on the Fuji sheet correct). What you

got is the original luminosity noise.

And my question was: which one of these two numbers is graphed on

your web page?

No "meticulous calculation" is needed; no need to approximate the

density vs exposure curve by polynomials...

Actually, one can do better: calculate the square of Fourier transform

of the area "deep in" 18% gray patch. Average results over wide

enough bins. What you got is the spectral curve of the density noise

of correctly exposed 18% gray patch. This may be very useful in finer

comparisons of different situations (e.g., predict results of scanning

at different resolutions, compare with published Fuji number 0.9%,

etc).

> But I personally am not convinced this is the correct way to do things,

> as we do not view images converted to original scene intensity. We

> view them after this complex function is applied to the original

> scene intensity. This gets down to how people perceive images and

> noise in images. But that is a future research topic, and much more

> subjective than the mathematical linear scene intensity way.

How I see it: you have two collections of numbers: one from digital

sensor, another from scanning the slide film. To convert them to an

image, you need to decide the "throughput gamma" of digitizing +

postprocessing + printer-or-monitor + ambient-light.

I can believe that some situation may require the "throughput gamma"

to be not equal to 1. However: do you want to compare images with

*different* throughput gamma when you compare film+scanner vs digital

camera? If so, why?

> Does this clear things up enough on the topic?

Sorry, I cannot say so...

Yours,

Ilya
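The measurement Ilya sketches above — standard deviation over a scanned uniform patch, plus a binned power spectrum of the fluctuations — might look like this in code. The function name and the synthetic test patch are mine, purely for illustration (real film granularity is not exactly Gaussian):

```python
import numpy as np

def density_noise_stats(patch):
    """patch: 2-D array of scanner density readings from a uniformly
    exposed gray area.  Returns (rms density noise, radially averaged
    noise power spectrum of the fluctuations)."""
    dev = patch - patch.mean()
    rms = dev.std()
    power = np.abs(np.fft.fftshift(np.fft.fft2(dev))) ** 2
    # Radial average: bin the 2-D power by integer distance from center.
    cy, cx = patch.shape[0] // 2, patch.shape[1] // 2
    y, x = np.indices(patch.shape)
    r = np.hypot(y - cy, x - cx).astype(int).ravel()
    sums = np.bincount(r, weights=power.ravel())
    counts = np.bincount(r)
    return rms, sums / np.maximum(counts, 1)

# Synthetic 18% gray patch with 0.9% relative Gaussian "granularity"
# (illustrative numbers only, not measured film data).
rng = np.random.default_rng(0)
patch = 0.18 + rng.normal(scale=0.009 * 0.18, size=(256, 256))
rms, spectrum = density_noise_stats(patch)
```

The recovered `rms` comes back very close to the injected noise level, and the spectral curve is what one would compare across scan resolutions or against a published granularity figure.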

Anonymous

May 2, 2005 2:59:10 AM

Ilya Zakharevich wrote:

> [A complimentary Cc of this posting was sent to

> Roger N. Clark (change username to rnclark)

> <username@qwest.net>], who wrote in article <427536A0.7080308@qwest.net>:

>

>>>So I did; was I wrong? So far, you did not answer... (And I still do

>>>not know what "This way I calibrated the film's transfer curve"

>>>means.)

>

>

>>I did answer:

>>"No, the linear detector was not the scanner, but a digital camera."

>>which you quote below.

>

>

> This is correct English. However, it is not an answer to my simple

> question (density noise vs luminance noise). All I can do is guess

> what you did mean here, since my questions are about film only; no

> digital camera in the context...

The CMOS sensor of a digital camera is linear. One can use the

output of a digital camera to calibrate scene intensity

to linear units. That is what I did.

.......

> This is all very well. However, my question is about S/N, so it is

> not about horizontal axis...

S/N is a ratio. It is dimensionless. No units.

> ???? again. What I see is the right edge of the graph is at 60K. The

> rightmost point on the graph for film is at 51K.

I also cut off the top of the graph. I wanted to show what

happens at the low end because that might be where deviations from

Poisson statistics show. But I told you the max signal

is ~65500. Use that value.

>

>

>>18% of 65500 ~ 11800. Read that on the horizontal axis on

>>the digital-s-to-n.v1.gif plot. The Velvia line is between S/N 20 and 24

>>on the vertical axis.

>

> 18% of 51K is about 9K; and I see a circle over 9K; it is below 16.

No. The maximum is 65500. The 18% gray is .18*65500 ~ 11,800.

But I see your confusion. I did not make the plot for the

purpose you are using it for, so I will add info to the page

to clear that up.

> But my question was not about this... You again discuss horizontal

> axis, while my question is about vertical one.

You asked if linear scene intensity, or density was used.

The implication of a linear horizontal axis implies the data

were corrected to linear. To put it extremely obviously,

the density data from the scanner were corrected to a linear

scale.

> What you got is the density noise. Multiply it by gamma (1.7 at

> density 1, if I read the graph on the Fuji sheet correct). What you

> got is the original luminosity noise.

> And my question was: which one of these two numbers is graphed on

> your web page?

original luminosity noise. that is why the horizontal axis

is linear, not density.

> No "meticulous calculation" is needed; no need to approximate the

> density vs exposure curve by polynomials...

But the gamma for the film constantly changes with scene intensity.

But one could approximate it as one gamma value over a small

intensity range.

> How I see it: you have two collections of numbers: one from digital

> sensor, another from scanning the slide film. To convert them to an

> image, you need to decide the "throughput gamma" of digitizing +

> postprocessing + printer-or-monitor + ambient-light.

No, you are making it much more complex. The digital camera

data is converted to 16 bit tif linearly. That data provides

a precise "light meter" measurement of each pixel in the scene.

That data can then be used to calibrate the film's response.

read: http://www.clarkvision.com/imagedetail/dynamicrange2

Roger

Anonymous

May 2, 2005 4:22:45 AM

I wrote in article <d53mvt$1er9$1@agate.berkeley.edu>:

> Just trying to make things a little bit more clear: this is how I

> would do it (if I had equipement, time and patience); this is not to

> show any "methodology". Just to show *what* I want to be measured.

>

> Consider one point on the graph: 18% gray. Take "correct" exposure on

> film. Develop as normal. Scan area. (Maybe one needs an extra step:

> calibrate gamma of the scanner so that you know it emits the density,

> and not something else.) Now calculate the standard deviation of the

> numbers.

>

> What you got is the density noise. Multiply it by gamma (1.7 at

> density 1, if I read the graph on the Fuji sheet correct). What you

> got is the original luminosity noise.

Drat! Of course it should be more like "divide by gamma" (if we

assume gamma=2 or gamma=1.7 for Velvia at density 1; "fortunately", in

some notations it is gamma=0.5 or gamma=0.6 ;-) ;-().

Yours,

Ilya
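The corrected relation can be written out, with gamma defined (as is standard) as the local slope of the film's characteristic curve:

```latex
% local gamma = slope of the characteristic curve D(\log_{10} E)
\gamma = \frac{dD}{d\log_{10} E}
% so a small density fluctuation \sigma_D maps back to exposure as
\sigma_{\log_{10} E} = \frac{\sigma_D}{\gamma}
% i.e., noise in original-luminosity terms = density noise / gamma;
% e.g. \sigma_D = 0.009 with \gamma = 2 gives 0.0045 (the 0.45% figure).
```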

Anonymous

May 2, 2005 2:09:51 PM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <4275A503.6090309@qwest.net>:

> >>I did answer:

> >>"No, the linear detector was not the scanner, but a digital camera."

> >>which you quote below.

> > This is correct English. However, it is not an answer to my simple

> > question (density noise vs luminance noise). All I can do is guess

> > what you did mean here, since my questions are about film only; no

> > digital camera in the context...

> The CMOS sensor of a digital camera is linear. One can use the

> output of a digital camera to calibrate scene intensity

> to linear units. That is what I did.

This is clear now. Thanks. You were talking about horizontal axis.

> > This is all very well. However, my question is about S/N, so it is

> > not about horizontal axis...

> S/N is a ratio. It is dimensionless. No units.

Of course it has units; but the units are more exotic than with time

or wavelength. "Units" appear when you need to compare two numbers

which may or may not relate to different conventions in measuring

them. And there may be different conventions even for dimensionless

values.

A similar example: consider *ratio* of sizes of two squares; it is

dimensionless: so assume one square is 2x as large as another. Until

we reach a convention on *how* we compare sizes, the preceding sentence

has no meaning: is it the linear size or the area?

Comparing S/N of non-linear systems has the same problem: you change

gamma of the image, and S/N changes correspondingly. Looking on your

graphs, it is easy to assume that S/N value is for "signal in linear

intensity" over "noise in linear intensity" (though it is not

explicitly stated).

However, it is absolutely not clear what the meaning of S/N for

scanned film is: is it for "signal measured as density" over "noise in

density", or "signal measured as linear intensity" over "noise in

calculated linear intensity". These two values differ by the value of

"local" gamma to recalculate from density to intensity.

*This* was my original question. Maybe you misunderstood it, since

apparently all your answers are about horizontal axis, while the

question was about the vertical one?

> > ???? again. What I see is the right edge of the graph is at 60K. The

> > rightmost point on the graph for film is at 51K.

> I also cut off the top of the graph. I wanted to show what

> happens at the low end because that might be where deviations from

> Poisson statistics show. But I told you the max signal

> is ~65500. Use that value.

I see; so the rightmost point of the graph is NOT 100% white! (This

is a very confusing graph indeed...)

> > 18% of 51K is about 9K; and I see a circle over 9K; it is below 16.

> No. The maximum is 65500. The 18% gray is .18*65500 ~ 11,800.

> But I see your confusion. I did not make the plot for the

> purpose you are using it for, so I will add info to the page

> to clear that up.

A lot of thanks. 18% of 65K is 12K, and the graph intersects the

S/N=20 line over this point. You agree?

BTW, a couple of times during the last two days you

mentioned something like "but the caption on the graph is BLAH". I

rechecked today, and it is not as you say at least in one of the cases:

a) The first graph on

http://clarkvision.com/imagedetail/digital.signal.to.no...

is captioned "correct" (I mean "theoretical curve assumes...")

modulo "1electron=1photon", but I'm afraid that the fact that I

can correctly interpret it now is due to my long discussions with

you in this respect, not to clarity of the caption. IMO, it is

still confusing. If you add corresponding discussion, maybe it

makes sense to add a cross-link?

b) Table 3 is still marked as "max DN at iso 100", although the table

4 is corrected for S60 being at iso 50.

> > What you got is the density noise. Multiply it by gamma (1.7 at

> > density 1, if I read the graph on the Fuji sheet correct). What you

> > got is the original luminosity noise.

> > And my question was: which one of these two numbers is graphed on

> > your web page?

> original luminosity noise. that is why the horizontal axis

> is linear, not density.

The question is not about the horizontal axis. So this answer

("original luminosity noise") is clear, but the second sentence taints

it again; it leads to possibility of confusion. So: should I just

trust the first sentence, and disregard the second one? Given other

information you provided, I will just do this...

> But the gamma for the film constantly changes with scene intensity.

> But one could approximate it as one gamma value over a small

> intensity range.

In my example I used one value for brightness: 18% gray; so I needed

one value for "local gamma". Of course, on your graph, with 7 points,

you need more work: 7 values of "local gamma".

> No, you are making it much more complex. The digital camera

> data is converted to 16 bit tif linearly. That data provides

> a precise "light meter" measurement of each pixel in the scene.

> That data can then be used to calibrate the film's response.

> read: http://www.clarkvision.com/imagedetail/dynamicrange2

Sorry, I still have no idea what the data on this page means. For

simplicity, assume that we are discussing only Figure 8a. What is the

vertical axis for the 4 graphs on this figure? Only one meaning (for

Canon) is (somewhat) documented; I could not find docs for others.

Now to this "somewhat": go back to figure 6. Here I'm completely

confused; apparently, there are statements that both the horizontal

axis and vertical axis depend linearly w.r.t. "spot metering"; but the

dependence between the axes is non-linear... So I'm completely lost

which axis means what...

Without clear understanding of these fundamental issues, I cannot

reliably discuss anything else on this page.

Thanks,

Ilya

Anonymous

May 2, 2005 2:09:52 PM

Ilya Zakharevich wrote:

>>S/N is a ratio. It is dimensionless. No units.

>

> Of course it has units; but the units are more exotic than with time

> or wavelength. "Units" appear when you need to compare two numbers

> which may or may not relate to different conventions in measuring

> them. And the may be different conventions even for dimensionless

> values.

Example:

Signal: photons/second

Noise: photons/second

S/N = dimensionless

> However, it is absolutely not clear what the meaning of S/N for

> scanned film is: is it for "signal measured as density" over "noise in

> density", or "signal measured as linear intensity" over "noise in

> calculated linear intensity". These two values differ by the value of

> "local" gamma to recalculate from density to intensity.

I've answered this question multiple times.

IT IS LINEAR. IT IS NOT DENSITY!

> The question is not about the horizontal axis. So this answer

> ("original luminosity noise") is clear, but the second sentence taints

> it again; it leads to possibility of confusion. So: should I just

> trust the first sentence, and disregard the second one? Given other

> information you provided, I will just do this...

What sentences are you talking about?

>>No, you are making it much more complex. The digital camera

>>data is converted to 16 bit tif linearly. That data provides

>>a precise "light meter" measurement of each pixel in the scene.

>>That data can then be used to calibrate the film's response.

>>read: http://www.clarkvision.com/imagedetail/dynamicrange2

>

> Sorry, I still have no idea what the data on this page means. For

> simplicity, assume that we are discussing only Figure 8a. What is the

> vertical axis for the 4 graphs on this figure? Only one meaning (for

> Canon) is (somewhat) documented; I could not find docs for others.

Output intensity means what you get in your digital file:

output from the digital camera, or output from the film scanner.

>

> Now go this "somewhat"; go back to figure 6. Here I'm completely

> confused; apparently, there are statements that both the horizontal

> axis and vertical axis depend linearly w.r.t. "spot metering"; but the

> dependence between the axes is non-linear... So I'm completely lost

> which axis means what...

Does this help:

The horizontal axis is scene intensity in C*electrons where C is a constant

and electrons are from the sensor.

The vertical axis is what you get in your digital file: that is

"output intensity."

Roger

Anonymous

May 2, 2005 2:16:30 PM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <42759C70.3070701@qwest.net>:

> Another factor is sampling. You are citing the Nyquist as

> some maximum cutoff. Nyquist sampling applies to sampling

> that is in phase with the information being sampled.

Nope. There is nothing related to phase there. IIRC the names, the

mathematical formulation is the Paley-Wiener theorem: if your input has

no frequencies above one with wavelength l, then sampling with step

l/2 is 1-to-1: no information is lost, and all the possible results of

sampling correspond (uniquely!) to physically possible input.

Hope this helps,

Ilya

Anonymous

May 2, 2005 2:16:31 PM

Ilya Zakharevich wrote:

> [A complimentary Cc of this posting was sent to

> Roger N. Clark (change username to rnclark)

> <username@qwest.net>], who wrote in article <42759C70.3070701@qwest.net>:

>

>

>>Another factor is sampling. You are citing the Nyquist as

>>some maximum cutoff. Nyquist sampling applies to sampling

>>that is in phase with the information being sampled.

>

>

> Nope. There is nothing related to phase there. IIRC the names, the

> mathematical formulation is the Paley-Wiener theorem: if your input has

> no frequencies above one with wavelength l, then sampling with step

> l/2 is 1-to-1: no information is lost, and all the possible results of

> sampling correspond (uniquely!) to physically possible input.

Wrong. Look at the page:

http://www.clarkvision.com/imagedetail/sampling1.html

The example there clearly shows where Nyquist sampling

gets the wrong detail.

Roger

Anonymous

May 2, 2005 7:54:46 PM

Ilya Zakharevich <nospam-abuse@ilyaz.org> writes:

>Nope. There is nothing related to phase there. IIRC the names, the

>mathematical formulation is the Paley-Wiener theorem: if your input has

>no frequencies above one with wavelength l, then sampling with step

>l/2 is 1-to-1: no information is lost, and all the possible results of

>sampling correspond (uniquely!) to physically possible input.

First, the sampling theorem applies only to frequencies *strictly less

than* this limit. If the wavelength is exactly 1, then the sampling

process captures anywhere between an artificially high amplitude (if you

sample only the peaks) and zero amplitude (if you happen to sample only

the zero crossings). It cannot predictably sample any signal at exactly

the cutoff frequency.

Also, the theory is true only under a particular set of conditions

assumed by the theory: that the input really does have no frequencies

with wavelength shorter than 1, and that the sampling is true point

sampling. Also, to reconstruct this image with no loss, you need to

use a sinc function in returning it to the continuous domain.

Thus, it provides a useful conceptual upper limit to the performance of

any sampled imaging system, but doesn't necessarily describe the real

limits of a real digital camera.

Real scenes are not bandlimited, so some sort of low-pass filter is

necessary to reduce amplitude to zero at the Nyquist limit and above.

This necessarily reduces response at frequencies somewhat below Nyquist

as well. A CCD or CMOS sensor does *not* do point sampling; it samples

a certain area of the image, and this adds an additional sin(x)/x

rolloff to the frequency response. Thus, properly-built digital cameras

that minimize aliasing *also* cannot resolve at the Nyquist limit; the

best they can do is resolve up to 70-80% of that limit.

Thus, in practice, it takes about 3 pixels' distance to resolve one

cycle (one line pair) at a useful contrast in a digital camera. Not 2.

Dave
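Dave's point about area sampling is easy to check numerically. The sketch below (Python with NumPy; illustrative, not from the thread) evaluates the MTF of an ideal pixel aperture that integrates light over its full width, which is |sinc(f)| with f in cycles per pixel:

```python
import numpy as np

def box_mtf(f):
    """MTF of a pixel that integrates light uniformly over its full
    width: |sinc(f)|, where numpy's sinc(x) = sin(pi*x)/(pi*x) and
    f is spatial frequency in cycles per pixel."""
    return float(np.abs(np.sinc(f)))

print(box_mtf(0.0))   # 1.0  : no attenuation at DC
print(box_mtf(0.5))   # ~0.64 (2/pi): already attenuated at the Nyquist limit
print(box_mtf(0.35))  # ~0.81 at 70% of Nyquist
```

This rolloff comes on top of the lens MTF and any anti-aliasing filter, which is why response at the Nyquist limit itself never reaches full contrast in a real sensor.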

Anonymous

May 2, 2005 11:43:04 PM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <427624AF.7000305@qwest.net>:

> Example:

> Signal: photons/second

> Noise: photons/second

> S/N = dimensionless

> > However, it is absolutely not clear what the meaning of S/N for

> > scanned film is: is it for "signal measured as density" over "noise in

> > density", or "signal measured as linear intensity" over "noise in

> > calculated linear intensity". These two values differ by the value of

> > "local" gamma to recalculate from density to intensity.

> I've answered this question multiple times.

> IT IS LINEAR. IT IS NOT DENSITY!

Thanks. [This is the first non-ambiguous answer I saw.]

> > The question is not about the horizontal axis. So this answer

> > ("original luminosity noise") is clear, but the second sentence taints

> > it again; it leads to possibility of confusion. So: should I just

> > trust the first sentence, and disregard the second one? Given other

> > information you provided, I will just do this...

> What sentences are you talking about?

The sentences you removed. But it is probably not relevant after this

message.

So let me summarize things deduced so far:

When scanned with pixel size 6.3 microns, the noise/granularity of

Velvia 50 at the image of 18% gray is "equivalent" to noise in

luminosity with S/N level 20.

[This is a higher noise than what Fuji numbers suggest, but Fuji

numbers are considered too optimistic in other places too.]

Thanks,

Ilya

Anonymous

May 2, 2005 11:50:33 PM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <427624AF.7000305@qwest.net>:

> >>No, you are making it much more complex. The digital camera

> >>data is converted to 16 bit tif linearly. That data provides

> >>a precise "light meter" measurement of each pixel in the scene.

> >>That data can then be used to calibrate the film's response.

> >>read: http://www.clarkvision.com/imagedetail/dynamicrange2

> > Sorry, I still have no idea what the data on this page means. For

> > simplicity, assume that we are discussing only Figure 8a. What is the

> > vertical axis for the 4 graphs on this figure? Only one meaning (for

> > Canon) is (somewhat) documented; I could not find docs for others.

> Output intensity means what you get in your digital file:

> output from the digital camera, or output from the film scanner.

And the green line is just a linear fit with slope 2.2, or what? BTW,

given the graphs, it is clear that the bit count of the scanner does

not matter much; anyway, it would be nice to have it written too.

> > Now go this "somewhat"; go back to figure 6. Here I'm completely

> > confused; apparently, there are statements that both the horizontal

> > axis and vertical axis depend linearly w.r.t. "spot metering"; but the

> > dependence between the axes is non-linear... So I'm completely lost

> > which axis means what...

> Does this help:

> The horizontal axis is scene intensity in C*electrons where C is a constant

> and electrons are from the sensor.

> The vertical axis is what you get in your digital file: that is

> "output intensity."

There are two digital files in question... Is it

"The vertical axis is what you get in your TIFF file"

? And then remove the sentence

"spot metering agrees with linear numbers derived from the 16-bit tif

file over the entire dynamic range"

(unless "agrees" has some meaning I do not follow).

Thanks,

Ilya

Anonymous

May 2, 2005 11:53:49 PM

[A complimentary Cc of this posting was NOT [per weedlist] sent to

Ilya Zakharevich

<nospam-abuse@ilyaz.org>], who wrote in article <d560a9$2frt$1@agate.berkeley.edu>:

> > Output intensity means what you get in your digital file:

> > output from the digital camera, or output from the film scanner.

>

> And the green line is just a linear fit with slope 2.2, or what? BTW,

> given the graphs, it is clear that the bit count of the scanner does

> not matter much; anyway, it would be nice to have it written too.

Oops, I pressed the "post" button too quickly. It *is* there:

The film was scanned on a sprintscan 4000 scanner at 4000 dpi using

linear 12-bits, outputting a 16-bit tif file using standard

settings.

Sorry,

Ilya

Anonymous

May 3, 2005 12:10:18 AM

[A complimentary Cc of this posting was sent to

Dave Martindale

<davem@cs.ubc.ca>], who wrote in article <d55ig6$dj7$1@mughi.cs.ubc.ca>:

> First, the sampling theorem applies only to frequencies *strictly less

> than* this limit.

That's correct.

> If the wavelength is exactly 1, then the sampling

> process captures anywhere between an artificially high amplitude (if you

> sample only the peaks) and zero amplitude (if you happen to sample only

> the zero crossings). It cannot predictably sample any signal at exactly

> the cutoff frequency.

But given that MTF of the lens is 0 at the cutoff frequency, there

*no* "signal at exactly the cutoff frequency". So while your remark

is correct, it is irrelevant.

> Also, the theory is true only under a particular set of conditions

> assumed by the theory: that the input really does have no frequencies

> with wavelength shorter than 1, and that the sampling is true point

> sampling.

The part before "and" is irrelevant; yes, the theorem "really" applies

only if its conditions are "really" satisfied. ;-) And this is what

"really" happens with wave propagation through a hole.

The part after "and" is not true. *As stated*, the theorem applies to

"true point sampling" only. However, e.g., "bucket sampling" (as used

by photo sensors) is equivalent to a certain convolution followed by a

true point sampling. It is trivial to check that in the conditions of

the theorem the convolution is 1-to-1.

> Also, to reconstruct this image with no loss, you need to use a

> sinc function in returning it to the continuous domain.

To reconstruct the image, the easiest way to proceed is to perform a

discrete Fourier transform, then perform the inverse *continuous*

Fourier transform. In practice the latter continuous Fourier

transform would be probably substituted by a discrete Fourier

transform with a much finer grid.

> Thus, it provides a useful conceptual upper limit to the performance of

> any sampled imaging system, but doesn't necessarily describe the real

> limits of a real digital camera.

Here we disagree.

> Real scenes are not bandlimited,

Sorry, but high frequencies *cannot* pass through the entrance to the

lens. So the scenes *are* bandlimited.

> rolloff to the frequency response. Thus, properly-built digital cameras

> that minimize aliasing *also* cannot resolve at the Nyquist limit; the

> best they can do is resolve up to 70-80% of that limit.

B&W camera will easily resolve to Nyquist (provided that the lens

cutoff is below the Nyquist). The color ones can't due to aliasing of

color info with grayscale info. Thus while I agree that resolving

high above 80% of Nyquist is not practical, it is due to demosaicing,

not to limitations of Nyquist theorem.

> Thus, in practice, it takes about 3 pixels' distance to resolve one

> cycle (one line pair) at a useful contrast in a digital camera. Not 2.

Contemporary cameras easily resolve a period of 2.5 pixels at contrast

75%. (There is some price in noise and Gibbs artefacts, of course,

for going this far.)

Hope this helps,

Ilya
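Ilya's claim, that sub-Nyquist detail survives sampling at any phase provided one reconstructs properly, can be sketched in a few lines of Python with NumPy (the window size, frequency, and phase below are arbitrary choices, not from the thread). A sine with period 2.5 samples is sampled on an integer grid at a deliberately "wrong" phase, then rebuilt between the samples by Whittaker-Shannon (sinc) interpolation:

```python
import numpy as np

n = np.arange(-200, 201)       # sample grid (finite window)
f = 1 / 2.5                    # cycles per sample: below Nyquist (0.5)
phase = 1.234                  # arbitrary phase, not aligned with the grid
samples = np.sin(2 * np.pi * f * n + phase)

def reconstruct(t):
    """Whittaker-Shannon interpolation: samples weighted by sinc."""
    return float(np.sum(samples * np.sinc(t - n)))

# Evaluate between the sample points and compare with the true signal:
err = max(abs(reconstruct(t) - np.sin(2 * np.pi * f * t + phase))
          for t in np.linspace(-5.0, 5.0, 41))
print(err)   # small, and it shrinks as the sample window grows
```

The "postprocessing" referred to above is exactly this interpolation step; reading the raw samples directly as the image is what produces visible artifacts near the limit.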

Anonymous

May 3, 2005 12:17:30 AM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <42762195.4090106@qwest.net>:

> >>Another factor is sampling. You are citing the Nyquist as

> >>some maximum cutoff. Nyquist sampling applies to sampling

> >>that is in phase with the information being sampled.

> > Nope. There is nothing related to phase there. IIRC the names, the

> > mathematical formulation is the Paley-Wiener theorem: if your input has

> > no frequencies above one with wavelength l, then sampling with step

> > l/2 is 1-to-1: no information is lost, and all the possible results of

> > sampling correspond (uniquely!) to physically possible input.

> Wrong.

Who is wrong? Nyquist? Don't you think you have too much fighting

spirit?

> Look at the page:

> http://www.clarkvision.com/imagedetail/sampling1.html

> The example there clearly shows where Nyquist sampling

> gets the wrong detail.

The example clearly shows that you have a wrong idea what the Nyquist

statement says. It says that the *information is there*. It does not

say that using linear approximation will give you a good match for the

initial data.

What it says is that with certain (simple) postprocessing one can get

back *all* of the initial data. But postprocessing is *needed*.

See nearby messages in this thread for more info.

Hope this helps,

Ilya

Anonymous

May 3, 2005 2:49:38 AM

Ilya Zakharevich wrote:

> [A complimentary Cc of this posting was sent to

> Roger N. Clark (change username to rnclark)

> <username@qwest.net>], who wrote in article <42762195.4090106@qwest.net>:

>

>>>>Another factor is sampling. You are citing the Nyquist as

>>>>some maximum cutoff. Nyquist sampling applies to sampling

>>>>that is in phase with the information being sampled.

>

>

>>>Nope. There is nothing related to phase there. IIRC the names, the

>>>mathematical formulation is the Paley-Wiener theorem: if your input has

>>>no frequencies above one with wavelength l, then sampling with step

>>>l/2 is 1-to-1: no information is lost, and all the possible results of

>>>sampling correspond (uniquely!) to physically possible input.

>

>

>>Wrong.

>

>

> Who is wrong? Nyquist? Don't you think you have too much fighting

> spirit?

No. Nyquist is absolutely correct. But he defined a specific

solution to a specific problem. People misuse his work.

>>Look at the page:

>>http://www.clarkvision.com/imagedetail/sampling1.html

>>The example there clearly shows where Nyquist sampling

>>gets the wrong detail.

>

> The example clearly shows that you have a wrong idea what the Nyquist

> statement says. It says that the *information is there*. It does not

> say that using linear approximation will give you a good match for the

> initial data.

>

> What it says is that with certain (simple) postprocessing one can get

> back *all* of the initial data. But postprocessing is *needed*.

> See nearby messages in this thread for more info.

No. You are misunderstanding. Let's do a very basic math example.

Consider a sine wave with period X:

sine( 0) = 0 (we will use degrees for the argument to the sine)

sine( 90) = 1

sine(180) = 0

sine(270) =-1

sine(360) = 0

Nyquist says you need two samples to get all the information

possible to reconstruct the signal. But what most people ignore is

that the sampling must be at the correct phase. Nyquist says

get 2 samples per cycle, but those samples must be at, in this case:

sine(90) and sine(270).

If you sample at 0 and 180 degrees, you get zero information.

This is illustrated on my page above.

Problem for the student: what information do you get

when you sample at the Nyquist sampling rate, but the

samples occur at A and A+180 degrees, when A = 23, 54,

9, 47, or any other angle that is not 90, 270?

When you can't sample the signal in phase, you need higher

sampling than Nyquist to properly recover the signal.

image data is not necessarily in phase with pixel spacing

in a digital camera.

Is this clear?

Roger
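Roger's worked example, and the "problem for the student", can be reproduced directly (Python with NumPy; illustrative). At exactly 2 samples per cycle the recorded amplitude is |sin(A)|, where A is the phase of the first sample:

```python
import numpy as np

def recorded_amplitude(a_deg):
    """Largest |sample| seen when sin() is sampled every 180 degrees
    starting at phase a_deg: sin(A + k*180deg) = +/- sin(A)."""
    t = np.deg2rad(a_deg) + np.pi * np.arange(10)  # samples half a period apart
    return float(np.max(np.abs(np.sin(t))))

for a in (0, 23, 45, 90):
    print(a, recorded_amplitude(a))
# A=0  -> ~0 (samples land on the zero crossings: the cycle vanishes)
# A=23 -> ~0.39, A=45 -> ~0.71 (partial amplitude)
# A=90 -> 1.0 (samples land on the peaks)
```

This is the degenerate case at exactly the limit; for any frequency strictly below it, the phase dependence disappears after proper reconstruction, which is the point of the "2.00000001 samples per period" remark later in the thread.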

Anonymous

May 3, 2005 6:43:16 AM

Ilya Zakharevich <nospam-abuse@ilyaz.org> writes:

>But given that MTF of the lens is 0 at the cutoff frequency, there

>*no* "signal at exactly the cutoff frequency". So while your remark

>is correct, it is irrelevant.

How can you take this as given, if it doesn't apply to any real cameras?

A lens whose MTF is zero at the cutoff frequency will have unreasonably

low contrast far below the cutoff frequency. If you could have

arbitrarily large sensors with arbitrarily small pixels and still get

decent S/N, and at no extra cost, it would make sense to use a sensor

dense enough to put its cutoff frequency out there.

But real digital cameras that are available today are designed so that

the lens still has substantial contrast at the sensor cutoff frequency,

then depend on an anti-aliasing filter to rapidly roll off modulation

to zero at cutoff *while affecting lower frequencies as little as

possible*. You can't get that rapid cutoff with a lens alone.

So your comment, though theoretically correct, doesn't apply to any real

cameras.

>To reconstruct the image, the easiest way to proceed is to perform a

>discrete Fourier transform, then perform the inverse *continuous*

>Fourier transform. In practice the latter continuous Fourier

>transform would be probably substituted by a discrete Fourier

>transform with a much finer grid.

And who actually uses the Fourier transform to reconstruct their images

in digital photographic practice?

>> Thus, it provides a useful conceptual upper limit to the performance of

>> any sampled imaging system, but doesn't necessarily describe the real

>> limits of a real digital camera.

>Here we disagree.

>> Real scenes are not bandlimited,

>Sorry, but high frequencies *cannot* pass through the entrance to the

>lens. So the scenes *are* bandlimited.

The scene, on the subject side of the lens, is not bandlimited. The

image, as focused on the sensor, *is* bandlimited by the lens - but

that's the image, not the scene.

And the frequency limit due to the lens is several times the cutoff

frequency of the sensor, in practical cameras.

>> Thus, in practice, it takes about 3 pixels' distance to resolve one

>> cycle (one line pair) at a useful contrast in a digital camera. Not 2.

>Contemporary cameras easily resolve the period 2.5 pixels at contrast

>75%. (There is some price in noise and Gibbs artefacts, of course,

>for going this far.)

One can argue about whether a frequency of 2.5 pixels/cycle is resolved

cleanly enough or not. What is clear is that 2 pixels/cycle never works

without artifacts, and that many cameras do manage 3 pixels/cycle

cleanly.

Dave

Anonymous

May 3, 2005 10:16:00 AM

Ilya Zakharevich wrote:

>>>>read: http://www.clarkvision.com/imagedetail/dynamicrange2

>>>simplicity, assume that we are discussing only Figure 8a.

>

> And the green line is just a linear fit with slope 2.2, or what? BTW,

> given the graphs, it is clear that the bit count of the scanner does

> not matter much; anyway, it would be nice to have it written too.

The key on the graph says the green line is the human eye response.

It has a slope of 2.51 (fifth root of 100). The origin of this

line is how the stellar magnitude scale is defined.

>>>Now go this "somewhat"; go back to figure 6. Here I'm completely

>>>confused; apparently, there are statements that both the horizontal

>>>axis and vertical axis depend linearly w.r.t. "spot metering"; but the

>>>dependence between the axes is non-linear... So I'm completely lost

>>>which axis means what...

>

>>Does this help:

>>The horizontal axis is scene intensity in C*electrons where C is a constant

>>and electrons are from the sensor.

>

>>The vertical axis is what you get in your digital file: that is

>>"output intensity."

>

> There are two digital files in question... Is it

>

> "The vertical axis is what you get in your TIFF file"

>

> ? And then remove the sentence

>

> "spot metering agrees with linear numbers derived from the 16-bit tif

> file over the entire dynamic range"

>

> (unless "agrees" has some meaning I do not follow).

The section on spot metering was put in at the request of discussions

on this newsgroup months ago. Someone wanted independent proof

of the dynamic range in the image and linearity of the sensor. I had

taken spot measurements in the setup of the test so I knew the dynamic

range I was getting before I started taking data. So it was easy to add.

The spot metering and digital data from the sensor agree, showing

the sensor response is linear.

Roger
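The factor Roger quotes is quick to verify: the stellar magnitude scale defines 5 magnitudes as exactly a factor of 100 in intensity, so one magnitude step is the fifth root of 100 (a one-line check in Python):

```python
# One stellar magnitude step: the fifth root of 100.
step = 100 ** (1 / 5)
print(round(step, 4))   # 2.5119, commonly rounded to 2.51
```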

Anonymous

May 4, 2005 1:55:42 AM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <42776B80.7040401@qwest.net>:

> The key on the graph says the green line is the human eye response.

It does. But I do not think a lot of readers of your site know how to

translate it into the following specs:

> It has a slope of 2.51 (fifth root of 100). The origin of this

> line is how the stellar magnitude scale is defined.

Add a crosslink to this definition?

> > ? And then remove the sentence

> >

> > "spot metering agrees with linear numbers derived from the 16-bit tif

> > file over the entire dynamic range"

> >

> > (unless "agrees" has some meaning I do not follow).

>

> The section on spot metering was put in at the request of discussions

> on this newsgroup months ago. Someone wanted independent proof

> of the dynamic range in the image and linearity of the sensor. I had

> taken spot measurements in the setup of the test so I knew the dynamic

> range I was getting before I started taking data. So it was easy to add.

> The spot metering and digital data from the sensor agree, showing

> the sensor response is linear.

Yes, I understood all this (from the discussion earlier on this page).

But then I saw the sentence quoted above, and all became murky again.

Note that it says "derived from the 16-bit tif", not "the sensor

response".

Hope this helps,

Ilya

Anonymous

May 4, 2005 2:00:47 AM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <427702E2.2060804@qwest.net>:

> > What it says is that with certain (simple) postprocessing one can get

> > back *all* of the initial data. But postprocessing is *needed*.

> > See nearby messages in this thread for more info.

>

> No. You are misunderstanding. Let's do a very basic math example.

>

> Consider a sine wave with period X:

>

> sine( 0) = 0 (we will use degrees for the argument to the sine)

> sine( 90) = 1

> sine(180) = 0

> sine(270) =-1

> sine(360) = 0

>

> Nyquist says you need two samples to get all the information

> possible to reconstruct the signal.

Nope. One needs *more* than 2 samples per period; but, e.g.,

2.00000001 samples per period is enough.

> Is this clear?

Clear, but wrong. As I said, see other nearby messages in this thread.

Hope this helps,

Ilya

Anonymous

May 4, 2005 2:00:48 AM

"Ilya Zakharevich" <nospam-abuse@ilyaz.org> wrote in message

news:d58saf$oq7$1@agate.berkeley.edu...

> [A complimentary Cc of this posting was sent to

> Roger N. Clark (change username to rnclark)

> <username@qwest.net>], who wrote in article <427702E2.2060804@qwest.net>:

>

>> > What it says is that with certain (simple) postprocessing one can get

>> > back *all* of the initial data. But postprocessing is *needed*.

>> > See nearby messages in this thread for more info.

>>

>> No. You are misunderstanding. Let's do a very basic math example.

>>

>> Consider a sine wave with period X:

>>

>> sine( 0) = 0 (we will use degrees for the argument to the sine)

>> sine( 90) = 1

>> sine(180) = 0

>> sine(270) =-1

>> sine(360) = 0

>>

>> Nyquist says you need two samples to get all the information

>> possible to reconstruct the signal.

>

> Nope. One needs *more* than 2 samples per period; but, e.g.,

> 2.00000001 samples per period is enough.

>

>> Is this clear?

>

> Clear, but wrong. As I said, see other nearby messages in this thread.

>

> Hope this helps,

> Ilya

How about you start your own web-site and do your own in-depth, on-going

tests.

Do this for several years to the benefit of all.

Next, let every nay-sayer/nitpicker chime in and pester you with endless

jabs and snide commentary.

If you could receive from others what Roger keeps catching from you...and

yet keep the attitude of patience and grace Roger has kept with you, you'd

be worthy of praise.

Save'?

Anonymous

May 4, 2005 2:00:48 AM

Ilya Zakharevich wrote:

> [A complimentary Cc of this posting was sent to

> Roger N. Clark (change username to rnclark)

> <username@qwest.net>], who wrote in article <427702E2.2060804@qwest.net>:

>

>

>>>What it says is that with certain (simple) postprocessing one can get

>>>back *all* of the initial data. But postprocessing is *needed*.

>>>See nearby messages in this thread for more info.

>>

>>No. You are misunderstanding. Let's do a very basic math example.

>>

>>Consider a sine wave with period X:

>>

>>sine( 0) = 0 (we will use degrees for the argument to the sine)

>>sine( 90) = 1

>>sine(180) = 0

>>sine(270) =-1

>>sine(360) = 0

>>

>>Nyquist says you need two samples to get all the information

>>possible to reconstruct the signal.

>

>

> Nope. One needs *more* than 2 samples per period; but, e.g.,

> 2.00000001 samples per period is enough.

WRONG. PHASE IS CRITICAL.

In either case, 2.00000001 is not enough. 3 samples/cycle

still produces artifacts when sampling is NOT IN PHASE WITH

THE HIGHEST FREQUENCY. IMAGE DATA ARE NOT IN PHASE WITH PIXELS.

Some references on this subject:

http://www.themusicpage.org/articles/SamplingTheory.htm...

http://www.normankoren.com/Tutorials/MTF2.html

http://www.gris.uni-tuebingen.de/publics/paper/Schillin...

Try searching for Nyquist and phase or out of phase or phase error.

You will find many references. Thousands to choose from.

Roger

Anonymous

May 4, 2005 2:15:09 AM

[A complimentary Cc of this posting was sent to

Dave Martindale

<davem@cs.ubc.ca>], who wrote in article <d56og4$mro$1@mughi.cs.ubc.ca>:

> >But given that MTF of the lens is 0 at the cutoff frequency, there

> >*no* "signal at exactly the cutoff frequency". So while your remark

> >is correct, it is irrelevant.

> How can you take this as given, if it doesn't apply to any real cameras?

This is a law of physics. It applies to *everything*. You do not

even need a lens: just a hole in the wall is enough to get 0 in the

MTF at some frequency. All the lens does is that you do not need to

"look from infinite distance", the image is created at finite

distance: focal plane.

> A lens whose MTF is zero at the cutoff frequency will have

> unreasonably low contrast far below the cutoff frequency.

Aren't you confusing the Nyquist frequency of the sensor with the cutoff

frequency of the lens?

> But real digital cameras that are available today are designed so that

> the lens still has substantial contrast at the sensor cutoff frequency,

I see: your cutoff is for sensor. Mine was for lens.

> then depend on an anti-aliasing filter to rapidly roll off modulation

> to zero at cutoff *while affecting lower frequencies as little as

> possible*. You can't get that rapid cutoff with a lens alone.

You do not need to. The blur filter *removes* information. The only

purpose of blur filter is to fight with low QE of the sensor assembly:

to get low noise, you need large pixels. So a lousy lens will do (or

a good lens used at f-stops far from the sweet spot); but with

interchangeable lenses you do not know which lens will be used. So

you decrease the quality of *any* lens used with your camera.

If you have a low noise sensor with Nyquist frequency close to the

cutoff frequency of the lens, you

a) get no aliasing;

b) can use postprocessing to compensate for decrease of MTF at

high frequencies (remember that I assumed low noise!).

> And the frequency limit due to the lens is several times the cutoff

> frequency of the sensor, in practical cameras.

Actually, this depends on the definition of "several times". So it is

hard to argue. ;-)

> One can argue about whether a frequency of 2.5 pixels/cycle is resolved

> cleanly enough or not. What is clear is that 2 pixels/cycle never works

> without artifacts, and that many cameras do manage 3 pixels/cycle

> cleanly.

It is very easy to construct a camera with clean 2 pixels/cycle without

any artefacts. It just would not use the capabilities of the lens

well; but current dSLRs do not too, and nobody complains. ;-)

Hope this helps,

Ilya
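Ilya's point (b), compensating a known MTF rolloff in postprocessing when noise is low, can be sketched as a 1-D inverse filter in Python/NumPy. The kernel and sizes are made up for illustration, and the noiseless case is assumed; with real noise one would use a regularized (Wiener-style) division instead:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(256)            # stand-in for one image row

# A known blur whose transfer function never reaches zero, so it is
# invertible (an MTF that actually hits zero destroys that frequency
# for good, as with an ideal AA filter at the Nyquist limit).
kernel = np.zeros(256)
kernel[:3] = [0.6, 0.3, 0.1]

H = np.fft.rfft(kernel)
blurred = np.fft.irfft(np.fft.rfft(signal) * H, 256)    # circular convolution

restored = np.fft.irfft(np.fft.rfft(blurred) / H, 256)  # divide out the MTF
print(np.max(np.abs(restored - signal)))                # tiny: float rounding only
```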

Anonymous

May 4, 2005 2:56:10 AM

[A complimentary Cc of this posting was sent to

Dave Martindale

<davem@cs.ubc.ca>], who wrote in article <d56og4$mro$1@mughi.cs.ubc.ca>:

> >To reconstruct the image, the easiest way to proceed is to perform a

> >discrete Fourier transform, then perform the inverse *continuous*

> >Fourier transform. In practice the latter continuous Fourier

> >transform would be probably substituted by a discrete Fourier

> >transform with a much finer grid.

> And who actually uses the Fourier transform to reconstruct their images

> in digital photographic practice?

Any linear DSP (unless extremely trivial) is much quicker to do in

the Fourier domain. E.g., the outlined above algorithm will beat

yours hands down (FFT being essentially free).

E.g., I assume that Adobe demosaicer is using some manipulations in

Fourier domain: the calculated throughput MTF curve is horizontal at

the origin (with Adobe demosaicer). Since the lens MTF curve is not

horizontal at the origin, one needs *a lot* of computing power to fix

it in "space domain". And the calculation becomes more or less

trivial in Fourier domain.

Hope this helps,

Ilya

Anonymous

May 4, 2005 11:13:18 AM

[A complimentary Cc of this posting was sent to

Roger N. Clark (change username to rnclark)

<username@qwest.net>], who wrote in article <427831B6.7010704@qwest.net>:

> > Nope. One needs *more* than 2 samples per period; but, e.g.,

> > 2.00000001 samples per period is enough.

>

> WRONG. PHASE IS CRITICAL.

>

> In either case, 2.00000001 is not enough. 3 samples/cycle

> still produces artifacts when sampling is NOT IN PHASE WITH

> THE HIGHEST FREQUENCY. IMAGE DATA ARE NOT IN PHASE WITH PIXELS.

Feel free to persist. However, I advocate for not sharing this

opinion in public on media with estimated lifetime longer than yours.

> http://www.themusicpage.org/articles/SamplingTheory.htm...

>

> http://www.normankoren.com/Tutorials/MTF2.html

>

> http://www.gris.uni-tuebingen.de/publics/paper/Schillin...

Right, see the second paragraph of 10.4.1.

> Try searching for Nyquist and phase or out of phase or phase error.

> You will find many references. Thousands to choose from.

Thanks. But I think I can find yet more references on circle quadrature

sightings... Should be more entertaining too. ;-)

Hope this helps,

Ilya

Anonymous

May 4, 2005 12:09:23 PM

Ilya Zakharevich <nospam-abuse@ilyaz.org> writes:

>Aren't you confusing the Nyquist frequency of the sensor with the cutoff

>frequency of the lens?

>> But real digital cameras that are available today are designed so that

>> the lens still has substantial contrast at the sensor cutoff frequency,

>I see: your cutoff is for sensor. Mine was for lens.

Ok. But since the cutoff frequency for the sensor (in current cameras

anyway) is well below the cutoff frequency of the lens, it is the former

that determines the performance of the camera as a whole.

>You do not need to. The blur filter *removes* information. The only

>purpose of blur filter is to fight with low QE of the sensor assembly:

>to get low noise, you need large pixels. So a lousy lens will do (or

>a good lens used at f-stops far from the sweet spot); but with

>interchangeable lenses you do not know which lens will be used. So

>you decrease the quality of *any* lens used with your camera.

You selectively low-pass filter the output of the lens in order to get a

response that is near zero where it needs to be (the Nyquist frequency

of the sensor), but still rather high up to about 60 % of that

frequency. That gives the best image for a given number of pixels.

Without the filter, you need the lens' own cutoff frequency to be

comparable to the Nyquist frequency - but lenses don't provide a steep

cutoff. So you end up with lower contrast through almost the whole

spatial frequency range. Or if the lens cutoff is well above the sensor

Nyquist frequency, you get aliasing. Neither is good.

>If you have a low noise sensor with Nyquist frequency close to the

>cutoff frequency of the lens, you

> a) get no aliasing;

> b) can use postprocessing to compensate for decrease of MTF at

> high frequencies (remember that I assumed low noise!).

Nice theory. But to get this, you'd need several times the spatial

resolution of current sensors. You'd have perhaps 10X as many pixels to

deal with for only a modest increase in apparent image resolution.

And, with current sensor technology, those tiny pixels would have

substantially higher noise, so your "assumed low noise" is really a

rather extravagant bit of handwaving.

Dave

Anonymous

May 4, 2005 12:20:03 PM

Ilya Zakharevich <nospam-abuse@ilyaz.org> writes:

>> And who actually uses the Fourier transform to reconstruct their images

>> in digital photographic practice?

>Any linear DSP (unless extremely trivial) is much quickier to do in

>the Fourier domain. E.g., the outlined above algorithm will beat

>yours hands down (FFT being essentially free).

By your standards, I'm sure that almost all of digital photography is

extremely trivial. For example, most resizing is done in the spatial

domain using bilinear or bicubic resampling schemes. Even using a

resampling filter with a somewhat larger kernel like Lanczos, still in

the spatial domain, is considered somewhat exotic.

FFTs are not free. They aren't even faster until you get above a

certain kernel size in image processing. They also require a certain

amount of messiness in dealing with border conditions. If your image

won't fit into memory and needs to be accessed using a tile-based

scheme, that adds more messiness. And FFTs really need floating-point

math, or at least lots of bits in scaled integer, to avoid loss of

precision.

In comparison, spatial-domain convolution is easy, fast for small

kernels, fits in with tiles nicely, and can often make use of 8- or

16-bit math to do multiple operations at once using MMX-style

instructions.

Dave

Anonymous

May 4, 2005 12:20:04 PM

davem@cs.ubc.ca (Dave Martindale) writes:

> By your standards, I'm sure that almost all of digital photography is

> extremely trivial. For example, most resizing is done in the spatial

> domain using bilinear or bicubic resampling schemes. Even using a

> resampling filter with a somewhat larger kernel like Lanczos, still in

> the spatial domain, is considered somewhat exotic.

Any idea what Genuine Fractals does? Does anyone still use that?

Is using a Lanczos kernel (whatever that is, I presume I can find it

in a book) actually worthwhile for resizing photos? It shouldn't be

that big a deal to add it to GIMP.
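For reference (the standard textbook definition, not something stated in the thread): the Lanczos-a kernel is sinc(x) * sinc(x/a) for |x| < a and zero outside, i.e. a windowed approximation to the ideal sinc reconstruction filter. A minimal 1-D sketch:

```python
import math

def lanczos(x, a=3):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) on [-a, a], zero elsewhere."""
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample_point(samples, pos, a=3):
    """One output sample of a 1-D resize: weighted sum of the 2a nearest
    input samples, with weights renormalized near the borders."""
    lo = math.floor(pos) - a + 1
    total = wsum = 0.0
    for i in range(lo, lo + 2 * a):
        if 0 <= i < len(samples):
            w = lanczos(pos - i, a)
            total += w * samples[i]
            wsum += w
    return total / wsum  # normalize so constant signals stay constant

print(resample_point([1.0] * 10, 4.5))  # -> 1.0
```

Compared with bicubic, the negative lobes of the kernel preserve more mid-frequency contrast at the cost of slight ringing near hard edges, which is why it is often preferred for photographic downsizing.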

Anonymous

May 4, 2005 12:48:47 PM

On 04 May 2005 01:23:52 -0700, Paul Rubin

<http://phr.cx@NOSPAM.invalid> wrote:

>davem@cs.ubc.ca (Dave Martindale) writes:

>> By your standards, I'm sure that almost all of digital photography is

>> extremely trivial. For example, most resizing is done in the spatial

>> domain using bilinear or bicubic resampling schemes. Even using a

>> resampling filter with a somewhat larger kernel like Lanczos, still in

>> the spatial domain, is considered somewhat exotic.

>

>Any idea what Genuine Fractals does? Does anyone still use that?

>

>Is using a Lanczos kernel (whatever that is, I presume I can find it

>in a book) actually worthwhile for resizing photos? It shouldn't be

>that big a deal to add it to GIMP.

The original writer of GF is a fellow named

Mark Jaress, who often posts on the Epson Wide

format group on Yahoo.

IMO, from brief personal experience, GF is/was

fairly worthless. Properly speaking, GF is a

high-quality (but lossy) compression scheme, more

comparable to JPG than an upsampling/downsampling

scheme.

On the other hand I read all sorts of praises of

Qimage (ddisoftware.com) for having a great

collection of terrific upsampling tools. Qimage

is something of a poor man's RIP.

I have no direct experience with Qimage, but

some good experience with another product from

the same outfit.

rafe b.

http://www.terrapinphoto.com
