[LONG] Theoretical estimates for film-equivalent digital sensors

Anonymous
March 6, 2005 11:50:36 AM

Archived from groups: rec.photo.digital

This or similar topics appear quite often, but most treatments avoid
starting "from first principles". In particular, the issues of
photon Poisson noise are often mixed up with electron Poisson noise,
thus erring by close to an order of magnitude. Additionally, most
people assume RGB sensors; I expect that non-RGB can give "better"
color noise parameters than (high photon loss) RGB. [While I can
easily detect such errors in others' calculations, I'm in no way a
specialist, so my estimates may be flawed as well... Comments welcome.]

Initial versions of this document were discussed with Roger N Clark;
thanks for a lot of comments which led to a major rework of the
calculations; in order of magnitude, however, the conclusions are the
same as at the beginning of the exchange... [I do not claim his
endorsement of what I write here - though I will be honored if he
does ;-]

I start with the conclusions, follow with the assumptions (and
references supporting them), then conclude with the calculations and
a consideration of possible lenses.

CONCLUSIONS:
~~~~~~~~~~~

The theoretical minimal size of a color sensor of sensitivity 1600 ISO
(which is equivalent to Velvia 50 at 36x24mm in resolution and noise)
is 13mm x 8.7mm. A similar B&W sensor can be 12x8mm. Likewise, the
theoretical maximum sensitivity of a 3/4'' 8MP color sensor is
1227 ISO.

[All intermediate numbers are given with quite high precision; of
course, due to approximations in assumptions, very few significant
digits are trustworthy.]

These numbers assume QE=1, and a non-RGB sensor (trading non-critical
chrominance noise for critical luminance noise). For example, in a
2x2 matrix one can have 2 cells with a "white" (visible-transparent)
filter, 1 cell with a yellow (passes R+G) filter, and another with a
cyan (passes G+B) filter.

ASSUMPTIONS:
~~~~~~~~~~~

a) The photopic curve can be well approximated by the Gaussian
V(lambda) = 1.019 * exp( -285.4*(lambda-0.559)^2 ), lambda in mkm;
see
http://home.tiscali.se/pausch/comp/radfaq.html

b) The solar irradiation spectrum at sea level can be well approximated
by const/lambda in the visible range (at least for the purpose
of integration of photopic curve). See

http://www.jgsee.kmutt.ac.th/exell/Solar/Intensity.html
http://www.clas.ufl.edu/users/emartin/GLY3074S03/images...

(In the second one the lower horizontal axis is obviously in nm, and
the upper one is complete junk. Sigh...)

c) Sensitivity of the sensor is noise-bound. Thus the sensitivity of
a sensor cell should be measured via a certain noise level in the
image of 18% gray at normal exposure for this sensitivity.

d) The values of noise given by Velvia 50 film and the Canon 1D Mark II
at its 800 ISO setting in the image of 18% gray are "acceptable".
These two are comparable, see
http://clarkvision.com/imagedetail/digital.signal.to.no...
Averaging their values of 15 and 28 respectively, one gets 21.5 as
the "acceptable" value of S/N in the image of 18% gray.

e) Noise of the sensor is limited by the electron noise (Poisson noise
due to the discrete nature of charge); other sources of noise are
negligible (with exposures well below 40 sec). See
http://www.astrosurf.com/buil/d70v10d/eval.htm

f) The AE software in digital cameras normalizes the signal so
that the image of 100% reflective gray saturates the sensor.
[from private communication of Roger Clark; used in "d"]

g) Normal exposure for 100ISO film exposes 18% gray at 0.08 lux-sec.
See
http://www.photo.net/bboard/q-and-a-fetch-msg?msg_id=00...

h) The color "equivalent resolution" numbers in
http://clarkvision.com/imagedetail/film.vs.digital.1.ht...
may be decreased by 25% to take into account recent (as of
2005) improvements in demosaicing algorithms. E.g., see
http://www.dpreview.com/reviews/konicaminoltaa200/page1...
Taking largest numbers (Velvia 50 again, and Tech Pan), this gives
16MP B&W sensor, and 12MP color sensor.

i) The eye is much less sensitive to chrominance noise than to
luminance noise. Thus it makes sense to trade increased chrominance
noise for improved luminance noise (up to some limits).

In particular, sensors with higher-transparency filter mask give
much lower luminance noise; the increased chrominance noise (due
to "large" elements in the to-RGB-translation matrix) does not
"spoil" the picture too much.

j) Estimating Poisson noise is very simple: to get S/N ratio K, one
needs to receive K^2 particles (electrons or, assuming QE=1,
photons).
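
A quick numerical sanity check of (j), as a sketch in Python (the
S/N target 21.5 comes from (d)):

    import numpy as np
    # For a Poisson count with mean N, S/N = N/sqrt(N) = sqrt(N),
    # so S/N = K requires K^2 detected particles; K = 21.5 needs ~462.
    x = np.random.default_rng(0).poisson(21.5**2, 1_000_000)
    print(x.mean() / x.std())   # ~21.5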

METAASSUMPTION
~~~~~~~~~~~~~~

In any decent photographic system the most important component
of performance/price ratio is the lenses. Since the price of the
lens scales as 4th or 5th power of its linear size, decreasing
the size of the sensor (while keeping S/N ratio) may lead to
very significant improvements of performance/price.

Details in the last section...

[This ignores completely the issue of the price of accumulated
"legacy" lenses, so is not fully applicable to professionals.]

Since the sensor is purely electronic, and so (more or less) subject
to Moore's law, the theoretical numbers (which are currently an order
of magnitude off) have a chance to become actually relevant in the
not so distant future. ;-)

PHOTON FLOW OF NORMAL EXPOSURE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

First, we need to recalculate the 0.08 lux-sec exposure into a
photon flow.

Assuming const/lambda energy spectral density (assumption b),
integration against the photopic curve gives const*0.192344740294
as the filtered flow. With a constant photon spectral density of
1 photon/(sec*mkm*m^2), const = h * c, so the eye-corrected energy
flow is 3.82082403851941e-20 W/m^2 = 2.60962281830876e-17 lux.

Thus 0.08 lux-sec corresponds to a (constant) spectral density of
3065.57711860622 photon/(mkm*mkm^2) (per mkm of wavelength, per mkm^2
of sensor area). This is the total photon flow of the image of 18%
gray normally exposed for 100 ISO film.
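
These numbers can be reproduced by a short numerical integration; a
sketch in Python of the calculation above (683 lm/W is the standard
luminous efficacy at the photopic peak):

    import numpy as np

    h, c = 6.62607015e-34, 2.99792458e8            # J*s, m/s
    lam = np.linspace(0.36, 0.83, 200001)          # wavelength, mkm
    V = 1.019 * np.exp(-285.4 * (lam - 0.559)**2)  # photopic curve, assumption (a)

    # A flow of 1 photon/(sec*mkm*m^2) carries h*c/lambda watts per photon;
    # weight by V(lambda) and integrate over wavelength (lambda in mkm).
    filtered = np.trapz(V / lam, lam)   # ~0.19234, the "const*0.192..." factor
    watts = h * c / 1e-6 * filtered     # ~3.82e-20 W/m^2, eye-corrected
    lux = 683.0 * watts                 # ~2.61e-17 lux
    print(0.08 / lux / 1e12)            # ~3065.6 photon/(mkm*mkm^2)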

B&W SENSOR
~~~~~~~~~~
One can imagine (among others) 3 different theoretical types of B&W
sensor: one giving the "physiologically correct" response of the
photopic curve; one accepting all photons in the "extended visible"
range of the spectrum, 380 to 780nm; and an intermediate one,
accepting all photons in the "normal visible" range of the spectrum,
400 to 700nm. See
http://en.wikipedia.org/wiki/Visible_light

To cover the first case, one needs to multiply the value obtained in
the previous section by the integral of the photopic curve,
0.106910937 mkm; for the other two, one needs to multiply by the
width of the window, 0.4 mkm and 0.3 mkm respectively. The resulting
values are 327.7437, 1226.23, and 919.673 photon/mkm^2 as the flow of
18% gray normally exposed for 100 ISO film.

However, since the photopic curve does not produce any particularly
spectacular artistic effect, it makes sense to build the sensor
with the maximal possible sensitivity, and achieve the photopic
response (if needed) by application of a suitable on-the-lens filter.
So we ignore the first value, and use the other two. For example,
the smaller value gives a photon Poisson noise S/N ratio of 21.5 with
a square cell of 0.70896 mkm. The larger window, 0.4 mkm, results in
a square cell of 0.613977 mkm. These are the smallest possible cell
sizes which can provide the required S/N ratio at an exposure
suitable for 100 ISO film.

To have a 1600 ISO sensor, these linear sizes should be quadrupled
(the exposure is 16 times smaller, so the cell area must grow 16
times); a 16MP 3:2-ratio sensor based on the 0.4 mkm spectral window
then comes out as a 12x8mm sensor.
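
The arithmetic of this section, as a short sketch in Python (4899x3266
is just a 16MP grid at 3:2):

    need = 21.5**2                     # ~462 photons per cell, assumptions (d), (j)
    for flux in (919.673, 1226.23):    # photon/mkm^2: the 0.3 and 0.4 mkm windows
        cell = (need / flux) ** 0.5    # smallest square cell at 100 ISO, mkm
        print(cell, 4 * cell)          # x4 linear (16x area) for 1600 ISO
    # With the 0.4 mkm window at 1600 ISO:
    print(4899 * 4 * 0.613977 / 1000, 3266 * 4 * 0.613977 / 1000)   # ~12.0 x 8.0 mm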

OPTIMIZING THE COLOR MASK
~~~~~~~~~~~~~~~~~~~~~~~~~

For a color sensor, theoretical estimates are complicated by the
following issue: different collections of spectral curves for the
filter mask can result in identical sensor signals after suitable
post-processing. (This ignores noise and de-mosaicing artefacts.)
Indeed, taking a linear combination of the R,G,B cells is equivalent
to substituting the transparency curves for mask filters by the
corresponding linear combination. (This assumes the linear
combination curve fits between 0 and 1.)

As we saw in the B&W SENSOR section, a more transparent filter
results in higher S/N at the cell; if the filter is close to
transparent, the cell's signal is close to luminance, thus higher
transparency results in an improvement of luminance noise.

To estimate color reproduction, take the spectral sensitivity curves
of the different types of sensor cells. Ideally, 3 linear
combinations of these curves should match the spectral sensitivity
curves of the cones in human eyes. Assuming 3 different types of
sensor cells, this shows that the spectral curves of the cells should
be linear combinations of the spectral sensitivity curves of the
cones. In principle, any 3 independent linear combinations can be
used for the sensor curves; recalculation to RGB requires just the
application of
a suitable matrix. However, large matrix coefficients will result
in higher chrominance noise. (Recall that we assume that [due to
high transparency] the luminance is quite close to the signals
of the sensors, thus the matrix coefficients corresponding to
luminance can't be large; hence all that large matrix coefficients
can do is contribute to CHROMINANCE noise.)

Without knowing exactly how the eye reacts to chrominance and
luminance noise, it is impossible to optimize the sensor structure;
however, one particular sensor structure is "logical" enough to be close to
optimal: take 2 filters in a 2x2 filter matrix to be as transparent
as possible while remaining a linear combination of cone curves.
It is natural to call this particular spectral curve the W=R+G+B
curve. Take the two other filters to be as far as possible from W
(and from each other) while keeping high transparency; in particular,
keep the most powerful (in terms of photon count) G channel, and
remove one of the R and B channels; this may result, for example, in
the following filter matrix

W Y W Y W Y W Y
C W C W C W C W
W Y W Y W Y W Y
C W C W C W C W

here C=G+B, Y=R+G. Since the post-processing matrix R=W-C, B=W-Y,
G=C+Y-W does not have large matrix coefficients, the increase in
chrominance noise is not significant.
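
The recovery and the size of its coefficients can be checked directly
(a sketch in Python):

    import numpy as np
    # Rows give the three sensor signals in terms of (R, G, B):
    A = np.array([[1, 1, 1],    # W = R+G+B
                  [0, 1, 1],    # C = G+B
                  [1, 1, 0]])   # Y = R+G
    M = np.linalg.inv(A)        # maps (W, C, Y) back to (R, G, B)
    print(M)   # rows: R = W-C, G = -W+C+Y, B = W-Y; all entries in {-1, 0, 1}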

Above, W means the combination of the cone sensitivity curves with
maximal integral among (physically possible) combinations whose
maximal transparency is 1. While we cannot conclude that this
results in the optimal mask, recall the following elementary fact:
near a maximum the derivative vanishes, so to estimate the maximal
*value* f(xMAX) one can make quite large errors in the *argument* x
and still get a good approximation. Thus choosing the matrix above
gives a pessimistic estimate, AND one should expect that it is not
very far off the correct one.

TRANSPARENCY OF THE COLOR MASK
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Actually, what are called R, G, B in colorimetry are in turn linear
combinations of the responses of cones. Use the cone sensitivity
curves from
http://www.rwc.uc.edu/koehler/biophys/6d.html

Now use RR, GG, and BB to denote *these* curves, not the "usual"
R, G, B of colorimetry. Since I could not find these data in table
form, the values below are not the maximal possible, but just the
first opportunities which came to mind.

Using 0.9RR+0.35GG, one gets a quite flat curve; one may assume that
in the range 0.42--0.65 mkm the sensitivity is above 0.9, with the
value at 700nm going down to 0.6, and at 400nm down to 0.8. So a
filter "compatible" with the cone sensitivity curves can easily
achieve 0.9 transparency in the range 400--700nm, which would give a
photon count of 827.705822 photon/mkm^2 in the W (R+G+B) type cell.
Taking the GG and 0.9RR+0.35BB curves as the other types of sensors,
one gets average transparencies of about 0.8 and 0.85. Taking the
average transparency of the filter over a 2x2 WCWY matrix cell to be
0.85, one gets the photon count averaged over the different kinds of
color-sensitive cells as 781.722165 photon/mkm^2.

As above, we assume that this average photon count is the count
contributing to the luminance noise.

FINAL ESTIMATES
~~~~~~~~~~~~~~~

With the above average photon count at a cell, to get S/N ratio 21.5
one needs a square cell of 0.768975 mkm. Recall that this is the
smallest possible cell which can provide the required S/N ratio
at an exposure suitable for 100 ISO film.

Quadrupling the cell size to get sensitivity 1600 ISO, and taking the
12MP equivalent of 36x24mm Velvia 50, one gets the 13 x 8.7 mm sensor.
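
Again the arithmetic, as a sketch in Python (4243x2828 is a ~12MP
grid at 3:2):

    flux = 919.673 * 0.85              # ~781.7 photon/mkm^2 through the mask
    cell = (21.5**2 / flux) ** 0.5     # ~0.769 mkm square cell at 100 ISO
    print(4243 * 4 * cell / 1000, 2828 * 4 * cell / 1000)   # ~13.1 x 8.7 mm, 1600 ISO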

HOW GOOD CAN A 36x24mm SENSOR GO?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the other direction, a 36x24 mm color sensor at sensitivity
1600 ISO can (theoretically) be equivalent to (or better than)
10 x 6.6 cm Velvia 50 film; that is 1/2 frame of 4x5 in film. In yet
other words, take a 36x24mm sensor with resolution and noise better
than 4x5in Velvia 50 film; it has a theoretical maximum sensitivity
of 800 ISO. Likewise, to achieve the resolution and noise of 8x10in
Velvia 50 film, the maximal sensitivity of a 36x24mm sensor is
200 ISO.

THE QUESTION OF LENSES
~~~~~~~~~~~~~~~~~~~~~~

Of course, the preceding section completely ignores the issue of
lenses; on the other hand, a cheap prosumer zoom lens with a
28--200mm equivalent range paired with a digital sensor easily gives
a resolution of 3.3mkm per single line (with a usable image diameter
of about 11mm, see
http://www.dpreview.com/reviews/konicaminoltaa200/page1...
); so we know it is practically possible to create a lens which
saturates the theoretical resolution of the 1600 ISO sensor (but
probably not the 800 ISO and 200 ISO sensors!). It is natural to
expect that a non-zoom lens could saturate the resolution of the
800 ISO sensor.

This gives the theoretical resolution limit of a "practical" lens +
800 ISO digital 36x24mm sensor: it is equivalent to the best 4x5in
50 ISO film (with a non-zoom lens). With a zoom lens, one can achieve
the quality of 2.5x4in 50 ISO film; the sensor is at 1600 ISO, the
lens is a 28--200mm zoom.

Some more estimates of how practical is "practical": the zoom
mentioned above is bundled with a $600 street price camera which
weighs about 580g. Assume the lens takes 1/2 of the price, and
1/4 of the weight. Rescaling from the 11mm diagonal image size to
the 36x24mm image size will increase the price to $70K--$280K
(assuming that price is proportional to the 4th-5th power of the
size [these numbers were applicable 20 years ago, I do not know what
holds today]), and will increase the weight to 9kg.
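
The rescaling arithmetic, as a sketch in Python; the $300 and 145g
lens shares are the assumptions just made, and the 4th-5th power law
is the old reference quoted above:

    k = (36**2 + 24**2) ** 0.5 / 11.0   # ~3.93: 43.3mm vs 11mm image diagonals
    price, weight = 300.0, 145.0        # assumed lens share of price ($) and weight (g)
    print(price * k**4, price * k**5)   # ~$72K .. ~$283K
    print(weight * k**3 / 1000)         # ~8.8 kg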

On the other hand, the 4:3 aspect ratio sensor of the same area as
the above-mentioned 13 x 8.6 mm sensor (the 1600 ISO sensor
equivalent in quality to Velvia 50 at 36x24mm) is 12.2 x 9.17mm, with
a diagonal of 15.26mm. It is a 0.9'' sensor (in the current - silly -
notation).

Rescaling the above-mentioned lens to this size gives a lens price of
$1100--$1500, and a weight of about 750g; both quite "reasonable".
Recall that this 28--200 equivalent zoom lens will saturate the
resolution of an equivalent of Velvia 50 36x24mm film.
Anonymous
March 6, 2005 11:50:37 AM

A very nice write-up; I will admit I have not gone through all of it
yet in detail. One thing to consider is that CCDs have a read-out
noise of around 10 electrons. While this noise level will not greatly
affect the signal-to-noise when looking at 400 detected photons with
a noise level of 20 electrons, it will start to dominate in darker
parts of the scene. For instance, by the time you are down 5 stops
from full white the readout noise will be larger than the photon
noise, by a small amount.

The idea of using non-RGB filters is sound and a number of CCD sensors
have used filters more like C, Y and M. Why RGB is used on digital
cameras I am not sure.

Scott
Anonymous
March 9, 2005 3:52:16 AM

[A complimentary Cc of this posting was sent to
Scott W
<biphoto@hotmail.com>], who wrote in article <1110125916.657251.135140@z14g2000cwz.googlegroups.com>:
> A very nice write-up; I will admit I have not gone through all of it
> yet in detail. One thing to consider is that CCDs have a read-out
> noise of around 10 electrons. While this noise level will not greatly
> affect the signal-to-noise when looking at 400 detected photons with
> a noise level of 20 electrons, it will start to dominate in darker
> parts of the scene. For instance, by the time you are down 5 stops
> from full white the readout noise will be larger than the photon
> noise, by a small amount.

This is a very valid remark. However, note that these were
*theoretical* estimates; after translation into this language your
remark becomes:

Readout noise should be decreased too; otherwise shadow noise is
going to be well above the Poisson noise.

Thanks,
Ilya
Anonymous
March 9, 2005 3:18:13 PM

Scott W <biphoto@hotmail.com> wrote:
> The idea of using non-RGB filters is sound and a number of CCD sensors
> have used filters more like C, Y and M. Why RGB is used on digital
> cameras I am not sure.

My guess is that to do otherwise would increase the chroma noise too
much. Chroma noise in digital cameras at high ISO is already
intrusive, and anything that increases it may be unwelcome, even if
sensitivity improves. Without direct experimental data it's hard to
say.

The other issue is how well non-RGB filters could be made to
approximate the colour matching functions of typical display systems.
Red and green are quite well matched by sensors of a typical camera,
but the blue is quite a way off because its spectral sensitivity is
too broad.[1] It would be a matter of measuring some physically
realizable filters and seeing what colour matching functions resulted.

Andrew.

[1] The Reproduction of Colour, 6th Edition, Robert Hunt, p556.
Anonymous
March 9, 2005 11:10:39 PM

Ilya Zakharevich wrote:

> PHOTON FLOW OF NORMAL EXPOSURE
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> First, we need to recalculate the 0.08 lux-sec exposure into a
> photon flow.
>
> Assuming const/lambda energy spectral density (assumption b),
> integration against the photopic curve gives const*0.192344740294
> as the filtered flow. With a constant photon spectral density of
> 1 photon/(sec*mkm*m^2), const = h * c, so the eye-corrected energy
> flow is 3.82082403851941e-20 W/m^2 = 2.60962281830876e-17 lux.
What unit is 'mkm', wavenumber?
-- Hans
Anonymous
March 10, 2005 12:28:43 PM

[A complimentary Cc of this posting was sent to

<andrew29@littlepinkcloud.invalid>], who wrote in article <112tqc5260hi724@news.supernews.com>:
> My guess is that to do otherwise would increase the chroma noise too
> much. Chroma noise in digital cameras at high ISO is already
> intrusive, and anything that increases it may be unwelcome, even if
> sensitivity improves. Without direct experimental data it's hard to
> say.

When I look at digital images of a gray surface (those of the
"compare two cameras" kind), it looks like my perception of noise is
not related to chrominance noise at all. At least a camera with
higher measured individual-channel R/G/B noise can produce much lower
visible noise if its noise reduction algorithm favors luminance noise
(as confirmed by the luminance noise graph). Of course, this is in no
way a scientific conclusion, but I may have seen about ten such
comparisons...

> The other issue is how well non-RGB filters could be made to
> approximate the colour matching functions of typical display
> systems.

AFAIU, this has nothing to do with the display (output) system, but
only with the input system (cones). As long as the filters match the
cones, you can postprocess colors into *any* display system (if the
initial color is in the gamut of the display system).

And if you do not match the cone sensitivity, colors which look the
same will become different when stored. After this no amount of
post-processing will be able to fix it.

Hope this helps,
Ilya
Anonymous
March 10, 2005 12:32:28 PM

[A complimentary Cc of this posting was sent to
HvdV
<nohanz@svi.nl>], who wrote in article <3059$422f4a27$3e3aaa83$633@news.versatel.net>:
> > as the filtered flow. With a constant photon spectral density of
> > 1 photon/(sec*mkm*m^2), const = h * c, so the eye-corrected energy
> > flow is 3.82082403851941e-20 W/m^2 = 2.60962281830876e-17 lux.
> What unit is 'mkm', wavenumber?

Yes (?); wavelength. IIRC, wavenumber is 1/wavelength (or some such;
2pi comes to mind...).

[BTW, because of the non-linearity of wavelength vs wavenumber, a
spectral density which is constant per wavelength becomes very
non-constant when measured per wavenumber.]

Yours,
Ilya
Anonymous
March 11, 2005 6:39:49 AM

andrew29@littlepinkcloud.invalid writes:

>[1] The Reproduction of Colour, 6th Edition, Robert Hunt, p556.

I didn't realize that the 6th edition was out. Does it say how it
differs from the previous edition (e.g. in a preface?).

I do have the 3rd, 4th, and 5th editions already, but they don't cover
digital imaging much.

Dave
Anonymous
March 12, 2005 12:37:57 AM

Hi Ilya,
>
>
> Yes (?); Wavelength. IIRC, wavenumber is 1/wavelength (or some such;
> 2pi comes to mind...).
Yes, wavenumber is 2 * pi / lambda, units m^-1
>
> [BTW, because of the non-linearity of wavelength vs wavenumber, a
> spectral density which is constant per wavelength becomes very
> non-constant when measured per wavenumber.]
Yes, but the choice is arbitrary. Since wavenumber is proportional to
photon energy, an interesting quantity for many applications,
spectroscopy people tend towards wavenumber, whereas optics people
like wavelength since resolving power scales with it.

BTW, can you substantiate your interesting assumption:
----
In any decent photographic system the most important component
of performance/price ratio is the lenses. Since the price of the
lens scales as 4th or 5th power of its linear size, decreasing
the size of the sensor (while keeping S/N ratio) may lead to
very significant improvements of performance/price.
---
with some examples?
The tradeoff of lens aperture and expense vs sensor size determines
ultimately the size and shape of the digital camera. After the 'fashion
factor' of course.

-- hans
Anonymous
March 12, 2005 10:12:34 PM

[A complimentary Cc of this posting was sent to

<andrew29@littlepinkcloud.invalid>], who wrote in article <112tqc5260hi724@news.supernews.com>:
> Scott W <biphoto@hotmail.com> wrote:
> > The idea of using non-RGB filters is sound and a number of CCD sensors
> > have used filters more like C, Y and M. Why RGB is used on digital
> > cameras I am not sure.
>
> My guess is that to do otherwise would increase the chroma noise too
> much. Chroma noise in digital cameras at high ISO is already
> intrusive, and anything that increases it may be unwelcome, even if
> sensitivity improves. Without direct experimental data it's hard to
> say.

Judge for yourself: visit

http://ilyaz.org/photo/random-noise

Yours,
Ilya
Anonymous
March 12, 2005 10:26:06 PM

[A complimentary Cc of this posting was sent to
HvdV
<nohanz@svi.nl>], who wrote in article <7ac8b$4232019c$3e3aaa83$3472@news.versatel.net>:
> In any decent photographic system the most important component
> of performance/price ratio is the lenses. Since the price of the
> lens scales as 4th or 5th power of its linear size, decreasing
> the size of the sensor (while keeping S/N ratio) may lead to
> very significant improvements of performance/price.
> ---
> with some examples?
> The tradeoff of lens aperture and expense vs sensor size determines
> ultimately the size and shape of the digital camera. After the 'fashion
> factor' of course.

a) First of all, my assumption on how rescaling the lens affects
image quality was "incomplete" (read: wrong ;-). The part of the
fuzziness due to diffraction does not change; but the part of the
fuzziness due to optical imperfection scales up with the lens's
linear size (since all the light rays passing through the system
scale up, the spot in the focal plane which is the diffraction-less
image of a point source will scale up as well). [See the sketch
after item (c) below.]

This has two effects: the sweet spot (in F-stops) scales up (i.e.,
for the worse) as sqrt(size); and the best resolution scales down
as 1/sqrt(size). So my estimates of a "perfect lens" for an ideal
36x24mm sensor were wrong, since I erroneously assumed that the
sweet spot does not change.

b) One corollary is that when you scale the sensor size AND LENS up n
times, it makes sense to scale up the size of the pixel sqrt(n)
times. In other words, you should increase the sensitivity of the
sensor and the number of pixels both by the same amount - n times.
Interesting...

c) The estimates of price vs. size: IIRC, this was from a review in a
technical magazine on optical production ("Scientific publications
of LOMO" or some such) at the end of the '80s. Since the technology
could have changed meanwhile (digitally-controlled machinery?), the
numbers could have changed...
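
[A minimal numerical sketch of the claim in (a), under the additional
assumption that at a given scale the geometric blur shrinks as 1/F
while the diffraction blur grows as F; in Python:]

    import numpy as np
    F = np.linspace(0.5, 20, 100001)
    for size in (1.0, 4.0):
        blur = size / F + F        # geometric + diffraction blur, arbitrary units
        i = np.argmin(blur)
        print(F[i], blur[i])       # sweet spot ~sqrt(size), blur ~2*sqrt(size)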

Hope this helps,
Ilya
Anonymous
March 12, 2005 10:27:17 PM

Ilya Zakharevich wrote:
[]
> Judge for yourself: visit
>
> http://ilyaz.org/photo/random-noise
>
> Yours,
> Ilya

Grey is one colour to test this on - what about a more sensitive colour
like skin-tones?

Cheers,
David
Anonymous
March 12, 2005 11:46:33 PM

[A complimentary Cc of this posting was sent to
David J Taylor
<david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk>], who wrote in article <poHYd.3079$QN1.2097@text.news.blueyonder.co.uk>:
> > Judge for yourself: visit
> >
> > http://ilyaz.org/photo/random-noise

> Grey is one colour to test this on - what about a more sensitive colour
> like skin-tones?

The script is there. Feel free to edit it to change the base value.
Or just modify the .png by adding a constant bias...

Yours,
Ilya
Anonymous
March 13, 2005 4:22:13 AM

[A complimentary Cc of this posting was sent to
Scott W
<biphoto@hotmail.com>], who wrote in article <1110125916.657251.135140@z14g2000cwz.googlegroups.com>:
> A very nice write-up; I will admit I have not gone through all of it
> yet in detail. One thing to consider is that CCDs have a read-out
> noise of around 10 electrons. While this noise level will not greatly
> affect the signal-to-noise when looking at 400 detected photons with
> a noise level of 20 electrons, it will start to dominate in darker
> parts of the scene. For instance, by the time you are down 5 stops
> from full white the readout noise will be larger than the photon
> noise, by a small amount.

On second thought, maybe this issue is not as crucial as it may
sound. Remember that 12 electrons of readout noise are present on the
Mark II, and its 800 ISO setting is "considered nice". It has S/N=28
at Zone V; so the electron noise at Zone III should be about 13
electrons; while 12 electrons of readout noise will increase this to
a total of about 17 electrons, we must conclude that such noise
(S/N=9) at Zone III is not very bad. Likewise for Zones II and I.
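
[In code - independent noise sources add in quadrature; 13 and 12
electrons are the round numbers above:]

    import math
    shot, read = 13.0, 12.0          # Zone III photon noise and readout noise, e-
    total = math.hypot(shot, read)   # ~17.7 electrons combined
    print(total, shot**2 / total)    # signal 13^2 = 169 e- gives S/N ~9.5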

So: either the Mark II produces noticeable noise in Zones I--III, or
a readout noise of 12 electrons is already small enough to be "not
important".

Yours,
Ilya
Anonymous
March 13, 2005 4:30:37 AM

[A complimentary Cc of this posting was NOT [per weedlist] sent to
Ilya Zakharevich
<nospam-abuse@ilyaz.org>], who wrote in article <d0vkf9$1io5$1@agate.berkeley.edu>:
> > > Judge for yourself: visit

> > > http://ilyaz.org/photo/random-noise

> > Grey is one colour to test this on - what about a more sensitive colour
> > like skin-tones?

> The script is there. Feel free to edit it to change the base value.
> Or just modify the .png by adding a constant bias...

Actually, it may be a little bit more than just changing the base
value. Luminance is calculable from luma only very close to neutral
gray; thus a skin tone with luma-less noise may still have
significant luminance noise.

One needs to experiment with both constant-luma noise and
constant-luminance noise, and see which one is less perceptible to
the eye. Summary: one may also need to modify the vector 0.2126
0.7152 0.0722 to take gamma into account (via derivatives of x^2.2 at
the R'G'B' values of the skin tone).

Yours,
Ilya
March 13, 2005 4:30:38 AM

Ilya Zakharevich wrote:
>
>>>>Judge for yourself: visit
>
>
>>>> http://ilyaz.org/photo/random-noise
>
>
> ...Luminance is calculable from luma only very close to neutral
> gray; thus a skin tone with luma-less noise may still have
> significant luminance noise.
>
> One needs to experiment with both constant-luma noise and
> constant-luminance noise, and see which one is less perceptible to
> the eye. Summary: one may also need to modify the vector 0.2126
> 0.7152 0.0722 to take gamma into account (via derivatives of x^2.2 at
> the R'G'B' values of the skin tone).

Any chance of an executive summary of this study? I just cannot see
what the exercise is all about.

The Photoshop RAW converter has color (chrominance) & regular
(luminance) noise reduction, & I noticed the color noise reduction
does almost nothing. It seems you are saying color noise is indeed
insubstantial in comparison, but maybe I'm missing the boat on that?

thanks!
Anonymous
March 13, 2005 12:34:50 PM

Ilya Zakharevich wrote:
> [A complimentary Cc of this posting was NOT [per weedlist] sent to
> Ilya Zakharevich
> <nospam-abuse@ilyaz.org>], who wrote in article
> <d0vkf9$1io5$1@agate.berkeley.edu>:
>>>> Judge for yourself: visit
>
>>>> http://ilyaz.org/photo/random-noise
>
>>> Grey is one colour to test this on - what about a more sensitive
>>> colour like skin-tones?
>
>> The script is there. Feel free to edit it to change the base value.
>> Or just modify the .png by adding a constant bias...
>
> Actually, it may be a little bit more than just changing the base
> value. Luminance is calculable from luma only very close to neutral
> gray; thus a skin tone with luma-less noise may still have
> significant luminance noise.
>
> One needs to experiment with both constant-luma noise and
> constant-luminance noise, and see which one is less perceptible to
> the eye. Summary: one may also need to modify the vector 0.2126
> 0.7152 0.0722 to take gamma into account (via derivatives of x^2.2 at
> the R'G'B' values of the skin tone).
>
> Yours,
> Ilya

Thanks, Ilya. I don't have the time to do detailed work on this right
now, but at least I hope it triggers /someone/ to check this out. Your
comments about the gamma remind me of the "constant luminance failure"
errors in colour TV - takes me back a long time.

http://www.poynton.com/notes/video/Constant_luminance.h...

Cheers,
David
Anonymous
March 13, 2005 10:39:26 PM

[A complimentary Cc of this posting was sent to
paul
<paul@not.net>], who wrote in article <8bSdnaijPZrUXa7fRVn-sg@speakeasy.net>:
> >>>>Judge for yourself: visit

> >>>> http://ilyaz.org/photo/random-noise

> Any chance of an executive summary of this study? I just cannot see
> what the exercise is all about.

Did you see the pictures at the URL above?

> The Photoshop RAW converter has color (chrominance) & regular
> (luminance) noise reduction, & I noticed the color noise reduction
> does almost nothing. It seems you are saying color noise is indeed
> insubstantial in comparison, but maybe I'm missing the boat on that?

As I see the pictures, the eye's sensitivity to chrominance noise is
not much higher than 10% of its sensitivity to luminance noise. [But
my eyes are kinda special, so I would appreciate it if somebody else
- with normal vision - confirmed this.]

Yours,
Ilya
March 13, 2005 10:39:27 PM

Ilya Zakharevich wrote:
> [A complimentary Cc of this posting was sent to
> paul
> <paul@not.net>], who wrote in article <8bSdnaijPZrUXa7fRVn-sg@speakeasy.net>:
>
>>>>>>Judge for yourself: visit
>
>
>>>>>>http://ilyaz.org/photo/random-noise
>
>
>>Any chance of an executive summary of this study? I just cannot see
>>what the exercise is all about.
>
>
> Did you see the pictures at the URL above?
>
>
>>The Photoshop RAW converter has color (chrominance) & regular
>>(luminance) noise reduction, & I noticed the color noise reduction
>>does almost nothing. It seems you are saying color noise is indeed
>>insubstantial in comparison, but maybe I'm missing the boat on that?
>
>
> As I see the pictures, the eye's sensitivity to chrominance noise is
> not much higher than 10% of its sensitivity to luminance noise. [But
> my eyes are kinda special, so I would appreciate it if somebody else
> - with normal vision - confirmed this.]

So that's equal noise on left & right? No doubt the left looks 90% more
noisy. I suppose if I zoomed way in, I could see the color noise.
Anonymous
March 14, 2005 3:03:32 AM

Hi Ilya,
> [A complimentary Cc of this posting was sent to
> HvdV
> <nohanz@svi.nl>], who wrote in article <7ac8b$4232019c$3e3aaa83$3472@news.versatel.net>:
Substitute 'hans' for 'nohanz', sorry for the paranoia.

>>In any decent photographic system the most important component
>>of performance/price ratio is the lenses. Since the price of the
>>lens scales as 4th or 5th power of its linear size, decreasing
>>the size of the sensor (while keeping S/N ratio) may lead to
>>very significant improvements of performance/price.
>>---
>>with some examples?
>>The tradeoff of lens aperture and expense vs sensor size determines
>>ultimately the size and shape of the digital camera. After the 'fashion
>>factor' of course.
>
>
> a) First of all, my assumption on how rescaling the lens affects
> image quality was "incomplete" (read: wrong ;-). The part of the
> fuzziness due to diffraction does not change; but the part of the
> fuzziness due to optical imperfection scales up with the lens's
> linear size (since all the light rays passing through the system
> scale up, the spot in the focal plane which is the diffraction-less
> image of a point source will scale up as well).
>
> This has two effects: the sweet spot (in F-stops) scales up (i.e.,
> for the worse) as sqrt(size); and the best resolution scales down
> as 1/sqrt(size). So my estimates of a "perfect lens" for an ideal
> 36x24mm sensor were wrong, since I erroneously assumed that the
> sweet spot does not change.
Hm, not so sure you were very wrong. I don't know much about lens
design, but I do know errors like spherical aberration scale up in a
non-linear fashion if you increase the aperture. And that's only one
of the many errors. Then there are amplifying economic factors like a
much smaller lens copy number.
BTW, if you keep the aperture constant the diffraction spot stays the
same. It scales with the wavelength, the sine of the half-aperture
angle, and, for completeness, also the refractive index of the medium.
>
> b) One corollary is that when you scale the sensor size AND LENS up n
> times, it makes sense to scale up the size of the pixel sqrt(n)
> times. In other words, you should increase the sensitivity of the
> sensor and the number of pixels both by the same amount - n times.
> Interesting...
Sizing up the lens and sensor gets you more information about the object,
with the square of the scale. You can average that information with bigger
pixels to get a better SNR, but you could do that also in postprocessing.
>
> c) The estimates of price vs. size: IIRC, this was from a review in a
> technical magazine on optical production ("Scientific publications
> of LOMO" or some such) at the end of the '80s. Since the technology
> could have changed meanwhile (digitally-controlled machinery?), the
> numbers could have changed...
It's clear that it is cheaper now to make aspherical lenses, and
there are also new glasses available.
I was hoping for a plot of lenses with similar view angles, with the
formats on the horizontal axis and the price on the vertical. I guess
it should be possible to dig this out of eBay...

-- Hans
Anonymous
March 15, 2005 12:54:34 AM

[A complimentary Cc of this posting was sent to
paul
<paul@not.net>], who wrote in article <Mp-dnSpYkaAxA6nfRVn-3Q@speakeasy.net>:
> > As I see the pictures, the eye's sensitivity to chrominance noise is
> > not much higher than 10% of its sensitivity to luminance noise. [But
> > my eyes are kinda special, so I would appreciate it if somebody else
> > - with normal vision - confirmed this.]

> So that's equal noise on left & right? No doubt the left looks 90% more
> noisy.

You mean "LESS noisy"?

No, it is not "equal noise". I'm afraid you need to read the
explanation at the beginning. In addition to "equal noise" making
little sense, as you can easily see, the noise on the right is
*different* at the top and at the bottom.

But the numerical noise on the left "is close" to the numerical noise
at the bottom of the right. The visual noise is an order of magnitude
less...

Yours,
Ilya
Anonymous
March 15, 2005 1:06:19 AM

[A complimentary Cc of this posting was sent to
HvdV
<nohanz@svi.nl>], who wrote in article <91fe8$4234c6bb$3e3aaa83$31509@news.versatel.net>:
> Hm, not so sure you were very wrong. I don't know much about lens
> design, but I do know errors like spherical aberration scale up in a
> non-linear fashion if you increase the aperture.

Sure, but I was not talking about an increase of aperture. I was
talking about using the same lens design *geometrically rescaled* for
a larger sensor size.

> BTW, if you keep the aperture constant the diffraction spot stays the
> same. It scales with the wavelength, the sine of the half-aperture
> angle, and, for completeness, also the refractive index of the medium.

Yes, this is what I wrote (not in so many words, though ;-).

> > b) One corollary is that when you scale the sensor size AND LENS up n
> > times, it makes sense to scale up the size of the pixel sqrt(n)
> > times. In other words, you should increase the sensitivity of the
> > sensor and the number of pixels both by the same amount - n times.
> > Interesting...

> Sizing up the lens and sensor gets you more information about the object,
> with the square of the scale. You can average that information with bigger
> pixels to get a better SNR, but you could do that also in postprocessing.

It does not always make sense to do it in postprocessing; in the
absence of readout noise more pixels can be recalculated into fewer
pixels losslessly, but I'm not sure that it is easy to reduce readout
noise...

Even without readout noise, assuming that it does not make sense to
rasterize at a resolution (e.g.) 3 times higher than the resolution
of the lens, when you rescale your lens+sensor (keeping the lens
design), you had better rescale the pixel count and sensitivity the
same amount.

[Additional assumption: the sweet spot is not better than the maximal
aperture of the lens. E.g., the current prosumer 8MP lenses have the
sweet spot at maximal aperture; so if you rescale this design *down*,
the law above does not hold. BTW, rescaling them up from a 2/3''
sensor to a 36x24mm sensor (a 3.93x rescale) will give, e.g., a
28--200 F2.8 zoom with corner-to-corner high resolution and the sweet
spot at 5.6.]

Yours,
Ilya
Anonymous
March 16, 2005 2:03:25 AM

Ilya Zakharevich wrote:
>
> Sure, but I was not talking about an increase of aperture. I was
> talking about using the same lens design *geometrically rescaled*
> for a larger sensor size.
Sorry for not making myself clear; what I meant was that if you scale
up, keeping the aperture angle constant, aberrations will act up.
Suppose for a small lens you have a manageable deviation from a
spherical wave of Pi/4 phase error; then for a twice larger system
parts of the wave will arrive out of phase at the focus, seriously
affecting your resolution. Likely the error is due to production
flaws and imperfect design. To force back the phase error both the
production techniques and the design must be improved. That suggests
that your earlier assumption of steeply rising production costs is
true.
If you compare 35mm to MF lenses you see that for the same view angle
lenses tend to have higher f-numbers and are much more expensive, 4x?
>
>
> It does not always make sense to do it in postprocessing; in the
> absence of readout noise more pixels can be recalculated into fewer
> pixels losslessly, but I'm not sure that it is easy to reduce readout
> noise...
CCDs for low light applications are usually capable of binning pixels to get
around this. I don't know whether this technique is used in any camera.
>
> Even without readout noise, assuming that it does not make sense to
> rasterize at a resolution (e.g.) 3 times higher than the resolution
> of the lens, when you rescale your lens+sensor (keeping the lens
> design), you had better rescale the pixel count and sensitivity the
> same amount.
When readout noise is not a key factor it is IMO better to match the
pixel size to the optical bandwidth, making anti-aliasing filters
superfluous. With all the image information in your computer it's
then up to the post-processing to figure out what the image was. Just
my hobby horse...
>
> [Additional assumption: the sweet spot is not better than the maximal
> aperture of the lens. E.g., the current prosumer 8MP lenses have the
> sweet spot at maximal aperture; so if you rescale this design *down*,
> the law above does not hold. BTW, rescaling them up from a 2/3''
Nice point!
But instead of getting cheaper, scaling down makes more outrageous
designs possible for the same price, in particular larger zoom ranges
with similar apertures. This leads to the feature battle where
manufacturers advertise the MP number and the zoom range.
> sensor to a 36x24mm sensor (a 3.93x rescale) will give, e.g., a
> 28--200 F2.8 zoom with corner-to-corner high resolution and the sweet
> spot at 5.6.]
If you scale up the 28--200 F2.0--F2.8 on the Sony 828 to 35mm you
indeed get something unaffordable.


-- Hans
Anonymous
March 16, 2005 2:03:26 AM

[A complimentary Cc of this posting was sent to
HvdV
<nohanz@svi.nl>], who wrote in article <42375BAD.8080906@svi.nl>:
> > Sure, but I was not talking about an increase of aperture. I was
> > talking about using the same lens design *geometrically rescaled*
> > for a larger sensor size.

> Sorry for not making myself clear; what I meant was that if you
> scale up, keeping the aperture angle constant, aberrations will act
> up. Suppose for a small lens you have a manageable deviation from a
> spherical wave of Pi/4 phase error; then for a twice larger system
> parts of the wave will arrive out of phase at the focus, seriously
> affecting your resolution.

I think we speak about the same issue using two different languages:
you discuss wave optics, I - geometric optics. You mention pi/4
phase; I discuss "the spot" at which rays going through different
places on the lens arrive.

Assume that "wave optic" = "geometric optic" + "diffration". Under
this assumption (which I used) your "vague" discription is
*quantified* by using the geometric optic language: "diffration"
"circle" does not change when you scale, while "geometric optic" spot
grows linearly with the size. This also quantifies the dependence of
the "sweet spot" and maximal resolution (both changing with
sqrt(size)).

So if the assumption holds, my approach is more convenient. ;-) And,
IIRC, it holds in most situations. [I will try to remember the math
behind this.]

> Likely the error is due to production flaws and imperfect design. To
> force back the phase error both the production techniques and the
> design must be improved. That suggests that your earlier assumption
> of steeply rising production costs is true.

Let us keep these two issues separate (as a customer in a restaurant
said: may I have the soup separate and the cockroaches separate?).
Rescaling of a *design* leads to a sqrt(size) increase in the sweet
spot; rescaling of *defects w.r.t. the design* leads to a steep
cost-vs-size curve...

> > Even without readout noise, assuming that it does not make sense to
> > rasterize at a resolution (e.g.) 3 times higher than the resolution
> > of the lens, when you rescale your lens+sensor (keeping the lens
> > design), you had better rescale the pixel count and sensitivity the
> > same amount.

> When readout noise is not a key factor it is IMO better to match the
> pixel size to the optical bandwidth, making anti-aliasing filters
> superfluous.

I assume that "matching" is as above: having sensor resolution "K
times the lense resolution", for some number K? IIRC, military air
reconnaissance photos were (Vietnam era?) scanned several times above
the optical resolution, and it mattered. [Likewise for this 700 MP IR
telescope?] Of course, increasing K you hit a return-of-investment
flat part pretty soon, this is why I had chosen this low example value
"3" above...

> > [Additional assumption: the sweet spot is not better than the maximal
> > aperture of the lens. E.g., the current prosumer 8MP lenses have the
> > sweet spot at maximal aperture; so if you rescale this design *down*,
> > the law above does not hold. BTW, rescaling them up from a 2/3'' sensor

> Nice point!
> But instead of getting cheaper, scaling down makes more outrageous
> designs possible for the same price, in particular larger zoom ranges
> with similar apertures. This leads to the feature battle where
> manufacturers advertise the MP number and the zoom range.

AFAIU, the current manufacturing gimmick is dSLRs. [If my analysis
is correct] in a year or two one can have a 1'' sensor with the same
performance as the Mark II (since sensors with QE=0.8 are in
production today, all you need is to scale the design to 12MP, and
use a "good" filter matrix). This would mean the 35mm world switching
to lenses which are 3 times smaller, 25 times lighter, and 100 times
cheaper (or, correspondingly, MUCH MUCH better optics).

My conjecture is that today's marketing is based on this "100 times
cheaper" dread. The manufacturers are trying to lure the public into
buying as many *current design* lenses as possible; they expect that
these lenses are going to be useless in a few years, so people will
need to change their optics again.

[While for professionals, who have tens of K$ invested in lenses,
dSLRs are very convenient, for Joe-the-public the EVFs of today are
much more practical; probably producers use the first fact to confuse
the Joes into buying dSLRs too; note the halt in EVF development
during the last half year, once they reached the point where they
started to compete with dSLRs, e.g., the KM A200 vs A2 down-grading.]

This is similar to DVDs today: during the last several months, with
Blu-ray in sight, studios have started to digitize films as if there
were no tomorrow...

Thanks for a very interesting discussion,
Ilya
Anonymous
March 23, 2005 12:09:28 AM

This discussion got a little bit too long. Here is a short summary.

A lot of people confuse counting photons with counting electrons.
This leads to statements like Roger Clark's

"these high-end cameras are reaching the limits of what
is possible from a theoretically perfect sensor."

Actually, even with sensor technology available now (for
mass-production), the sensitivity of the sensor he
considers can be improved 4.8 times; or the size can be
decreased 2.2 times without affecting MP count, sensitivity, and
noise.

Perfect Bayer filter sensors have an equivalent film sensitivity of
12000 ISO (taking the noise and resolution of Velvia 50 film as a
reference point). In other words, with such a sensor you get the
equivalent resolution and noise of Velvia 50 film with a 240 times
smaller exposure. For example, for the 36x24mm format one gets a
12Mpixel sensor with 8.5 mkm square sensels and sensitivity
12000 ISO (calculated to achieve a noise level better than that of
Velvia 50).
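
[A check of the 12000 figure, reusing the 0.768975 mkm 100 ISO cell
derived earlier in this thread:]

    cell = 36e3 / 4243                    # ~8.5 mkm: 12 MP at 3:2 across 36 mm
    print(100 * (cell / 0.768975) ** 2)   # ~12200, i.e. ~12000 ISO equivalent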

One illustration: since shooting with an aperture smaller than F/16
does not fully use the resolution of 8.5mkm sensels, for best results
one should use a daylight exposure similar to F/16, 1/12000 sec. (Of
course, one can also lower the sensitivity of the sensor by
controlling the ADC; this would decrease the noise. E.g., by
decreasing the sensitivity 8 times, one can achieve the noise of a
best-resolution 8x10in shot. However, I did not see any indication
that noise well below that given by Velvia 50 on 35mm film results in
any improvement of the image...)

Another illustration: the 8Mpixel sensor of the EOS 1D Mark II (recall
that it achieves the noise level of Velvia 50 at a sensitivity of
1200 ISO or above) has a "total" Quantum Efficiency of about 14.1%.

The "total" efficiency of "the sensor assembly" is the product of
average "quantum efficiency" (in other words, transparency) of the
cells of the Bayer filter, and the quantum efficiency of the actual
sensor. To distinguish colors, some photons MUST be captured by the
Bayer filter; however, it is easy to design a filter with average
transparency 85% or above. On the other hand, currently there are
mass-produced sensors with QE=0.8; combining such a sensor with such
a filter, one can get the "total" efficiency of 68%. Thus the
sensitivity of the sensor of EOS 1D Mark II can be improved 4.8 times
without using any new technology...

The last illustration: using the same value QE=0.8, a 12MP sensor
of size 8.8x6.6mm (this is a 2/3'' sensor) has a sensitivity of
655 ISO. (Again, this is with better resolution and noise than
35mm Velvia 50!)
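
[Both illustrations in three lines of Python; 0.141 and 781.722 are
the numbers quoted above:]

    print(0.8 * 0.85 / 0.141)                      # ~4.8x over the Mark II
    cell100 = (21.5**2 / 0.8 / 781.722) ** 0.5     # ~0.86 mkm cell at 100 ISO, QE=0.8
    print(100 * ((8.8e3 / 4000) / cell100) ** 2)   # 2.2 mkm cells -> ~655 ISO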

Recall that the 2/3'' format is especially nice, since an affordable
lens in this format should provide the same resolution as a very
expensive lens in the 35mm format. For example, compare a 35mm zoom
having the sweet spot at f/11 with a 2/3'' zoom having the sweet
spot at f/2.8; they have the same resolution at their sweet spots.
Recall also that the 2/3'' lenses have the same depth of field, a
much larger zoom range, allow 4x shorter exposures at the same
resolution, and due to their 4x smaller size are much easier to
image-stabilize on the sensor level.
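
[The f/11 vs f/2.8 pairing is just the ~3.93x rescale factor again:
an equal F-number/scale ratio gives equal blur relative to the frame,
and the same absolute aperture diameter gives the same depth of
field:]

    k = (36**2 + 24**2) ** 0.5 / 11.0   # ~3.93: 35mm vs 2/3'' image diagonals
    print(11 / k)                       # ~2.8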

P.S. One of the conclusions I make is that nowadays it does not make
sense to ask for an equivalent of digicams in terms of film cameras.
35mm film is not good enough to use the full potential of decent
35mm lenses; very soon it will be possible to produce affordable
sensors which will be able to exhaust the potential of these lenses.

It makes more sense to ask what kind of sensor "suits" a
particular lens best...

Ilya
Anonymous
March 23, 2005 1:26:09 AM

In article <d1q1i8$24rq$1@agate.berkeley.edu>, Ilya Zakharevich says...

> The "total" efficiency of "the sensor assembly" is the product of
> average "quantum efficiency" (in other words, transparency) of the
> cells of the Bayer filter, and the quantum efficiency of the actual
> sensor. To distinguish colors, some photons MUST be captured by the
> Bayer filter; however, it is easy to design a filter with average
> transparency 85% or above. On the other hand, currently there are
> mass-produced sensors with QE=0.8; combining such a sensor with such
> a filter, one can get the "total" efficiency of 68%. Thus the
> sensitivity of the sensor of EOS 1D Mark II can be improved 4.8 times
> without using any new technology...

Are you talking of front-illuminated or back-illuminated CCDs here?
--

Alfred Molon
------------------------------
Olympus 4040, 5050, 5060, 7070, 8080, E300 forum at
http://groups.yahoo.com/group/MyOlympus/
Olympus 8080 resource - http://myolympus.org/8080/
Anonymous
March 23, 2005 1:26:10 AM

[A complimentary Cc of this posting was sent to
Alfred Molon
<alfredREMOVE_molon@yahoo.com>], who wrote in article <MPG.1caaabdc6f415c6c98aa61@news.supernews.com>:
> In article <d1q1i8$24rq$1@agate.berkeley.edu>, Ilya Zakharevich says...
>
> > The "total" efficiency of "the sensor assembly" is the product of
> > average "quantum efficiency" (in other words, transparency) of the
> > cells of the Bayer filter, and the quantum efficiency of the actual
> > sensor. To distinguish colors, some photons MUST be captured by the
> > Bayer filter; however, it is easy to design a filter with average
> > transparency 85% or above. On the other hand, currently there are
> > mass-produced sensors with QE=0.8; combining such a sensor with such
> > a filter, one can get the "total" efficiency of 68%. Thus the
> > sensitivity of the sensor of EOS 1D Mark II can be improved 4.8 times
> > without using any new technology...
>
> Are you talking of front-illuminated or back-illuminated CCDs here?

Actually, what I saw was that both CCDs and CMOSes can "now" (it was
in papers of 2003 or 2004) achieve a QE of 80%. I do not remember
whether it was front- or back- for CCDs; probably back-. However, my
first impression was that front- with microlenses can give the same
performance as back-, can't it?

Yours,
Ilya
Anonymous
March 23, 2005 1:17:38 PM

In article <d1q4s5$25qu$1@agate.berkeley.edu>, Ilya Zakharevich says...

> > Are you talking of front-illuminated or back-illuminated CCDs here?
>
> Actually, what I saw was that both CCDs and CMOSes can "now" (it was
> in papers of 2003 or 2004) achieve a QE of 80%. I do not remember
> whether it was front- or back- for CCDs; probably back-. However, my
> first impression was that front- with microlenses can give the same
> performance as back-, can't it?

Usually front-illuminated CCDs have QEs in the range 20-30%, while back-
illuminated ones have QEs up to 100%.
--

Alfred Molon
------------------------------
Olympus 4040, 5050, 5060, 7070, 8080, E300 forum at
http://groups.yahoo.com/group/MyOlympus/
Olympus 8080 resource - http://myolympus.org/8080/
Anonymous
March 24, 2005 1:18:35 AM

[A complimentary Cc of this posting was sent to
Alfred Molon
<alfredREMOVE_molon@yahoo.com>], who wrote in article <MPG.1cab529fc45c40ee98aa63@news.supernews.com>:
> > Actually, what I saw was that both CCDs and CMOSes can "now" (it was
> > in papers of 2003 or 2004) achieve a QE of 80%. I do not remember
> > whether it was front- or back- for CCDs; probably back-. However, my
> > first impression was that front- with microlenses can give the same
> > performance as back-, can't it?

> Usually front-illuminated CCDs have QEs in the range 20-30%, while back-
> illuminated ones have QEs up to 100%.

Thanks; probably I was not paying enough attention when reading these
papers. Anyway, I also saw this 100% number quoted in many places,
but the actual graphs of QE vs. wavelength presented in the papers
were much closer to 80%...

Anyway, I would suppose that of this 4.84x which is the current
inefficiency (compared to a QE=0.8 sensor with a good Bayer matrix),
at least a factor of about 2..3 comes from using an RGB Bayer mask
(and I do not have the slightest idea why they use RGB). This gives
the QE of the "actual" sensor as closer to 30..40%. This is a kinda
strange number - too good for front-, too bad for back-. [Of course,
the actual sensor is CMOS ;-]

Are there actual back-illuminated sensors used in mass-production
digicams?

Thanks,
Ilya
Anonymous
March 24, 2005 3:51:07 PM

In article <d1spvr$2qj8$1@agate.berkeley.edu>, Ilya Zakharevich says...

> Are there actual back-illuminated sensors used in mass-production
> digicams?

To my knowledge no - they are all used for astronomy. The production
process involves thinning the CCD to around 10 micrometers (or
something very thin). Then the back side of the CCD, which does not
have all the layers with the circuitry that would obstruct light, is
used as the active side. But either the additional production process
is expensive or the resulting CCDs are too thin for mass production.
Try doing a Google search for "back illuminated CCDs".
--

Alfred Molon
------------------------------
Olympus 4040, 5050, 5060, 7070, 8080, E300 forum at
http://groups.yahoo.com/group/MyOlympus/
Olympus 8080 resource - http://myolympus.org/8080/
Anonymous
April 1, 2005 2:31:36 AM

Hi Ilya,

(took me a while to come back to this topic)
>
> I think we speak about the same issue using two different languages:
> you discuss wave optics, I - geometric optics. You mention pi/4 phase,
> I discuss "the spot" where rays going through different places on the
> lens come to.
>
> Assume that "wave optics" = "geometric optics" + "diffraction". Under
> this assumption (which I used) your "vague" description is
> *quantified* in the geometric-optics language: the "diffraction
> circle" does not change when you scale, while the "geometric optics"
> spot grows linearly with the size. This also quantifies the dependence
> of the "sweet spot" and maximal resolution (both changing with
> sqrt(size)).
You can use geometrical optics to compute optical path lengths from an object
to any location behind the lens, but to find out what intensity you get there
you need to sum all light contributing to that point and take its phase into
account.
The point I tried to make earlier is that the geometry scales but the
wavelength doesn't, so scaling up means scaling up phase errors. Take for
example a phase error caused by spherical aberration (SA) between rays
through the center of the lens and those from the rim, causing the rim rays
to be focused in front of the focal plane. Doubling the phase error will at
least double that distance, depending on the aperture angle. To understand
the wild pattern created by all the interfering phase-shifted rays you need
to do the summation mentioned above. All in all this causes quite non-linear
effects on the 2D spot size as you scale the lens, and it also seriously
affects the out-of-focus 3D shape of the spot, which relates to the bokeh.
If at the sweet spot (measured in f/d number) the size of the diffraction
spot balances against geometrical errors like chromatic aberration, scaling
the lens means, as you say, scaling the geometric spot. For the
unaberrated diffraction spot to match that you need to scale down
sin(aperture_angle), roughly d/f, linearly. However, camera lenses have many
aberrations which are very sensitive to a change in lens diameter. For
example, SA depends on the 4th power of the distance to the optical axis.
In short, I don't understand how you derive a sqrt(f/d) rule for this.

It might be possible to find such a rule empirically by comparing
existing lenses, but then you can't exclude design or manufacturing
changes. For the purpose of this thread that is good enough, though.
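
To see how the sweet spot can move with scale, here is a minimal Python
sketch of that balance. It is a toy model, not lens design: I assume the
dominant geometric blur behaves like k_geo*s/N^p for a design scaled by
a factor s at f-number N (the exponent p depends on which aberration
dominates), while the diffraction blur k_diff*N does not scale; all the
constants are made up.

    # Toy model, all constants illustrative.  Geometric blur is taken
    # as k_geo * s / N**p for a design scaled by s at f-number N; the
    # diffraction blur k_diff * N (~ lambda*N) does not scale with s.
    import numpy as np

    def sweet_spot(s, p, k_geo=1.0, k_diff=1.0):
        """f-number minimizing the combined blur for scale factor s."""
        N = np.linspace(0.5, 64.0, 200000)
        blur = np.hypot(k_geo * s / N**p, k_diff * N)
        return N[np.argmin(blur)]

    for p in (1, 3):      # chromatic-like vs. SA-like dominant term
        ratio = sweet_spot(4.0, p) / sweet_spot(1.0, p)
        print(f"p={p}: scaling the lens 4x moves the sweet spot "
              f"{ratio:.2f}x")
        # p=1 -> 2.00 (= sqrt(4));  p=3 -> 1.41 (= 4**0.25)

With p=1 (a crude chromatic-like term) the sweet spot moves as sqrt(s),
which would reproduce your rule; with p=3 (an SA-like term) it moves
only as s^(1/4). So the exponent depends entirely on the aberration mix.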
>
> So if the assumption holds, my approach is more convenient. ;-) And,
> IIRC, it holds in most situations. [I will try to remember the math
> behind this.]
Please do!

>>>Even without readout noise, assuming that it does not make sense to
>>>rasterize at a resolution (e.g.) 3 times higher than the resolution of
>>>the lens, when you rescale your lens+sensor (keeping the lens
>>>design), you had better rescale the pixel count and sensitivity by the
>>>same amount.
BTW, there are also devices like Electron Multiplying CCDs which tackle
that. There is no reason why these won't eventually appear in consumer
electronics.
>
>
>>When readout noise is not a key factor it is IMO better to match the
>>pixel size to the optical bandwidth, making anti-aliasing filters
>>superfluous.
>
>
> I assume that "matching" is as above: having sensor resolution "K
> times the lens resolution", for some number K? IIRC, military air
> reconnaissance photos were (Vietnam era?) scanned at several times the
> optical resolution, and it mattered. [Likewise for this 700 MP IR
> telescope?] Of course, as you increase K you hit the flat part of the
> return-on-investment curve pretty soon; this is why I chose the low
> example value "3" above...
'Resolution' is a rather vague term; usually it is taken as the
half-intensity width of the point spread function, or via the Rayleigh
criterion. Neither is the same as the highest spatial frequency passed
by the lens: for camera-type optics the 'resolution' period is a bit
(say 50%) larger than the period of the highest spatial frequency. In
principle it is enough to sample at twice that frequency, so with the
50% included your 3x is reproduced!
BTW, even a bad lens with a bloated PSF produces something up to the
bandlimit, so in that case the K factor will be even higher.
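
In numbers, a minimal sketch (the f/8 and 550 nm values are
illustrative assumptions, not data from this thread):

    # Sampling argument in numbers; f-number and wavelength assumed.
    lam = 0.55e-3                  # wavelength, mm (550 nm)
    N = 8.0                        # f-number
    nu_c = 1.0 / (lam * N)         # diffraction cutoff, cycles/mm
    pitch = 1.0 / (2.0 * nu_c)     # Nyquist pixel pitch, mm
    resolution = 1.5 / nu_c        # 'resolution' period, ~50% coarser
    print(f"cutoff {nu_c:.0f} cycles/mm, pitch {pitch * 1000:.1f} um, "
          f"{resolution / pitch:.0f} samples per resolution element")
    # -> cutoff 227 cycles/mm, pitch 2.2 um, 3 samples per element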
>
>
>
>
> AFAIU, the current manufacturing gimmick is dSLRs. [If my analysis is
yes, a sort of horse-drawn carriage with a motor instead of the horse...
> correct] in a year or two one can have a 1'' sensor with the same
> performance as Mark II (since sensors with QE=0.8 are in production
> today, all you need is to scale the design to 12MP, and use a "good"
> filter matrix). This would mean the 35mm world switching to lenses
> which are 3 times smaller, 25 times lighter, and 100 times cheaper (or
> correspondingly, MUCH MUCH better optics).
To keep sensitivity when scaling down the sensor (keeping the pixel
count, and not being able to gain sensitivity), you need to keep the
aperture diameter as is, resulting in a lower f/d number, which costs
extra.
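
For instance (an illustrative calculation; the 50mm f/1.4 starting
point is an assumption, not a lens discussed here):

    # Keeping the collected light while shrinking the sensor 3x:
    # same aperture diameter D, same field of view, so focal length /3.
    f, N, scale = 50.0, 1.4, 3.0   # illustrative "normal" lens
    D = f / N                      # aperture diameter, mm
    f_new = f / scale              # same field of view, small sensor
    print(f"D = {D:.1f} mm, new lens: {f_new:.1f} mm f/{f_new / D:.2f}")
    # -> D = 35.7 mm, new lens: 16.7 mm f/0.47

An f/0.47 design is beyond what conventional lenses reach; that is what
"costs extra" means here.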
>
> My conjecture is that today's marketing is based on this "100 times
> cheaper" dread. The manufacturers are trying to lure the public into
> buying as many *current design* lenses as possible; they expect that
> these lenses are going to be useless in a few years, so people will
> need to change their optics again.
As 'Joe' I bought a recommended-brand P&S, assuming modern lenses for
tiny CCDs would be fine. They are not; the result is abysmal. IMO such
cameras and most dSLRs are not intended to last very long. After all,
see what happens to manufacturers which make durable quality cameras
(Leica, Contax); that strategy is not working anymore.
>
> [While for professionals, who have tens of K$ invested in lenses, dSLRs
> are very convenient, for Joe-the-public the EVFs of today are much
> more practical; probably producers use the first fact to confuse the
> Joes into buying dSLRs too; note the halt of EVF development during
> the last half year, once EVFs reached the point where they start to
> compete with dSLRs, e.g., the KM A200 vs. A2 down-grading.]
Hm, yes, I also noted that the Sony F828 is getting pretty old...
>
> This is similar to DVDs today: during the last several months, with
> Blu-ray in sight, studios started to digitize films as if there were
> no tomorrow...
>
> Thanks for a very interesting discussion,
Likewise, cheers, Hans
Anonymous
April 1, 2005 2:31:37 AM

Archived from groups: rec.photo.digital (More info?)

[A complimentary Cc of this posting was sent to
HvdV
<nohanz@svi.nl>], who wrote in article <424C5E28.7090104@svi.nl>:
> >>>Even without readout noise, assuming that it does not make sense to
> >>>rasterize at a resolution (e.g.) 3 times higher than the resolution of
> >>>the lens, when you rescale your lens+sensor (keeping the lens
> >>>design), you had better rescale the pixel count and sensitivity by the
> >>>same amount.

> BTW, there are also devices like Electron Multiplying CCDs which
> tackle that. There is no reason why these won't eventually appear in
> consumer electronics.

I think that electron multiplying may be useful only when the readout
noise is comparable with the Poisson noise. When you multiply
electrons, the initial Poisson noise is not changed, but the
multiplication constant can vary (e.g., be sometimes 5, sometimes 6 -
unpredictably), so an additional Poisson-like noise is added to your
signal. On the other hand, the readout noise is effectively decreased
by the same factor as the multiplication constant.

It looks like it does not make sense in photography-related settings,
since the current readout noise is low enough compared to the Poisson
noise at what is judged to be "photographically good quality" (S/N
above 20 at 18% gray).
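
To put rough numbers on this, a minimal sketch: the sqrt(2)
excess-noise factor is the usual high-gain EMCCD figure, while the
signal levels and the x100 gain are illustrative assumptions.

    # SNR of one exposure with and without EM gain, in electrons.
    import math

    def snr(S, r, em_gain=1.0, F=math.sqrt(2.0)):
        if em_gain == 1.0:                # conventional readout
            return S / math.sqrt(S + r**2)
        # EM register: shot-noise variance grows by F**2, while the
        # effective read noise is divided by the gain.
        return S / math.sqrt(F**2 * S + (r / em_gain)**2)

    r = 12.0                      # ~1D MII readout noise, electrons
    for S in (10.0, 400.0):       # starved vs. "photographic" signal
        print(f"S={S:4.0f} e-: plain {snr(S, r):5.2f}, "
              f"EM x100 {snr(S, r, 100.0):5.2f}")
    # -> S=10: 0.81 vs 2.24 (EM wins); S=400: 17.2 vs 14.1 (EM loses)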

However, note that in another thread ("Lens quality") another limiting
factor was introduced: the finite capacity of sensels per area. E.g.,
the current state of the art in capacity per area (Canon 1D MII, 52000
electrons per 8.2mkm sensel) limits the size of a 2000-electron cell to
1.6mkm. So without a technological change, there is also a restriction
on sensitivity *from below*.

Combining the two estimates, this gives the lower limit of cell size at
1.6mkm. However, I think that the latter restriction is only
technological, and can be overcome with more circuitry per photocell.
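
For the record, here is where the 1.6mkm figure comes from, assuming
full-well capacity scales with sensel area (numbers from above):

    # The 1.6mkm figure, assuming full-well capacity scales with area.
    full_well = 52000.0   # electrons per sensel, Canon 1D MII
    pitch = 8.2           # sensel pitch, micrometers ("mkm")
    target = 2000.0       # electrons wanted in the smaller cell
    print(f"minimal pitch: {pitch * (target / full_well) ** 0.5:.1f}mkm")
    # -> minimal pitch: 1.6mkm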

> 'Resolution' is a rather vague term; usually it is taken as the
> half-intensity width of the point spread function, or via the
> Rayleigh criterion. Neither is the same as the highest spatial
> frequency passed by the lens.

Right. However, my impression is that at a lens' sweet-spot f-stop, all
these are closely related. At least I made calculations of the MTF of
lenses limited by different aberrations, and all the examples give
approximately the same relations between these numbers at the sweet
spot.

> To keep sensitivity when scaling down the sensor (keeping the pixel
> count, and not being able to gain sensitivity), you need to keep the
> aperture diameter as is, resulting in a lower f/d number, which costs
> extra.

What happens is: you keep the aperture diameter the same, and want to
keep the field of view the same, but with a smaller focal length. This
"obviously" can't be done without adding additional elements.
However, these "additions" may happen on the "sensor" side of the
lens, not on the subject side. So the added elements are actually
small in diameter (since the sensor is so much smaller), and thus much
cheaper to produce. This should not add a lot to the lens price.

Hmm, maybe this could work... The lengths of the optical paths through
the "old" part of the lens will preserve their mismatches; if the added
elements somewhat compensate these mismatches, the result will have
much higher optical quality, at a price not much higher than the
original.

> As 'Joe' I bought a recommended-brand P&S, assuming modern lenses for
> tiny CCDs would be fine. They are not; the result is abysmal. IMO such
> cameras and most dSLRs are not intended to last very long. After all,
> see what happens to manufacturers which make durable quality cameras
> (Leica, Contax); that strategy is not working anymore.

Right. After 3 newer-generation VCRs almost immediately broke down, I
went to my garage, fetched a 15-year-old VCR, and have used it happily
ever since. :-(

Yours,
Ilya
Anonymous
April 1, 2005 3:54:14 PM

Archived from groups: rec.photo.digital (More info?)

In article <d2hsb1$27me$1@agate.berkeley.edu>, Ilya Zakharevich
<nospam-abuse@ilyaz.org> writes
>
>However, note that in another thread ("Lens quality") another limiting
>factor was introduced: the finite capacity of sensels per area. E.g.,
>the current state of the art in capacity per area (Canon 1D MII, 52000
>electrons per 8.2mkm sensel) limits the size of a 2000-electron cell to
>1.6mkm. So without a technological change, there is also a restriction
>on sensitivity *from below*.
>

"mkm"? Not a recognised unit; could you please clarify.

David
--
David Littlewood
Anonymous
April 1, 2005 5:42:00 PM

Archived from groups: rec.photo.digital (More info?)

David Littlewood wrote:
[]
> "mkm"? Not a recognised unit; could you please clarify.
>
> David

He says it's micrometres but he refuses to use "um".

David
Anonymous
April 1, 2005 7:32:31 PM

Archived from groups: rec.photo.digital (More info?)

In article <Icc3e.1580$G8.828@text.news.blueyonder.co.uk>, David J
Taylor <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk> writes
>David Littlewood wrote:
>[]
>> "mkm"? Not a recognised unit; could you please clarify.
>>
>> David
>
>He says it's micrometres but he refuses to use "um".
>
>David
>
>
Ah! Bernard's irregular verb from "Yes Minister" springs to mind.

Thanks.

David
--
David Littlewood
Anonymous
April 1, 2005 7:32:32 PM

Archived from groups: rec.photo.digital (More info?)

David Littlewood wrote:
> In article <Icc3e.1580$G8.828@text.news.blueyonder.co.uk>, David J
> Taylor <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk>
> writes
>> David Littlewood wrote:
>> []
>>> "mkm"? Not a recognised unit; could you please clarify.
>>>
>>> David
>>
>> He says it's micrometres but he refuses to use "um".
>>
>> David
>>
>>
> Ah! Bernard's irregular verb from "Yes Minister" springs to mind.
>
> Thanks.
>
> David

Unfortunately it doesn't improve the credibility of anything else he
says. I presume we're in for a few weeks of "Yes, Minister" speak
ourselves!

Cheers,
David
Anonymous
April 6, 2005 2:43:45 AM

Archived from groups: rec.photo.digital (More info?)

Hi Ilya,
>
>
>>BTW, there are also devices like Electron Multiplying CCDs which
>>tackle that. There is no reason why these won't eventually appear in
>>consumer electronics.
>
>
(snip)
>
>
> However, note that in another thread ("Lens quality") another limiting
> factor was introduced: the finite capacity of sensels per area. E.g.,
> the current state of the art in capacity per area (Canon 1D MII, 52000
> electrons per 8.2mkm sensel) limits the size of a 2000-electron cell to
> 1.6mkm. So without a technological change, there is also a restriction
> on sensitivity *from below*.
One advantage of the EMCCDs is their speed: up to 100 fps. One could
use that speed, for example, for smart averaging including motion
compensation, or for depth-of-focus manipulation in combination with
moving the focus... I'd better stop here before getting carried away.
>
> Combining the two estimates, this gives the lower limit of cell size at
> 1.6mkm. However, I think that the latter restriction is only
> technological, and can be overcome with more circuitry per photocell.
ok
>
>
>>'Resolution' is a rather vague term; usually it is taken as the
>>half-intensity width of the point spread function, or via the
>>Rayleigh criterion. Neither is the same as the highest spatial
>>frequency passed by the lens.
>
>
> > Right. However, my impression is that at a lens' sweet-spot f-stop, all
> > these are closely related. At least I made calculations of the MTF of
> > lenses limited by different aberrations, and all the examples give
> > approximately the same relations between these numbers at the sweet
> > spot.
The theoretical bandlimit is not affected by the aberrations, but the
50% MTF point of course is, strongly.
>
>
>>To keep sensitivity when scaling down the sensor (keeping the pixel
>>count, and not being able to gain sensitivity), you need to keep the
>>aperture diameter as is, resulting in a lower f/d number, which costs
>>extra.
>
>
> What happens is: you keep the aperture diameter the same, and want to
> keep the field of view the same, but with a smaller focal length. This
> "obviously" can't be done without adding additional elements.
yes
> However, these "additions" may happen on the "sensor" side of the
> lens, not on the subject side. So the added elements are actually
> small in diameter (since the sensor is so much smaller), and thus much
> cheaper to produce. This should not add a lot to the lens price.
Looking at prices for microscope lenses I'm not so sure :-)
>
> Hmm, maybe this could work... The lengths of the optical paths through
> the "old" part of the lens will preserve their mismatches; if the added
> elements somewhat compensate these mismatches, the result will have much
> higher optical quality, at a price not much higher than the original.
I don't know much about lens design, but I think that as soon as you
add a single element, make one surface aspherical, or use some glass
with special dispersion properties, you have to redo the entire
optimization process. That might not be so hard provided the basic
design ideas are good, but it is probably much pricier to manufacture
the whole scaled-up design to sufficient accuracy.

Cheers, hans
Anonymous
April 6, 2005 2:50:01 AM

Archived from groups: rec.photo.digital (More info?)

Alfred Molon wrote:
> In article <d1spvr$2qj8$1@agate.berkeley.edu>, Ilya Zakharevich says...
>
>
>>Are there any actual back-illuminated sensors used in mass-production
>>digicams?
>
>
> To my knowledge no - they are all used for astronomy. The production
Good cameras for fluorescence microscopy use them too. I guess the
efficiency gain is not sufficient to justify the current price
difference (>> $1) for use in digicams.

-- hans
Anonymous
April 9, 2005 12:06:27 PM

Archived from groups: rec.photo.digital (More info?)

[A complimentary Cc of this posting was sent to
HvdV
<nohanz@svi.nl>], who wrote in article <4252F881.5020808@svi.nl>:

> > However, note that in another thread ("Lens quality") another limiting
> > factor was introduced: the finite capacity of sensels per area. E.g.,
> > the current state of the art in capacity per area (Canon 1D MII, 52000
> > electrons per 8.2mkm sensel) limits the size of a 2000-electron cell to
> > 1.6mkm. So without a technological change, there is also a restriction
> > on sensitivity *from below*.

> One advantage of the EMCCDs is their speed: up to 100 fps. One could
> use that speed, for example, for smart averaging including motion
> compensation, or for depth-of-focus manipulation in combination with
> moving the focus... I'd better stop here before getting carried away.

To do this, you need low readout noise. The data for the Canon 1D MII
(readout noise about 12 electrons) prohibit making more than about 4
"subexposures" per exposure without a significant increase in noise.
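
Here is the arithmetic behind the "about 4" (a sketch; the 600-electron
signal for 18% gray at a high-ISO setting is an illustrative
assumption, while the 12-electron readout noise is from above):

    # Averaging k subexposures adds k read-noise variances to one
    # exposure's worth of shot noise.  S and r are in electrons.
    import math

    S, r = 600.0, 12.0
    for k in (1, 2, 4, 8, 16):
        print(f"k={k:2d}: S/N = {S / math.sqrt(S + k * r**2):.1f}")
    # read noise overtakes shot noise near k = S/r**2 = 600/144 ~ 4;
    # S/N: 22.0, 20.1, 17.5, 14.3, 11.1 - dropping fast past k ~ 4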

> >>'Resolution' is a rather vague term; usually it is taken as the
> >>half-intensity width of the point spread function, or via the
> >>Rayleigh criterion. Neither is the same as the highest spatial
> >>frequency passed by the lens.

> > Right. However, my impression is that at a lens' sweet-spot f-stop, all
> > these are closely related. At least I made calculations of the MTF of
> > lenses limited by different aberrations, and all the examples give
> > approximately the same relations between these numbers at the sweet
> > spot.

> The theoretical bandlimit is not affected by the aberrations, but
> the 50% MTF point of course is, strongly.

My point was that with all kinds of individual aberrations I checked,
at the sweet spot the 20% MTF point WITH aberrations was approximately
at the same fraction of the cutoff frequency (given by diffraction).
From this it follows that the particular "numeric" performance at the
sweet spot should be quite predictable.
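
For reference, here is the aberration-free baseline: a sketch using the
standard diffraction-limited MTF of a circular aperture, bisecting for
the frequency where the MTF falls to 20%.

    # Diffraction-limited MTF of a circular aperture; find the
    # normalized frequency v = f/f_cutoff where the MTF equals 20%.
    import math

    def mtf(v):
        return (2.0 / math.pi) * (math.acos(v)
                                  - v * math.sqrt(1.0 - v * v))

    lo, hi = 0.0, 1.0
    while hi - lo > 1e-6:       # MTF decreases monotonically in v
        mid = (lo + hi) / 2.0
        lo, hi = (mid, hi) if mtf(mid) > 0.2 else (lo, mid)
    print(f"20% MTF at v = {lo:.3f} of the cutoff")   # -> v ~ 0.69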

Of course, the quality of a lens image cannot be described by one
number; so when the lens is at the sweet spot for one parameter (e.g.,
radial MTF at 1/4 of the diagonal from the center), it is far from the
sweet spot for other parameters. On the other hand, in a well-optimized
lens a lot of parameters have sweet spots at the same aperture.

[This follows from the assumption that improving one parameter will
negatively affect others; so with multi-parameter optimization a lot
of parameters reach their marginal values simultaneously.]

> > However, these "additions" may happen on the "sensor" side of the
> > lens, not on the subject side. So the added elements are actually
> > small in diameter (since the sensor is so much smaller), and thus
> > much cheaper to produce. This should not add a lot to the lens price.

> Looking at prices for microscope lenses I'm not so sure :-)

Perhaps that is just what this particular market can bear?

> > Hmm, maybe this could work... The lengths of the optical paths through
> > the "old" part of the lens will preserve their mismatches; if the added
> > elements somewhat compensate these mismatches, the result will have much
> > higher optical quality, at a price not much higher than the original.

> I don't know much about lens design, but I think that as soon as
> you add a single element, make one surface aspherical, or use some
> glass with special dispersion properties, you have to redo the entire
> optimization process. That might not be so hard provided the basic
> design ideas are good, but it is probably much pricier to
> manufacture the whole scaled-up design to sufficient accuracy.

Given an overall design (which elements go into which groups, etc.),
and given clear goal functions, I would expect the optimization process
to be more or less trivial. So it follows that it is the design/goals
part which must require some skill... ;-)

Yours,
Ilya
Anonymous
May 6, 2005 1:45:32 PM

Archived from groups: rec.photo.digital (More info?)

Dave Martindale <davem@cs.ubc.ca> wrote:
> andrew29@littlepinkcloud.invalid writes:

>>[1] The Reproduction of Colour, 6th Edition, Robert Hunt, p556.

> I didn't realize that the 6th edition was out. Does it say how it
> differs from the previous edition (e.g. in a preface?).

> I do have the 3rd, 4th, and 5th editions already, but they don't cover
> digital imaging much.

Sorry I didn't reply before now. The difference in the 6th ed. is
indeed the coverage of digital imaging.

Andrew.