Foveon schematics?

Anonymous
May 20, 2005 12:54:20 PM

Archived from groups: rec.photo.digital

Does anyone know the technical reasons for the Foveon-type sensor
producing such poor colors? What I saw mentioned is that the 3 channels
stored in the RAW format are almost identical. However, the theory looks
very sound: if you have 3 sensors, each 1 micron thick, stacked on top
of each other, then the top one will get (approximately) G + 1/2 B +
1/3 R, the middle one 1/2 B + 1/3 R, and the bottom one 1/3 R; from
these data one could get R, G, and B values without a lot of extra
noise... And the typical size of an epitaxial layer is close to 1
micron; so what broke? Are electrons migrating too much? Is it
tunnelling between layers? Some "transistor" effect hitting the fan?
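
As a quick numerical check of this model, here is a minimal sketch in
Python (the matrix simply encodes the idealized fractions above; it is
not Foveon's measured response):

import numpy as np

# Rows = layers (top, middle, bottom); columns = (R, G, B).
# Encodes: top = 1/3 R + G + 1/2 B; middle = 1/3 R + 1/2 B; bottom = 1/3 R.
M = np.array([[1/3, 1.0, 1/2],
              [1/3, 0.0, 1/2],
              [1/3, 0.0, 0.0]])

layer_readings = M @ np.array([0.8, 0.5, 0.2])   # a test color, RGB
rgb = np.linalg.inv(M) @ layer_readings          # exact recovery, noise aside
print(rgb)                                       # -> [0.8 0.5 0.2]

# The inverse also shows how noise in each layer would propagate into
# the reconstructed channels:
print(np.linalg.inv(M).round(2))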

Another question: how are the p/n junctions arranged? Is it just a
p/n/p/n cake, with the 3 junctions (the slashes in p/n/p/n) being
separately discharged by the generated electrons? So is it that all p's
and all n's are charged to +1.5V or -1.5V, and then after the exposure
the voltages of all the layers are "measured" (via charge transfer, or a
CMOS process)?

Thanks,
Ilya


Anonymous
May 20, 2005 12:54:21 PM

Archived from groups: rec.photo.digital

In article <d6k8js$1vsg$1@agate.berkeley.edu>, Ilya Zakharevich
<nospam-abuse@ilyaz.org> wrote:

> Does anyone know the technical reasons for the Foveon-type sensor
> producing such poor colors?

Just because something looks good on paper...doesn't mean it translates
to real life.
May 20, 2005 2:44:20 PM

Archived from groups: rec.photo.digital

"Ilya Zakharevich" <nospam-abuse@ilyaz.org> wrote in message
news:d6k8js$1vsg$1@agate.berkeley.edu...
> Does anyone know the technical reasons for the Foveon-type sensor
> producing such poor colors?
SNIP

The theory looks good, and would be even better if it had lots more pixels,
but it just doesn't seem to happen in reality.
Perhaps it's the filter responses between the layers causing the naff
colors?

No idea on the p's & n's :o O
Anonymous
May 20, 2005 5:31:18 PM

Archived from groups: rec.photo.digital

Ilya Zakharevich <nospam-abuse@ilyaz.org> wrote:
> Does anyone know the technical reasons for the Foveon-type sensor
> producing such poor colors?
SNIP

No, it's just that the sensitivities of the three layers are rather too
broad.

See http://www.alt-vision.com/documentation/5074-35.pdf

Compare Figure 4 and Figure 6. Bear in mind that Figure 6 is the
result *after* filtration; the raw curves aren't presented in this
paper.

Andrew.
Anonymous
May 20, 2005 7:17:22 PM

Archived from groups: rec.photo.digital

Take an image of a pure green wall.

In an RGB Bayer sensor, the R and B cells will only record 4% of the
values in the G cells :: a G:RorB ratio of 25:1

In a Foveon sensor, the R and B layers record 60%-80% of what gets
recorded in the G layer :: a G:RorB ratio of 1.25:1

With well-exposed areas, it is pretty easy to extract good color from
either sensor, since you have plenty of signal above the noise floor to
work with; but when you get to the dark parts of the image, and especially
when working with the last stop of exposure, there is simply not enough
spectral selectivity in the Foveon sensor to extract useful color. And
this is why the shadow detail gets a color smear.
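
A toy simulation of this argument (the mixing matrices below are
invented from the ratios quoted above, not measured data): once Poisson
shot noise is added to a dark patch, the poorly separated channels
unmix into much noisier color.

import numpy as np
rng = np.random.default_rng(0)

# Hypothetical channel overlap: ~25:1 selectivity vs ~1.25:1 selectivity.
bayer_like  = np.array([[1.00, 0.04, 0.04],
                        [0.04, 1.00, 0.04],
                        [0.04, 0.04, 1.00]])
foveon_like = np.array([[1.00, 0.80, 0.60],
                        [0.70, 1.00, 0.70],
                        [0.60, 0.80, 1.00]])

scene = np.array([2.0, 10.0, 2.0])   # a dark greenish patch: few photons

for name, M in (("25:1  ", bayer_like), ("1.25:1", foveon_like)):
    counts = rng.poisson(M @ scene, size=(10000, 3))   # shot noise per trial
    colors = counts @ np.linalg.inv(M).T               # unmix back to RGB
    print(name, "recovered RGB std:", colors.std(axis=0).round(2))
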
Anonymous
May 20, 2005 9:11:53 PM

Archived from groups: rec.photo.digital

"Ilya Zakharevich" <nospam-abuse@ilyaz.org> wrote in message
news:d6k8js$1vsg$1@agate.berkeley.edu...
> Does anyone know the technical reasons for the Foveon-type sensor
> producing such poor colors?
SNIP

I don't have any specific engineering experience with Foveon sensors, but I
do have some general experience. My GUESSES are that:

1/ Foveon sensors are more difficult to manufacture with a reasonable degree
of consistency.
2/ The inconsistencies make the interpolation algorithms for the sensor
data iffy at best.
3/ Foveon sensors are more subject to temperature related problems.
4/ Quantum statistical effects are more pronounced and thus problematic in
layered sensors.
Anonymous
May 21, 2005 1:55:40 AM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was sent to
Charles Schuler
<charleschuler@comcast.net>], who wrote in article <Cs6dncwCy5gBzxPfRVn-rQ@comcast.com>:
> I don't have any specific engineering experience with Foveon sensors, but do
> have some general experience. My GUESSES are that:

> 1/ Foveon sensors are more difficult to manufacture with a reasonable degree
> of consistency.
> 2/ The inconsistencies make the interpolation algorithms of the sensor data
> iffy at best.

But why do you need consistency? This is like FPN: you *measure* the
sensitivities, burn this into ROM, and use these data in the
interpolation algorithm. At most 18MB of ROM is needed for this (for
6M sensor); is it a show-stopper with today's prices?
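
A sketch of that correction scheme (hypothetical layout and numbers,
not any camera's actual firmware): measure each pixel's layer mixing
once, store it, and invert it when converting raw layer data to RGB.

import numpy as np
rng = np.random.default_rng(1)

H, W = 4, 4                                   # a tiny "sensor"
nominal = np.eye(3) + 0.3                     # nominal layer mixing
per_pixel = nominal + 0.05 * rng.standard_normal((H, W, 3, 3))

# "Burn into ROM": the measured per-pixel matrices (or, as suggested
# later in the thread, just a few-bit delta from nominal).
rom = per_pixel

def raw_to_rgb(raw, rom):
    # raw: (H, W, 3) layer readings -> per-pixel corrected RGB
    return np.einsum('hwij,hwj->hwi', np.linalg.inv(rom), raw)

truth = rng.uniform(0.0, 1.0, (H, W, 3))
raw = np.einsum('hwij,hwj->hwi', per_pixel, truth)
print(np.allclose(raw_to_rgb(raw, rom), truth))   # True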

> 3/ Foveon sensors are more subject to temperature related problems.

> 4/ Quantum statistical effects are more pronounced and thus problematic in
> layered sensors.

I do not follow... I would not consider Poisson noise a "quantum
effect"; so what do you mean? Also, Poisson noise can be higher
only if the QE is pathologically low (given that there is no filter which
removes 2/3 of the photons)...

Thanks,
Ilya
Anonymous
May 21, 2005 1:55:41 AM

Archived from groups: rec.photo.digital

> But why do you need consistency? This is like FPN: you *measure* the
> sensitivities, burn this into ROM, and use these data in the
> interpolation algorithm. At most 18MB of ROM is needed for this (for
> 6M sensor); is it a show-stopper with today's prices?

So, you propose to burn custom ROMs for each production run after running
the metrics. Or, worse yet, for each sensor? Not a great way to make money
in today's markets.

The layered sensor is compelling but, thus far, it has not caught up with
the competition. It's all about profits!
Anonymous
May 21, 2005 2:31:05 AM

Archived from groups: rec.photo.digital

In article <d6k8js$1vsg$1@agate.berkeley.edu>,
Ilya Zakharevich <nospam-abuse@ilyaz.org> wrote:

> Does anyone know the technical reasons for the Foveon-type sensor
> producing such poor colors?
SNIP

The color sensitivity of the Foveon can't be accurately controlled the way
it can be in filter-based sensors. Software can make general color
corrections, but it still won't be perfect.

A great test is to take a picture of several brands of fluorescent lamps
at once (photograph an office building from outside at night). The lamps
produce light at several narrow-band wavelengths that together appear
white to a human eye. To most cameras, each brand of lamp will be a
different shade. Some will photograph tinted cyan, green, or yellow.
You can see that there's no possible software correction that will
work for all conditions. It's a problem with the response of the sensor
to very specific colors in a way that differs from an eye's. Film does
it too.

And that's where the Foveon had its really big problem. There were
certain conditions where the colors were very much wrong. Narrow-band
pre-filters might have helped the Foveon sensor, but it didn't have
enough light sensitivity to spare.


The Foveon consumer camera sensor is dead. There's no conspiracy; it
just didn't work well for photography. If they know what's good for
them, they'll try to apply their technology to scientific uses where
measurements are being taken outside of the visible spectrum. It could
replace similar but less refined sensors.
Anonymous
May 21, 2005 5:48:20 AM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was sent to
Charles Schuler
<charleschuler@comcast.net>], who wrote in article <Tc6dnb8HpJN7wxPfRVn-pA@comcast.com>:

> > But why do you need consistency? This is like FPN: you *measure* the
> > sensitivities, burn this into ROM, and use these data in the
> > interpolation algorithm. At most 18MB of ROM is needed for this (for
> > 6M sensor); is it a show-stopper with today's prices?
>
> So, you propose to burn custom ROMs for each production run after running
> the metrics. Or, worse yet, for each sensor?

Naturally, it should be each sensor.

> Not a great way to make money in today's markets.

AFAIK, EPROMs are dirt cheap (though I have not checked specifics for the
last several years); the camera has one anyway. Actually, to improve
things 4x, it is enough to have about 2 bits per channel; so an extra
4MB of EPROM is enough for a major improvement.

> The layered sensor is compelling but, thus far, it has not caught up
> with the competition. It's all about profits!

Agreed. And profit is about forcing people who would be quite
satisfied with an EVF to buy a dSLR...

Yours,
Ilya
Anonymous
May 21, 2005 6:07:32 AM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was sent to

<MitchAlsup@aol.com>], who wrote in article <1116627442.958746.209290@g49g2000cwa.googlegroups.com>:
> Take an image of a pure green wall.
>
> In an RGB Bayer sensor, the R and B cells will only record 4% of the
> values in the G cells :: a G:RorB ratio of 25:1
>
> In a Foveon sensor, the R and B layers record 60%-80% of what gets
> recorded in the G layer :: a G:RorB ratio of 1.25:1

And 6*6 is 36. So what? The ratio does not matter at all; what is
important is the S/N ratio, and the propagation of noise via the matrix
coefficients of the transformation.

> With well-exposed areas, it is pretty easy to extract good color from
> either sensor, since you have plenty of signal above the noise floor to
> work with; but when you get to the dark parts of the image, and especially
> when working with the last stop of exposure, there is simply not enough
> spectral selectivity in the Foveon sensor to extract useful color. And
> this is why the shadow detail gets a color smear.

Since Foveon will get way more photons for the same area, it has
*larger* S/N ratio.

The effect you describe LOOKS like a postprocessing artefact
(what follows is almost pure speculation!):

The demosaicing algorithm of Bayer-filter cameras extracts the
chrominance info at a pixel from a large area of the sensor;
effectively, this decreases the bandwidth for chrominance,
decreasing the *total wide-band noise* correspondingly (but not the
narrow-band noise, or the narrow-band S/N ratio). The chrominance info
of a multilayer sensor has a much wider bandwidth; thus it may have a
better narrow-band S/N ratio and still *look* more noisy, since it
has larger wide-band noise.

Now, the narrow-band S/N ratio is what is preserved by postprocessing;
thus (if the arguments above are applicable) postprocessing of the image
from a multilayer sensor will produce a better picture than a Bayer
sensor. So it may be that what we are discussing is a silly default
setting of some parameter.
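
A toy 1-D illustration of this speculation (pure illustration, no
camera data): low-pass filtering a chroma signal reduces the wide-band
noise you see, while leaving the narrow-band (low-frequency) content,
and hence the narrow-band S/N, essentially untouched.

import numpy as np
rng = np.random.default_rng(2)

chroma = 1.0 + rng.standard_normal(4096)          # flat chroma + wide noise
smoothed = np.convolve(chroma, np.ones(9) / 9, mode='same')  # crude low-pass

print("wide-band noise, raw     :", chroma.std().round(3))    # ~1.0
print("wide-band noise, filtered:", smoothed.std().round(3))  # ~0.33
# The filtered signal merely "looks" cleaner; the low-frequency content
# is the same, which is the distinction drawn above.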

Of course, the details depend on the QE curves of the different layers of
the sensor. I suspect that there should be a strong dependency of the
depth a photon can penetrate into silicon on the doping level; if so, one
cannot just predict these curves from theoretical considerations...
Likewise for the fill factor, etc.

Unfortunately, www.alt-vision.com is down, so I cannot get the info
another poster suggested...

Thanks,
Ilya
Anonymous
May 21, 2005 1:13:37 PM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was sent to
Kevin McMurtrie
<mcmurtri@dslextreme.com>], who wrote in article <mcmurtri-DD0637.22310520052005@corp-radius.supernews.com>:

> The color sensitivity of the Foveon can't be accurately controlled the way
> it can be in filter-based sensors.

I presume you mean "spectral sensitivity curves" when you say "color
sensitivity".

> work for all conditions. It's a problem with the response of the sensor
> to very specific colors in a way that differs from an eye's.

It is clear that using one physical mechanism (photon-plasmon
interaction in silicon, or whatever it is) does not give a lot of leeway
in controlling the spectral curves if all you have is 3 layers.
However, having more than 3 layers allows one to approximate the color
sensitivity of the cones arbitrarily closely.

[On the other hand, this may introduce big negative coefficients
into the matrix of the transformation LAYERS --> RGB; this may increase
the chrominance noise, thus reducing the attractiveness of a
Foveon-like design. One cannot tell without having exact data for the
transparency of (doped?) silicon at different wavelengths.]

> The Foveon consumer camera sensor is dead.

Well, my question was about *why* it is dead. If the reasons are
technological, with a possible solution in the foreseeable future (like
a small fill factor - or whatever the reason for the low QE is),
it is one thing. If one just can't get acceptable chrominance noise
and spectral sensitivity curves without organic color filters, it is a
completely different situation.

Thanks,
Ilya

P.S. For myself, what is interesting about Foveon is that multilayer
designs allow a high capacitance per area of the sensor. This is
crucial to have when the "throughput QE" of sensors is increased
from the current abysmal state (about 0.13 for good sensors),
and one can get small cells which generate a lot of electrons
(with not-too-long exposures)...
Anonymous
May 21, 2005 4:38:23 PM

Archived from groups: rec.photo.digital

Ilya Zakharevich wrote:
> Does anyone know the technical reasons for the Foveon-type sensor
> producing such poor colors?
SNIP

There is a simple technical reason why a foveon, or other broadband
sensor will not produce accurate colors in all situations.
It has nothing to do with signal-to-noise ratios. You could have millions
more signal-to-noise compared to an RGB sensor and still have
color reproduction problems.

The reason is that the spectral response of the sensor + color
filter gets convolved with the spectral response of the light source.
Different light sources (e.g. high sun, versus low sun,
versus incandescent lamp, versus fluorescent lamp, etc) have different
spectral distributions. Multiply the reflectance of the subject, times
the light source, times the bandpass of the filter, times the spectral
response of the detector, then integrate the result. The answer is
proportional to the number the camera pixel records. Because the light
source spectra differ, the answer will be different under different
lighting. This is true of the eye's response functions also, but because
that standard is what we are comparing to (how we perceive colors), the
errors under various lighting are part of the defined accurate color
response of the system we want
to mimic with a camera (our eyes). But when you have broader bandpasses
as in the foveon or some other schemes (the worst case would be
those with a white light channel), the error becomes large, and the
system can only be calibrated to give accurate color with one
specific light source. Again, it is fundamental physics that
will limit the system's accuracy. If you are interested, I can point you
to spectral response of materials and you can convolve them with
various lighting functions and detector and filter response functions
and see for yourself.
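
A minimal sketch of that multiply-and-integrate step (the curves are
made-up Gaussians and black bodies, purely for illustration):

import numpy as np

wl = np.linspace(400e-9, 700e-9, 301)              # wavelength, meters

def planck(T):                                     # black-body spectrum shape
    h, c, k = 6.626e-34, 3.0e8, 1.381e-23
    return (2*h*c**2 / wl**5) / (np.exp(h*c / (wl*k*T)) - 1)

reflectance = np.exp(-((wl - 550e-9) / 40e-9)**2)  # a "greenish" subject
bandpass    = np.exp(-((wl - 530e-9) / 50e-9)**2)  # one channel's filter
detector    = np.ones_like(wl)                     # flat QE, say

for T in (2856, 6500):                             # tungsten-ish, daylight-ish
    pixel = np.trapz(planck(T) * reflectance * bandpass * detector, wl)
    print(T, "K ->", pixel)                        # differs with the illuminant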

In pursuit of an ideal camera sensor, one could theorize a sensor that
counts all photons (100% quantum efficiency) and the energy of every
photon. If the energy (wavelength) of each photon were known, then
one would not need filters. One would then use a mathematical description
of the human eye's response functions to convolve with the photon
energy distribution (wavelength and number of photons at each
wavelength). That would give the theoretically best signal-to-noise and
the most accurate color. Perhaps someday.

Roger
Anonymous
May 21, 2005 4:39:03 PM

Archived from groups: rec.photo.digital

"Ilya Zakharevich" <nospam-abuse@ilyaz.org> wrote in message
news:d6m554$2r5l$1@agate.berkeley.edu...
SNIP
> Since Foveon will get way more photons for the same area,
> it has *larger* S/N ratio.

If only the potential well capacity were 3 times greater, which it
isn't. The charge for 3 color bands will be collected in the same
space as Bayer CFA designs use for a single band, and photons
collected in one band are not used in another. Furthermore, due to
the very poor color separation, a lot more post-processing
(subtraction reduces S/N!) is needed. This, added to manufacturing
variations and the needed on-chip temperature compensation, results in
demonstrably poorer image quality (look in the newsgroup archives for
high-ISO noise, yellow skin tones, etc. in connection with
Sigma/Foveon). And then there's aliasing, although that has less to do
with chip design than with a deliberate attempt to fool people into
believing the false resolution claims, and saving some money by
leaving out a proper anti-aliasing filter.

SNIP
> Of course, the details depend on the QE curves of the different layers
> of the sensor. I suspect that there should be a strong
> dependency of the depth a photon can penetrate into silicon on the
> doping level; if so, one cannot just predict these curves from
> theoretical considerations... Likewise for the fill factor, etc.

The principle is based on the penetration depth of photons in the
doped silicon. You can check the details at
<http://patft.uspto.gov/netahtml/srchnum.htm> and search for Patent
no. 5,965,875, and for more general info:
<http://www.x3f.info/technotes/X3SensorCharacteristics.p...>.

Bart
Anonymous
May 22, 2005 9:53:21 AM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was sent to
Roger N. Clark (change username to rnclark)
<username@qwest.net>], who wrote in article <428F801F.9000501@qwest.net>:

> There is a simple technical reason why a foveon, or other broadband
> sensor will not produce accurate colors in all situations.

There may be, but I do not think it is the one you wrote (unless your
design is restricted to 3 layers).

> It has nothing to do with signal-to-noise ratios. You could have millions
> more signal-to-noise compared to an RGB sensor and still have
> color reproduction problems.

With many-level sensors the only color-related problem is S/N (of
course, only if one forgets about restrictions on the thickness of the
layers, which did not enter your reasoning).

> The reason is that the spectral response of the sensor + color
> filter gets convolved with the spectral response of the light source.

I presume you mean "multiplied", not "convolved"...

> Different light sources (e.g. high sun, versus low sun,
> versus incandescent lamp, versus fluorescent lamp, etc) have different
> spectral distributions.

I take this as a very roundabout way to argue that one should be able
to get the spectral curve of sensitivity of cones in human eye as a
linear combination of spectral curves of sensitivity of sensors (at
least approximately).

> to mimic with a camera (our eyes). But when you have broader bandpasses
> as in the foveon or some other schemes (the worst case would be
> those with a white light channel), the error becomes large, and the
> system can only be calibrated to give accurate color with one
> specific light source.

What you say is (probably) that one can't expect to get the match for
eye sensitivity curves from regulating 3 widths of sensitive layers in
silicon. If this interpretation is valid, I say: sure! However, the
match may be improved by having more than 3 layers; each additional
layer should give significant improvement. I would expect that about
5 should be enough (just a hunch, I did not do any actual
calculation).

> Again, it is fundamental physics that will limit the system's
> accuracy.

Actually, it is fundamental mathematics that in the case when the
penetration depth depends monotonically on frequency, you can get
*arbitrary* spectral curves by using multilayer design. (It is called
Laplace transform.)

Of course, it may happen that the matrix of transformation LAYERS -->
luminosity/chrominance has large negative coefficients (hard to tell
without actual calculations). In this case the S/N ratio will become
a limiting factor.
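
A sketch of this point (illustrative basis and target curves, no real
silicon data): fit a narrow target response as a linear combination of
broad layer responses of the form f*exp(-a f), then inspect the
coefficients.

import numpy as np

f = np.linspace(0.5, 2.0, 200)                  # "frequency", arbitrary units
a_values = np.linspace(0.5, 4.0, 8)             # 8 hypothetical layers
basis = np.stack([f * np.exp(-a * f) for a in a_values], axis=1)

target = np.exp(-((f - 1.2) / 0.15)**2)         # a narrow "cone-like" curve
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)

print("max fit error:", np.abs(basis @ coef - target).max().round(3))
print("coefficients :", coef.round(1))
# Large, oscillating +/- coefficients here are exactly the noise
# amplification flagged above as the potential limiting factor.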

> If you are interested, I can point you to spectral response of
> materials and you can convolve them with various lighting functions
> and detector and filter response functions and see for yourself.

Well, another poster gave a reference for X3SensorCharacteristics.pdf;
it contains a lot of data; but it is hard to call it unbiased...

Actually, the major problem is one of modeling vision: how the eye
would react to different levels of approximation of the spectral
curve. E.g., the PDF file has some statements like "Foveon gives a
better approximation" (well, honestly, I did not check whether higher
numbers in the table are better or worse, so take this interpretation
with a grain of salt ;-). And we know how far this statement is from
reality...

> In pursuit of an ideal camera sensor, one could theorize a sensor that
> counts all photons (100% quantum efficiency) and the energy of every
> photon.

Theoretically possible with multiple splitters of light directing
different wavelengths to different sensors. Requires retro-focus lens
design, thus much worse utilization of glass.

However, what is practically important is to have enough channels to
compensate for different "typical" lighting conditions. And in many
cases having channels narrow-band vs the same number of overlapping
wider-band channels affects the matrix coefficients of the
transformation to RGB; not much else. So the difference is translated
to different S/N ratios.

If I read

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Se...

right, they have some layers of width 0.2 microns, with a total width
to the substrate of about 3 microns. Assuming about 20 layers are
practical with today's technology, and interpolating these 3 pieces of
data, this may give about a 4-micron sandwich with 19 p/n junctions (of
course, one needs extra layers for, e.g., contact areas; so it is going
to be fewer than 19). This is so much information that it comes very
close to your dream - almost any color filter can be emulated by
post-processing.

The difference is that in "your dream" post-processing will provide
exactly the same noise as the filtered signal. In the "multilayer
dream" this is not so; the filters with shallow slopes (like those of
human eye's cones) can be emulated with quite small noise; but the
"steep type" filters can be emulated only with significant increase in
noise.

Thanks,
Ilya
Anonymous
May 22, 2005 12:03:54 PM

Archived from groups: rec.photo.digital

Ilya Zakharevich wrote:
> [A complimentary Cc of this posting was sent to
> Roger N. Clark (change username to rnclark)
> <username@qwest.net>], who wrote in article <428F801F.9000501@qwest.net>:
>
>>There is a simple technical reason why a foveon, or other broadband
>>sensor will not produce accurate colors in all situations.
>
> There may be, but I do not think it is the one you wrote (unless your
> design is restricted to 3 layers).

No.
Read: Trends in CMOS Image Sensor Technology and Design
http://www-isl.stanford.edu/~abbas/group/papers_and_pub...
Examine Figure 2. Note how the RGB sensor has a narrower
bandpass compared to the Foveon. Note also that at least the Canon RGB
profiles are better than shown on this page. Remember, a while
back I pointed you to:
http://www.astrosurf.org/buil
(apparently he has reorganized the site and the test and performance
page is not loading at the moment).

But in any event, it is clear the foveon profiles are broader
band than the filters on Bayer sensors.

>>It has nothing to do with signal-to-noise ratios. You could have millions
>>more signal-to-noise compared to an RGB sensor and still have
>>color reproduction problems.
>
> With many-level sensors the only color-related problem is S/N (of
> course, only if one forgets about restrictions on the thickness of the
> layers, which did not enter your reasoning).

We are not talking many levels. The foveon sensor has only
3 levels.

>>The reason is that the spectral response of the sensor + color
>>filter gets convolved with the spectral response of the light source.
>
> I presume you mean "multiplied", not "convolved"...

It is a nomenclature term used in the spectroscopy field.
Yes, in terms of mathematical operations it is multiplying one
spectrum by another at each wavelength in the spectrum.

>>Different light sources (e.g. high sun, versus low sun,
>>versus incandescent lamp, versus fluorescent lamp, etc) have different
>>spectral distributions.

> I take this as a very roundabout way to argue that one should be able
> to get the spectral curve of sensitivity of cones in human eye as a
> linear combination of spectral curves of sensitivity of sensors (at
> least approximately).

No. You are mixing a spectrometer with the 3-layer foveon sensor.
The foveon sensor has 3 layers, each with different sensitivities
to red, green and blue.

Now bring in the spectral curves of some objects you might photograph.
Here are some example spectra (note the eye is sensitive over about
0.4 to 0.7 microns = 4000 to 7000 angstroms = 400 to 700 nanometers):

http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/V/bluespr...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/V/aspen_l...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/A/cardboa...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/A/plastic...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/A/plywood...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/L/seawate...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/L/melting...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/S/stonewa...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/S/basalt_...
http://pubs.usgs.gov/of/2003/ofr-03-395/PLOTS/S/biotite...

You can get the digital data for the above + many more spectra at:
http://pubs.usgs.gov/of/2003/ofr-03-395/datatable.html
The description of the spectral library is at:
http://pubs.usgs.gov/of/2003/ofr-03-395/ofr-03-395.html

Next digitize the foveon and rgb profiles from the Stanford link above.
You'll have to interpolate all curves to the same wavelengths.
Now generate different light sources. An incandescent light bulb
can be computed from about a 2000 Kelvin Black Body. The curve will
have a lot of red intensity and not so much blue. The sun can be
approximated by a 5995 Kelvin Black Body, or get real data
off the net.

Now multiply the RGB filter response by the material spectrum, and multiply
that result by the light source. Do the same with the foveon response
curves. Next, integrate the signal under each of the red, green and
blue curves; each system (RGB filters and foveon) thus gives 3 output
numbers: a red, a green, and a blue.
Since we are only interested in color, scale all the greens
to one. Now, for a given material, how much do the red and blue
channels change with different light sources? The foveon responses
will change a lot more than the rgb responses.
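
A sketch of this recipe (made-up Gaussian channel profiles stand in for
the digitized Stanford curves, and white-balancing against a flat
reflector replaces the scale-greens-to-one step; the numbers are purely
illustrative):

import numpy as np

wl = np.linspace(400, 700, 301)                  # nm

def planck(T):                                   # black-body spectrum shape
    h, c, k = 6.626e-34, 3.0e8, 1.381e-23
    lam = wl * 1e-9
    return (2*h*c**2 / lam**5) / (np.exp(h*c / (lam*k*T)) - 1)

def gauss(mu, sig):
    return np.exp(-((wl - mu) / sig)**2)

narrow = [gauss(450, 25), gauss(540, 30), gauss(610, 30)]   # CFA-like B, G, R
broad  = [gauss(470, 70), gauss(540, 80), gauss(590, 90)]   # foveon-like

material = 0.5 + 0.4 * np.sin(wl / 40.0)         # an arbitrary reflectance

for name, chans in (("narrow", narrow), ("broad ", broad)):
    balanced = []
    for T in (2856, 6500):                       # tungsten-ish, daylight-ish
        light = planck(T)
        white = [np.trapz(c * light, wl) for c in chans]      # flat reflector
        raw   = [np.trapz(c * material * light, wl) for c in chans]
        balanced.append([r / w for r, w in zip(raw, white)])
    shift = max(abs(a - b) / b for a, b in zip(*balanced))
    print(name, "worst white-balanced channel shift:", round(shift, 4))
# The broad set shifts considerably more between the two illuminants:
# that residual is the color error described above.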

>>to mimic with a camera (our eyes). But when you have broader bandpasses
>>as in the foveon or some other schemes (the worst case would be
>>those with a white light channel), the error becomes large, and the
>>system can only be calibrated to give accurate color with one
>>specific light source.
>
> What you say is (probably) that one can't expect to get the match for
> eye sensitivity curves from regulating 3 widths of sensitive layers in
> silicon.

No, that is not what I said. If the foveon sensor was an RGB sensor
with exactly the spectral response profiles of the eye, then it would
produce excellent color with different lighting. ANY system, foveon,
> Bayer sensor, or film will do well if the color layers are close to
that of the eye. But broader band RGB sensors, or CMY, or other
broad band sensors will do poorly.

> If this interpretation is valid, I say: sure! However, the
> match may be improved by having more than 3 layers; each additional
> layer should give significant improvement. I would expect that about
> 5 should be enough (just a hunch, I did not do any actual
> calculation).

No, it will take a lot more than that. You can experiment with
the above curves to model what is needed. You will also need to get
the response functions of the eye to model what it does, and see
how your designs match the eye.

>>Again, it is fundamental physics that will limit the system's
>>accuracy.

> Actually, it is fundamental mathematics that in the case when the
> penetration depth depends monotonically on frequency, you can get
> *arbitrary* spectral curves by using multilayer design. (It is called
> Laplace transform.)

But the question was why the foveon technology does not produce
accurate color.

> Of course, it may happen that the matrix of transformation LAYERS -->
> luminosity/chrominance has large negative coefficients (hard to tell
> without actual calculations). In this case the S/N ratio will become
> a limiting factor.

But regardless of signal to noise, you can NOT get accurate color
from broader bands than those in the human eye system.

>>If you are interested, I can point you to spectral response of
>>materials and you can convolve them with various lighting functions
>>and detector and filter response functions and see for yourself.
>
> Well, another poster gave a reference for X3SensorCharacteristics.pdf;
> it contains a lot of data; but it is hard to call it unbiased...

That link is down this morning.

>>In pursuit of an ideal camera sensor, one could theorize a sensor that
>>counts all photons (100% quantum efficiency) and the energy of every
>>photon.
>
> Theoretically possible with multiple splitters of light directing
> different wavelengths to different sensors. Requires retro-focus lens
> design, thus much worse utilization of glass.

It is already being done. They are called imaging spectrometers.
See:
http://speclab.cr.usgs.gov
http://aviris.jpl.nasa.gov
We are flying them around Mars and Saturn right now, as well as the earth.


> However, what is practically important is to have enough channels to
> compensate for different "typical" lighting conditions. And in many
> cases having channels narrow-band vs the same number of overlapping
> wider-band channels affects the matrix coefficients of the
> transformation to RGB; not much else. So the difference is translated
> to different S/N ratios.
>
> If I read
>
> http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO1&Se...
>
> right, they have some layers of width 0.2 microns, with a total width
> to the substrate of about 3 microns. Assuming about 20 layers are
> practical with today's technology, and interpolating these 3 pieces of
> data, this may give about a 4-micron sandwich with 19 p/n junctions (of
> course, one needs extra layers for, e.g., contact areas; so it is going
> to be fewer than 19). This is so much information that it comes very
> close to your dream - almost any color filter can be emulated by
> post-processing.

You misunderstand how the 3 color sensor works. Shorter wavelength photons
get absorbed and longer wavelength photons pass through. You really
need factors of 2 in wavelength for this to work efficiently,
and that does not work well in the ~2x total wavelength range of human
vision. That is why the foveon RGB responses are so poor.
Making the layers smaller will only peak the efficiency at a certain
wavelength but with large out-of-band response. The
Full Width at Half Maximum would not match the eye. Read about
spectroscopy, e.g.:
Spectroscopy of Rocks and Minerals, and Principles of Spectroscopy
http://speclab.cr.usgs.gov/PAPERS.refl-mrs/refl4.html
>
> The difference is that in "your dream" post-processing will provide
> exactly the same noise as the filtered signal. In the "multilayer
> dream" this is not so; the filters with shallow slopes (like those of
> human eye's cones) can be emulated with quite small noise; but the
> "steep type" filters can be emulated only with significant increase in
> noise.

Again, noise has nothing to do with accurate color response in the context
we are discussing. The ideal sensor, one that counts every photon
and its wavelength (energy) so that one can mathematically synthesize
the human spectral response, is the best one could do. Counting every
photon with 100% quantum efficiency ensures you maximize the possible
signal-to-noise ratio. Imaging spectrometers can synthesize lower
bandpass system responses, and this is being done today (if you want,
I can supply references), although not with 100% quantum efficiency.

Roger
Anonymous
May 22, 2005 3:34:08 PM

Archived from groups: rec.photo.digital

Roger N. Clark (change username to rnclark) wrote:

> Ilya Zakharevich wrote:
>
>> [A complimentary Cc of this posting was sent to
>> Roger N. Clark (change username to rnclark)
>> <username@qwest.net>], who wrote in article <428F801F.9000501@qwest.net>:
>>
>>> There is a simple technical reason why a foveon, or other broadband
>>> sensor will not produce accurate colors in all situations.
>>
>>
>> There may be, but I do not think it is the one you wrote (unless your
>> design is restricted to 3 layers).
>
>
> No.
> Read: Trends in CMOS Image Sensor Technology and Design
> http://www-isl.stanford.edu/~abbas/group/papers_and_pub...
> Examine Figure 2. Note how the RGB sensor has a narrower
> bandpass compared to the Foveon. Note also that at least the Canon RGB
> profiles are better than shown on this page.

Note: one needs to include the transmittance of the IR filter, which
would reduce the 700 nm response to near zero. That would improve
the RGB curves (top in Fig 2). I assume foveon uses an IR
filter too, so it too would be improved. But even so, the
foveon red layer lets in too much green and blue,
the green layer lets in too much blue and red, and the
blue layer lets in too much green and red, and too much
deep blue. A deep-blue blocking filter could help the
foveon, but with more light loss due to the transmission of the
filter.

Roger
Anonymous
May 23, 2005 3:52:56 AM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was sent to
Bart van der Wolf
<bvdwolf@no.spam>], who wrote in article <428f0fc9$0$64739$e4fe514c@news.xs4all.nl>:
>
> "Ilya Zakharevich" <nospam-abuse@ilyaz.org> wrote in message
> news:d6m554$2r5l$1@agate.berkeley.edu...
> SNIP
> > Since Foveon will get way more photons for the same area,
> > it has *larger* S/N ratio.
>
> If only the potential well capacity were 3 times greater, which it
> isn't.

Hmm, why isn't it? Potentially, it has 3x the area of p/n junctions
per unit cell area, so the capacity may be as much as 3x larger. [Of
course, the fill factor matters; but I have not seen the fill factor of
any CMOS sensor actually used in cameras, including Foveon...]

> The principle is based on the penetration depth of photons in the
> doped silicon. You can check the details at
> <http://patft.uspto.gov/netahtml/srchnum.htm> and search for Patent
> no. 5,965,875, and for more general info:
> <http://www.x3f.info/technotes/X3SensorCharacteristics.p...>.

Thanks, these links are very relevant to my initial question; however,
having an independent technicalish evaluation would be even more
useful; it is hard to call these texts unbiased. ;-)

Yours,
Ilya
Anonymous
May 25, 2005 1:32:27 PM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was NOT [per weedlist] sent to
Ilya Zakharevich
<nospam-abuse@ilyaz.org>], who wrote in article <d6k8js$1vsg$1@agate.berkeley.edu>:
> Does anyone know the technical reasons for the Foveon-type sensor
> producing such poor colors?

Here is a summary of relevant responses:

---------------------------------------------
(a guess?) Large Fixed-Pattern variations (due to little experience
in manufacturing?)

[Should not be hard to fix with corrections written to EEPROM]

---------------------------------------------
(a guess) Temperature-related variations

Does anybody know how the penetration depth depends on temperature?
The last time I solved problems on photon-phonon interaction was ages
ago, and I suspect that photon capture in a semiconductor may be
governed by very different laws anyway...

---------------------------------------------
(a fact) To fit the eye's cone sensitivity curves with 3 layers, one
needs a special (quite darkish) pre-filter.

See Figure 6 (page 4) of X3SensorCharacteristics.pdf. The black
line shows the pre-filter needed to obtain a good fit to the cones'
response. Note the required deep (65%) dip near 500nm. I doubt
that the way the sensor is used in "real life" includes this filter.

=============================================

Actually, this document looks like a strong mix of a technical paper
with marketing speak. Can somebody translate the discussion after
Figure 6 (about using the sensor without a pre-filter) into plain
facts? I do not know the significance of the "metamerism index";
neither do I know whether the comparison sensors (those of the HP
618/715 and the MegaVision S2) provide good color fidelity...

=============================================

Conclusion: one needs at least one additional layer (with a maximum
near 500nm) to fix the major discrepancies between silicon and the human
eye. Looking at Figure 6 more, there is a residual discrepancy around
425-475 nm too; so maybe yet another layer with a maximum near 475 nm
is needed (or maybe this is fixed with a tuneup of the position of the
4th layer).

Having these extra layer(s) will also help in correcting errors due to
differences in the manufactured thickness of the layers, and temperature
changes.

=============================================

BTW, here is another hypothetical reason which affects color fidelity
(at least with the 3-layer design). When an electron/hole pair is
generated inside silicon, the carriers are attracted to the "closest" p/n
junction, and discharge the capacitor formed by this junction.
However, "closest" is determined not by distance, but by the
potentials on the junctions.

Thus which junction will be discharged by a given photon (the basins of
attraction) is determined not only by the depths of the junctions, but
also by the current charge on each junction; since the charge changes
during the exposure, the spectral curves should depend (to some extent)
on the color of the incoming light!

Does anybody know whether this effect is strong enough to deserve
attention? AFAIK, a typical discharge is from 3V down to a full well at
1V; this should affect the basins a lot...

[Again, this effect is fully determined by the information read from
the sensor. With the excess information from "extra" layers, it may be
possible to compensate for it in a many-layer design.]

Thanks,
Ilya

P.S. BTW, the fill factor is 50% (from 5071-35.pdf). With 3 layers
one should get quite a high capacitance per area of the sensor.
Anonymous
May 26, 2005 7:53:33 PM

Archived from groups: rec.photo.digital

You see the same phenomenon in the audio industry. The most accurate
studio monitors don't sound very interesting. They exist in
professional lines aimed at a very different customer than consumer
speakers, which are intentionally skewed far from anything resembling
waveform accuracy.

Same goes for Foveon - scientifically accurate, which has the potential
to be less pleasing.
Anonymous
May 27, 2005 2:06:44 AM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was sent to
Roger N. Clark (change username to rnclark)
<username@qwest.net>], who wrote in article <4290914A.5020808@qwest.net>:

> > With many-level sensors the only color-related problem is S/N (of
> > course, only if one forgets about restrictions on the thickness of the
> > layers, which did not enter your reasoning).

> We are not talking many levels. The foveon sensor has only
> 3 levels.

Yes, we are. *You* said "a foveon, or other broadband sensor".

> But in any event, it is clear the foveon profiles are broader
> band than the filters on Bayer sensors.

The width of the band does not matter (directly). E.g., with sensels
which produce R+G+B, R+G, and G+B you get absolutely the same color
fidelity as from RGB sensels. The only thing which matters is a
possibility to fit the cone's curves with *linear combinations* of
sensel sensitivity curves.
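
The claim checks out numerically (a trivial sketch): the (R+G+B, R+G,
G+B) combinations are an invertible mixing of (R, G, B), so they carry
exactly the same color information.

import numpy as np

# Sensel responses as rows acting on (R, G, B):
M = np.array([[1, 1, 1],    # R+G+B
              [1, 1, 0],    # R+G
              [0, 1, 1]])   # G+B
readings = M @ np.array([0.2, 0.7, 0.4])
print(np.linalg.inv(M) @ readings)   # -> [0.2 0.7 0.4], recovered exactly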

> > I take this as a very roundabout way to argue that one should be able
> > to get the spectral curve of sensitivity of cones in human eye as a
> > linear combination of spectral curves of sensitivity of sensors (at
> > least approximately).

> No. You are mixing a spectrometer with the 3-layer foveon sensor.
> The foveon sensor has 3 layers, each with different sensitivities
> to red, green and blue.

No, I'm not "mixing a spectrometer with the 3-layer foveon sensor".
What made you think so?

> Next digitize the foveon and rgb profiles from the Stanford link above.
> You'll have to interpolate all curves to the same wavelengths.
> Now generate different light sources.

Again, you are muddying a very simple topic. If you can fit the eye
sensitivity curves as above, then for any light source and subject you
get perfect color. If the fitting error is large, it is (of course!)
easy to find subjects and lighting conditions which result in bad
colors.

Anyway, thanks for the links.

> > What you say is (probably) that one can't expect to get the match for
> > eye sensitivity curves from regulating 3 widths of sensitive layers in
> > silicon.
>
> No, that is not what I said. If the foveon sensor was an RGB sensor
> with exactly the spectral response profiles of the eye, then it would
> produce excellent color with different lighting. ANY system, foveon,
> Bayer sensor, or film will do well if the color layers are close to
> that of the eye. But broader band RGB sensors, or CMY, or other
> broad band sensors will do poorly.

All fine, but you forgot to put "linear combination" in there...

> > If this interpretation is valid, I say: sure! However, the
> > match may be improved by having more than 3 layers; each additional
> > layer should give significant improvement. I would expect that about
> > 5 should be enough (just a hunch, I did not do any actual
> > calculation).

> No, it will take a lot more than that.

Well, it is your word against mine then. And I know which one I would
trust. ;-)

> > Actually, it is fundamental mathematics that in the case when the
> > penetration depth depends monotonically on frequency, you can get
> > *arbitrary* spectral curves by using multilayer design. (It is called
> > Laplace transform.)

> But the question was why the foveon technology does not produce
> accurate color.

I think I answered this question in another message in the thread.
Quite probably the foveon sensor *does* produce acceptable colors;
it is just that Sigma did not put the necessary pre-filter in their
cameras.

And if we are discussing *technology*, then one or two additional layers
should eliminate the need for a pre-filter too.

> But regardless of signal to noise, you can NOT get accurate color
> from broader bands than those in the human eye system.

Same errors as above (width does not matter directly, due to the
possibility of taking linear combinations). So with narrow enough
layers (which *still* have broadband sensitivity) one can approximate
any desired spectral curve. This is simple math (the Laplace transform,
or the Weierstrass approximation theorem - take your pick).

> You misunderstand how the 3 color sensor works. Shorter wavelength photons
> get absorbed and longer wavelength photons pass through. You really
> need factors of 2 in wavelength for this to work efficiently,

Sigh. Yet the same errors again...

Hope this helps,
Ilya
Anonymous
May 27, 2005 2:39:11 AM

Archived from groups: rec.photo.digital

Ilya Zakharevich wrote:
> [A complimentary Cc of this posting was sent to
> Roger N. Clark (change username to rnclark)
> <username@qwest.net>], who wrote in article <4290914A.5020808@qwest.net>:
>>But in any event, it is clear the foveon profiles are broader
>>band than the filters on Bayer sensors.
>
>
> The width of the band does not matter (directly). E.g., with sensels
> which produce R+G+B, R+G, and G+B you get absolutely the same color
> fidelity as from RGB sensels. The only thing which matters is a
> possibility to fit the cone's curves with *linear combinations* of
> sensel sensitivity curves.

The problem is that it is NOT a linear combination, it is a multiplication
followed by an integration. This is NOT an additive math problem.
For example, in your white-light-channel scenario, you multiply
the light source spectrum by the throughput of the optical system times
the quantum efficiency of the detector, all as a function of wavelength
and then integrate the resulting area under the curve. This is what
is called a spectral convolution. Again, it is not a linear process;
it is a multiplicative process. The integration step gives you one number
and you can not from that number back out the spectral response of the
light source independent of the spectral response of the subject.

.... rest of discussion deleted as it is not relevant unless you do
a proper spectral convolution.

> Sigh. Yet the same errors again...
>
> Hope this helps,
> Ilya

Yes, Ilya.
The field of spectral convolution is directly my field of expertise.
I model the response of broadband as well as narrow-band systems from
the ultraviolet to the far infrared, from the Earth to the outer solar
system. I am currently on the science teams for the Cassini mission
around Saturn and the Mars Global Surveyor currently orbiting Mars, I
work with aircraft and satellite systems looking at the Earth,
and I am working on a new Moon mission to be launched in 2007.
I do spectral convolutions just about every day.

So before you continue down the path you are on, I suggest you
actually model the response as I outlined, using various
light source spectra spectrally convolved with your broad-band systems and
their spectral responses.

You can check my bio at http://www.clarkvision.com

You can also download my spectral analysis and convolution software
and compile it on a unix or linux machine from
http://speclab.cr.usgs.gov/software.html

Roger
Anonymous
May 28, 2005 12:24:38 AM

Archived from groups: rec.photo.digital

>Ilya Zakharevich wrote:
>> The width of the band does not matter (directly). E.g., with sensels
>> which produce R+G+B, R+G, and G+B you get absolutely the same color
>> fidelity as from RGB sensels. The only thing which matters is a
>> possibility to fit the cone's curves with *linear combinations* of
>> sensel sensitivity curves.

"Roger N. Clark (change username to rnclark)" <username@qwest.net> writes:
>The problem is that it is NOT a linear combination, it is a multiplication
>followed by an integration. This is NOT an additive math problem.

I suspect what Ilya was getting at is that you don't have to have filter
responses that equal the human colour matching functions. You *can* use
filter responses that are some linear combination of the colour matching
functions, then use the inverse matrix to convert from the colour space
you've chosen back into "human colour space".

In fact, all colour cameras have to do this if they want accurate
colour, since the human colour matching functions have negative lobes
and physically realizable filters do not. If you're designing a Bayer-filter
sensor or a 3CCD camera with beamsplitter, you can design the colour
response of the filters or the beamsplitter to be approximately a linear
combination of the colour matching functions, and then you only need a
matrix multiply per pixel for colour space conversion.
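
A sketch of that per-pixel conversion (the matrix below is made up, not
any real camera's): if the filter curves are an invertible linear
combination A of the colour matching functions, one 3x3 multiply per
pixel undoes the mixing.

import numpy as np

# Hypothetical: filter_i(lam) = sum_j A[i, j] * CMF_j(lam), with A chosen
# so the physical filter curves stay non-negative.
A = np.array([[0.9, 0.3, 0.1],
              [0.2, 0.8, 0.2],
              [0.1, 0.3, 0.9]])
A_inv = np.linalg.inv(A)        # note the negative entries it acquires

camera = np.random.default_rng(3).uniform(0.0, 1.0, (4, 4, 3))  # an "image"
human = camera @ A_inv.T        # one 3x3 matrix multiply per pixel
print(A_inv.round(2))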

On the other hand, the set of colour response curves that have this
property is a vanishingly small fraction of all possible colour
responses. The Foveon sensor has the problem that you don't get to
tailor the response very precisely, so likely *none* of the possible
responses form the linear combination Ilya talks about.

Foveon had a paper on their website that discusses what filtration you'd
have to add to a Foveon sensor to get nearly-correct colour
reproduction, but the Sigma cameras do not seem to use it (and it would
reduce the effective ISO of the camera).

>For example, in your white-light-channel scenario, you multiply
>the light source spectrum by the throughput of the optical system times
>the quantum efficiency of the detector, all as a function of wavelength
>and then integrate the resulting area under the curve. This is what
>is called a spectral convolution. Again, it is not a linear process;
>it is a multiplicative process. The integration step gives you one number
>and you can not from that number back out the spectral response of the
>light source independent of the spectral response of the subject.

If you have filters with a suitable response in your Bayer sensor, or in
the beamsplitter of your 3CCD camera, the filter itself does the
spectral convolution described here with the physical light. But the
Foveon sensor seems unsuited to this. It *also* seems unsuited to
providing (for example) 60 spectral bands spaced 5 nm apart, which would
allow doing the convolution numerically, what Roger is talking about.

So, as usual, it seems like Ilya is talking about something that's
theoretically possible given the right sensor response, but can't actually
be done with the Foveon sensor. Roger is pointing out that you can't
achieve the same thing after the sensor either using additional
computation - the data you'd need is gone.

Dave
Anonymous
May 28, 2005 1:53:37 AM

Archived from groups: rec.photo.digital

Dave Martindale wrote:
>>Ilya Zakharevich wrote:
>>
>>>The width of the band does not matter (directly). E.g., with sensels
>>>which produce R+G+B, R+G, and G+B you get absolutely the same color
>>>fidelity as from RGB sensels. The only thing which matters is a
>>>possibility to fit the cone's curves with *linear combinations* of
>>>sensel sensitivity curves.
>
>
> "Roger N. Clark (change username to rnclark)" <username@qwest.net> writes:
>
>>The problem is that it is NOT a linear combination, it is a multiplication
>>followed by an integration. This is NOT an additive math problem.
>
>
> I suspect what Ilya was getting at is that you don't have to have filter
> responses that equal the human colour matching functions. You *can* use
> filter responses that are some linear combination of the colour matching
> functions, then use the inverse matrix to convert from the colour space
> you've chosen back into "human colour space".

Dave,
We've been through this a couple of posts up the thread.
The problem is that the linear conversion works for only one
light source spectral distribution; e.g., you could define
it for sunlight. But when you change the light source, you
can't recover color accurately, because you can't determine whether
what the camera measures is due to color in the subject or to
color changes in the illuminating light source (e.g. fluorescent
lights, or tungsten lights). So the linear additive
(or subtractive) problem has a solution for only one condition.
All other conditions have errors in the derived colors.

Roger
Anonymous
May 28, 2005 10:33:09 AM

Archived from groups: rec.photo.digital

[A complimentary Cc of this posting was sent to
Dave Martindale
<davem@cs.ubc.ca>], who wrote in article <d77vm6$j22$1@mughi.cs.ubc.ca>:
> I suspect what Ilya was getting at is that you don't have to have filter
> responses that equal the human colour matching functions. You *can* use
> filter responses that are some linear combination of the colour matching
> functions, then use the inverse matrix to convert from the colour space
> you've chosen back into "human colour space".

Close, but not close enough. I'm saying approximately "the opposite":

You: individual layer/sensel responses should be linear combinations
of the colour matching functions;

Me: the colour matching functions should be linear combinations
of individual layer/sensel responses.

If you have only 3 different types of sensels/layers, these two
conditions are approximately equivalent. However, if you have more
than 3, it is my condition which reflects the reality.

And with a multilayer Foveon-type sensor, 3 layers is not enough to get
a good enough match (as the papers by the Foveon team demonstrate - at
least if one does not consider the pre-filter); this is why the
difference is important.

> On the other hand, the set of colour response curves that have this
> property is a vanishingly small fraction of all possible colour
> responses.

This is again a corollary of your considering the wrong criterion.

> The Foveon sensor has the problem that you don't get to
> tailor the response very precisely, so likely *none* of the possible
> responses form the linear combination Ilya talks about.

As I said, with *enough layers* you can approximate any "matching
function" with arbitrary precision. And my educated guess is that
"enough" is in practical terms as low as 4 or 5.

> >For example, in your white light channel in one of your scenarios, you multiply
> >the light source spectrum by the throughput of the optical system times
> >the quantum efficiency of the detector, all as a function of wavelength
> >and then integrate the resulting area under the curve. This is what
> >is called a spectral convolution. Again, it is not a linear process;
> >it is a multiplicative process.

Sigh... This *is* a linear process. A linear function involves
multiplication: e.g., y = a*x + b.

Roger, a simple question: you have 5 types of sensels with spectral
sensitivity functions s1(f), ..., s5(f). You want to reconstruct the
response of some other type of sensor (e.g., "R-channel of human
vision") which is described by by spectral sensitivity function R(f).

Suppose you know that

3 s1(f) - 2.5 s2(f) + 11 s3(f) + 2 s4(f) - s5(f) = R(f) for any f.

Do you agree that given 5 readings from the sensels x1, x2, x3, x4, x5
(for a particular source), you can *exactly* predict the response of
the "other sensor"?

> If you have filters with a suitable response in your Bayer sensor, or in
> the beamsplitter of your 3CCD camera, the filter itself does the
> spectral convolution described here with the physical light. But the
> Foveon sensor seems unsuited to this. It *also* seems unsuited to
> providing (for example) 60 spectral bands spaced 5 nm apart, which would
> allow doing the convolution numerically, what Roger is talking about.

60 - sure, this is outside of current technological level of CMOS
processing. But 10 - maybe! The idea is that to get 10 pieces of
narrow-band data, you do not need 10 narrow band *sensors*. What you
need is 10 or more types of sensels with spectral sensitivity
functions which have many narrow-band linear combinations. And with
20 wide-band functions of the form s(f) = f * exp(-a f), with 20
different values of a, you *can* get about 10 narrow-band linear
combinations.

(f * exp(-a f) *is* the sensitivity of a narrow silicon layer; the
trick is the "units" one should measure the frequency in: the
(inverted) penetration depth.)
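
One can test this numerically. The sketch below (the basis parameters
and the target band are arbitrary choices of mine) fits a narrow-band
target with 20 curves of that form by least squares; the residual shows
how well the band is synthesized, and the size of the coefficients
shows the price in noise amplification:

  import numpy as np

  f = np.linspace(0.01, 4.0, 2000)

  # 20 wide-band curves f * exp(-a f), one per value of a.
  avals = np.linspace(0.5, 10.0, 20)
  basis = np.array([f * np.exp(-a * f) for a in avals])

  # A narrow-band target centred at f = 1.5.
  target = np.exp(-((f - 1.5) / 0.05) ** 2)

  coef, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
  fit = coef @ basis

  print(np.abs(fit - target).max())   # quality of the narrow-band match
  print(np.abs(coef).max())           # how large the mixed-sign weights get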

    Hope this helps,
    Ilya
    Anonymous
    May 28, 2005 11:56:50 AM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich wrote:

    > [A complimentary Cc of this posting was sent to
    > Dave Martindale
    > <davem@cs.ubc.ca>], who wrote in article <d77vm6$j22$1@mughi.cs.ubc.ca>:
    >

    > As I said, with *enough layers* you can approximate any "matching
    > function" with arbitrary precision. And my educated guess is that
    > "enough" is in practical terms as low as 4 or 5.

    As I model optical system responses for a living, I have an intuitive
    feel for this. My feel is you might do pretty well with three
    per channel but only if there is no response too far out of band.
Needing red, green and blue channels, that would be 9. But again,
    that few would only work if there are no strong out of band
    responses. For example, in the Foveon sensor, the red filter
    still has strong response at the blue end (0.4 microns) (remember
    0.55 microns is approximately green, and 0.6+ is red). These 9
would work only if they had narrow responses. The reason is
that the process is multiplicative, with integrations.

    If the 9 channels were broad band, the system would still fail.
    This is a classic design issue in spectrometer response. If, in
    theory, you did not need narrow bandwidth, just high sampling, then
    system design would be easy. The problem is that one is consumed
    by noise trying to unmix the signals from multiple channels.

    >
>>>For example, in your white light channel in one of your scenarios, you multiply
    >>>the light source spectrum by the throughput of the optical system times
    >>>the quantum efficiency of the detector, all as a function of wavelength
    >>>and then integrate the resulting area under the curve. This is what
    >>>is called a spectral convolution. Again, it is not a linear process;
    >>>it is a multiplicative process.
    >
    >
    > Sigh... This *is* a linear process. A linear function involves
    > multiplication: e.g., y = a*x + b.

You like to sigh a lot. If you spent the time you spend sighing on
opening your mind and trying to understand what others are saying,
you might learn faster.

    There is an integration in there. The integration under the
    total response curve (of all the individual response curves
    of each system component multiplied together) over all wavelengths
    (not just visible) hides information you need to recover your
    signal (whether you try and use a linear or non-linear function).

    > Roger, a simple question: you have 5 types of sensels with spectral
    > sensitivity functions s1(f), ..., s5(f). You want to reconstruct the
    > response of some other type of sensor (e.g., "R-channel of human
    > vision") which is described by by spectral sensitivity function R(f).
    >
    > Suppose you know that
    >
    > 3 s1(f) - 2.5 s2(f) + 11 s3(f) + 2 s4(f) - s5(f) = R(f) for any f.
    >
    > Do you agree that given 5 readings from the sensels x1, x2, x3, x4, x5
    > (for a particular source), you can *exactly* predict the response of
    > the "other sensor"?

Not when you include the integral in the equation, which is
required for reality. The problem is that the spectral structure of
materials gives different responses when light transmits through
each of the 5 systems, and that spectral structure cannot be recovered.
In practice, you can recover some of the information, but at a price.
This is known as spectral deconvolution. There are books written
about it. The "price" is artifacts and a degraded
signal-to-noise ratio. In practice, you can gain about 2x with
a 4x loss in S/N (2x meaning from 5 channels you can derive 10),
with artifacts that would still translate to color errors.


    > 60 - sure, this is outside of current technological level of CMOS
    > processing. But 10 - maybe! The idea is that to get 10 pieces of
    > narrow-band data, you do not need 10 narrow band *sensors*. What you
    > need is 10 or more types of sensels with spectral sensitivity
    > functions which have many narrow-band linear combinations. And with
> 20 wide-band functions of the form s(f) = f * exp(-a f), with 20
> different values of a, you *can* get about 10 narrow-band linear
> combinations.

    It would work if you had infinite signal to noise and you ignore
    the artifacts. Include the integrals in your equations.
    Then do propagation of errors.

If your signals are, for example, a +/- del_a, b +/- del_b, and c +/- del_c,
and if your equation is d = a + b - c, then your
error on d is del_d = sqrt(del_a^2 + del_b^2 + del_c^2).
    So in a long equation you see that your noise just increases,
    especially when you have subtractions.
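
Plugging in the weights from Ilya's own 5-sensel example (and assuming,
purely for illustration, one unit of noise per channel):

  import numpy as np

  c = np.array([3.0, -2.5, 11.0, 2.0, -1.0])  # Ilya's example weights
  del_x = 1.0                                 # 1-sigma noise per channel

  # Independent errors add in quadrature, so the weights enter squared.
  del_d = np.sqrt(np.sum((c * del_x) ** 2))
  print(del_d)   # ~11.9: one unit of channel noise becomes ~12 units

so the derived channel is roughly an order of magnitude noisier than
any single reading.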

    Roger
    Anonymous
    May 28, 2005 7:35:45 PM

    Archived from groups: rec.photo.digital (More info?)

    "Roger N. Clark (change username to rnclark)" <username@qwest.net> writes:

    >> I suspect what Ilya was getting at is that you don't have to have filter
    >> responses that equal the human colour matching functions. You *can* use
    >> filter responses that are some linear combination of the colour matching
    >> functions, then use the inverse matrix to convert from the colour space
    >> you've chosen back into "human colour space".

    >Dave,
    >We've been through this a couple of posts up the thread.

    Sorry, I've been busy and haven't been able to read the group much
    recently. My news server doesn't have the previous posts.

    >The problem is that the linear conversion works for only one
    >light source spectral distribution, e.g., you could define
    >it for sunlight. But when you change the light source, you
>can't recover color accurately because you can't determine whether
>what the camera measures is due to color in the subject or to
>color changes in the illuminating light source (e.g. fluorescent
>lights, or tungsten lights). So the linear additive
    >(or subtractive) problem has a solution for only one condition.
    >All other conditions have errors in the derived colors.

    I don't think this is a problem for photography. If you're trying to
    determine *accurate* colour when the illuminating light has some weird
    spectrum, you're right. If you change the lighting from tungsten to
    fluorescent, the colours change. But generally in photography we just
    want to reproduce the colour as the human eye saw it. We don't care
    what the object colour is separate from the illumination - we just want
    to know what colour the reflected light is.

    Having camera colour matching functions that match those of the eye
    (after the linear transformation) is sufficient to make the camera see
    the same way the eye sees. (Well, there are still some problems because
    the eye does automatic white balance and the camera often does not, or
    does it in a different way).
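
To make the linear-transformation step concrete, a minimal sketch (the
3x3 mixing matrix M here is invented; any nonsingular one would do). If
the camera curves are M times the eye's curves, then by linearity the
camera *readings* are M times the eye's readings, for any light:

  import numpy as np

  # Hypothetical mixing: camera curves = M @ eye colour matching curves.
  M = np.array([[1.0, 0.5, 0.2],
                [0.3, 1.0, 0.4],
                [0.1, 0.6, 1.0]])

  eye_values = np.array([0.8, 0.4, 0.1])   # what the eye would record
  camera_raw = M @ eye_values              # what the camera records

  recovered = np.linalg.inv(M) @ camera_raw
  print(np.allclose(recovered, eye_values))  # True: exact recovery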

    Dave
    Anonymous
    May 28, 2005 7:58:22 PM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich <nospam-abuse@ilyaz.org> writes:

    >Close, but not close enough. I'm saying approximately "the opposite":

    >You: individual layer/sensel responses should be linear combinations
    > of the colour matching functions;

    >Me: the colour matching functions should be linear combinations
    > of individual layer/sensel responses.

    >If you have only 3 different types of sensels/layers, these two
    >conditions are approximately equivalent.

    No, they are exactly equivalent as long as the transformation is not
    singular.

>And with a multilayer Foveon-type sensor, 3 layers are not enough to get
>a good enough match (as papers by the Foveon team demonstrate - at least
>if one does not consider the pre-filter); this is why the difference is
>important.

    Ok, but how many layers do you think could be built in a Foveon-type
    sensor? The existing 3-layer sensor already seems to have problems of
    non-uniformity across a single sensor, and in full-well capacity of each
    layer. Making more layers within the (unchanged) penetration depth of
    photons means thinner layers yet, which are harder to control and have
    even less capacity each. In addition, the more layers the more similar
    the responses are to each other.

    >As I said, with *enough layers* you can approximate any "matching
    >function" with arbitrary precision. And my educated guess is that
    >"enough" is in practical terms as low as 4 or 5.

    I might believe you with 5 channels that have relatively sharp cutoff.
    But the Foveon channels are anything but sharp. The 3-channel Foveon
    already has noise problems caused by subtracting channels that are too
    similar from each other. Adding more channels that overlap even more,
    and that are each noisier to start with, will give you an image that may
    be more colour accurate but even more noisy.

    >Suppose you know that

    > 3 s1(f) - 2.5 s2(f) + 11 s3(f) + 2 s4(f) - s5(f) = R(f) for any f.

    >Do you agree that given 5 readings from the sensels x1, x2, x3, x4, x5
    >(for a particular source), you can *exactly* predict the response of
    >the "other sensor"?

    How likely is this? The sum of positive coefficients in your
    example is 16, with one dominant channel having a coefficient of 11.
The sum of negative coefficients is -3.5. This is what you'd get if
    the s3() response is close to the final response you want and the other
    functions provide only small corrections.

    But with a Foveon-type sensor, you'd end up with the sums of positive and
    negative coefficients being nearly equal and opposite. The sum above
    works in math where precision is infinite and noise is zero. When you
    apply it to noisy sensor data, you end up cancelling most of the signal
    while adding up all the sources of noise. And adding layers to a Foveon
    sensor will only make this problem worse, not better, as the responses
    get more similar and you have more noise sources.

    The multiple-channel-sensor approach works well when the channels are
    relatively independent, which means narrow filters with steep slopes.
    The Foveon structure is not capable of this.
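
The point can be quantified (with made-up Gaussian channel shapes) by
the condition number of the channels' Gram matrix, which bounds how
much the unmixing step amplifies noise:

  import numpy as np

  f = np.linspace(0.0, 1.0, 400)

  def channels(width):
      # Three Gaussian responses centred at 0.2, 0.5, 0.8.
      return np.array([np.exp(-((f - c) / width) ** 2)
                       for c in (0.2, 0.5, 0.8)])

  for name, width in (("narrow", 0.05), ("broad", 0.5)):
      s = channels(width)
      print(name, np.linalg.cond(s @ s.T))

The narrow, nearly independent channels give a condition number close
to 1; the broad, heavily overlapping ones give a far larger one.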

    >60 - sure, this is outside of current technological level of CMOS
    >processing. But 10 - maybe! The idea is that to get 10 pieces of
    >narrow-band data, you do not need 10 narrow band *sensors*. What you
    >need is 10 or more types of sensels with spectral sensitivity
    >functions which have many narrow-band linear combinations. And with
>20 wide-band functions of the form s(f) = f * exp(-a f), with 20
>different values of a, you *can* get about 10 narrow-band linear
>combinations.

    Not if you consider noise as well as signal.

    Dave
    Anonymous
    May 29, 2005 5:04:21 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)
    <username@qwest.net>], who wrote in article <429878A2.1050208@qwest.net>:
    > > As I said, with *enough layers* you can approximate any "matching
    > > function" with arbitrary precision. And my educated guess is that
    > > "enough" is in practical terms as low as 4 or 5.

    > As I model optical system responses for a living, I have an intuitive
    > feel for this. My feel is you might do pretty well with three
    > per channel but only if there is no response too far out of band.

    I do not do this for a living, but I can sit and *calculate* the
    answer. With 3 per channel, you can approximate a narrow-band
response with 0.3% accuracy. I doubt such precision is needed; the
natural variability of eye response functions should be much more
than this: the cornea changes tint with age; reflection from blood
vessels changes with the level of oxygen in the blood, etc.

    And this is for approximation of *narrow-band*. With wide-band stuff
    (such as eye functions) the Foveon people did an amazing fit even with
    3 layers (with a pre-filter). [I wonder if an aftermarket filter with
    a custom raw-converter could improve the performance of Sigmas...]

    > But again, that few would only work if there are no strong out of
    > band responses.

Irrelevant. The type of filtering which a multi-layer sensor does is
mathematically equivalent to a cosine transform (one can easily be
expressed through the other). As you can see, cos(n*x) has a lot of
    "out of band response"; but you can approximate any even function by a
    combination of cosines.

    > > Sigh... This *is* a linear process. A linear function involves
    > > multiplication: e.g., y = a*x + b.
    >
    > You like to sigh a lot. If you spent all the time sighing instead to
    > opening your mind and trying to understand what others are saying,
    > you might learn faster.

LOL! After all this discussion you claim it is me who needs to
    learn! This was really refreshing; thanks.

    > There is an integration in there.

    Look again how Fourier transform and inverse Fourier transform work.
    You know, there is an integration in there...

    > > Roger, a simple question: you have 5 types of sensels with spectral
    > > sensitivity functions s1(f), ..., s5(f). You want to reconstruct the
    > > response of some other type of sensor (e.g., "R-channel of human
    > > vision") which is described by by spectral sensitivity function R(f).
    > >
    > > Suppose you know that
    > >
    > > 3 s1(f) - 2.5 s2(f) + 11 s3(f) + 2 s4(f) - s5(f) = R(f) for any f.
    > >
    > > Do you agree that given 5 readings from the sensels x1, x2, x3, x4, x5
    > > (for a particular source), you can *exactly* predict the response of
    > > the "other sensor"?
    >
    > Not when you include the integral in the equation, which is
    > required for reality.

    Answer: under conditions above the response of the "other sensor" is
    *exactly*

    3 * x1 - 2.5 * x2 + 11 * x3 + 2 * x4 - x5

I hope you are able to obtain this relation yourself (this is called
"the linearity property of integration"; it is covered in the first
lecture on integration in any calculus class).
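
Spelled out, writing x_i for the reading of sensel type i under a
source spectrum L(f):

  x_i = Integral s_i(f) L(f) df,   so

  sum_i c_i x_i = sum_i c_i Integral s_i(f) L(f) df
                = Integral ( sum_i c_i s_i(f) ) L(f) df
                = Integral R(f) L(f) df,

which is exactly the reading the "other sensor" would produce.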

    > > 60 - sure, this is outside of current technological level of CMOS
    > > processing. But 10 - maybe! The idea is that to get 10 pieces of
    > > narrow-band data, you do not need 10 narrow band *sensors*. What you
    > > need is 10 or more types of sensels with spectral sensitivity
    > > functions which have many narrow-band linear combinations. And with
> > 20 wide-band functions of the form s(f) = f * exp(-a f), with 20
> > different values of a, you *can* get about 10 narrow-band linear
> > combinations.

    > It would work if you had infinite signal to noise

Good. I see that you have finally learned that the only thing to worry
about is the S/N ratio (as I was saying all along).

    > and you ignore the artifacts.

With an exact match of the eye sensitivity curves there are no
artifacts. With an inexact match - but *all* imaging systems give an
inexact match, so we are back to the question of measuring the error
of the match, and the natural variability of the functions to match.

    Hope this helps,
    Ilya
    Anonymous
    May 29, 2005 5:21:45 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Dave Martindale
    <davem@cs.ubc.ca>], who wrote in article <d7a4eu$4h0$1@mughi.cs.ubc.ca>:
    > >You: individual layer/sensel responses should be linear combinations
    > > of the colour matching functions;
    >
    > >Me: the colour matching functions should be linear combinations
    > > of individual layer/sensel responses.
    >
    > >If you have only 3 different types of sensels/layers, these two
    > >conditions are approximately equivalent.
    >
    > No, they are exactly equivalent as long as the transformation is not
    > singular.

Right; this is exactly what I meant by "approximately equivalent". ;-)

    > Ok, but how many layers do you think could be built in a Foveon-type
    > sensor?

There are some back-of-the-envelope estimates: CMOS with 20 layers is
not science fiction; the ratio of the depth of the Foveon sensor to its
thinnest layer is also 20. So 19 p/n junctions are technologically
possible.

    > The existing 3-layer sensor already seems to have problems of
    > non-uniformity across a single sensor, and in full-well capacity of each
    > layer.

    It also has 50% fill factor even with 0.18 micron technology and 9
micron cell size. Common sense says the fill factor will get smaller
with more layers (more readout circuits?).

    > Making more layers within the (unchanged) penetration depth of
    > photons means thinner layers yet, which are harder to control and have
    > even less capacity each.

I do not follow: what does capacity have to do with the thinness of the layers?

    Anyway: it is a new technology, and it is impossible to predict
    whether the gremlins can be put to sleep with enough funding. But as
    of today, it is not clear where this funding can come from.

    > In addition, the more layers the more similar the responses are to
    > each other.

    Yes, and this is the *desirable* feature. With many similar wide-band
    curves you can get narrow-band curves.

    > >As I said, with *enough layers* you can approximate any "matching
    > >function" with arbitrary precision. And my educated guess is that
    > >"enough" is in practical terms as low as 4 or 5.
    >
    > I might believe you with 5 channels that have relatively sharp cutoff.

    To the contrary: with sharp cutoff you *can't* match eye response - it
    has no sharp edges.

    > But the Foveon channels are anything but sharp. The 3-channel Foveon
    > already has noise problems caused by subtracting channels that are too
    > similar from each other.

    I do not think this ("why") is true. My suspicion is that they have
much larger bandwidth in the chrominance channel than comparable Bayer
sensors; if true, then after restricting to the bandwidth of Bayer,
the noise would become smaller than for Bayer. But to understand
    details, somebody needs to do actual calculations of noise.

I agree (and said it many times myself) that the S/N issue is the
central issue in this discussion...

    > >Suppose you know that
    >
    > > 3 s1(f) - 2.5 s2(f) + 11 s3(f) + 2 s4(f) - s5(f) = R(f) for any f.
    >
    > >Do you agree that given 5 readings from the sensels x1, x2, x3, x4, x5
    > >(for a particular source), you can *exactly* predict the response of
    > >the "other sensor"?

    > How likely is this?

    This was just a conversational device to wake up Roger (the
    coefficients are "arbitrary"). He got stuck in some confusion (I
still can't detect which, which makes it hard to unstick him), and
    this example may help.

    > But with a Foveon-type sensor, you'd end up with the sums of positive and
    > negative coefficients being nearly equal and opposite. The sum above
    > works in math where precision is infinite and noise is zero. When you
    > apply it to noisy sensor data, you end up cancelling most of the signal
    > while adding up all the sources of noise.

Right. But since a Foveon-type sensor has (potentially) high throughput
    QE, it is an open question whether the final result is better than RGB
    Bayer, or not.

    Hope this helps,
    Ilya
    Anonymous
    May 29, 2005 11:22:54 AM

    Archived from groups: rec.photo.digital (More info?)

    Dave Martindale wrote:

    > "Roger N. Clark (change username to rnclark)" <username@qwest.net> writes:
    >
    >
    >>>I suspect what Ilya was getting at is that you don't have to have filter
    >>>responses that equal the human colour matching functions. You *can* use
    >>>filter responses that are some linear combination of the colour matching
    >>>functions, then use the inverse matrix to convert from the colour space
    >>>you've chosen back into "human colour space".
    >
    >
    >>Dave,
    >>We've been through this a couple of posts up the thread.
    >
    >
    > Sorry, I've been busy and haven't been able to read the group much
    > recently. My news server doesn't have the previous posts.
    >
    >
    >>The problem is that the linear conversion works for only one
    >>light source spectral distribution, e.g., you could define
    >>it for sunlight. But when you change the light source, you
>>can't recover color accurately because you can't determine whether
>>what the camera measures is due to color in the subject or to
>>color changes in the illuminating light source (e.g. fluorescent
>>lights, or tungsten lights). So the linear additive
    >>(or subtractive) problem has a solution for only one condition.
    >>All other conditions have errors in the derived colors.
    >
    >
    > I don't think this is a problem for photography. If you're trying to
    > determine *accurate* colour when the illuminating light has some weird
    > spectrum, you're right. If you change the lighting from tungsten to
    > fluorescent, the colours change. But generally in photography we just
    > want to reproduce the colour as the human eye saw it. We don't care
    > what the object colour is separate from the illumination - we just want
    > to know what colour the reflected light is.

    This is exactly the problem with filter design and digital cameras
    (or film design and the color film layers). If there is too much
    out of band response in the camera system (out of the band defined
    by the eye), then the colors produced by the camera will be poor
depending on the lighting situation. This is the case with the
Foveon sensor: the filters are too broad, letting in too much
out-of-band response.

    > Having camera colour matching functions that match those of the eye
    > (after the linear transformation) is sufficient to make the camera see
    > the same way the eye sees. (Well, there are still some problems because
    > the eye does automatic white balance and the camera often does not, or
    > does it in a different way).

    Agreed.

    Roger
    Anonymous
    May 29, 2005 11:39:00 AM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich wrote:

    > [A complimentary Cc of this posting was sent to
    > Roger N. Clark (change username to rnclark)
    > <username@qwest.net>], who wrote in article <429878A2.1050208@qwest.net>:
    >
    >>As I model optical system responses for a living, I have an intuitive
    >>feel for this. My feel is you might do pretty well with three
    >>per channel but only if there is no response too far out of band.
    >
    >
    > I do not do this for a living, but I can sit and *calculate* the
    > answer. With 3 per channel, you can approximate a narrow-band
    > response with 0.3% accuracy.

Interesting. I've never seen anyone achieve this kind of spectral
convolution precision with so few bands. I didn't even know that the
human eye response was measured to that kind of precision.
    Are you achieving this precision with any light source?
    If so, how? It's never been done before, so you must have
    made an incredible breakthrough.

    > And this is for approximation of *narrow-band*. With wide-band stuff
    > (such as eye functions) the Foveon people did an amazing fit even with
    > 3 layers (with a pre-filter). [I wonder if an aftermarket filter with
    > a custom raw-converter could improve the performance of Sigmas...]

    This makes your work even more incredible. Please tell us
    exactly how you achieved this, as again, it's never been done
    before.

    >>But again, that few would only work if there are no strong out of
    >>band responses.
    >
    > Irrelevant. The type of filtering which multi-layer sensor does is
    > mathematically equivalent to cosine-transform (one can be easily
    > expressed through the others). As you can see, cos(n*x) has a lot of
    > "out of band response"; but you can approximate any even function by a
    > combination of cosines.

    Wow! Now I am really impressed. You achieved what has never before
been possible. Out-of-band responses have been the bane of
multiple fields, from imaging to spectroscopy, for decades.
    To reduce them to irrelevancy is Nobel Prize work. You
    will revolutionize many scientific fields.

    I honestly do not know why you spend your valuable time here in
    this newsgroup. With such valuable discoveries, I would think you
    would be working on publications and patent applications.
    I can't wait to see the scientific papers, and be able to use
    new cameras that are so much better than existing ones.

    All of us here can one day say "we knew him before he was famous."

    Roger
    May 30, 2005 12:25:56 AM

    Archived from groups: rec.photo.digital (More info?)

    [ reaches for a fresh can of SarCazAway (tm) ]

[ frantically sopping up the goo as it seeps into the keyboard ]

    Well, I saved the screen but the keyboard is toast... ;^)

    Jeff
    Anonymous
    May 30, 2005 4:50:32 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Roger N. Clark (change username to rnclark)
    <username@qwest.net>], who wrote in article <4299C5F4.2060807@qwest.net>:
    > > I do not do this for a living, but I can sit and *calculate* the
    > > answer. With 3 per channel, you can approximate a narrow-band
    > > response with 0.3% accuracy.
    >
    > Interesting. I've never seen anyone do this kind of spectral
    > convolution precision with so few bands.

    Of course you have seen it.

[ASCII plot, garbled in the archive: a least-squares approximation of
a narrow-band response built from 2 wideband inputs per channel, on an
axis running from 0 to 1; a sharp central peak reaching about 0.986,
with oscillating off-band lobes dipping to about -0.1.]

This is with 2 wideband inputs per channel; as you can see, this gives
an off-band error of about 10%. With 3 per channel, the error goes down
to 0.3% (so it would not be visible in ASCII graphing).

    This is "essentially" Fourier transform; this "scientific
    breakthrough" is soon to be 200 years old. Any good class on calculus
    teaches this...
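
In case anyone wants to replay the experiment, a sketch along these
lines should do (the target band, its width, and the fit criterion are
my arbitrary choices here, so the exact error figures will differ):

  import numpy as np

  x = np.linspace(0.0, 1.0, 2000)
  target = np.exp(-((x - 0.5) / 0.03) ** 2)      # narrow band at x = 0.5

  for k in (2, 3):
      basis = np.array([np.cos(np.pi * n * x) for n in range(k + 1)])
      coef, *_ = np.linalg.lstsq(basis.T, target, rcond=None)
      fit = coef @ basis
      off = np.abs(x - 0.5) > 0.1                # look only off-band
      print(k, np.abs(fit - target)[off].max())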

    > I didn't even know that the human eye response was measured to that
    > kind of precision.

    As usual, absolutely irrelevant to this discussion (but, as I said in
    the original message, very relevant to ).

    > Are you achieving this precision with any light source?

Sigh again... The light source is absolutely irrelevant. What is
    relevant is to match the eye response...

    Hope this helps,
    Ilya
    Anonymous
    May 30, 2005 4:53:05 AM

    Archived from groups: rec.photo.digital (More info?)

    I wrote in article <d7do0o$k7b$1@agate.berkeley.edu>:

    > > I didn't even know that the human eye response was measured to that
    > > kind of precision.
    >
    > As usual, absolutely irrelevant to this discussion (but, as I said in
    > the original message, very relevant to ).
    ^^^

    .... to design of actual sensors (as opposed to just matching some
    curves with some others).

    Sorry for lousy editing in the initial message,
    Ilya