
Deconvolution of Sensor Anti-Alias filter?

Anonymous
September 9, 2005 5:42:02 PM

Archived from groups: rec.photo.digital (More info?)

Folks,

Another 'do they do that and if not, why?' question.
Canon's sensors have an anti-alias filter for good reason.
Most everyone uses some sort of Unsharp-Mask to 'restore'
the 'sharpness' to the somewhat soft images.
Since the anti-alias filter is known and well defined,
wouldn't it make more sense to use a true 'reconstruction filter' (vs.
USM)
when processing RAW files as is typically done in sampled data systems?
I would think this could avoid some of the USM artifacts.
Does anyone know if this is done or is practical?

W
Anonymous
September 9, 2005 6:59:24 PM

Archived from groups: rec.photo.digital (More info?)

I may not have been clear. The optical anti-alias filter is before the
sensor. What I am suggesting is that a software reconstruction filter
be used on the RAW data to undo the in-band suppression caused by the
optical anti-alias filter.
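A minimal numpy sketch of the idea, assuming (hypothetically) that the
AA filter's point-spread function (PSF) were published. A regularized
(Wiener) inverse is used rather than a plain inverse so that noise is
not amplified where the filter response is near zero; the nsr value is
an illustrative guess, not anything Canon specifies:

import numpy as np

def wiener_restore(image, psf, nsr=0.01):
    # Undo a known blur in the frequency domain.
    # nsr: assumed noise-to-signal power ratio (regularization).
    padded = np.zeros_like(image, dtype=float)
    ph, pw = psf.shape
    padded[:ph, :pw] = psf
    # Center the PSF at the origin so restoration does not shift the image.
    padded = np.roll(padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(padded)
    G = np.fft.fft2(image)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener inverse of H
    return np.real(np.fft.ifft2(G * W))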
Anonymous
September 9, 2005 11:51:36 PM

Archived from groups: rec.photo.digital (More info?)

Looks interesting. I understand your point about including the whole
chain, but I would be happy to
'attack' the anti-alias filter only. Since this is a known quantity (at
least by Canon) it would be nice
if they supplied software to compensate for it. I will look into the
options you mentioned.

Thanks
Anonymous
September 10, 2005 12:13:45 AM

Archived from groups: rec.photo.digital (More info?)

<winhag@yahoo.com> wrote in message
news:1126298522.839835.130970@g44g2000cwa.googlegroups.com...
> Folks,
>
> Another 'do they do that and if not, why?' question.
> Canon's sensors have an anti-alias filter for good reason.
> Most everyone uses some sort of Unsharp-Mask to 'restore'
> the 'sharpness' to the somewhat soft images.
> Since the anti-alias filter is known and well defined,
> wouldn't it make more sense to use a true 'reconstruction filter' (vs.
> USM)
> when processing RAW files as is typically done in sampled data systems?
> I would think this could avoid some of the USM artifacts.
> Does anyone know if this is done or is practical?

The anti-alias filter no doubt has a specified Bode response. I have no
idea what the production variation parameters are. Ignoring production
variations, it should indeed be possible to correct for the filter in RAW
post processing (with a hell of a lot less guessing than USM). You have
asked a very good question.
September 10, 2005 1:19:23 AM

Archived from groups: rec.photo.digital (More info?)

<winhag@yahoo.com> wrote in message
news:1126298522.839835.130970@g44g2000cwa.googlegroups.com...
> Folks,
>
> Another 'do they do that and if not, why?' question.
> Canon's sensors have an anti-alias filter for good reason.
> Most everyone uses some sort of Unsharp-Mask to 'restore'
> the 'sharpness' to the somewhat soft images.
> Since the anti-alias filter is known and well defined,
> wouldn't it make more sense to use a true 'reconstruction filter' (vs.
> USM)
> when processing RAW files as is typically done in sampled data systems?
> I would think this could avoid some of the USM artifacts.
> Does anyone know if this is done or is practical?
>
> W
>
No, the best place to put a filter is before the data gets sampled. Analog
filters also don't present much of a computational load on the camera.
Deconvolution is a very different process.
Jim
September 10, 2005 3:21:36 AM

Archived from groups: rec.photo.digital (More info?)

winhag@yahoo.com wrote:

> I may not have been clear. The optical anti-alias filter is before the
> sensor. What I am suggesting is that a software reconstruction filter
> be used on the RAW data to undo the in-band suppression caused by the
> optical anti-alias filter.

Makes sense to me. Possibly USM is 'good enough' that there isn't enough
interest in doing what you suggest. Especially as it seems to be the
fashion that the 'appropriate' amount of sharpening is 'as much as
you can possibly get away with without introducing gross artefacts'.

- Len
September 10, 2005 3:28:23 AM

Archived from groups: rec.photo.digital (More info?)

<winhag@yahoo.com> wrote in message
news:1126303164.595790.288930@z14g2000cwz.googlegroups.com...
>I may not have been clear. The optical anti-alias filter is before the
> sensor. What I am suggesting is that a software reconstruction filter
> be used on the RAW data to undo the in-band suppression caused by the
> optical anti-alias filter.
>
There might not be any in-band suppression. In any case, only Canon would
know. You can think of the anti-aliasing filter as a kind of UV (low pass)
filter. These filters don't have much effect on visible light.
Jim
Anonymous
September 10, 2005 5:45:28 AM

Archived from groups: rec.photo.digital (More info?)

<winhag@yahoo.com> wrote in message
news:1126298522.839835.130970@g44g2000cwa.googlegroups.com...
> Folks,
>
> Another 'do they do that and if not, why?' question.
> Canon's sensors have an anti-alias filter for good reason.
> Most everyone uses some sort of Unsharp-Mask to
> 'restore' the 'sharpness' to the somewhat soft images.
> Since the anti-alias filter is known and well defined,
> wouldn't it make more sense to use a true 'reconstruction
> filter' (vs. USM) when processing RAW files as is typically
> done in sampled data systems?

Yes, but you should ideally also include the interaction of the lens
with the AA-filter in the same equation.

> I would think this could avoid some of the USM artifacts.
> Does anyone know if this is done or is practical?

I do it,
<http://www.xs4all.nl/~bvdwolf/main/downloads/Batavia_Cr...>
demonstrates capture sharpening only, and it's very practical for
special (low volume) cases but also computationally expensive (very
slow). It is, however, not a widespread practice outside
astronomical/scientific photography.

One could, as an alternative, create a high-pass 'capture-restoration'
filter based on the Point-Spread-Function (PSF) of the imaging chain,
and it will largely compensate for capture losses without introducing
halo, but it will enhance noise if used without an edge mask.
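A small sketch of such a capture-restoration kernel, assuming (purely
for illustration) a Gaussian PSF; the sigma and gain values are made
up, and the edge mask mentioned above is omitted:

import numpy as np

def gaussian_psf(size=7, sigma=0.8):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def restoration_kernel(psf, gain=1.0):
    # Identity plus gain * (identity - PSF): a first-order approximation
    # to inverting the PSF, i.e. a high-pass boost of what capture lost.
    ident = np.zeros_like(psf)
    ident[psf.shape[0] // 2, psf.shape[1] // 2] = 1.0
    return ident + gain * (ident - psf)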

There are several programs that allow deconvolution sharpening,
including Photoshop CS2 (Smart Sharpen filter), and DxO, and a few
others as well. The success usually depends on a good description of
the system blur (PSF), which can be obtained with e.g. Imatest
(www.imatest.com) software.

Bart
Anonymous
September 10, 2005 5:24:36 PM

Archived from groups: rec.photo.digital (More info?)

winhag@yahoo.com <winhag@yahoo.com> wrote:
> I may not have been clear. The optical anti-alias filter is before
> the sensor. What I am suggesting is that a software reconstruction
> filter be used on the RAW data to undo the in-band suppression
> caused by the optical anti-alias filter.

The transfer function of the anti-aliasing filter is multiplied by the
transfer function of whatever lens you use, so it makes sense to
compensate for both together.

In theory it is undoubtedly possible to multiply by the inverse
transfer function, but there is a really easy way to achieve the same
effect by using Imatest. All you have to do is measure the MTF of the
system and then adjust unsharp mask parameters until you get a sharp
black-to-white edge with no ringing. This doesn't take very long to
do, and it's unlikely that anything more mathematically sophisticated
would look much different.

Andrew.
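Andrew's tuning loop is easy to mock up; the radius/amount values below
are placeholders one would adjust against a measured edge profile:

import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_mask(image, radius=1.0, amount=1.5):
    # Classic USM: image + amount * (image - blurred image).
    blurred = gaussian_filter(image.astype(float), sigma=radius)
    return image + amount * (image - blurred)

# Sweep the amount, watching a synthetic black-to-white edge for the
# overshoot (ringing) Andrew says to avoid.
edge = np.tile(np.linspace(0, 1, 64) > 0.5, (64, 1)).astype(float)
captured = gaussian_filter(edge, sigma=0.8)   # stand-in for AA + lens blur
for amount in (0.5, 1.0, 1.5, 2.0):
    sharpened = unsharp_mask(captured, radius=1.0, amount=amount)
    print(amount, round(sharpened.max() - 1.0, 3))  # > 0 means overshoot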
Anonymous
September 10, 2005 7:38:16 PM

Archived from groups: rec.photo.digital (More info?)

<winhag@yahoo.com> wrote in message
news:1126320696.345901.76950@o13g2000cwo.googlegroups.com...
SNIP
> I will look into the options you mentioned.

You may also want to check out Image Analyzer
<http://meesoft.logicnet.dk/>, a free image analyzer/editor
(unfortunately only 8-bit/channel) written to assist with several
scientific imaging tasks.

Look at the "Filters|Restoration by deconvolution ..." option.
Choosing a Gaussian blur model with 0.70 to 0.95 radius, will add
sharpness with each iteration (often maxing out at 6 or 7) while
mostly avoiding typical USM artifacts.

Bart
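The iterative restoration behind a menu option like that is typically
Richardson-Lucy deconvolution; a generic textbook version follows (not
that program's actual code), using the Gaussian model and iteration
count suggested above:

import numpy as np
from scipy.ndimage import convolve

ax = np.arange(-3, 4)
xx, yy = np.meshgrid(ax, ax)
psf = np.exp(-(xx**2 + yy**2) / (2 * 0.8**2))   # Gaussian, ~0.8 radius
psf /= psf.sum()

def richardson_lucy(image, psf, iterations=7):
    # image: intensities scaled to [0, 1]; psf must sum to 1.
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, 0.5, dtype=float)
    for _ in range(iterations):
        reblurred = convolve(estimate, psf)
        ratio = image / np.maximum(reblurred, 1e-12)
        estimate = estimate * convolve(ratio, psf_mirror)
    return estimate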
Anonymous
September 12, 2005 5:24:04 PM

Archived from groups: rec.photo.digital (More info?)

The blur pattern created by the AA filter is 'remarkably' Gaussian, and
can be effectively nulled out with the ubiquitous USM filter. If it
were far from Gaussian, then you might do better with a matched spatial
filter, but this seems unnecessary at this point.
Anonymous
September 21, 2005 10:56:20 AM

Archived from groups: rec.photo.digital (More info?)

Thanks. Do you have any reference for the AA filter characteristics?
Anonymous
September 23, 2005 6:32:46 AM

Archived from groups: rec.photo.digital (More info?)

<winhag@yahoo.com> wrote in message
news:1127310979.987987.262100@g49g2000cwa.googlegroups.com...
> Thanks. Do you have any reference for the AA filter characteristics?

The camera manufacturers are not clear on what they actually
chose as a compromise.

The closest one may come to it is what Canon discloses:
http://www.canon.com/technology/d35mm/01.html

The "point blur' introduced by such a filter will usually spread the
signal pixel signal over 2 pixels, although the exact PSF
(point-spread-function) remains a bit of an "unknow". Given a choice
of material (Lithium Niobate) "thickness", and layers' orientation,
distance will determine the amount of actual spread.

Bart
Anonymous
September 24, 2005 9:46:59 PM

Archived from groups: rec.photo.digital (More info?)

"Bart van der Wolf" <bvdwolf@no.spam> writes:

>The closest one may come to it is what Canon discloses:
>http://www.canon.com/technology/d35mm/01.html

>The "point blur' introduced by such a filter will usually spread the
>signal pixel signal over 2 pixels,

4 pixels, because there are layers that provide horizontal and vertical
separation. The illustration shows how a high-frequency dot image gets
spread into 4 dots.

>Given a choice
>of material (Lithium Niobate) "thickness", and layers' orientation,
>distance will determine the amount of actual spread.

I've been told that manufacturers treat this as a tunable parameter. If
the spread is exactly one pixel pitch, you get a null in the response of
the system right at the Nyquist frequency, which is theoretically good.
But if you reduce the spread to perhaps 0.8 pixels, you get slightly
sharper-looking images at the risk of having more aliasing.

Dave
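Under the two-beam cosine model (an assumption; the real filter stacks
two such layers to make the 4-dot pattern), the spread d fixes the null
position, which makes Dave's trade-off easy to tabulate:

import numpy as np

def aa_mtf(freq, d):
    # One-axis response of a splitter with spread d (pixel pitches),
    # freq in cycles/pixel; Nyquist is 0.5.
    return np.abs(np.cos(np.pi * freq * d))

for d in (1.0, 0.8):
    print(f"d={d}: MTF at Nyquist {aa_mtf(0.5, d):.2f}, "
          f"first null at {1.0 / (2 * d):.3f} cycles/pixel")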
Anonymous
September 25, 2005 6:25:47 AM

Archived from groups: rec.photo.digital (More info?)

"Dave Martindale" <davem@cs.ubc.ca> wrote in message
news:D h43ej$6ce$1@mughi.cs.ubc.ca...
SNIP
> 4 pixels, because there are layers that provide horizontal and
> vertical separation. The illustration shows how a high-frequency
> dot image gets spread into 4 dots.

You're right, I was thinking about a single dimension spread.
The spread is obviously taking place in two dimensions.

Bart
Anonymous
September 26, 2005 11:26:35 AM

Archived from groups: rec.photo.digital (More info?)

[A complimentary Cc of this posting was sent to
Dave Martindale
<davem@cs.ubc.ca>], who wrote in article <dh43ej$6ce$1@mughi.cs.ubc.ca>:
> >Given a choice
> >of material (Lithium Niobate) "thickness", and layers' orientation,
> >distance will determine the amount of actual spread.
>
> I've been told that manufacturers treat this as a tunable parameter. If
> the spread is exactly one pixel pitch, you get a null in the response of
> the system right at the Nyquist frequency, which is theoretically good.

I do not see how this may be "theoretically good". There is no
aliasing at the Nyquist frequency, it appears *above* Nyquist. So
having a zero at Nyquist has absolutely no point.

Extending this by continuity: if you have an image with visible
frequency at 1.1 Nyquist, you get it aliased to 0.9 Nyquist. My
impression is that such a small difference between "actual" and
"recorded" frequency will not create any *visible* moire.

The "annoying" Moire is when some frequency in the image is aliased
into a *LOW* frequency in the recorded data. E.g., if some frequency
is aliased into one 1.5x smaller (I think this is the practical limit
of visible moire). This gives 1.2 Nyquist aliased into 0.8 Nyquist.

So one should not care *much* about efficiency of anti-aliasing filter
below the frequency of 1.2 Nyquist. So it looks like optimal position
for a zero of MTF is about 1.4 Nyquist (so the critical range 1.2--1.6
Nyquist is "well-covered").

Of course, the actual numbers I used are hunches only; it is easy to
simulate the process on computer and compare results *visually* to
obtain the "best" choices.

Hope this helps,
Ilya
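Ilya's suggested simulation reduces, under the cosine-law model he
assumes for the AAF, to checking how much of a given above-Nyquist
pattern survives each choice of null position:

import numpy as np

def aliased_contrast(pattern_nyq, null_nyq):
    # Residual contrast of a pattern at pattern_nyq * Nyquist after an
    # AAF whose first MTF zero is at null_nyq * Nyquist.
    f = 0.5 * pattern_nyq            # cycles/pixel
    d = 1.0 / null_nyq               # splitter spread giving that null
    return abs(np.cos(np.pi * f * d))

for null in (0.9, 1.0, 1.2, 1.4):
    print(null, round(aliased_contrast(1.5, null), 2))
# -> 0.87, 0.71, 0.38, 0.11 for a pattern at 1.5 Nyquist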
Anonymous
September 26, 2005 12:48:18 PM

Archived from groups: rec.photo.digital (More info?)

Ilya Zakharevich wrote:
[]
> So one should not care *much* about efficiency of anti-aliasing filter
> below the frequency of 1.2 Nyquist. So it looks like optimal position
> for a zero of MTF is about 1.4 Nyquist (so the critical range 1.2--1.6
> Nyquist is "well-covered").
>
> Of course, the actual numbers I used are hunches only; it is easy to
> simulate the process on computer and compare results *visually* to
> obtain the "best" choices.
>
> Hope this helps,
> Ilya

It would be interesting to see the results of such tests. My hunch is
that images from cameras with AA filters having a zero MTF just /before/
Nyquist would look subtly better than those where above Nyquist (e.g. 1.2)
is allowed.

Indeed, it may be factors like this which help fuel the Nikon/Canon
"wars" - different people have differing sensitivities to the artefacts
introduced by such MTF/aliasing differences.

David
Anonymous
September 26, 2005 9:56:26 PM

Archived from groups: rec.photo.digital (More info?)

Anyone venture a guess on why (I believe) Kodak does not put AA filters
on their sensors a la the Hasselblad H1/H2 and
(discontinued) Kodak SLR's?
Anonymous
September 27, 2005 2:31:12 AM

Archived from groups: rec.photo.digital (More info?)

"Ilya Zakharevich" <nospam-abuse@ilyaz.org> wrote in message
news:D h87rb$2t58$1@agate.berkeley.edu...
SNIP
> Extending this by continuity: if you have an image with visible
> frequency at 1.1 Nyquist, you get it aliased to 0.9 Nyquist. My
> impression is that such a small difference between "actual" and
> "recorded" frequency will not create any *visible* moire.

Also note that the Green sampling density is different from Red and
Blue (in a GRGB mosaic). This can lead to false color-moire unless
properly dealt with in the Raw conversion.

Bart
Anonymous
September 27, 2005 9:50:32 AM

Archived from groups: rec.photo.digital (More info?)

[A complimentary Cc of this posting was sent to
David J Taylor
<david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk.invalid>], who wrote in article <mBOZe.115880$G8.51148@text.news.blueyonder.co.uk>:

> It would be interesting to see the results of such tests. My hunch is
> that images from cameras with AA filters having a zero MTF just /before/
> Nyquist would look subtly better than those where above Nyquist (e.g. 1.2)
> is allowed.

Why do you think so? E.g., consider 0.9 vs 1.2; consider a pattern
with frequency 1.5 Nyquist. Your choice will pass through 87% of the
aliased pattern; mine only 38%. (Note that the aliased pattern
has frequency of 0.5 Nyquist, so should be very noticeable.)

> Indeed, it may be factors like this which help fuel the Nikon/Canon
> "wars" - different people have differing sensitivities to the artefacts
> introduced by such MTF/aliasing differences.

Sure, it is a human-centered design, so different people will like
different choices. However, IIUC, the choice of 0.9 should be
"practically always" worse than 1.2.

(This assumes that the performance of the splitting filter is
governed by the cosine law. In other words, the length of coherence
of the light reflected by the pattern is smaller than the difference
in the two optical paths through the splitter. If the length of
coherence is high enough, the law becomes cosine squared. Who knows
which one is applicable in practice?)

Hope this helps,
Ilya
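The 87% and 38% figures follow from the same cosine-law model (a null
at k x Nyquist corresponds to a spread of 1/k pixel pitches):

import numpy as np
f = 0.75                                  # 1.5 Nyquist, in cycles/pixel
print(abs(np.cos(np.pi * f / 0.9)))       # null at 0.9 Nyquist -> ~0.87
print(abs(np.cos(np.pi * f / 1.2)))       # null at 1.2 Nyquist -> ~0.38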
    Anonymous
    September 27, 2005 10:15:02 AM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich <nospam-abuse@ilyaz.org> writes:

>> I've been told that manufacturers treat this as a tunable parameter. If
    >> the spread is exactly one pixel pitch, you get a null in the response of
    >> the system right at the Nyquist frequency, which is theoretically good.

    >I do not see how this may be "theoretically good". There is no
    >aliasing at the Nyquist frequency, it appears *above* Nyquist. So
    >having a zero at Nyquist has absolutely no point.

    More precisely (though you really ought to know this): the sampling
    theorem says that the input frequencies must be strictly *less than* the
    Nyquist limit of 0.5 cycles/pixel. Although input at exactly the
    Nyquist frequency does not alias, it cannot be sampled reliably either.
    The measured amplitude can be double the true amplitude (if the sample
    points are all at peaks) or zero (if the sample points are all at zero
    crossings) or anywhere in between, depending on the relative phase of
    the signal and the sampling clock. So input at the Nyquist frequency is
    supposed to be filtered before sampling.
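Dave's phase-dependence point in a few lines of numpy: the same
Nyquist-rate sinusoid sampled at its peaks versus at its zero crossings:

import numpy as np
n = np.arange(8)              # sample index; Nyquist = 0.5 cycles/sample
print(np.cos(np.pi * n))      # sampled at peaks: alternating +/-1
print(np.round(np.sin(np.pi * n), 12))   # at zero crossings: all zeros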

    >Extending this by continuity: if you have an image with visible
    >frequency at 1.1 Nyquist, you get it aliased to 0.9 Nyquist. My
    >impression is that such a small difference between "actual" and
    >"recorded" frequency will not create any *visible* moire.

    It may not be Moire, but it's still wrong. This is visible in
    resolution tests of the Sigma SD-9, where the 9-line resolution target
    appears to be "resolved" at 2000 lines per picture height (1000 lp/ph),
    but there are only 5 bars. It's a lovely example of aliasing. I don't
    want *my* camera doing this.

    >The "annoying" Moire is when some frequency in the image is aliased
    >into a *LOW* frequency in the recorded data. E.g., if some frequency
    >is aliased into one 1.5x smaller (I think this is the practical limit
>of visible moire). This gives 1.2 Nyquist aliased into 0.8 Nyquist.

    Again, Moire isn't the only problem introduced by aliasing.

    >Of course, the actual numbers I used are hunches only; it is easy to
    >simulate the process on computer and compare results *visually* to
    >obtain the "best" choices.

    Though note that the "best" choice obtained this way depends on both the
    subject material and the viewer. The advantage of the conservative
    approach of putting the filter's null right at the Nyquist frequency is
    that the image content is more often correct, even if it appears a bit
    less sharp than when a higher-cutoff AA filter was used.

    Dave
    Anonymous
    September 27, 2005 10:16:39 AM

    Archived from groups: rec.photo.digital (More info?)

    "winhag@yahoo.com" <winhag@yahoo.com> writes:
    >Anyone venture a guess on why (I believe) Kodak does not put AA filters
    >on their sensors a la the Hasselblad H1/H2 and
    >(discontinued) Kodak SLR's?

    At least one Kodak DSLR did have an AA filter, and it was removable! I
    know I've seen a Kodak web page that showed the tradeoff: less moire
    with the filter, higher apparent sharpness (with moire) without the
    filter.

    Dave
    Anonymous
    September 27, 2005 12:16:34 PM

    Archived from groups: rec.photo.digital (More info?)

    winhag@yahoo.com wrote:
    > Anyone venture a guess on why (I believe) Kodak does not put AA
    > filters on their sensors a la the Hasselblad H1/H2 and
    > (discontinued) Kodak SLR's?

    - cost - leaving out the AA filter lowers cost

    - they didn't think that the lenses used would be good enough to produce
significant information above Nyquist?

    - "the pictures look better without" ?

    - "the resolution is already low enough - don't reduce it further" ?

    - they didn't realise one would be required? Nah!

    David
    Anonymous
    September 27, 2005 12:26:25 PM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > David J Taylor
    []
    >> It would be interesting to see the results of such tests. My hunch
    >> is that images from cameras with AA filters having a zero MTF just
    >> /before/ Nyquist would look subtly better than those where above
    >> Nyquist (e.g. 1.2) is allowed.
    >
> Why do you think so? E.g., consider 0.9 vs 1.2; consider a pattern
> with frequency 1.5 Nyquist. Your choice will pass through 87% of the
> aliased pattern; mine only 38%. (Note that the aliased pattern
> has frequency of 0.5 Nyquist, so should be very noticeable.)

    That's not what I meant - I meant a filter which had a cut-off of 0.9 of
    the sampling frequency. You were suggesting a cut-off just above, I'm
    suggesting keep it below as theory requires.

    David
    Anonymous
    September 27, 2005 2:17:01 PM

    Archived from groups: rec.photo.digital (More info?)

    <winhag@yahoo.com> wrote:
    > Anyone venture a guess on why (I believe) Kodak does not put AA filters
    > on their sensors a la the Hasselblad H1/H2 and
    > (discontinued) Kodak SLR's?

    A cynical marketing sleaze to fool mathematically naive photographers into
    thinking their products are sharper than they really are.

    David J. Littleboy
    Tokyo, Japan
    Anonymous
    September 28, 2005 1:32:06 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    David J Taylor
    <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk.invalid>], who wrote in article <Rm7_e.116477$G8.86155@text.news.blueyonder.co.uk>:
    > >> It would be interesting to see the results of such tests. My hunch
    > >> is that images from cameras with AA filters having a zero MTF just
    > >> /before/ Nyquist would look subtly better than those where above
    > >> Nyquist (e.g. 1.2) is allowed.

> > Why do you think so? E.g., consider 0.9 vs 1.2; consider a pattern
> > with frequency 1.5 Nyquist. Your choice will pass through 87% of the
> > aliased pattern; mine only 38%. (Note that the aliased pattern
> > has frequency of 0.5 Nyquist, so should be very noticeable.)

    > That's not what I meant - I meant a filter which had a cut-off of 0.9 of
    > the sampling frequency. You were suggesting a cut-off just above, I'm
    > suggesting keep it below as theory requires.

    As I show above, the theory requires exactly the opposite. Or did I
    misunderstand you? What is a "cut-off"? AAFs have no cut-off, their
MTF is given by the cosine law. What I was discussing was the position of
    the first zero.

    Hope this helps,
    Ilya
    Anonymous
    September 28, 2005 1:43:52 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Dave Martindale
    <davem@cs.ubc.ca>], who wrote in article <dhao16$pm6$1@mughi.cs.ubc.ca>:

    > More precisely (though you really ought to know this): the sampling
    > theorem says that the input frequencies must be strictly *less than* the
    > Nyquist limit of 0.5 cycles/pixel. Although input at exactly the
    > Nyquist frequency does not alias, it cannot be sampled reliably either.

    This is irrelevant, since the image on the focal plane has a
    continuous Fourier spectrum, so each *individual* frequency comes with
    0 amplitude. [You can calculate the "signal power" inside any
    *region* of frequencies, but the power goes down when the region
    narrows.]

    Thus quoting Nyquist in this context is not relevant. There may be
    some effects at frequencies close to limit frequency, but they should
    be attributed to finite size of the sensor.

    > >Extending this by continuity: if you have an image with visible
    > >frequency at 1.1 Nyquist, you get it aliased to 0.9 Nyquist. My
    > >impression is that such a small difference between "actual" and
    > >"recorded" frequency will not create any *visible* moire.
    >
    > It may not be Moire, but it's still wrong. This is visible in
    > resolution tests of the Sigma SD-9, where the 9-line resolution target
    > appears to be "resolved" at 2000 lines per picture height (1000 lp/ph),
    > but there are only 5 bars. It's a lovely example of aliasing. I don't
    > want *my* camera doing this.

    Interesting; do you have an URL? Although my immediate suspicion
    would be the demosaicing software, not AAF.

    > Though note that the "best" choice obtained this way depends on both the
    > subject material and the viewer. The advantage of the conservative
    > approach of putting the filter's null right at the Nyquist frequency is
    > that the image content is more often correct, even if it appears a bit
    > less sharp than when a higher-cutoff AA filter was used.

    As I had shown in my other messages, what you propose is *less often*
    correct than a milder AAF.

    Hope this helps,
    Ilya
    Anonymous
    September 28, 2005 3:23:58 AM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich <nospam-abuse@ilyaz.org> writes:

    >> More precisely (though you really ought to know this): the sampling
    >> theorem says that the input frequencies must be strictly *less than* the
    >> Nyquist limit of 0.5 cycles/pixel. Although input at exactly the
    >> Nyquist frequency does not alias, it cannot be sampled reliably either.

    >This is irrelevant, since the image on the focal plane has a
    >continuous Fourier spectrum, so each *individual* frequency comes with
    >0 amplitude. [You can calculate the "signal power" inside any
    >*region* of frequencies, but the power goes down when the region
    >narrows.]

    A regular pattern in object space produces some finite amplitude of a
    particular set of frequencies in image space. The power is there no
    matter how narrow you make the region of frequencies. So power does
    *not* go to zero as you narrow the region, for a single-frequency
    signal. It doesn't matter that the focal-plane image is continuous, or
    that its Fourier transform is continuous.

    But, in practical photography, any given frequency is either above or
    below Nyquist - you can never hit exactly Nyquist. So we should not use
    it in examples (like you did).

    But if you want to be precise, if you managed to have image content at
    exactly the Nyquist frequency, it would not be reliably sampled. That's
    why the sampling theorem says "less than", not "less than or equal".

    >> It may not be Moire, but it's still wrong. This is visible in
    >> resolution tests of the Sigma SD-9, where the 9-line resolution target
    >> appears to be "resolved" at 2000 lines per picture height (1000 lp/ph),
    >> but there are only 5 bars. It's a lovely example of aliasing. I don't
    >> want *my* camera doing this.

    >Interesting; do you have an URL? Although my immediate suspicion
    >would be the demosaicing software, not AAF.

    Look at the dpreview review of the SD9. Find the resolution test chart,
    and download the full-size image. Take a look at the resolution wedge
    at its narrowest point.

    Dave
    Anonymous
    September 28, 2005 5:47:28 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Dave Martindale
    <davem@cs.ubc.ca>], who wrote in article <dhckae$9gm$1@mughi.cs.ubc.ca>:
    > >> More precisely (though you really ought to know this): the sampling
    > >> theorem says that the input frequencies must be strictly *less than* the
    > >> Nyquist limit of 0.5 cycles/pixel. Although input at exactly the
    > >> Nyquist frequency does not alias, it cannot be sampled reliably either.
    >
    > >This is irrelevant, since the image on the focal plane has a
    > >continuous Fourier spectrum, so each *individual* frequency comes with
    > >0 amplitude. [You can calculate the "signal power" inside any
    > >*region* of frequencies, but the power goes down when the region
    > >narrows.]
    >
    > A regular pattern in object space produces some finite amplitude of a
    > particular set of frequencies in image space.

    Nope. You forget about the light fall off.

    > It doesn't matter that the focal-plane image is continuous, or
    > that its Fourier transform is continuous.

    The first one does not matter indeed. The second one matters: it
    makes "less than given frequency" and "less or equal to given
    frequency" indistinguishable.

    > But, in practical photography, any given frequency is either above or
    > below Nyquist - you can never hit exactly Nyquist.

    Right, this is exactly my point.

    > But if you want to be precise, if you managed to have image content at
    > exactly the Nyquist frequency, it would not be reliably sampled.

    Correct under the assumption. But the assumption is never satisfied.

    =======================================================

    > >> It may not be Moire, but it's still wrong. This is visible in
    > >> resolution tests of the Sigma SD-9, where the 9-line resolution target
    > >> appears to be "resolved" at 2000 lines per picture height (1000 lp/ph),
    > >> but there are only 5 bars. It's a lovely example of aliasing. I don't
    > >> want *my* camera doing this.
    >
    > >Interesting; do you have an URL? Although my immediate suspicion
    > >would be the demosaicing software, not AAF.
    >
    > Look at the dpreview review of the SD9. Find the resolution test chart,
    > and download the full-size image. Take a look at the resolution wedge
    > at its narrowest point.

Again, this example contradicts what you say, and confirms what I
said. You consider the image of a pattern at 1.32 of Nyquist, not of
a pattern at Nyquist. With an AAF with 0 at 0.9 of Nyquist (which you,
apparently, like), the contrast of the fake image will be decreased by
1/3, to 0.67 of the original value. With what I think is preferable
(0 at about 1.2 of Nyquist), the fake image will completely disappear:
its contrast will decrease 6x.

    Hope this helps,
    Ilya
    Anonymous
    September 28, 2005 5:52:24 AM

    Archived from groups: rec.photo.digital (More info?)

    In article <dhceeo$14b7$1@agate.berkeley.edu>, Ilya Zakharevich
    <nospam-abuse@ilyaz.org> writes
    >[A complimentary Cc of this posting was sent to
    >Dave Martindale
    ><davem@cs.ubc.ca>], who wrote in article <dhao16$pm6$1@mughi.cs.ubc.ca>:
    >
    >> More precisely (though you really ought to know this): the sampling
    >> theorem says that the input frequencies must be strictly *less than* the
    >> Nyquist limit of 0.5 cycles/pixel. Although input at exactly the
    >> Nyquist frequency does not alias, it cannot be sampled reliably either.
    >
    >This is irrelevant, since the image on the focal plane has a
    >continuous Fourier spectrum, so each *individual* frequency comes with
    >0 amplitude. [You can calculate the "signal power" inside any
    >*region* of frequencies, but the power goes down when the region
    >narrows.]
    >
Sorry Ilya, but that is just BS. It doesn't matter that the power in an
infinitesimal spatial bandwidth tends to zero; the important parameter
is the power density.

    >Thus quoting Nyquist in this context is not relevant. There may be
    >some effects at frequencies close to limit frequency, but they should
    >be attributed to finite size of the sensor.
    >
    No. At Nyquist, the finite size of the sensor results in an MTF of at
    least 64% based on a 100% fill factor, or more for reduced fill factors.
    --
    Kennedy
    Yes, Socrates himself is particularly missed;
    A lovely little thinker, but a bugger when he's pissed.
    Python Philosophers (replace 'nospam' with 'kennedym' when replying)
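For reference, the 64% figure is the aperture MTF of a 100% fill-factor
pixel at Nyquist under the standard sinc model (an assumption about the
model Kennedy is using):

import numpy as np
print(np.sinc(0.5 * 1.0))     # np.sinc(x) = sin(pi x)/(pi x) -> ~0.637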
    Anonymous
    September 28, 2005 5:52:32 AM

    Archived from groups: rec.photo.digital (More info?)

    In article <dh87rb$2t58$1@agate.berkeley.edu>, Ilya Zakharevich
    <nospam-abuse@ilyaz.org> writes
    >[A complimentary Cc of this posting was sent to
    >Dave Martindale
    ><davem@cs.ubc.ca>], who wrote in article <dh43ej$6ce$1@mughi.cs.ubc.ca>:
    >> >Given a choice
    >> >of material (Lithium Niobate) "thickness", and layers' orientation,
    >> >distance will determine the amount of actual spread.
    >>
>> I've been told that manufacturers treat this as a tunable parameter. If
    >> the spread is exactly one pixel pitch, you get a null in the response of
    >> the system right at the Nyquist frequency, which is theoretically good.
    >
    >I do not see how this may be "theoretically good". There is no
    >aliasing at the Nyquist frequency, it appears *above* Nyquist. So
    >having a zero at Nyquist has absolutely no point.
    >
Its onset is infinitesimally above Nyquist, hence Nyquist is the
effective limit, which is why the null should be there or below.

    >Extending this by continuity: if you have an image with visible
    >frequency at 1.1 Nyquist, you get it aliased to 0.9 Nyquist. My
    >impression is that such a small difference between "actual" and
    >"recorded" frequency will not create any *visible* moire.

    Perhaps not moire, since that requires an extended regular spatial
    frequency source, but it will have deleterious effects, such as making
    specular reflections larger than they should be. In terms of direct
    moire, misrepresenting 1.1x Nyquist as 0.9x Nyquist is an exaggeration
    of spatial frequencies by 20%, which certainly is noticeable when a
    dominant image component of that rate occurs in the image.

In my particular field, I have seen images of a tank with 8 regularly
    spaced wheels on their tracks appearing to have only 6 wheels and
    therefore being identified as a different vehicle completely as a
    consequence of less aliasing than the 20% example that you reference. If
    you don't think that is significant then I guess you don't think "blue
    on blue" attacks are significant either!
    >
    >The "annoying" Moire is when some frequency in the image is aliased
    >into a *LOW* frequency in the recorded data.

    Not at all - when a high spatial frequency is aliased to a low one the
    image distortion is so obvious as not to be objectionable in many cases.
    The really objectionable distortion is the one that is not immediately
    obvious, but unintentionally misrepresents the image.

    --
    Kennedy
    Yes, Socrates himself is particularly missed;
    A lovely little thinker, but a bugger when he's pissed.
    Python Philosophers (replace 'nospam' with 'kennedym' when replying)
    Anonymous
    September 28, 2005 5:54:05 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Kennedy McEwen
    <rkm@kennedym.demon.co.uk>], who wrote in article <cSbpNzDIleODFwEO@kennedym.demon.co.uk>:

    > >This is irrelevant, since the image on the focal plane has a
    > >continuous Fourier spectrum, so each *individual* frequency comes with
    > >0 amplitude. [You can calculate the "signal power" inside any
    > >*region* of frequencies, but the power goes down when the region
    > >narrows.]

    > Sorry Ilya, but that is just BS.

    I'm forced to return the compliment...

    > It doesn't matter that the power in an infinitesimal spatial
    > bandwidth tends to zero, the important parameter is the power
    > density.

    With finite power density, what happens at *one particular frequency*
    does not matter. So a sampling theorem holding for "less than
    Nyquist" implies that it also holds for "less or equal to Nyquist";
    actually, for continuous spectrum there is no difference between these
    two formulations.

    Hope this helps,
    Ilya
    Anonymous
    September 28, 2005 6:01:15 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Kennedy McEwen
    <rkm@kennedym.demon.co.uk>], who wrote in article <jTt2tOEQleODFwCc@kennedym.demon.co.uk>:
    > >Extending this by continuity: if you have an image with visible
    > >frequency at 1.1 Nyquist, you get it aliased to 0.9 Nyquist. My
    > >impression is that such a small difference between "actual" and
    > >"recorded" frequency will not create any *visible* moire.
    >
    > Perhaps not moire, since that requires an extended regular spatial
    > frequency source, but it will have deleterious effects, such as making
    > specular reflections larger than they should be.

Exactly the opposite. As you know, a stronger AAF blurs more strongly.

    > In terms of direct
    > moire, misrepresenting 1.1x Nyquist as 0.9x Nyquist is an exaggeration
    > of spatial frequencies by 20%

This may sound impressive, but we are talking about a difference *much*
    smaller than the pixel size.

> In my particular field, I have seen images of a tank with 8 regularly
    > spaced wheels on their tracks appearing to have only 6 wheels and
    > therefore being identified as a different vehicle completely as a
    > consequence of less aliasing than the 20% example that you reference.

    Well, if you do not know your tools, errors are always possible.

    > If you don't think that is significant then I guess you don't think
    > "blue on blue" attacks are significant either!

    Since I do not know what you mean, I suppose that "I do not think so" indeed.

    > >The "annoying" Moire is when some frequency in the image is aliased
    > >into a *LOW* frequency in the recorded data.

    > Not at all - when a high spatial frequency is aliased to a low one the
    > image distortion is so obvious as not to be objectionable in many cases.
    > The really objectionable distortion is the one that is not immediately
    > obvious, but unintentionally misrepresents the image.

    This depends much on what is the purpose of your image. If you want
    to distinguish 152mm howitzer (sp?) from 156mm one, then your choice
    of tools may be very different from that of a bird photographer.

    Hope this helps,
    Ilya
    Anonymous
    September 28, 2005 8:00:53 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Kennedy McEwen
    <rkm@kennedym.demon.co.uk>], who wrote in article <jTt2tOEQleODFwCc@kennedym.demon.co.uk>:

    [Some more thoughts on the same subject]

    > >Extending this by continuity: if you have an image with visible
    > >frequency at 1.1 Nyquist, you get it aliased to 0.9 Nyquist. My
    > >impression is that such a small difference between "actual" and
    > >"recorded" frequency will not create any *visible* moire.
    >
    > Perhaps not moire, since that requires an extended regular spatial
    > frequency source, but it will have deleterious effects, such as making
    > specular reflections larger than they should be. In terms of direct
    > moire, misrepresenting 1.1x Nyquist as 0.9x Nyquist is an exaggeration
    > of spatial frequencies by 20%, which certainly is noticeable when a
    > dominant image component of that rate occurs in the image.

Even in applications where this is noticeable (very few), note that an
AAF with zero at 1.2 Nyquist will perform 2.6x better at
eliminating this particular example of aliasing.

> In my particular field, I have seen images of a tank with 8 regularly
    > spaced wheels on their tracks appearing to have only 6 wheels and
    > therefore being identified as a different vehicle completely as a
    > consequence of less aliasing than the 20% example that you
    > reference.

Actually, this is yet more water on my wheel. ;-) Your example is an
example of aliasing 1.14 to 0.86. For this case, an AAF with zero at 1.2
will perform 5.2x better than one with zero at 0.9 of Nyquist.

Even if you compare an AAF with zero at 1.2 of Nyquist with one at
Nyquist, the former will perform 2.8x better.

    Hope this helps,
    Ilya
    Anonymous
    September 28, 2005 1:17:48 PM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > David J Taylor
    > <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk.invalid>],
    > who wrote in article
    > <Rm7_e.116477$G8.86155@text.news.blueyonder.co.uk>:
    >>>> It would be interesting to see the results of such tests. My hunch
    >>>> is that images from cameras with AA filters having a zero MTF just
    >>>> /before/ Nyquist would look subtly better than those where above
    >>>> Nyquist (e.g. 1.2) is allowed.
    >
>>> Why do you think so? E.g., consider 0.9 vs 1.2; consider a pattern
>>> with frequency 1.5 Nyquist. Your choice will pass through 87% of
>>> the aliased pattern; mine only 38%. (Note that the aliased
>>> pattern has frequency of 0.5 Nyquist, so should be very noticeable.)
    >
    >> That's not what I meant - I meant a filter which had a cut-off of
    >> 0.9 of the sampling frequency. You were suggesting a cut-off just
    >> above, I'm suggesting keep it below as theory requires.
    >
    > As I show above, the theory requires exactly the opposite. Or did I
    > misunderstand you? What is a "cut-off"? AAFs have no cut-off, their
    > MTF is given by cosine law. What I was discussing was the position of
    > the first zero.
    >
    > Hope this helps,
    > Ilya

    There's a factor of two missing here! My last message was confused, I
    meant 0.9 of the Nyquist frequency - half the sampling frequency. Sorry!
    I'm more used to electrical AA filters, where you aim to get a brickwall
    response (whilst not compromising phase linearity too much).

    I am not familiar with the expected response of optical AA filters. Can
    you point me to a plot of what you mean by cosine response - I hope you
    don't mean that if the zero is at half the sampling frequency, the
    response at the sampling frequency is -1.

    David
    Anonymous
    September 28, 2005 1:35:16 PM

    Archived from groups: rec.photo.digital (More info?)

    In article <dhct3t$18pu$1@agate.berkeley.edu>, Ilya Zakharevich
    <nospam-abuse@ilyaz.org> writes
    >[A complimentary Cc of this posting was sent to
    >Kennedy McEwen
    ><rkm@kennedym.demon.co.uk>], who wrote in article
    ><cSbpNzDIleODFwEO@kennedym.demon.co.uk>:
    >
    >> >This is irrelevant, since the image on the focal plane has a
    >> >continuous Fourier spectrum, so each *individual* frequency comes with
    >> >0 amplitude. [You can calculate the "signal power" inside any
    >> >*region* of frequencies, but the power goes down when the region
    >> >narrows.]
    >
    >> Sorry Ilya, but that is just BS.
    >
    >I'm forced to return the compliment...
    >
    >> It doesn't matter that the power in an infinitesimal spatial
    >> bandwidth tends to zero, the important parameter is the power
    >> density.
    >
    >With finite power density, what happens at *one particular frequency*
    >does not matter.

    Again, that is just plain wrong. If you want proof of this, create a
    test image with sinusoidal modulation. ie. one that *only* contains a
    single spatial frequency. If it didn't matter what happened at one
    particular frequency then it wouldn't matter what scale you imaged that
    at. Unfortunately, and contrary to your argument, it does - that single
    spatial frequency aliases if it is undersampled just as much as a range
    of frequencies would. Similarly, replacing the single sinusoidal
    pattern with a spatial frequency sweep produces a flat range of spatial
    frequencies and again, despite the power in an infinitesimal bandwidth
    falling to zero, those individual spatial frequencies which are
    undersampled alias just as much as the original.
    --
    Kennedy
    Yes, Socrates himself is particularly missed;
    A lovely little thinker, but a bugger when he's pissed.
    Python Philosophers (replace 'nospam' with 'kennedym' when replying)
    Anonymous
    September 28, 2005 1:48:00 PM

    Archived from groups: rec.photo.digital (More info?)

    In article <dhcthb$192l$1@agate.berkeley.edu>, Ilya Zakharevich
    <nospam-abuse@ilyaz.org> writes
    >[A complimentary Cc of this posting was sent to
    >Kennedy McEwen
    ><rkm@kennedym.demon.co.uk>], who wrote in article
    ><jTt2tOEQleODFwCc@kennedym.demon.co.uk>:
    >> >Extending this by continuity: if you have an image with visible
    >> >frequency at 1.1 Nyquist, you get it aliased to 0.9 Nyquist. My
    >> >impression is that such a small difference between "actual" and
    >> >"recorded" frequency will not create any *visible* moire.
    >>
    >> Perhaps not moire, since that requires an extended regular spatial
    >> frequency source, but it will have deleterious effects, such as making
    >> specular reflections larger than they should be.
    >
>Exactly the opposite. As you know, a stronger AAF blurs more strongly.

    However that blur of the AAF is simply the rejection of higher spatial
    frequencies. Aliasing is the misrepresentation of high spatial
    frequencies with lower ones. The AAF does not represent fine detail
    such as specular reflections with coarse detail - aliasing does.
    >
    >> In terms of direct
    >> moire, misrepresenting 1.1x Nyquist as 0.9x Nyquist is an exaggeration
    >> of spatial frequencies by 20%
    >
>This may sound impressive, but we are talking about a difference *much*
    >smaller than the pixel size.
    >
    If you want an estimate of how bad that actually is, talk to any of the
    film scanner people about the effect known as grain aliasing - exactly
    the same effect, misrepresenting fine undersampled grain as coarse grain
    in the final image. The same effect occurs here - fine textural
    information being misrepresented as coarse texture with the same
    contrast.

>> In my particular field, I have seen images of a tank with 8 regularly
    >> spaced wheels on their tracks appearing to have only 6 wheels and
    >> therefore being identified as a different vehicle completely as a
    >> consequence of less aliasing than the 20% example that you reference.
    >
    >Well, if you do not know your tools, errors are always possible.
    >
    Which is why correct filtering of the sensor is important - the end user
    shouldn't need to understand the detailed operation of the system or be
required to compensate for the tool's errors.

    >> If you don't think that is significant then I guess you don't think
    >> "blue on blue" attacks are significant either!
    >
    >Since I do not know what you mean, I suppose that "I do not think so" indeed.
    >
    Misinterpretation of targets results in errors which can, and in many
    cases have, resulted in fratricide or engagement of innocent bystanders.
    Seeing the wrong number of wheels on a vehicle can be enough to cause it
    to be identified as foe instead of friend.

    >> >The "annoying" Moire is when some frequency in the image is aliased
    >> >into a *LOW* frequency in the recorded data.
    >
    >> Not at all - when a high spatial frequency is aliased to a low one the
    >> image distortion is so obvious as not to be objectionable in many cases.
    >> The really objectionable distortion is the one that is not immediately
    >> obvious, but unintentionally misrepresents the image.
    >
    >This depends much on what is the purpose of your image. If you want
    >to distinguish 152mm howitzer (sp?) from 156mm one, then your choice
    >of tools may be very different from that of a bird photographer.
    >
    So it would be unimportant to a bird photographer if a seagull was
    misinterpreted to be an albatross? I don't think so! The differences
    between species of bird can be negligible compared to the difference
    between combatants.
    --
    Kennedy
    Yes, Socrates himself is particularly missed;
    A lovely little thinker, but a bugger when he's pissed.
    Python Philosophers (replace 'nospam' with 'kennedym' when replying)
    Anonymous
    September 29, 2005 12:09:25 AM

    Archived from groups: rec.photo.digital (More info?)

    Could the deconvolution of this be implemented in some sort of
    Photoshop 'Custom filter'?

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > David J Taylor
    > <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk.invalid>], who wrote in article <0dt_e.117155$G8.29444@text.news.blueyonder.co.uk>:
    >
    > > I am not familiar with the expected response of optical AA filters. Can
    > > you point me to a plot of what you mean by cosine response - I hope you
    > > don't mean that if the zero is at half the sampling frequency, the
    > > response at the sampling frequency is -1.
    >
> Yes it is (but this is for the splitter which separates the (two)
> images by 2 pixels). A splitter which separates the images by 1 pixel
> has 0 response at Nyquist frequency, and response -1 at twice the
> Nyquist.
>
> All the AAF does is break the image into two, and move the parts aside a
> little bit (well, the actual AAF does it twice: horizontally and
> vertically). If it moves it the distance A, then from signal 2f(X) you
> get f(X + A/2) + f(X - A/2).
>
> Hope this helps,
> Ilya
>
> (Again, this assumes completely incoherent light. For
> completely-coherent light you get the square of this. This is why my
> question about length of coherence is relevant.)
    Anonymous
    September 29, 2005 3:12:52 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    David J Taylor
    <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk.invalid>], who wrote in article <0dt_e.117155$G8.29444@text.news.blueyonder.co.uk>:

    > I am not familiar with the expected response of optical AA filters. Can
    > you point me to a plot of what you mean by cosine response - I hope you
    > don't mean that if the zero is at half the sampling frequency, the
    > response at the sampling frequency is -1.

Yes it is (but this is for the splitter which separates the (two)
images by 2 pixels). A splitter which separates the images by 1 pixel
has 0 response at Nyquist frequency, and response -1 at twice the
Nyquist.

All the AAF does is break the image into two, and move the parts aside a
little bit (well, the actual AAF does it twice: horizontally and
vertically). If it moves it the distance A, then from signal 2f(X) you
get f(X + A/2) + f(X - A/2).

Hope this helps,
Ilya

(Again, this assumes completely incoherent light. For
completely-coherent light you get the square of this. This is why my
question about length of coherence is relevant.)
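Ilya's f(X + A/2) + f(X - A/2) model is easy to check numerically; the
finely sampled grid below is just a stand-in for the continuous optical
image, and A = 1 pixel is the case discussed here:

import numpy as np

A = 1.0                                   # split distance, pixel pitches
x = np.linspace(0, 64, 64 * 64, endpoint=False)   # position in pixels
shift = int(round((A / 2) * 64))          # half the split, in grid steps

for u in (0.25, 0.5, 0.75):               # cycles per pixel pitch
    f = np.cos(2 * np.pi * u * x)
    g = 0.5 * (np.roll(f, shift) + np.roll(f, -shift))
    print(u, round(g.max(), 3))           # amplitude ~ |cos(pi * A * u)|
# -> 0.707, 0.0, 0.707: note that 1.5x Nyquist (u=0.75) passes as
# strongly as 0.5x Nyquist, which is what the debate above is about.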
    Anonymous
    September 29, 2005 3:16:04 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Kennedy McEwen
    <rkm@kennedym.demon.co.uk>], who wrote in article <EpYfzbBEXlODFw0B@kennedym.demon.co.uk>:
    > >With finite power density, what happens at *one particular frequency*
    > >does not matter.

    > Again, that is just plain wrong.

    Well, if you think so, you do not know the subject...

    > If you want proof of this, create a test image with sinusoidal
    > modulation. ie. one that *only* contains a single spatial
    > frequency.

You can't create such an image by an optical system.

    Hope this helps,
    Ilya

P.S. If these hints are not enough to help you find the error in your
arguments, let me know.
    Anonymous
    September 29, 2005 3:33:58 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Kennedy McEwen
    <rkm@kennedym.demon.co.uk>], who wrote in article <FpJfjPCAjlODFw0p@kennedym.demon.co.uk>:
> >Exactly the opposite. As you know, a stronger AAF blurs more strongly.

    > However that blur of the AAF is simply the rejection of higher spatial
    > frequencies. Aliasing is the misrepresentation of high spatial
    > frequencies with lower ones. The AAF does not represent fine detail
    > such as specular reflections with coarse detail - aliasing does.

To see the error in your arguments, put an AAF designed for a sensor
with an 8-micron pitch on a sensor with a 2.2-micron pitch; take a shot
of the same subject with the same lens. Aliasing disappears. Blur
remains.

> >This may sound impressive, but we are talking about a difference *much*
    > >smaller than the pixel size.

    > If you want an estimate of how bad that actually is, talk to any of the
    > film scanner people about the effect known as grain aliasing - exactly

All the discussion I have seen about so-called "grain aliasing" is pure
uneducated guessing. They see some effect, introduce a fancy name for it,
and try to discuss it as if they know what they are talking about. (Maybe
I saw the wrong sites; if you know something better, I would be glad to
revisit the question.)

    > the same effect, misrepresenting fine undersampled grain as coarse grain
    > in the final image. The same effect occurs here - fine textural
    > information being misrepresented as coarse texture with the same
    > contrast.

    Again, you stray from the topic: if fine structure is aliased into
coarse, we are not talking about aliasing *near the Nyquist limit*.

> >> In my particular field, I have seen images of a tank with 8 regularly
    > >> spaced wheels on their tracks appearing to have only 6 wheels and
    > >> therefore being identified as a different vehicle completely as a
    > >> consequence of less aliasing than the 20% example that you reference.

    > >Well, if you do not know your tools, errors are always possible.

    > Which is why correct filtering of the sensor is important - the end user
    > shouldn't need to understand the detailed operation of the system or be
    > required to compensate for the tools errors.

    This depends on the application area of a tool. Most tools are not
    subject to this requirement.

    =======================================================

Moreover, your arithmetic is way off: your example (8 aliased to
6) is NOT "less aliasing than the 20% example that you reference". To
nitpick, my example is actually 22% aliasing. Yours is 33% aliasing
(using the same metric).

    > >> If you don't think that is significant then I guess you don't
    > >> think "blue on blue" attacks are significant either!

> >Since I do not know what you mean, I suppose that "I do not think
> >so" indeed.

    > Misinterpretation of targets results in errors which can, and in many
    > cases have, resulted in fratricide or engagement of innocent bystanders.
    > Seeing the wrong number of wheels on a vehicle can be enough to cause it
    > to be identified as foe instead of friend.

    Some people are ready to solve problems of "social" nature by
    "technical" means (like: just introduce cleverly designed taxes, and
    people will become nicer to each other). Your example looks of
    similar flavor. It is people who fire weapons, not cameras. By
    changing a design of AAF you won't decrease fratricide (even if all of
    it is caused by results of misreading an image); most you can do is
    decrease one kind while increasing some other kind.

    > >This depends much on what is the purpose of your image. If you want
    > >to distinguish 152mm howitzer (sp?) from 156mm one, then your choice
    > >of tools may be very different from that of a bird photographer.

    > So it would be unimportant to a bird photographer if a seagull was
    > misinterpreted to be an albatross? I don't think so!

Let us get more details: you mean a 2-pixel-wide image of a seagull, or what?

    Hope this helps,
    Ilya
    Anonymous
    September 29, 2005 4:28:28 AM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    Dave Martindale
    <davem@cs.ubc.ca>], who wrote in article <dhao16$pm6$1@mughi.cs.ubc.ca>:
> >> I've been told that manufacturers treat this as a tunable parameter. If
    > >> the spread is exactly one pixel pitch, you get a null in the response of
    > >> the system right at the Nyquist frequency, which is theoretically good.

    > >I do not see how this may be "theoretically good". There is no
    > >aliasing at the Nyquist frequency; it appears *above* Nyquist. So
    > >having a zero at Nyquist has absolutely no point.

    > More precisely (though you really ought to know this):

    [Irrelevant argument omitted; see discussion in another subthread.]

    Actually, I think I found an easier argument to convince you that an
    AAF which has its first zero at the Nyquist frequency (in other words,
    one which splits the image into two copies exactly 1 pixel apart) is
    absolutely useless. I hope it is a much more convincing argument than
    calculating cosines at some random points. ;-)

    Here it is:

    The effect of an AAF which splits the image by the width of 1 pixel
    can be *completely* reproduced by postprocessing the digital image
    (just shift the digital image by 1 pixel, and average).

    As everyone knows, no aliasing can be removed by post-processing.
    Ergo: such a "filter" *does not remove any aliasing*.

    Is it more convincing now?
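
    A minimal numpy sketch of this argument (the one-frequency toy image
    and all numbers below are illustrative assumptions, nothing from the
    thread): a frequency above Nyquist ends up aliased with exactly the
    same amplitude whether the 1-pixel split is applied optically before
    sampling or digitally afterwards.

        import numpy as np

        # Toy "continuous image": one frequency above Nyquist (0.5 cycles/px).
        # 0.7 cycles/px must alias down to 0.3 cycles/px when sampled.
        def f(x):
            return np.sin(2 * np.pi * 0.7 * x)

        n = np.arange(1000)

        # (a) optical AAF (split by exactly 1 pixel), then sample:
        a = 0.5 * (f(n - 0.5) + f(n + 0.5))

        # (b) no AAF: sample first, then shift the digital image by
        #     1 pixel and average:
        u = f(n)
        b = 0.5 * (u + np.roll(u, 1))

        for name, s in (("AAF then sample    ", a), ("sample then average", b)):
            amp = np.abs(np.fft.rfft(s)) / (len(s) / 2)
            k = int(np.argmax(amp))
            print(name, "-> aliased line at", k / len(s),
                  "cycles/px, amplitude", round(float(amp[k]), 3))
        # Both print the aliased line at 0.3 cycles/px with amplitude
        # ~0.588: the optical split suppresses the aliasing no better
        # than trivial postprocessing, i.e. not at all.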

    Hope this helps,
    Ilya
    Anonymous
    September 29, 2005 1:16:57 PM

    Archived from groups: rec.photo.digital (More info?)

    In article <dhf87k$1v78$1@agate.berkeley.edu>, Ilya Zakharevich
    <nospam-abuse@ilyaz.org> writes
    >[A complimentary Cc of this posting was sent to
    >Kennedy McEwen
    ><rkm@kennedym.demon.co.uk>], who wrote in article
    ><EpYfzbBEXlODFw0B@kennedym.demon.co.uk>:
    >> >With finite power density, what happens at *one particular frequency*
    >> >does not matter.
    >
    >> Again, that is just plain wrong.
    >
    >Well, if you think so, you do not know the subject...
    >
    Fortunately, I do know the subject and have written several published
    papers on the topic.

    >> If you want proof of this, create a test image with sinusoidal
    >> modulation. ie. one that *only* contains a single spatial
    >> frequency.
    >
    >You can't create such an image with an optical system.
    >
    Ignoring the DC and low-frequency components necessary to define the
    overall intensity of the pattern, which are entirely irrelevant to
    this argument, you certainly can. Almost all modern lens MTF measuring
    equipment utilises such patterns.
    --
    Kennedy
    Yes, Socrates himself is particularly missed;
    A lovely little thinker, but a bugger when he's pissed.
    Python Philosophers (replace 'nospam' with 'kennedym' when replying)
    Anonymous
    September 29, 2005 1:19:53 PM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich wrote:
    > [A complimentary Cc of this posting was sent to
    > David J Taylor
    > <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk.invalid>],
    > who wrote in article
    > <0dt_e.117155$G8.29444@text.news.blueyonder.co.uk>:
    >
    >> I am not familiar with the expected response of optical AA filters.
    >> Can you point me to a plot of what you mean by cosine response - I
    >> hope you don't mean that if the zero is at half the sampling
    >> frequency, the response at the sampling frequency is -1.
    >
    > Yes it is (but this is for the splitter which separates the (two)
    > images by 2 pixels). A splitter which separates the images by 1 pixel
    > has 0 response at Nyquist frequency, and response -1 at twice the
    > Nyquist.

    >
    > All an AAF does is break the image into two, and move the parts
    > aside a little bit (well, the actual AAF does it twice: horizontally
    > and vertically). If it moves them a distance A, then from the signal
    > 2f(X) you get f(X + A/2) + f(X - A/2).
    >
    > Hope this helps,
    > Ilya
    >
    >
    > Again, this assumes completely incoherent light. For completely
    > coherent light you get the square of this. This is why my question
    > about the length of coherence is relevant.

    Thanks for that - I hadn't realised they were such crude filters compared
    to those we use in audio! It seems to me, therefore, all the more
    important that the lens has a well curtailed MTF.

    David
    Anonymous
    September 29, 2005 1:37:46 PM

    Archived from groups: rec.photo.digital (More info?)

    In article <dhf996$1vhh$1@agate.berkeley.edu>, Ilya Zakharevich
    <nospam-abuse@ilyaz.org> writes
    >[A complimentary Cc of this posting was sent to
    >Kennedy McEwen
    ><rkm@kennedym.demon.co.uk>], who wrote in article
    ><FpJfjPCAjlODFw0p@kennedym.demon.co.uk>:
    >> >Exactly the opposite. As you know, a stronger AAF blurs more strongly.
    >
    >> However that blur of the AAF is simply the rejection of higher spatial
    >> frequencies. Aliasing is the misrepresentation of high spatial
    >> frequencies with lower ones. The AAF does not represent fine detail
    >> such as specular reflections with coarse detail - aliasing does.
    >
    >To see the error in your argument, put an AAF designed for a sensor
    >with an 8-micron pitch on a sensor with a 2.2-micron pitch; take a
    >shot of the same subject with the same lens. The aliasing disappears.
    >The blur remains.
    >
    And this demonstrates what, precisely? Of course the blur remains
    because, as I stated previously, the blur is caused by the attenuation
    of the higher spatial frequencies in the image, NOT by their
    misrepresentation at lower spatial frequencies, as occurs in aliasing.

    >> If you want an estimate of how bad that actually is, talk to any of the
    >> film scanner people about the effect known as grain aliasing - exactly
    >
    >All the discussion I have seen about the so-called "grain aliasing"
    >is pure uneducated guesswork. People see some effect, introduce a
    >fancy name for it, and try to discuss it as if they knew what they
    >were talking about. (Maybe I saw the wrong sites; if you know of
    >something better, I would be glad to revisit the question.)
    >
    Try sampling white noise - what happens? HF noise is aliased to low
    frequencies. This occurs in every sampling system that is inadequately
    filtered, and film grain is high-frequency noise. Perhaps you did
    visit the wrong sites, but it would appear that you are the person who
    doesn't understand this.
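
    A small numpy sketch of this experiment (the 8x-oversampled grain
    model and the box prefilter are illustrative assumptions): without a
    prefilter, all of the fine grain's power folds into the sampled band
    and shows up as coarse, pixel-scale noise.

        import numpy as np

        # Model film grain as white noise on a grid 8x finer than the sensor.
        rng = np.random.default_rng(0)
        grain = rng.standard_normal(8 * 100_000)

        # Sample every 8th point with no prefilter: the high-frequency
        # grain power does not go away - it aliases into the sampled band.
        raw = grain[::8]

        # Crude optical prefilter (8-point box average), then sample:
        filtered = np.convolve(grain, np.ones(8) / 8, mode="same")[::8]

        print("no prefilter:   sampled grain std =", round(raw.std(), 3))       # ~1.0
        print("with prefilter: sampled grain std =", round(filtered.std(), 3))  # ~0.35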


    >> the same effect, misrepresenting fine undersampled grain as coarse grain
    >> in the final image. The same effect occurs here - fine textural
    >> information being misrepresented as coarse texture with the same
    >> contrast.
    >
    >Again, you stray from the topic: if fine structure is aliased into
    >coarse, we are not talking about aliasing *near the Nyquist limit*.
    >
    As I pointed out with the example you provided, the misrepresentation
    is quite significant and can readily cause misidentification of
    objects, even quite near the Nyquist limit.
    >
    >Moreover, your arithmetic is way off: your example (8 wheels aliased
    >to 6) is NOT "less aliasing than the 20% example that you reference".
    >To nitpick, my example is actually 22% aliasing; yours is 33% aliasing
    >(using the same metric).
    >
    My example was one that I have actually presented to over 100 trained
    observers, and it consistently results in target misidentification. It
    is at a level where problems are guaranteed to occur, well beyond
    their onset. Whilst 22% aliasing may result in fewer problems, even
    with something as well defined as a military vehicle the
    misidentification rate will certainly not be zero, and it is likely to
    be higher for objects (such as birds) where the difference between
    species or genders is much less dramatic.
    >
    >> So it would be unimportant to a bird photographer if a seagull was
    >> misinterpreted to be an albatross? I don't think so!
    >
    >Let us get more details: you mean 2-pixel wide image of a seagull, or what?
    >
    The minimum criterion for a 50% probability of identification of a
    well-defined object in static images is at least 12 pixels linearly
    (more correctly, 6 cycles resolved across the minimum dimension). This
    is a well-known standard that has been in use since the work of
    Johnson et al. in the 1940s and '50s. With more closely related
    objects, or where a higher probability is necessary, the requirement
    can be more than 4-5x this.
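
    As a back-of-envelope helper (this little function just restates the
    figures quoted above; it is an illustration, not any standard code):

        # Johnson-style criterion: ~6 resolved cycles across the minimum
        # target dimension for 50% identification; 2 pixels per cycle.
        def min_pixels_for_identification(cycles=6.0, difficulty=1.0):
            # difficulty > 1 for closely related objects or a higher
            # required probability (can exceed 4-5 per the figures above)
            return 2 * cycles * difficulty

        print(min_pixels_for_identification())                 # 12.0 pixels
        print(min_pixels_for_identification(difficulty=4.5))   # 54.0 pixels
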
    --
    Kennedy
    Yes, Socrates himself is particularly missed;
    A lovely little thinker, but a bugger when he's pissed.
    Python Philosophers (replace 'nospam' with 'kennedym' when replying)
    Anonymous
    September 29, 2005 1:37:47 PM

    Archived from groups: rec.photo.digital (More info?)

    Kennedy McEwen wrote:
    []
    > The minimum criterion for a 50% probability of identification of a
    > well-defined object in static images is at least 12 pixels linearly
    > (more correctly, 6 cycles resolved across the minimum dimension).
    > This is a well-known standard that has been in use since the work of
    > Johnson et al. in the 1940s and '50s. With more closely related
    > objects, or where a higher probability is necessary, the requirement
    > can be more than 4-5x this.

    Remember as well that a well-trained observer (i.e. an expert) can
    manage with a picture that looks like a blur to the rest of us.
    Example: doctors examining X-ray images. Motion can also play an
    important part - a human does not move in the same way as a vehicle,
    so you don't need to see the limbs to tell one from the other.

    David
    Anonymous
    September 29, 2005 7:57:43 PM

    Archived from groups: rec.photo.digital (More info?)

    [A complimentary Cc of this posting was sent to
    David J Taylor
    <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk.invalid>], who wrote in article <ZkO_e.117691$G8.109896@text.news.blueyonder.co.uk>:
    > > All an AAF does is break the image into two, and move the parts
    > > aside a little bit (well, the actual AAF does it twice:
    > > horizontally and vertically). If it moves them a distance A, then
    > > from the signal 2f(X) you get f(X + A/2) + f(X - A/2).

    > Thanks for that - I hadn't realised they were such crude filters compared
    > to those we use in audio! It seems to me, therefore, all the more
    > important that the lens has a well curtailed MTF.

    Since any lens has "a well curtailed MTF" (at least at some f-stops),
    in the age of cheap electronics this translates to: it is important
    that the Nyquist frequency of the sensor be close to the first zero of
    the "best" MTF of the lens.

    E.g., as I showed in another thread, pushing about 2.5x more pixels
    into current FF (or close) cameras AND removing the AAF should produce
    (with a good lens) practically the same "resolution-wise quality of
    the picture"; since linear resolution scales as the square root of the
    pixel count, and sqrt(2.5) is about 1.6, this allows roughly 1.5x
    larger magnification. It will also produce no degradation in the
    visible noise - as long as you stay 1.5x further away from the
    picture.

    So you can "dive 1.5x deeper" into the resulting picture: when you
    stand far away, the resolution is limited by the resolution and field
    of view of the eye; but you can come close to inspect a smaller area
    of the image. The range of viewing distances where you are limited by
    the resolution of the eye is going to be 1.5x larger with this sensor.
    (If you do a lot of postprocessing, the dark areas of the image may be
    noisier at the closest viewing distances - distances which were not
    satisfactory resolution-wise with the coarser sensor anyway.)

    Apparently, such sensors are not yet realistic, due to limitations of
    camera firmware.

    Hope this helps,
    Ilya
    Anonymous
    September 29, 2005 7:59:43 PM

    Archived from groups: rec.photo.digital (More info?)

    Ilya Zakharevich <nospam-abuse@ilyaz.org> writes:

    >> I am not familiar with the expected response of optical AA filters. Can
    >> you point me to a plot of what you mean by cosine response - I hope you
    >> don't mean that if the zero is at half the sampling frequency, the
    >> response at the sampling frequency is -1.

    >Yes it is (but this is for the splitter which separates the (two)
    >images by 2 pixels). A splitter which separates the images by 1 pixel
    >has 0 response at Nyquist frequency, and response -1 at twice the
    >Nyquist.


    No, David was right - it's the response of a filter with one pixel
    separation. That has a zero at the Nyquist frequency, which is half the
    sampling frequency, and a -1 response at the sampling frequency which is
    twice Nyquist.

    Here, the -1 means that a signal at the sampling frequency is not
    attenuated at all, but it is shifted by half a pixel - a 180-degree
    phase shift, which is equivalent to multiplying the signal by -1.
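
    A quick numeric check of this (assuming, per the discussion above,
    that a splitter with separation d pixel pitches has frequency
    response cos(pi*d*nu), with nu in cycles/pixel):

        import numpy as np

        def splitter_response(nu, d=1.0):
            # two-spike impulse response at +/- d/2  =>  cos(pi * d * nu)
            return np.cos(np.pi * d * nu)

        print(round(float(splitter_response(0.5)), 12))  # Nyquist: 0.0 - nulled
        print(round(float(splitter_response(1.0)), 12))  # sampling freq: -1.0,
                                                         # i.e. full amplitude,
                                                         # 180-degree phase shift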

    Dave
    Anonymous
    September 29, 2005 8:02:18 PM

    Archived from groups: rec.photo.digital (More info?)

    "winhag@yahoo.com" <winhag@yahoo.com> writes:
    >Could the deconvolution of this be implemented in some sort of
    >Photoshop 'Custom filter'?

    Deconvolution can't bring back frequencies that are completely gone,
    where the filter has zero response. It can boost the frequencies that
    were only attenuated somewhat, at the cost of boosting noise. But the
    finer the control you want over the shape of the filter, the larger
    the filter will be.

    I wonder how much better this approach could be than simple unsharp
    masking with well-chosen parameters.
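
    One way to see the trade-off is a regularised (Wiener-style) inverse
    of the AAF's cosine response. This is a sketch under assumed numbers
    (the noise-to-signal ratio in particular), not anyone's actual RAW
    converter:

        import numpy as np

        def restoration_gain(nu, d=1.0, nsr=0.01):
            # nsr: assumed noise-to-signal power ratio; it caps the gain
            # where the cosine response of the 1-pixel-split AAF nears zero
            h = np.cos(np.pi * d * nu)
            return h / (h**2 + nsr)

        for nu in (0.1, 0.3, 0.45, 0.5):
            print(f"nu = {nu:4} cycles/px: gain {restoration_gain(nu):5.2f}")
        # Gains ~1.04, 1.65, 4.54, then 0.00 at the Nyquist null:
        # attenuated frequencies are boosted (together with the noise),
        # but the nulled frequency cannot be brought back at all.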

    Dave
    Anonymous
    September 29, 2005 8:16:54 PM

    Archived from groups: rec.photo.digital (More info?)

    "David J Taylor" <david-taylor@blueyonder.co.not-this-bit.nor-this-part.uk.invalid> writes:

    >Thanks for that - I hadn't realised they were such crude filters compared
    >to those we use in audio! It seems to me, therefore, all the more
    >important that the lens has a well curtailed MTF.

    Yes, it's not a low-pass filter at all, in and of itself.

    The low-pass anti-alias filtering in a camera is actually provided by
    at least 3 things:

    - the blur spot of the lens
    - the image-shifting "AA" filter, with its cos(pi*x) response
    - integration over the area of the sensor pixels, or the lenslets
    if the sensor is so equipped, with its sin(pi*x)/(pi*x) response

    I suspect that the main contribution of the crystal "anti-aliasing"
    filter is not the prevention of aliasing at all, but the two effects
    below (the first is illustrated by a small sketch after this list):

    - It acts as a notch filter to remove luminance modulation at Fs/2
    because the Bayer filter array operates by *generating* modulation
    of the signal from the sensor at exactly Fs/2 when the image is
    coloured instead of grey. With the AA filter, the demosaicing
    algorithm can reliably decode modulation at Fs/2 as colour,
    avoiding luminance crosstalk into colour.

    - It also acts to attenuate frequencies somewhat *below* Nyquist, which
    is a good thing. In theory, any frequency below Nyquist could be
    sampled and reproduced accurately - but only by using an infinitely
    large reconstruction filter with "brick wall" response. Real computer
    displays and real resampling algorithms do not use such filters, and
    in practice television and digital photography can only resolve up to
    about 70-80% of Nyquist before you start seeing artifacts. So it's
    useful to attenuate these troublesome frequencies slightly below
    Nyquist.
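
    A tiny sketch of the first point (the RGGB row layout and the field
    values are assumed for illustration): a uniform coloured field leaves
    the sensor already modulated at exactly Fs/2, while a grey one does
    not.

        import numpy as np

        def bayer_row(red, green, n=8):
            # one sensor row of an RGGB mosaic: R G R G ...
            row = np.empty(n)
            row[0::2] = red     # red photosites
            row[1::2] = green   # green photosites
            return row

        print(bayer_row(1.0, 0.0))  # uniform red:  [1 0 1 0 ...] - pure Fs/2
        print(bayer_row(0.5, 0.5))  # uniform grey: flat, no Fs/2 modulation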

    The lens blur provides a gradual falloff of MTF. Integration in the
    sensor pixels gives a falloff with its first zero at the sampling
    frequency; there's not much attenuation at Nyquist yet. Only the
    AA filter provides significant attenuation below Nyquist.
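
    Putting the three contributions together numerically (the Gaussian
    lens model and its width are my assumptions; the other two terms
    follow the responses listed above):

        import numpy as np

        def system_mtf(nu, lens_sigma=0.5):
            # nu in cycles/pixel; lens_sigma in pixels (assumed)
            lens = np.exp(-2 * (np.pi * lens_sigma * nu) ** 2)  # Gaussian blur
            aaf = np.cos(np.pi * nu)        # 1-pixel-split AA filter
            pixel = np.sinc(nu)             # np.sinc(x) = sin(pi x)/(pi x)
            return lens, aaf, pixel, lens * aaf * pixel

        for nu in (0.25, 0.5):
            lens, aaf, pixel, total = system_mtf(nu)
            print(f"nu={nu}: lens {lens:.2f}  aaf {aaf:.2f}  "
                  f"pixel {pixel:.2f}  total {total:.2f}")
        # At Nyquist (nu = 0.5) the pixel aperture alone still passes ~64%
        # and this assumed lens ~29%; it is the AA filter's zero that takes
        # the combined response to 0.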

    Dave