Trading pixels for ISO?
Anonymous
August 19, 2005 6:41:55 AM

Archived from groups: rec.photo.digital

I've only been active with a digicam for less than two months. I got
a digicam some years ago, but never did much with it back then. The
following is glaringly obvious to me, given my "expertise in the field".

- problem: a badly underexposed picture, e.g. shot at ISO 80 under
conditions where ISO 320 would've been appropriate.

- proposed solution: assume an X-Y co-ordinate system where pixel
(0,0) is the upper left-hand corner. Compose pixels in a new photo
as follows:
- sum the R values for points (0,0), (0,1), (1,0), and (1,1) and
store the sum as the R value for pixel (0,0) in the new picture
- ditto for the G and B values.

- to summarize, we end up creating a picture half the width and half
the height of the original (i.e. 1/4 the pixel count). Each pixel
(x', y') in the new picture will have an R value composed of the
sum of the R values of pixels...
(2 * x', 2 * y'), (2 * x' + 1, 2 * y'), (2 * x', 2 * y' + 1), and
(2 * x' + 1, 2 * y' + 1) from the original underexposed photo.
Ditto for the G and B values.

Averaging the values, multiplying them back up, and spreading them
over the original size will still result in a grainy picture. However,
what happens if we accept the reduced size/resolution, and keep the
sums of the RGB values collected in the original picture? Will we
effectively boost ISO by a factor of 4 without introducing graininess?
This should be trivial to implement in most graphics programs.

This algorithm can be generalized to a 3 X 3 size reduction for an ISO
boost factor of 9, 4 X 4 reduction for an ISO boost factor of 16, etc.
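
A minimal sketch of the 2 X 2 case (my own illustration, not an
existing program; it assumes Python with numpy, and an image already
loaded as an 8-bit RGB array, e.g. via Pillow's Image.open):

import numpy as np

def bin2x2(img):
    # Sum each 2 X 2 block of an (H, W, 3) uint8 RGB array, as
    # proposed above. Sums can reach 4 * 255 = 1020, so widen the
    # dtype first and clip back to 8 bits at the end.
    h = (img.shape[0] // 2) * 2          # drop a ragged odd row
    w = (img.shape[1] // 2) * 2          # and odd column, if any
    x = img[:h, :w].astype(np.uint16)
    sums = (x[0::2, 0::2] + x[0::2, 1::2] +
            x[1::2, 0::2] + x[1::2, 1::2])
    return np.clip(sums, 0, 255).astype(np.uint8)

Note the clip: a sum that exceeds 255 has to be clamped (or the image
kept in 16 bits), which is exactly the highlight issue raised later in
the thread.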

I've had these "it seemed like a good idea at the time" ideas, and
seen them shot down in flames before. This one seems so good that I've
got a bad gut feeling. But at first glance I don't see what's wrong
with my calculations. Any comments?

--
Walter Dnes; my email address is *ALMOST* like wzaltdnes@waltdnes.org
Delete the "z" to get my real address. If that gets blocked, follow
the instructions at the end of the 550 message.

Anonymous
August 19, 2005 6:49:55 AM

"Walter Dnes (delete the 'z' to get my real address)" <wzaltdnes@waltdnes.org> wrote:
> This algorithm can be generalized to a 3 X 3 size reduction for an ISO
> boost factor of 9, 4 X 4 reduction for an ISO boost factor of 16, etc.

> I've had these "it seemed like a good idea at the time" ideas, and
> seen them shot down in flames before. This one seems so good that I've
> got a bad gut feeling. But at first glance I don't see what's wrong
> with my calculations. Any comments?

Cool. Sounds like you should fire up Matlab and give it a try ;) 
Anonymous
August 19, 2005 1:21:58 PM

Walter Dnes (delete the 'z' to get my real address) wrote:

> I've only been active with a digicam for less than two months. I got
> a digicam some years ago, but never did much with it back then. The
> following is glaringly obvious to me, given my "expertise in the field".
>
> - problem: a badly underexposed picture, e.g. shot at ISO 80 under
> conditions where ISO 320 would've been appropriate.
>
> - proposed solution: assume an X-Y co-ordinate system where pixel
> (0,0) is the upper left-hand corner. Compose pixels in a new photo
> as follows:
> - sum the R values for points (0,0), (0,1), (1,0), and (1,1) and
> store the sum as the R value for pixel (0,0) in the new picture
> - ditto for the G and B values.
>
> - to summarize, we end up creating a picture half the width and half
> the height of the original (i.e. 1/4 the pixel count).

It is usually called pixel binning in the scientific CCD community, e.g.

http://www.noao.edu/outreach/aop/glossary/binning.html

> I've had these "it seemed like a good idea at the time" ideas, and
> seen them shot down in flames before. This one seems so good that I've
> got a bad gut feeling. But at first glance I don't see what's wrong
> with my calculations. Any comments?

It works to get you an extra factor of 2 or 3 in signal to noise at the
expense of a smaller image. Sometimes this tradeoff is worthwhile.
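
(The factor follows from simple noise statistics, assuming the noise
in each pixel is independent: binning an N x N block multiplies the
signal by N^2 while the summed noise only grows by sqrt(N^2) = N, so

  SNR_binned = (N^2 * S) / (N * sigma) = N * (S / sigma)

i.e. a factor of 2 for 2 x 2 binning and 3 for 3 x 3.)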

Increasing the effective exposure and/or gain of the CCD readout
amplifier are still the first choices.

Regards,
Martin Brown
Anonymous
August 20, 2005 12:12:56 AM

I think your theory is correct.

To execute this, one can use Photoshop.
1. Add the picture to itself (to reach four times the original values),
and then resize it with the correct algorithm.
Or
2. Resize the picture by a factor of four in area (linear 2 x),
and then add the picture to itself (to reach four times the values).

Method one has the disadvantage that boosting first
can push some values out of range.
(That information is then lost, before averaging,
losing some highlight nuances.)

Method two has the disadvantage that the values are
first added, then divided by four, rounded, and then
multiplied by four. (So the last bits get lost in the method,
but if there is some variation anyway, most people
will not notice.)
(Losing the two bits; maybe this can be done in 16 bits,
where 2 bits matter less.)

But either way it should deliver a picture with less
graininess. (A dedicated program, made not to lose
'high' values and not to round off, would produce marginally
better results.)
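
As a rough numeric illustration of the difference (my own example
with made-up values, in Python with numpy; not from any of the posts):

import numpy as np

# One 2 X 2 block of 8-bit values whose true binned sum (195)
# still fits in 8 bits.
block = np.array([30, 35, 60, 70], dtype=np.int32)

# Method one: boost by 4 first; 70 * 4 = 280 clips to 255, so the
# averaged result (188.75) no longer matches the true sum of 195.
method_one = np.clip(block * 4, 0, 255).mean()

# Method two: average first (rounded to an integer), then boost;
# the rounding costs the low bits: 195 / 4 = 48.75 -> 49 -> 196.
method_two = round(block.mean()) * 4

# Direct summation (the original proposal) clips only once, at the
# very end, and here stays exact: 195.
binned = min(int(block.sum()), 255)

print(method_one, method_two, binned)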

But there are some problems; the whole process is probably
not linear, so that might give some 'distortion'.

Then you are probably better off removing graininess
with software. You 'probably' lose less resolution that way,
with 'more' effect on reducing graininess.

ben brugman

"Walter Dnes (delete the 'z' to get my real address)" <wzaltdnes@waltdnes.org> schreef in bericht
news:430546f3$0$1575$c3e8da3@news.astraweb.com...
> - problem: a badly underexposed picture, e.g. shot at ISO 80 under
> conditions where ISO 320 would've been appropriate.
>
> - proposed solution: assume an X-Y co-ordinate system where pixel
> (0,0) is the upper left-hand corner. Compose pixels in a new photo
> as follows:
> - sum the R values for points (0,0), (0,1), (1,0), and (1,1) and
> store the sum as the R value for pixel (0,0) in the new picture
> - ditto for the G and B values.
>
> [snip]
Anonymous
August 21, 2005 3:30:02 AM

On 19 Aug 2005 02:49:55 GMT, Kevin, <kevin@nospam.invalid> wrote:

> Cool. Sounds like you should fire up Matlab and give it a try ;) 

I was hoping for a dedicated program. According to Gentoo, Matlab
shows up under "app-emacs/" and it would build emacs before building
Matlab. Talk about overkill.

--
Walter Dnes; my email address is *ALMOST* like wzaltdnes@waltdnes.org
Delete the "z" to get my real address. If that gets blocked, follow
the instructions at the end of the 550 message.
Anonymous
August 21, 2005 4:45:25 PM

It seems like Walter's algorithm would also be useful for reducing the
higher noise inherent in the small-area pixels found on current
point-n-shoot models. For a sufficient light level but
higher-than-desired noise, one could simply average, rather than add,
adjacent pixel levels.

Mark
Anonymous
August 21, 2005 9:16:16 PM

see_my_sig_at_bottom_of_message@waltdnes.org writes:
>On 19 Aug 2005 02:49:55 GMT, Kevin, <kevin@nospam.invalid> wrote:

>> Cool. Sounds like you should fire up Matlab and give it a try ;) 

> I was hoping for a dedicated program. According to Gentoo, Matlab
>shows up under "app-emacs/" and it would build emacs before building
>Matlab. Talk about overkill.

Matlab is an extensive numerical mathematics toolbox. You can certainly
use it without emacs. (But it's expensive).

Are you sure the Gentoo "matlab" is the same program, that it's not just
some sort of interface between emacs and the (separate) Matlab package?

Dave
Anonymous
August 22, 2005 2:32:58 AM

Martin Brown <|||newspam|||@nezumi.demon.co.uk> writes:

>It is usually called pixel binning in the scientific CCD community eg.

>http://www.noao.edu/outreach/aop/glossary/binning.html

It seems like binning would be problematic in Bayer-filter sensors,
because a NxN group of pixels would have measurements from all 3 (or 4)
colours that can't be added together without losing colour information.

I suppose you might be able to build a special CCD with 3 or 4 binning
capacitors, one for each filter colour. As you shift charge out of the
CCD, it would be steered to whichever capacitor was appropriate. So
(for example) a 4x4 pixel region of the sensor would be binned into a
2x2 group of red/green/blue capacitors, and then the charge in each
capacitor measured.

Anyone know if this is actually done with Bayer-sensor CCDs?

Dave
Anonymous
August 22, 2005 6:23:45 AM

"redbelly" <redbelly98@yahoo.com> wrote in message
news:1124653525.660762.189710@g14g2000cwa.googlegroups.com...
> It seems like Walter's algorithm would also be useful for
> reducing the higher noise inherent in the small-area pixels
> found on current point-n-shoot models. For a sufficient
> light level but higher-than-desired noise, one could simply
> average, rather than add, adjacent pixel levels.

Yes, for that you can also use Photoshop's (full version, don't know
about Elements) Filter>Pixelate>Mosaic... filter, before
down-sampling. Better results can be expected from proper
down-sampling algorithms.

Bart
Anonymous
August 22, 2005 6:36:18 AM

"Dave Martindale" <davem@cs.ubc.ca> wrote in message
news:D eaveq$ikv$2@mughi.cs.ubc.ca...
> Martin Brown <|||newspam|||@nezumi.demon.co.uk> writes:
>
>>It is usually called pixel binning in the scientific CCD community
>>eg.
>>http://www.noao.edu/outreach/aop/glossary/binning.html
>
> It seems like binning would be problematic in Bayer-filter sensors,
> because a NxN group of pixels would have measurements from all
> 3 (or 4) colours that can't be added together without losing colour
> information.

In those cases one could either bin after demosaicing, or one could of
course also attempt to bin GRGB filtered Raw sensor data, but that
would require a sensor weighting to approximate some sort of white
balance. I understand the latter approach is offered as an option in
ImagesPlus (http://www.mlunsold.com/).
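
A rough sketch of the per-colour idea in software (my illustration;
it assumes the raw frame is available as a 2D numpy array in an RGGB
layout with even dimensions, and it ignores white-balance weighting):

import numpy as np

def bin_bayer_2x2(raw):
    # Collapse each RGGB quartet of the mosaic into one output pixel,
    # keeping the colours separate as Dave suggests. The two greens
    # are summed, so that channel carries twice the signal of R or B.
    raw = raw.astype(np.uint32)
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    return np.dstack([r, g1 + g2, b])   # (H/2, W/2, 3) channel sums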

Bart
Anonymous
August 24, 2005 2:45:44 PM

On Sun, 21 Aug 2005 22:32:58 +0000 (UTC), Dave Martindale, <davem@cs.ubc.ca> wrote:

> It seems like binning would be problematic in Bayer-filter sensors,
> because a NxN group of pixels would have measurements from all 3 (or 4)
> colours that can't be added together without losing colour information.
>
> I suppose you might be able to build a special CCD with 3 or 4 binning
> capacitors, one for each filter colour. As you shift charge out of the
> CCD, it would be steered to whichever capacitor was appropriate. So
> (for example) a 4x4 pixel region of the sensor would be binned into a
> 2x2 group of red/green/blue capacitors, and then the charge in each
> capacitor measured.
>
> Anyone know if this is actually done with Bayer-sensor CCDs?

I was proposing to do this with software after the fact, rather than
in hardware. Hence the title of this thread. E.g. take a 2560 X 1920
digital photo, and bin it down to 1280 X 960 (1/4th the area) or 853 X
640 (1/9th the area) or 640 X 480 (1/16th the area). The algorithm
would go like so...

The user supplies 4 parameters
- input filename
- an integer N, to specify N x N binning
- a floating point divisor, generally between 1.0 and N^2
- output filename

Let's assume an RGB image with 8 bits per colour channel. Specify N
= 2, i.e. 2 X 2 binning. The software would figure out the image
dimensions from the header. It would...
- sum up the R value from the 4 source pixels
- divide that result by the supplied divisor
- clip to 255 if necessary
- save the result in the R value of the output pixel in the output file
- ditto for the G and B channels

The advantages of the software approach...
- The hardware wouldn't have to do any of the work, and no hardware
modifications are required. Binning can be applied to existing
digital photos today.
- If the first attempt was too bright (blown highlights in the photo)
you can try again and again with a higher divisor. Doing it in
hardware would not be so forgiving.

Actually, there is only one place where the hardware could help. In
aperture priority mode, the shutter speed is calculated as a dependent
value of the aperture setting. Ditto with shutter priority, where
aperture is the dependent variable. It might help to be able to request
a specific amount of deliberate underexposure. But that can already be
forced today in manual mode, or with EV compensation.

Note that a divisor of 1 would be strictly summation. A divisor of
N^2 would be strictly averaging. Values in between would be a mixture.

Are there any programmers in the group? I, unfortunately, am not one.
I'd love to see an open source binning program as described above.
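
For what it's worth, here is a minimal sketch of such a program (my
own, untested against real camera files; it assumes Python with numpy
and Pillow installed, and the script name binning.py is made up):

#!/usr/bin/env python
# N x N binning with a user-supplied divisor, following the four
# parameters described above: infile, N, divisor, outfile.
import sys
import numpy as np
from PIL import Image

def bin_image(infile, n, divisor, outfile):
    img = np.asarray(Image.open(infile).convert("RGB"), dtype=np.float64)
    h = (img.shape[0] // n) * n          # trim ragged edges so the
    w = (img.shape[1] // n) * n          # image divides evenly by N
    # Sum each N x N block per channel: reshape to
    # (h/n, n, w/n, n, 3) and sum over the two block axes.
    sums = img[:h, :w].reshape(h // n, n, w // n, n, 3).sum(axis=(1, 3))
    out = np.clip(sums / divisor, 0, 255).astype(np.uint8)  # clip to 255
    Image.fromarray(out).save(outfile)

if __name__ == "__main__":
    infile, n, divisor, outfile = sys.argv[1:5]
    bin_image(infile, int(n), float(divisor), outfile)

Run as e.g. "binning.py in.png 2 1.0 out.png" for straight 2 X 2
summation, or with divisor 4.0 for pure averaging, per the note above.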

--
Walter Dnes; my email address is *ALMOST* like wzaltdnes@waltdnes.org
Delete the "z" to get my real address. If that gets blocked, follow
the instructions at the end of the 550 message.