# Bits per channel

Anonymous

Digicam sensors have 8 bits per channel to represent the voltage per
pixel, so luminance is represented by a number between 0 and 255.

Is there something to gain by moving to 16-bit or 32-bit? If yes, when
are we moving? Things seem to have been at 8-bit for quite a long time now.
- Siddhartha

Anonymous

Owamanga wrote:
> On 6 Jan 2005 05:06:34 -0800, "Siddhartha Jain"
> <losttoy2000@yahoo.co.uk> wrote:
>
> >Digicams sensors have 8 bits per channel to record represent voltage
> >per pixel. So luminescence is represented by a number between 0 and
> >255.
>
> State your source. Most DSLRs already use 12 bits per channel.

Ok, correction. 12-bits.

>
> >Is there something to gain by moving to 16-bit or 32-bit? If yes, when
> >are we moving?
>
I don't think that answers the question.

> But, before you get too fret up, remember that your graphics card
> can't even display 11 bits per channel.
Ok, so why not?

What I am trying to understand is: are there no significant
benefits in moving to a broader bus?
Anonymous

Owamanga wrote:
> On 6 Jan 2005 05:06:34 -0800, "Siddhartha Jain"
> <losttoy2000@yahoo.co.uk> wrote:
>
> >Digicams sensors have 8 bits per channel to record represent voltage
> >per pixel. So luminescence is represented by a number between 0 and
> >255.
>
> State your source. Most DSLRs already use 12 bits per channel.
Thanks. Correction, 12-bits.

>
> >Is there something to gain by moving to 16-bit or 32-bit? If yes, when
> >are we moving?
>
I don't think that answers the question.

> But, before you get too fret up, remember that your graphics card
> can't even display 11 bits per channel. So, if you want 32 bits per
> channel you'll never see the difference.

What I am trying to understand is: are there no significant advantages
to pushing the number of bits upwards?

- Siddhartha
Anonymous

"Siddhartha Jain" <losttoy2000@yahoo.co.uk> writes:

> Owamanga wrote:
>> On 6 Jan 2005 05:06:34 -0800, "Siddhartha Jain"
>> <losttoy2000@yahoo.co.uk> wrote:
>>
>> >Digicams sensors have 8 bits per channel to record represent voltage
>> >per pixel. So luminescence is represented by a number between 0 and
>> >255.
>>
>> State your source. Most DSLRs already use 12 bits per channel.
>
> Ok, correction. 12-bits.
>
>
>>
>> >Is there something to gain by moving to 16-bit or 32-bit? If yes, when
>> >are we moving?
>>
> I don't think that answers the question.

Well, 12 bits is 50% more than 8 bits, last I checked; so the benefit
of going all the way to 16 from 12 is less than the benefit of going
to 12 from 8.

>> But, before you get too fret up, remember that your graphics card
>> can't even display 11 bits per channel.
> Ok, so why not?

Because your eyes can't distinguish that many colors.

> What I am trying to understand is that are there no significant
> benefits in moving to a broader bus?

One limitation is the human visual system. Now, it's useful to
capture more than that in the initial shot -- but it has to be reduced
to what humans can see to work as a print for humans. Like a negative
being printed.

And, as people said, we *are* moving to broader buses. When the web
was new, it was rare to see pictures in more than 256 colors. Now
24-bit color is pretty much the baseline; and better cameras and
scanners produce 12 bits or more per channel.
--
David Dyer-Bennet, <mailto:d-b@dd-b.net>, <http://www.dd-b.net/dd-b/>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/>
Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/>
Dragaera/Steven Brust: <http://dragaera.info/>

David Dyer-Bennet wrote:

> "Siddhartha Jain" <losttoy2000@yahoo.co.uk> writes:
>
>>Owamanga wrote:
>>
>>>Siddhartha Jain wrote:
>>>
>>>can't even display 11 bits per channel.
>>
>>Ok, so why not?
>
> Because your eyes can't distinguish that many colors.

So I guess the real advantage is (maybe in printing?) that with post
curves & contrast adjustments, there is info in there which can be made
visible. This includes sharpening: unprocessed DSLR images look quite
bland and soft, but there is a heck of a lot more info in there to
fiddle with and bring out.
Anonymous

Steve Wolfe wrote:
> > Digicams sensors have 8 bits per channel to record represent voltage
> > per pixel. So luminescence is represented by a number between 0 and
> > 255.
> >
> > Is there something to gain by moving to 16-bit or 32-bit? If yes, when
> > are we moving? Things seem to be at 8-bit for pretty long now.
>
> I think that you would see a much greater improvement by moving to another
> color space instead of increasing the bits - the RGB color space doesn't
> even come close to covering the gamut that the human eye can see.

Apart from RGB variants what other colour-spaces are feasible?
- Siddhartha
Anonymous

> Digicams sensors have 8 bits per channel to record represent voltage
> per pixel. So luminescence is represented by a number between 0 and
> 255.
>
> Is there something to gain by moving to 16-bit or 32-bit? If yes, when
> are we moving? Things seem to be at 8-bit for pretty long now.

I think that you would see a much greater improvement by moving to another
color space instead of increasing the bits - the RGB color space doesn't
even come close to covering the gamut that the human eye can see.

steve
Anonymous

On 6 Jan 2005 05:06:34 -0800, "Siddhartha Jain"
<losttoy2000@yahoo.co.uk> wrote:

>Digicams sensors have 8 bits per channel to record represent voltage
>per pixel. So luminescence is represented by a number between 0 and
>255.

>Is there something to gain by moving to 16-bit or 32-bit? If yes, when
>are we moving?

>Things seem to be at 8-bit for pretty long now.

Then get a better camera.

But, before you get too fret up, remember that your graphics card
can't even display 11 bits per channel. So, if you want 32 bits per
channel you'll never see the difference.

--
Owamanga!

Owamanga wrote:

> Siddhartha Jain wrote:
>
>>Digicams sensors have 8 bits per channel to record represent voltage
>>per pixel. So luminescence is represented by a number between 0 and
>>255.
>
>
> ...remember that your graphics card
> can't even display 11 bits per channel. So, if you want 32 bits per
> channel you'll never see the difference.

My desktop display properties indicate 32 bit 'color quality' with an
option for 16 bit. I'm not sure if this is the same terminology. My
older PC ran a lot slower in 32 bit mode as I recall.
Anonymous

Siddhartha Jain wrote:
> Digicams sensors have 8 bits per channel to record represent voltage
> per pixel. So luminescence is represented by a number between 0 and
> 255.
>
> Is there something to gain by moving to 16-bit or 32-bit? If yes, when
> are we moving? Things seem to be at 8-bit for pretty long now.
> - Siddhartha

You get an analog value from the sensor that can be represented with about
12 bits, i.e. 4096 grey levels, with a linear scaling of grey level to
luminance. This is what you get in RAW files. This is the situation
inside the camera, before conversion to 8-bits takes place.

In image files, 8-bit JPEG or TIFF files, the digital value does not
directly represent light level, but a value nearer to the log of the light
level. A gamma-correction of approximately 0.45 power law is used to
convert the 0..4095 range of the sensor data into the 0..255 range of the
8-bit image data. In practice, perhaps only values 0..2047 are converted,
the remaining values in the RAW file representing the extra "headroom"
which people mention.

The effect of the gamma correction is to reduce the number of light levels
which can be separately represented at the bright end of the range. I.e.
the eye cannot distinguish between light levels of 2045 and 2046, so they
are both mapped to "254", for example. Light levels at the low end of the
0..4095 range (for example 1 or 2) are much more accurately represented in
the 8-bit JPEG/TIFF image, so shadow detail is preserved.
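That mapping can be sketched in a few lines. This is a toy version of the power-law encode described above; the 0.45 exponent and the 12-bit range come from the text, everything else is illustrative:

```python
def gamma_encode(linear, in_max=4095, out_max=255, gamma=0.45):
    """Map a linear 12-bit sensor value to a gamma-encoded 8-bit value."""
    return round(out_max * (linear / in_max) ** gamma)

# Shadow end: adjacent linear levels remain distinguishable after encoding.
shadow_codes = gamma_encode(1), gamma_encode(2)
# Highlight end: adjacent linear levels collapse into the same 8-bit code.
highlight_codes = gamma_encode(2045), gamma_encode(2046)
print(shadow_codes, highlight_codes)
```

With these numbers, linear levels 1 and 2 land on different 8-bit codes while 2045 and 2046 share one, which is exactly the shadow-versus-highlight trade-off described above.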

The display device typically has a gamma around 2.2, i.e. it is rather
non-linear between the drive voltage in and the light level out. The
combination of a 0.45 * 2.2 gamma (camera and display) results in an
approximately linear net transfer between light into the sensor and light
out of the display.
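The near-cancellation of the two curves is easy to verify numerically. A minimal sketch, assuming idealized 0.45 and 2.2 power laws rather than any real device's response:

```python
def camera_encode(light, gamma=0.45):
    """Idealized camera encode: scene-linear light in 0..1 to a drive signal."""
    return light ** gamma

def display_output(drive, gamma=2.2):
    """Idealized display response: drive signal in 0..1 to emitted light."""
    return drive ** gamma

# 0.45 * 2.2 = 0.99, so light out tracks light in almost linearly.
for light in (0.1, 0.5, 0.9):
    print(light, "->", round(display_output(camera_encode(light)), 3))
```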

This is all a simplification, but should help you understand why 8-bit
data is adequate (just) for normal usage. Personally, I would like to see
rather more than 8 bits, perhaps 10-bit or 12-bit JPEGs, so that all of
the sensor range and more could be used for subsequent processing steps.

Cheers,
David
Anonymous

Siddhartha Jain wrote:
[]
> What I am trying to understand is that are there no significant
> benefits in moving to a broader bus?

"significant" is the operative word.

Tests have shown that the eye has problems in using more than 8-bit data
when applied to a gamma-corrected monitor as I described, although you can
set up some special cases which show that for colour slightly more may be
required. Prior to conversion to 8-bits for display, though, there may be
a slight advantage working in the linear 12/16-bit domain.

My guess is that it will be like CDs - for domestic use, 16-bit 44.1 kHz
audio is adequate; for production, studios have moved to 24-bit 96/192 kHz
audio.

Cheers,
David
Anonymous

On 6 Jan 2005 06:42:55 -0800, "Siddhartha Jain"
<losttoy2000@yahoo.co.uk> wrote:

>Owamanga wrote:
>> On 6 Jan 2005 05:06:34 -0800, "Siddhartha Jain"
>> <losttoy2000@yahoo.co.uk> wrote:
>>
>> >Digicams sensors have 8 bits per channel to record represent voltage
>> >per pixel. So luminescence is represented by a number between 0 and
>> >255.
>>
>> State your source. Most DSLRs already use 12 bits per channel.
>Thanks. Correction, 12-bits.
>
>>
>> >Is there something to gain by moving to 16-bit or 32-bit? If yes, when
>> >are we moving?
>>
>I don't think that answers the question.
>
>> But, before you get too fret up, remember that your graphics card
>> can't even display 11 bits per channel. So, if you want 32 bits per
>> channel you'll never see the difference.
>
>What I am trying to understand is are there no significant advantages
>to pushing the number of bits upwards?

That's right - diminishing returns. This whole thing is designed
around what the human eye can see. There is no point going crazy with
16 bits, 32 bits, 64 bits, 128 bits per channel when we can't display,
print or see the added detail.

Can you see the difference between 24 bit mode (8 per channel) and 32
bit mode (10.5 per channel) on your graphics card ? I am sure if
someone switched mine down to 24 bits one morning, I'd probably never
even notice it had happened.

The only argument for 48 bit scanners (16 per channel) and the like is
that they allow slightly more scope for exposure correction (i.e., you
get to choose later which 8 bits per channel you want to keep).
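The "choose which 8 bits to keep" idea can be sketched like this. `develop_to_8bit` is a hypothetical helper, not any scanner's actual API, and a real converter would rescale smoothly rather than shift whole stops:

```python
def develop_to_8bit(sample16, stops=0):
    """Pick an 8-bit window out of a 16-bit sample, brightening by
    `stops` full stops of exposure before truncating."""
    boosted = sample16 << stops      # each stop doubles the linear value
    return min(boosted >> 8, 255)    # keep the top 8 bits, clipping at white

shadow = 0x0123                      # a dark 16-bit sample
print(develop_to_8bit(shadow))             # nearly black as scanned
print(develop_to_8bit(shadow, stops=3))    # pushed 3 stops, detail retained
```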

...same with digital audio. Take CDs - 44.1 kHz at 16 bits per sample is
plenty good enough for our human ears. Since that invention 20 years
ago, many subsequent digital audio standards have actually been much
worse; the best one is only just over double that sampling rate, and
nobody is buying it. We have reached a plateau.

--
Owamanga!
Anonymous

On Thu, 06 Jan 2005 07:16:11 -0800, paul <paul@not.net> wrote:

>Owamanga wrote:
>
>> Siddhartha Jain wrote:
>>
>>>Digicams sensors have 8 bits per channel to record represent voltage
>>>per pixel. So luminescence is represented by a number between 0 and
>>>255.
>>
>>
>> ...remember that your graphics card
>> can't even display 11 bits per channel. So, if you want 32 bits per
>> channel you'll never see the difference.
>
>
>My desktop display properties indicate 32 bit 'color quality' with an
>option for 16 bit. I'm not sure if this is the same terminology. My
>older PC ran a lot slower in 32 bit mode as I recall.

32 bits per pixel. Split this into the three color components of Red,
Green and Blue and you've got a theoretical 10.6 bits per channel. In
fact, most (if not all) are using 32 bits just to pad 24 actual bits
into something that fits neatly into 4 bytes - this is for performance
and design simplicity reasons. So, these modes are actually only
displaying 8 bits per channel: 16,777,216 discrete colors.

24/32 bit modes will be slower because they use 4 bytes of card memory
per pixel instead of 2 bytes in 16 bit mode (65,536 colors) or 1 byte
in 8 bit (256 color) mode.
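The padding can be seen directly if you pack pixels by hand. A minimal sketch; the layouts are the common XRGB8888 and RGB565 conventions, assumed here rather than taken from any particular card:

```python
import struct

def pack_xrgb8888(r, g, b):
    """32 bpp: 8 bits each of R, G, B in the low 3 bytes; top byte is padding."""
    return (r << 16) | (g << 8) | b

def pack_rgb565(r, g, b):
    """16 bpp: 5 bits red, 6 bits green, 5 bits blue, no padding."""
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

print(struct.calcsize("<I"), "bytes per pixel at 32 bpp")
print(struct.calcsize("<H"), "bytes per pixel at 16 bpp")
```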

Someone might correct me and tell me there is now available a true
10.6 bit, 12 or 16 bit per channel graphics card out there - anything
is possible I am sure.

--
Owamanga!
Anonymous

> >My desktop display properties indicate 32 bit 'color quality' with an
> >option for 16 bit. I'm not sure if this is the same terminology. My
> >older PC ran a lot slower in 32 bit mode as I recall.
>
> 32 bits per pixel. Split this into the three color components of Red,
> Green and Blue and you've got theoretical 10.6 bits per channel. In
> fact, most (if not all) are using 32 bits just to pad 24 actual bits

Actually, 32-bit mode gives you 24 bits of color (8 bits per channel),
and an additional 8 bits of alpha (transparency).

steve
Anonymous

Owamanga wrote:
[]
> Can you see the difference between 24 bit mode (8 per channel) and 32
> bit mode (10.5 per channel) on your graphics card ? I am sure if
> someone switched mine down to 24 bits one morning, I'd probably never
> even notice it had happened.

There is no difference in the colour displayed - in each case it's 8 bits
of red, 8 of green, and 8 of blue. The extra 8 bits are for alpha masks.
It can be faster to move 32-bit data to the card, not that you could
perceive the difference with today's PCs.

David
Anonymous

On Thu, 6 Jan 2005 12:10:29 -0700, "Steve Wolfe" <unt@codon.com>
wrote:

>> >My desktop display properties indicate 32 bit 'color quality' with an
>> >option for 16 bit. I'm not sure if this is the same terminology. My
>> >older PC ran a lot slower in 32 bit mode as I recall.
>>
>> 32 bits per pixel. Split this into the three color components of Red,
>> Green and Blue and you've got theoretical 10.6 bits per channel. In
>> fact, most (if not all) are using 32 bits just to pad 24 actual bits
>
> Actually, 32-bit mode gives you 24 bits of color (8 bits per channel),
>and an additional 8 bits of alpha (transparency).

Makes sense, but it is an alpha transparency to what ? - ie what's
underneath?

--
Owamanga!
Anonymous

Sounds Good ... but just what "Color Space" would that be?

"Steve Wolfe" <unt@codon.com> wrote in message
news:345groF44s2c5U1@individual.net...
> > are we moving? Things seem to be at 8-bit for pretty long now.
>
> I think that you would see a much greater improvement by moving to another
> color space instead of increasing the bits - the RGB color space doesn't
> even come close to covering the gamut that the human eye can see.
>
> steve
>
>
Anonymous

Owamanga <nomail@hotmail.com> writes:

>> Actually, 32-bit mode gives you 24 bits of color (8 bits per channel),
>>and an additional 8 bits of alpha (transparency).

>Makes sense, but it is an alpha transparency to what ? - ie what's
>underneath?

There are video cards that can composite the computer-generated image
onto a background image that comes from somewhere else (e.g. a video
source). In that case, the alpha channel determines the opacity of the
upper layer.

But mostly it's not used in PCs. It may still be advantageous to use 32
bit instead of 24 bit representation, wasting 1/4 of the memory, because
most processors can't address 24 bit packed data except by using byte
operations, while 32-bit access is faster.

Dave
Anonymous

"Siddhartha Jain" <losttoy2000@yahoo.co.uk> wrote:

>Digicams sensors have 8 bits per channel to record represent voltage
>per pixel. So luminescence is represented by a number between 0 and
>255.
>
>Is there something to gain by moving to 16-bit or 32-bit? If yes, when
>are we moving? Things seem to be at 8-bit for pretty long now.

Actually, it is 12 bits per channel for most current digitals.

Either marketing concerns, cost concerns, storage concerns, technical
problems, or any combination are keeping the current data at 12-bit.
Current sensors could easily warrant 16 bits, as their lower 12 bits at
ISO 100 would have the same quality as ISO 1600 currently has on the
same sensor. Maybe the problem is in the analog-to-digital converter.
Maybe converters that can do 16 bits are expensive or impractical, but
current sensors could certainly warrant it.
--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS@no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Anonymous

In message <sjgqt0to91um5gn55q0m8dqdio83oobkoj@4ax.com>,
Owamanga <nomail@hotmail.com> wrote:

>But, before you get too fret up, remember that your graphics card
>can't even display 11 bits per channel. So, if you want 32 bits per
>channel you'll never see the difference.

It's not about seeing it directly on the screen, raw. It's about
boosting shadows, compressing highlights globally while expanding their
local detail, etc, etc.
--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS@no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Anonymous

In message <3451heF43aes7U1@individual.net>,
"David J Taylor" <david-taylor@invalid.com> wrote:

>Siddhartha Jain wrote:
>[]
>> What I am trying to understand is that are there no significant
>> benefits in moving to a broader bus?
>
>"significant" is the operative word.
>
>Tests have show that the eye

exposure, and capturing more shadows, and more highlights, and *THEN*
manipulating curves and local contrast to make it all visible, even in
8-bit output.
--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS@no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Anonymous

In message <68lqt0504q2cehqsfdqinco1oph13jumfg@4ax.com>,
Owamanga <nomail@hotmail.com> wrote:

>Can you see the difference between 24 bit mode (8 per channel) and 32
>bit mode (10.5 per channel) on your graphics card ? I am sure if
>someone switched mine down to 24 bits one morning, I'd probably never

That depends on the card. If it is one of the cards that actually uses
10 bits per channel, you should be able to see a difference from 8 bits
per channel in smooth, noiseless, and undithered gradients. If your
card is only using 8 bits per channel in 32-bit mode, as most cards do,
there is nothing different at all in the display. The 8 extra bits are
usually for padding the data for faster access or, less frequently, for
a hardware alpha channel.
--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS@no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Anonymous

paul <paul@not.net> wrote:

>My desktop display properties indicate 32 bit 'color quality' with an
>option for 16 bit. I'm not sure if this is the same terminology. My
>older PC ran a lot slower in 32 bit mode as I recall.

That would depend on several factors; how the data was stored, how fast
the card/bus was, whether or not 16-bit mode was dithered, etc.

Generally speaking, 24-bit is slower than 32-bit, unless the data is fed
to the card in a format optimized for the storage method, in which case
32-bit is much faster.
--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS@no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Anonymous

In message <344vjqF45oib8U1@individual.net>,
"David J Taylor" <david-taylor@invalid.com> wrote:

>In practice, perhaps only value 0..2047 are converted,
>the remaining values in the RAW file representing the extra "headroom"
>which people mention.

For the Canon DSLRs (normal contrast JPEGs) and software, it seems that
a green RAW level of about 2000 (after blackpoint subtraction) maps to
255 in the 8-bit output, and the red and blue vary based on the color
temperature and tint used.
--

<>>< ><<> ><<> <>>< ><<> <>>< <>>< ><<>
John P Sheehy <JPS@no.komm>
><<> <>>< <>>< ><<> <>>< ><<> ><<> <>><
Anonymous

JPS@no.komm wrote:
> In message <3451heF43aes7U1@individual.net>,
> "David J Taylor" <david-taylor@invalid.com> wrote:
>
>> Siddhartha Jain wrote:
>> []
>>> What I am trying to understand is that are there no significant
>>> benefits in moving to a broader bus?
>>
>> "significant" is the operative word.
>>
>> Tests have show that the eye
>
> exposure, and capturing more shadows, and more highlights, and *THEN*
> manipulating curves and local contrast to make it all visible, even in
> 8-bit output.

The sensor works in a linear domain but the eye does not, so having the
greater number of bits allows that manipulation to be done with less
loss.
The eye is usually the final driving factor in what images need to
present, which then determines the final image requirements. The science
(and craft) is to present what the sensor sees to the eye in the most
pleasing way, and along that route manipulation may be required.

Cheers,
David
Anonymous

JPS@no.komm wrote:
> In message <sjgqt0to91um5gn55q0m8dqdio83oobkoj@4ax.com>,
> Owamanga <nomail@hotmail.com> wrote:

>>But, before you get too fret up, remember that your graphics card
>>can't even display 11 bits per channel. So, if you want 32 bits per
>>channel you'll never see the difference.

> It's not about seeing it directly on the screen, raw. It's about
> boosting shadows, compressing highlights globally while expanding their
> local detail, etc, etc.

Try this experiment -- you may have to turn off the lights:

> Make a solid black RGB000 doc.
>
> Make a small selection in the middle.
>
> Hide the marching ants.
>
> Switch to full-screen mode #2, the one with no menu and a black matte.
>
> Hide all the palettes.
>
> Press Command-M to open Curves. If you have dual displays, move the curves
> to that 2nd display.
>
> Target the 0,0 Curve point, then drag the curves dialog off the screen
> until only a tiny corner is showing (to avoid flare).
>
> Press the up arrow key once to change 0,0 to 0,1.
>
> Do you see a difference? Is the selection still neutral gray?
>
> Keep pressing the up arrow one press at a time.
>
> Do you see a difference between each press of the arrow?
>
> Is the difference between each press constant?
>
> Does it always result in a neutral gray?

(From Andrew Rodney of http://www.digitaldog.net/: credit due.)

What you'll probably see is that black linearity in the shadows for
most monitors and graphics cards is pretty poor, and could be enhanced
by finer gradations at the low end. With only 8-bit adjustment it's
impossible to linearize the system.
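A rough numerical version of the same observation, assuming an idealized gamma-2.2 display rather than any particular monitor:

```python
def display_luminance(code, gamma=2.2):
    """Relative light output of an idealized gamma-2.2 display for an
    8-bit code value (a simplification; real monitors vary)."""
    return (code / 255) ** gamma

# Light-output difference between adjacent code values near black:
steps = [display_luminance(c + 1) - display_luminance(c) for c in range(4)]
print(steps)
```

The steps grow with each code value, so the darkest tones get the coarsest control - which is why 8-bit curve moves near black behave so unevenly.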

Andrew.
Anonymous

Siddhartha Jain <losttoy2000@yahoo.co.uk> wrote:
> Steve Wolfe wrote:
>> > Digicams sensors have 8 bits per channel to record represent
>> > voltage per pixel. So luminescence is represented by a number
>> > between 0 and 255.
>> >
>> > Is there something to gain by moving to 16-bit or 32-bit? If yes,
>> > when are we moving? Things seem to be at 8-bit for pretty long
>> > now.

>> I think that you would see a much greater improvement by moving
>> to another color space instead of increasing the bits - the RGB
>> color space doesn't even come close to covering the gamut that the
>> human eye can see.

> Apart from RGB variants what other colour-spaces are feasible?

Lab, for instance. You can represent any colour with Lab. There are
quite a few others.

There are also some RGB spaces, such as Kodak ProPhoto RGB, whose
primaries are outside the spectral locus. By using such an encoding
it's possible to represent every colour the eye can see with a simple
triplet.

An alternative is to do what Kodak Picture CD does -- allow negative
RGB values. This also allows all colours to be represented.
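For concreteness, here is the standard sRGB-to-Lab conversion (D65 white point). The matrix and constants are the published sRGB/CIE ones, but treat this as an illustrative sketch of how an RGB triplet maps into Lab:

```python
def srgb_to_lab(r8, g8, b8):
    """Convert 8-bit sRGB to CIE Lab (D65 white), via linear RGB and XYZ."""
    def linearize(c8):
        c = c8 / 255
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    r, g, b = (linearize(c) for c in (r8, g8, b8))
    # sRGB primaries -> CIE XYZ (D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    def f(t):
        # CIE Lab transfer function, with its linear toe near zero
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29

    fx, fy, fz = f(x / 0.95047), f(y / 1.0), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)
```

White maps to roughly L=100 with a and b near zero; a and b range over both negative and positive values, which is how Lab covers colours that no physical set of RGB primaries can reach.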

Andrew.