What Comprises a "Pixel"?

August 30, 2005 1:07:02 PM

Archived from groups: rec.photo.digital

I know this sounds like a really basic question, but I never thought
about it before recently, when considering taking grayscale photos of
old prints for a digital archive.

As I understand it, my focal-plane sensor chip does not consist of
elements that individually respond to all "colors". Instead, there are
three distinct types of elements, each of which responds to a particular
color range and the intensity within that range. Is this correct?

If so, then is a 3-megapixel image made of 3 million triads? Or do the
spec-writers "cheat" and call each individual sensor element, though
incapable of capturing full-frequency information, a pixel? If that is
the case, then is my 3-megapixel camera really only providing
information for one million sites of combined color/intensity?

Finally, if this is the case, and I switch to grayscale capture mode,
does each of the three elements in a triad now capture independent
intensity information and provide me a 3X increase in spatial
resolution, giving me a "real" 3 megapixels in grayscale vs only a
"real" 1 megapixel in full color?

Martin

Anonymous
August 30, 2005 1:23:09 PM

> I know this sounds like a really basic question, but I never thought
> about it before recently, when considering taking grayscale photos of
> old prints for a digital archive.
>
> As I understand it, my focal-plane sensor chip does not consist of
> elements that individually respond to all "colors". Instead, there are
> three distinct types of elements, each of which responds to a particular
> color range and the intensity within that range. Is this correct?

Yes.

> If so, then is a 3-megapixel image made of 3 million triads? Or do the
> spec-writers "cheat" and call each individual sensor element, though
> incapable of capturing full-frequency information, a pixel? If that is
> the case, then is my 3-megapixel camera really only providing
> information for one million sites of combined color/intensity?

There are not 3 million triads; there are 3 million sensor elements (a
mix of R, G, and B). A technique known as Bayer interpolation
reconstructs a full 3 MP worth of color data. Of course it is not
perfect, and there are losses and artifacts. The Foveon sensor truly
captures R, G, and B at each pixel, and for a given number of 'sites'
it provides higher resolution than 'Bayer' or mosaic sensors. However,
few cameras use the Foveon sensor (Fuji Finepix SLR), and
unfortunately Foveon has fallen behind in the sensor race. It has been
said that a Foveon sensor of X MP gives as much resolution as a Bayer
sensor of 2X MP, though that is only a rough comparison.
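
To make that concrete, here is a minimal sketch of bilinear Bayer
interpolation in Python (assuming NumPy and SciPy are available; the
function name and layout are illustrative, and real cameras use much
smarter, edge-aware algorithms):

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(raw):
    """raw: 2-D array of sensor readings in an RGGB mosaic layout.
    Returns an (H, W, 3) RGB image."""
    h, w = raw.shape
    r, g, b = np.zeros((h, w)), np.zeros((h, w)), np.zeros((h, w))
    r[0::2, 0::2] = raw[0::2, 0::2]   # red-filtered sites
    g[0::2, 1::2] = raw[0::2, 1::2]   # green sites on red rows
    g[1::2, 0::2] = raw[1::2, 0::2]   # green sites on blue rows
    b[1::2, 1::2] = raw[1::2, 1::2]   # blue-filtered sites
    # Each kernel leaves measured sites untouched and fills every
    # missing site with the average of its nearest same-color sites.
    k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    return np.dstack([convolve(r, k_rb), convolve(g, k_g), convolve(b, k_rb)])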

> Finally, if this is the case, and I switch to grayscale capture mode,
> does each of the three elements in a triad now capture independent
> intensity information and provide me a 3X increase in spatial
> resolution, giving me a "real" 3 megapixels in grayscale vs only a
> "real" 1 megapixel in full color?

Grayscale capture is the same as color capture; it is just that the
camera's processor does the conversion instead of you. You do not get
higher resolution.
Anonymous
August 30, 2005 8:38:41 PM

You may find this page useful:

http://heim.ifi.uio.no/~gisle/photo/pixels.html

although your precise question is one which Gisle might want to add to that page.

Yes, the manufacturers "prefer" to use the larger number, both when
referring to the sensor and to the LCD on the back of the camera. So they
count individual RGB receptors, not (RGB) triplets. On a typical camera,
there will be four receptors per composite pixel, perhaps organised as
RGGB or RGB plus cyan.

David
Anonymous
August 30, 2005 9:18:48 PM

Martin wrote:

> I know this sounds like a really basic question, but I never thought
> about it before recently, when considering taking grayscale photos of
> old prints for a digital archive.
>
> As I understand it, my focal-plane sensor chip does not consist of
> elements that individually respond to all "colors". Instead, there are
> three distinct types of elements, each of which responds to particular
> color range and associated intensity for that range. Is this correct?

First off.. Pixel is a blend of two words

'Picture - Pix' and 'Element - el'.

The **actual** pixels your camera produces are binary representations
of data that was obtained by sampling light reflected from a real
world object.

Pixels have no shape, size or weight. They are just strings of ones
and zeros held as electric charges in your camera memory card, or computer
RAM, or as magnetic impressions on a spinning disk, or pits and valleys
on a CD.

I don't like the idea of saying a sensor has pixels.. I prefer
to call them sensor sites. The sensor sites produce the pixels.

I feel calling sensor sites pixels adds a level of confusion
to this digital imaging thing. It's bad enough calling
scanner sensor sites 'dots' and measuring them in dots
per inch.

Camera sensors are very complex.. There is more than one way to
create a pixel (the Bayer method or the Foveon method, for example).
You can look this up on the web. Google for 'Bayer sensor' and
you'll find explanations with diagrams that are much better than
anything you'll get in a newsgroup post.

Let's call sensor sites sensor sites.. Let's call what they *produce*
pixels :-)

> Finally, if this is the case, and I switch to grayscale capture mode,
> does each of the three elements in a triad now capture independent
> intensity information and provide me a 3X increase in spatial
> resolution, giving me a "real" 3 megapixels in grayscale vs only a
> "real" 1 megapixel in full color?

No.. You get the same color image from the sensor.. The firmware
within the camera removes the color information. You can do the
exact same thing if you take a color shot and remove the color
with photo editing software. When you select black and white on
your camera, the camera is just saving you that step.
Anonymous
August 30, 2005 9:18:49 PM

Jim Townsend wrote:
> Martin wrote:
>
>> I know this sounds like a really basic question, but I never thought
>> about it before recently, when considering taking grayscale photos of
>> old prints for a digital archive.
>>
>> As I understand it, my focal-plane sensor chip does not consist of
>> elements that individually respond to all "colors". Instead, there
>> are three distinct types of elements, each of which responds to
>> particular color range and associated intensity for that range. Is
>> this correct?
>
> First off.. Pixel is a blend of two words
>
> 'Picture - Pix' and 'Element - el'.
>
> The **actual** pixels your camera produces are binary representations
> of data that was obtained by sampling light reflected from a real
> world object.
>
> Pixels have no shape, size or weight. They are just strings of ones
> and zeros held as electric charges in your camera memory card, or
> computer RAM, or as magnetic impressions on a spinning disk, or pits
> and valleys on a CD.
>
> I don't like the idea of saying a sensor has pixels.. I prefer
> to call them sensor sites. The sensor sites produce the pixels.
>
> I feel calling sensor sites pixels adds a level of confusion
> to this digital imaging thing. It's bad enough calling
> scanner sensor sites 'dots' and measuring them in dots
> per inch.
>
> Camera sensors are very complex.. There is more than one way to
> create a pixel. (The bayer method or foveon method for example).
> You can look this up on the web. Google for 'bayer sensor'
> You'll see explanations with diagrams which make it much better
> than you'll get in a newsgroup post.
>
> Lets call sensor sites, sensor sites.. Lets call what they *produce*
> pixels :-)
>
>> Finally, if this is the case, and I switch to grayscale capture mode,
>> does each of the three elements in a triad now capture independent
>> intensity information and provide me a 3X increase in spatial
>> resolution, giving me a "real" 3 megapixels in grayscale vs only a
>> "real" 1 megapixel in full color?
>
> No.. You get the same color image from the sensor.. The firmware
> within the camera removes the color information. You can do the
> exact same thing if you take a color shot and remove the color
> with photo editing software. When you select black and white on
> your camera, the camera is just saving you that step.

Better yet - if you do the grayscale conversion from a color original,
you can choose from the individual color channels to get various
contrast effects similar to using colored filters with black and white
film.
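
To illustrate with a minimal Python/NumPy sketch (the filter weights
here are illustrative stand-ins, not calibrated values):

import numpy as np

# Standard BT.601 luma weights; green dominates because the eye is
# most sensitive to luminance detail there.
LUMA = np.array([0.299, 0.587, 0.114])

# Crude stand-in for a red filter on B&W film: emphasize red and
# suppress blue (darkens skies, lightens skin tones).
RED_FILTER = np.array([0.75, 0.25, 0.0])

def to_grayscale(rgb, weights=LUMA):
    """rgb: (H, W, 3) float array -> (H, W) grayscale image."""
    return rgb @ weights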
Anonymous
August 30, 2005 9:39:02 PM

"Martin" <funkychateauSPAM@yahoo.com> writes:

> I know this sounds like a really basic question, but I never thought
> about it before recently, when considering taking grayscale photos of
> old prints for a digital archive.
>
> As I understand it, my focal-plane sensor chip does not consist of
> elements that individually respond to all "colors". Instead, there are
> three distinct types of elements, each of which responds to particular
> color range and associated intensity for that range. Is this correct?

Probably. Nearly all modern digital cameras, and every single consumer
P&S digital camera I know of, use what you describe (it's called the
Bayer filter pattern).

> If so, then is a 3-megapixel image made of 3 million triads? Or do the
> spec-writers "cheat" and call each individual sensor element, though
> incapable of capturing full-frequency information, a pixel? If that is
> the case, then is my 3-megapixel camera really only providing
> information for one million sites of combined color/intensity?

Each individual sensor element is called a pixel.

So, sort-of. Turns out, though, that the eyeball works the same way,
and is much more sensitive to luminance detail than color detail. So
the results match very well with human vision.

> Finally, if this is the case, and I switch to grayscale capture mode,
> does each of the three elements in a triad now capture independent
> intensity information and provide me a 3X increase in spatial
> resolution, giving me a "real" 3 megapixels in grayscale vs only a
> "real" 1 megapixel in full color?

No. The filters on each pixel site are permanently emplaced (consider
the alignment issues of making them removable!), so you're getting
interpolated data at each site either way.

This is a dangerous topic (giving you the benefit of the doubt here,
since there's no reason to think you're trolling); there's a company
called Foveon that makes a sensor chip that *does* have three stacked
sensors, one for each color, at each site. They're used in the Sigma
Digital SLRs, and nowhere else. They have their *extreme* partisans;
that's what makes it a dangerous topic. Most but not all people who
have examined bunches of images from those cameras vs. Bayer pattern
cameras come to the conclusion that the Foveon X3 sensor currently has
enough drawbacks not to end up being superior.
--
David Dyer-Bennet, <mailto:dd-b@dd-b.net>, <http://www.dd-b.net/dd-b/>
RKBA: <http://noguns-nomoney.com/> <http://www.dd-b.net/carry/>
Pics: <http://dd-b.lighthunters.net/> <http://www.dd-b.net/dd-b/SnapshotAlbum/>
Dragaera/Steven Brust: <http://dragaera.info/> Much of which is still down
Anonymous
August 30, 2005 11:40:30 PM

"Martin" <funkychateauSPAM@yahoo.com> wrote in message
news:1125418022.430454.222680@g43g2000cwa.googlegroups.com...
> I know this sounds like a really basic question, but I never thought
> about it before recently, when considering taking grayscale photos of
> old prints for a digital archive.
>
> As I understand it, my focal-plane sensor chip does not consist of
> elements that individually respond to all "colors". Instead, there are
> three distinct types of elements, each of which responds to particular
> color range and associated intensity for that range. Is this correct?
>
> If so, then is a 3-megapixel image made of 3 million triads? Or do the
> spec-writers "cheat" and call each individual sensor element, though
> incapable of capturing full-frequency information, a pixel? If that is
> the case, then is my 3-megapixel camera really only providing
> information for one million sites of combined color/intensity?
>
> Finally, if this is the case, and I switch to grayscale capture mode,
> does each of the three elements in a triad now capture independent
> intensity information and provide me a 3X increase in spatial
> resolution, giving me a "real" 3 megapixels in grayscale vs only a
> "real" 1 megapixel in full color?
>
> Martin
>

While it's true that each pixel only responds to a particular color, it
still produces a single full color that isn't necessarily that pixel's
filter color. IOW, the green pixel doesn't necessarily produce green. For
instance, if a green pixel is active, it "looks" at the other pixels around
it to figure out what color it should be. If the red and blue pixels
surrounding it are active, then it knows it should be white. If only the
blue pixels are active, then it should produce a cyan color. If only the
red ones, then yellow. Similarly for the other pixels. Of course, the
pixels don't "know" anything; it's all done in software.
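
In practice that "logic" is just interpolation of the missing color
channels. For example, to fill in red at a green site, the software can
average the neighbouring red sites: if they read 100 and 120, the
interpolated red there is (100 + 120) / 2 = 110.
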
There was a good article on this at Lexar's site, but I can't find it
anymore. Someone else mentioned the Bayer sensor; googling that would
certainly turn up some informative sites.
Anonymous
August 31, 2005 12:21:34 AM

On 30 Aug 2005 17:39:02 -0500, David Dyer-Bennet <dd-b@dd-b.net>
wrote:

>"Martin" <funkychateauSPAM@yahoo.com> writes:
>
>> I know this sounds like a really basic question, but I never thought
>> about it before recently, when considering taking grayscale photos of
>> old prints for a digital archive.
>>
>> As I understand it, my focal-plane sensor chip does not consist of
>> elements that individually respond to all "colors". Instead, there are
>> three distinct types of elements, each of which responds to particular
>> color range and associated intensity for that range. Is this correct?
>
>Probably. Nearly all modern digital camera, and every single consumer
>P&S digital camera I know of, uses what you describe (it's called the
>Bayer filter pattern).

I wouldn't call the sensor elements "three distinct types of
elements"; rather, they are identical elements fed light through
filters of three colors: red, green, and blue.
The sensor elements are the same throughout. It's the filters (the
Bayer filter array) that allow these elements' outputs to be combined
into a color image.
>
>> If so, then is a 3-megapixel image made of 3 million triads? Or do the
>> spec-writers "cheat" and call each individual sensor element, though
>> incapable of capturing full-frequency information, a pixel? If that is
>> the case, then is my 3-megapixel camera really only providing
>> information for one million sites of combined color/intensity?
>
>Each individual sensor element is called a pixel.
>
>So, sort-of. Turns out, though, that the eyeball works the same way,
>and is much more sensitive to luminance detail than color detail. So
>the results match very well with human vision.
>
>> Finally, if this is the case, and I switch to grayscale capture mode,
>> does each of the three elements in a triad now capture independent
>> intensity information and provide me a 3X increase in spatial
>> resolution, giving me a "real" 3 megapixels in grayscale vs only a
>> "real" 1 megapixel in full color?
>
>No. The filters on each pixel site are permanently emplaced (consider
>the alignment issues of making them removable!), so you're getting
>interpolated data at each site either way.
>
>This is a dangerous topic (giving you the benefit of the doubt here,
>since there's no reason to think you're trolling); there's a company
>called Foveon that makes a sensor chip that *does* have three stacked
>sensors, one for each color, at each site. They're used in the Sigma
>Digital SLRs, and nowhere else. They have their *extreme* partisans;
>that's what makes it a dangerous topic. Most but not all people who
>have examined bunches of images from those cameras vs. Bayer pattern
>cameras come to the conclusion that the Foveon X3 sensor currently has
>enough drawbacks not to end up being superior.

--
Bill Funk
Replace "g" with "a"
funktionality.blogspot.com
Anonymous
August 31, 2005 2:10:16 AM

Bob Harrington wrote:


> Better yet - if you do the grayscale conversion from a color original,
> you can choose from the individual color channels to get various
> contrast effects similar to using colored filters with black and white
> film.

Yes.. Actually the newer cameras are incorporating this into
their firmware.. You can tailor the color channels to obtain
a better-looking B&W image.. The Canon 20D does this..

Of course, you can still do this to a JPEG after the fact
with photo-editing software.
Anonymous
August 31, 2005 11:27:47 AM

Correction: the Foveon sensor is in Sigma DSLRs, not Fuji.
Anonymous
August 31, 2005 12:20:17 PM

Basically, a pixel is a mathematical construct defining how the data is
arranged and stored. While monochrome cameras may have a physical
focal-plane array of detectors with each detector assigned to a "pixel",
this is not necessary. There are three ways to generate color images:
with three separate chips and mirrors (better digital video cameras work
this way), with a single chip and a mosaic of color filters, or with one
unique type of chip that has the detectors stacked vertically, each
layer responding to a different color.

In a single chip camera with filter array, exactly what happens in
greyscale mode is dependent on the mathematical formulas used in the
internal camera processing.

One reason the filter system works is that the eye gets its spatial
resolution from the black-and-white (luminance) information in the
image; the eye cannot detect color changes with the same acuity as
brightness changes.
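
(For reference, this is also why the standard video luma formula from
ITU-R BT.601 weights green most heavily: Y = 0.299 R + 0.587 G + 0.114 B.)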

Martin wrote:
> I know this sounds like a really basic question, but I never thought
> about it before recently, when considering taking grayscale photos of
> old prints for a digital archive.
>
> As I understand it, my focal-plane sensor chip does not consist of
> elements that individually respond to all "colors". Instead, there are
> three distinct types of elements, each of which responds to particular
> color range and associated intensity for that range. Is this correct?
>
> If so, then is a 3-megapixel image made of 3 million triads? Or do the
> spec-writers "cheat" and call each individual sensor element, though
> incapable of capturing full-frequency information, a pixel? If that is
> the case, then is my 3-megapixel camera really only providing
> information for one million sites of combined color/intensity?
>
> Finally, if this is the case, and I switch to grayscale capture mode,
> does each of the three elements in a triad now capture independent
> intensity information and provide me a 3X increase in spatial
> resolution, giving me a "real" 3 megapixels in grayscale vs only a
> "real" 1 megapixel in full color?
>
> Martin
>
Anonymous
August 31, 2005 1:48:18 PM

On Tue, 30 Aug 2005 17:18:48 -0500, Jim Townsend <not@real.address>
wrote:

>Martin wrote:
>
>> I know this sounds like a really basic question, but I never thought
>> about it before recently, when considering taking grayscale photos of
>> old prints for a digital archive.
snip
>I don't like the idea of saying a sensor has pixels.. I prefer
>to call them sensor sites. The sensor sites produce the pixels.

in some past discussions (Foveon dead horse :-)) some people preferred
to call them "sensels" IIRC ...
FWIW
Anonymous
August 31, 2005 1:48:19 PM

imbsysop wrote:

> On Tue, 30 Aug 2005 17:18:48 -0500, Jim Townsend <not@real.address>
> wrote:
>
>>Martin wrote:

>>I don't like the idea of saying a sensor has pixels.. I prefer
>>to call them sensor sites. The sensor sites produce the pixels.
>
> in some past discussions (Foveon dead horse :-)) some people prefered
> to call them "sensels" IIRC ...

Yep.. Sensels is good too.. It's still less confusing than pixels.
Anonymous
August 31, 2005 1:49:59 PM

On 30 Aug 2005 17:39:02 -0500, David Dyer-Bennet <dd-b@dd-b.net>
wrote:

snip
>This is a dangerous topic (giving you the benefit of the doubt here,
>since there's no reason to think you're trolling); there's a company
>called Foveon that makes a sensor chip that *does* have three stacked
>sensors, one for each color, at each site. ..

minor tech correction .. it is one sensor with 3 (silicon) layers .. :-)
Anonymous
September 1, 2005 1:17:06 AM

"Martin" <funkychateauSPAM@yahoo.com> wrote in message
news:1125418022.430454.222680@g43g2000cwa.googlegroups.com...
>I know this sounds like a really basic question, but I never thought
> about it before recently, when considering taking grayscale photos of
> old prints for a digital archive.
>
> As I understand it, my focal-plane sensor chip does not consist of
> elements that individually respond to all "colors". Instead, there are
> three distinct types of elements, each of which responds to particular
> color range and associated intensity for that range. Is this correct?
>
> If so, then is a 3-megapixel image made of 3 million triads? Or do the
> spec-writers "cheat" and call each individual sensor element, though
> incapable of capturing full-frequency information, a pixel?
<snip>
IMO the cameras usually have the sensitive sites in groups of four, with
two green-filtered sites for each pair of red and blue:

RGRGRGRG
GBGBGBGB
RGRGRGRG
GBGBGBGB

A 4 megapixel camera has 2 million green filtered sites, and one million
each of red and blue. The camera records the brightness of the image at 4
million sites (luminance) but has less information about color. If your
scene were all red, your grayscale file would show no (or very low)
brightness in three fourths of the sites. Twice as many green are used
because it is in the center of the spectrum, and contains most of the
luminance information of most scenes.
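
A quick Python sketch to verify those counts (assuming an even-sized
RGGB mosaic; the function is purely illustrative):

import numpy as np

def bayer_counts(height, width):
    """Count the filter colors in an RGGB mosaic of height x width sites."""
    pattern = np.tile(np.array([['R', 'G'], ['G', 'B']]), (height // 2, width // 2))
    return {c: int((pattern == c).sum()) for c in 'RGB'}

print(bayer_counts(2000, 2000))   # a 4-megapixel sensor, 2000 x 2000 sites
# -> {'R': 1000000, 'G': 2000000, 'B': 1000000}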

High-quality camcorders use three separate sensors, each with its own
filter, and thus have somewhat higher resolution than camcorders that use
only one sensor (with the same number of pixels as each of the three in
the higher-quality ones) with filters on the individual sites. They don't
have three times as much, though, because real scenes have different
colors in different areas.

This works most of the time because the eye-brain system has much higher
resolution for brightness than for color. The same trick is used in
television. When I was watching Venus Williams play tennis yesterday, her
costume looked white when the view took in the whole court, but proved to
be a pale pink in close-ups.

This is just one man's understanding.
--
Gerry
http://www.pbase.com/gfoley9999/
http://www.wilowud.net/
http://home.columbus.rr.com/gfoley
http://www.fortunecity.com/victorian/pollock/263/egypt/...
Anonymous
September 1, 2005 10:55:50 AM

On Tue, 30 Aug 2005 17:18:48 -0500, Jim Townsend <not@real.address>
wrote:

>Martin wrote:
>
>> I know this sounds like a really basic question, but I never thought
>> about it before recently, when considering taking grayscale photos of
>> old prints for a digital archive.
>>
>> As I understand it, my focal-plane sensor chip does not consist of
>> elements that individually respond to all "colors". Instead, there are
>> three distinct types of elements, each of which responds to particular
>> color range and associated intensity for that range. Is this correct?
>
>First off.. Pixel is a blend of two words
>
>'Picture - Pix' and 'Element - el'.
>
>The **actual** pixels your camera produces are binary representations
>of data that was obtained by sampling light reflected from a real
>world object.
>
>Pixels have no shape, size or weight. They are just strings of ones
>and zeros held as electric charges in your camera memory card, or computer
>RAM, or as magnetic impressions on a spinning disk, or pits and valleys
>on a CD.
>
>I don't like the idea of saying a sensor has pixels.. I prefer
>to call them sensor sites. The sensor sites produce the pixels.

So ditch the middleman -- no one else in this ng has used your
terminology.

>
>I feel calling sensor sites pixels adds a level of confusion
>to this digital imaging thing. It's bad enough calling
>scanner sensor sites 'dots' and measuring them in dots
>per inch.
>
>Camera sensors are very complex.. There is more than one way to
>create a pixel. (The bayer method or foveon method for example).
>You can look this up on the web. Google for 'bayer sensor'
>You'll see explanations with diagrams which make it much better
>than you'll get in a newsgroup post.
>
>Lets call sensor sites, sensor sites.. Lets call what they *produce*
>pixels :-)

Useless.

>> Finally, if this is the case, and I switch to grayscale capture mode,
>> does each of the three elements in a triad now capture independent
>> intensity information and provide me a 3X increase in spatial
>> resolution, giving me a "real" 3 megapixels in grayscale vs only a
>> "real" 1 megapixel in full color?
>
>No.. You get the same color image from the sensor.. The firmware
>within the camera removes the color information. You can do the
>exact same thing if you take a color shot and remove the color
>with photo editing software. When you select black and white on
>your camera, the camera is just saving you that step.
>
>
Anonymous
September 1, 2005 12:56:29 PM

"Jim Townsend" <not@real.address> wrote in message
news:11h9mqeccf2he71@news.supernews.com...
> Martin wrote:
>
> > I know this sounds like a really basic question, but I never thought
> > about it before recently, when considering taking grayscale photos of
> > old prints for a digital archive.
> >
> > As I understand it, my focal-plane sensor chip does not consist of
> > elements that individually respond to all "colors". Instead, there are
> > three distinct types of elements, each of which responds to particular
> > color range and associated intensity for that range. Is this correct?
>
> First off.. Pixel is a blend of two words
>
> 'Picture - Pix' and 'Element - el'.
>
> The **actual** pixels your camera produces are binary representations
> of data that was obtained by sampling light reflected from a real
> world object.
>
> Pixels have no shape, size or weight. They are just strings of ones
> and zeros held as electric charges in your camera memory card, or computer
> RAM, or as magnetic impressions on a spinning disk, or pits and valleys
> on a CD.
>
> I don't like the idea of saying a sensor has pixels.. I prefer
> to call them sensor sites. The sensor sites produce the pixels.
>
> I feel calling sensor sites pixels adds a level of confusion
> to this digital imaging thing. It's bad enough calling
> scanner sensor sites 'dots' and measuring them in dots
> per inch.

I don't like the idea of calling them "sensor sites" because this name
simply adds to the already-confusing terminology. After all, these putative
"sensor sites" don't really sense anything by themselves; instead, they are
simply well-defined regions of lightly-doped silicon which don't become true
sensors until the addition of a goodly amount of supporting circuitry,
microlenses, etc. Therefore, I propose a new name for these proto-sensing
sites: QUantum Illuminated Bit-ELementS, or quib-els, for short. Sites on
small CCDs or CMOS imagers (APS size or smaller) may be properly referred
to as "minor quib-els", sites on large sensors are termed "major quib-els",
and the proper name for discussions about both large and small varieties
should be designated "quib-elling."

And leave "dots" the way it is...


Hope this helps. Right.
Anonymous
September 2, 2005 1:10:19 PM

Paul H. wrote:
>
> I don't like the idea of calling them "sensor sites" because this name
> simply adds to the already-confusing terminology. After all, these putative
> "sensor sites" don't really sense anything by themselves; instead, they are
> simply well-defined regions of lightly-doped silicon which don't become true
> sensors until the addition of a goodly amount of supporting circuitry,
> microlenses, etc. Therefore, I propose a new name for these proto-sensing
> sites: QUantum Illuminated Bit-ELementS, or quib-els, for short. Sites on
> small CCDs or CMOS imagers (APS size or smaller) may be properly referred
> to as "minor quib-els", sites on large sensors are termed "major quib-els",
> and the proper name for discussions about both large and small varieties
> should be designated "quib-elling."
>


There already is a well-used term, in use for decades. Each site is a
"detector". The first electronic cameras were single-detector, with some
sort of built-in scanning (mirrors or other means). Then, in the
seventies, folks began to make arrays of detectors, fabricating many
detectors on the same chip.
Anonymous
September 2, 2005 1:10:20 PM

"Don Stauffer" <stauffer@usfamily.net> wrote in message
news:g3ZRe.6$1u4.2056@news.uswest.net...
> Paul H. wrote:
> >
> > I don't like the idea of calling them "sensor sites" because this name
> > simply adds to the already-confusing terminology. After all, these putative
> > "sensor sites" don't really sense anything by themselves; instead, they are
> > simply well-defined regions of lightly-doped silicon which don't become true
> > sensors until the addition of a goodly amount of supporting circuitry,
> > microlenses, etc. Therefore, I propose a new name for these proto-sensing
> > sites: QUantum Illuminated Bit-ELementS, or quib-els, for short. Sites on
> > small CCDs or CMOS imagers (APS size or smaller) may be properly referred
> > to as "minor quib-els", sites on large sensors are termed "major quib-els",
> > and the proper name for discussions about both large and small varieties
> > should be designated "quib-elling."
> >
>
>
> There already is a well-used term, in use for decades. Each site is a
> "detector". First electronic cameras were single detector, with some
> sort of built in scanning, mirrors or other ways. Then, in seventies
> folks began to make arrays of detectors, fabricating many detectors on
> the same chip.

The unnecessary elucidation is appreciated, but I know what a detector is.
But thanks for making a cameo appearance on "Quibbling for Dollars"! You've
made my day and my point.
Anonymous
September 2, 2005 1:10:27 PM

Martin <funkychateauSPAM@yahoo.com> wrote:

> Finally, if this is the case, and I switch to grayscale capture mode,
> does each of the three elements in a triad now capture independent
> intensity information and provide me a 3X increase in spatial
> resolution, giving me a "real" 3 megapixels in grayscale vs only a
> "real" 1 megapixel in full color?

No. The camera takes exactly the same picture it did in color mode, and
then converts the image to grayscale. The color filters which others
have mentioned in this thread are still in place.

In general, you probably don't want to use grayscale mode in your
camera; converting later in Photoshop or a similar application will give
you much better control over the conversion process. There are numerous
techniques for obtaining the best grayscale image, and the best technique
often depends on the image itself as well as the objectives and
preferences of the photographer.
September 9, 2005 12:18:00 AM

In article <g3ZRe.6$1u4.2056@news.uswest.net>, Don Stauffer <stauffer@usfamily.net> wrote:
>Paul H. wrote:
>>
>> I don't like the idea of calling them "sensor sites" because this name
>> simply adds to the already-confusing terminology. After all, these putative
>> "sensor sites" don't really sense anything by themselves; instead, they are
>> simply well-defined regions of lightly-doped silicon which don't become true
>> sensors until the addition of a goodly amount of supporting circuitry,
>> microlenses, etc. Therefore, I propose a new name for these proto-sensing
>> sites: QUantum Illuminated Bit-ELementS, or quib-els, for short. Sites on
>> small CCDs or CMOS imagers (APS size or smaller) may be properly referred
>> to as "minor quib-els", sites on large sensors are termed "major quib-els",
>> and the proper name for discussions about both large and small varieties
>> should be designated "quib-elling."
>>
>


I never had a problem with "pixel". I consider it a square piece of the
puzzle. However, the dot, for short, comprises data for intensity and
position. Also, the dot can be detected in different ways. In a
multiple-photodiode camera, each sensor is smaller than the actual size
of the dot it represents, and is also a separate sensor. Don't lay down
laws that are not universal. There is missing information along the
edges; it just does not get sampled or seen. I think when a CCD scans,
it also probably has missed areas, and the pixel just represents an
average intensity within the designated sample site.

greg



>
>There already is a well-used term, in use for decades. Each site is a
>"detector". First electronic cameras were single detector, with some
>sort of built in scanning, mirrors or other ways. Then, in seventies
>folks began to make arrays of detectors, fabricating many detectors on
>the same chip.