Archived from groups: alt.comp.periphs.dcameras,rec.photo.digital,rec.photo.equipment.35mm,rec.photo.film+labs,rec.photo.darkroom (More info?)
Let's see if I catch some guru's attention with this subject.
I have recently bought a Canon S50, but any other camera would probably
raise the same question. It would probably be the same even with a film
camera plus lab processing.
I can't understand how the following works:
Case a.
I take a photo at night at ISO 50 without flash. In the photo I can see
the streetlights, which are white (RGB = 255,255,255), but everything
else is very dark.
Case b.
I take the same photo (same exposure time and lens aperture) at ISO 400
without flash. This time I can see everything. The streetlights are
still white at RGB = 255,255,255.
I cannot understand how the algorithm works, or even the physics behind
it: how can the ratio between the luminosity of the streetlight and the
luminosity of the house walls CHANGE depending on the ISO (50 vs. 400)??
One could say that the reason for this is clipping: at ISO 400 the
streetlights were brighter than RGB = 255,255,255 but have been clipped
to that value.
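The clipping hypothesis can be sketched numerically. The raw values and the gain factors below are invented purely for illustration; they only show that gain followed by clipping does change the lamp/wall ratio:

```python
# Hedged sketch of the clipping hypothesis: apply an ISO-proportional gain
# to a hypothetical raw sensor reading, then clip to the 8-bit ceiling.

def to_pixel(raw, gain, max_val=255):
    """Scale a raw sensor value by the gain and clip to max_val."""
    return min(round(raw * gain), max_val)

streetlight_raw = 100.0   # invented raw signal for the lamp
wall_raw = 2.0            # invented raw signal for a dark house wall

# ISO 50: gain chosen so the lamp just reaches 255
lamp_50, wall_50 = to_pixel(streetlight_raw, 2.55), to_pixel(wall_raw, 2.55)
# ISO 400: eight times the gain; the lamp clips, the wall becomes visible
lamp_400, wall_400 = to_pixel(streetlight_raw, 20.4), to_pixel(wall_raw, 20.4)

print(lamp_50, wall_50)    # 255 5  -> ratio 51:1
print(lamp_400, wall_400)  # 255 41 -> ratio ~6:1
```

Because the lamp has already hit the ceiling, only the wall gains from the extra amplification, so the apparent ratio between them shrinks.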
But I think this is NOT the reason, because if it were, how would one
explain the fact that EVERY picture I take in any dark environment
ALWAYS contains at least one pixel with one of the three components
(R, G or B) at 255? It seems as if there is an algorithm that
multiplies the data from the CCD until at least one pixel of the image
reaches the maximum value (255).
BUT THEN, if such a normalizing algorithm exists, the photos in cases
a and b should again show the same luminosity ratio between the
streetlights and the house walls, which is not what happens.
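The objection above can also be sketched numerically (again with invented raw values): a pure rescaling that stretches the brightest pixel to 255 preserves every ratio in the image, so normalization alone cannot explain the changed lamp/wall ratio.

```python
# Hedged sketch of the "normalize until one pixel hits 255" idea.
# Rescaling is linear, so the ratio between any two pixels is unchanged.

def normalize(pixels, max_val=255):
    """Scale the whole image so its brightest pixel reaches max_val."""
    scale = max_val / max(pixels)
    return [round(p * scale) for p in pixels]

raw_iso50 = [100.0, 2.0]    # invented raw values: lamp, wall
raw_iso400 = [800.0, 16.0]  # same scene with 8x the signal

print(normalize(raw_iso50))   # [255, 5]
print(normalize(raw_iso400))  # [255, 5] -- identical ratio
```

This is the puzzle: clipping would change the ratio but normalization would not, and the observed behavior seems to mix both.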
So what?
Thanks in advance.