
Why is high resolution so desirable?

Anonymous
June 15, 2005 12:19:55 PM

Archived from groups: comp.sys.ibm.pc.hardware.video

I keep on seeing posts with comments that imply that such-and-such
(monitor, card, whatever) is better because it can display at a higher
resolution, etc.

I can accept that there are situations where the higher resolutions,
which result in smaller content on the screen, are an advantage in that
one is then able to fit more on the screen, and this is definitely
useful at times (but this has nothing to do with quality). But other
than that, what is the big deal about higher resolutions?

This leads me to wonder about the following: is there any difference
between viewing an image/DVD at a resolution of a x b, and viewing the
same image at a higher resolution and magnifying it using the
application's zoom software so that the size is now the same as that
under a x b? I have been unable to see any obvious difference, but then
again I haven't been able to do side-by-side tests (single monitor).

Thanks for any response.
Anonymous
June 15, 2005 2:33:03 PM

Wow, that is a lot of information, Bob, and, to tell you the truth, I
will have to re-read it slowly if I'm going to digest all the details.
I will do that, at my pace.

Let me say here that I almost included the word "misnomer" in my
original post, with reference to the term "resolution", but I refrained
from doing so because in a way we ARE talking about resolution. i.e. we
are changing the number of pixels within fixed screen dimensions,
hence we are changing resolution. I am questioning the benefits of doing
so.

You make reference to what the eye can resolve, under some conditions.
What I fail to see (sorry) is this: taking a specific image, say a 5cm
square on a given monitor at a specific resolution setting, what is
the benefit of displaying that image at a higher resolution if the
result is going to be smaller? Are we going to be able to discern more
detail? This is not the same as a photographic lens being capable of
greater resolution, because the benefits of this higher resolution
would show up in larger prints, otherwise there is no point. If you
are going to make small prints you don't need a lens that can resolve
minute detail. Yes, the hardware is providing us with more pixels per
cm/inch, but the IMAGE is not being displayed using more pixels. Not
only that, but the image is now smaller, possibly to the point of being
too small to view comfortably.

I can't help but suspect that everybody is chasing "higher resolution"
without knowing why they are doing so.

Thanks for your response.

Bill
Anonymous
June 15, 2005 3:47:20 PM

Many gamers usually strive to
run at 1600x1200 because it creates a cleaner edge around objects
without resorting to anti-aliasing.

Not being a gamer, I'd have no appreciation for this, but fair enough.
Anonymous
June 15, 2005 4:40:58 PM

I think the more mainstream way you're seeing it involved higher
resolutions that involve a cleaner edge. Many gamers usually strive to
run at 1600x1200 because it creates a cleaner edge around objects
without resorting to anti-aliasing.

--
Cory "Shinnokxz" Hansen - http://www.coryhansen.com
Life is journey, not a destination. So stop running.
Anonymous
June 15, 2005 8:29:19 PM

"bxf" <bill@topman.net> wrote in message
news:1118848795.170276.266490@f14g2000cwb.googlegroups.com...
> I keep on seeing posts with comments that imply that such-and-such
> (monitor, card, whatever) is better because it can display at a higher
> resolution, etc.
>
> I can accept that there are situations where the higher resolutions,
> which result in smaller content on the screen, are an advantage in that
> one is then able to fit more on the screen, and this is definitely
> useful at times (but this has nothing to do with quality). But other
> than that, what is the big deal about higher resolutions?

You've hit on a very good point, but to cover it adequately I'm
first going to have to (once again) clarify exactly what we mean
by the often-misused word "resolution."

In the proper usage of the word (and, by the way, how you
most often see it used with respect to such things as printers
and scanners), "resolution" is that spec which tells you how
much detail you can resolve per unit distance - in other
words, if we're really talking about "resolution," you should
be seeing numbers like "dots per inch" or "pixels per visual
degree" or some such. Simply having more pixels is not always
a good thing - you have to first be able to actually resolve them
on the display in question (not generally a problem for fixed-
format displays such as LCDs, if run in their native mode) AND
you need to be able to resolve them visually. That last bit
means that the number of pixels you really need depends on
how big the display (or more correctly, the image itself) will
be, and how far away you'll be when you're viewing it.

The human eye can resolve up to about 50 or 60 cycles
per visual degree - meaning for each degree of angular
distance as measured from the viewing point, you can't
distinguish more than about 100-120 pixels (assuming
those pixels are being used to present parallel black-and
-white lines, which would make for 50-60 line pairs or
"cycles"). Actually, human vision isn't quite this good
under many circumstances (and is definitely not this good
in terms of color, as opposed to just black-and-white
details), but assuming that you can see details down to
a level of about one cycle per minute of angle is often used
as a rule-of-thumb limit.

This says that to see how much resolution you need, and
therefore how many pixels in the image, you figure the
display size, what visual angle that appears to be within
the visual field at the desired distance, and apply this
limit. Let's say you have a 27" TV that you're watching
from 8 feet away. A 27" TV presents an image that's
about 15.5" tall, and if you're 8 feet (96 inches) away,
then the visual angle this represents is:

2 x inv. tan (7.75/96) = 9.2 degrees

At the 60 cycles/degree limit, you can therefore visually
resolve not more than about 550 line pairs, or roughly 1100
pixels. Anything more than this would be wasted, and
even this, again, should be viewed as an upper limit -
your "color resolution" (the spatial acuity of the eye in
terms of color differences) is nowhere near this good.
In terms of pixel formats, then, an image using
the standard 1280 x 1024 format would be just about as
good as you'd ever need to be at this size and distance.
Note that a 15.5" image height is also what you get from
roughly a 32" 16:9 screen, so the HDTV standard
1920 x 1080 format is just about ideal for that size and
distance (and an 8' distance may be a little close for
a lot of TV viewing).
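
As a quick back-of-the-envelope check, here is the same arithmetic in a
few lines of Python (the helper name and the 60 cycles/degree default
are just for this sketch, not from any library):

import math

def max_resolvable_pixels(image_height_in, distance_in, cycles_per_degree=60):
    # Visual angle subtended by the image height, in degrees
    angle = 2 * math.degrees(math.atan((image_height_in / 2) / distance_in))
    # Two pixels per cycle (one black line plus one white line)
    return angle, 2 * angle * cycles_per_degree

angle, pixels = max_resolvable_pixels(15.5, 96)   # 27" TV at 8 feet
print(round(angle, 1), round(pixels))             # ~9.2 degrees, ~1108 resolvable pixels

Plug in your own screen height and viewing distance to see where the
acuity limit falls for your setup.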

However, this again is the absolute upper limit imposed by
vision. A more reasonable, practical goal, in terms of
creating an image that appears to be "high resolution" (and
beyond which we start to see diminishing returns in terms of
added pixels) is about half the 60 cycles/degree figure, or
somewhere around 30. This means that for the above-mentioned
27" TV at 8', the standard 480- or 576-line TV formats,
IF fully resolved (which many TV sets do not do), are actually
pretty good matches to the "practical" goal, and the higher-
resolution HDTV formats probably don't make a lot of
sense until you're dealing with larger screens.

At typical desktop monitor sizes and distances, of course,
you can resolve a much greater number of pixels; from perhaps
2' or so from the screen, you might want up to about 300
pixels per inch before you'd say that you really couldn't use
any more. That's comfortably beyond the capability of most
current displays (which are right around 100-120 ppi), but
again, this is the absolute upper limit. Shooting for around
150-200 ppi is probably a very reasonable goal in terms of
how much resolution we could actually use in practice on
most desktop displays. More than this, and it simply won't
be worth the cost and complexity of adding the extra pixels.


> This leads me to wonder about the following: is there any difference
> between viewing an image/DVD at a resolution of a x b, and viewing the
> same image at a higher resolution and magnifying it using the
> application's zoom software so that the size is now the same as that
> under a x b?

No, no difference. In terms of resolution (in the proper sense
per the above, pixels per inch) the two are absolutely identical.

Bob M.
Anonymous
June 16, 2005 8:49:47 AM

bxf wrote:

> Wow, that is a lot of information, Bob, and, to tell you the truth, I
> will have to re-read it slowly if I'm going to digest all the details.
> I will do that, at my pace.
>
> Let me say here that I almost included the word "misnomer" in my
> original post, with reference to the term "resolution", but I refrained
> from doing so because in a way we ARE talking about resolution. i.e. we
> are changing the number of pixels within fixed screen dimensions,
> hence we are changing resolution. I am questioning the benefits of doing
> so.
>
> You make reference to what the eye can resolve, under some conditions.
> What I fail to see (sorry) is this: taking a specific image, say a 5cm
> square on a given monitor at a specific resolution setting, what is
> the benefit of displaying that image at a higher resolution if the
> result is going to be smaller?

One adjusts other settings so that feature size is the same. This can cause
other difficulties however.

> Are we going to be able to discern more
> detail? This is not the same as a photographic lens being capable of
> greater resolution, because the benefits of this higher resolution
> would show up in larger prints, otherwise there is no point. If you
> are going to make small prints you don't need a lens that can resolve
> minute detail. Yes, the hardware is providing us with more pixels per
> cm/inch, but the IMAGE is not being displayed using more pixels. Not
> only that, but the image is now smaller, possibly to the point of being
> too small to view comfortably.
>
> I can't help but suspect that everybody is chasing "higher resolution"
> without knowing why they are doing so.
>
> Thanks for your response.
>
> Bill

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
June 16, 2005 9:46:22 AM

J. Clarke wrote:
> bxf wrote:
>
> >
> > You make reference to what the eye can resolve, under some conditions.
> > What I fail to see (sorry) is this: taking a specific image, say a 5cm
> > square on a given monitor at a specific resolution setting, what is
> > the benefit of displaying that image at a higher resolution if the
> > result is going to be smaller?
>
> One adjusts other settings so that feature size is the same. This can cause
> other difficulties however.

Hence my last question in the original post: what is the difference
between an image of a certain viewing size (dictated by the monitor
resolution), and the same image, viewed under higher resolution
settings and therefore a smaller image on the screen, all other things
being equal), but magnified by the application (or "other settings", as
you put it)?

Simplistically, this is how I see the situation: we have an image of A
x B pixels. If we view it under monitor resolution settings of say, 800
x 600, we will see an image of a certain size, which depends on the
monitor in use. If we change the resolution to 1600 x 1200, we are
halving the size of each monitor pixel, and the image will be half the
size that it was at 800 x 600. If we now tell the application to double
the size of the image, the application must interpolate, so that each
pixel in the original image will now be represented by four monitor
pixels. This would not result in increased image quality, and it
requires that the application do some CPU work which it didn't have
to do when the monitor was at the lower resolution setting.

So the question becomes one of comparing the quality obtained with
large monitor pixels vs the quality when using smaller pixels plus
interpolation. And, we can throw in the fact that, by having it
interpolate, we are forcing the CPU to do more work.
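
To make the "four monitor pixels per image pixel" case concrete, here is
a minimal sketch of nearest-neighbour magnification in Python/NumPy, the
simplest form of that interpolation, and one that clearly adds no new
detail (the function name is arbitrary):

import numpy as np

def upscale_nearest(img, factor=2):
    # Repeat every source pixel into a factor x factor block
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

src = np.array([[10, 20],
                [30, 40]])
print(upscale_nearest(src))
# [[10 10 20 20]
#  [10 10 20 20]
#  [30 30 40 40]
#  [30 30 40 40]]

Smarter filters such as bilinear or bicubic smooth the result, but they
likewise add no information that wasn't in the original pixels.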

Any thoughts on this? Am I failing to take something into
consideration?
Anonymous
June 16, 2005 12:56:46 PM

J. Clarke wrote:
> bxf wrote:
>

> Not magnified. Font size, icon size, etc adjusted at the system level, so
> things are the same size but sharper.

How do these apply if I'm viewing an image with a graphics program or
watching a DVD?
Anonymous
June 16, 2005 2:52:18 PM

Bob Myers wrote:
> "bxf" <bill@topman.net> wrote in message
> news:1118925982.351383.175100@g49g2000cwa.googlegroups.com...
> > Hence my last question in the original post: what is the difference
> > between an image of a certain viewing size (dictated by the monitor
> > resolution), and the same image, viewed under higher resolution
> > settings and therefore a smaller image on the screen, all other things
> > being equal), but magnified by the application (or "other settings", as
> > you put it)?
>
> Here we again run into confusion problems between "resolution"
> as is commonly used here and the term in its technically proper
> sense - but the bottom line is that a given object rendered at
> a specific resolution (in terms of PPI) looks the same no matter
> how many pixels are in the complete image (i.e., the full screen)
> or how large that screen is. In other words, if you have an image
> of, say, an apple appearing on your display, and that apple appears
> 3" tall and at 100 ppi resolution (meaning the the apple itself is
> about 300 pixels tall), nothing else matters.

It's funny. Although I understand what you say and can clearly see its
obvious validity, I still find myself failing to understand how it
relates to the following:

If I have an image obtained from a digital camera, for example, that
image consists of a fixed number of pixels. If I want to see that image
on my screen at some convenient size, I can accomplish that in two
ways: I can set my monitor's "resolution" to values which more-or-less
yield that convenient size, or I can tell the application to manipulate
the image so that it is now displayed at this size.

If I use the latter technique, the application must discard pixels if
it is to make the image smaller, or, using some form of interpolation,
add pixels in order to make it larger. In either case there is image
degradation (never mind whether or not we can discern that
degradation), and hence my attempt to understand how this degradation
compares to the effect of viewing an unmanipulated image using larger
monitor pixels.

To tell you the truth, I can't help but feel that I'm confusing issues
here, but I don't know what they are. For example, I cannot imagine any
video player interpolating stuff on the fly in order to obey a given
zoom request.

Where is my thinking going off?
Anonymous
June 16, 2005 3:12:29 PM

bxf wrote:

>
> J. Clarke wrote:
>> bxf wrote:
>>
>> >
>> > You make reference to what the eye can resolve, under some conditions.
>> > What I fail to see (sorry) is this: taking a specific image, say a 5cm
>> > square on a given monitor at a specific resolution setting, what is
>> > the benefit of displaying that image at a higher resolution if the
>> > result is going to be smaller?
>>
>> One adjusts other settings so that feature size is the same. This can
>> cause other difficulties however.
>
> Hence my last question in the original post: what is the difference
> between an image of a certain viewing size (dictated by the monitor
> resolution), and the same image, viewed under higher resolution
> settings and therefore a smaller image on the screen, all other things
> being equal), but magnified by the application (or "other settings", as
> you put it)?

Not magnified. Font size, icon size, etc adjusted at the system level, so
things are the same size but sharper.

> Simplistically, this is how I see the situation: we have an image of A
> x B pixels. If we view it under monitor resolution settings of say, 800
> x 600, we will see an image of a certain size, which depends on the
> monitor in use. If we change the resolution to 1600 x 1200, we are
> halving the size of each monitor pixel, and the image will be half the
> size that it was at 800 x 600. If we now tell the application to double
> the size of the image, the application must interpolate, so that each
> pixel in the original image will now be represented by four monitor
> pixels. This would not result in increased image quality, and it
> requires that the application do some CPU work which it didn't have
> to do when the monitor was at the lower resolution setting.
>
> So the question becomes one of comparing the quality obtained with
> large monitor pixels vs the quality when using smaller pixels plus
> interpolation. And, we can throw in the fact that, by having it
> interpolate, we are forcing the CPU to do more work.
>
> Any thoughts on this? Am I failing to take something into
> consideration?

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
June 16, 2005 3:44:51 PM

As far as monitors are concerned -as long as the display quality is
there- then for viewing digital photos, I would say the higher
resolution the better. You are displaying more pixels per inch when the
desktop resolution is increased, which your eyes/brain can easily
resolve.

Just consider that a monitor might display an average of 72 dpi, whereas
the guidelines for printing a 4"x6" digital photo recommend a dpi
value of around 300.
Anonymous
June 16, 2005 3:57:09 PM

Terence wrote:
> As far as monitors are concerned -as long as the display quality is
> there- then for viewing digital photos, I would say the higher
> resolution the better. You are displaying more pixels per inch when the
> desktop resolution is increased, which your eyes/brain can easily
> resolve.

You are confusing issues, Terence. The more pixels you have in an
image, the better quality you can obtain when printing the image. But
this has nothing to do with monitor "resolution", where the image gets
smaller as you increase the "resolution" value.
Anonymous
June 16, 2005 4:46:55 PM

The point I was trying to make was that you can view a larger portion
of an image (viewed at full size) when increasing monitor resolution -
which is usually more desirable when performing image-editing work. I
do understand that whatever the desktop resolution may be set at has
nothing to do with the original quality of the image.
Anonymous
June 16, 2005 4:52:36 PM

chrisv wrote:
> bxf wrote:
>
> >Many gamers usually strive to
> >run at 1600x1200 because it creates a cleaner edge around objects
> >without resorting to anti-aliasing.
> >
> >Not being a gamer, I'd have no appreciation for this, but fair enough.
>
> No offense, but you really need to learn how to quote properly. Your
> last post should have looked something like the below:
>
>
>
> Shinnokxz wrote:
>
> >I think the more mainstream way you're seeing it involved higher
> >resolutions that involve a cleaner edge. Many gamers usually strive to
> >run at 1600x1200 because it creates a cleaner edge around objects
> >without resorting to anti-aliasing.
>
> Not being a gamer, I'd have no appreciation for this, but fair enough.

So How's this? In fact, I was just told (after enquiring) how this is
done in another post.

Cheers.
Anonymous
June 16, 2005 4:55:59 PM

Terence wrote:
> The point I was trying to make was that you can view a larger portion
> of an image (viewed at full size) when increasing monitor resolution -
> which is usually more desirable when performing image-editing work. I
> do understand that whatever the desktop resolution may be set at has
> nothing to do with the original quality of the image.

OK, I understand you better now. In fact I make this point in my
original post. Of course, this does not preclude the need to magnify
the image sometimes, simply so that the areas one wants to work on are
not so small that they become impossible to edit with any precision.
Anonymous
June 16, 2005 6:41:53 PM

bxf wrote:

>Many gamers usually strive to
>run at 1600x1200 because it creates a cleaner edge around objects
>without resorting to anti-aliasing.
>
>Not being a gamer, I'd have no appreciation for this, but fair enough.

No offense, but you really need to learn how to quote properly. Your
last post should have looked something like the below:



Shinnokxz wrote:

>I think the more mainstream way you're seeing it involved higher
>resolutions that involve a cleaner edge. Many gamers usually strive to
>run at 1600x1200 because it creates a cleaner edge around objects
>without resorting to anti-aliasing.

Not being a gamer, I'd have no appreciation for this, but fair enough.
Anonymous
June 16, 2005 6:56:51 PM

bxf wrote:

>chrisv wrote:
>>
>> No offense, but you really need to learn how to quote properly. Your
>> last post should have looked something like the below:
>
>So How's this? In fact, I was just told (after enquiring) how this is
>done in another post.
>
>Cheers.

Much better. 8)
Anonymous
June 16, 2005 7:21:33 PM

....
> How do these apply if I'm viewing an image with a graphics program or
> watching a DVD?
>

As long as the number of pixels in the source image is less than the number
being displayed, increased resolution doesn't buy you anything when viewing
the image. If the number of pixels in the source image is greater than the
current display setting, then a higher display "resolution" will improve the
picture because more of the source pixels can be represented.

For example:
Your screen is set at 800x600 = 480,000 pixels = 0.48MegaPixels.
You have a digital camera that takes a 2MegaPixel picture = 1600x1200.
You will only be able to see about 1/2 of the detail in the picture if you
display the picture full screen. However, your buddy has a "high resolution"
monitor capable of 1600x1200 pixels. When he views the picture full
screen, he will see it in all its glory. }:)  Now given that 4, 5, 6, and even
8 MP cameras are common today, you can see why higher resolutions
can be convenient for displaying and working with digital images.
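
In rough numbers - a small sketch, assuming "full screen" means scaling
the photo to fit the desktop (the helper below is just for illustration):

def fit_scale(src_w, src_h, disp_w, disp_h):
    # Scale factor needed to fit the whole source image on screen;
    # values below 1 mean the image must be downsampled to fit.
    return min(disp_w / src_w, disp_h / src_h)

for disp in [(800, 600), (1600, 1200)]:
    s = fit_scale(1600, 1200, *disp)
    print(disp, round(s, 2), f"{min(s, 1) ** 2:.0%} of source pixels representable")
# (800, 600) 0.5 25% of source pixels representable
# (1600, 1200) 1.0 100% of source pixels representable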

In the case of a DVD, the picture is something like 852x480 (16:9 widescreen).
Your 800x600 display will display nearly all the information in every
frame of the DVD. On your buddy's system, either the picture will be smaller,
or interpolated to a larger size (likely causing a small amount of degradation).
You might argue that a screen setting just large enough to display a complete
852x480 window gives the best results for watching a movie.

That's fine for DVD, but what if you want to watch your HD antenna/dish/cable
feed? Then you might want 1280x720, or even 1920x1080 to see all the detail
in the picture.

And this doesn't even begin to consider the issue of viewing multiple
images/windows/etc. at a time.

In summary, you are probably correct that "high resolution" isn't necessary for
DVD watching, but it certainly is useful for a lot of other things.


--
Dan (Woj...) [dmaster](no space)[at](no space)[lucent](no space)[dot](no
space)[com]
===============================
"I play the piano no more running honey
This time to the sky I'll sing if clouds don't hear me
To the sun I'll cry and even if I'm blinded
I'll try moon gazer because with you I'm stronger"
Anonymous
June 16, 2005 9:04:41 PM

"bxf" <bill@topman.net> wrote in message
news:1118925982.351383.175100@g49g2000cwa.googlegroups.com...
> Hence my last question in the original post: what is the difference
> between an image of a certain viewing size (dictated by the monitor
> resolution), and the same image, viewed under higher resolution
> settings and therefore a smaller image on the screen, all other things
> being equal), but magnified by the application (or "other settings", as
> you put it)?

Here we again run into confusion problems between "resolution"
as is commonly used here and the term in its technically proper
sense - but the bottom line is that a given object rendered at
a specific resolution (in terms of PPI) looks the same no matter
how many pixels are in the complete image (i.e., the full screen)
or how large that screen is. In other words, if you have an image
of, say, an apple appearing on your display, and that apple appears
3" tall and at 100 ppi resolution (meaning the the apple itself is
about 300 pixels tall), nothing else matters.

In the example you are talking about, though, the apple's image is NOT
necessarily "at a higer resolution" in a perceptual sense of the term. You
have more pixels in the entire display, but the same number in
the apple - making the apple smaller. Whether or not this LOOKS
better depends on just where the two cases were in terms of the spatial
acuity curve of the eye. If the smaller-but-same-number-of-pixels-version
now has the pixels sufficiently smaller such that you're past the acuity
limit, all the detail might still be there but it's useless - your eye
can't resolve it, and so you do not perceive it as being at the same
level of "detail" or "quality". This is why, for instance, it would be
pretty silly to be talking
about something like a 2048 x 1536 display on a PDA - you can't
possibly, in normal use, be seeing such a thing from a small enough
distance to really make use of all those pixels.

> Simplistically, this is how I see the situation: we have an image of A
> x B pixels. If we view it under monitor resolution settings of say, 800
> x 600, we will see an image of a certain size, which depends on the
> monitor in use. If we change the resolution to 1600 x 1200, we are
> halving the size of each monitor pixel, and the image will be half the
> size that it was at 800 x 600. If we now tell the application to double
> the size of the image, the application must interpolate, so that each
> pixel in the original image will now be represented by four monitor
> pixels. This would not result in increased image quality, and it
> requires that the application do some CPU work which it didn't have
> to do when the monitor was at the lower resolution setting.

And this is the problem with rendering images in terms of a fixed
number of pixels, rather than adapting to the available display
resolution (in terms of pixels per inch) and holding the image
physical size constant. Systems which do this are just fine as long
as all displays tend to have the same resolution (again in ppi, which
has been true for computer monitors for some time - right around
100 ppi has been the norm), but as we see more different display
technologies and sizes in the market, offering a much wider
range of resolutions (50 - 300 ppi is certainly possible already),
this model breaks.
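
What a ppi-aware system would do instead, roughly, is hold the physical
size constant and derive the pixel count from the display - a toy sketch,
not any particular toolkit's API:

def inches_to_pixels(size_in, display_ppi):
    # Resolution-independent rendering: same physical size on any display
    return round(size_in * display_ppi)

for ppi in (50, 100, 300):
    print(ppi, "ppi ->", inches_to_pixels(3.0, ppi), "pixels for a 3-inch-tall apple")
# -> 150, 300, and 900 pixels respectively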


Bob M.
Anonymous
June 16, 2005 9:32:51 PM

bxf wrote:

>
> J. Clarke wrote:
>> bxf wrote:
>>
>
>> Not magnified. Font size, icon size, etc adjusted at the system level,
>> so things are the same size but sharper.
>
> How do these apply if I'm viewing an image with a graphics program or
> watching a DVD?

Depends on the image. If it's 100x100 then you don't gain anything, if it's
3000x3000 then you can see more of it at full size or have to reduce it
less to see the entire image.

For DVD there's not any practical benefit if the display resolution is
higher than the DVD standard, which is 720x480.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
June 17, 2005 7:47:59 AM

Dan Wojciechowski wrote:
> ...
> > How do these apply if I'm viewing an image with a graphics program or
> > watching a DVD?
> >
>
> As long as the number of pixels in the source image is less than the number
> being displayed, increased resolution doesn't buy you anything when viewing
> the image. If the number of pixels in the source image is greater than the
> current display setting, then a higher display "resolution" will improve the
> picture because more of the source pixels can be represented.
>
> For example:
> Your screen is set at 800x600 = 480,000 pixels = 0.48MegaPixels.
> You have a digital camera that takes a 2MegaPixel picture = 1600x1200.
> You will only be able to see about 1/2 of the detail in the picture if you
> display the picture full screen. However, your buddy has a "high resolution"
> monitor capable of 1600x1200 pixels. When he views the picture full
> screen, he will see it in all its glory. }:)  Now given that 4, 5, 6, and even
> 8 MP cameras are common today, you can see why higher resolutions
> can be convenient for displaying and working with digital images.

Sorry Dan, the above is incorrect.

If you view a large image on a screen set to 800x600, you will see only
a portion of the image. If you view the same image with a 1600x1200
setting, the image will be smaller and you will see a larger portion of
it. That's all. There's nothing here that implies better detail. The
image may appear SHARPER at 1600x1200, but that is simply because the
detail is smaller, just like small TV screens look sharper than large
ones.

> In the case of a DVD, the picture is something like 852x480 (16:9 widescreen).
> Your 800x600 display will display nearly all the information in every
> frame of the DVD. On your buddy's system, either the picture will be smaller,
> or interpolated to a larger size (likely causing a small amount of degredation).
> You might argue that a screen setting just large enough to display a complete
> 852x480 window give the best results for watching a movie.

Well, this makes sense to me, and I'm trying to confirm that I'm
understanding things correctly. In addition to less degradation, there
should also be less CPU overhead, due to the absence of interpolation.

> That's fine for DVD, but what if you want to watch your HD antenna/dish/cable
> feed? Then you might want 1278x720, or even 1980x1080 to see all the detail
> in the picture.

Once again, the monitor setting does not improve the detail you can
see. If your IMAGE is larger (e.g. 1920x1080 vs 1280x720), THEN you are
able to see more detail. But this is not related to your MONITOR
setting, which is only going to determine the size of the image and
hence what portion of it you can see.
Anonymous
June 17, 2005 8:04:18 AM

J. Clarke wrote:
> bxf wrote:
>
> >
> > J. Clarke wrote:
> >> bxf wrote:
> >>
> >
> >> Not magnified. Font size, icon size, etc adjusted at the system level,
> >> so things are the same size but sharper.
> >
> > How do these apply if I'm viewing an image with a graphics program or
> > watching a DVD?
>
> Depends on the image. If it's 100x100 then you don't gain anything, if it's
> 3000x3000 then you can see more of it at full size or have to reduce it
> less to see the entire image.

OK, but this is not a quality issue. You view the image at a size that
is appropriate for your purposes.

If I'm photoediting an image, I need to see a certain level of detail
in order to work. That means that, on any given monitor, I must have
the image presented to me at a size that is convenient for my intended
editing function. Does it matter whether this convenient size is
achieved by adjusting monitor "resolution" or by interpolation (either
by the video system or by the application)? If the "resolution" setting
is low, then I would ask the application to magnify the image, say,
20x, whereas at a higher "resolution" setting I may find it appropriate
to have the application magnify the image 40x (my numbers are
arbitrary). Is there a difference in the end result?
Anonymous
June 17, 2005 9:36:02 AM

In addition to the above we have the question of the larger pixels, but
I don't know how to fit that into the equation.
Anonymous
June 17, 2005 1:38:16 PM

J. Clarke wrote:

> If the feature size on
> the image is at 40x still smaller than the pixel size then you gain from
> the higher res. If not then you don't.

I believe this statement is relevant, but I need to know what you mean
by "feature size". Also, by "pixel size" do you mean the physical size
of the pixel on the monitor?

> If you're used to low resolution and you change to high resolution then you
> may not notice much difference. But when you go back to low-res you almost
> certainly will.

While I'm writing as if I believe that "resolution" setting makes no
difference to the image we see, I am in fact aware that this is not the
case. I know that at low settings the image looks coarse.
Anonymous
June 17, 2005 2:55:48 PM

bxf wrote:

>
> J. Clarke wrote:
>> bxf wrote:
>>
>> >
>> > J. Clarke wrote:
>> >> bxf wrote:
>> >>
>> >
>> >> Not magnified. Font size, icon size, etc adjusted at the system
>> >> level, so things are the same size but sharper.
>> >
>> > How do these apply if I'm viewing an image with a graphics program or
>> > watching a DVD?
>>
>> Depends on the image. If it's 100x100 then you don't gain anything, if
>> it's 3000x3000 then you can see more of it at full size or have to reduce
>> it less to see the entire image.
>
> OK, but this is not a quality issue. You view the image at a size that
> is appropriate for your purposes.
>
> If I'm photoediting an image, I need to see a certain level of detail
> in order to work. That means that, on any given monitor, I must have
> the image presented to me at a size that is convenient for my intended
> editing function. Does it matter whether this convenient size is
> achieved by adjusting monitor "resolution" or by interpolation (either
> by the video system or by the application)? If the "resolution" setting
> is low, then I would ask the application to magnify the image, say,
> 20x, whereas at a higher "resolution" setting I may find it appropriate
> to have the application magnify the image 40x (my numbers are
> arbitrary). Is there a difference in the end result?

Suppose your monitor could display one pixel? How much more useful to you
would a monitor that can display two pixels be? How about four? See where
I'm going?

If the application magnifies 40x, whether you get a benefit from higher
resolution or not depends again on the image size. If the feature size on
the image is at 40x still smaller than the pixel size then you gain from
the higher res. If not then you don't. One thing you do gain if you use
the default settings for font size and whatnot is that there is more
available screen area to display your image and less of it taken up by
menus and the like.

If you're used to low resolution and you change to high resolution then you
may not notice much difference. But when you go back to low-res you almost
certainly will.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
June 17, 2005 6:21:14 PM

bxf wrote:

>
> J. Clarke wrote:
>
>> If the feature size on
>> the image is at 40x still smaller than the pixel size then you gain from
>> the higher res. If not then you don't.
>
> I believe this statement is relevant, but I need to know what you mean
> by "feature size".

The size of a single pixel of the raw image.

> Also, by "pixel size" do you mean the physical size
> of the pixel on the monitor?

Yes.

>> If you're used to low resolution and you change to high resolution then
>> you
>> may not notice much difference. But when you go back to low-res you
>> almost certainly will.
>
> While I'm writing as if I believe that "resolution" setting makes no
> difference to the image we see, I am in fact aware that this is not the
> case. I know that at low settings the image looks coarse.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
June 18, 2005 11:51:55 PM

"bxf" <bill@topman.net> wrote in message
news:1119006258.326842.169360@g47g2000cwa.googlegroups.com...
> If I'm photoediting an image, I need to see a certain level of detail
> in order to work. That means that, on any given monitor, I must have
> the image presented to me at a size that is convenient for my intended
> editing function. Does it matter whether this convenient size is
> achieved by adjusting monitor "resolution" or by interpolation (either
> by the video system or by the application)? If the "resolution" setting
> is low, then I would ask the application to magnify the image, say,
> 20x, whereas at a higher "resolution" setting I may find it appropriate
> to have the application magnify the image 40x (my numbers are
> arbitrary). Is there a difference in the end result?

OK - I think I see what the basic question really is, now, and
also let me apologize for not having been able to keep up with the
conversation the last couple of days due to some business travel.

Unfortunately, the answer to the above is going to have to be "it
depends." Let's consider an original image with a pixel format
far beyond anything that could reasonably be accommodated, in
total, on any current monitor - say, something like a 4k x 3k image.
And all you have to view (and edit) this image on is a 1024 x 768
display. Clearly, SOMETHING has to give if you're going to
work with this image on that display.

You can, as noted, scale the full image down to the 1024 x 768
format of the display - which is effectively a combination of
resampling and filtering the high-resolution information available
in the original down to this lower format. Obviously, you
unavoidably lose information in presenting the image this way, since
you only have about 1/16 of the original pixels to deal with.
The other way is to treat the display as a 1024 x 768 "window"
into the original 4k x 3k space, which preserves all of the original
information but which means that you can't possibly see everything
at once. (We'll ignore intermediate combinations of these for the
moment.)

If you go with the latter approach, you can examine all of the detail
the original has to offer, but if you're trying to gauge qualities of
the original image which can't be observed by only looking at a
small part (the overall color balance or composition, say), then clearly
this isn't the way to go. Looking at the scaled-down image, on the
other hand, lets you see these "overall" qualities at the cost of not
being able to examine the fine details. So the answer to the question of
which one is "best" depends almost entirely on just what you're trying
to do with the image. For a lot of image-editing or creation work,
the optimum answer is going to be a combination of these two
approaches - showing a scaled-down but "complete" image for
doing color adjustments and so forth, and working with the raw
"full-resolution" version to observe and tweak the full details. As
long as you preserve the original 4k x 3k DATA somewhere, no
matter how you VIEW it, nothing is really lost either way.
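
For what it's worth, the two approaches look something like this with a
library such as Pillow (the filename and crop origin are placeholders):

from PIL import Image

im = Image.open("big_photo.tif")             # say, a 4000 x 3000 original

# Approach 1: scale the whole image down to fit a 1024 x 768 display
overview = im.copy()
overview.thumbnail((1024, 768))              # resample/filter; fine detail is lost

# Approach 2: a 1024 x 768 "window" into the full-resolution data
x, y = 1500, 1000                            # some region of interest
window = im.crop((x, y, x + 1024, y + 768))  # every original pixel preserved

Either way, the original file on disk keeps its full 4k x 3k of data;
only the on-screen view differs.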

Did that help more than the previous takes on this?

Bob M.
Anonymous
June 19, 2005 12:05:05 AM

"J. Clarke" <jclarke.usenet@snet.net.invalid> wrote in message
news:d8v4rq011ue@news4.newsguy.com...
> > I believe this statement is relevant, but I need to know what you mean
> > by "feature size".
>
> The size of a single pixel of the raw image.

OK, but this gets into another often-overlooked aspect of
digital imaging, or rather the spatial sampling of images.
While you can speak of the pixel "pitch" of the raw image
(in terms of pixels per inch or cycles per degree or whatever),
the "pixels" of that image technically do not have ANY physical
size. Sampling theory requires that we consider them as
dimensionless point samples; in other words, ideally we have
a set of "pixels" which represent luminance and color information
taken at zero-size sampling points across the image. When
DISPLAYING this information, we then have to deal with various
forms of filtering that are then imposed upon this array of sampled
values (the most basic being just a "rectangular" or "block" filter,
i.e., the values of that sample are considered as applying equally
over the full width and height of a given physical area), but it is
this process which then introduces error/noise into the representation
of the image. (Now, it may be that the original image information
was produced by a device which does, in fact, employ physical
"pixels" of a given size - but when dealing with samples images
from a theoretical or mathematical perspective, it's still important
to understand why a "pixel" in the image data is to be considered
as a point sample.)

Dr. Alvy Ray Smith, who was the first Graphics Fellow at
Microsoft and one of the cofounders of Pixar, wrote a very readable paper
explaining why this is an important distinction to make; it can be
found at:

ftp://ftp.alvyray.com/acrobat/6_pixel.pdf


Bob M.
Anonymous
June 20, 2005 8:43:58 AM

Firstly, my apologies for the delay in responding. As I work away from
home, I have no web access over the weekend. Also, I am in a European
time zone.

Rather than make specific quotes from the last few posts, let me say
that all the provided info is useful and appreciated. I do find that
the questions in my own mind have been redefining themselves somewhat
as the thread progresses.

Bob, I'm glad you had comments to make about John's statement
"The size of a single pixel of the raw image", because I would not
have known how to tackle that. I was not able to associate the term
"size" with a pixel of an image. To me, it has no size. At least
not until I print it or display it on a monitor.

And yet, paradoxically, I can't help but feel that this statement is
relevant to the one issue that I feel still has not been fully answered
here. Specifically, what is the significance of large monitor pixels,
as opposed to small ones? I can see that if image pixels did in fact
have size, then one could express a relationship between image pixel
size and monitor pixel size. But, as Bob explains, image pixels have no
size of their own.

So, at the risk of repeating myself, let's see if the following will
help us pinpoint the question (my current question) more precisely: if
we have an image of 100x100 pixels, what is the difference, if any,
between displaying it at 200% magnification on a monitor set to
1600x1200, and displaying it at 100% magnification on a monitor set to
800x600? There is no issue here with resolution or detail, as in either
case all the pixels are visible in their entirety, nor is there an
issue with image size, as in either case the displayed image will be
exactly the same size. Are small pixels "better" than large ones?

If the answer to the above is "no real difference", then I would
have to wonder why not run at a lower monitor "resolution" setting
and relieve the video system of some of the hard work it must do when
coping with high "resolution" settings (ignoring, of course, the
need for a high setting when it is required in order to view the
desired portion of the image). I believe this question is valid for
those situations where one in fact has control over everything that is
displayed. Unfortunately, this is not often the case. We can control
the size of an image or a video clip, but we cannot usually control the
size of the application's user interface. Nor the size of the
desktop, explorer, or whatever. Because of this, it seems to me that my
questions have no potential practical benefit. Perhaps one day we will
have scalable GUIs, etc, at which time my points will have more
significance.
Anonymous
June 20, 2005 4:03:19 PM

Bob Myers wrote:
> "bxf" <bill@topman.net> wrote in message
> news:1119267838.963019.244460@o13g2000cwo.googlegroups.com...
>

> On the other hand, having more (but
> smaller) pixels with which you can play also opens up the
> possibility of certain tricks you could play with the original
> imaging data (like "smoothing" or "anti-aliasing" things a little
> better) which may make the image LOOK better, even though
> they are not in any way increasing the objective accuracy of its
> presentation. So it comes down to what you (or the particular
> viewer in question) are most concerned about, and nothing more.
> If it's just looking at the original data in an accurate presentation,
> warts and all, and doing this in the most efficient manner possible,
> you'd probably want to choose the "lower res" display setting.
> If you want to play games to make it "look good" (and aren't
> worried about what's actually in the original data), you may have
> some reason to go with the "higher-res" setting.
>
> Bob M.

I believe you've covered just about everything that could be said on
the subject (certainly at my level, and then some). The last paragraph
spells out some of the practical benefits of small pixels, which should
have been rather obvious, yet were not details that I had considered
while formulating the questions posed in this thread.

Thanks for the conversation and all contributions.

Bill
Anonymous
June 20, 2005 10:16:18 PM

"bxf" <bill@topman.net> wrote in message
news:1119267838.963019.244460@o13g2000cwo.googlegroups.com...

> Bob, I'm glad you had comments to make about John's statement
> "The size of a single pixel of the raw image", because I would not
> have known how to tackle that. I was not able to associate the term
> "size" with a pixel of an image. To me, it has no size. At least
> not until I print it or display it on a monitor.

Right - it has no size at all. What it DOES have, though - or rather,
what the entire set of image data represents - is a certain spatial
sampling frequency (or usually a pair of such frequencies, along
orthogonal axes, even though they are often the same value).
Nyquist's sampling theorem applies to image capture just as well
as to anything else - anything within the original which represents
a spatial frequency greater than 1/2 the sampling rate (in terms of
cycles per degree or however you choose to measure it) CANNOT
be captured in the sampled data, or, worse, results in undesirable
artifacts through an "aliasing" process (which is precisely what the
infamous "Moire distortion" really is).

>
> And yet, paradoxically, I can't help but feel that this statement is
> relevant to the one issue that I feel still has not been fully answered
> here. Specifically, what is the significance of large monitor pixels,
> as opposed to small ones?

And if it's in those terms - "large" vs. "small" pixels, with no other
considerations - then there is no significance at all. You must know
the size of the image in question, and the distance from which it
will be observed, to make any meaningful comments about what
differences will result from different "pixel sizes." Concerns over
the "pixel size of the original image" are really bringing up a related
but distinct topic, which is the effect of resampling the image
data (if "scaling" is done) in order to fit it to the display's pixel
array. And as long as you are UPscaling (i.e., going to a higher
effective sampling frequency), this process can be done with
zero loss of actual information (which is not the same thing, of
course, as saying that the resulting upscaled image LOOKS the
same). Downscaling (downsampling) must always result in a
loss of information - it's unavoidable.
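
A quick way to convince yourself of the up-vs-down asymmetry (integer
factors and nearest-neighbour resampling, just to keep the sketch short):

import numpy as np

src = np.random.randint(0, 256, (4, 4))

up = np.repeat(np.repeat(src, 2, axis=0), 2, axis=1)   # upsample 2x
print(np.array_equal(src, up[::2, ::2]))               # True: every value recoverable

down = src[::2, ::2]                                   # crude 2x downsample
back = np.repeat(np.repeat(down, 2, axis=0), 2, axis=1)
print(np.array_equal(src, back))                       # almost always False: detail gone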


>
> So, at the risk of repeating myself, let's see if the following will
> help us pinpoint the question (my current question) more precisely: if
> we have an image of 100x100 pixels, what is the difference, if any,
> between displaying it at 200% magnification on a monitor set to
> 1600x1200, and displaying it at 100% magnification on a monitor set to
> 800x600?

Assuming that the display "pixels" are the same shape in both cases,
and that the image winds up the same size in both cases, then there is
virtually no difference between these two (assuming they have been done
properly, which is also not always the case).

> There is no issue here with resolution or detail, as in either
> case all the pixels are visible in their entirety, nor is there an
> issue with image size, as in either case the displayed image will be
> exactly the same size. Are small pixels "better" than large ones?

Ah, but the problem here is that you've brought an undefined (and
highly subjective!) term into the picture (no pun intended :-)) - just
what does "better" mean? More or less expensive? Having a
certain "look" that a given user finds pleasing? Being the most
accurate representation possible? Having the best color and
luminance uniformity, or the brightest overall image. Again, there
is exactly ZERO difference between these two cases in the amount
of information (in the objective, quantifiable sense) that is available
or being presented to the viewer. But that does not mean that all
viewers will "like" the result equally well.

> If the answer to the above is "no real difference", then I would
> have to wonder why not run at a lower monitor "resolution" setting
> and relieve the video system of some of the hard work it must do when
> coping with high "resolution" settings (ignoring, of course, the
> need for a high setting when it is required in order to view the
> desired portion of the image).

And you're right, that would be the sensible thing to do IF this
were all there was to it. On the other hand, having more (but
smaller) pixels with which you can play also opens up the
possibility of certain tricks you could play with the original
imaging data (like "smoothing" or "anti-aliasing" things a little
better) which may make the image LOOK better, even though
they are not in any way increasing the objective accuracy of its
presentation. So it comes down to what you (or the particular
viewer in question) are most concerned about, and nothing more.
If it's just looking at the original data in an accurate presentation,
warts and all, and doing this in the most efficient manner possible,
you'd probably want to choose the "lower res" display setting.
If you want to play games to make it "look good" (and aren't
worried about what's actually in the original data), you may have
some reason to go with the "higher-res" setting.

Bob M.