Aside from the jargon about high pixel density, i.e. fitting more pixels into a relatively smaller area, doesn't a Retina display violate the fundamental definition of a pixel?
A pixel is a fundamental unit which has its own dimensions. A 1920 × 1080 image is basically digital data which represents, in absolute dimensions, a 20 × 14 inch image, regardless of the actual size of your monitor. This means that if we had a hypothetical monitor measuring 20 × 14 inches, it would display a 1920 × 1080 image such that each pixel in the image has exactly one physical pixel on the monitor representing it. With a bigger monitor, that density gets diluted; I can fathom that, but not the reverse. How can we concentrate an image beyond the bare minimum area it needs to occupy to represent the given resolution, as Retina claims to do?
Now, taking that into consideration, how does one fit 'more pixels' into a given area, when each individual pixel represents a unique bit of data and needs to occupy a certain amount of space to be called a pixel?
Taking my example above, fitting more pixels means keeping the image resolution intact but displaying it on a smaller screen, say 10 × 7 inches. Or using a resolution of 1440p on the 20 × 14 inch screen to squeeze more pixels into the same area. So where do these extra pixels find a place, as opposed to the former case where we had a 1:1 mapping for each pixel? Does it mean that the Retina display uses a smaller fundamental unit than what we conventionally know as a pixel?
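To make the density arithmetic behind my question concrete, here is a minimal sketch (the panel widths are the hypothetical ones from my example; real panels vary):

```python
def ppi(pixel_count: int, inches: float) -> float:
    """Pixel density along one axis: pixels per inch = pixel count / physical length."""
    return pixel_count / inches

# 1920 horizontal pixels spread across a 20-inch-wide panel:
print(ppi(1920, 20))  # 96.0 PPI

# The same 1920 pixels on a 10-inch-wide panel: halving the
# physical width doubles the density, without changing the data.
print(ppi(1920, 10))  # 192.0 PPI
```

This is exactly the dilution/concentration I am asking about: the pixel count is fixed by the image, so density is purely a function of how small the physical pixels on the panel are.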