Quote:
Originally Posted by JSWolf
500dpi is rather silly. 300dpi is more than enough for reading.
That kind of resolution was a real requirement for specific technology implementations: PenTile AMOLED, for example... But it came from a gross paradigm shift: the producer insisted on calling a "pixel" what was instead an incomplete, non-independent collection of subpixels - improper terminology that could have been made illegal (those general-public technical descriptions were definitely tainted).
In other terms, some of us would agree that a "pixel" has to be an independent unit: able, visually, to represent all colours on its own. Logically, and in traditional practice, this applies to a square containing Red, Green and Blue light emitters. But Samsung, for some bad reason, started calling "pixels" one Red plus one Green subpixel, or one Blue plus one Green... And when laid out at traditional "pixel" densities, the result was a horrendous fishnet. A traditional 800x600 screen was made of 1,440,000 lights; a Samsung PenTile one, of 960,000... making the stated resolution meaningless.
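A minimal sketch of that arithmetic in Python (the counts are the ones above; the variable names are mine):

Code:
# Lights (subpixels) on an 800x600 panel, per subpixel layout.
width, height = 800, 600

rgb_lights = width * height * 3      # traditional RGB: 3 emitters per pixel -> 1,440,000
pentile_lights = width * height * 2  # PenTile RG/BG: 2 emitters per marketed "pixel" -> 960,000

print(rgb_lights, pentile_lights)    # same nominal resolution, a third fewer lights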
Things change when a Samsung PenTile pixel is considered to be made of eight lights instead of two - two Reds, four Greens and two Blues (↓): this restores the definition of resolution as we have always used it, makes the result a beautiful pixel and the rendering finally good, BUT
the 500DPI of the published specifications is then, in reality, visually, something like a 250DPI (see the little computation after the diagram)...
Now, for completeness: terming "pixels" those ugly chunks of a pixel was in some way legitimate, because they were independently controllable, which is a very good thing for definition - it is like having direct partial control over some subpixel entity. Nonetheless, one pixel made of two LEDs is not a pixel for perceptual purposes.
#-#- (↑)
-#-#
#-#-
-#-#
(That's a single perceptual pixel, but someone makes it count as four)
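As a two-line computation (a sketch; reading the diagram above as a 2x2 block of marketed "pixels" forming one perceptual pixel):

Code:
# A full perceptual pixel (2R + 4G + 2B = 8 lights) spans 2x2 marketed PenTile "pixels",
# so the linear density is halved on each axis.
advertised_dpi = 500
perceptual_dpi = advertised_dpi / 2  # -> 250.0, the "something like a 250DPI" above
print(perceptual_dpi)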
Likewise, with some subtractive filtering technologies - used for example on EPD, and even on CLEARink - the original resolution "before applying colour" has to be spent as a subpixel matrix to render the final image. So the same display may be considered 200DPI where blacks are concerned, since the filtered dots stay dark, but is perceptually a 100DPI display wherever polychromy is rendered. And the whites... well, the effect has to be seen before one can be sure that talking about "whites" is even proper.
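Same arithmetic for the filtered-EPD case (a sketch, assuming a 2x2 colour-filter tile over the monochrome dots, which matches the 200-vs-100 figures above):

Code:
# Monochrome dots behind a colour filter array (EPD / CLEARink style).
native_black_dpi = 200                      # any dot can go dark -> full resolution for blacks
filter_tile = 2                             # assumed: 2x2 filtered dots per colour pixel
color_dpi = native_black_dpi / filter_tile  # -> 100.0 wherever colour is rendered
print(color_dpi)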
EDIT: but let me spell it all out, and be even more schematic.
If you have an 800x600 display, 8''x6'' in size, that is by construction a 100DPI display.
If the subpixels of each pixel are a 2x2 matrix of Red, Green, Blue and White, then the subpixel grid is "200DPI".
If you can control the subpixels directly, a shrewd salesman can sell it as a 200DPI device, but that is a big lie, because visually it IS a 100DPI screen.
And if the visualization technology is built in some baroque way, the subpixel density will HAVE to be huge, otherwise reading on it will be like holding a Jumbotron up as a newspaper...
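The same schematic example as numbers (a sketch; the sizes are the ones just above):

Code:
# 800x600 pixels on an 8"x6" panel.
px_w, px_h = 800, 600
in_w, in_h = 8, 6

pixel_dpi = px_w / in_w           # 100.0 (600/6 gives the same figure vertically)
subpixel_dpi = (px_w * 2) / in_w  # 200.0: an RGBW 2x2 matrix doubles the grid on each axis
visual_dpi = pixel_dpi            # what the eye actually gets: still 100

print(pixel_dpi, subpixel_dpi, visual_dpi)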