Quote:
Originally Posted by pdurrant
[...] The most common* human eye detects red green and blue lightwaves. So additive (e.g. LCD TV screens) use red green and blue pixels. [...]
Actually, each "perceptual color" we detect is built from a wide band of possible frequencies picked up by several overlapping types of color sensor, each with a broad, roughly Gaussian sensitivity envelope. Because those envelopes overlap, the tails of neighboring sensors respond to the same wavelength once the amplitude (brightness) is high enough. And in fact, our "red" detectors can respond to infrared when it is of strong enough amplitude AND it is not masked by other nearby "reddish" frequencies.
Another way to verify that you can see infrared under special conditions is to grab a powerful infrared TV remote control, let your eyes dark-adapt in a dark room, then shine it directly into one eye at close range while you press a button (but not for long), and you should see a dull red flicker. When you can see infrared at all, it is at very, very high intensity, so as the somewhat humorous laser warning sticker says, "do not try this with your remaining eye". But seriously, I have done it and it works, and I did not go blind (though I suppose that could be because my free hand was not busy "taking care of business", as in the classic mother's warning).
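To make the amplitude-dependence concrete, here is a minimal sketch (a toy, not a physiological model) that treats each cone type as a simple Gaussian sensitivity curve. The peak wavelengths, curve widths, intensities, and the detection threshold are all numbers I am assuming for illustration, not measured cone fundamentals. It shows how an 850 nm near-infrared source (a typical remote-control wavelength) produces only a tiny long-wavelength ("red") cone response, which crosses the detection threshold only when the source intensity is enormous:

```python
import numpy as np

# Toy cone sensitivities modeled as Gaussians: (peak wavelength nm, width nm).
# Illustrative assumptions only, not measured human cone fundamentals.
CONES = {
    "L (red)":   (565.0, 55.0),
    "M (green)": (540.0, 45.0),
    "S (blue)":  (445.0, 30.0),
}

def sensitivity(wavelength_nm, peak, width):
    """Relative sensitivity of a Gaussian 'cone' at a given wavelength."""
    return np.exp(-0.5 * ((wavelength_nm - peak) / width) ** 2)

def cone_responses(wavelength_nm, intensity):
    """Response of each toy cone to a monochromatic source of given intensity."""
    return {name: intensity * sensitivity(wavelength_nm, peak, width)
            for name, (peak, width) in CONES.items()}

DETECTION_THRESHOLD = 1.0   # arbitrary units, assumed for illustration

for intensity in (1.0, 1e3, 1e6):           # ordinary LED vs. a very bright source
    r = cone_responses(850.0, intensity)    # 850 nm: common IR-remote wavelength
    visible = r["L (red)"] > DETECTION_THRESHOLD
    print(f"intensity {intensity:>9.0e}: L-cone response {r['L (red)']:.3e} "
          f"-> {'visible (dull red)' if visible else 'below threshold'}")
```

With these made-up numbers, the 850 nm response only clears the threshold at an intensity several orders of magnitude higher than a mid-band red would need, which is the same point the remote-control experiment illustrates.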
And back to color detection depending on signal strength: this is also why we can perceive monochromatic colors whose frequencies fall BETWEEN the centers of the red/green/blue bands we detect. Our visual processing compares the relative strengths of the overlapping sensor responses to judge where in the gap a stimulus lies, which is why an amber LED (a single frequency) looks yellow, and a pair of red and green LEDs (as in a color OLED display) can be set to appear as exactly the same color. So really, the colors we perceive across our narrow band of visible frequencies are not even a good representation of the actual EM frequencies we are detecting, just a crude model with genetic survival value.
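Here is a minimal sketch of that red + green = amber match, reusing the same toy Gaussian cone model as above (again, all numbers are assumptions for illustration). It solves for the intensities of a 630 nm "red" and a 530 nm "green" LED that produce the same L- and M-cone responses as a single monochromatic 590 nm amber LED; the S cone barely responds at any of these wavelengths, so two primaries are enough:

```python
import numpy as np

# Same toy Gaussian cone model as above: (peak wavelength nm, width nm).
# Purely illustrative numbers, not measured human cone fundamentals.
CONES = {"L": (565.0, 55.0), "M": (540.0, 45.0), "S": (445.0, 30.0)}

def response(wavelength_nm):
    """Vector of (L, M, S) responses to a unit-intensity monochromatic source."""
    return np.array([np.exp(-0.5 * ((wavelength_nm - peak) / width) ** 2)
                     for peak, width in CONES.values()])

amber = response(590.0)                       # single-frequency amber LED
red, green = response(630.0), response(530.0)

# Solve for LED intensities a, b so that a*red + b*green matches amber
# on the L and M cones (the S cone is essentially silent at these wavelengths).
A = np.array([[red[0], green[0]],
              [red[1], green[1]]])
a, b = np.linalg.solve(A, amber[:2])

mix = a * red + b * green
print(f"red intensity {a:.3f}, green intensity {b:.3f}")
print("amber LED cone responses:", np.round(amber, 4))
print("red+green mix responses: ", np.round(mix, 4))
```

With these made-up curves the solver picks positive intensities for both LEDs and the L and M responses match exactly; the leftover difference in the S channel is near zero for both stimuli. Two physically different spectra producing the same cone responses is what makes them look like the same color (a metameric match).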
Our senses detect VERY LITTLE of what COULD be detected with adequate sensors (but our technology helps us with that). And even then, our perceptual filters discard the vast majority of what our senses DO detect from the firehose of sensation (unless you are autistic), to prevent us from being overwhelmed.
And of course, how those filters work is VERY DIFFERENT for NeuroTypical ("normal") people and aspies. NTs (NeuroTypicals) have very fast and power-efficient (but slow-to-adapt) hardwired social-perceptual filters, which work well even when they are tired or hungry, and which can also pick up subtle social cueing. ASD (Autistic Spectrum Disorder) people have reduced or absent hardwiring in this area, and MUST have very high intellect so they can simulate this function consciously (i.e. in "software"), or they will end up overwhelmed by too much sensation (autistic). Because this type of social-perceptual filter is not the usual hardwired filter acquired genetically and during initial primary language acquisition, it must be learned by the child (i.e. self-programmed) if he is to acquire any social skills at all.

Such a "softwired" filter is not only slower and more power-hungry than a hardwired one, it also works poorly when its owner is tired, hungry, or distracted by pain (a flood of extra sensation) or by bullies. However, a "software" filter can let an aspie be aware of so much more, or tune to a laser-beam focus and exclude all else for long periods of time. A beneficial side effect is the ability to solve "impossible" problems quickly (when they grab his attention), and these filters can adapt to rapid environmental and technological change (as in the oncoming technological singularity).

Such folks have always existed, but technology is driving them together, so that each generation has many more aspie geeks, who themselves push technology forward (toward the technological singularity). We are the outliers of society who, though too often shunned and abused, actually protect the tribe by warning it of external danger, and of where and when the buffalo roam. And aspies may be the only ones who can adapt quickly enough to communicate with and understand our future AI overlords.
And while we are discussing visual perception, and how our simulation of reality is not even a particularly accurate representation of the tiny portion of the electromagnetic and pressure spectra that we do sense, Donald Hoffman's TED Talk is a "must see":
And this explains why we can mix a handful of pigments (or RGB display subpixels) to closely match the perceived color of almost any visible monochromatic LED or laser.
And this also brings an end to another "little professor" lecture. (Click the link.)
But let's end this post with a very short and humorous little video (though all these videos are worth watching in their entirety, especially Donald Hoffman's TED Talk):