Ugly Hedgehog - Photography Forum
Main Photography Discussion
Human eye vs the camera
Jan 8, 2020 17:22:18   #
grahamfourth
 
Last night the moon was behind some clouds creating interesting rings of different colors (sort of like a circular rainbow). As I looked at this with my eyes, the intensity of the moon was brighter than the rings, but not by a lot. As a very rough approximation, if you considered the rings to be intensity of "1" (in the appropriate units) the moon may have been a "3". I took out my camera to record the image and the moon came out significantly brighter than the clouds. This time if the clouds were a "1", the moon could have been a "30". As I thought about this I am not sure that changing any settings on the camera (shutter speed, aperture size, etc.) would actually change anything because the relative intensities of the moon and the rings would track together (I think). So I have two questions: First, does anyone know how our eyes are able to adjust so that we can see details in darker and brighter objects side by side but the camera does not (or at least not as well)? Second, does anyone have any suggestions or tips for photographing brighter objects alongside dimmer ones?

Thanks in advance for your help and insight.

Reply
Jan 8, 2020 17:27:37   #
CHG_CANON Loc: the Windy City
 
For the second question, about exposure, use the Spot metering option with the moon centered in the frame. The camera will ignore the darkness of the sky and just consider the bright moon for an exposure setting. Use your editing tools, especially if captured in RAW, to adjust the relative brightness of the moon to the other details in the image to arrive at an image that is your desired representation of what you experienced.

Reply
Jan 8, 2020 17:36:31   #
Stardust Loc: Central Illinois
 
CC covered question two. Question one has to do with the fact that our eyes have rods and cones: the rods are able to see in darkness and the cones are able to see colors. However, if it gets too dark, the cones cannot pick up enough light to see the colors, thus everything looks pretty much black & white on really dark nights unless illuminated. I assume the camera does not see as well because it does not have this dual system but instead just one middle-of-the-road sensor.

Reply
 
 
Jan 8, 2020 17:39:38   #
bleirer
 
I've read in the past about our vision being logarithmic, in that we perceive 4 times the intensity as only double; this allows us to have a far greater range of vision. I'll try to find the reference.

Reply
Jan 8, 2020 17:40:09   #
Floyd Loc: Misplaced Texan in Florence, Alabama
 
I've read our eyes have in excess of 500 megapixels and thus are able to see more shades of color, or lack thereof, than any camera, except military ones that are bigger than most 50 gal. trash cans. Quick-change focus capabilities are still out there on some far distant horizon.

Reply
Jan 8, 2020 17:46:53   #
Mongo Loc: Western New York
 
I can't get into all the detail here, but different parts of the retina are used for different light levels. A fully dark-adapted healthy eye has been shown to respond when as little as a single photon hits the retina.

The rods have better low-light sensitivity but are not good chromatic differentiators. The foveal vision area has a high density of cones and therefore has high acuity. Furthermore, if the cones are activated by enough photons, there will be chromatic (color) sensing.

If I were trying to capture the high dynamic range of your example, I would consider frame stacking with varied exposures. That way you could create a composite image, using a lower-sensitivity capture for the moon and a higher-sensitivity shot for the rings or the stars.

The shot you are trying to take is a difficult one (I have tried various methods), but it is made easier with some of the tools available today. In olden times, the overexposed moon would be dodged out to allow the rings, stars, or whatever to come through. A lot of guesswork was needed because there were no good instruments to control the exposures, both in the camera and in the darkroom. And registration of the stacked shots was problematic.

Life is also better today for that shot, because if you have colored halos around the moon, the camera will render the colors more vividly than film would have (reciprocity failure in film), and most likely better than your eyes will with respect to color rendering.

There are numerous packages available for frame stacking, including Deep Sky Stacker. You may want to look at others.

Frame stacking can also be used to average out atmospherics, deblur and remove other "artifacts." Oh, and as another poster already recommended, please capture RAW.
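The blend described above can be sketched in a few lines of Python. This is a minimal illustration only (assuming NumPy, frames already aligned and normalized to [0, 1], and an arbitrary highlight threshold), not a substitute for a real stacking package:

```python
import numpy as np

def blend_exposures(short_exp, long_exp, threshold=0.85):
    """Blend two aligned frames: take highlights from the short
    exposure and shadows/midtones from the long exposure.

    Both inputs are float arrays normalized to [0, 1]."""
    # Weight map: pixels nearly blown out in the long exposure
    # are pulled from the short exposure instead.
    w = np.clip((long_exp - threshold) / (1.0 - threshold), 0.0, 1.0)
    return w * short_exp + (1.0 - w) * long_exp

# Toy example: rings, clouds, and a moon pixel clipped in the long frame
long_frame = np.array([0.2, 0.5, 1.0])     # moon is blown out here
short_frame = np.array([0.02, 0.05, 0.6])  # same scene, a few stops darker
blended = blend_exposures(short_frame, long_frame)
```

The dark ring pixels come through from the long exposure unchanged, while the clipped moon pixel is replaced by its short-exposure value.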

Reply
Jan 8, 2020 18:23:28   #
Longshadow Loc: Audubon, PA, United States
 
Also, the cones are primarily in the center of your eye and require more light to see stuff.
The rods are more sensitive though.
Ever notice that if you look directly at something in the dark, then when you look just off to the side it appears brighter? That's because the object is no longer primarily in the cone area and the rods are viewing it.

Reply
 
 
Jan 8, 2020 19:42:39   #
bleirer
 
https://www.cambridgeincolour.com/tutorials/cameras-vs-human-eye.htm


https://clarkvision.com/articles/human-eye/

Reply
Jan 8, 2020 20:04:19   #
lukevaliant Loc: gloucester city,n. j.
 
I'm gonna simplify all of this: we see far backgrounds or dark backgrounds because our brains are amazing. We can fill things in; we know what it looks like. Cameras or computers will never be able to do that. (Or I might be wrong.)

Reply
Jan 8, 2020 20:19:25   #
Longshadow Loc: Audubon, PA, United States
 
lukevaliant wrote:
I'm gonna simplify all of this: we see far backgrounds or dark backgrounds because our brains are amazing. We can fill things in; we know what it looks like. Cameras or computers will never be able to do that. (Or I might be wrong.)


The mind fills in or overlooks many things. The camera does not.

Reply
Jan 8, 2020 21:02:05   #
TriX Loc: Raleigh, NC
 
It’s more complicated than rods and cones (although an understanding of photopic vs. scotopic vision is certainly worthwhile). The images from your eyes are first preprocessed and merged by the LGB (lateral geniculate body) before being forwarded to the brain, and that image processing is amazingly sophisticated and very fast. Note that if you rotate your head very quickly, the image that you see doesn’t “slide” across your visual field the way it would if you panned a video camera - it stays fixed. The point I’m getting at is that your visual system is much more sophisticated than your camera. The DR is huge compared to even the best DSLR, and the signal-processing abilities of your brain are indeed phenomenal. It’s been optimized for survival over millennia.

Reply
 
 
Jan 9, 2020 02:17:26   #
wdross Loc: Castle Rock, Colorado
 
grahamfourth wrote:
Last night the moon was behind some clouds creating interesting rings of different colors (sort of like a circular rainbow). As I looked at this with my eyes, the intensity of the moon was brighter than the rings, but not by a lot. As a very rough approximation, if you considered the rings to be intensity of "1" (in the appropriate units) the moon may have been a "3". I took out my camera to record the image and the moon came out significantly brighter than the clouds. This time if the clouds were a "1", the moon could have been a "30". As I thought about this I am not sure that changing any settings on the camera (shutter speed, aperture size, etc.) would actually change anything because the relative intensities of the moon and the rings would track together (I think). So I have two questions: First, does anyone know how our eyes are able to adjust so that we can see details in darker and brighter objects side by side but the camera does not (or at least not as well)? Second, does anyone have any suggestions or tips for photographing brighter objects alongside dimmer ones?

Thanks in advance for your help and insight.


Most people can see a contrast range of about 1:128. Most cameras cannot get to a contrast range of 1:15. But that is not all. The eye continuously generates a chemical - visual purple (rhodopsin) - that is "destroyed" by photons. This chemical has an approximate 200 K drop in color temperature and a slight hue shift toward magenta. With less light at night, visual purple tends to accumulate around the rods in the retina. When a photon hits one of the molecules just right, it sets off the whole pile, supplying all that energy to the rod. That enhances one's night vision.

Also, the eye is only capable of seeing fine detail to 3.8° and cannot see good detail beyond 5°. That is about the size of a quarter held at arm's length. That should also give you an idea of how fast the eye's movement and focusing system is. And as someone mentioned before, the rods are mainly your black-and-white night vision, versus the cones, which are your color vision. The fovea is your detail vision, totally cones, and its size is the reason for the max 5° detail vision. The cones and rods start at about a 50/50 ratio just outside the fovea, and the ratio changes dramatically in favor of the rods from there.

All this affects why someone can see something but cannot always shoot it without special tools or special techniques.
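Taking those contrast ratios at face value, converting them into photographic stops is just a base-2 logarithm; a quick Python sketch (the 1:128 and 1:15 figures are the ones quoted above, not measurements):

```python
import math

def ratio_to_stops(ratio):
    """Convert a contrast ratio (e.g. 128 for 1:128) into photographic stops."""
    return math.log2(ratio)

eye_stops = ratio_to_stops(128)     # 7.0 stops
camera_stops = ratio_to_stops(15)   # about 3.9 stops
```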

Reply
Jan 9, 2020 06:01:12   #
Don, the 2nd son Loc: Crowded Florida
 
Stardust wrote:
CC covered question two. Question one has to do that our eyes have rods and cones, with the rods able to see in darkness and the cones able to see the colors. However, if it gets too dark, the cones can not pick up enough light to see the colors, thus everything looks pretty black & white on really dark nights unless illuminated. I assume the camera does not see as well because it does not have this dual system but instead just one middle-of-the-road sensor.



Reply
Jan 9, 2020 06:48:09   #
jlg1000 Loc: Uruguay / South America
 
As others pointed out, human perception (light, sound, touch, taste, heat) IS logarithmic.

As was established in the 1920s, human perception roughly follows this basic equation: Lp = 10·log10(P/P0), where Lp is the perceived (logarithmic) signal level, log10 is the logarithm in base 10, P0 is some reference power, and P is the measured power.

So if you take P0 as the light power of the rings, then for the Moon at P = 30·P0 the increase in perceived signal was only about 14.8 dB (decibels).

If your base perception of the rings was about 4 dB, then you perceived the Moon at about 14 dB... roughly 3 times brighter. *BUT* the sensor in the camera is linear... so it correctly showed 30 times the light power of the rings.

To correct this, you could shoot a 5-frame bracket at ±2 EV and perform an HDR merge in post. Then apply a logarithmic tone curve to that part of the sky.

Note: "light power" means watts/square meter or - if you prefer quantum terms - photons/square millimeter. "Perceived signal" means the millivolts of signal in your optic nerve.
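The decibel figure above is easy to check in Python (the 30x power ratio is the one from the original post):

```python
import math

def db(p, p0):
    """Level in decibels for power p relative to reference power p0."""
    return 10.0 * math.log10(p / p0)

# Moon measured at 30x the brightness of the rings:
level = db(30.0, 1.0)
print(round(level, 1))  # ~14.8 dB
```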

Reply
Jan 9, 2020 08:19:58   #
SuperflyTNT Loc: Manassas VA
 
The simple answer to question one is that the dynamic range we can see is much greater than the dynamic range a camera can capture. The simple answer to question two is PP.

Reply
Copyright 2011-2024 Ugly Hedgehog, Inc.