DoctorChas wrote:
I understand exactly what HDR is. My point was to try to understand the mathematical difference between the data in a RAW file pushed under- or overexposed by, say, 1 stop in post and a RAW file actually over- or underexposed by that same value. To my mind, provided that the data in either case lies within the available dynamic range of the camera, there should be no discernible difference.
Surely that's why we shoot RAW; so we can play Old Harry with the math in post?
=:~)
DoctorChas,
To attempt to explain this in a slightly different way, I will use an analogy between two similar imaging devices: video cameras and digital still cameras.
If you were to go outside on a bright, sunny day and view a scene with normal human eyes, you would notice a very nice photographic opportunity. If you were to measure the light throughout this scene, you would find a very wide range of light levels.
If you now take your camera and use its exposure meter on the same scene, you may well record a very wide exposure range as the meter passes over the bright, sunlit areas and the shadowed areas. Let us propose your camera can capture an exposure range of +1 to -1 EV around the scene average at 0 EV.
Now let us shift this measurement over to video: +1 EV corresponds to 0.7 V and -1 EV to 0 V (total black shadows), with the scene average at 0 EV sitting in between. When the digital sensor's signal reaches 0.7 V the captured image blows out to pure white, and when it falls to 0 V the image descends to pure black. (This holds true for colour and B&W; this is the luminance channel.)
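To make that mapping concrete, here is a small Python sketch (my own illustration of the analogy, not something from Michael's post): it maps the +/-1 EV scene range linearly onto the 0 to 0.7 V luminance range, and clips anything outside it, just as the sensor would.

```python
def ev_to_volts(ev, lo_ev=-1.0, hi_ev=1.0, white_v=0.7):
    """Map an exposure value onto the analog-video luminance range.

    -1 EV -> 0.0 V (pure black), +1 EV -> 0.7 V (pure white);
    anything outside that range clips, as the sensor does.
    """
    # Linear interpolation from the EV range onto 0..white_v
    v = (ev - lo_ev) / (hi_ev - lo_ev) * white_v
    # Clip: the sensor cannot record beyond black or white level
    return max(0.0, min(white_v, v))

print(ev_to_volts(0.0))    # scene average lands at 0.35 V
print(ev_to_volts(1.75))   # clips to 0.7 V: a blown highlight
print(ev_to_volts(-1.75))  # clips to 0.0 V: a blocked shadow
```

The point of the clip is exactly the problem bracketing solves: any scene detail outside the +/-1 EV window is flattened to the same voltage and is gone.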
When you set your camera up for bracketed captures at +1.75 EV, 0 EV, and -1.75 EV, the two additional exposures extend the captured range well beyond 0.7 V and below 0 V (about -0.3 V).
What you accomplish is taking the part of the image that, at the normal exposure, would be clipped at 0.7 V and lowering its exposure value so that it falls below 0.7 V. You also take those parts of the scene that fell below 0 V and raise their exposure values above 0 V.
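In linear sensor terms, a 1 EV shift halves or doubles the recorded signal, so the -1.75 EV frame divides every value by 2**1.75 (about 3.36). A quick sketch (the numbers here are my own illustration, not from the post) shows how a highlight that clips at the normal exposure falls back inside the recordable range in the underexposed frame:

```python
clip_level = 0.7   # "white" in the video analogy, in volts
highlight = 1.2    # true scene luminance, above the clip point

# Normal (0 EV) frame: the value is clipped and the detail is lost
normal = min(highlight, clip_level)

# -1.75 EV frame: the exposure is cut by 2**1.75, so the same
# highlight now lands below the clip level and keeps its detail
under = min(highlight / 2 ** 1.75, clip_level)

print(normal)  # 0.7: blown out
print(under)   # about 0.36: recorded intact
```

The +1.75 EV frame does the mirror-image job for the shadows, multiplying near-black values by 2**1.75 so they rise above the noise floor.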
When you transfer your exposures into your computer and apply HDR processing, you merge all three exposures into one well-balanced final image. (The merging is usually performed by software using masks and layers to blend all three exposures.) HDR captures are usually best made with the camera set to Aperture Priority, so that shutter duration, rather than aperture, makes the exposure-value changes and depth of field stays constant across the frames.
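The merge step itself can be sketched very simply. Real HDR software works per pixel with masks and response curves, but the core idea, under my own simplified assumptions (not any particular product's algorithm), is: bring each bracketed frame back to a common linear scale by undoing its EV shift, then weight each frame's contribution by how far its recorded value sits from the clipped extremes.

```python
def merge_pixel(frames):
    """Weighted average of bracketed exposures of one scene point.

    frames: list of (ev_shift, value) pairs, value in 0..1,
    where 1.0 means the pixel clipped in that frame.
    """
    total = weight_sum = 0.0
    for ev_shift, value in frames:
        # Triangle weight: trust mid-tones, distrust near-black/near-white
        w = 1.0 - abs(2.0 * value - 1.0)
        # Undo the bracket's exposure shift to recover scene radiance
        radiance = value / (2.0 ** ev_shift)
        total += w * radiance
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

# The same scene point as seen by the three brackets in the post;
# the +1.75 EV frame clipped (value 1.0) and so gets zero weight.
frames = [(-1.75, 0.2), (0.0, 0.67), (1.75, 1.0)]
print(merge_pixel(frames))
```

Note how the clipped frame contributes nothing: its weight is zero, so only the frames that actually recorded detail at that point shape the merged result.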
Michael G