Ugly Hedgehog - Photography Forum
Main Photography Discussion
Shooting HDR — without bracketing
Page <<first <prev 3 of 5 next> last>>
Aug 24, 2013 06:46:32   #
DoctorChas Loc: County Durham, UK
 
mel wrote:
Doctor Chas, absolutely AWESOME


Thank you. I'm here all week—don't forget to tip your waitress :D

=:~)

Reply
Aug 24, 2013 07:29:25   #
Papa Joe Loc: Midwest U.S.
 
DoctorChas wrote:
Folks

It recently occurred to me that it should be perfectly possible to shoot HDR photos without actually shooting bracketed shots. Essentially, the idea is to create two or more additional shots based off the original RAW file with the requisite over or under-exposure.

My own particular HDR workflow uses both Aperture and HDR Efex Pro 2. When applying any sort of adjustment to a RAW image in Aperture (and I suspect Lightroom works the same way), Aperture "overlays" any changes to the original image in real time. For RAW images, a change to the exposure should simply alter those raw values; thus, functionally and mathematically, there should be virtually no difference between a shot deliberately under- or over-exposed in camera and one where you adjust the exposure value in post.

My own experimentation in this area suggests that this is actually the case and, in evidence, I offer the two examples below. The first is composited from three shots done in camera. The second is created by taking the original normal exposure and then creating two versions—one under-exposed, the other over-exposed, each by the same amount as the in-camera bracket, in this case 1⅓ stops.

This means that it becomes possible to shoot rapidly moving subjects like racing cars or aircraft in HDR—which would normally be impossible with conventional bracketed exposures—simply by manipulating the math. I have seen some discussion of this issue on other sites but, to me, it seems that they are not grasping what's happening to the underlying math when manipulating the RAW file.

I would be most grateful for comments, gripes and whinges particularly those that indicate I am barking up the wrong tree.

=:~)



They look very close to being identical, Doc! Nice approach. The top one might be just a tiny bit darker, thus exhibiting a little bit more detail - e.g. the rope - but it sounds like an idea I'd like to try. Thank you!
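The synthetic-bracketing idea under discussion is, at heart, a multiplication: in linear RAW space, each stop of exposure is a factor of two. A minimal numpy sketch with made-up values (not Aperture's actual pipeline):

```python
import numpy as np

def synthetic_bracket(linear, ev):
    """Re-expose linear sensor data by `ev` stops.

    Each stop is a factor of 2 in linear light, so a synthetic
    bracket is just a multiply, clipped to the normalised range.
    """
    return np.clip(linear * (2.0 ** ev), 0.0, 1.0)

# A hypothetical normally-exposed frame (linear values in [0, 1]).
normal = np.array([0.05, 0.25, 0.50, 0.75])

under = synthetic_bracket(normal, -4/3)   # the "-1 1/3 stop" version
over  = synthetic_bracket(normal, +4/3)   # the "+1 1/3 stop" version
```

The three arrays can then be fed to any HDR merge. Note that values near the top of the single capture's range simply clip in `over`, whereas a genuine brighter exposure would have recorded real detail there.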

Reply
Aug 24, 2013 07:30:18   #
sb Loc: Florida's East Coast
 
I have been reading about the Fuji FinePix HS50, which will actually take three images with bracketed exposure with a single push of the button - rather than taking three bracketed images and doing the HDR merge in-camera, like some of the Canon cameras now do. Since it is not using three separate exposures, I don't know if it is making three bracketed images by adjusting ISO, or what - but it seems a good way of doing this, perhaps without always needing a tripod. Anyone have any experience doing HDR with this camera?

Reply
 
 
Aug 24, 2013 07:54:05   #
RJM Loc: Cardiff, S Wales, UK
 
I looked at both images and can't really see any difference!

Reply
Aug 24, 2013 07:59:11   #
winterrose Loc: Kyneton, Victoria, Australia
 
DoctorChas wrote:
Err sorry but bit depth is directly related to dynamic range. Ignoring the CFA for a moment, a sensor can only record the amount of light falling on it—the luminance. A 12-bit sensor can record 4096 possible values; a 14-bit sensor 16,384 and a 16-bit sensor 65,536.

Put another way, 12 bits give you a 4096:1 contrast ratio and a dynamic range of 12 stops; 14 bits, 16,384:1 and a DR of 14 stops; and 16 bits, a 65,536:1 CR and 16 stops of DR. The total dynamic range, however, is limited by noise levels and the quality of the A/D convertor, and thus the practical limit is 5-9 stops. Therefore I consider the raw output of the sensor and its bit depth an indicator of the available dynamic range in precisely the same way that digital audio treats it: a 16-bit recording has a dynamic range of 96dB; one at 24 bits goes up to 120dB.

Once through the A/D convertor, these higher bit values allow us a greater tonal range without running into posterisation problems.

Incidentally, all the literature I've read shows that, although the eye has a total range of about 30 stops, under normal light conditions the effective range is 10-14 stops. This is due to changes in the rhodopsin cycle between dark and light conditions. Watch out, folks—there's some seriously nutty biochemistry involved so make sure to take your Vitamin A :D

=:~)


Sorry Doctor, you can rattle off all the numbers you like but even if you have everyone else in awe of your apparent knowledge, you are wrong. I will say it again....bit depth has nothing to do with the actual dynamic range to which any given sensor can usefully respond. Bit depth refers to the degree of resolution of the recorded data. It looks like I might have to write another one of my (annoying to some) threads to properly explain this concept to people who have it hopelessly screwed up. Cheers, Rob.
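For readers following the numbers, the figures being traded above come from this arithmetic; whether they describe *usable* sensor dynamic range, or only the precision of the recorded data, is exactly the point in dispute. A quick check in Python:

```python
import math

def bit_depth_figures(bits):
    """Contrast ratio, theoretical stops and dB implied by a bit depth."""
    levels = 2 ** bits                 # distinct code values
    stops = math.log2(levels)          # one stop per doubling of signal
    db = 20 * math.log10(levels)       # ~6.02 dB per bit, as in digital audio
    return levels, stops, db

for bits in (12, 14, 16):
    levels, stops, db = bit_depth_figures(bits)
    print(f"{bits}-bit: {levels}:1 contrast, {stops:.0f} stops, {db:.0f} dB")
```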

Reply
Aug 24, 2013 08:02:15   #
nairiam Loc: Bonnie Scotland
 
DoctorChas wrote:
Folks

It recently occurred to me that it should be perfectly possible to shoot HDR photos without actually shooting bracketed shots. Essentially, the idea is to create two or more additional shots based off the original RAW file with the requisite over or under-exposure.

My own experimentation in this area suggests that this is actually the case and, in evidence, I offer the two examples below. The first is composited from three shots done in camera. The second is created by taking the original normal exposure and then creating two versions—one under-exposed, the other over-exposed, each by the same amount as the in-camera bracket, in this case 1⅓ stops.
=:~)

I understood that bracketing was done by changing shutter speed, since changing aperture changes depth of field; the shots are slightly different in that respect. The two examples you show, however, display little difference.

Reply
Aug 24, 2013 08:52:13   #
Mercer Loc: Houston, TX, USA
 
Thanks, DrChas, for a thoughtful and interesting post. I am not sure I understand some of it, but I look forward to trying it with my D3200, which does not have a bracketing setting. I can't help but wonder if HDR is a true result from using the single RAW camera image three times, because nothing is added or taken away from the original image; but maybe this is not a factor. So that we might better compare, maybe you could post an image that features a wider contrast range that would make the HDR process a bit more obvious. In any event, thanks for your work and your post.

Reply
 
 
Aug 24, 2013 09:00:16   #
winterrose Loc: Kyneton, Victoria, Australia
 
Mercer wrote:
Thanks, DrChas, for a thoughtful and interesting post. I am not sure I understand some of it, but I look forward to trying it with my D3200, which does not have a bracketing setting. I can't help but wonder if HDR is a true result from using the single RAW camera image three times, because nothing is added or taken away from the original image; but maybe this is not a factor. Maybe you could post an image that features a wider contrast range. In any event, thanks for your work and your post.


Seriously, I wouldn't know what to call it but it definitely isn't HDR, all it is is very misleading to those who know better. If the good Doctor cares to try to convince me I would be happy to debate.

Reply
Aug 24, 2013 09:47:09   #
UtahBob Loc: Southern NJ
 
winterrose wrote:
Sorry Doctor, you can rattle off all the numbers you like but even if you have everyone else in awe of your apparent knowledge, you are wrong. I will say it again....bit depth has nothing to do with the actual dynamic range to which any given sensor can usefully respond. Bit depth refers to the degree of resolution of the recorded data. It looks like I might have to write another one of my (annoying to some) threads to properly explain this concept to people who have it hopelessly screwed up. Cheers, Rob.


I get what you are saying. If I were to take an 8-bit image and convert it to a 16-bit image, technically it still only has the elements of an 8-bit image. There is no added resolution provided by the 16 bits. Similarly, if a sensor can provide data for 12 bits but one chooses to store it in 14 bits, it doesn't mean that you have the accuracy of 14 bits, since you started with 12-bit data. I might have some terms here inappropriately used, but that's what you are getting at, I believe?
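The point can be demonstrated directly: widening the container adds no tonal information. A small numpy sketch:

```python
import numpy as np

# An 8-bit image can hold at most 256 distinct tonal levels.
img8 = np.arange(256, dtype=np.uint8)

# Store it in a 16-bit container (scaling 0..255 onto 0..65535).
img16 = img8.astype(np.uint16) * 257

# The container now spans 65,536 possible codes, but the data
# still contains only the original 256 distinct values.
print(len(np.unique(img16)))   # 256
```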

Reply
Aug 24, 2013 09:49:24   #
UtahBob Loc: Southern NJ
 
DoctorChas wrote:
There's a major update to PanoTour coming which should address that issue.

Thanks for looking :D

=:~)


I need to look at that software - at first I thought that was the same as Pano2vr but it is not ...

Reply
Aug 24, 2013 09:56:29   #
mikegreenwald Loc: Illinois
 
More simply put, if you take 3 or 5 shots at 2 EV intervals, the algorithms have more information to work with, and they're able to produce a better result in HDR. For shots with rapid subject motion, a single shot may be the better compromise because of ghosting problems. If you print photos with extreme highlights and shadows, the ONLY way to get highlight and shadow detail is with multiple exposures at the larger EV differentials.

Reply
 
 
Aug 24, 2013 09:59:19   #
DoctorChas Loc: County Durham, UK
 
winterrose wrote:
Sorry Doctor, you can rattle off all the numbers you like but even if you have everyone else in awe of your apparent knowledge, you are wrong. I will say it again....bit depth has nothing to do with the actual dynamic range to which any given sensor can usefully respond. Bit depth refers to the degree of resolution of the recorded data. It looks like I might have to write another one of my (annoying to some) threads to properly explain this concept to people who have it hopelessly screwed up. Cheers, Rob.


I'll have to respectfully disagree with you, and my "apparent" knowledge is garnered through careful study. I do notice that you have failed to answer my original question: what is the functional difference between a RAW image recorded in-camera underexposed and Aperture taking a normally-exposed image and dialling in an identical underexposure value?

=:~)

Reply
Aug 24, 2013 10:32:11   #
R.G. Loc: Scotland
 
Can I add my simplistic take on the subject?

With 3 step bracketing there is an over-, an under- and a zero offset exposure. For a "normal" exposure, the camera is presumably programmed to keep the brightest of the highlights from being blown out, and if they are just short of being blown out, the over-exposed shot won't add any detail at the bright end because it will blow out the highlights. So bracketing or in-camera HDR won't add any information to the file (whether it's RAW or Jpeg).

However, if there's an intense source of light in the frame that's affecting how the camera evaluates both the light levels and the required exposure levels, the camera will try to accommodate the light source, and in the process it could unnecessarily lower the exposure level, resulting in an unnecessary compression of the range that the other data (the non-light-source data) falls within. Bracketing or in-camera HDR would result in a more even spread of data.

Further to the OP's comments, rearranging the luminosity values can achieve a subjective improvement as regards how a shot looks to the eye, but as far as improvements to the range of the captured data are concerned, if the zero offset shot was exposed as described above (the highlights just short of being blown out), neither in-camera HDR nor bracketing will add any further luminosity data. The game then centres around how the existing luminosity data is utilised. And it seems it is possible to play endless games with it (for better or worse).
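The highlight question is where a synthetic bracket and a real one genuinely diverge, and a toy model shows why: in the synthetic case the scaling happens *after* the sensor clips, in a real bracket it happens *before*. A sketch with made-up numbers:

```python
import numpy as np

FULL_WELL = 1.0                        # normalised sensor clip level

# True scene luminance; the last two values exceed the sensor's range.
scene = np.array([0.2, 0.9, 1.6, 3.0])

# A real -1-stop bracket halves the light BEFORE the sensor clips,
# so the 1.6 highlight is recorded as 0.8, with real detail.
real_under = np.clip(scene * 0.5, 0.0, FULL_WELL)

# The synthetic version halves the normal exposure AFTER clipping,
# so both blown highlights collapse to the same flat 0.5.
normal = np.clip(scene, 0.0, FULL_WELL)
fake_under = normal * 0.5
```

Within the sensor's range the two agree exactly (0.2 and 0.9 come out the same either way), which is consistent with the OP's matching examples; the difference only appears where data was clipped in the single capture.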

Reply
Aug 24, 2013 11:21:47   #
Armadillo Loc: Ventura, CA
 
DoctorChas wrote:
I understand exactly what HDR is. My point was to try to understand the mathematical difference between the data in a RAW file under- or over-exposed by, say, 1 stop in post and a RAW file actually under- or over-exposed by that same value in camera. To my mind, provided that the data in either case lies within the available dynamic range of the camera, there should be no discernible difference.

Surely that's why we shoot RAW; so we can play Old Harry with the math in post?

=:~)


DoctorChas,

To attempt to explain this in a slightly different way I will use an analogy that works for two similar imaging devices, video, and digital single frame cameras.

If you were to go outside on a bright sunny day and view a scene with normal Human eyes you would notice a very nice photographic opportunity. If you were to measure the light throughout this scene you would have a very wide range of light measurements.

If you now take your camera and use its exposure meter to measure the same scene you may well record a very wide exposure range as the camera passes over the bright sunlit areas and the shadowed areas. Let us propose your camera can capture an exposure range of +1 to -1 EV from the average of the scene at 0 EV.

Now let us shift this measurement over to video: +1 EV = 0.7 V, and -1 EV = 0 V (total black shadows). When the digital sensor reaches 0.7 V the captured image blows out to pure white, and when the digital sensor falls to 0 V the captured image descends to pure black. (This holds true for Color and B&W; this is the Luminance Channel.)

When you set your camera up for bracketed captures, with the exposures set for +1.75 EV, 0 EV, and -1.75 EV, you have extended the effective exposure range of the sensor well beyond 0.7 V and below 0 V (-0.3 V) with the two additional exposures.

What you will accomplish is taking the part of the image that, under normal exposure values, would be clipped at 0.7 volts and lowering the exposure value so that all the clipped areas fall below 0.7 volts. You would also take those parts of the scene that fell below 0 V and raise their exposure values above 0 V.

When you transfer your exposures into your computer and apply HDR processing, you merge all three exposures into one well-balanced final image. (The merging is usually performed by software using masks and layers to blend all three image exposures.) HDR captures are usually best made by setting the camera to Aperture Priority, so that shutter duration makes the exposure value changes.
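The voltage analogy can be sketched as a simple mapping (all numbers come from the analogy itself, not from a real sensor): the usable window runs from 0 V to 0.7 V, a scene value lands in the window according to its EV offset from the average, and a bracket re-centres the window.

```python
def ev_to_volts(ev):
    """Map a scene EV offset into the analogy's 0 V .. 0.7 V window.

    0 EV (the scene average) sits mid-window at 0.35 V; each stop
    moves the signal 0.35 V, clipping at pure white and pure black.
    """
    v = 0.35 + ev * 0.35
    return min(max(v, 0.0), 0.7)

# A highlight 1.75 stops above average clips in the normal shot...
print(ev_to_volts(1.75))          # 0.7 V: blown to pure white

# ...but the -1.75 EV bracket shifts it back inside the window.
print(ev_to_volts(1.75 - 1.75))   # 0.35 V: recorded with detail
```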

Michael G

Reply
Copyright 2011-2024 Ugly Hedgehog, Inc.