Ugly Hedgehog - Photography Forum
Main Photography Discussion
Shooting HDR — without bracketing
Page 1 of 5
Aug 23, 2013 16:34:11   #
DoctorChas Loc: County Durham, UK
 
Folks

It recently occurred to me that it should be perfectly possible to shoot HDR photos without actually shooting bracketed shots. Essentially, the idea is to create two or more additional shots based off the original RAW file with the requisite over or under-exposure.

My own particular HDR workflow uses both Aperture and HDR Efex Pro 2. When you apply any sort of adjustment to a RAW image in Aperture (and I suspect Lightroom works the same way), Aperture "overlays" the changes on the original image in real time. With a RAW image, an exposure change should simply be rescaling the raw values, so, functionally and mathematically, there should be virtually no difference between a shot deliberately under- or over-exposed in camera and one where you adjust the exposure value in post.
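The reasoning can be sketched numerically (a hedged illustration, not Aperture's actual pipeline): on linear sensor data, an exposure change of n stops is just a multiplication by 2^n, so a post-hoc shift is mathematically identical to an in-camera one right up until values clip at the white or black point.

```python
import numpy as np

# Simulated 12-bit linear RAW values (0-4095) for a mid-grey patch
raw = np.array([400.0, 800.0, 1600.0])

def expose(values, stops, white=4095.0):
    """Apply an exposure shift of `stops` EV to linear data, clipping at the white point."""
    return np.clip(values * 2.0 ** stops, 0.0, white)

over = expose(raw, +4/3)   # +1 1/3 stops, like a bracketed 'over' frame
under = expose(raw, -4/3)  # -1 1/3 stops, like a bracketed 'under' frame

# As long as nothing clips, the shift is a pure, reversible rescaling:
assert np.allclose(expose(under, +4/3), raw)
```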

My own experimentation in this area suggests that this is actually the case, and in evidence I offer the two examples below. The first is composited from three shots bracketed in camera. The second is created by taking the original normal exposure and then creating two versions from it: one under-exposed, the other over-exposed, each by the same amount as the in-camera brackets, in this case 1⅓ stops.

This means it becomes possible to shoot rapidly moving subjects like racing cars or aircraft in HDR, which would normally be impossible with conventional bracketed exposures, simply by manipulating the math. I have seen some discussion of this issue on other sites but, to me, it seems they are not grasping what's happening to the underlying math when manipulating the RAW file.

I would be most grateful for comments, gripes and whinges, particularly those that indicate I am barking up the wrong tree.

=:~)

3 exposures in camera
3 exposures in camera...

3 exposures in post
3 exposures in post...

Reply
Aug 23, 2013 16:59:41   #
Wall-E Loc: Phoenix, AZ
 
DoctorChas wrote:
Folks

It recently occurred to me that it should be perfectly possible to shoot HDR photos without actually shooting bracketed shots. Essentially, the idea is to create two or more additional shots based off the original RAW file with the requisite over or under-exposure.

My own particular HDR workflow uses both Aperture and HDR Efex Pro 2. When you apply any sort of adjustment to a RAW image in Aperture (and I suspect Lightroom works the same way), Aperture "overlays" the changes on the original image in real time. With a RAW image, an exposure change should simply be rescaling the raw values, so, functionally and mathematically, there should be virtually no difference between a shot deliberately under- or over-exposed in camera and one where you adjust the exposure value in post.

My own experimentation in this area suggests that this is actually the case, and in evidence I offer the two examples below. The first is composited from three shots bracketed in camera. The second is created by taking the original normal exposure and then creating two versions from it: one under-exposed, the other over-exposed, each by the same amount as the in-camera brackets, in this case 1⅓ stops.

This means it becomes possible to shoot rapidly moving subjects like racing cars or aircraft in HDR, which would normally be impossible with conventional bracketed exposures, simply by manipulating the math. I have seen some discussion of this issue on other sites but, to me, it seems they are not grasping what's happening to the underlying math when manipulating the RAW file.

I would be most grateful for comments, gripes and whinges, particularly those that indicate I am barking up the wrong tree.

=:~)


Your camera (Canon EOS 1000D) has a dynamic range of 13.7 stops. The human eye has a range of 30 stops. That means that, at any exposure, you're collecting less than half of what the eye can see. To do 'real' HDR, you need to bracket exposures to collect more information for processing into a single image. What you are doing is just shifting the middle point of the image information, not adding any additional info.
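Wall-E's point about not adding information can be shown with toy numbers (illustrative values, not measured from any camera): once two scene luminances have both clipped at the sensor's white point, no exposure shift in post can separate them again, whereas a genuinely shorter in-camera exposure keeps them distinct.

```python
import numpy as np

WHITE = 4095.0  # 12-bit white point

# Two scene luminances, one twice as bright as the other; both clip at this exposure
scene = np.array([6000.0, 12000.0])
captured = np.clip(scene, 0.0, WHITE)  # both record as 4095.0, so the detail is gone

# Pulling the exposure down in post moves both pixels identically: nothing is recovered
pulled_down = np.clip(captured * 2.0 ** -2, 0.0, WHITE)
assert pulled_down[0] == pulled_down[1]

# A real -2 EV bracket would have captured the two pixels as distinct values
real_under = np.clip(scene * 2.0 ** -2, 0.0, WHITE)
assert real_under[0] != real_under[1]
```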

Reply
Aug 23, 2013 17:12:20   #
SharpShooter Loc: NorCal
 
Doc, you don't have an HDR shot there. You may be treating it as HDR, but it's not.
Without getting crazy here, you usually need a strong light source, like the sun in your pic, to be HDR.
How are you going to recover pixels from a scene that is all white or all black or both, unless you expose for them?
Doc, shoot straight at the sun and try that with a single pic.
Let's not confuse or mislead here about what HDR is. Not the process, the scene. SS

Reply
 
 
Aug 23, 2013 17:19:49   #
lightchime Loc: Somewhere Over The Rainbow
 
Wall-E wrote:
Your camera (Canon EOS 1000D) has a dynamic range of 13.7 stops. The human eye has a range of 30 stops. That means that, at any exposure, you're collecting less than half of what the eye can see. To do 'real' HDR, you need to bracket exposures to collect more information for processing into a single image. What you are doing is just shifting the middle point of the image information, not adding any additional info.



Yes! Yes! Yes!

There is a myth around this place that one image can make what we call an HDR.

No way can you increase the information by moving a midtone or dodging and burning. No new detail, just shifting tonality and redistributing what was already there. If you can do it with one file, an HDR was never necessary.

Happy that I will be away for a couple of days so I won't see the arguments about how wrong I am.

Reply
Aug 23, 2013 17:22:01   #
Wall-E Loc: Phoenix, AZ
 
SharpShooter wrote:
Let's not confuse or mislead here about what HDR is. Not the process, the scene. SS


Unh.....No.

HDR (common definition) *IS* the process.

http://en.wikipedia.org/wiki/High-dynamic-range_imaging

Whether there's that range in the image he's trying to capture or not has nothing to do with that. It DOES have an impact on the results. A subject with a low range of EVs is not a good candidate for HDR processing.

Reply
Aug 23, 2013 17:32:06   #
DoctorChas Loc: County Durham, UK
 
Wall-E wrote:
Your camera (Canon EOS 1000D) has a dynamic range of 13.7 stops. The human eye has a range of 30 stops. That means that, at any exposure, you're collecting less than half of what the eye can see. To do 'real' HDR, you need to bracket exposures to collect more information for processing into a single image. What you are doing is just shifting the middle point of the image information, not adding any additional info.


The eye's instantaneous dynamic range (as opposed to its adaptation from dark to light conditions) is actually only about 10-14 stops. Further, my 1000D records at 12-bit resolution, so it's hard to see how to get nigh-on 14 stops of dynamic range from that few bits. Even high-end digital video cameras like Sony's F65 CineAlta, at 16-bit 4K resolution, can only manage 7 stops over and under key level.
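The bit-depth arithmetic is easy to check (ignoring sensor noise, which in practice sets the real floor): a linear 12-bit encoding spans at most log2(4096) = 12 stops between the smallest nonzero code and the white point, so quoted ranges beyond that depend on noise handling rather than extra codes.

```python
import math

bit_depth = 12
codes = 2 ** bit_depth        # 4096 linear levels in a 12-bit RAW file
max_stops = math.log2(codes)  # stops between code 1 and the white point
print(max_stops)              # 12.0, an upper bound for a purely linear encoding
```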

I go back to my original question: how is underexposing or overexposing 1⅓ stops in camera any different from dialling those values into the RAW image in post?

I would also argue that the two example images I posted have negligible differences visually—the histograms are virtually identical.

Thanks for the comments, though :)

=:~)

Reply
Aug 23, 2013 17:55:43   #
UtahBob Loc: Southern NJ
 
DoctorChas wrote:
The eye's instantaneous dynamic range (as opposed to its adaptation from dark to light conditions) is actually only about 10-14 stops. Further, my 1000D records at 12-bit resolution, so it's hard to see how to get nigh-on 14 stops of dynamic range from that few bits. Even high-end digital video cameras like Sony's F65 CineAlta, at 16-bit 4K resolution, can only manage 7 stops over and under key level.

I go back to my original question: how is underexposing or overexposing 1⅓ stops in camera any different from dialling those values into the RAW image in post?

I would also argue that the two example images I posted have negligible differences visually—the histograms are virtually identical.

Thanks for the comments, though :)

=:~)

Not to steal your thunder, but I believe Photomatix has an option to load a RAW file and then tone map it. You can also batch process single files. I don't know whether the HDR program you use has that capability.

Tone mapping a single image (or brackets created from it) and adjusting the image in LR or Aperture are essentially the same thing, since you are just working with a RAW file that can move about 2 stops under or over the taken exposure. You can go further, but then you get noise, etc. The difference with doing it in camera is that if you bracket a stop and a half over and under, that really gives you an additional three and a half stops over and under to work with during the tone-mapping process.

Pulling brackets from a single image doesn't get you more range; it just allows the tone-mapping algorithms to be applied to the RAW file. That isn't available in LR, for instance, when you are just making image adjustments, so you won't get that HDR look. But I get the argument about moving subjects, where ghost removal becomes problematic, and this is a way around it.

For the example you used it works, but if you had an image with a large dynamic range it probably wouldn't come out as well as true brackets. You could move the histogram for the 3 in-camera brackets around much further and still keep quality that you won't get working with a single file.
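The distinction UtahBob draws between an exposure adjustment and tone mapping can be sketched like this (using Reinhard's simple global operator x/(1+x) as a stand-in for what an HDR program does; real programs use more elaborate local operators): an exposure slider is a linear multiply that leaves the extremes extreme, while a tone map non-linearly compresses the whole range into displayable values.

```python
import numpy as np

# Linear scene luminances spanning a wide range (roughly 13 stops)
hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])

exposure_shift = hdr * 2.0 ** -2  # linear multiply: ratios preserved, top values still off-scale
reinhard = hdr / (1.0 + hdr)      # global tone map: everything lands in [0, 1)

assert exposure_shift.max() > 1.0  # still won't fit in display range
assert reinhard.max() < 1.0        # compressed, at the cost of non-linear tonality
```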

Reply
 
 
Aug 23, 2013 18:15:56   #
Wall-E Loc: Phoenix, AZ
 
DoctorChas wrote:
Further, my 1000D records at 12-bit resolution so it's hard to see how to get nigh-on 14 stops dynamic range with that few bits available.


My error. Your 1000D has a dynamic range of 10.9 EV.
I was comparing it to another camera in SnapSort and got the columns swapped in my head.
Senior Moment. Unfortunately they're coming closer together, lately.<G>

As another said, what you're doing is tone mapping, not HDR.
Unless you're adding information at the top and bottom of the EV range, it's tone mapping.

Reply
Aug 23, 2013 18:40:57   #
DoctorChas Loc: County Durham, UK
 
SharpShooter wrote:
Doc, you don't have an HDR shot there. You may be treating it as HDR, but it's not.
Without getting crazy here, you usually need a strong light source, like the sun in your pic, to be HDR.
How are you going to recover pixels from a scene that is all white or all black or both, unless you expose for them?
Doc, shoot straight at the sun and try that with a single pic.
Let's not confuse or mislead here about what HDR is. Not the process, the scene. SS


Forgive me, but that makes absolutely no sense. Why should I need a strong light source to shoot HDR? I've taken 3-shot HDRs inside a cave!

Let's look at this logically: provided I've got my initial exposure right, my image histogram should generally lie within the nominal white and black points. Provided I've not totally blown the white level (i.e., I still have data within white point recovery) I can dial down the exposure and thus have data in the highlights. The same applies to data below the black point: dial up the exposure and there's my data in the shadow areas.

I have to repeat the question: how is under- or over-exposing in camera when shooting RAW functionally any different from under- or over-exposing in post? You're going to blow the highlights in the over-exposed in-camera shot and bottom out the black point in the under-exposed shot anyway. The HDR program will take the best elements from all three (or more) images: highlights from the under-exposed shot, shadow detail from the over-exposed version and mid-tones from the normal shot.

OK, what essential point have I missed?

=:~)

Reply
Aug 23, 2013 18:41:55   #
LoneRangeFinder Loc: Left field
 
DoctorChas wrote:
The eye's instantaneous dynamic range (as opposed to its adaptation from dark to light conditions) is actually only about 10-14 stops. Further, my 1000D records at 12-bit resolution, so it's hard to see how to get nigh-on 14 stops of dynamic range from that few bits. Even high-end digital video cameras like Sony's F65 CineAlta, at 16-bit 4K resolution, can only manage 7 stops over and under key level.

I go back to my original question: how is underexposing or overexposing 1⅓ stops in camera any different from dialling those values into the RAW image in post?

I would also argue that the two example images I posted have negligible differences visually—the histograms are virtually identical.

Thanks for the comments, though :)

=:~)


Because the single RAW image cannot contain the same information as a series of bracketed shots; it bumps against the limited dynamic range of digital. If you don't have the highlight detail in the single shot, tone mapping will leave you an adjusted tone of that blown highlight without the detail. You are just exchanging a blown highlight for a "grayer" version of the same...

This is not to say you can't improve over the original image-- but it will not contain more range than 3 (or more) bracketed shots.

Reply
Aug 23, 2013 18:43:13   #
Wall-E Loc: Phoenix, AZ
 
LoneRangeFinder wrote:
Because the single RAW image cannot contain the same information as a series of bracketed shots; it bumps against the limited dynamic range of digital. If you don't have the highlight detail in the single shot, tone mapping will leave you an adjusted tone of that blown highlight without the detail. You are just exchanging a blown highlight for a "grayer" version of the same...

This is not to say you can't improve over the original image-- but it will not contain more range than 3 (or more) bracketed shots.


:thumbup: :thumbup: :thumbup:

Reply
 
 
Aug 23, 2013 18:57:42   #
Rongnongno Loc: FL
 
DoctorChas wrote:
Folks

(cut)

I would be most grateful for comments, gripes and whinges particularly those that indicate I am barking up the wrong tree.

First, HDR is High Dynamic Range.
RAW captures up to 6 steps, against human vision that is effectively limited to about 24 steps at best, and 14 or even lower for many people.
JPG captures only 2 steps.
When shooting RAW you have a potential of 6 steps (but usually reduced to 4). You can create what is called a 'pseudo HDR image' by playing with levels and curves on a single RAW file.
A JPG will never be able to do that, and requires at least 3 images if not more.
The key here is to understand that to get a 'true' HDR you are not dealing with color or color shade but with luminosity. Color depth is not an issue here, not yet anyway.
When shooting for HDR using multiple images, one has to be aware of the file format and the camera's capabilities.
When processing these images one MUST set the image bit depth to 16. Most software defaults to 8. This is true especially for JPG, which is limited to 8 bits by default.
Why? Simply because when you manipulate and blend the images you take advantage of the software's higher precision. When you save the image back to JPG it is reduced to 8 bits, but with NEW data*...
I am sure I am not explaining this well enough; if someone can do it better, please do so.

* This data is created by the higher bit depth and, in this case, the blending. It somehow corrects some of the limitations of the JPG bit-depth format. This is true for ALL JPG manipulations, by the way.
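The insistence on a 16-bit working space can be illustrated with a toy quantisation sketch (real HDR blenders typically work in 16-bit or floating point throughout): a shallow gradient that survives intact at 16 bits collapses into visible bands at 8 bits.

```python
import numpy as np

# A smooth, shallow gradient in a 0.0-1.0 working space
gradient = np.linspace(0.0, 0.1, 1000)

def quantize(x, bits):
    """Round values to the nearest representable level at the given bit depth."""
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

# At 8 bits this ramp collapses to a few dozen distinct values (banding);
# at 16 bits every one of the 1000 samples keeps its own level.
print(len(np.unique(quantize(gradient, 8))))    # 27
print(len(np.unique(quantize(gradient, 16))))   # 1000
```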

Reply
Aug 23, 2013 19:07:15   #
DoctorChas Loc: County Durham, UK
 
UtahBob wrote:
Not to steal your thunder, but I believe Photomatix has an option to load a RAW file and then tone map it. You can also batch process single files. I don't know whether the HDR program you use has that capability.

Tone mapping a single image (or brackets created from it) and adjusting the image in LR or Aperture are essentially the same thing, since you are just working with a RAW file that can move about 2 stops under or over the taken exposure. You can go further, but then you get noise, etc. The difference with doing it in camera is that if you bracket a stop and a half over and under, that really gives you an additional three and a half stops over and under to work with during the tone-mapping process.

Pulling brackets from a single image doesn't get you more range; it just allows the tone-mapping algorithms to be applied to the RAW file. That isn't available in LR, for instance, when you are just making image adjustments, so you won't get that HDR look. But I get the argument about moving subjects, where ghost removal becomes problematic, and this is a way around it.

For the example you used it works, but if you had an image with a large dynamic range it probably wouldn't come out as well as true brackets. You could move the histogram for the 3 in-camera brackets around much further and still keep quality that you won't get working with a single file.


Aha! Now that makes sense. HDR Efex Pro applies tone mapping after the initial HDR blending. I'd like to see what effective difference a camera with a higher bit depth can make.

Interestingly, I had a couple of prints done last week: one true HDR and one where I faked the over and under shots from the original. The difference to my eye was negligible, although the scene was well within the dynamic range of the camera. Shots at night would be problematic, particularly as my 1000D is noisy as hell in low light, which is arguably the one subject area where you could do with more dynamic range.

Food for thought there, Bob—thank you :D

=:~)

Reply
Aug 23, 2013 19:10:58   #
UtahBob Loc: Southern NJ
 
DoctorChas wrote:


I have to repeat the question: how is under- or over-exposing in camera when shooting RAW functionally any different from under- or over-exposing in post? You're going to blow the highlights in the over-exposed in-camera shot and bottom out the black point in the under-exposed shot anyway. The HDR program will take the best elements from all three (or more) images: highlights from the under-exposed shot, shadow detail from the over-exposed version and mid-tones from the normal shot.

OK, what essential point have I missed?

=:~)


The images are not functionally equivalent. For the two you posted, if you look at the shadow detail under the pallet the ropes sit on, you can see that the fake image is not as clean as the 3-shot real image. Not a lot of difference, but it does exist. Now, what you are trying to do works if you take the fake image, process it to where you like it, and then take the real image (3-shot data) and try to mimic the fake one. You'll be able to do that, because the 3-shot data encompasses the dynamic range of the single-shot fake image.

But if you tone map the real 3-shot image to the point where you've brought out the shadow and highlight detail to their maximums before creating issues such as noise, there is no way you can duplicate that with a fake one-image shot.

If you try another image as a test, one with a greater dynamic range, you can prove this to yourself. The one you used as an example just doesn't have enough range to prove the point. Again, I understand the point and the desirability of this method in certain circumstances, but they are not functionally equivalent.

Reply
Aug 23, 2013 19:18:43   #
DoctorChas Loc: County Durham, UK
 
Rongnongno wrote:
First, HDR is High Dynamic Range.
RAW captures up to 6 steps, against human vision that is effectively limited to about 24 steps at best, and 14 or even lower for many people.
JPG captures only 2 steps.
When shooting RAW you have a potential of 6 steps (but usually reduced to 4). You can create what is called a 'pseudo HDR image' by playing with levels and curves on a single RAW file.
A JPG will never be able to do that, and requires at least 3 images if not more.
The key here is to understand that to get a 'true' HDR you are not dealing with color or color shade but with luminosity. Color depth is not an issue here, not yet anyway.
When shooting for HDR using multiple images, one has to be aware of the file format and the camera's capabilities.
When processing these images one MUST set the image bit depth to 16. Most software defaults to 8. This is true especially for JPG, which is limited to 8 bits by default.
Why? Simply because when you manipulate and blend the images you take advantage of the software's higher precision. When you save the image back to JPG it is reduced to 8 bits, but with NEW data*...
I am sure I am not explaining this well enough; if someone can do it better, please do so.

* This data is created by the higher bit depth and, in this case, the blending. It somehow corrects some of the limitations of the JPG bit-depth format. This is true for ALL JPG manipulations, by the way.


I understand exactly what HDR is. My point was to try to understand the mathematical difference between the data in a RAW file under- or over-exposed by, say, 1 stop in post and a RAW file actually under- or over-exposed by the same amount in camera. To my mind, provided the data in either case lies within the available dynamic range of the camera, there should be no discernible difference.

Surely that's why we shoot RAW; so we can play Old Harry with the math in post?

=:~)

Reply
Copyright 2011-2024 Ugly Hedgehog, Inc.