Ugly Hedgehog - Photography Forum
Home Active Topics Newest Pictures Search Login Register
Main Photography Discussion
Shooting HDR — without bracketing
Page <<first <prev 4 of 5 next>
Aug 24, 2013 11:22:59   #
saichiez Loc: Beautiful Central Oregon
 
The difference between shooting your own HDR brackets and the HDR created inside the newer cameras that process the image to HDR internally may show up in how well the in-camera HDR handles ghosting of slightly moving subjects in the final output.

My friend has done HDR using three images and HDR software for some time.

He then (when the internal HDR emerged) purchased a CanonT4i with HDR.

He has determined that the internal process is weak in the area of "registration" of the three bracketed images before the internal blend takes place. As a result, he finds "ghosting" in images with areas of slight movement. He showed me an image of four ducks floating on placid water, and the ducks had minor ghosting around their heads.

He's done this before (prior to the in-camera process) and been able to control the registration, where it occurs, by working with the three individual images before the blend.
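The registration step described here can be sketched in a few lines. Below is a minimal, hypothetical example (not what any camera actually runs) that estimates the translation between two frames by phase correlation, the kind of alignment HDR software does before blending; `estimate_shift` and the synthetic frames are illustrative names, and numpy is the only dependency:

```python
import numpy as np

def estimate_shift(dst, src):
    # Phase correlation: normalise the cross-power spectrum so its
    # inverse FFT is a sharp peak at the (row, col) roll that maps
    # `src` onto `dst`.
    cross = np.fft.fft2(dst) * np.conj(np.fft.fft2(src))
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

# Synthetic check: shift a random frame by (3, -2) and recover it.
rng = np.random.default_rng(0)
frame = rng.random((64, 64))
moved = np.roll(frame, shift=(3, -2), axis=(0, 1))
print(estimate_shift(moved, frame))  # -> (3, -2)
```

Once the shift is known, each bracketed frame can be rolled into alignment before the blend, which is exactly the step whose absence shows up as ghosted duck heads.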

Reply
Aug 24, 2013 12:10:33   #
rmalarz Loc: Tempe, Arizona
 
DoctorChas wrote:
Folks


I would be most grateful for comments, gripes and whinges particularly those that indicate I am barking up the wrong tree.

=:~)


First, the image used as an example does not require HDR. The subdued lighting, due to the cloud cover, compresses the exposure range. One can do tone mapping on an image such as this.

Now, to the context of your statement. In bright sunlight, or in situations that present a large exposure range (dark shadows containing detail through to bright sunlit areas that also contain detail), capturing both ends of the exposure spectrum requires some technique to compensate for the photosensitive material's inability to do so.

With black and white film, one exposes for the dark area, adjusting to how much detail one wants to retain and then develops to keep the highlights from becoming opaque, or blown out.

In digital, it would require one exposure to retain the details one wishes to keep in the shadow areas of the image, and another of the exact same scene exposed to keep the high illumination values from blowing out. The first will have nicely exposed shadows with completely blown highlights. The second will contain little, if any, shadow detail, but have rich detail in the highlight areas of the image. HDR software will combine both images to produce detail without blocking shadows or blowing highlights.

Simply adjusting levels in PP to produce three, or however many, images will not add details that aren't there at either end. Again, in your example image, there is detail in the entire image. Try that experiment with a brightly lit scene in sunlight with light values that exceed your sensor's capability of capturing details completely.
--Bob
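The two-exposure combination Bob describes can be illustrated with a toy merge in linear space. This is only a sketch of the idea under simplified assumptions (linear 0..1 values, a known EV gap, a simple tent-shaped weight), not what any HDR package actually implements; `merge_exposures` is an illustrative name:

```python
import numpy as np

def merge_exposures(dark, bright, ev_gap=2.0):
    # Bring both frames (linear, 0..1) to a common radiance scale:
    # 'bright' received ev_gap more stops, so scale 'dark' up to match.
    rad_dark = dark * 2.0 ** ev_gap
    rad_bright = bright
    # Trust each pixel most where it is well exposed (mid-tones),
    # not at all where it is clipped (0 or 1).
    w_dark = 1.0 - np.abs(dark - 0.5) * 2.0
    w_bright = 1.0 - np.abs(bright - 0.5) * 2.0
    return (w_dark * rad_dark + w_bright * rad_bright) / (w_dark + w_bright + 1e-9)

# Scene radiance, in units of the dark frame's exposure.
radiance = np.array([0.05, 0.2, 0.8])
dark = np.clip(radiance, 0.0, 1.0)          # exposed for the highlights
bright = np.clip(radiance * 4.0, 0.0, 1.0)  # +2 stops: good shadows, 0.8 clips
merged = merge_exposures(dark, bright)
print(merged)  # recovers the full radiance range, including the clipped pixel
```

The clipped highlight in the bright frame gets zero weight, so its value comes entirely from the dark frame, which is the whole point of shooting the second exposure.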

Reply
Aug 24, 2013 12:12:25   #
LoneRangeFinder Loc: Left field
 
DoctorChas wrote:
Err sorry but bit depth is directly related to dynamic range. Ignoring the CFA for a moment, a sensor can only record the amount of light falling on it—the luminance. A 12-bit sensor can record 4096 possible values; a 14-bit sensor 16,384 and a 16-bit sensor 65,536.

Put another way, 12 bits give you a 4096:1 contrast ratio and a dynamic range of 12 stops, 14 bit 16,384:1 and a DR of 14 and 16 bits 65,536:1 CR and 16 stops DR. The total dynamic range, however, is limited by noise levels and the quality of the A/D convertor and thus the practical limit is 5-9 stops. Therefore I consider looking at the raw output of the sensor and its bit depth an indicator of the available dynamic range in precisely the same way that digital audio treats it: a 16-bit recording has a dynamic range of 96dB, one at 24-bits goes up to 120dB

Once through the A/D convertor, these higher bit values allow us a greater tonal range without running into posterisation problems.

Incidentally, all the literature I've read shows that, although the eye has a total range of about 30 stops, under normal light conditions the effective range is 10-14 stops. This is due to changes in the rhodopsin cycle between dark and light conditions. Watch out, folks—there's some seriously nutty biochemistry involved so make sure to take your Vitamin A :D

=:~)


I'm not going to pretend to know more than I do; however, I have a different understanding of bit depth. A JPEG records at 8 bits and raw at 12 or 16 bits (a sidebar here: an important reason to shoot raw if you intend to do more with the image, but I digress...).
As I understand it, bit depth affects the information recorded at the individual-pixel level. So the benefit would be increased tonal resolution, but no increase in dynamic range. An 8-bit JPEG, for example, tends to show banding (posterization) in the sky, made more apparent when adjusting contrast levels.
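The banding being described here is easy to demonstrate numerically. A hypothetical sketch (assuming an idealised linear encoding and no noise): quantise a deep-shadow ramp at two bit depths, push it three stops in post, and count the distinct tones that survive; `distinct_levels_after_push` is an illustrative name:

```python
import numpy as np

def distinct_levels_after_push(bits, push_stops=3):
    # Quantize a smooth deep-shadow ramp to 'bits' of precision, then
    # brighten it by 'push_stops' stops and count how many distinct
    # output values survive; fewer survivors means visible banding.
    ramp = np.linspace(0.0, 2.0 ** -push_stops, 10_000)
    levels = 2 ** bits - 1
    quantized = np.round(ramp * levels) / levels   # storage precision
    pushed = quantized * (2.0 ** push_stops)       # +3 stops in post
    return len(np.unique(pushed))

print(distinct_levels_after_push(8))   # 8-bit: only a few dozen shadow tones
print(distinct_levels_after_push(14))  # 14-bit raw: thousands
```

Both files cover the same dynamic range here; the difference is how finely each stop is subdivided, which is exactly what shows up as sky banding after a contrast push.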

Reply
 
 
Aug 24, 2013 12:28:29   #
Otis
 
Isn't the point art, whether it is "true" HDR or achieved in some other way? The end result is what matters: did the artist convey what they wanted, and did it work? This is, of course, only true if you are an artist rather than a techie and gear hog. Sorry, but the result is what matters, not the process.
DoctorChas wrote:
I understand exactly what HDR is. My point was to try to understand the mathematical difference between the data in a RAW file under- or overexposed by, say, 1 stop in post and a RAW file actually over- or underexposed by that same value. To my mind, provided that the data in either case lies within the available dynamic range of the camera, there should be no discernible difference.

Surely that's why we shoot RAW; so we can play Old Harry with the math in post?

=:~)

Reply
Aug 24, 2013 12:30:05   #
13oct1931 Loc: Lebanon, Indiana
 
Just to muddy-up the waters:
1. You experts are way over my head, but thankfully we have you.
2. I am a Picasa fan. In Picasa, there is a control button which can produce a pseudo-HDR effect from a single shot. I wonder how it's done? Probably a trade secret. I use it sometimes, but it can be overdone. When used VERY carefully, it produces a nice effect.
ALYN

Reply
Aug 24, 2013 12:31:24   #
LoneRangeFinder Loc: Left field
 
rmalarz wrote:
First, the image used as an example does not require HDR. The subdued lighting, due to the cloud cover, compresses the exposure range. One can do tone mapping on an image such as this.

Now, to the context of your statement. In bright sunlight, or in situations that present a large exposure range (dark shadows containing detail through to bright sunlit areas that also contain detail), capturing both ends of the exposure spectrum requires some technique to compensate for the photosensitive material's inability to do so.

With black and white film, one exposes for the dark area, adjusting to how much detail one wants to retain and then develops to keep the highlights from becoming opaque, or blown out.

In digital, it would require one exposure to retain the details one wishes to keep in the shadow areas of the image, and another of the exact same scene exposed to keep the high illumination values from blowing out. The first will have nicely exposed shadows with completely blown highlights. The second will contain little, if any, shadow detail, but have rich detail in the highlight areas of the image. HDR software will combine both images to produce detail without blocking shadows or blowing highlights.

Simply adjusting levels in PP to produce three, or however many, images will not add details that aren't there at either end. Again, in your example image, there is detail in the entire image. Try that experiment with a brightly lit scene in sunlight with light values that exceed your sensor's capability of capturing details completely.
--Bob


Nailed it, in more detailed fashion than I did.

:thumbup:

Reply
Aug 24, 2013 13:00:43   #
dave sproul Loc: Tucson AZ
 
rmalarz wrote:
First, the image used as an example does not require HDR. The subdued lighting, due to the cloud cover, compresses the exposure range. One can do tone mapping on an image such as this.

Now, to the context of your statement. In bright sunlight, or in situations that present a large exposure range (dark shadows containing detail through to bright sunlit areas that also contain detail), capturing both ends of the exposure spectrum requires some technique to compensate for the photosensitive material's inability to do so.

With black and white film, one exposes for the dark area, adjusting to how much detail one wants to retain and then develops to keep the highlights from becoming opaque, or blown out.

In digital, it would require one exposure to retain the details one wishes to keep in the shadow areas of the image, and another of the exact same scene exposed to keep the high illumination values from blowing out. The first will have nicely exposed shadows with completely blown highlights. The second will contain little, if any, shadow detail, but have rich detail in the highlight areas of the image. HDR software will combine both images to produce detail without blocking shadows or blowing highlights.

Simply adjusting levels in PP to produce three, or however many, images will not add details that aren't there at either end. Again, in your example image, there is detail in the entire image. Try that experiment with a brightly lit scene in sunlight with light values that exceed your sensor's capability of capturing details completely.
--Bob

This is how I think HDR works -- thanks :thumbup: :thumbup:

Reply
 
 
Aug 24, 2013 13:12:40   #
bunuweld Loc: Arizona
 
DoctorChas wrote:
Folks

It recently occurred to me that it should be perfectly possible to shoot HDR photos without actually shooting bracketed shots. Essentially, the idea is to create two or more additional shots based off the original RAW file with the requisite over or under-exposure.

My own particular HDR workflow uses both Aperture and HDR Efex Pro 2. When applying any sort of adjustment to a RAW image in Aperture (and I suspect Lightroom works the same way), Aperture "overlays" any changes to the original image in real time. With RAW images, a change to the exposure should simply alter those raw values; thus, functionally and mathematically, there should be virtually no difference between a shot deliberately under- or over-exposed in camera and one where you adjust the exposure value in post.

My own experimentation in this area suggests that this is actually the case and, in evidence, I offer the two examples below. The first is composited from three shots done in camera. The second is created by taking the original normal exposure and then creating two versions, one under-exposed, the other over-exposed, by the same amount as the original bracket, in this case 1⅓ stops.

This means it becomes possible to shoot stuff with rapidly moving objects like racing cars or aircraft in HDR, which would normally be impossible with conventional bracketed exposures, simply by manipulating the math. I have seen some discussion of this on other sites but, to me, it seems they are not grasping what happens to the underlying math when manipulating the RAW file.

I would be most grateful for comments, gripes and whinges particularly those that indicate I am barking up the wrong tree.

=:~)


Your idea makes partial sense, and the pictures are an impressive demonstration, at least for the particular environment of the scene. I will refrain from calling it HDR or any other term that could be considered heretical. Clearly, post-processing a grossly underexposed picture can, by increasing exposure, bring out details that our eye could not see in the original image, and at least to that extent your combination of non-bracketed but post-processed photos ought to add some visual value to the image. Just looking at your original download is a good demonstration. "Eppur si muove" :)

Reply
Aug 24, 2013 13:24:28   #
Mousie M Loc: Coventry, UK
 
Here is my less technical take on this. If you can capture the tones in a scene without blowing the highlights or losing the shadows, then you don't need HDR. You capture it in RAW or whatever and post-process it until you are happy with it.

If you can't capture all the tonal range, then you take, say, three shots, all closely aligned in view and depth of focus, at different shutter speeds. The normal one gets all the mid-tones and loses detail at both ends. The underexposed one captures some more detail in the highlights and loses more at the dark end. The overexposed one captures the shadows and blows the highlights and some of the middle. Then the HDR software takes all three and combines them, using information from each, to create an image which could not be captured by the sensor in a single exposure.

You can argue that there is an area in the normal exposure between capturing the shadows and losing them, where you capture them but with reduced tonal information. The overexposure captures these better, with a higher number of tones. Similarly at the other end. This information is only kept in RAW. If you use the full bit depth available to you, the software sorts out the parts of each exposure to use in the combination, without us having to have a big discussion about maths. But it won't do it without the three exposures, because the sensor cannot capture the full range in one exposure.

Pseudo HDR mimics this by taking as much information as it can drag out of one exposure at both ends, intensifying both, and recombining them. This is a bit like turning up the extreme bass and treble on the stereo and listening to the result. It may be a nice effect, but it is not HDR.

OK, does this make sense?
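The single-file "pseudo HDR" just described can be written down directly. A hypothetical sketch, assuming one linear frame scaled 0..1 and an equal-weight fuse (`pseudo_hdr_from_one` is an illustrative name, not any product's algorithm); note the pixel that clipped in camera gets remapped, never recovered:

```python
import numpy as np

def pseudo_hdr_from_one(raw, ev=1.5):
    # Synthesize under/over-exposed versions of one linear frame, then
    # fuse. The synthetic frames contain no data the original didn't
    # already have: clipped highlights stay clipped.
    under = np.clip(raw * 2.0 ** -ev, 0.0, 1.0)
    over = np.clip(raw * 2.0 ** ev, 0.0, 1.0)
    return (under + raw + over) / 3.0  # naive equal-weight fuse

raw = np.array([0.0, 0.4, 1.0])  # last pixel was clipped in camera
print(pseudo_hdr_from_one(raw))
```

The clipped pixel comes out as a flat, shifted value with no texture in it, which is the "nice effect but not HDR" point in a nutshell.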

Reply
Aug 24, 2013 13:28:47   #
Mousie M Loc: Coventry, UK
 
So, my conclusion is that you have created fine images, but they are not HDR.

If you don't believe me, find a scene which blows the highlights and loses the shadows and try your process. You won't get it to work. Then try a set of exposures, combine them with HDR software, and see the difference.

Reply
Aug 24, 2013 13:48:23   #
Armadillo Loc: Ventura, CA
 
Mousie M wrote:
Here is my less technical take on this. If you can capture the tones in a scene without blowing the highlights or losing the shadows, then you don't need HDR. You capture it in RAW or whatever and post-process it until you are happy with it.

If you can't capture all the tonal range, then you take, say, three shots, all closely aligned in view and depth of focus, at different shutter speeds. The normal one gets all the mid-tones and loses detail at both ends. The underexposed one captures some more detail in the highlights and loses more at the dark end. The overexposed one captures the shadows and blows the highlights and some of the middle. Then the HDR software takes all three and combines them, using information from each, to create an image which could not be captured by the sensor in a single exposure.

You can argue that there is an area in the normal exposure between capturing the shadows and losing them, where you capture them but with reduced tonal information. The overexposure captures these better, with a higher number of tones. Similarly at the other end. This information is only kept in RAW. If you use the full bit depth available to you, the software sorts out the parts of each exposure to use in the combination, without us having to have a big discussion about maths. But it won't do it without the three exposures, because the sensor cannot capture the full range in one exposure.

Pseudo HDR mimics this by taking as much information as it can drag out of one exposure at both ends, intensifying both, and recombining them. This is a bit like turning up the extreme bass and treble on the stereo and listening to the result. It may be a nice effect, but it is not HDR.

OK, does this make sense?


Mousie,

You hit the nail square on the head, and with just the right sized hammer. :thumbup:

Michael G

Reply
 
 
Aug 24, 2013 14:20:59   #
DoctorChas Loc: County Durham, UK
 
Armadillo wrote:
DoctorChas,

To attempt to explain this in a slightly different way I will use an analogy that works for two similar imaging devices, video, and digital single frame cameras.

If you were to go outside on a bright sunny day and view a scene with normal Human eyes you would notice a very nice photographic opportunity. If you were to measure the light throughout this scene you would have a very wide range of light measurements.

If you now take your camera and use its exposure meter to measure the same scene, you may well record a very wide exposure range as the camera passes over the bright sunlit areas and the shadowed areas. Let us propose your camera can capture an exposure range of +1 to −1 EV from the average of the scene at 0 EV.

Now let us shift this measurement over to video: +1 EV = 0.7 V above 0 V (the average), and −1 EV = 0 V (total black shadows). When the digital sensor reaches 0.7 V the captured image blows out to pure white, and when it falls to 0 V the captured image descends to pure black. (This holds true for color and B&W; this is the luminance channel.)

When you set your camera up for bracketed captures, and the exposures are set for +1.75 EV, 0 EV, and −1.75 EV, you have extended the exposure range of the sensor to well beyond 0.7 V and below 0 V (−0.3 V) with the two additional exposures.

What you accomplish is taking the part of the image that, under normal exposure values, would be clipped at 0.7 V and lowering its exposure value so that all the clipped areas fall below 0.7 V. You also take those parts of the scene that fell below 0 V and raise their exposure values above 0 V.

When you transfer your exposures into your computer and apply HDR processing, you merge all three exposures into one well-balanced final image. (The merging is usually performed by software using masks and layers to blend all three exposures.) HDR captures are usually best made by setting the camera to Aperture Priority, so that shutter duration makes the exposure value changes.

Michael G


That makes sense, however, if my original exposure has pixels that go over peak white but below the recovery point, I can either use Recovery to pull those pixels within the white point or reduce the exposure (which of course lowers every other pixel). The same should apply when I go the other way: lower the black point or raise the exposure (and, again, affect every other pixel).

But it still does not answer my original question...

=:~)

Reply
Aug 24, 2013 15:59:45   #
GaryS1964 Loc: Northern California
 
I shoot RAW. I do this regularly when I want to bring out more shadow information in a single image. However, in my experience the end result is not always satisfactory. It depends a lot upon the original image and how much shadow information is available.

Try this experiment. Take a picture of a scene with a wide dynamic range properly exposed for the overall scene. Now take a picture of the same scene exposed for the shadow areas to maximize information in the shadows. Now do some pixel peeping in the shadow areas of each image and see which has more detail with less noise. My guess is you will see more shadow information with less noise in the image exposed for the shadow areas. This information is brought into a true HDR image. The first image can't possibly bring in that amount of clear shadow detail. At least that has been my experience.
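The experiment above has a simple statistical core: photon (shot) noise grows only with the square root of the signal, so a genuinely longer exposure beats pushing in post. A toy simulation with made-up photon counts, assuming pure Poisson noise and ignoring read noise (`shadow_snr` is an illustrative name):

```python
import numpy as np

def shadow_snr(true_photons=20.0, gain_stops=3, n=100_000):
    # Two ways to reach the same shadow brightness: push the normal
    # frame 3 stops in post, or expose 3 stops longer in camera.
    rng = np.random.default_rng(1)
    normal = rng.poisson(true_photons, n).astype(float)
    long_exp = rng.poisson(true_photons * 2 ** gain_stops, n).astype(float)
    pushed = normal * 2 ** gain_stops  # brightening in post scales noise too
    snr = lambda x: x.mean() / x.std()
    return snr(pushed), snr(long_exp)

snr_pushed, snr_exposed = shadow_snr()
print(round(snr_pushed, 1), round(snr_exposed, 1))  # longer exposure wins
```

With a 3-stop gap the longer exposure should come out roughly √8 ≈ 2.8 times cleaner, which matches the pixel-peeping result described above.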

Reply
Aug 24, 2013 16:20:18   #
mikegreenwald Loc: Illinois
 
Michael G: Can you explain why aperture priority is preferable?

Reply
Aug 24, 2013 16:32:00   #
Mousie M Loc: Coventry, UK
 
GaryS1964 wrote:
I shoot RAW. I do this regularly when I want to bring out more shadow information in a single image. However, in my experience the end result is not always satisfactory. It depends a lot upon the original image and how much shadow information is available.

Try this experiment. Take a picture of a scene with a wide dynamic range properly exposed for the overall scene. Now take a picture of the same scene exposed for the shadow areas to maximize information in the shadows. Now do some pixel peeping in the shadow areas of each image and see which has more detail with less noise. My guess is you will see more shadow information with less noise in the image exposed for the shadow areas. This information is brought into a true HDR image. The first image can't possibly bring in that amount of clear shadow detail. At least that has been my experience.


Yes, exactly. And at the highlight end of the spectrum the cause is different, but the end result is the same. In your average exposure there is more information in the RAW data in the highlights than in the shadows (see the various discussions on the subject of "expose to the right"), up to the point where you blow the highlights. The underexposed shot will capture further detail from the highlights and bring it into the HDR composite.
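The "expose to the right" point rests on how a linear sensor hands out raw levels: each stop down from clipping gets half the levels of the stop above. The arithmetic for a hypothetical 14-bit file:

```python
# Rough count of 14-bit raw levels available in each of the top five
# stops, assuming an idealised linear sensor response: the top stop
# alone uses half of all available levels.
full_scale = 2 ** 14  # 16384 levels in a 14-bit file
levels_per_stop = {}
for stop in range(1, 6):
    hi = full_scale // 2 ** (stop - 1)
    lo = full_scale // 2 ** stop
    levels_per_stop[stop] = hi - lo
print(levels_per_stop)  # {1: 8192, 2: 4096, 3: 2048, 4: 1024, 5: 512}
```

That skew is why the underexposed bracket, which slides the highlights down into the well-populated top stops, recovers highlight gradation the average exposure has already thrown away.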

Reply
UglyHedgehog.com - Forum
Copyright 2011-2024 Ugly Hedgehog, Inc.