Ugly Hedgehog - Photography Forum
Posts for: DoctorChas
Aug 24, 2013 14:20:59   #
Armadillo wrote:
DoctorChas,

To explain this in a slightly different way, I will use an analogy that works for two similar imaging devices: video cameras and digital still cameras.

If you were to go outside on a bright sunny day and view a scene with normal human eyes, you would notice a very nice photographic opportunity. If you were to measure the light throughout this scene, you would find a very wide range of light levels.

If you now take your camera and use its exposure meter to measure the same scene, you may well record a very wide exposure range as the camera passes over the brightly sunlit areas and the shadowed areas. Let us propose your camera can capture an exposure range of +1 EV to -1 EV around the average of the scene at 0 EV.

Now let us shift this measurement over to video: +1 EV = 0.7 V above 0 V (the average), and -1 EV = 0 V (total black shadows). When the sensor output reaches 0.7 V the captured image blows out to pure white, and when it falls to 0 V the image descends to pure black. (This holds true for colour and B&W; this is the luminance channel.)

When you set your camera up for bracketed captures at +1.75 EV, 0 EV, and -1.75 EV, the two additional exposures extend the captured range well beyond 0.7 V and below 0 V (to about -0.3 V).

What you accomplish is taking the part of the image that, at normal exposure values, would be clipped at 0.7 V and lowering the exposure so that all the clipped areas fall below 0.7 V. You also take those parts of the scene that fell below 0 V and raise the exposure to bring them above 0 V.
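(A minimal Python sketch of this analogy, treating the sensor voltage as linear in EV purely for illustration; the 0.7 V ceiling and the +/-1.75 EV shifts are the figures above, while the sample scene values are invented.)

import numpy as np

# Scene brightness in EV relative to the metered average (0 EV).
scene_ev = np.array([-2.5, -1.75, -1.0, 0.0, 1.0, 1.75, 2.5])

def sensor_volts(ev, shift=0.0):
    # Map -1 EV -> 0 V and +1 EV -> 0.7 V (linear in EV for illustration);
    # anything outside that window clips to solid black or blown white.
    v = (ev + shift + 1.0) * 0.35
    return np.clip(v, 0.0, 0.7)

for shift in (0.0, +1.75, -1.75):  # the three bracketed exposures
    print(f"shift {shift:+5.2f} EV:", sensor_volts(scene_ev, shift))
# At 0 EV the -2.5 and +2.5 EV parts of the scene clip; the +1.75 EV frame
# lifts the deep shadows into range, and the -1.75 EV frame pulls the
# brightest highlights back under 0.7 V.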

When you transfer your exposures to your computer and apply HDR processing, you merge all three exposures into one well-balanced final image. (The merging is usually performed by software using masks and layers to blend all three exposures.) HDR brackets are usually best made with the camera set to aperture priority; this makes the shutter duration, rather than the aperture, provide the exposure value changes.

Michael G


That makes sense. However, if my original exposure has pixels that go over peak white but remain below the recovery limit, I can either use Recovery to pull those pixels back within the white point or reduce the exposure (which of course lowers every other pixel too). The same should apply when I go the other way: lower the black point or raise the exposure (and, again, affect every other pixel).

But it still does not answer my original question...

=:~)
Aug 24, 2013 09:59:19   #
winterrose wrote:
Sorry Doctor, you can rattle off all the numbers you like but even if you have everyone else in awe of your apparent knowledge, you are wrong. I will say it again....bit depth has nothing to do with the actual dynamic range to which any given sensor can usefully respond. Bit depth refers to the degree of resolution of the recorded data. It looks like I might have to write another one of my (annoying to some) threads to properly explain this concept to people who have it hopelessly screwed up. Cheers, Rob.


I'll have to respectfully disagree with you; my "apparent" knowledge is garnered through careful study. I do notice that you have failed to answer my original question: what is the functional difference between a RAW image recorded underexposed in camera and Aperture taking a normally exposed image and dialling in an identical underexposure value?

=:~)
Aug 24, 2013 06:46:32   #
mel wrote:
Doctor Chas, absolutely AWESOME


Thank you. I'm here all week—don't forget to tip your waitress :D

=:~)
Aug 24, 2013 06:44:48   #
Err, sorry, but bit depth is directly related to dynamic range. Ignoring the CFA for a moment, a sensor can only record the amount of light falling on it: the luminance. A 12-bit sensor can record 4096 possible values; a 14-bit sensor, 16,384; and a 16-bit sensor, 65,536.

Put another way, 12 bits give you a 4096:1 contrast ratio and a dynamic range of 12 stops; 14 bits give 16,384:1 and 14 stops; and 16 bits give 65,536:1 and 16 stops. The usable dynamic range, however, is limited by noise levels and the quality of the A/D convertor, so the practical limit is 5-9 stops. I therefore consider the raw output of the sensor and its bit depth an indicator of the available dynamic range in precisely the same way that digital audio treats it: a 16-bit recording has a theoretical dynamic range of 96 dB, and a 24-bit one goes up to about 144 dB.
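(If it helps, here is the arithmetic as a quick Python sanity check; nothing here is camera-specific, it is just the one-bit-per-stop rule with the audio dB convention alongside.)

import math

def contrast_ratio(bits):
    # Number of discrete levels an N-bit value can represent.
    return 2 ** bits

def stops(bits):
    # Photographic stops: log base 2 of the contrast ratio, i.e. simply N.
    return math.log2(contrast_ratio(bits))

def decibels(bits):
    # Audio convention: 20 * log10 of the ratio, about 6.02 dB per bit.
    return 20 * math.log10(contrast_ratio(bits))

for b in (12, 14, 16, 24):
    print(f"{b}-bit: {contrast_ratio(b):>8}:1  {stops(b):.0f} stops  {decibels(b):.1f} dB")
# 12-bit:     4096:1  12 stops   72.2 dB
# 14-bit:    16384:1  14 stops   84.3 dB
# 16-bit:    65536:1  16 stops   96.3 dB
# 24-bit: 16777216:1  24 stops  144.5 dB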

Once through the A/D convertor, these higher bit values allow us a greater tonal range without running into posterisation problems.

Incidentally, all the literature I've read shows that, although the eye has a total range of about 30 stops, under normal light conditions the effective range is 10-14 stops. This is due to changes in the rhodopsin cycle between dark and light conditions. Watch out, folks—there's some seriously nutty biochemistry involved so make sure to take your Vitamin A :D

=:~)
Aug 24, 2013 05:36:14   #
UtahBob wrote:
Some of those are pretty sweet. One thing you might want to think about is multiple-OS viewing. I first tried on an Android and the images in full rez wanted Flash. Gigapan was that way until recently - I think they switched to HTML.


There's a major update to PanoTour coming which should address that issue.

Thanks for looking :D

=:~)
Aug 23, 2013 19:58:46   #
UtahBob wrote:
What do you stitch with? I like to merge to 32-bit in Photomatix, then stitch in PTGui, then tonemap in Photomatix, then pp in PS, and then go back to PTGui if it's a spherical for VR purposes. I have not tried the single image as a tonemapped pano yet. Might try that soon.


I use AutoPano Giga for stitching and PanoTour Pro to create interactive web versions. Oddly, I've just finished putting together a pano gallery on my camera club's website:

http://edps.org.uk/member-showcase/chas-stoddard/panoramas/index.html

I love shooting panos—tricky devils to do but loads of fun putting them together :D

=:~)
Aug 23, 2013 19:51:04   #
UtahBob wrote:
I like math but not enough to work up a lather any time soon. It does intrigue me how color can be derived from the layout of the sensor. I'm sure if you do the math, someone (probably including me) will go through it, but in my case don't expect any corrections. :D


Curiously, a man after my own heart, Bob :-D

=:~)
Aug 23, 2013 19:49:21   #
Rongnongno wrote:
If you want a deep math formula, do your own research, I hate math.


So do I (much to the utter annoyance of my late father who was a math teacher) but there is something quite fascinating about how it all works.

For me, the serious freak-out is the realisation that the perception of colour is a complete and utter fabrication of our brains and that that perception is deeply rooted in language.

Nuts, I tell you—nuts! :D

=:~)
Aug 23, 2013 19:38:33   #
UtahBob wrote:
Just for kicks, what happens if you take the HDR image that is created and don't tone map it at all? Does the program allow you to save it as a 16-bit file? Or can you save it as 32-bit and then use PS to save it as 16-bit?

Does that 16-bit image then look like the raw file you started out with? A while back I tried something similar to what you are doing, using PS, and I couldn't make the tonemapped image significantly different from the initial raw. I guess I never liked the PS tonemapper and ended up with Photomatix.


Aperture outputs the RAW files to HDR Efex Pro as 16-bit TIFFs, and you can indeed apply no tone mapping if required. Personally I dislike PS; I shoot a lot of HDR panoramas and trying to manage the workflow in PS would be a nightmare. I do like to get to bed occasionally :D

=:~)
Aug 23, 2013 19:32:19   #
UtahBob wrote:
The images are not functionally equivalent. For the two you posted, if you look at the shadow detail under the pallet the ropes sit on you can see that the fake image is not as clean as the 3 shot real image. Not a lot of difference but it does exist. Now what you are trying to do works if you take the fake image and process it to where you like it and then take the real image (3 shot data) and try to mimic the fake image. You'll be able to do that because the 3 shot data incorporates the dynamic range of the single shot fake image.

But if you tonemap the 3 shot data real image to the point where you've brought out the shadow and highlight detail to their maximums before creating issues such as noise, there is no way that you can duplicate that with a fake one image shot.

If you try another image as a test, one that has a greater dynamic range, you can prove this to yourself. The one you used as an example just doesn't have enough range to prove the point. Again, I understand the point and desirability of this method in certain circumstances, but they are not functionally equivalent.


Point well made and taken absolutely. There's a lot of mind-boggling math that goes on in creating digital images (I nearly gave myself a hernia reading the original paper on Adaptive Homogeneity-Directed Interpolation), so I hope you'll forgive my fascination with what goes on under the hood, so to speak.

=:~)
Aug 23, 2013 19:18:43   #
Rongnongno wrote:
First HDR is High Dynamic Range.
RAW creates images of up to 6 steps, against human vision that is effectively limited to about 24 steps for the best eyes and 14 or even lower for many.
JPG captures only 2 steps.
When shooting in RAW you have a potential of 6 steps (though usually reduced to 4). You can create what is called a 'pseudo HDR image' by playing with levels and curves on a single RAW file.
A JPG will never be able to do that, and requires at least three images if not more.
The key here is to understand that to get a 'true' HDR you are not dealing with color or color shades but with luminosity. Color depth is not an issue here, not yet anyway.
When shooting for HDR using multiple images one has to be aware of the true format and camera capabilities.
When processing these images one MUST set the image bit depth to 16. Most software defaults to 8. This is especially true for JPG, which is limited to 8 by default.
Why? Simply because when you manipulate and blend the images you take advantage of the software's higher precision. When you save the image back to JPG it is reduced to 8 bits, but with NEW data*...
I am sure I am not explaining this well enough, if someone can do it better, please do so.

* This data is created by the higher bit depth and, in this case, the blending. It somehow corrects some of the limitations of the JPG bit depth format. This is true for ALL JPG manipulations by the way.
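(A rough Python sketch of that footnote's point; the brighten-then-darken curve is an invented stand-in for a round of edits, but it shows why a 16-bit working space survives manipulation that an 8-bit one does not.)

import numpy as np

grad = np.arange(256, dtype=float)  # a full 8-bit gradient, 0..255

def edit_roundtrip(x, levels):
    # Brighten (gamma 0.5) then darken back (gamma 2.0), quantising to the
    # working bit depth after each step -- a stand-in for repeated edits.
    y = np.round((x / 255.0) ** 0.5 * (levels - 1)) / (levels - 1)
    z = np.round(y ** 2.0 * (levels - 1)) / (levels - 1)
    return np.round(z * 255.0)

print(len(np.unique(edit_roundtrip(grad, 2 ** 8))))   # 8-bit working space
print(len(np.unique(edit_roundtrip(grad, 2 ** 16))))  # 16-bit working space
# The 8-bit working space merges many input levels together (banding),
# while the 16-bit space hands back essentially all 256 distinct levels.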


I understand exactly what HDR is. My point was to try to understand the mathematical difference between the data in a RAW file under- or overexposed by, say, 1 stop in post and a RAW file actually under- or overexposed by that same value in camera. To my mind, provided that the data in either case lies within the available dynamic range of the camera, there should be no discernible difference.
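(To make the question concrete, here is a toy simulation of an idealised, noise-free 12-bit linear sensor in Python; the scene values are invented, and real sensors add noise that this ignores.)

import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 2.0, 10_000)  # linear luminance; >1.0 blows out at 0 EV

def raw_capture(scene, ev, bits=12):
    # Idealised linear sensor: scale by 2^EV, clip at full well, quantise.
    levels = 2 ** bits - 1
    return np.round(np.clip(scene * 2.0 ** ev, 0.0, 1.0) * levels)

in_camera = raw_capture(scene, -1.0)               # underexposed at capture
in_post = np.round(raw_capture(scene, 0.0) / 2.0)  # -1 EV dialled in afterwards

blown = scene > 1.0  # pixels clipped in the normal exposure
print("max difference, unclipped pixels:", np.abs(in_camera - in_post)[~blown].max())
print("max difference, clipped pixels:  ", np.abs(in_camera - in_post)[blown].max())
# Within the sensor's range the two routes agree to within a count; where
# the 0 EV frame clipped, the in-camera bracket holds real data that no
# exposure slider can recreate.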

Surely that's why we shoot RAW: so we can play Old Harry with the math in post?

=:~)
Aug 23, 2013 19:07:15   #
UtahBob wrote:
Not to steal the thunder, but I believe Photomatix has an option to load a raw file and then tone map it. You can also batch process single files. I don't know if the HDR program you use has that capability.

Tonemapping a single image (or brackets derived from it) and changing the image in LR or Aperture is essentially the same, since you are just working with a raw file that can move about 2 stops under or over the taken exposure. You can go further, but then you get noise, etc. The difference with doing it in camera is that if you bracket a stop and a half over and under, that gives you three and a half stops over and under to work with during the tonemapping process.

Pulling brackets from a single image doesn't get you more range; it just allows the tone mapping algorithms to be applied to the raw file - that's not available in LR, for instance, when you are just executing image adjustments - you won't get that HDR look. But I get the argument about moving subjects, where the ghost removal becomes problematic, and this is a way around it.

For the example you used, it works but if you had an image with a large dynamic range, it probably wouldn't come out as well as if you did true brackets. You could probably move the histogram for the 3 brackets in camera around much further and still have quality that you won't get by working with a single file?


Aha! Now that makes sense. HDR Efex Pro applies tone mapping after the initial HDR blending. I'd like to see what effective difference a camera with a higher bit depth can make.

Interestingly, I had a couple of prints done last week: one true HDR and one where I faked the over and under shots from the original. The difference to my eye was negligible, although the scene was well within the dynamic range of the camera. Shots at night would be problematic, particularly as my 1000D is noisy as hell in low light; that's arguably the one subject area where you could do with more dynamic range.

Food for thought there, Bob—thank you :D

=:~)
Aug 23, 2013 18:40:57   #
SharpShooter wrote:
Doc, you don't have an HDR shot there. You may be treating it as HDR, but it's not.
Without getting crazy here, you usually need a strong light source, like the sun in your pic, for a scene to be HDR.
How are you going to recover pixels from a scene that is all white or all black or both, unless you expose for them?
Doc, shoot straight at the sun and try that with a single pic.
Let's not confuse or mislead here about what HDR is. Not the process, the scene. SS


Forgive me but that makes absolutely no sense. Why should I need a strong light source to shoot HDR? I've taken 3-shot HDR inside a cave!

Let's look at this logically: provided I've got my initial exposure right, my image histogram should generally lie within the nominal white and black points. Provided I've not totally blown the white level (i.e., I still have data within white point recovery) I can dial down the exposure and thus have data in the highlights. The same applies to data below the black point: dial up the exposure and there's my data in the shadow areas.

I have to repeat the question: how is under- or overexposing in camera when shooting RAW functionally any different from under- or overexposing in post? You're going to blow the highlights in the overexposed in-camera shot and bottom out the black point in the underexposed shot anyway. The HDR program will take the best elements from all three (or more) images: highlights from the underexposed shot, shadow detail from the overexposed version, and mid-tones from the normal shot.
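(For what it's worth, a toy Python version of that sort of merge; real HDR programs use far more sophisticated weighting and alignment, so treat this as the shape of the idea rather than any product's actual algorithm.)

import numpy as np

def merge_brackets(frames, evs):
    # frames: arrays normalised to 0..1; evs: their exposure offsets in stops.
    # Hat-shaped weights favour mid-tones and ignore near-clipped pixels, so
    # highlights come from the underexposure and shadows from the overexposure.
    acc = np.zeros_like(frames[0], dtype=float)
    total = np.zeros_like(acc)
    for img, ev in zip(frames, evs):
        w = np.clip(1.0 - np.abs(img - 0.5) * 2.0, 1e-4, None)
        acc += w * img / (2.0 ** ev)  # undo the exposure shift: scene-referred
        total += w
    return acc / total  # a scene-referred, high-dynamic-range result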

OK, what essential point have I missed?

=:~)
Aug 23, 2013 17:32:06   #
Wall-E wrote:
Your camera (Canon EOS 1000D) has a dynamic range of 13.7 stops. The human eye has a range of 30 stops. That means that, at any exposure, you're collecting less than half of what the eye can see. To do 'real' HDR, you need to bracket exposures to collect more information for processing into a single image. What you are doing is just shifting the middle point of the image information, not adding any additional info.


The eye's instantaneous dynamic range (as opposed to its adaptation between dark and light conditions) is actually only about 10-14 stops. Further, my 1000D records at 12-bit resolution, so it's hard to see how to get nigh-on 14 stops of dynamic range with that few bits available. Even high-end digital video cameras like Sony's F65 CineAlta, at 16-bit 4K resolution, can only manage 7 stops over and under key level.

I go back to my original question: how is underexposing or overexposing 1⅓ stops in camera any different from dialling those values into the RAW image in post?

I would also argue that the two example images I posted have negligible differences visually—the histograms are virtually identical.

Thanks for the comments, though :)

=:~)
Aug 23, 2013 16:34:11   #
Folks

It recently occurred to me that it should be perfectly possible to shoot HDR photos without actually shooting bracketed shots. Essentially, the idea is to create two or more additional shots based on the original RAW file, with the requisite over- or underexposure applied in post.

My own particular HDR workflow uses both Aperture and HDR Efex Pro 2. When applying any sort of adjustment to a RAW image in Aperture (and I suspect Lightroom works the same way), Aperture "overlays" any changes onto the original image in real time. For RAW images, a change to the exposure should simply be altering those raw values; thus, functionally and mathematically, there should be virtually no difference between a shot deliberately under- or overexposed in camera and one where you adjust the exposure value in post.

My own experimentation in this area suggests that this is actually the case and, in evidence, I offer the two examples below. The first is composited from three shots done in camera. The second is created by taking the original normal exposure and creating two versions from it: one underexposed and the other overexposed by the same bracketing amount, in this case 1⅓ stops.
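(In code terms, the 'fake' brackets amount to nothing more exotic than the Python scaling below, assuming linear RAW data normalised to 0..1; the one thing the math cannot do is invent detail beyond the single frame's own clip points.)

import numpy as np

def pseudo_brackets(raw, ev=4.0 / 3.0):
    # Synthesise an under/normal/over set from one linear RAW frame by pure
    # scaling -- the same operation an exposure slider applies in post.
    under = np.clip(raw * 2.0 ** -ev, 0.0, 1.0)
    over = np.clip(raw * 2.0 ** ev, 0.0, 1.0)
    return under, raw, over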

This means that it becomes possible to shoot subjects with rapidly moving objects, like racing cars or aircraft, in HDR (which would normally be impossible with conventional bracketed exposures) simply by manipulating the math. I have seen some discussion of this on other sites but, to me, it seems they are not grasping what happens to the underlying math when manipulating the RAW file.

I would be most grateful for comments, gripes, and whinges, particularly those that indicate I am barking up the wrong tree.

=:~)

3 exposures in camera


3 exposures in post
