Wallen wrote:
For the last time,
I explained why JPEG can not handle a wide dynamic scene straight out the camera.
Why other steps are needed to cheat the data in, that is out of bounds, and how those step works.
Here's what you originally said: "The bit size and compression limits the dynamic range of a jpeg image. If the scene has a narrow dynamic range, this would not be a problem. If the scene has a wider dynamic range, you need to bracket the scene and merge it at post to capture as much as possible."
I read that and I re-read it, and I came to the conclusion that you are saying the structure of a JPEG (8-bit + compression) is why our cameras have to go through "other steps" "to cheat the data in." In other words, if the JPEG structure were different, those other steps wouldn't be necessary.
You said, "If the scene has a narrow dynamic range, this would not be a problem." The pronoun this in your sentence refers to the JPEG file structure. You follow that with, [but] "If the scene has a wider dynamic range, you need to...." take other steps to cheat the data in. To cheat the data into what? According to you the JPEG file structure. That's incorrect.
It is true that for wide DR scenes, "...other steps are needed to cheat the data in, that is out of bounds,..."
However those other needed steps are not needed to cheat the data into the JPEG file structure. That's where what you're saying is wrong. Those other steps are needed to cheat the data into the standard output target.
You're saying that it is the JPEG file structure that is forcing us to employ other steps to cheat the data in and that's wrong. It's the standard output target that is forcing us to employ other steps to cheat the data in.
---------------------------------------------------------------------
In case anyone else is still reading this and finds it confusing, let's begin at the beginning:
We have in the photo industry a standard output target. It's a photographic print.
We start with a piece of paper. We make it as white as we possibly can. We even coat it with stuff to make it whiter. Then we make black ink. We search for dyes and pigments to make the ink as black as we possibly can. We put the black ink on the white paper and that's the "real" dynamic range of that print material. It's basically four stops.
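For concreteness, "four stops" is just a contrast ratio expressed in doublings. A quick sketch of the arithmetic (the 16:1 figure follows directly from the four stops stated above):

```python
# Print dynamic range as a contrast ratio.
# Each stop is a doubling of luminance, so n stops = 2**n : 1.
stops = 4
contrast_ratio = 2 ** stops
print(contrast_ratio)  # 16 -> roughly a 16:1 paper-white to ink-black ratio
```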
Well, we take photos of scenes that have lots more than four stops in them. So what do we do so that we can make a print? We apply a curve (tone curve) to the data and compress both ends of the scene DR so we can fit it onto the paper. It works, and we find the result convincing. How much of the actual scene, then, can we get onto the final print? About 6.5 stops. Why not more? The curve becomes too severe and the image starts to look unnatural. We figured all this out long ago, before any of us were born.
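Here's a toy sketch of what that kind of tone curve does. This is my own illustration with made-up numbers, not any camera's actual curve: normalize the scene's stops to 0..1, push them through an S-shaped function whose slope flattens at both ends (squeezing shadows and highlights), and scale the result to the output range.

```python
import numpy as np

def tone_curve(scene_stops, scene_range=9.5, output_range=6.5):
    """Map scene luminance (stops above black, 0..scene_range) onto the
    output range. The smoothstep S-curve has near-zero slope at both
    ends, so the extreme shadows and highlights get compressed while
    the midtones map nearly linearly."""
    x = np.clip(scene_stops / scene_range, 0.0, 1.0)  # normalize to 0..1
    s = x * x * (3 - 2 * x)                           # smoothstep S-curve
    return s * output_range

# Midpoint maps to midpoint; the ends are squeezed toward black/white.
for stops in (0.0, 2.0, 4.75, 7.5, 9.5):
    print(f"scene {stops:4.2f} stops -> output {tone_curve(stops):4.2f} stops")
```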
Let's look at a photo. Below is a snap of Pop's Blue Moon, a local bar in my neighborhood. The conditions are blue sky on a sunny afternoon with the subject directly sunlit -- the average photo. My camera (Fuji X-T4) recorded 9.5 stops of usable data. But I chose not to show all that data. Look at the green awning on the building in the background. Up under that awning my camera recorded plenty of usable data. I, however, am letting that data go as black shadows. The photo would look unnatural if I didn't have those dark shadows. How much visible dynamic range from the scene actually shows in the photo? 6.5 stops.
So here's the trick. Every camera and photo software vendor knows this, and all their products are designed to this standard output target (print). When we calibrate our displays, guess what the calibration hardware and software is calibrating them to? So when your camera goes to create a JPEG, what is the camera processor adjusting the image for? The standard output target.
That image of Pop's was saved by my camera as a 14-bit raw file. As I noted, it recorded 9.5 stops of data, not 6.5. What happens if I let a software editor process that file and make all the choices automatically? Let's give the image to the newest version of Affinity Photo; that image is below my version. Look how AP handled the shadows and highlights -- almost identical to what I did. We both worked to the same standard output target. AP knows what that is. And so does my camera -- so do all our cameras. If you leave your camera to do its default best, or if you just click auto as I did in AP, you consistently get a best effort from the engineers who designed your camera and your processing software to deliver a standard-output-target image.
Now here's an important point. What if you really want to display 9 or even more stops of DR in a photo? Can you process to a different output target?
The answer is no. Not if you want to ultimately make a print or see your image on a standard display. What we have to do in that case is apply more sophisticated methods than just the standard tone curve, and very often we have to adjust parts of the image separately from other parts. But we still have to deliver a printable/viewable image that meets the standard output target parameters. That's a critical point.
So let's look at another image. I placed two versions of a photo from our park side by side. This photo is backlit and my camera recorded its full sensor capacity of 10.5 stops of data. On your left is AP's auto version of the photo. On your right is my hand-processed version of the photo. Both are processed to the same standard output target. The black in the AP version is the same as the black in my version. AP doesn't have a blacker black. The whites in my version are the same as the whites in the AP version. The range of the output target is the same. What's different is the distribution of everything in the middle. AP blew highlights in the clouds and I didn't. AP's middle tones are too dark and muddy. AP's shadows are too blocked up. AP did much better with a more normal frontlit sunny image. Backlight requires more effort. To create my version of the photo required me to apply masks and adjust local areas of the image separately -- too sophisticated for the camera software or AP's auto function. They work with images that respond to a default tone curve.
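The idea behind those masked local adjustments can be sketched in a few lines. This is my own illustration, not AP's or the camera's actual pipeline: build a mask that weights the dark regions, apply a stronger shadow lift there, and blend it with the untouched image elsewhere.

```python
import numpy as np

def lift_shadows(img, strength=0.5):
    """img: float array of luminance in 0..1. Lift shadows by applying a
    gamma below 1 (a simple stand-in for a shadow-recovery adjustment)."""
    return img ** (1 - strength)

img = np.linspace(0, 1, 5)           # stand-in for image luminance values
mask = np.clip(1 - 2 * img, 0, 1)    # weight ~1 in shadows, 0 in highlights
# Blend: lifted shadows where the mask is strong, original image elsewhere.
local = mask * lift_shadows(img) + (1 - mask) * img
```

The point of the mask is that the highlights and midtones pass through untouched, so fixing the shadows doesn't wash out the rest of the frame -- exactly the kind of per-region work an auto tone curve can't do.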
But as I worked to apply my manipulations, I still had to work within the parameters of the standard output target if eventually I'm going to print the photo or view it on a calibrated display. In other words, I can't really produce an image that would require printing paper with more than 4 real stops of DR, or try to force more than 6.5 stops of curve-adjusted data onto the printing paper. That's why in this previous post
https://www.uglyhedgehog.com/t-760595-2.html#13629216 Jersey guy correctly noted that my earlier example looked HDRish -- staying within the limits of the standard output target but overstuffing data beyond the point of looking natural.
The standard output target's limits don't change, and except in rare cases we need to work to meet those limits. As I said, we figured this out long ago, and when digital showed up, one of the earliest and most obvious questions that had to be answered was, "how much digital storage do we require to adequately store the standard output target?" The answer is 8 bits, and that's why JPEGs are an 8-bit file format. The storage capacity of a JPEG is sufficient to store a standard-output-target image.
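A back-of-envelope sketch of why 8 bits is enough (the 2.2 gamma value is my assumption, not from the post): because JPEG values are gamma-encoded rather than linear, the nominal range between the darkest nonzero code and the brightest code is far wider than the ~6.5 stops the output target needs.

```python
import math

# Nominal stop range of an 8-bit gamma-encoded file: how many stops
# separate code value 1 from code value 255 under a 2.2 gamma?
gamma = 2.2
darkest = (1 / 255) ** gamma     # linear luminance of code value 1
brightest = 1.0                  # linear luminance of code value 255
stops = math.log2(brightest / darkest)
print(round(stops, 1))           # ~17.6 stops nominal
```

The practically usable range is smaller than that nominal figure -- the lowest few codes are too coarsely spaced to hold smooth tones -- but it comfortably covers a standard-output-target image, which is the point.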