CatMarley wrote:
To be factually correct about invariant ISO involves an understanding of the processing beyond my ability to understand well enough to explain completely or to any software engineer's satisfaction. I only know the highlights that I have gleaned from what I have read, but no one source was able to condense or simplify. If you have a simpler and more correct explanation, please do tell us. I am sure a lot of us would like to know more!
Well Cat (if I may call you that), I’ll give it a try.
While in-depth technical knowledge of the DSLR digitizing process isn’t necessary to take good photos, if you’re going to consider alternate exposure strategies, it helps to understand what they really entail and their consequences. The idea of intentionally underexposing at base ISO and bringing the shot up in post with an “ISO invariant” camera, with no adverse consequences, is one of those, so let me explain.
Upon exposure, the sensor produces an array of analog data which is amplified (one element of ISO control) and then digitized by an analog-to-digital converter (ADC). In modern cameras, the ADC outputs an array of digital values, typically at 12-14 bit resolution, which is then processed/demosaiced by the processor. The dynamic range of the camera is determined by the resolution of the ADC (1 bit = 1 stop = 6.02 dB) and the noise produced by the entire signal chain - the sensor, the amplifier and the ADC. The noise floor limits the DR on the low end, and saturation of the ADC (all bits = 1) limits it on the top end.
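The bit/stop/dB relationship above works out like this (a quick back-of-envelope sketch, using only the ideal 1 bit = 1 stop = 6.02 dB rule cited above; real signal chains fall short of these figures because of noise):

```python
# Theoretical ADC resolution and dynamic range per bit depth,
# assuming the ideal relationship: 1 bit = 1 stop = 6.02 dB.
for bits in (12, 13, 14):
    levels = 2 ** bits       # distinct output codes
    db = bits * 6.02         # theoretical dynamic range in dB
    print(f"{bits}-bit ADC: {levels:>6} levels, {bits} stops, {db:.1f} dB")
```

For a 14-bit ADC this gives 16,384 levels and roughly 84 dB of theoretical dynamic range, before the noise floor of the real signal chain eats into it.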
Let’s say your camera has a 14-bit ADC, which means it can theoretically resolve 16,384 levels (although the theoretical resolution is almost never fully realized due to noise and other anomalies). The output from the sensor is often too low in value to use the entire 14 bits of the ADC, so the analog signal is first amplified, and the gain of that amplifier is controlled by the ISO setting. The digital output of the ADC may also be multiplied by a constant at ISOs > 800, once the maximum amplifier gain has been utilized. In that case, the entire dynamic range of the system is utilized - nearly all 16K levels of the ADC are used.

But what happens if you intentionally underexpose by 5 stops, essentially using no amplifier gain prior to the ADC? Now the ADC is using only 9 bits, or 512 levels, of its range. If you later multiply that output in post to return to the full 14 bits (16K levels) of range, where does the extra data come from? (Remember, we only have 512 levels, or 9 bits, of actual data to work with.) The answer is that it doesn’t exist - multiplying the data simply spreads those 512 real levels across the full range, leaving gaps between them, and the in-between values can only be estimated by interpolation. You cannot create data by multiplying data. The brightness of the shot may be the same, but the underlying array still has a REAL maximum theoretical resolution of 9 bits - all the rest is interpolated data, and the subsequent demosaicing is working from that interpolated data. This is very similar to doing your post-processing in 8-bit rather than 16-bit, and it potentially has similar consequences.
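You can see the effect with a toy simulation (a minimal sketch assuming an ideal 14-bit ADC and a 5-stop, i.e. 32x, underexposure - pure illustration, not a model of any real sensor pipeline):

```python
# A smooth analog gradient digitized two ways: correctly exposed,
# and 5 stops under then "pushed" 32x in post.
FULL_SCALE = 2 ** 14  # 16,384 ADC codes at a 14-bit resolution

# Simulated analog signal: a gradient from 0.0 to 1.0.
signal = [i / 99_999 for i in range(100_000)]

# Correct exposure: the signal fills the whole ADC range.
full = {int(s * (FULL_SCALE - 1)) for s in signal}

# 5 stops under: the signal reaches only 1/32 of full scale when digitized.
under = [int(s / 32 * (FULL_SCALE - 1)) for s in signal]

# "Pushing" in post: multiply by 32. Brightness returns, but no new
# codes appear between the levels that survived quantization.
pushed = {u * 32 for u in under}

print(len(full))    # 16384 distinct levels
print(len(pushed))  # 512 distinct levels (9 bits), spaced 32 apart
```

The pushed file is just as bright, but its histogram is a comb: 512 populated levels with 32-code gaps between them, which is exactly the missing data that has to be interpolated.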
So the question becomes: you’ve just paid big $ for a high-resolution imaging device - why intentionally limit it to 5 bits less resolution and then interpolate the missing data, rather than use the full resolution of your system? If you’re trying to save a badly underexposed shot, or need a stop or more of “headroom” in the highlights, then this is an available tool, but it doesn’t make sense as a regular practice. If it did, why not just buy a cheap camera with an 8- or 9-bit ADC and post-process with an 8-bit package? Photos are often posted showing dramatic “saves” of dramatically underexposed shots, but what is rarely shown is the same shot (in raw, not 8-bit JPEG) correctly exposed, along with the associated histograms. When/if they are, you’ll note that the histograms are different, showing what is essentially a different greyscale or color palette.
End of soapbox