TriX wrote:
I’ve asked the owner of the Photons to Photos site and an expert on the subject (Bill Claff) to comment - I hope he will. Lots of assumptions here…
I'll try to stick to the necessary facts and to present them in a logical order.
Dynamic range is the ratio of a high value to a low value, expressed as a logarithm.
In engineering it's log10 and the typical unit is the decibel (dB).
In photography it's log2, which I usually call an EV (Exposure Value); people also say "stops".
For sensors we use the clipping value as the high value, although, because of the nature of logarithms, it's not critical to get the exact value.
For example there's no meaningful difference between log2 of 16300 and log2 of 16000.
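As a quick illustration of that point (a sketch, using round-number values, not measurements from any particular camera):

```python
import math

def dynamic_range_ev(clip, noise_floor):
    """Dynamic range in EV (stops): log2 of the high value over the low value."""
    return math.log2(clip / noise_floor)

# The exact clipping value barely matters on a log scale:
dr_a = math.log2(16300)  # ~13.99 EV
dr_b = math.log2(16000)  # ~13.97 EV
difference = dr_a - dr_b  # under 0.03 EV, far below anything visible
```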
For the low value we use what is called the noise floor.
At the pixel level this is read noise. Read noise is determined statistically; it is a standard deviation.
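A minimal sketch of what "a standard deviation" means in practice, using made-up dark-frame values (the sample data and the bias level of 512 DN are my assumptions for illustration):

```python
import statistics

# Hypothetical raw samples from a dark (bias) frame, in DNs.
# The read noise is the standard deviation of these samples
# around the bias level.
dark_samples = [512, 514, 511, 513, 512, 510, 515, 512, 513, 511]

read_noise_dn = statistics.pstdev(dark_samples)  # population std dev, in DNs
```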
I call pixel level dynamic range Engineering Dynamic Range (EDR), DxOMark calls it "screen".
When we measure read noise we do it from raw data that is recorded in Digital Numbers (DNs), also known as Analog to Digital Units (ADUs).
Because an Analog to Digital Converter (ADC) changes the analog voltage into an integer, it is subject to something called quantization error.
Mathematically quantization error limits the smallest read noise (and therefore highest dynamic range) that can be measured.
An n-bit ADC is limited to about n.5 EV of dynamic range, e.g. a 14-bit ADC is limited to about 14.5 (not 14) EV (stops).
Once the ADC has sufficient bit depth to resolve read noise additional bits will not result in more dynamic range.
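One way to see where a figure like n.5 comes from (a sketch under my own assumption that quantization prevents reliably measuring read noise much below roughly 0.7 DN, i.e. about 2^-0.5 DN; this is an illustration, not the exact derivation):

```python
import math

BITS = 14
clip_dn = 2 ** BITS           # 16384 DN full scale for a 14-bit ADC
min_read_noise_dn = 2 ** -0.5  # ~0.71 DN: assumed floor imposed by quantization

# Smallest measurable noise floor caps the largest measurable dynamic range:
edr_limit_ev = math.log2(clip_dn / min_read_noise_dn)  # 14.5 EV
```

With this assumption the limit works out to exactly BITS + 0.5 EV, matching the rule of thumb above.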
At PhotonsToPhotos I use the standard Circle of Confusion (COC) to create a virtual pixel that has the area of the COC.
This is used for the noise floor for Photographic Dynamic Range (PDR distinct from EDR).
This measure is resolution independent and I do not resample to some set resolution.
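A rough sketch of the virtual-pixel idea (my own simplification, not Bill Claff's exact formula: I assume a standard 0.030 mm full-frame CoC, a hypothetical pixel pitch and read noise, and uncorrelated noise that adds in quadrature across pixels):

```python
import math

coc_diameter_mm = 0.030   # standard full-frame Circle of Confusion
pixel_pitch_mm = 0.0059   # hypothetical 5.9 micron pixel pitch
clip_dn = 16383.0         # hypothetical per-pixel clipping value
read_noise_dn = 3.0       # hypothetical per-pixel read noise

# Number of pixels whose combined area equals the CoC area:
coc_area = math.pi * (coc_diameter_mm / 2) ** 2
pixels_per_coc = coc_area / pixel_pitch_mm ** 2

# Summing k pixels: signal scales by k, uncorrelated noise by sqrt(k),
# so the virtual pixel gains 0.5*log2(k) EV over a single pixel.
pdr_ev = math.log2(pixels_per_coc * clip_dn /
                   (math.sqrt(pixels_per_coc) * read_noise_dn))
```

Because the virtual pixel is tied to the CoC area rather than to a pixel count, the result does not depend on the sensor's native resolution.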
At DxOMark they simply take their "screen" read noise and normalize for an 8MP image.
They call this "print" as opposed to "screen".
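As I read their published method, the normalization amounts to scaling the per-pixel noise by the square root of the downsampling ratio, which adds 0.5*log2(N/8MP) EV (a sketch with hypothetical numbers, assuming pure downsampling of uncorrelated noise):

```python
import math

sensor_megapixels = 24.0  # hypothetical sensor resolution
screen_dr_ev = 14.0       # hypothetical per-pixel ("screen") dynamic range

# Downsampling N megapixels to 8 MP averages N/8 pixels per output pixel,
# reducing uncorrelated noise by sqrt(N/8):
print_dr_ev = screen_dr_ev + 0.5 * math.log2(sensor_megapixels / 8.0)
```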
Their approach is inferior to the one at PhotonsToPhotos because it implicitly assumes that something called the Photon Transfer Curve (PTC) is a straight line (at least at the low end), and it is not.
This can lead to clearly wrong results.
You can read more about these topics and see data like PTCs at PhotonsToPhotos.