TRBenjeski wrote:
<clip> I understand that the digital camera’s imaging sensor size significantly contributes to the digital image quality, as well as megapixel quantity of the camera. But, I believe that there are additional factors that affect digital image quality, including megabyte size of the digital image file itself.
If you might be able to recommend one or more texts that could answer these and other questions I have regarding digital image quality, how it is achieved, and what factors affect it, I would sincerely appreciate it.
Additionally, I would like to try to “connect the dots” between analog and digital photography, wherever possible. Basically, I would like to identify the technical aspects in digital photography that have a corresponding technical aspect in analog photography.
Please accept my sincere appreciation for your time in reading this message, and for any assistance you can provide. Thank you very much!
Sensor surface area and sensel density together determine the dynamic range (photon-sucking potential) of a camera. The bigger the sensels (sensor elements), the more light each of them can turn into electrons. So both the "chip" size and the "megapixel density" of the sensor affect dynamic range, color depth, and low-light performance.
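If you want to put rough numbers on that, here is a little back-of-the-envelope sketch. The sensor dimensions and the 16 MP count are just illustrative examples, and it assumes square sensels tiling the whole chip, which real sensors only approximate:

```python
import math

def sensel_pitch_um(sensor_width_mm, sensor_height_mm, megapixels):
    """Approximate sensel pitch in microns, assuming square sensels
    that tile the entire active area (real chips lose some area to
    wiring and microlens gaps)."""
    area_um2 = (sensor_width_mm * 1000) * (sensor_height_mm * 1000)
    return math.sqrt(area_um2 / (megapixels * 1_000_000))

# The same 16 MP pixel count on two different chip sizes:
print(sensel_pitch_um(36.0, 24.0, 16))   # "full frame": about 7.3 microns
print(sensel_pitch_um(17.3, 13.0, 16))   # Four Thirds:  about 3.7 microns
# Roughly 4x the light-gathering area per sensel on the bigger chip,
# which is where the signal-to-noise advantage comes from.
```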
Sensors are analog. Their output is optionally amplified, then digitized with an analog-to-digital converter, and then either saved into a raw file, or converted in the camera to a JPEG image. (Actually, the JPEG conversion ALWAYS happens, because a small JPEG preview image is stuffed into every raw file's "wrapper" format.)
Different cameras perform both of these operations differently! So the processing design engineered into the camera can have a profound effect on the appearance of the output. Two sensors of the same size with 16 MP output will produce different results due to different sensor designs and different processing. Two 16 MP sensors of different sizes will have those differences, plus the inherent differences in signal-to-noise ratio caused by the size difference of their sensels.
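Here is a toy sketch of that amplify-then-digitize step. The 4x gain and the 12-bit converter are just example numbers I picked, not any particular camera's design:

```python
import numpy as np

def digitize(sensel_signal, gain=4.0, bits=12):
    """Toy model of the analog chain: amplify the linear sensor signal,
    then quantize it with an A/D converter of the given bit depth.
    sensel_signal is a float array scaled so 1.0 = sensor saturation."""
    max_code = 2 ** bits - 1
    amplified = np.clip(sensel_signal * gain, 0.0, 1.0)   # the amplifier clips at full scale
    return np.round(amplified * max_code).astype(np.uint16)

readings = np.array([0.001, 0.02, 0.10, 0.30])   # four hypothetical sensel readings
print(digitize(readings))                        # -> [  16  328 1638 4095]
```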
At this point I want to make a very important distinction between a sensor element and a pixel. If you always think of pixels as NUMBERS, numbers that have no associated physical properties, you will understand digital photography much better!
A sensor element by itself (except in the Foveon sensor) is not a pixel, because it cannot represent more than one color. The sensor elements on the sensor are covered with red, or green, or blue filters. Their output is stored in an array. Data from several to many adjacent sensor elements is processed into pixel data with very complex algorithms. So a pixel is really a calculation of a point of color and brightness based on SEVERAL points of filtered light.
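To make that concrete, here is a deliberately crude demosaicing sketch. Real raw converters use far more sophisticated, edge-aware algorithms; this only shows that every output pixel is a calculation over several filtered sensels (the RGGB layout is the common Bayer arrangement):

```python
import numpy as np

def crude_demosaic(mosaic):
    """Turn a single-channel RGGB Bayer mosaic into an RGB image by
    averaging, for each pixel, the same-colored sensels in its 3x3
    neighborhood. Border pixels are left black for simplicity."""
    h, w = mosaic.shape
    rows, cols = np.indices((h, w))
    r_mask = (rows % 2 == 0) & (cols % 2 == 0)      # red-filtered sensels
    b_mask = (rows % 2 == 1) & (cols % 2 == 1)      # blue-filtered sensels
    g_mask = ~(r_mask | b_mask)                     # green-filtered sensels

    rgb = np.zeros((h, w, 3))
    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        vals = np.where(mask, mosaic, 0.0)
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                window_vals = vals[y - 1:y + 2, x - 1:x + 2]
                window_mask = mask[y - 1:y + 2, x - 1:x + 2]
                rgb[y, x, ch] = window_vals.sum() / window_mask.sum()
    return rgb

print(crude_demosaic(np.random.rand(6, 8)).shape)   # (6, 8, 3): three values per pixel
```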
If raw data is processed into an image file in post production software on a computer, we have an analogy to color negative film. The raw file is likened to exposed film that has not been developed, but that can be developed in an infinite number of ways, both now, and in the future. It includes everything that the sensor recorded that the A/D converter could turn into numbers. But it is NOT an image.
If raw data is processed to an 8-bit JPEG in the camera, we have an analogy to color slide film. The image quality is highly dependent upon camera menu settings, especially white balance and exposure. Most of the raw data is discarded during this processing. What remains has limited latitude for adjustment later.
The BIT DEPTH of the data in the raw file determines the potential dynamic range of the file. The physical properties of the sensor limit the actual dynamic range recorded. Bit depth is how many binary digits are used to store each value... 8 bits gives 256 levels per color, 12 bits gives 4,096 levels per color, 14 bits gives 16,384 levels per color. Better digital cameras record 12 to 14 stops of dynamic range in raw... but a JPEG contains roughly six stops of data, and you can only fit about 5.5 stops onto most photo papers.
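The arithmetic behind those numbers is just powers of two; these bit depths are the common ones, nothing camera-specific:

```python
# Each added bit doubles the number of code values available per color channel.
for bits in (8, 12, 14):
    print(f"{bits} bits -> {2 ** bits} levels per color channel")
# 8 bits -> 256 levels per color channel
# 12 bits -> 4096 levels per color channel
# 14 bits -> 16384 levels per color channel
```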
Resolution is affected by many factors, but the basic one is *image dimensions in pixels*. A 6000 pixel by 4000 pixel image (24 megapixels) is a relatively high resolution file. It can make a very nice 20" by 13.333" print at 300 INPUT Pixels Per Inch. (That is NOT dpi, which is an output device (printer) resolution measurement, or a scanner input resolution measurement. DOTS have dimensions. PIXELS have ONLY VALUES that can be represented by dots.)
You can scale pixels to any output size, with or without interpolation. Keeping the same pixel count simply changes the output resolution (PPI); interpolation changes the pixel count itself, either "faking" data (enlargement) or discarding data (reduction).
It is best to forget about megabyte sizes of files. Various file compression schemes (especially JPEG) render file size useless as a guide to image quality. Pixel dimensions are really the only accurate indicator. Divide each dimension by your intended output resolution (how many pixels from the original, uninterpolated file you will convert to each inch of printed or displayed output) to get the optimal maximum size. In the example above, 6000 x 4000 at 300 PPI yields 20" x 13.333" on paper. Each of those pixels may be reproduced by a varying number of dots, depending on the output device.
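Here is that whole calculation as a tiny sketch, so you can plug in your own numbers (300 PPI is just a common target for photo prints, not a magic constant):

```python
def native_print_size_inches(width_px, height_px, ppi=300):
    """Largest print at the chosen input resolution, with no interpolation:
    just divide the pixel dimensions by the pixels-per-inch."""
    return width_px / ppi, height_px / ppi

print(native_print_size_inches(6000, 4000, 300))   # (20.0, 13.333...) - the example above
print(native_print_size_inches(6000, 4000, 240))   # (25.0, 16.666...) - same pixels, bigger print
```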
Of course, most of the same things from analog photography affect digital image quality, too. Factors such as lens performance (MTF, coma, astigmatism, distortion, chromatic aberrations...) play a similar, if not even more critical, role. Light is light, although the linear response of a digital camera means you may prefer much less specularity, more fill light in the shadows, and lower overall lighting ratios. The S-curve of the H&D plots we had for film must be simulated in post-production from the raw file, if you prefer a more film-like look.
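If you are curious what "simulating the S-curve" amounts to, here is a minimal sketch, just a rescaled logistic curve applied to linear 0..1 values. Actual raw converters use their own tone curves; this is only the general idea:

```python
import numpy as np

def s_curve(linear, strength=6.0):
    """Map linear 0..1 values through a gentle S-curve (a rescaled
    logistic), compressing shadows and highlights the way film's
    H&D curve did. 'strength' sets the steepness of the midtones."""
    x = np.asarray(linear, dtype=float)
    sig = 1.0 / (1.0 + np.exp(-strength * (x - 0.5)))
    lo = 1.0 / (1.0 + np.exp(strength * 0.5))       # curve value at x = 0
    hi = 1.0 / (1.0 + np.exp(-strength * 0.5))      # curve value at x = 1
    return (sig - lo) / (hi - lo)                   # rescale so 0 -> 0 and 1 -> 1

print(s_curve([0.0, 0.25, 0.5, 0.75, 1.0]))   # midtones steepened, both ends rolled off
```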
That's all for now... food for thought. WELCOME TO THE 'HOG.