srt101fan wrote:
First, my question is NOT about Raw vs JPEG! This has been discussed extensively, so please don't go there!
I'd like to better understand what happens when a camera saves Raw data and JPEG files and how the Raw data is converted to an image viewable on a computer screen. (I use a Nikon D5300 with monochrome in-camera setting; I save both Raw and JPEG; I use Affinity for editing (just started!); having said that, my questions really go beyond specific cameras, in-camera settings, and editing software.)
So,
1) I understand that the JPEG file produced by the camera, saved to the card, and visible on your camera screen is based on your in-camera menu settings (monochrome, contrast, brightness, etc, etc). I've read that there is also a JPEG embedded in the Raw file. Is this the same JPEG saved to the card and seen on the camera screen? If not, how is the Raw-embedded JPEG different and when and how is it used?
2) I understand that the Raw data cannot be viewed directly; it has to be processed to some extent. When you open a Raw file in a program like Affinity or LR you see the image, so it must have been processed to some extent. In Affinity this initial "Raw made viewable" image is apparently not based on the saved JPEG (my JPEGS are set to monochrome but the initial Raw image shown is in color!) What kind of processing has taken place to give you a viewable image in the editing program? Does it depend on the editing program?
3) Is it significant to know what processing has taken place in the creation of the initial viewable image from the Raw file? Or can you get to the same processing end point regardless of the starting point you work from?
I'd appreciate any constructive comments you might have....(I'll even take "don't worry about it, go out and shoot, and just push the PP sliders around")
1. Geometry conversion (for each color plane): mapping the sensor's physical photosite layout (which may be irregular or non-rectangular) onto a rectilinear array of pixels.
2. De-mosaicing: combining the three separate red, green, and blue color planes into full RGB values for each pixel.
For a JPEG:
3. Optional lossy compression.
For any format:
4. Opening a disk file.
5. Writing the header required by the format (TIFF, JPEG, whatever) to the disk file
6. Writing correctly-formatted data to the disk file
7. Closing the file (the OS updates the directory entry with the file size).
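Step 2 can be sketched in a few lines. This is a toy illustration only -- it assumes a plain rectangular RGGB Bayer pattern (i.e., after any geometry conversion has already happened) and uses naive neighbourhood averaging, where real converters use far more sophisticated, edge-aware interpolation:

```python
# Toy de-mosaic: each photosite measured only one color; we fill in the
# other two by averaging nearby same-color samples. RGGB pattern assumed.

def bayer_color(y, x):
    """Which color the sensor measured at (y, x) in an RGGB mosaic."""
    if y % 2 == 0:
        return 'R' if x % 2 == 0 else 'G'
    return 'G' if x % 2 == 0 else 'B'

def demosaic(mosaic):
    """Return a full-RGB image by averaging same-color samples in a 3x3 window."""
    h, w = len(mosaic), len(mosaic[0])
    rgb = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sums = {'R': [0, 0], 'G': [0, 0], 'B': [0, 0]}  # [total, count]
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        c = bayer_color(ny, nx)
                        sums[c][0] += mosaic[ny][nx]
                        sums[c][1] += 1
            rgb[y][x] = [sums[c][0] / sums[c][1] for c in 'RGB']
    return rgb

# A flat grey scene: every photosite reads 100, so every output pixel
# should come out neutral grey.
flat = [[100] * 4 for _ in range(4)]
print(demosaic(flat)[1][1])  # -> [100.0, 100.0, 100.0]
```

On a flat field any averaging scheme looks perfect; the differences between demosaicing algorithms show up at edges and fine texture, which is exactly where the trade-offs mentioned below come in.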
There are many different algorithms for geometry conversion and de-mosaicing, depending on the
specific image sensor and on various trade-offs. Some sensors (e.g., some with a global shutter)
have their pixels spaced irregularly: there are gaps or rows where there are no pixels.
If somebody gave you three "honeycombs" -- corresponding to some shades of red, green and
blue -- and told you to make an image out of them, you can see that there might be a lot of
different ways you could do it.
First, you'd need to know a lot about the layout of the sensor, its "microlenses", and the particular
color filter mask (or "Bayer filter") it uses. But then you'd have to decide which way produces the
best-looking image.
A diagonal line in the image will become a zig-zag line on the sensor honeycomb. This gets
converted to a stair-stepped line (at best) in the rectilinear array. The algorithm must try to
minimize these "digital artifacts". (In nature, diagonal lines really are diagonal! And on film,
a line is represented the same way regardless of its orientation.)
The sensor data includes not only image information, but also noise. The higher the ISO, the
more noise.
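The ISO-noise relationship can be made concrete with a simple model. This sketch assumes just two noise sources -- shot noise (which grows as the square root of the signal) and a fixed read-noise floor; the photon counts and the 3-electron read noise are invented round numbers for illustration:

```python
import math

def snr(photons, read_noise_e=3.0):
    """Signal-to-noise ratio for a pixel that collected `photons` electrons.
    Shot noise is sqrt(signal); read noise is a fixed per-readout floor."""
    return photons / math.sqrt(photons + read_noise_e ** 2)

# Raising ISO means reaching the same output brightness from fewer
# captured photons, so the SNR at a mid-grey pixel drops:
for iso, photons in [(100, 10000), (800, 1250), (6400, 156)]:
    print(f"ISO {iso:>5}: SNR ~ {snr(photons):.1f}")
```

This prints SNRs of roughly 100, 35, and 12 -- each ISO stop costs you signal-to-noise, which is why high-ISO shots look grainy no matter how good the Raw converter is.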
RGB color is based on a color triangle -- but different sensors use different color triangles.
All this has to be converted to the RGB standard used by monitors. And all these color triangles
are different from the triangle used by the human eye.
Every color triangle can only reproduce the hues that lie within that particular triangle
on the color wheel. Hues outside the triangle cannot be reproduced.
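At its core, converting from one color triangle to another is a 3x3 matrix multiply. The matrix values below are invented purely for illustration -- real Raw converters ship matrices measured for each specific sensor model:

```python
# Hypothetical camera-native-RGB -> standard-RGB matrix. Each row sums
# to 1.0 so that neutral greys stay neutral after conversion.
CAM_TO_SRGB = [
    [ 1.6, -0.4, -0.2],
    [-0.3,  1.5, -0.2],
    [ 0.0, -0.5,  1.5],
]

def cam_to_srgb(rgb):
    """Apply the 3x3 color matrix to one camera-RGB pixel."""
    return [sum(CAM_TO_SRGB[i][j] * rgb[j] for j in range(3))
            for i in range(3)]

# A mid-grey pixel stays (very nearly) mid-grey:
print(cam_to_srgb([0.5, 0.5, 0.5]))  # values all ~0.5
```

The off-diagonal negative terms are typical: they "sharpen" the camera's broad spectral responses toward the narrower standard primaries, and they are one reason saturated hues outside the destination triangle get clipped.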
Compression usually takes place "on the fly" -- as the data is being written to the disk file.
JPEG compression is "lossy" -- it discards information from the image (fine detail and
color resolution, among other things).
We are used to lossless compression for text files -- but for photographic images, lossless
compression yields only modest savings (typically around 2:1), nowhere near what lossy JPEG
achieves. Some formats (and many Raw files) do use lossless compression, but simple schemes
like "run length encoding" barely help at all on real photographs.
The whole process is made much more complicated by the fact that there is no industry-standard
Raw file format -- every manufacturer (and sometimes every camera model) does it differently.
Never ask how sausage is made.