akamerica wrote:
From my D850 I import RAW (lossless compressed NEF) pictures to my computer. I post-process some or all in Camera Raw, saving the modified ones as NEF. Then I choose to batch save all pictures as JPG 12 - max quality.
How - if at all - are the pictures that were NOT subject to modifications changed by being converted to JPG? If so, based on what settings?
How - if at all - are the post-processed pictures changed by being converted to JPG?
Inquiring minds want to know...
Art
PS. The processing that takes place in the camera to create a NEF or other adjustments when opening Camera Raw is not of concern.
First, the images you see on which you did no post-processing were still in fact processed to a degree, in that a raw file (no capitalization required) is not an image at all, but rather a stream of data from the imaging chip. It is not an image at all until it is "de-mosaiced."
This makes sense once you understand how a digital sensor works.
Think of a digital imaging chip as a matrix of rows and columns, where an array of 6000 by 4000 would yield 24 million "dots" or pixels (picture elements). A raw file is the stream of data that comes from the sensor. With a few exceptions (Leica's monochrome, Foveon) ALL digital sensors, from that new Hasselblad to your smartphone camera, work as follows: while they too have a matrix of dots (called photo sites), each of those photo sites is covered with a colored filter that is either Red, Green or Blue. This is because the sensor chip itself is natively "color blind": each photo site can only register how many photons have struck it when exposed. The pattern of those colored filters (called the Bayer pattern) is R-G-G-B (for upper left, upper right, lower left, lower right) - and there are twice as many green filters as there are red or blue because the human eye is more sensitive to green. Fuji's X-Trans chip uses a different pattern, but the concept is the same.
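If code helps make that concrete, here is a toy sketch (my own illustration, not anything resembling actual camera firmware) of how the Bayer filter layout maps onto a sensor's rows and columns, including the two-greens-per-cell property:

```python
# Toy model of a Bayer color filter array (R-G-G-B per 2x2 cell).
# Hypothetical illustration only - real sensors vary in cell alignment.

def bayer_filter(row, col):
    """Return which color filter ('R', 'G', or 'B') covers a photo site."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'  # even rows: R G R G ...
    return 'G' if col % 2 == 0 else 'B'      # odd rows:  G B G B ...

# A 4x4 patch of the sensor:
patch = [[bayer_filter(r, c) for c in range(4)] for r in range(4)]
for row in patch:
    print(' '.join(row))

# Twice as many green filters as red or blue:
flat = [f for row in patch for f in row]
print(flat.count('R'), flat.count('G'), flat.count('B'))  # prints: 4 8 4
```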
When an exposure is made, the data captured by the imaging chip is a bunch of values that represent how much light hit each photo site - and those measurements are all based on the light that made it through those filters. As a thought experiment, imagine a subject that was only pure blue - the photo sites with red and green filters above them would not register anything! Lots of black gaps in that file, eh?
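The pure-blue thought experiment can be simulated in a few lines (again a simplified model, not real sensor data - real photo sites have noise and the filters are not perfectly selective):

```python
# Simulate exposing a Bayer-filtered sensor to a scene of pure blue light.
# Simplified model: each photo site reads only its filter's channel.

def bayer_filter(row, col):
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

scene_rgb = (0, 0, 255)  # pure blue
channel = {'R': 0, 'G': 1, 'B': 2}

readings = [[scene_rgb[channel[bayer_filter(r, c)]] for c in range(4)]
            for r in range(4)]
for row in readings:
    print(row)
# Only the blue-filtered sites register anything; three quarters of
# the photo sites read 0 - the "black gaps" in the raw data.
```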
So a raw file first needs to be rejiggered to become a true image file, where each image pixel has a stated RGB value. On the raw file, each spot has only an R OR a B OR a G value, but they are not blended. That process is called de-mosaicing, and the output of the process is the resultant image. Obviously there are a lot of calculations required to do this, but that's what the computer built into the camera (or phone) does. Cameras that only output JPEG do in fact create raw files to start with (there is no other option) but they quickly do the calculations and discard the raw file when the JPEG is created.
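A crude sketch of that calculation (a toy bilinear-style interpolation, assumed for illustration - no vendor's actual algorithm looks this simple) shows the idea: each pixel's two missing channels are estimated from nearby photo sites that did measure them.

```python
# Toy demosaic: fill in each pixel's missing channels by averaging the
# nearest photo sites that measured that channel. Illustration only.

def bayer_filter(row, col):
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

def demosaic(raw):
    """raw: 2D list of single filtered readings -> 2D list of (R, G, B)."""
    h, w = len(raw), len(raw[0])

    def avg_of(r, c, color):
        # Average every reading of `color` in the 3x3 neighborhood.
        vals = [raw[rr][cc]
                for rr in range(max(0, r - 1), min(h, r + 2))
                for cc in range(max(0, c - 1), min(w, c + 2))
                if bayer_filter(rr, cc) == color]
        return sum(vals) // len(vals)

    return [[tuple(avg_of(r, c, ch) for ch in 'RGB')
             for c in range(w)] for r in range(h)]

# A uniform gray scene: every photo site reads 100, whatever its filter.
raw = [[100] * 4 for _ in range(4)]
print(demosaic(raw)[1][1])  # prints: (100, 100, 100)
```

The real programs use far more sophisticated interpolation (edge detection, noise modeling, etc.), which is exactly why different software can render the same raw file differently.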
So, unlike a JPEG or TIF etc. file, the computer processing a raw file must interpret what actual color should appear at a given pixel, based on the readings made from the surrounding photo sites. There is no absolute lookup table, as there is for image file RGB values, to decide what color purple a given spot should be if one adjacent red reading was 500, another from the blue filter was 644 and yet another from a green filter spot was 42, or whatever. For those who say "yeah, but you need a computer to interpret any digital file!" I say that is actually incorrect: where a JPG specifies the RGB value outright (and leaves it to the hardware and drivers to not screw it up), the various demosaicing programs can actually produce different outputs from the same original raw file. Apple includes demosaicing software in OS X, but DxO, Phase One's Capture One, Adobe and others (including the camera manufacturers themselves) all have their own demosaicing software. While different software won't completely change the look of a given image, the subtle tonalities can well be different if you take an image and process it with C1 and compare that to the same raw file run through Adobe Lightroom.
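To see why interpretation can differ, consider this toy comparison (both strategies are made up for illustration - neither is what C1 or Adobe actually does): two legitimate ways to estimate the missing green value at a red photo site, from the same raw data, give two different answers.

```python
# Same raw readings, two different demosaic strategies, two different
# answers for the green value at the red site (top-left). Toy example.
raw = [[200, 80],   # R  G
       [20,  10]]   # G  B

green_nearest = raw[0][1]                     # copy one green neighbor
green_average = (raw[0][1] + raw[1][0]) // 2  # average both green neighbors
print(green_nearest, green_average)  # prints: 80 50
```

Neither answer is "wrong" - the true green value at that site was simply never measured, so every demosaicer is making an educated guess.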
By the bye, since you CANNOT see a raw image, what you are looking at (on the back of the camera when you chimp, or on a computer screen) is a demosaiced image that has been interpreted by a computer, be it in the camera or on your desktop.
I hope that helps you understand how the system works.