Ugly Hedgehog - Photography Forum
A question for pixel peepers
Oct 20, 2018 14:27:33   #
BboH Loc: s of 2/21, Ellicott City, MD
 
burkphoto wrote:
The problem is that you can’t interpolate peaches or paper...


Not trying to. The three objects are physical things - I applied what was done to the one to the other two, different objects though they are. The process/question is the same: does reducing physical size diminish the quality of what remains? I don't see that it does.

What I now do see is how blowing up the remaining pixels to fill the dimensions the original group would have covered creates a softer image, because those pixels have to be spread farther apart.

Oct 20, 2018 14:56:25   #
Notorious T.O.D. Loc: Harrisburg, North Carolina
 
Simply put, if you crop and try to print to the same size the original would print, you are spreading less data over the same area. You can interpolate data to create more data, but it is just an educated guess and won’t improve sharpness.
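To put rough numbers on that point, here is a quick Python sketch; the pixel dimensions are hypothetical and assume a 30 MP frame cropped to about 2/3 of its area, both printed 20 inches wide:

    # Hypothetical pixel dimensions, for illustration only
    full_w, full_h = 6720, 4480      # ~30 MP original frame
    crop_w, crop_h = 5487, 3658      # ~2/3 of the area (~20 MP)
    print_width_in = 20.0            # both printed at the same 20-inch width

    print(full_w / print_width_in)   # ~336 pixels per inch of print
    print(crop_w / print_width_in)   # ~274 pixels per inch: less data over the same area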

Oct 20, 2018 15:01:46   #
burkphoto Loc: High Point, NC
 
nadelewitz wrote:
What does the OP mean by "equivalent"?

This is all way too confusing. Does this simplify the answer to the original question:

If you just crop the original image, leaving 20 megapixels out of the 30, it is not "equivalent" to anything. It is just a portion of the original image. There's no difference in the pixels. They are the SAME.

If you ENLARGE the cropped portion to the same size as the original, the pixels will be LARGER. This isn't "equivalent" either.


You're right about equivalence... There is no meaning to the use of that phrase here.

However, pixels do not have a physical dimension. They are just numbers representing color brightness values. They can be made larger or smaller when represented by dots of light or ink. That's what the term 'PPI' means. It's an expression of how many pixels will be spread over each inch of output. The very separate term, 'dpi,' is used when talking about scanner samples or printer resolution. How many samples per linear inch am I scanning from a piece of film or a print? How many dots am I using to reproduce each linear inch of output, regardless of the number of pixels spread over that inch? The terms are not really interchangeable.

DOTS do have dimension. When you "enlarge" an image, you are either creating more *pixels* through software interpolation, each of which will be represented by the printer using the same number of dots as if the image were not enlarged, or you are telling a printer driver to use more dots to represent a pixel over a broader area of the paper. Neither process can replace detail, and in fact, both of them slightly reduce detail (interpolation hides the loss a lot better).

There is no substitute for original data. Cropping discards some of what is captured. Enlarging via interpolation in software uses sophisticated algorithms to create fake pixels based upon the original ones, to fill in the "holes" around the real ones when you send the file to the printer for reproduction at a certain size. The idea is to reproduce the pixels with finer granulation (fewer dots per pixel), to make their edges smoother and less visible.
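A minimal sketch of enlarging via interpolation, assuming the Pillow library and a hypothetical file name; the software invents the extra pixels, it cannot recover detail:

    from PIL import Image

    im = Image.open("crop.jpg")                       # hypothetical cropped file
    w, h = im.size
    big = im.resize((w * 2, h * 2), Image.BICUBIC)    # 4x the pixels; 3 of every 4 are interpolated
    big.save("crop_2x.jpg", dpi=(300, 300))           # the dpi tag only tells the output device how to spread them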

Oct 20, 2018 15:15:04   #
burkphoto Loc: High Point, NC
 
BboH wrote:
Not trying to. The three objects are physical things - I applied what was done to the one to the other two, different objects though they are. The process/question is the same: does reducing physical size diminish the quality of what remains? I don't see that it does.

What I now do see is how blowing up the remaining pixels to fill the dimensions the original group would have covered creates a softer image, because those pixels have to be spread farther apart.


You are correct that cropping doesn't reduce or even alter the quality of what remains. It simply throws away the border area. Enlarging via software fills in the spaces with fake pixels, if you maintain the same "resolution" settings for all sizes of prints. (For example, if making an 8x10 at 250 PPI, you can "enlarge" in software to create an image to be printed as if it were a 16x20 at 250 PPI. Three out of every four pixels will be fake, but the input resolution to the printer driver, and the printed number of dots PER SQUARE INCH, is the same.) And, of course, the image will be softer if viewed from less than its diagonal distance.

If you view an 8x10 at 13 inches, enlarge the file to print a 16x20, and view the 16x20 at 26 inches, you will see about the same amount of detail.

Remember, a 65" Full HDTV is only 1920x1080 pixels. Yet it looks tack sharp from six feet away. But if you get too close, you will see each dot. A good 32" Full HDTV displays the same 1920x1080 pixels, but the dots are smaller... You can view them from three feet away. But a good PHOTO print of 1920x1080 pixels will be about 8" by 4.5". You can view it from 9.2 inches and it will look like either of the other examples.

Oct 20, 2018 15:27:38   #
drklrd Loc: Cincinnati Ohio
 
gvarner wrote:
If you shoot full frame with 30 MP and crop to 2/3 of that in post, do you get an equivalent 20 MP edited photo?


Check the properties of the finished shot after saving it. In Properties, under Details, it will tell you what the final pixel dimensions are, as well as the dpi in horizontal and vertical. There is no guessing that way.
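If you would rather read those values in a script than in the file's Properties dialog, here is a minimal sketch using the Pillow library, with a hypothetical file name:

    from PIL import Image

    im = Image.open("cropped.jpg")       # hypothetical saved file
    print(im.size)                       # final pixel dimensions, e.g. (4480, 2987)
    print(im.info.get("dpi"))            # horizontal/vertical dpi tag, if the file carries one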

Oct 20, 2018 15:58:14   #
JD750 Loc: SoCal
 
drklrd wrote:
Check the properties of the finished shot after saving it. In Properties, under Details, it will tell you what the final pixel dimensions are, as well as the dpi in horizontal and vertical. There is no guessing that way.


Best Answer.

Oct 20, 2018 16:09:48   #
burkphoto Loc: High Point, NC
 
drklrd wrote:
Check the properties of the finished shot after saving it. In Properties, under Details, it will tell you what the final pixel dimensions are, as well as the dpi in horizontal and vertical. There is no guessing that way.


The dpi header of a file is pretty useless in the photographic world. It is a holdover from the graphic arts pre-press prep world of the 1980s. Page layout programs used it to size an image when flowing it onto a page. It is still a reference to SCANNER resolution, its original meaning.

The only thing that truly matters in digital photography is how many pixels you have coming from the camera. If you crop without resizing, that X by Y dimension set is what you evaluate to determine the size print you can make. Most labs want between 240 and 300 input pixels *from the camera* to spread over each inch of output. If you interpolate, you're either throwing away detail (reduction) or spreading it out (by generating fake pixels to enlarge the image).
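A small Python sketch of that guideline; the pixel dimensions are illustrative, and 240-300 is the lab guideline mentioned above:

    def max_print_size(px_w, px_h, ppi):
        """Largest print, in inches, at a given pixels-per-inch of camera data."""
        return px_w / ppi, px_h / ppi

    w, h = 6720, 4480                    # hypothetical uncropped 30 MP frame
    print(max_print_size(w, h, 300))     # (22.4, ~14.9) inches
    print(max_print_size(w, h, 240))     # (28.0, ~18.7) inches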

Oct 20, 2018 17:06:25   #
OllieFCR
 
In the context of a camera sensor, pixels do, in fact, have physical dimension. In almost all DSLR sensors they consist of squares in a Bayer grid. Larger pixels have an inherent advantage in signal-to-noise ratio, hence better high ISO performance. Smaller pixels have an inherent advantage in resolution, as there are more of them in the same size sensor.


Notorious T.O.D. wrote:
Simply put, if you crop and try to print to the same size the original would print, you are spreading less data over the same area. You can interpolate data to create more data, but it is just an educated guess and won’t improve sharpness.

Oct 20, 2018 17:10:49   #
OllieFCR
 
In the context of a camera sensor, pixels do, in fact, have physical dimension. In almost all DSLR sensors they consist of squares in a Bayer grid. Larger pixels have an inherent advantage in signal-to-noise ratio, since more photons hit them in a given time period, hence better high ISO performance. Smaller pixels have an inherent advantage in resolution, as there are more of them in the same size sensor. Each pixel produces one of the data points in your photograph. The raw data is a voltage for each pixel in a CMOS sensor. Because of the Bayer color filter array, the color of each pixel can be calculated from this raw data.
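A toy sketch of that last step, assuming an RGGB mosaic and numpy; real raw converters use far more sophisticated demosaicing than this:

    import numpy as np

    def naive_demosaic_rggb(raw):
        """Toy demosaic of an RGGB Bayer mosaic (even dimensions assumed):
        each 2x2 block of sensels yields one full-color value, repeated over
        the block. Real converters interpolate per site and do much better."""
        raw = raw.astype(float)
        r = raw[0::2, 0::2]                               # red sensels
        g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0     # average of the two green sensels per block
        b = raw[1::2, 1::2]                               # blue sensels
        rgb = np.stack([r, g, b], axis=-1)                # one RGB value per 2x2 block
        return np.repeat(np.repeat(rgb, 2, axis=0), 2, axis=1)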



burkphoto wrote:
You're right about equivalence... There is no meaning to the use of that phrase here.

However, pixels do not have a physical dimension. They are just numbers representing color brightness values. They can be made larger or smaller when represented by dots of light or ink. That's what the term 'PPI' means. It's an expression of how many pixels will be spread over each inch of output. The very separate term, 'dpi,' is used when talking about scanner samples or printer resolution. How many samples per linear inch am I scanning from a piece of film or a print? How many dots am I using to reproduce each linear inch of output, regardless of the number of pixels spread over that inch? The terms are not really interchangeable.

DOTS do have dimension. When you "enlarge" an image, you are either creating more *pixels* through software interpolation, each of which will be represented by the printer using the same number of dots as if the image were not enlarged, or you are telling a printer driver to use more dots to represent a pixel over a broader area of the paper. Neither process can replace detail, and in fact, both of them slightly reduce detail (interpolation hides the loss a lot better).

There is no substitute for original data. Cropping discards some of what is captured. Enlarging via interpolation in software uses sophisticated algorithms to create fake pixels based upon the original ones, to fill in the "holes" around the real ones when you send the file to the printer for reproduction at a certain size. The idea is to reproduce the pixels with finer granulation (fewer dots per pixel), to make their edges smoother and less visible.

Oct 20, 2018 18:03:45   #
burkphoto Loc: High Point, NC
 
OllieFCR wrote:
In the context of a camera sensor, pixels do, in fact, have physical dimension. In almost all DSLR sensors they consist of squares in a Bayer grid. Larger pixels have an inherent advantage in signal-to-noise ratio, since more photons hit them in a given time period, hence better high ISO performance. Smaller pixels have an inherent advantage in resolution, as there are more of them in the same size sensor. Each pixel produces one of the data points in your photograph. The raw data is a voltage for each pixel in a CMOS sensor. Because of the Bayer color filter array, the color of each pixel can be calculated from this raw data.


A sensor has photoreceptors or “sensels” that receive filtered light. Pixels are the result of processing sensel data from several adjacent sites. Pixels do not have size. Sensels do. Pixels are just numbers that result from the post-processing of raw data.

Oct 20, 2018 18:28:21   #
JD750 Loc: SoCal
 
burkphoto wrote:
A sensor has photoreceptors or “sensels” that receive filtered light. Pixels are the result of processing sensel data from several adjacent sites. Pixels do not have size. Sensels do. Pixels are just numbers that result from the post-processing of raw data.


Excellent explanation.

Oct 20, 2018 20:16:24   #
hassighedgehog Loc: Corona, CA
 
BboH wrote:
I have never been able to get my head around the line of thought that the loss of pixels upon cropping degrades the resolution or quality of those pixels remaining in the image, notwithstanding Burk's good explanation and PHRubn's following comment.
To make an analogy:
I have a jar of sliced peaches which I purchased because I like its taste (resolution) - the label advertises (x) number of slices (pixels). I dish out a serving, and the reduction in the number of slices (pixels) does nothing to the taste (quality) of those slices (pixels) remaining; the taste (quality) is unchanged from what it was before the jar was opened. Another: I have a piece of 8 1/2 x 11 paper which I trim down to 6x8 - the quality of the 6x8 is unchanged from what it was before it was trimmed.
?????


This applies as long as the resulting image is only displayed at 6x8; if displayed larger, some of the resolution is lost. The pixels that are left have to be interpolated to display at 8 1/2 x 11 again. The taste of the peaches does not change, but you get fewer bites (or bytes).

Oct 20, 2018 20:51:07   #
portcragin Loc: Kirkland, WA
 
If you crop an image from a 30 MP camera, you just choose not to use all of the real estate of that sensor chip. The pixels don't change as long as you don't enlarge or resize. I know people will have input here.

Oct 20, 2018 21:00:03   #
joer Loc: Colorado/Illinois
 
gvarner wrote:
If you shoot full frame with 30 MP and crop to 2/3 of that in post, do you get an equivalent 20 MP edited photo?


Take the pixel count of the 2/3 image in each direction, multiply the horizontal count by the vertical count, and divide by one million; that is the resulting MP.
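In Python, with hypothetical dimensions for a 2/3-area crop of a 30 MP frame:

    crop_w, crop_h = 5487, 3658              # hypothetical crop dimensions
    print(crop_w * crop_h / 1_000_000)       # ~20.07 megapixels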

Oct 20, 2018 22:21:48   #
OllieFCR
 
That may be your definition of a pixel, but it is not the generally accepted one. The individual photoreceptors are the pixels. Once the raw voltage data is processed (20 million values for a 20 MP sensor), you actually get a lot more data than the original 20 million, since you have both intensity and color information for each data point. Unfortunately, for printing (and other non-photography applications) there are alternate definitions of a pixel. This may be what is confusing you.



burkphoto wrote:
A sensor has photoreceptors or “sensels” that receive filtered light. Pixels are the result of processing sensel data from several adjacent sites. Pixels do not have size. Sensels do. Pixels are just numbers that result from the post-processing of raw data.
