Ugly Hedgehog - Photography Forum
What Drives Pixelization - The Sensor or The Lens?
Jul 28, 2018 21:27:42   #
wdross Loc: Castle Rock, Colorado
 
bcrawf wrote:
Visible pixellation is a matter of the combination of sensor and magnification.


Yes, but which magnification? Resizing up? Resizing down? Software differences? Pixel peeping? Since the higher-pixel-count sensor shows signs of pixelation sooner, via pixel peeping, than the sensor with fewer pixels, that points to differences between the software and the image's output settings.

Reply
Jul 28, 2018 21:38:12   #
bcrawf
 
wdross wrote:
Yes, but which magnification? Resizing up? Resizing down? Software differences? Pixel peeping? Since the higher-pixel-count sensor shows signs of pixelation sooner, via pixel peeping, than the sensor with fewer pixels, that points to differences between the software and the image's output settings.


If by 'resizing' you mean enlarging, then of course that is the answer (to making pixellation visible). Pixel count and pixel density are two different things. The higher the pixel density, the less visible pixellation is at any given degree of image enlargement.
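A quick back-of-the-envelope shows why (the pixel-pitch numbers below are made up for illustration, not taken from any particular sensor):

```python
# Rough sketch: how sensor pixel pitch (density) translates to pixel size
# on an enlarged print. Pitch values are illustrative only.
MICRONS_PER_INCH = 25400

def printed_ppi(pixel_pitch_um, enlargement):
    """Pixels per inch on the print for a given pitch and linear enlargement."""
    printed_pixel_um = pixel_pitch_um * enlargement
    return MICRONS_PER_INCH / printed_pixel_um

for pitch in (3.9, 5.9):  # a denser vs. a coarser sensor, same enlargement
    print(f"{pitch} um pitch at 10x: {printed_ppi(pitch, 10):.0f} ppi")
# The denser sensor puts more (smaller) pixels per inch on the print at the
# same enlargement, so individual pixels become visible later.
```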

Reply
Jul 28, 2018 21:46:14   #
Rongnongno Loc: FL
 
A pixel, or a group of pixels, is the same thing regardless of its origin: digital painting, camera capture, or scanner capture. The density of pixels does not change the nature of a pixel.

When magnified enough, regardless of the reason, pixelization happens.

You are confusing artifacts created by modifications or compression with pixels. Pixels do not change their nature regardless of the data they display.

Reply
 
 
Jul 28, 2018 21:51:46   #
selmslie Loc: Fernandina Beach, FL, USA
 
Apaflo wrote:
I saw no indication that anyone did not understand what a pixel is, at least until your post. ...

Does that mean you didn’t understand the title of the thread? That was a clue.

Reply
Jul 28, 2018 21:56:20   #
bcrawf
 
Rongnongno wrote:
A pixel, or a group of pixels, is the same thing regardless of its origin: digital painting, camera capture, or scanner capture. The density of pixels does not change the nature of a pixel.

When magnified enough, regardless of the reason, pixelization happens.

You are confusing artifacts created by modifications or compression with pixels. Pixels do not change their nature regardless of the data they display.


We cannot tell what you are responding to. I will note that you are misconstruing the term 'pixellation', which means visibility of pixels (in a specified viewing), not the existence of pixels.

Reply
Jul 28, 2018 22:04:09   #
wdross Loc: Castle Rock, Colorado
 
bcrawf wrote:
If by 'resizing' you mean enlarging, then of course that is the answer (to making pixellation visible). Pixel count and pixel density are two different things. The higher the pixel density, the less visible pixellation is at any given degree of image enlargement.


With software, I can take a 24 MP image and produce a 24-pixel image or a 36 MP image from it. That has nothing to do with the sensor or the actual sensor pixel size. Each camera manufacturer has software that will size an image that is not RAW. What each camera manufacturer sets as its default JPEG can be very different. I have mine set to the very finest JPEG setting. My camera's default JPEG setting will show pixelation long before my chosen JPEG setting, even though both come from the same camera and sensor. That has to do with software and how the images are written out. And if you go into your camera settings or manual, you will see a chart of image sizing. The smaller the pixel dimensions of the image, the faster your camera operates and the smaller the image is in megapixels. That is why some of us want the biggest and fastest (300 MB/s and up) memory card we can afford, so we can shoot the largest JPEG alongside our RAW files and not slow our cameras down.

So there are other ways, besides pixel peeping, that will affect the pixelation of a person's image.
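As a rough sketch of that kind of software resizing (using the Pillow library, with made-up file names and dimensions purely for illustration):

```python
from PIL import Image  # Pillow, assumed installed: pip install pillow

img = Image.open("example_24mp.jpg")  # hypothetical 6000 x 4000 (24 MP) capture

# Resize down to a 6 x 4 image: 24 pixels total, each roughly the average
# of a ~1000 x 1000 block of the original.
tiny = img.resize((6, 4), Image.Resampling.BOX)

# Resize up toward ~36 MP: the extra pixels are interpolated by the
# resampling filter, not recorded by the sensor.
big = img.resize((7348, 4899), Image.Resampling.LANCZOS)

tiny.save("tiny_6x4.png")
big.save("upsized_36mp.jpg")
```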

Reply
Jul 29, 2018 13:33:12   #
BobHartung Loc: Bettendorf, IA
 
Longshadow wrote:
The sensor.
That's where the pixels are located. Each different sensor model can only take so much enlargement due to the number of pixels in different sensor sizes.



Reply
 
 
Jul 29, 2018 21:16:11   #
f8lee Loc: New Mexico
 
wdross wrote:
A pixel is the individual sensor on the sensor chip. The representation of the image's pixels on a monitor is a digitally produced square or rectangle, creating a mosaic of what the pixels at the sensor saw. Pixel size, pixel density, interpolation, algorithms, monitor resolution, Bayer pattern, etc. all affect "pixelation". One can take any 24 MP image and process it to produce a 24-"pixel" image (a 4 x 6 image) on the monitor. Each "pixel" will be the average color and brightness of the 1 MP it represents. It will not be a pretty or recognizable image, but it can be done. And one can go the other way with software. There is software that will take a "look" at the existing pixels and generate pixels that do not exist in the original image (resizing up) through algorithms and interpolation. One can take a 24 MP image and produce a 36 MP image that way. And one can zoom in on the individual pixels of the actual image (pixel peeping).

In the OP's case, it is the difference in each camera manufacturer's software, and how that manufacturer outputs the data without the photographer's intervention, that seems to account for the difference the OP is seeing in pixelation.


I believe it's important to differentiate what a picture element ("pixel") is when referring to the sensor and then referring to the final image.

On the sensor, each individual photo site can be considered a pixel (perhaps not strictly speaking, but that's what most folks think) - so a sensor with a matrix of 4000x6000 photo sites is a 24MP sensor. But of course, except in the case of a Foveon or Leica monochrome chip, each of those photo sites is only recording the amount of a particular color of light that passes through the R, G or B filter placed above it. And as raw data, this is not an image at all - this very concept is the source of a lot of misinformation and confusion in these pages.

A pixel in the final image is the result of the blending together of data from adjacent photo site readings - a process typically called de-mosaicing. The final color of pixel 1:1 in the final image depends on the readings of the photo sites (again, each only reading one color of light) and the software doing the task. It is for this reason that ACR, for example, can ascertain a different shade of blue than Capture One or the camera manufacturer's own software.

With this in mind, remember that one major difference between (most) Fuji digital cameras and all the others is the pattern of R-G-B filters overlaid on the photo sites. The X-Trans chips use a different pattern (you can look it up to see the difference). But my point is, it may well be that difference that creates the OP's original observation - above and beyond pixel pitch etc. when comparing the Fuji to the other brands, even with the same MP count.
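To make the de-mosaicing idea concrete, here is a deliberately naive sketch in Python/NumPy. It assumes a conventional RGGB Bayer layout and simply averages neighbouring photo sites; real raw converters (ACR, Capture One, the in-camera engines) use far more sophisticated algorithms, and X-Trans needs different logic entirely.

```python
import numpy as np

def naive_demosaic_rggb(mosaic):
    """Naively demosaic a raw mosaic assumed to follow an RGGB Bayer layout.

    mosaic: 2-D float array of photo-site readings (one colour per site).
    Returns an H x W x 3 RGB image. Edges wrap around (np.roll), which is
    fine for a sketch but not for real use.
    """
    h, w = mosaic.shape
    rgb = np.zeros((h, w, 3))

    # Masks marking which sites recorded which colour in the assumed layout.
    r_mask = np.zeros((h, w), bool)
    r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool)
    b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    for ch, mask in enumerate((r_mask, g_mask, b_mask)):
        known = np.where(mask, mosaic, 0.0)
        count = mask.astype(float)
        nbr_sum = np.zeros_like(known)
        nbr_cnt = np.zeros_like(count)
        # Sum known same-colour readings over each pixel's 3 x 3 neighbourhood.
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                nbr_sum += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
                nbr_cnt += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
        # Keep measured values where they exist; average neighbours elsewhere.
        rgb[..., ch] = np.where(mask, mosaic, nbr_sum / np.maximum(nbr_cnt, 1))
    return rgb
```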

Reply
Jul 29, 2018 21:45:26   #
Shutterbug57
 
f8lee wrote:
I believe it's important to differentiate what a picture element ("pixel") is when referring to the sensor and then referring to the final image.

On the sensor, each individual photo site can be considered a pixel (perhaps not strictly speaking, but that's what most folks think) - so a sensor with a matrix of 4000x6000 photo sites is a 24MP sensor. But of course, except in the case of a Foveon or Leica monochrome chip, each of those photo sites is only recording the amount of a particular color of light that passes through the R, G or B filter placed above it. And as raw data, this is not an image at all - this very concept is the source of a lot of misinformation and confusion in these pages.

A pixel in the final image is the result of the blending together of data from adjacent photo site readings - a process typically called de-mosaicing. The final color of pixel 1:1 in the final image depends on the readings of the photo sites (again, each only reading one color of light) and the software doing the task. It is for this reason that ACR, for example, can ascertain a different shade of blue than Capture One or the camera manufacturer's own software.

With this in mind, remember that one major difference between (most) Fuji digital cameras and all the others is the pattern of R-G-B filters overlaid on the photo sites. The X-Trans chips use a different pattern (you can look it up to see the difference). But my point is, it may well be that difference that creates the OP's original observation - above and beyond pixel pitch etc. when comparing the Fuji to the other brands, even with the same MP count.


The chip type is probably the answer.

Reply
Jul 29, 2018 22:20:12   #
wdross Loc: Castle Rock, Colorado
 
Shutterbug57 wrote:
The chip type is probably the answer.


Although it might not be the reason, or the whole reason, it has to contribute at the least. The algorithms for the Bayer pattern (50% green, 25% each red and blue) versus the algorithms for the X-Trans sensor pattern (roughly 56% green, 22% each red and blue) cannot possibly be the same, and more than likely they will produce a different (smaller) JPEG for the same amount of processing time and power.
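Those percentages fall straight out of the repeating filter tiles; a quick check from the tile counts alone (2 green, 1 red, 1 blue per 2x2 Bayer tile versus 20 green, 8 red, 8 blue per 6x6 X-Trans tile) gives:

```python
# Colour-filter counts per repeating tile, not the actual layouts.
tiles = {"Bayer 2x2": (2, 1, 1), "X-Trans 6x6": (20, 8, 8)}
for name, (g, r, b) in tiles.items():
    total = g + r + b
    print(f"{name}: {g/total:.1%} green, {r/total:.1%} red, {b/total:.1%} blue")
# Bayer 2x2: 50.0% green, 25.0% red, 25.0% blue
# X-Trans 6x6: 55.6% green, 22.2% red, 22.2% blue
```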

Reply
Jul 29, 2018 22:26:07   #
wdross Loc: Castle Rock, Colorado
 
Entered in error.

Reply