Hammer wrote:
This is confusing me and I’d be grateful for some help
A pro photographer told me that the higher the number of megapixels on a sensor, the higher the shutter speed needed to get sharp photos, and the earlier that diffraction sets in. The sensor on my camera is 41MP, and I've seen examples where diffraction has set in at f/8.
A video by the Northrups showed that the denser pixels did diffract earlier, but the extra detail gave better results overall.
With the 60+MP full-frame sensors on the market, I just can't see the sense or logic.
Help and keep safe.
This is one of several photographic principles that is difficult to discuss rationally. To fully understand what is happening, it is necessary to understand the source of diffraction. Years ago it was part of the standard physics curriculum, but it has been a number of years since I have even seen it mentioned. It is a phenomenon which arises because of the interference of light traveling two paths of slightly different length. The most common place it occurs (and the easiest way to create it) is by passing light through a very narrow slit. It is possible to demonstrate this to yourself by passing a fairly bright light (no...not the sun...not ever) through a slit made by holding two of your fingers very close together and looking at it with your eye very close to the slit. If you are good (and lucky) you will be able to see alternating dark and bright lines rather than a uniform field of light.
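If you want to put numbers on that demonstration, the standard single-slit formula puts the m-th dark fringe at sin(theta) = m x wavelength / slit width. Here is a minimal sketch of that textbook formula; the sodium-lamp wavelength and slit widths are illustrative assumptions, not measurements:

```python
import math

# Standard single-slit diffraction: the m-th dark fringe appears where
# sin(theta) = m * wavelength / slit_width. Narrower slits spread the
# fringes to wider, easier-to-see angles.

def dark_fringe_angle_deg(wavelength_m, slit_width_m, m=1):
    """Angle (in degrees) of the m-th diffraction minimum."""
    return math.degrees(math.asin(m * wavelength_m / slit_width_m))

SODIUM_YELLOW = 589e-9  # meters; the classic sodium-vapor lamp line

for slit_mm in (1.0, 0.1):
    angle = dark_fringe_angle_deg(SODIUM_YELLOW, slit_mm / 1000)
    print(f"{slit_mm} mm slit: first dark fringe at {angle:.3f} degrees")
```

Note how even a 1 mm slit puts the first dark fringe only a few hundredths of a degree off axis, which is why the fringes are so hard to see without a narrow slit and a bright source.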
Diffraction also occurs at a single edge. The effect is just much less and much harder to detect. Diaphragm blades in a lens present such a single edge, so effects of diffraction are always present in every image of every lens at every f stop.
The question of visible diffraction arises, then, from two different sources. One of them turns out to be pretty much meaningless most of the time, but the other can, on some occasions, be real.
Single edge diffraction can sometimes be real. The defining parameter, since there is diffracted light in every lens all the time, becomes "of all of the light falling on the sensor, how much of it is diffracted light, and how much of it is not?" Since the diffracted light comes only from the tiny slice right next to the diaphragm edge, it should be clear that lots and lots of pure, unadulterated light can pass through a lens with a wide aperture, while much less pure light can pass through a small aperture. (Of course the length of the aperture edge gets shorter at small apertures too, but it varies linearly with the diameter, while the area of the opening varies as the square of the diameter.)
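To see how that linear-versus-square argument plays out, here is a rough back-of-the-envelope sketch. It treats light passing within some thin band along the diaphragm edge as "diffracted" and everything else as clean; the band width is an arbitrary illustrative assumption, not a physical constant:

```python
import math

# Rough model of the edge-light argument: edge zone area scales with the
# circumference (pi * d * w), total light with the area (pi * (d/2)**2),
# so the diffracted fraction simplifies to 4 * w / d -- it grows as the
# aperture diameter d shrinks.

def diffracted_fraction(diameter_mm, edge_band_mm=0.01):
    """Approximate fraction of light passing within edge_band_mm of the edge."""
    edge_area = math.pi * diameter_mm * edge_band_mm
    total_area = math.pi * (diameter_mm / 2) ** 2
    return edge_area / total_area

for d in (25.0, 9.4, 0.64):  # wide, small, and tiny aperture diameters in mm
    print(f"d = {d:5.2f} mm -> edge fraction ~ {diffracted_fraction(d):.4f}")
```

Whatever band width you pick, the fraction of "edge light" scales as 1/d, which is exactly the point: small apertures pass proportionally more diffracted light.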
Slit diffraction, on the other hand, is much less important in a lens. Diffraction through a slit can be really impressive in a laboratory demonstration. But being able to see it requires three things: a very narrow slit (1 mm or even less), which in turn means a pretty bright light source, and finally, monochromatic light (like a yellow sodium vapor lamp or a visible laser, usually a red one). A 300mm lens set at f/32 has an aperture opening of just more than 9mm. That's way too wide to produce any detectable slit diffraction. A 14mm lens set at f/22 has a diameter of about 0.64mm. It might create a tiny amount of detectable diffraction in some really high contrast situations.
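Those diameters follow directly from the definition of the f-number: the physical aperture diameter is (to a good approximation) the focal length divided by the f-number. A quick check of the figures above:

```python
# Aperture diameter = focal length / f-number (approximately; the f-number
# is really defined by the entrance pupil, but this is close enough here).

def aperture_diameter_mm(focal_length_mm, f_number):
    return focal_length_mm / f_number

print(aperture_diameter_mm(300, 32))  # 9.375 mm -- "just more than 9mm"
print(aperture_diameter_mm(14, 22))   # ~0.64 mm -- narrow enough to matter
```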
It is certain that a high resolution sensor is capable of "seeing" things that a lower resolution sensor cannot. Quite frankly, without that distinction, there would be absolutely no reason to pursue or purchase higher resolution cameras. When printed to the same size (and to a size that is achievable by the lower resolution sensor), there should be no significant difference in the appearance of the flaws inherent in the image. There should also be no important visible differences when viewing a larger high resolution image at the equivalent apparent size. Close inspection of a larger print will always reveal more shortcomings, even when comparing two prints made from images from the same camera.
Now...considering motion and shutter speed. It is not debatable that higher resolution cameras are capable of "seeing" smaller things...smaller details and smaller movements. Sometimes this is important and sometimes it is not. For me, it can be. I do night sky photography.

An important fact to understand about stars is that they are "point sources" of light. They have no apparent width and no apparent height. When properly focused through a good lens, the light of one star will fall on essentially a single pixel, and this is true regardless of lens focal length or telescope magnification. (Planets are different: because they are so much closer to us, they do have apparent width and can be magnified into images of finite size.) It is the main reason that stars "twinkle" and planets do not.

It has been demonstrated that even with very short focal length lenses, stars quickly move from one sensor element to the next if exposure times are too long. The result is that the ability of a high resolution sensor to record individual stars exceeds that of a lower resolution sensor, just as we would expect. But that ability can be easily lost if the shutter is left open too long, resulting in the same star moving across two or more sensor elements. So the long established "Rule of 500" for exposure times for the night sky has been supplanted by a new "Rule of 300" for cameras with resolutions greater than, say, 30MP or so. That doesn't mean that a photographer can't use the Rule of 500 any longer, just that more ideal results will be achievable by limiting to the shorter exposures. The same principle which led to this revision would also apply to managing other camera or subject movement, which might or might not be relevant, depending on the subject matter.
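Both rules are simple divisions: maximum exposure in seconds = rule constant / focal length in millimeters (full-frame equivalent). A small sketch, with the crop-factor adjustment included as a common convention rather than as part of the original rules:

```python
# Rule of 500 / Rule of 300: longest exposure (seconds) before stars start
# to visibly trail, for a given focal length. Multiplying by the crop
# factor first converts to full-frame-equivalent focal length.

def max_exposure_s(focal_length_mm, rule=500, crop_factor=1.0):
    return rule / (focal_length_mm * crop_factor)

for fl in (14, 24, 50):
    print(f"{fl}mm: Rule of 500 -> {max_exposure_s(fl, 500):.1f}s, "
          f"Rule of 300 -> {max_exposure_s(fl, 300):.1f}s")
```

At 14mm, for example, the old rule allows about 36 seconds while the new one allows about 21, which is the practical cost of the extra resolution.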
Anyway...I have found that what has been said about higher resolution sensors is true. But depending on what you are doing, it may or may not matter.