bgrn wrote:
Thanks, I understand the principles as to why, I was just curious as to how large of an object people have been able to hide outside of the focal range. I do like the idea of this AP. i'm going to have to check it out.
Great photo.
How big an object will "vanish" depends on how far out of focus it is, and how much
it contrasts with the rest of the scene.
The chain-link fence is still in the picture. It's just very, very blurry and superimposed
on the entire scene. It's still there, degrading your image a tiny bit. Not much,
because the fence was a kind of neutral gray color and fairly thin wire.
Had it been thicker, or painted black or white, it would degrade the image more and
be more noticeable.
If you took another exposure
without the chain-link fence... you'd be a leopard's dinner.
But if you somehow didn't get eaten, you'd get a slightly sharper picture with slightly more
contrast. Whether or not the difference would be noticeable, I can't say.
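If you want a feel for the numbers, here's a toy sketch (plain NumPy, one-dimensional, all values made up by me) of a thin, heavily defocused occluder composited over a scene. A blurred "fence" covering a few percent of the frame only shaves a sliver off the overall contrast:

```python
import numpy as np

def box_blur(x, k):
    """Blur a 1-D signal with a length-k box kernel (a crude defocus model)."""
    return np.convolve(x, np.ones(k) / k, mode="same")

# Toy 1-D "scene": alternating dark and bright bands.
scene = np.tile(np.repeat([0.2, 0.8], 50), 4)   # 400 samples

# Thin neutral-gray occluder (the fence wire), one wire every 50 samples.
occluder = np.zeros_like(scene)
occluder[::50] = 1.0
wire_value = 0.5                                 # neutral gray

# Far out of focus: each wire's coverage is smeared across a wide kernel,
# so it dims every pixel slightly instead of blocking any pixel fully.
coverage = box_blur(occluder, 101)
composite = (1 - coverage) * scene + coverage * wire_value

contrast = lambda x: x.max() - x.min()
print(contrast(scene), contrast(composite))      # contrast drops only slightly
```

With a black or white wire (`wire_value` of 0.0 or 1.0), or thicker wires, the contrast loss grows, which is the point above about the fence's color and gauge.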
Whether or not one can
see defects in an image depends on its subject and how
the image is displayed: the size, dynamic range, etc. A good analogy: how much
static can there be in an AM radio signal before it becomes noticeable? Obviously, it
depends on the program (quiet or noisy) and whether you are listening through a tinny
speaker or high-quality headphones.
In this case, the noise isn't static, but has a regular pattern. In physicist David Bohm's
terms, it is an "implicate order" in the image. I don't know if it could ever be made
visible in this photo, but a pattern of light and dark areas would probably show up
if a statistical test designed to look for it were run on the image file. (Another kind of
implicate order is a hologram. In that case, the image is recoverable.)
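To make the "statistical test" idea concrete, here's a hypothetical sketch (1-D NumPy, numbers invented for the demo): a periodic pattern far too weak to see sample-by-sample still produces an unmistakable spike in the Fourier spectrum, because its energy piles up in one frequency bin:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 4096
noise = rng.normal(0.0, 1.0, n)                  # featureless "static"
period = 64                                       # hypothetical fence spacing, in samples
pattern = 0.2 * np.sin(2 * np.pi * np.arange(n) / period)  # far below the noise floor
signal = noise + pattern

# Sample by sample the pattern is invisible, but the FFT concentrates
# all of its energy at one frequency, where it towers over the noise.
spectrum = np.abs(np.fft.rfft(signal))
peak_bin = int(np.argmax(spectrum[1:]) + 1)       # skip the DC bin
print(peak_bin, n // period)                      # the peak sits at the pattern's bin
```

A 2-D version of the same test on the photo would look for a spike at the fence's mesh spacing.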
Anyway, getting back to how images are displayed: lots of images that look great on the
camera's display don't look good on a computer monitor. And lots of photos that look
good on the computer monitor don't look good when printed at 24" x 16". Resolution
is a big factor, but there are others: each device or medium has a different dynamic range.
And as I said, the subject matters. Take distortion, for example. How do you tell if a photo
of an octopus has distortion?
Finally, human beings notice some kinds of image degradation much more than others.
Impressionist painters exploited this fact, painting precise detail only where it was
necessary. In Georges Seurat's pointillist paintings, for example, there are lines only
where they need to be (e.g., the edges of a boat's sails, or of a tree trunk). Elsewhere,
there are only dots of color (rather like a color lithograph, but much larger).
https://upload.wikimedia.org/wikipedia/commons/9/96/Georges_Seurat_026.jpg
There is some similarity between how Seurat painted and what the "sharpen" filter
does. But there are two huge differences: (1) the sharpen filter is an algorithm, not a
painter: it may create edges where there aren't supposed to be any. And (2) a
painting does not start out as a photograph: an optical image of a real scene.
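Here's a crude illustration of how sharpening can invent edges. A one-dimensional unsharp mask (toy numbers of my own, not any particular editor's implementation) overshoots on both sides of a clean step, producing a bright halo and a dark halo that were never in the scene:

```python
import numpy as np

def box_blur(x, k):
    """Blur a 1-D signal with a length-k box kernel."""
    return np.convolve(x, np.ones(k) / k, mode="same")

# A clean step edge, like the side of a sail against the sky.
x = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)])

# Unsharp mask: add back the difference between the image and a blurred copy.
amount = 1.5
sharpened = x + amount * (x - box_blur(x, 9))

# The result overshoots the original 0.2..0.8 range on both sides of the
# step -- halos, i.e. "edges" the algorithm invented.
print(x.min(), x.max())
print(sharpened.min(), sharpened.max())
```

A painter putting a line along a sail's edge decides where it belongs; the filter applies the same arithmetic everywhere, halos included.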
There is a vast amount of lighting information in any real scene. Global changes
(such as lighten or darken) will still look natural. But local changes -- unless done
with the lighting knowledge of a painter -- risk creating "impossible lighting" that
won't look real. Lighten the shadows under a tree too much, and it will look like
someone is under there with a flashlight.
So it's best to be quite conservative about local changes: small ones only, and never
with sharp edges. Remember: "invisible" is relative: painters and some photographers
see things in an image that most people would miss. (In the old days, instructors would
say that if they could spot your dodging and burning, it was too much.)
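As a sketch of what "never with sharp edges" means in practice, here's a toy 1-D dodge (all numbers invented): the same +0.2 lift applied through a hard-edged mask versus a feathered one. The feathered mask's adjustment fades in and out, so there is no abrupt jump for the eye to catch:

```python
import numpy as np

def box_blur(x, k):
    """Blur a 1-D signal with a length-k box kernel (used here to feather a mask)."""
    return np.convolve(x, np.ones(k) / k, mode="same")

# A toy 1-D strip with a shadow region in the middle.
image = np.full(200, 0.5)
image[80:120] = 0.15                   # the shadow under the tree

# Hard-edged dodge: +0.2 inside a rectangle.
hard_mask = np.zeros_like(image)
hard_mask[85:115] = 1.0
hard = image + 0.2 * hard_mask

# Feathered dodge: the same mask, blurred so the lift ramps up gradually.
soft_mask = box_blur(hard_mask, 31)
soft = image + 0.2 * soft_mask

# Largest sample-to-sample jump in the *adjustment* -- a proxy for how
# visible the mask's edge is.
step = lambda x: np.abs(np.diff(x)).max()
print(step(hard - image), step(soft - image))   # the feathered edge is far gentler
```

The same principle carries over to 2-D dodging and burning: feather the selection wider than feels necessary, and keep the lift small.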
Anybody who knows enough to start moving light around can make a lot more money
as a painter than as a photographer. How much do painters know about light?
http://boydgavin.com/wp-content/uploads/2016/07/Trucks-and-Mud-3.jpg
https://efgprivatecollections.com/wp-content/uploads/2016/03/GavinSpilledCrayonsImage.jpg