Ugly Hedgehog - Photography Forum
Main Photography Discussion
Detail in Photographs
Page <<first <prev 5 of 7 next> last>>
Mar 7, 2022 10:12:20   #
larryepage Loc: North Texas area
 
JD750 wrote:
Larry how many camera brands have you tested?


I claim only to personally know about Nikon. But when I was a volunteer at the local arboretum, I noticed that essentially all photographers who came to photograph the flowers and plants used Nikon and essentially all who came to photograph people used Canon. (Yes...that was a few years ago and other brands were mostly "in the noise.")

I was able to visit with them and discovered that almost all, including the professionals, were shooting JPEGs even though about half did some post-processing. All were using their cameras just as they came out of the box.

My interest in this has persisted, and I continue to ask people about it when the opportunity arises. (Yes...my photography network goes way beyond UHH.) And guess what...it is mostly the same today, except that more of the people I visit with do at least some post processing. They are shocked to find that a few simple adjustments can drastically improve their photos and/or make their later work much easier.

So I've learned quite a bit about how cameras tend to perform out of the box and am confident in my statements.

Reply
Mar 7, 2022 10:34:01   #
burkphoto Loc: High Point, NC
 
tgreenhaw wrote:
There is a misconception that anything more than 170 or 300 dpi is wasted when printing. This is not necessarily true, depending on how the image is viewed or printed. Your input resolution should be at least the same as, or in some cases twice, the output resolution to get the highest possible quality.

If you are viewing on a 65-inch 4K monitor, or many other standard monitors, the resolution is roughly 72 dpi. Higher resolution images will not benefit much. Onscreen viewing on today's monitors generally won't benefit from having more than 3840 x 2160 pixels (about 8.3 megapixels).

Most inkjet printers, including those used for high-quality metal prints, get only a minimal benefit from higher resolutions: 170 dpi is enough, 300 dpi is ideal, and more is almost never beneficial.

When mass printed using offset lithography, RGB images are converted to CMYK and then printed using halftone dots. At the lowest end, 65 lines per inch are used in B&W newspapers, with 85 lpi being more common. Color newspapers use 120 lpi. Magazines use 150 and 175 lpi, with 175 being more common. When printing with halftone dots, your input resolution must be at least twice the halftone line screen or you will see artifacts like moiré and other undesirable issues. For this reason, if your images are ever to be printed in a magazine, you will want a minimum of 300 or 350 dpi. Images containing hard edges and line-art components benefit from higher resolution, up to maybe 600 dpi; lower resolutions will show stair steps on these high-contrast boundaries.
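The 2x rule above can be sketched as a quick calculation. This is only an illustration of the quoted rule of thumb; the helper `required_scan_dpi` and its quality factor parameter are invented for the example:

```python
# Prepress rule of thumb quoted above: input resolution should be at
# least twice the halftone line screen (lpi) to avoid moire and other
# halftone artifacts.

LINE_SCREENS_LPI = {
    "B&W newspaper (low end)": 65,
    "B&W newspaper (common)": 85,
    "Color newspaper": 120,
    "Magazine": 175,
}

def required_scan_dpi(lpi, quality_factor=2.0):
    """Minimum input resolution (dpi) for a given halftone screen (lpi)."""
    return lpi * quality_factor

for name, lpi in LINE_SCREENS_LPI.items():
    print(f"{name}: {lpi} lpi -> {required_scan_dpi(lpi):.0f} dpi minimum")
```

For a 175 lpi magazine screen this gives the 350 dpi figure mentioned above.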

Having a decade of prepress experience preparing thousands of high quality magazine ads, being an engineer on a design team for prepress equipment, having used Photoshop since version 1.0, and being a software developer specializing in graphics has taught me a thing or two on this subject :-)


Much of what applies in the graphic arts is irrelevant in photo printing. I worked three decades in a yearbook and portrait company. The technologies are related, but different in important ways. I learned that when running the pre-press prep department for our elementary school memory books.

Because we ran so much volume through our seven yearbook printing plants and four photo plants, we did very careful tests to determine what settings would yield perceptible differences in the product. Our goal was to minimize server space, network bandwidth, rendering times, and processing horsepower requirements. I was in our photo plant, but worked with my good friend at the yearbook plant on the same campus to optimize our workflow and quality.

What we found was that there is no hard-and-fast "rule of thumb" answer in either field. There are only experts who like to pick a figure that "everybody else" says is it. I was rather impressed by the work of Dr. Taz Talley, who wrote several books on scanning for the graphic arts. I attended his lectures at the GATF shows in the late 1990s/early 2000s. He was the first graphic arts guy who was willing to tell the truth about the 300 dpi myth and back it up with proof.

I will spare you the gory details, but suffice it to say that with digital screening methods, 200 dpi 1:1 scans of prints used for color separations rendered for 150 lines-per-inch 4-color offset printing are indistinguishable from 300 dpi scans. What Taz knew was that in the NYC magazine pre-press prep industry, 300 dpi evolved from EDITORIAL needs. The idea was that if you told a scanning house (later, photographers) to supply 300 dpi images, and later found that you had to enlarge an image 33% to fit it into a layout design change, you could do so without re-scanning and without quality loss. That was important in the days when scanning and color separations were done by separate companies in the ad and journalism businesses! Separations were expensive sets of film negatives, and the time it took to turn them around was ridiculous by today's standards.

In the photo business, we care primarily about pixels once we have an image from a camera or a scanner driver. (Dots come up in reference to scanners and printers, the analog devices at the start and finish of the process.) Back in the early days of digital imaging at Kodak, they ran hundreds of tests to determine what image density in pixels would yield what they called "extinction resolution." That's the point at which the human eye cannot detect more detail if you represent an image with more pixels.

It is all based upon print size and image content. An 8x10 inch print, viewed at its diagonal distance of 12.8 inches, requires a minimum of 240 original, from-the-scanner-driver or from-the-camera-bitmap pixels per linear inch, to contain sufficient detail. If you add more pixels to represent the same scene, there is no additional perceivable detail *on that 8x10 print.* You might see it in a larger print, however.
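The diagonal-distance arithmetic above can be checked with a short sketch. The one-arcminute-per-pixel acuity figure used here is a common textbook assumption, not Kodak's actual test method; it lands in the same ballpark as the empirical 240 PPI number:

```python
import math

def diagonal_inches(width_in, height_in):
    """Print diagonal; the post treats this as the standard viewing distance."""
    return math.hypot(width_in, height_in)

def acuity_ppi(viewing_distance_in, arcmin_per_pixel=1.0):
    """PPI at which one pixel subtends the given visual angle.

    Assumes roughly 20/20 acuity (one arcminute per pixel); a textbook
    approximation, not the lab's empirical method.
    """
    pixel_size_in = viewing_distance_in * math.tan(math.radians(arcmin_per_pixel / 60.0))
    return 1.0 / pixel_size_in

d = diagonal_inches(8, 10)  # ~12.8 inches, matching the post
print(f"8x10 diagonal: {d:.1f} in; acuity limit: {acuity_ppi(d):.0f} PPI")
```

The acuity estimate comes out near 270 PPI, the same order as the 240 PPI extinction figure quoted above.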

On either side of 8x10 magnification, there is no direct relationship between PPI requirements and print size. In other words, you don't need 480 pixels per inch for a 4x5, and you may need more than 120 PPI for a 16x20. Our tests in the lab in the early 2000s showed that a 4x5 needs about 325 PPI for maximum detail, while a 16x20 can look good with 180 PPI or more, depending upon the nature of the subject. We rendered group photos of graduating classes from the largest original files we had, because head sizes on large prints (20 to 30 inches wide) could be as small as 1/8 inch!

We had tables of minimum pixel sizes for images sent to the lab for specific print sizes. Our "standard" for most submissions was enough pixels to make 250 PPI at 8x10. That meant a 6 MP camera was the bare minimum requirement for school portraits up to 10x13 inches. We used 8.2 MP Canon EOS 20D bodies on our first mass rollout of digital gear to our employee photographers. Later 15 to 21 MP cameras did not improve that product, but they did allow better looking large group photos.
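The pixel-table logic above can be sketched as follows. The PPI targets are the ones reported in this post; the helper function is hypothetical:

```python
# Target PPI per print size, as reported in the post: ~325 for maximum
# detail at 4x5, 250 standard at 8x10, ~180 acceptable at 16x20.
PPI_TARGETS = {(4, 5): 325, (8, 10): 250, (16, 20): 180}

def min_pixels(width_in, height_in, ppi):
    """Minimum pixel dimensions and megapixels for a print at a target PPI."""
    w_px, h_px = int(width_in * ppi), int(height_in * ppi)
    return w_px, h_px, w_px * h_px / 1e6

for (w, h), ppi in PPI_TARGETS.items():
    w_px, h_px, mp = min_pixels(w, h, ppi)
    print(f"{w}x{h} at {ppi} PPI: {w_px} x {h_px} px = {mp:.1f} MP")
```

An 8x10 at 250 PPI works out to 2000 x 2500 px, about 5 MP of original pixels, consistent with a 6 MP camera being the bare minimum described above.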

It is important to note that standard viewing distance for prints is one to one and a half times the diagonal dimension of the print, with the near limit being capped at 8 inches in most cases for small prints. Standard viewing distance will normally allow viewing the entire print image within one's visual field. When that truly is the case, a file that makes a great 8x10 will make a nice 16x20, even at 120 to 125 PPI, so long as you don't get closer to the print than 26"! A head and shoulders portrait image might be fine, viewed at that distance, but a group photo of 180 seniors in a graduating class needs full 250 PPI resolution (again, from original pixels generated by a camera or a JPEG from a post-processed raw file, with no interpolation or down-sampling). That's because customers always PIXEL PEEP group photos.

When you make five million school portrait packages a year, you optimize everything. The transition from optically printed film to scanned film printed digitally to digital images printed digitally was EXPENSIVE, costing us many millions of dollars over a 15 year period. So we tested until we got it right.

The biggest thing we learned was not to trust anyone's pre-conceived notions or rules of thumb. We trusted groups of employees viewing sample press sheets and photo prints in double-blind scientific tests.

The funniest test we did was the render test of TIFF vs JPEG for color separations. None of the printing plant managers could see any differences between press sheets made from TIFF files, and press sheets made from JPEGs. At the time, TIFF was considered a "sacred" file format. So the next year, we saved about 80% of the rendering time for school portrait panels in yearbooks across seven printing plants. We also did not need seven new servers the yearbook plants were planning to buy the next year.

Then, by switching to *local* RGB to CMYK color separations at the yearbook plants instead of in the photo plants, we had the added benefit of better color from custom ICC profiles made for local press and paper pairings. The "expert" at corporate who had resisted these changes for five years found another job.

Test everything. Old myths die hard.

Reply
Mar 7, 2022 10:38:28   #
Ysarex Loc: St. Louis
 
burkphoto wrote:
It is important to remember that the REASON there's more DOF at the same aperture on m43 is that for the same field of view, you have to use half the focal length. If you were to use a 24mm lens on full frame and a 25mm lens on m43, print an 8x10 from m43 and a 16x20 from full frame, then cut the center 25% area out of the full frame image, so you had the same field of view as the m43 print, you would see nearly identical DOF at the same aperture.

If you crop the FF down then you're no longer comparing the FF sensor output. You're comparing same with same, which is expectedly the same.

To make an appropriate comparison between DOF from different format cameras you need to control the comparison by "taking the same photograph." To take the same photograph with each camera you need to match field of view and perspective and if DOF is what you want to test for then same f/stop.

Taking the same photo with the same field of view, perspective and f/stop, a smaller sensor camera will render more DOF than a larger sensor camera. If you look at the math formulae used to calculate DOF, you'll find there is a variable in the formulae that accounts for sensor size. JD750 is basically correct.

Reply
 
 
Mar 7, 2022 10:54:29   #
Ysarex Loc: St. Louis
 
tgreenhaw wrote:
The second image is higher resolution...

The second image is from a 10 megapixel P&S compact camera with a 5.2 - 15.6mm 3X zoom lens and tiny 1/1.7 sensor. The first image is cropped and then substantially downsized from a 46 megapixel Z7 with the 24-70mm f/2.8 Nikon 3X zoom and FX sensor.

Reply
Mar 7, 2022 10:57:12   #
burkphoto Loc: High Point, NC
 
Ysarex wrote:
If you crop the FF down then you're no longer comparing the FF sensor output. You're comparing same with same which is expectedly the same.

To make an appropriate comparison between DOF from different format cameras you need to control the comparison by "taking the same photograph." To take the same photograph with each camera you need to match field of view and perspective and if DOF is what you want to test for then same f/stop.

Taking the same photo with the same field of view, perspective and f/stop, a smaller sensor camera will render more DOF than a larger sensor camera. If you look at the math formulae used to calculate DOF, you'll find there is a variable in the formulae that accounts for sensor size.


It is accounting for magnification. YES, if I compose a scene with a 25mm m43 lens on a 25MP GH6, then compose the same scene with a 50mm lens on a 24MP full frame camera, at f/2.8 on m43 and f/5.6 on full frame, the DOF will be about the same. Or if both are at f/2.8, the full frame scene definitely has LESS DOF, which is either better or worse depending on what I want for the scene.

Because the longer lens has twice the magnification, we are able to put the same composition on full frame that the shorter lens puts on m43. It is the magnification (crop, in some folks' parlance) that creates the DOF difference.

All of this is discussed at length in one of my Time-Life Library of Photography books that illustrated the difference when using various film formats and focal lengths. f/64 on 8x10 is like f/16 on 4x5 is like f/8 on 6x8... when you change the focal lengths — the magnifications — among formats to yield the same composition. It's all the same Tri-X film, though.
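The m4/3 vs full-frame pairing above follows the usual equivalence rule: matching framing scales the focal length by the crop factor, and matching DOF scales the f-number by the same factor, which keeps the entrance pupil diameter (focal length divided by f-number) constant. A minimal sketch; the function is illustrative, not a standard API:

```python
# Equivalence sketch: for the same framing and final print size, DOF is
# matched across formats when both focal length and f-number scale with
# the crop factor, keeping focal_length / f_number (the entrance pupil
# diameter) constant.

def full_frame_equivalent(focal_mm, f_number, crop_factor):
    """Translate a smaller-format lens setting to its full-frame equivalent."""
    return focal_mm * crop_factor, f_number * crop_factor

# The example from the post: 25mm f/2.8 on m4/3 (crop factor ~2.0)
eq_focal, eq_fnum = full_frame_equivalent(25, 2.8, 2.0)
print(f"25mm f/2.8 on m4/3 ~ {eq_focal:.0f}mm f/{eq_fnum:.1f} on full frame")
```

This reproduces the 50mm f/5.6 pairing described above.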

Reply
Mar 7, 2022 11:06:37   #
Ysarex Loc: St. Louis
 
burkphoto wrote:
It is accounting for magnification. YES, if I compose a scene with a 25mm m43 lens on a 25MP GH6, then compose the same scene with a 50mm lens on a 24MP full frame camera, at f/2.8 on m43 and f/5.6 on full frame, the DOF will be about the same. Or if both are at f/2.8, the full frame scene either has better or worse depth of field, depending on what I want for the scene. It definitely has LESS DOF.

Because the longer lens has twice the magnification, we are able to put the same composition on full frame that the shorter lens puts on m43. It is the magnification (crop, in some folks' parlance) that creates the DOF difference.

Yes, DOF can be expressed in simplified form as a function of magnification and f/stop. But in practice we need to be able to calculate it in the context of normal use. So simple question then: The size of the sensor is a variable in the math formulae we use to calculate DOF. Why is that variable there in the formulae if the sensor size is not a factor?

Reply
Mar 7, 2022 12:21:05   #
burkphoto Loc: High Point, NC
 
Ysarex wrote:
Yes, DOF can be expressed in simplified form as a function of magnification and f/stop. But in practice we need to be able to calculate it in the context of normal use. So simple question then: The size of the sensor is a variable in the math formulae we use to calculate DOF. Why is that variable there in the formulae if the sensor size is not a factor?


Only because it is less confusing for lay people to think that way.

At the lab where I worked, we had charts equating film and sensor formats to focal lengths required for equivalent fields of view. Along with each recommended lens for each film or sensor format, we also had the recommended working aperture for portraits at a six foot distance from the camera, and for various full length prom and team group scenarios. Then we had recommended lighting power profiles for portrait and group setups... and charts of recycle times at quarter, half, and full power...

We had to do that because our "photographers" were salespeople and customer service folks first, photographers last. The product had to be uniform, so it matched the advertising flyers. The technical stuff was pre-formulated. But it was all based on magnification.

Reply
 
 
Mar 7, 2022 12:43:03   #
cactuspic Loc: Dallas, TX
 
Ysarex wrote:
Yes, DOF can be expressed in simplified form as a function of magnification and f/stop. But in practice we need to be able to calculate it in the context of normal use. So simple question then: The size of the sensor is a variable in the math formulae we use to calculate DOF. Why is that variable there in the formulae if the sensor size is not a factor?


I think there is an implied step using the concept of equivalent field of view. If you use the equivalent field of view, you would use a wider-angle lens with the smaller sensor, which in turn would have greater depth of field at the same f/stop than the full-frame image. I am not sure what happens visually to perceived DOF when you print and have to enlarge the smaller-sensor image more than the full-frame image to get the same size print.

Reply
Mar 7, 2022 12:47:16   #
Ysarex Loc: St. Louis
 
burkphoto wrote:
Only because it is less confusing for lay people to think that way.

No. It's because it's useful to photographers who take photographs.

For example, it is correct that, given the same perspective, framing, and entrance pupil diameter, all cameras produce the same DOF. That's very simple and very useless: look at any of your lenses for engraved entrance pupil diameters.

We have developed definitions and formulae to correctly model the behavior of DOF using variables that are available and useful to us as photographers. We use f/stop in the formulae, not entrance pupil diameter. That's not for lay people; that's for us.

In those formulae sensor size is a determinant variable. That's how we've defined it. We calculate DOF using values for:

focal length
subject distance
f/stop
circle of confusion (predicated on sensor size).

I redefined DOF above in a useless way with the variables perspective, framing and entrance pupil. It's interesting and informative to see it that way, but useless and inappropriate in practice. We've defined DOF to be useful in practice. If you want to change how it's defined, you've got a lot of work to do; until then, sensor size remains a determinant factor.
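The four variables listed above plug directly into the standard hyperfocal-distance formulas. A minimal sketch; the diagonal/1500 circle-of-confusion convention and the sensor dimensions are common assumptions, not anything from this thread:

```python
import math

def coc_mm(sensor_w_mm, sensor_h_mm, divisor=1500):
    """Circle of confusion predicated on sensor size (diagonal / 1500)."""
    return math.hypot(sensor_w_mm, sensor_h_mm) / divisor

def dof_mm(focal_mm, f_number, subject_dist_mm, coc):
    """Near limit, far limit, and total DOF via hyperfocal-distance formulas."""
    hyperfocal = focal_mm ** 2 / (f_number * coc) + focal_mm
    near = (subject_dist_mm * (hyperfocal - focal_mm)
            / (hyperfocal + subject_dist_mm - 2 * focal_mm))
    if subject_dist_mm >= hyperfocal:
        return near, math.inf, math.inf
    far = (subject_dist_mm * (hyperfocal - focal_mm)
           / (hyperfocal - subject_dist_mm))
    return near, far, far - near

# "Same photograph" on two formats: same framing, perspective, f/stop.
ff = dof_mm(50, 2.8, 3000, coc_mm(36, 24))      # full frame, 50mm at 3 m
m43 = dof_mm(25, 2.8, 3000, coc_mm(17.3, 13))   # m4/3, 25mm at 3 m
print(f"Full frame total DOF: {ff[2]:.0f} mm")
print(f"m4/3 total DOF:       {m43[2]:.0f} mm")
```

With the smaller sensor, total DOF comes out roughly twice as deep at the same f/stop, which is the behavior both posters agree on.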

Reply
Mar 7, 2022 12:48:40   #
Ysarex Loc: St. Louis
 
cactuspic wrote:
I think there is an implied step using the concept of equivalent field of view. If you use the equivalent field of view, you would use a wider angle lens with the smaller sensor, which in turn would have greater depth of field at the same f/stop than the full framed image. I am not sure visually what happens to perceived DOF when you print and have to enlarge the cropped frame image more than the full framed image to get the equivalent size print.

See post: https://www.uglyhedgehog.com/t-731170-5.html#12981232

Reply
Mar 7, 2022 13:07:57   #
burkphoto Loc: High Point, NC
 
Ysarex wrote:
No. It's because it's useful to photographers who take photographs.

For example, it is correct that, given the same perspective, framing, and entrance pupil diameter, all cameras produce the same DOF. That's very simple and very useless: look at any of your lenses for engraved entrance pupil diameters.

We have developed definitions and formulae to correctly model the behavior of DOF using variables that are available and useful to us as photographers. We use f/stop in the formulae, not entrance pupil diameter. That's not for lay people; that's for us.

In those formulae sensor size is a determinant variable. That's how we've defined it. We calculate DOF using values for:

focal length
subject distance
f/stop
circle of confusion (predicated on sensor size).

I redefined DOF above in a useless way with the variables perspective, framing and entrance pupil. It's interesting and informative to see it that way, but useless and inappropriate in practice. We've defined DOF to be useful in practice. If you want to change how it's defined, you've got a lot of work to do; until then, sensor size remains a determinant factor.


I think you completely misunderstand me. When I calculate DOF, I plug focal length, distance, f/stop, and camera model (circle of confusion) into my DOFC app on the iPhone... just as you mention it above.

But I don't fool myself. It is primarily the amount of lens magnification needed to cover the film or sensor area that yields shallower and shallower depth of field as the lens gets longer. The larger the light-sensing area, the longer the lens must be for the same field of view. The longer the lens, the greater the magnification. I don't know why that is difficult to understand.

Reply
 
 
Mar 7, 2022 14:45:05   #
Ysarex Loc: St. Louis
 
burkphoto wrote:
I think you completely misunderstand me. When I calculate DOF, I plug focal length, distance, f/stop, and camera model (circle of confusion) into my DOFC app on the iPhone... just as you mention it above.

But I don't fool myself. It is primarily the amount of lens magnification needed to cover the film or sensor area that reveals shallower and shallower depth of field as the lens gets longer. The larger the light sensing area, the longer the lens must be for the same field of view. The longer the lens, the greater the magnification. I don't know why that is difficult to understand.

I don't think I'm misunderstanding you. I originally responded to what you replied to JD750 and not to what you're saying now.

JD750 said; "Smaller sensors have more DOF at the same Aperture than larger sensors."

You responded; "It is important to remember that the REASON there's more DOF at the same aperture on m43 is that for the same field of view, you have to use half the focal length...

Less magnification used for the same field of view is what provides the additional DOF on smaller formats. It's not the sensor itself." [my bold]

JD750 was correct and you were wrong. The sensor itself is a determinant factor. Now you've changed what you originally called "the REASON" to, "It is primarily..."

We don't need to work out what percent of the difference is due to the focal length change and what percent is due to the sensor size, but you originally dismissed the sensor size as not a factor. It is. And it's not just a case of the sensor size requiring a focal length change; if that were true, the math would work without a value for the sensor size -- it doesn't.

Reply
Mar 7, 2022 14:54:02   #
cactuspic Loc: Dallas, TX
 
burkphoto wrote:


Our tests in the lab in the early 2000s showed that a 4x5 needs about 325 PPI for maximum detail,



Bill, is that because we tend to look at 4x5's at pixel-peeping distances? The reason I ask is that one of the things I do with my photography is to focus stack plant macros with a great deal of fine texture and detail and then pixel peep the print. I want to be able to see the number, arrangement, and texture of the fine spines, trichomes, and even cells in my botanicals. I am not interested in the least in what is required for printing if viewed at standard distances. I intend to pixel peep at the outset, as will much of my clientele. When you were experimenting with pixel density, did you by chance discover the "extinction resolution" for pixel peepers? (And yes, I do "pixel peep" wall-sized Monets as well as viewing them from standard viewing distance.)

Irwin

Reply
Mar 7, 2022 15:59:42   #
JD750 Loc: SoCal
 
larryepage wrote:
I claim only to personally know about Nikon. But when I was a volunteer at the local arboretum, I noticed that essentially all photographers who came to photograph the flowers and plants used Nikon and essentially all who came to photograph people used Canon. (Yes...that was a few years ago and other brands were mostly "in the noise.")

I was able to visit with them and discovered that almost all, including the professionals, were shooting JPEGs even though about half did some post-processing. All were using their cameras just as they came out of the box.

My interest in this has persisted, and I continue to ask people about it when the opportunity arises. (Yes...my photography network goes way beyond UHH.) And guess what...it is mostly the same today, except that more of the people I visit with do at least some post processing. They are shocked to find that a few simple adjustments can drastically improve their photos and/or make their later work much easier.

So I've learned quite a bit about how cameras tend to perform out of the box and am confident in my statements.


I have noticed JPEGs can vary not only from manufacturer to manufacturer but also from camera model to camera model within the same brand.

JPEG output can be modified, so it's good to let people know that it is possible to adjust the JPEG settings.

Some people "only shoot raw," so for them it doesn't matter, because they (must) process every shot.

Reply
Mar 7, 2022 16:43:19   #
burkphoto Loc: High Point, NC
 
Ysarex wrote:
I don't think I'm misunderstanding you. I originally responded to what you replied to JD750 and not to what you're saying now.

JD750 said; "Smaller sensors have more DOF at the same Aperture than larger sensors."

You responded; "It is important to remember that the REASON there's more DOF at the same aperture on m43 is that for the same field of view, you have to use half the focal length...

Less magnification used for the same field of view is what provides the additional DOF on smaller formats. It's not the sensor itself." [my bold]

JD750 was correct and you were wrong. The sensor itself is a determinant factor. Now you've changed what you originally called "the REASON" to, "It is primarily..."

We don't need to work out what percent of the difference is due to focal length change and what percent is due to the sensor size but you originally dismissed the sensor size as not a factor. It is. And it's not just a case of the sensor size requiring a focal length change. If that were true then the math would work without a value for the sensor size -- it doesn't.


Okay, yes, that’s true. In my experience that is a minor influence.

Reply
Copyright 2011-2024 Ugly Hedgehog, Inc.