jackm1943 wrote:
I.e., get a good zoom lens and crop in-camera as much as possible? 😃
On the photography side of things, that's a start. But I was referring to the age-old misunderstanding of the difference between dots and pixels.
Dots are physical points... Pixels are numbers. Put yer boots on, it's gonna get deep!
The offset, letterpress, gravure, and other ancient printing industries (the Graphic Arts industry, as it were) talk about DOTS. They use various halftone technologies to print fine dots of varying sizes on paper.
Before digital, those dots were created by finely lined screens overlaid on high contrast black-and-white film during a copy exposure of a print or transparency. The developed negative was called a halftone because it had no real GRAY in it. All tones of the original were represented with solid black dots and the clear space between them. The bigger the dot, the darker the tone. The bigger the clear area, the lighter the tone.
Halftones SIMULATED continuous tone black-and-white originals with dots. The halftone negative would be used to print an offset plate for a press. It would be printed with black ink to simulate a black-and-white photo.
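If you want to see that dot-size-to-tone relationship as arithmetic, here's a little Python sketch. The uniform round-dot model and the function name are my own simplification, not any vendor's actual screening algorithm:

```python
import math

def dot_radius(gray, cell=10):
    """Radius of a black halftone dot inside a cell x cell area whose
    ink coverage matches the tone: darker tone -> bigger dot."""
    coverage = 1.0 - gray / 255.0   # fraction of the cell to be inked
    # dot area = coverage * cell area; solve pi * r^2 = coverage * cell^2
    return math.sqrt(coverage * cell * cell / math.pi)

# a middle gray (128) needs roughly half the cell inked
print(round(dot_radius(128), 2))   # -> 3.98
```

Real screens vary dot shape and screen angle per ink, but the principle is the same: tone is ink coverage, and coverage is dot area.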
Four-color reproduction (CMYK) used four halftones made through four color filters. The resulting halftone plates printed yellow, magenta, and cyan, plus black (the Key color, the K in CMYK). Those are the subtractive primaries that create a simulation of color by subtracting and reflecting light from paper.
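The subtractive idea can be sketched with the classic naive RGB-to-CMYK formula. Fair warning: real separations are built with ICC profiles and black-generation (GCR/UCR) curves, so treat this as a toy illustration, not press-ready math:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0-255) -> CMYK (0-1) conversion: the subtractive
    primaries are the complements of additive R, G, B, with black (K)
    pulled out so the three inks don't have to build dark tones alone."""
    c, m, y = 1 - r / 255, 1 - g / 255, 1 - b / 255
    k = min(c, m, y)          # black replaces the shared gray component
    if k == 1.0:              # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return tuple(round((x - k) / (1 - k), 3) for x in (c, m, y)) + (k,)

print(rgb_to_cmyk(255, 0, 0))   # pure red -> magenta + yellow, no cyan, no black
```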
When digital came along, the dots were created by scanning copy with a device that "broke up" the page into a grid of evenly spaced cells (called dots). A "300 dpi" scan at 1:1 would capture a grid of 300 samples per linear inch, both across and down the page. The analog voltage sampled from each cell (or dot, a spot on the page with a brightness value) would be digitized with an analog-to-digital converter, and the value stored in an array of pixel data. That data would then direct a modulated laser to create a halftone negative (or four, for color separations). The negatives made with scanning technology were better and sharper than those made with cameras, film, and filters, because we suddenly had finer control. But the output was still dots on film.
The graphic arts industry's misunderstanding of dots and pixels derives from the long-standing practice of making halftones and scans at 1:1 reproduction size. If I scan an 8x10 at 300 dpi, I get output from the scanner driver of 2400 by 3000 PIXELS. Those are just numbers. THEY ARE NOT DOTS. You can use dots of ink, or dots on a monitor, or a modulated laser exposing spots on a negative or plate, to reproduce them at any size! They can be sub-sampled to a smaller image, or interpolated to a larger one.
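The 1:1 arithmetic is trivial, which is exactly why it caused so much confusion. Here it is as a quick Python sketch (the function name is mine):

```python
def scan_pixels(width_in, height_in, ppi):
    """Pixel dimensions a scanner driver outputs for a given original
    size and sampling resolution (samples per linear inch)."""
    return round(width_in * ppi), round(height_in * ppi)

print(scan_pixels(8, 10, 300))   # -> (2400, 3000): the 8x10 example above
```

At 1:1 the numbers happen to coincide with the dot grid, which is why "dpi" got glued onto pixel data in the first place.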
It is this concept of pixels as numbers, not hard physical dots, that makes the entire digital world so useful. Once I have numbers, I can represent them with dots to make an image any size I want it to be. But that involves some responsibility for the INPUT to the transformation process.
YES, I can make a 320x400 pixel image into an 8x10 print. But it will look terrible, because I am interpolating it to 2400x3000 pixels (if 300 dpi is what the designer calls for), and then interpolating that into frequency-modulated inkjet printer output at 1440 by 2880 dpi, or offset press output at 150 lines per inch. Each of those 320x400 pixels feeds an intermediate grid of 2400x3000 pixels (equivalent to dots, at 1:1, because it simulates 300 dpi). Then the printer translates those *virtual dots* into smaller ink droplets and sprays them on paper. Or the raster image processor turns those *virtual dots* into a 150-line halftone on a printing plate.
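You can put a number on just how starved that 320x400 file is at print size. A small Python sketch (the helper name is mine):

```python
def effective_ppi(px_w, px_h, print_w_in, print_h_in):
    """Pixels per inch actually available at a given print size,
    before any interpolation invents data that was never captured."""
    return min(px_w / print_w_in, px_h / print_h_in)

# the 320x400 file from the text, forced onto an 8x10 print:
print(effective_ppi(320, 400, 8, 10))      # -> 40.0 real PPI
# versus a properly sized 2400x3000 file:
print(effective_ppi(2400, 3000, 8, 10))    # -> 300.0 real PPI
```

Interpolation can multiply the pixel count, but it cannot raise that effective number; 40 real PPI stays 40 real PPI no matter how many virtual dots you spray.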
But think about that! We started with only 320x400 pixels. That is going to make an excellent postage stamp-sized yearbook portrait, not an 8x10 reproduction. To get an 8x10 to look like a decently made photograph, you need at least 240 pixels per linear inch of photo paper, or 200 PPI of data to create a halftone or separation, and those pixels have to be generated directly from the sensor output in the camera, whether by the camera's JPEG engine or post-processing software. Without enough data to represent the detail of the original scene, the output will be jagged. Stair-stepped. Blurry.
When my friend who ran the pre-press area at our yearbook plant visited me at the photo lab's pre-press prep area for memory books (basically, paperback yearbooks for elementary schools), we had a long talk about this. Phil had grown up with his dad in the business, and knew only the graphic arts side of the story. I knew both, having worked in both businesses for 20 years at that point. So I explained to him the dots vs pixels conundrum.
He understood immediately. I think he already knew intuitively, anyway, but needed better terminology to explain to his managers and visiting customers why certain images looked terrible and could not be made to look good. We sat down at a flatbed scanner and Mac and created samples to illustrate our proposed requirements, and to demonstrate a new workflow process he wanted to use. Our Kodak technical service rep was in the building to see me about some film scanner issues. So he stopped by and joined the conversation, confirming and correcting this and that, and calling his experts in Rochester to get more information.
By the end of the day, we had what we needed to talk to managers and customers about dpi vs PPI. The important thing was to clarify that the input to our yearbook production process needed to be based on the source: camera or scanner output size in pixels, relative to final reproduction size. We had to define for customers how large a file we needed for great reproduction. The result was a chart of minimum PIXEL DIMENSIONS for various reproduction sizes, not a simple "dpi" spec. Ultimately, we surprised a lot of folks by demonstrating with samples that you might need MORE or FEWER than 300 pixels per inch to make certain products look good at certain sizes.
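A chart like that is easy to generate once you fix a PPI floor. The sizes and the 240 PPI floor below are illustrative assumptions for the sketch; the original chart's actual numbers aren't reproduced in the text:

```python
def min_pixels(sizes_in, ppi_floor=240):
    """Minimum source pixel dimensions per print size, given a PPI floor.
    The 240 default and the size list are assumptions, not the shop's chart."""
    return {f"{w}x{h}": (round(w * ppi_floor), round(h * ppi_floor))
            for w, h in sizes_in}

chart = min_pixels([(3.5, 5), (4, 6), (5, 7), (8, 10)])
for size, (pw, ph) in chart.items():
    print(f"{size:>7} in -> {pw} x {ph} px minimum")
```

The point of the real chart was the same as this toy one: customers hand you pixel dimensions, not "dpi," and you compare them against the floor for the product size they ordered.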
We quit talking about DOTS in front of customers, altogether! We started relating everything back to the pixels coming from the camera's JPEG engine, or a raw file rendered to a bitmap (TIFF, JPEG...), or the OUTPUT file from a scanner. PIXELS. Not dots. Dots are physical (sensels, scanner samples, laser exposures on silver halide paper, inkjet printer droplets...). Pixels are digitized, virtual references to original sensel output voltages and scanner sample voltages. You can *represent* pixels with various quantities of dots on an output medium, but that happens in the output device. In between, we deal with numbers. No longer is there a required 1:1 relationship between a scanner sample (dot or cell area of original copy) and a dot in a halftone negative.
Dr. Nicholas Negroponte of MIT once said, in a lecture at the Photo Marketing Association International conference, "Bits beat atoms." This was his summary of the idea that abstract numbers are more useful than fixed physical forms, because abstractions can be distilled into any number of different physical forms. He pointed back to the ancient Greek philosopher Plato, whose concept of "forms" applied rather nicely.
Many of us who grew up in the analog days of printing and photography had a hard time making the transition. But those of us who lived it have come out on the other side knowing that it was enormously "freeing" and "empowering." It absolutely blew some minds that we could print five different products at the same time, from the same file on our server. We no longer had to beat up film negatives by moving them from special printer to special printer. One digital printer could replace a dozen different optical printers that only made one specific product.
But to make it all happen successfully, we had to know how many pixels we needed to make the particular products we made look good at a given size.