edrobinsonjr wrote:
Sorry if this has been addressed before. Searched the site and found a few answers but still do not quite get it.
I am preparing images for an art competition for my wife. In the past the image requirements were expressed in pixels (height and width) and dots per inch. This time they have called for images to be 4x5 inches and 300 DPI. There seems to be a lot of confusion about this, myself included. I realize that the DPI unit is a print unit. My printer is specified at something like 1200 x 1200 and up to 4800 DPI in color. Is there a direct correlation between DPI and pixels?
Can I assume that the 4x5 translates to 1200px x 1500px (300 x 4 and 300 x 5)? Using IrfanView, resizing to 4x5 inches results in a badly distorted image.
I read that the long dimension should be used when resizing, with the requirement that the proportions remain the same.
Any help and enlightenment would be appreciated.
Thanks,
Ed
edrobinsonjr wrote: Sorry if this has been addressed before. Searched ...
That's absolutely correct: 4 x 5 inches at 300 pixels per inch works out to 1200 x 1500 pixels.
In photography, pixels are always the individual picture elements in a file. They are DATA. Each pixel can be sized as needed by the output device, but HAS NO SIZE IN AND OF ITSELF. Sizing instructions MAY be included in the file header, but they are most often ignored unless a page layout program uses them when the image is imported.
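You can verify that for yourself. Here is a minimal Python sketch using the Pillow library (the file names are made up for illustration): re-tagging the DPI changes only the header hint, never the pixel data.

```python
from PIL import Image

# Open an image and look at its DPI tag, if one exists. The pixel data
# is the same no matter what this tag says; it is only a sizing hint.
img = Image.open("photo.jpg")              # hypothetical file name
print(img.size)                            # the actual DATA: (width, height) in pixels
print(img.info.get("dpi"))                 # sizing hint from the header; may be None

# Re-save with a 300 DPI hint. Not a single pixel changes; the header
# now merely suggests "print 300 of these pixels per inch".
img.save("photo_300dpi.jpg", dpi=(300, 300))
```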
Dots, on the other hand, are physical. They are either tiny ink dots on paper, or laser flashes onto silver halide paper, or something physical that can be measured in microns, picoliters of ink, etc. Dots are used to REPRESENT pixels. Many dots of different colors of ink can be used to represent just one pixel, for instance.
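To put rough numbers on "many dots per pixel": if, say, a 1200 DPI inkjet prints a file laid down at 300 pixels per inch, each pixel gets a 4 x 4 cell of ink dots. A back-of-the-envelope sketch (real printer internals are more involved than this):

```python
# Rough illustration of how many physical ink dots can represent one pixel.
printer_dpi = 1200   # dots the printer can place per inch (from its spec)
image_ppi = 300      # pixels of image data spread over each printed inch

dots_across_one_pixel = printer_dpi // image_ppi   # 4 dots across
dots_per_pixel = dots_across_one_pixel ** 2        # 16 dots in the 4 x 4 cell
print(dots_across_one_pixel, dots_per_pixel)       # -> 4 16
```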
Where it gets a little tricky is with scanners, and the reasons why are historical. Scanners were introduced first into the PRINTING industry. That industry was used to dots, so it uses the term loosely to this day. Scanner drivers still refer to dpi, because they are talking about dividing up a physical dimension of the original into so many parts per inch. Before digital came along, those parts WERE dots, created by screens overlaid on graphic arts film in the back of a huge camera. So a scanner creates one pixel in a file for every "dot" (grid cell) it scans from the original. And in that sense, where a scanned dot becomes a pixel in a file, that's okay.
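In practice the scanner arithmetic is simply inches of original times the "dpi" setting, which really yields pixels per inch of the resulting file. A quick sketch with a hypothetical 4 x 5 inch original:

```python
# One pixel is created for every grid cell ("dot") the scanner samples,
# so the file dimensions are just inches times the scan resolution.
original_w_in, original_h_in = 4, 5   # hypothetical 4 x 5 inch original
scan_dpi = 300                        # the driver calls it dpi; it produces pixels per inch

pixels_w = original_w_in * scan_dpi   # 1200
pixels_h = original_h_in * scan_dpi   # 1500
print(pixels_w, pixels_h)             # -> 1200 1500
```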
But it confuses the heck out of most people...
The important thing is to realize that you need a certain INPUT DENSITY of pixels represented on an output surface. In photography, the minimum is generally specified as 240 PPI (original, scanned, or in-camera captured pixels) spread over each inch of output. In the printing industry, they use 300 PPI as a standard practice, because editors like to be able to enlarge by 40-50% without going back to the client or other source for another copy of the original.
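Put another way: divide pixels by the required PPI to see how big you can print, or multiply the print size by the PPI to see how many pixels you need. A quick sketch with a made-up 3000 x 2400 pixel file:

```python
# How big can a given file print, and how many pixels does a print demand?
pixels_w, pixels_h = 3000, 2400              # hypothetical file dimensions

print(pixels_w / 240, pixels_h / 240)        # max size at 240 PPI: 12.5 x 10.0 inches
print(pixels_w / 300, pixels_h / 300)        # at the print-industry 300 PPI: 10.0 x 8.0 inches

print_w_in, print_h_in = 4, 5                # the competition's requested size
print(print_w_in * 300, print_h_in * 300)    # pixels needed: 1200 1500
```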
Whenever you resize, you need to MAINTAIN ASPECT RATIO to keep the same proportions. If you have a 5x10 original that is supposed to fit into a 4x5 space, you can crop it (a BAD idea with artwork), fit it to 2.5x5 within the 4x5 space, or perform some compromise of crop AND fit. Your imaging software should have a "maintain aspect ratio" button, check box, or other provision. However, Microsoft products are known for allowing distortion by default! (i.e., Microslop Weird).
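The "fit" option boils down to scaling both sides by the SAME factor, the smaller of the two ratios, so neither side spills out of the box. Here is the arithmetic for the 5x10-into-4x5 example above:

```python
# Fit a 5 x 10 original into a 4 x 5 box without distortion:
# scale both sides by the same factor, the smaller of the two ratios.
orig_w, orig_h = 5, 10
box_w, box_h = 4, 5

scale = min(box_w / orig_w, box_h / orig_h)   # min(0.8, 0.5) = 0.5
fit_w, fit_h = orig_w * scale, orig_h * scale
print(fit_w, fit_h)                           # -> 2.5 5.0, the "fit to 2.5x5" case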
So if you make the long side 1500 pixels, and let the short side match the aspect ratio up to 1200 pixels, you'll give them what they want.
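And if IrfanView keeps fighting you, the whole job can be scripted. Here is a minimal Python/Pillow sketch (file names are made up, and it assumes a portrait original at or near 4:5 proportions): thumbnail() preserves the aspect ratio automatically, unlike resizing straight to 4 x 5 inches.

```python
from PIL import Image

# Resize to fit within 1200 x 1500 pixels (4 x 5 inches at 300 PPI)
# while keeping the original proportions, then tag the file as 300 DPI.
img = Image.open("artwork.tif")               # hypothetical input file
img.thumbnail((1200, 1500))                   # shrinks in place; never distorts, never enlarges
img.save("artwork_4x5_300dpi.tif", dpi=(300, 300))
print(img.size)                               # e.g. (1200, 1500) for a true 4:5 original
```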