Mr.Ft wrote:
My question involves my Canon 80D and Photoshop. Now I'm very new to Photoshop, but no matter what resolution I set my camera to, when I upload to Photoshop the resolution is 72 dpi. Is this a function of my camera or of Photoshop? Is there a way to raise the dpi or resolution? Any help would be greatly appreciated!
GAAAA! This topic is beaten to death on a weekly basis here on UHH. PLEASE, just do a search. That said, this is a summary of the basics. It's more than you asked for, but you need to know it all (and more).
72dpi is an EXIF or TIFF file header (metadata) field value that tells you NOTHING about the image. It simply tells graphic arts pagination software "how big to make the pixels" when representing them with dots on a printed page. 72dpi will make a very large print. 288dpi (four times the value) will make a print 1/4 that size on the diagonal, with 1/16 the area.
THE ONLY THING THAT REALLY MATTERS is the file dimension in *pixels* — THAT tells you the true potential of the file. You can reproduce ANY size file at "72dpi". The number of pixels tells you how big it will be at that resolution setting. 3000x2000 pixels at 72dpi makes a print 3000/72 by 2000/72, or 41.667 x 27.778 inches. Change the resolution without resizing (which JUST changes the EXIF resolution header value and NOTHING else), and suddenly the print is a different size. The same 3000x2000 pixels at 300dpi makes a print 10 x 6.667 inches. BOTH PRINTS appear equally sharp and detailed, but ONLY when viewed from their diagonal distances (50 inches away from the 72dpi image and 12 inches away from the 300 dpi image).
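The arithmetic above can be sketched in a few lines of Python (same 3000x2000 example; `print_size` is just a name I'm using for illustration, nothing Photoshop-specific):

```python
import math

def print_size(px_wide, px_high, dpi):
    """Print dimensions in inches implied by a pixel count and a dpi header."""
    width = px_wide / dpi
    height = px_high / dpi
    diagonal = math.hypot(width, height)  # the "natural" viewing distance
    return width, height, diagonal

# The same 3000x2000-pixel file with two different resolution header values:
w, h, d = print_size(3000, 2000, 72)     # ~41.7 x 27.8 in, ~50 in diagonal
w2, h2, d2 = print_size(3000, 2000, 300)  # 10 x ~6.7 in, ~12 in diagonal
```

Only the header value changed; the pixels, and therefore the total detail, are identical.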
You can change the "resolution" in Photoshop in one of three ways. You can change JUST the resolution header without resizing the file. You can change BOTH the resolution header AND the pixel dimensions of the file. And you can change the resolution without resizing, but then CROP the file. Go to Image —> Image Size for a dialog with all of those controls except cropping. NOTE ALL the drop-down menu options. Cropping is done with the crop tool (the "scale-o-graph") in the tool palette; plenty of options appear at the top of your screen when you choose that tool.
There are, of course, many more subtle issues with resizing and cropping, most of which have to do with the loss of detail. You can spend many hours reading about them, or you can experiment to see what works acceptably well for you, or both.
One more big thing... Try to think of PIXELS as having no dimensions. They are just numbers — DATA in a FILE. Try to think of DOTS as having physical dimensions! A scanner scanning at 300dpi (300 physically discrete samples per inch of the original) creates a FILE with (300 x the size of the scan) pixels. So a 4x5 print scanned at 300dpi yields a file with 1200x1500 pixels. The scanner software probably sets the file header resolution to 300dpi, but you can change it as mentioned earlier.
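The scanner arithmetic above is trivial, but worth seeing spelled out (`scan_pixels` is a hypothetical name for illustration):

```python
def scan_pixels(width_in, height_in, samples_per_inch):
    """Pixels produced by a scanner taking that many physically discrete
    samples per inch of the original: dots in, pixels (pure data) out."""
    return (width_in * samples_per_inch, height_in * samples_per_inch)

# A 4x5-inch print scanned at 300 dpi:
scan_pixels(4, 5, 300)  # → (1200, 1500)
```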
A MONITOR displays an image with an array of red, green, and blue dots. MONITOR resolution can be anywhere from a few inches per dot to several hundred dots per inch, depending on the size (billboard or JumboTron vs iPhone). Regardless of that, when you view an image at "100%" each pixel in the file is displayed by one RGB monitor "dot."
Dots are used to create pixels (in a scanner) or to reproduce them (on a printer or a monitor). So don't confuse INPUT resolution in pixels per inch (PPI) with OUTPUT resolution in dots per inch (DPI). When printing, both matter, and the two are usually entirely different numbers. They are always different CONCEPTS.
INPUT resolution in PPI refers to how many original, from-the-camera or from-the-scanner (or interpolated) pixels you are spreading over each inch of printed output, REGARDLESS of the size dots used to print each pixel.
OUTPUT resolution in dpi refers to how many ink dots or LED flashes or laser pulses you are using to represent your input pixels.
An inkjet printer with 1440x2880 dpi resolution uses *up to* that many dots to represent however many pixels per inch you are feeding to the printer driver. So, *many* dots are used to represent each pixel! In that case, HOW MANY dots varies with the color and brightness of the pixel. This is called frequency modulated screening... More dots closer together yield darker colors, while a few dots, farther apart, yield lighter shades.
A mini-lab printer with 600dpi resolution uses that many dots per inch in a fixed grid pattern to represent however many pixels per inch you are feeding to the printer driver. The printer is using a laser or LED light source to expose a silver-halide based photosensitive paper.
The generally accepted number of PIXELS you need to reproduce photographic quality images (the point where adding MORE pixels would not yield any more perceivable detail in a print) is 240 PPI at 8x10 inches, assuming a viewing distance of 12.8 inches (diagonal of the 8x10 print). You need LESS input resolution for larger prints, and MORE for smaller prints. A 16x20 printed with an input resolution of 180 PPI is perfectly viewable from 26 inches, or even 20 inches. That's pretty close for a 16x20! (Photography nerds are known to get closer...)
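That rule of thumb can be sketched as a quick calculation. The assumption here (mine, not a standard formula) is that the PPI you need falls off inversely with viewing distance, anchored at 240 PPI for an 8x10 viewed from its diagonal:

```python
import math

def required_ppi(width_in, height_in,
                 base_ppi=240.0, base_distance=math.hypot(8, 10)):
    """Rough input resolution needed for a print viewed from its own
    diagonal, scaled inversely from the 240-PPI-at-8x10 baseline."""
    viewing_distance = math.hypot(width_in, height_in)
    return base_ppi * base_distance / viewing_distance

required_ppi(8, 10)   # 240 by construction
required_ppi(16, 20)  # ~120 -- so 180 PPI is ample, even somewhat closer than 26 in
```

Photography nerds who press their noses against a 16x20 will, of course, want more.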
Most graphic arts people want 300 PPI or more (they mistakenly call it 300dpi out of custom or habit in that industry). That is why Photoshop and similar programs have a resize tool... 300 PPI is usually more than enough for photos on letter-size pages. Heck, 200 PPI will do in most cases where conventional halftones or color separations are made.
I hope that helps. It's based on digital imaging experience in both the printing and photographic industries since 1979. I straddled the fence between the two, by working for a company that printed yearbooks and school portraits. Each side has plenty of myths and misconceptions to go around... some of which are touted from the podiums at prestigious international conventions.