Lorima wrote:
I have been noticing lately that when I straighten some of my photos, the sharpness goes away. Why is this happening? I am using a Canon SX50 and Picasa as my editing program.
Picture elements that end up at a greater angle from vertical (i.e., more diagonal) will lose resolution. It is simply a matter of geometry. Suppose, just as an example for clarity, a mast sat perfectly vertical in the original photo, spanning 300 pixels over a length of 1 inch, while the horizon line was crooked. If you then rotate the picture 45º to straighten it (an exaggerated angle for example's sake, but the effect scales with the amount of rotation from 0º), the number of pixel rows spanned by the mast decreases. Let's look at it another way. If the picture grid were an 8×8 checkerboard of pixels and the vertical mast covered a whole column, the mast would occupy eight squares, but the horizon would be crooked. Now straighten the horizon so the mast tilts to 45º from vertical with its base sitting on the same corner square: over the same length it now spans only 8 × 0.707, or about 5.6, rows of the grid. You would, therefore, lose resolution, both in contrast and, potentially, in color.
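The checkerboard arithmetic above is just projecting the mast's length onto the grid rows; a couple of lines of Python confirm the figure:

```python
import math

# The 8-pixel vertical mast from the example, rotated 45 degrees:
# how many pixel rows of the grid does it still span vertically?
mast_pixels = 8
angle_deg = 45
rows_spanned = mast_pixels * math.cos(math.radians(angle_deg))
print(rows_spanned)  # about 5.66 -- the "8 x .707, or about 5.6" in the text
```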
Additionally, the edge of the mast, previously well defined (in our example) by the vertical grid structure of the sensor pixels, now requires adjacent diagonal pixels to contribute to its rendering. Thus the amount of information required to maintain a crisp line has more than doubled.
The camera's processing algorithm blends (dithers) the sensor output to render edges as close to the original subject as it can, using as much information as necessary, before writing the image to memory. When you rotate later, you no longer have the original sensor data on which to apply that algorithm again; you are stuck with the data and color assignments the camera processor already distilled from the sensor. Had the mast been at the desired angle when the shot was taken, that processor would have blended the sensor data differently.
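You can see that blending directly by rotating a toy image in code. This is a minimal sketch, not Picasa's actual algorithm: a hand-rolled bilinear rotation (one common choice of interpolation in editors) applied to a sharp one-pixel-wide line. Before rotation every pixel is pure black or pure white; afterwards the edge pixels hold in-between greys, which is exactly the softening you are noticing:

```python
import numpy as np

def rotate_bilinear(img, angle_deg):
    """Rotate a grayscale image about its centre, sampling the source
    with bilinear interpolation (a blend of the 4 nearest pixels)."""
    h, w = img.shape
    cy, cx = (h - 1) / 2, (w - 1) / 2
    theta = np.deg2rad(angle_deg)
    out = np.zeros_like(img, dtype=float)
    for r in range(h):
        for c in range(w):
            # inverse-map each output pixel back into the source grid
            y = (r - cy) * np.cos(theta) - (c - cx) * np.sin(theta) + cy
            x = (r - cy) * np.sin(theta) + (c - cx) * np.cos(theta) + cx
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            if 0 <= y0 < h - 1 and 0 <= x0 < w - 1:
                dy, dx = y - y0, x - x0
                out[r, c] = (img[y0, x0] * (1 - dy) * (1 - dx)
                             + img[y0, x0 + 1] * (1 - dy) * dx
                             + img[y0 + 1, x0] * dy * (1 - dx)
                             + img[y0 + 1, x0 + 1] * dy * dx)
    return out

# a sharp vertical white line on black: only 0.0 and 1.0 values exist
src = np.zeros((15, 15))
src[:, 7] = 1.0

rot = rotate_bilinear(src, 10)
print(np.unique(src))                  # [0. 1.]
print(((rot > 0) & (rot < 1)).any())   # True: the crisp 0/1 edge is now grey
```

The greys are the program's best guess at where the line falls between pixel centres; the original sensor data that could have resolved it is long gone.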
Thus, the higher the resolution of the original sensor, the closer you come to an image that can be rotated with minimal damage. Picture elements that were vertical (aligned with the sensor's maximum pixel density per inch) will suffer when rotated to a diagonal position. Picture elements that were diagonal and are subsequently rotated to vertical gain pixels on which the software can attempt to match the original processor's interpretation of the sensor information.
However, pixel count alone does not determine how non-destructively an image can be rotated. How finely the processor can differentiate color gradations is also critical, because a cruder color depth limits the software's ability to approximate the original information onto the new, greater or lesser, number of pixels after rotation.
Thus, rotation will always degrade an image somewhat; whether you notice depends on a number of factors: pixel count, color depth and, certainly, enlargement. If you are going to rotate an image and then display it on a monitor or the Internet, rotate it at maximum resolution first, and only afterwards reduce the pixel count to match your screen's native density or the common Internet standard (72 dpi) for the average monitor. That way the rotating program has the most information possible from which to approximate the original color gradations.
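Picasa itself isn't scriptable, but the rotate-first, shrink-second order of operations can be sketched with the Pillow library in Python. The 2º angle, the 4000×3000 stand-in image, and the 1200-pixel target width are all made-up values for illustration:

```python
from PIL import Image

# stand-in for the full-resolution camera JPEG
full = Image.new("RGB", (4000, 3000), "gray")

# 1) straighten at full resolution, with good interpolation
straight = full.rotate(2, resample=Image.BICUBIC, expand=True)

# 2) only then shrink to the display size
target_w = 1200
target_h = round(straight.height * target_w / straight.width)
small = straight.resize((target_w, target_h), Image.LANCZOS)

print(small.size)  # width is 1200; height keeps the aspect ratio
```

Doing the resize first would throw away most of the pixels before the rotation algorithm ever gets to use them.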