Ugly Hedgehog - Photography Forum
Posts for: profbowman
May 4, 2024 15:05:15   #
Rudolf wrote:
Hi Richard, My work is done by hanging pendulums with an LED light or lights attached to the bottoms. Very little Photoshop is used, mainly cropping. Simple mechanics.


I have studied and taught fractals and non-linear dynamics. I like that this is done with a physical pendulum. I am thinking that at some point I did this kind of thing with a difference equation, too, but that would require me to dig through the teaching notes I have saved (not very many anymore) and find it.
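
For what it is worth, here is a minimal sketch of the kind of difference-equation version I am remembering. It is my own toy reconstruction, not Rudolf's pendulum rig, and all of the parameter values are invented: two lightly damped oscillations, one per axis, are stepped with a simple update rule, which traces out the same sort of slowly decaying loops.

import math

# Toy damped oscillator stepped as a difference equation (one independent axis).
# Parameter values are invented for illustration only.
def damped_axis(freq, steps=5000, dt=0.002, damping=0.3, x0=1.0):
    """Iterate x'' = -(2*pi*freq)^2 * x - damping * x' with simple discrete steps."""
    w2 = (2 * math.pi * freq) ** 2
    x, v = x0, 0.0
    out = []
    for _ in range(steps):
        a = -w2 * x - damping * v   # spring-like restoring force plus drag
        v += a * dt                 # difference-equation update of velocity
        x += v * dt                 # difference-equation update of position
        out.append(x)
    return out

# Two slightly detuned axes give the looping, slowly precessing light trails.
xs = damped_axis(2.0)
ys = damped_axis(2.05)
trail = list(zip(xs, ys))
print(len(trail), "points, first few:", trail[:3])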
May 4, 2024 13:47:38   #
Rudolf wrote:
No prognosis yet, nor cure.


These are indeed intriguing combinations of art and nature.

Have you talked about this type of art before? What I am wondering is whether this comes from an editing program, or whether you have code for some fractals that you use? If you are willing to share your secrets, that is. --Richard
May 1, 2024 17:14:35   #
TonyP wrote:
I'll play. C works best for me.


As several others have said, D is the best for me. I need the motion staying inside the frame rather than moving out as in A and B.
C is just going the wrong way for me. I am not sure why. Maybe, as some suggested, it is simply due to cultural tastes. I live in the US but have lived three years in Belize and two and a half in Albania. I am not sure whether those times affected me in this case or not.

BTW, I wish the photographer had caught the bird just a bit clear of the greenery. --Richard
Apr 30, 2024 17:33:18   #
CHG_CANON wrote:
Useless? yep.

1. Where are you going to find someone online to receive and print your ENORMOUS 16-bit TIFF?

2. Didn't you shoot in RAW and edit in that native RAW format with a qualified digital editor? The sRGB export to JPEG properly mapped all those colors with no color banding; hence, no issue.


Just to add a bit to Chuck's good response, let me remind those of us who edit the JPG files from our cameras, whether mostly or just from time to time, that there are a number of lossless JPEG formats to which we can save our intermediate and even final edited images.
https://www.uglyhedgehog.com/t-780424-1.html

The only thing we need to do when we need to send a copy to a friend, to print, or to post on the web is to open the lossless file and save it as a regular JPEG, or print it from our viewing program.
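
For anyone who scripts this, here is a rough sketch of that round trip in Python with the Pillow library. The file names are placeholders, and I use PNG here simply as a well-known lossless container for the working copy.

from PIL import Image

# Bring the camera JPEG into a lossless working copy (PNG used here as an example).
with Image.open("from_camera.jpg") as img:
    img.save("working_copy.png")            # no further JPEG re-compression losses

# ... open, edit, and re-save working_copy.png as many times as needed ...

# Only when sharing, posting, or printing, export a regular high-quality JPEG.
with Image.open("working_copy.png") as edited:
    edited.convert("RGB").save("for_sharing.jpg", quality=95)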
Apr 30, 2024 15:32:58   #
sippyjug104 wrote:
Good job, and may God bless him for his caring and sharing.


Thanks to you and Erich for your kind comments. I do miss being able to talk with Dad.

I keep thinking of how I might improve the image so it more fully says what I want it to. Maybe this minor change is more effective. --Richard


(attached image)
Apr 30, 2024 14:33:32   #
selmslie wrote:
Resolution needed for printing is pretty much a matter of proportions.

There is a consensus that 300ppi is the target to use when printing an 8x12" image that will be viewed from about 10-12" (25-30cm). That's about all that normal eyesight can resolve.

That works out to about 8.44MP, maybe all you will ever really need. That's because a person with normal eyesight (or reading glasses) can't look at an 8x12" image from any closer than the normal viewing distance (about the width of the print) without developing eyestrain and a headache.

Print that same 8.44MP image at 8x12 feet and view it from 12 feet away, and you still see the same level of detail. Change feet to meters and it still works.

The bottom line is that any camera that can capture an image with more than 8.44MP can print any image size you need, so long as you don't crop the image below 8.44MP.

Incidentally, Harrisonburg, VA, is just north of Fort Defiance and the Augusta Military Academy. I was there for about a year in the early 1950s. We marched in a parade in Harrisonburg. I was happy to transfer to a more comfortable school in Miami. AMA is now a museum, a pretty spooky place.


Yes, you have given us a good calculation to remember. I do remember reading somewhere about a dpi requirement of 600 being stated; maybe it was one of my book publishers. Anyway, with my poor eyesight, I can never count on framing a photo as I want it in my camera. So, I am glad that I have a 24MP camera allowing me to cut my photo dimensions from 6000 x 4000 to 3000 x 2000 pixels and still get a printable image.
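
As a quick sanity check on that (300 ppi is the figure from selmslie's post; the crop sizes are just my own example), the arithmetic looks like this:

def print_size_inches(width_px, height_px, ppi=300):
    """Largest print size, in inches, at the given pixels-per-inch."""
    return width_px / ppi, height_px / ppi

print(print_size_inches(6000, 4000))   # (20.0, 13.33) inches for the full frame
print(print_size_inches(3000, 2000))   # (10.0, 6.67) inches for the half-dimension crop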

As to your experiences with the Augusta Military Academy, as an Anabaptist Christian, I am a strong conscientious objector to participating in war. Having worked as a supervisor of boys who were defined as juvenile delinquents back in the late 1960s and seeing how the "state" and several of the other supervisors treated them, I am convinced that there are better ways to "redeem" our youth than by using aggressive force. Well, we could talk about that more, but not here. --Richard
Apr 28, 2024 21:57:30   #
Three weeks ago, my dad died at 95 years old. I was pondering what to do in honor of him. He cared for people and enjoyed talking with anyone regardless of their status in life.

So, I used my cartoon digital artistry and made this poster to honor him. --Richard

P.S. Anyone, please critique this as you wish. I keep trying to learn more in all my art, including photography and digital art.


(attached image)
Apr 27, 2024 09:48:19   #
selmslie wrote:
I have figured out what is happening in the two photo viewers. It relates to the comment above.

When I view the full image on my 2k display (23½" wide, 82ppi) with the Windows Photo Viewer or in Capture One, the complete image covers a width of about 17", about 1390 pixels out of the 1920 for the full screen. The numbers are a little different for the 4k display but the same thing happens.

In order to get 6000 image pixels down to 1390, it has to be downsized by 1390/6000, or more than four times in each dimension. Detail and sharpness are lost during this downsizing and the highlights are darkened. But the image will actually have about the same tonality and sharpness as a 17" wide print.

As I zoom into the image, the highlights appear to get brighter because there is less downsizing.

But zooming in also shows how a larger print would look. I can only make a 16x24" with my P900. A print that size would mean that anyone looking at the print will probably view it at a normal viewing distance slightly greater than the distance at which I view my desktop monitor. For a larger print, a normal viewing distance would render about the same character and apparent highlight brightness.

The Capture One export is a different issue. During export it offers the option for additional adjustments (Lightroom might also offer this):
https://static.uglyhedgehog.com/upload/2024/4/26/450682-adjustments.jpg
But doubling down on the sharpness already baked into the image (I normally leave it at Capture One's default) may not be a good idea.

I tried several alternatives - No Output Sharpening, Output Sharpening for Screen, Output Sharpening for Print and Disable All. They all do something a little different. Output Sharpening for Screen works well when reducing the image size for viewing but Output Sharpening for Print does not work well for viewing.

It seems that the best alternative for printing is to use the full image; JPEG or TIFF makes no difference. I will have to do some more prints to see if there is any difference between No Output Sharpening and Output Sharpening for Print.


Thanks for this exploration of Capture One. It seems to me to verify my conclusion that when comparing two images, one should look at both at 100% size so that no downsizing occurs. This will show the true nature of any highlight or dark areas and any fuzziness present in the image as saved in the file. If one looks at an image at larger than 100%, then the software must make up pixels that are not there and present them on the screen. If downsizing, the software removes (or somehow averages out) some of the actual data to show fewer pixels on the screen.
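
Here is a toy, one-row example of both directions, assuming the crudest possible resampling (real viewers use more sophisticated filters, but the principle is the same):

row = [10, 200, 10, 200]                      # original pixel values in one row

# Viewing above 100%: nearest-neighbour doubling just repeats existing pixels.
upsized = [v for v in row for _ in range(2)]

# Viewing below 100%: each pair of pixels is averaged into one displayed pixel.
downsized = [(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]

print(upsized)     # [10, 10, 200, 200, 10, 10, 200, 200] -- invented, not new, detail
print(downsized)   # [105, 105] -- the original 10/200 alternation is gone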

As to printing, my sense is that one needs to experiment with dpi and desired size to "see" what is the best viewing situation--details seen at a given viewing distance. --Richard
Apr 27, 2024 09:12:34   #
Bill_de wrote:
You did not take bananas into account.

---


Nor mangoes. Ha, ha!

That is the problem with metaphors. Sometimes they don't work, and one only ends up with fruit salad. --Richard
Apr 26, 2024 16:37:55   #
selmslie wrote:
I wish it were that simple, but it's not. It's not even a question of sharpening or ICC profile in the Export dialog.

If you toggle between these two images, the highlights in the leaves are brighter in the reduced (4096 wide) version.

What's more, when I display either of them in the Windows Photo Viewer, I can increase the magnification by small increments (rather than going directly to 100%) and at the first increase, the highlights in the leaves get brighter.

If it were a simple matter of averaging of the pixel values, as you suggest, the highlights should get duller, not brighter.

Since this happens in the Photo Viewer as well as in the C1 Export step, there is something else happening that I can't explain.


I was not espousing any particular method or filter to be used during resizing. I was demonstrating how downsizing four-pixel groups to single pixels results in a loss of information.

IrfanView, which I use because it gives me a lot of analytical control over all of my editing functions, has a number of resampling algorithms that the user can choose from during resizing:

Hermite (fastest)
Triangle (bilinear)
Bell
Mitchell (bicubic)
B-Spline
Lanczos (slowest)

I use Lanczos because it seems to give the best results, and modern PCs are not hampered by slow processors.
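
The same choice exists if you resize from a script; for example, with Python's Pillow library (file names are placeholders, and on older Pillow versions the constant is Image.LANCZOS rather than Image.Resampling.LANCZOS):

from PIL import Image

# Downsize to half the pixel dimensions using the Lanczos filter.
with Image.open("original_6000x4000.jpg") as img:
    half = img.resize((img.width // 2, img.height // 2), Image.Resampling.LANCZOS)
    half.save("half_size.png")   # save the downsized working copy losslessly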

Looking at Capture One, I noticed that it gives the user a lot of choices when exporting image files. So, it is important that you give us all of those parameters.

One that is most important to me is the 80% quality factor for JPEG exported images. As has often been noted here on the UHH forums, if this quality value is kept at 90-100%, there will not be a noticeable reduction in visual quality. Using a value of 80 is almost guaranteed to give you artifacts, loss of sharpness, and other data losses. Even though Capture One has 80 as the default, change it to 100 while you are editing or printing.

There are a number of other file formats that produce smaller files without losing any information. Two examples are JPEG 2000 and PNG. --Richard
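
An easy way to see the quality/size trade-off for yourself, again sketched with Pillow (file names assumed; the exact byte counts depend on the image):

import os
from PIL import Image

# Export the same image at several JPEG quality settings and compare file sizes.
with Image.open("working_copy.png") as img:
    rgb = img.convert("RGB")
    for q in (80, 90, 100):
        name = f"export_q{q}.jpg"
        rgb.save(name, quality=q)
        print(name, os.path.getsize(name), "bytes")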
Apr 26, 2024 11:47:42   #
selmslie wrote:
That's the crux of this subject, whether resizing [down] will result in a loss of detail that is reflected in the final result. The question is whether that becomes apparent in the final image.

I think that it does if fine detail is a significant feature of the original image. For some subjects, it is. For others it is not.


My point is that in general, when one reduces the size of an image, one loses pixels and therefore loses information. This loss of pixel information means that reducing the size of an image always leads to a reduction in detail if the photo has detail to begin with. Only in rare cases of blank areas of a single color will nothing be lost.

Note that the lost pixels can never be reconstructed with accuracy, since there are always many combinations of original pixels that can produce the same reduced pixels.

Here is a simplified illustration, an 8 x 1 pixel image, showing this loss of information from the reduction in the number of pixels. --Richard
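
In case the attachment does not come through, here is the same idea written out as a tiny script (the pixel values are invented, not those in the attached illustration): two different 8 x 1 rows reduce to the identical 4 x 1 row, so the reduction cannot be undone.

def downsize_by_two(row):
    """Average each pair of neighbouring pixels into one pixel."""
    return [(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]

row_a = [0, 100, 50, 50, 200, 0, 100, 100]
row_b = [100, 0, 50, 50, 100, 100, 0, 200]

print(downsize_by_two(row_a))   # [50, 50, 100, 100]
print(downsize_by_two(row_b))   # [50, 50, 100, 100] -- identical, so the originals cannot be recovered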


Apr 25, 2024 18:17:23   #
selmslie wrote:
When you are looking at an image in your editor you are looking at the full-size image. ...

The next post shows three different sizes of the same finely detailed high contrast B&W image. You might see the difference in the thumbnails.

If you download the images and look at them without pixel peeping, you can see how much resizing altered the appearance of the image. The difference also shows up when I print the three images.


There are several reactions I have when reading your comments and looking at your b&w images.

1. When one edits a photo file, one is not looking at the full-size image unless one is using a camera with a small sensor with few pixels or one has an enormous monitor.

2. This reminds me of a discussion I had numerous times with the reviewers and staff at Shutterstock. When comparing photos, one needs to compare apples with apples and not apples with oranges. If my image is 2000 x 2000 pixels and another participant's photo is 6000 x 6000 pixels, the only fair way to compare them is at 100% size, where each pixel in each photo gets mapped to one pixel in the viewing software, which is usually one machine pixel on the monitor. If both my photo and the other participant's photo look just as clear at their respective 100% views, then they must both be accepted as being of the same quality.

3. It seems obvious to me that any photographer, professional or knowledgeable amateur, should know that if a 6000 x 6000 pixel image is resized to a 1500 x 1500 pixel image, then 16 pixels in the original must have been mapped into 1 pixel in the final photo, with the resulting loss of detail.

4. If I want to compare the two images I referred to in #2, then to be fair in comparing the prints, I need to print them at the same dpi. Thus the 36 MP image will print bigger than the 4.0 MP image, but both should look equally sharp from the same distance away.
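
To put numbers on #4 (300 dpi is only an example value):

def print_width_inches(width_px, dpi=300):
    """Print width, in inches, when both images are printed at the same dpi."""
    return width_px / dpi

print(print_width_inches(2000))   # ~6.7 in wide for the 4 MP (2000 x 2000) image
print(print_width_inches(6000))   # 20.0 in wide for the 36 MP (6000 x 6000) image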

So, either we are discussing apples compared to oranges or we are preaching to the choir. To me, neither of these is very informative. --Richard
Apr 16, 2024 09:21:14   #
pfrancke wrote:
I am sure that I am not saying it well. When I say reduce noise, perhaps it is better to say that it eliminates that which it recognizes as random, and it keeps that which is reinforced as being signal. Or valid data, if you will. So through the miracle of stacking (math and statistics), you end up seeing deeper and more clearly than you otherwise might.

But yeah, it depends on what you are seeking to see and understand. If you desire to capture the pure moment, rapidly unfolding then the only solution is to improve the resolution of it all. To capture more light more quickly and with greater sensitivity. Stacking is but the poor man's tool to make the most of what we are given.

The cosmos as we view it, it is SLOW and obscenely massively large, and so to be able to see it more fully, the cheating of stacking works very well.

And please don't take my wild rambling about this all to the bank. Like all photography, recognizing the difference between truth and lies requires a connection between the viewer and the object viewed, and somehow a mechanical process touches the soul and becomes Art. Lies and truth, signal and noise, but words that depend on the connection between the seer and the seen.

Or to say the same thing in a more powerful way... the famous quote - "And if you gaze for long into an abyss, the abyss gazes also into you."


Thanks again for sharing your procedure and the philosophy behind it. I definitely understand how stacking reduces random noise. What I am unclear about is how stacking hides or enhances regular motion.

For example, with a period of 4.5 days, Jupiter's Great Red Spot is moving at just over 3 degrees per hour. So, long exposures and/or stacking will affect how this motion is recorded in the final photo. Just things I think about from time to time, but without the equipment to experiment. --Richard
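
The rough arithmetic behind that worry (the 4.5-day period is the one quoted above; the exposure lengths are arbitrary examples):

period_hours = 4.5 * 24             # one rotation of the spot in hours
deg_per_hour = 360 / period_hours   # about 3.3 degrees per hour

for minutes in (2, 10, 60):
    print(f"{minutes:3d} min of exposure/stacking: {deg_per_hour * minutes / 60:.2f} degrees of motion")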
Apr 14, 2024 22:07:33   #
claytonsummers wrote:
While setting up for M81 and M82, I took a couple of snapshots of the comet. This is 120 seconds at ISO 400 (644mm focal length at f/5.6) and is unprocessed, SOOC. I did convert it to 8 bit to get under 20MB. It gets nice and dark out in the desert.


That is a good-looking photo of the comet. To help me get a geographical orientation and time frame for this photo, could you please add N, S, E, W to the edges of the photo and tell us what time it was taken? I noticed that the EXIF data has been removed, so I cannot get those myself.

Thanks. --Richard
Apr 14, 2024 21:45:22   #
pfrancke wrote:
thanks Sonny, and
Richard, thanks for asking. Yeah the mount tracks the stars (once it is polar aligned). Without tracking you are hard pressed to have an exposure longer than a couple of seconds for a telephoto lens before stars start becoming elongated. And then there is guiding... which is a smaller lens/sensor that takes photos every couple of seconds and makes small guiding adjustments to the tracking. This allows the larger telephoto lens to stay on target for 300 or 500 seconds. In the image of M51 for example we have about 35 images taken with each exposure being at 300 seconds, and then the software stacks (blends) the images together.

The stacking software is very sophisticated and does a wonderful job recognizing signal and getting rid of noise. For instance, each image taken might have a satellite trail in it, but they are eliminated from the final result because the software recognizes that the trail is not in all the images, so it gets ignored.


I have never used stacking software, but I keep thinking I want to do so. With astronomy, though, I do wonder about what data I will lose.

Reducing noise is a wonderful thing in astrophotography except for certain dynamic subjects. Does it blur out the dynamic part of eruptive prominences, and will it wipe out some of the swirling nature of the Great Red Spot in Jupiter's atmosphere? I am curious. --Richard
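
For what it is worth, my understanding of the outlier rejection pfrancke describes is something like a median (or sigma-clipped) combine across the stack of frames. This toy NumPy sketch, with invented values, shows why a one-frame satellite trail drops out while random noise averages down:

import numpy as np

rng = np.random.default_rng(0)
true_signal = 100.0
frames = true_signal + rng.normal(0, 10, size=(35, 4, 4))   # 35 noisy 4x4 "exposures"
frames[7, 1, :] = 4000.0                                     # bright satellite trail in frame 7 only

mean_stack = frames.mean(axis=0)          # the trail pulls the plain average way up
median_stack = np.median(frames, axis=0)  # the single-frame trail is simply outvoted

print(mean_stack[1])     # row crossed by the trail: values well above 100
print(median_stack[1])   # same row: still close to 100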