Ugly Hedgehog - Photography Forum
Posts for: Mongo
Jan 13, 2020 15:53:46   #
Very nice visual.
Jan 12, 2020 15:28:22   #
I agree with gathering metrics on the performance of the camera, including the autofocus, and running an MTF target or other target test of the sensor.

When you do focus testing, you may wish to do it both with the lens stopped down and wide open, and see if the clarity differs.

I would first work on verifying that the clarity is not what you achieved in the past. Then I would try to identify which subsystem is responsible. For example, the sensor: is it resolving pixels spatially correctly, and is it also correct in terms of gamut?

The autofocus: is it consistently working correctly? Are your results any different when you manually focus on a high-contrast target?

Have you changed the way you handle the camera? The kind of shots you take? Some other variable that could be affecting motion and causing blur or an inability to refocus in time?

If you want to go down this road, feel free to PM me, I have a lot of experience verifying and validating camera systems.
Jan 8, 2020 17:55:06   #
+1 to smada2015. Good information and good perspective.
Jan 8, 2020 17:46:53   #
I can't get into all the detail here, but different parts of the retina are used for different light levels. A fully dark-adapted healthy eye has been shown to register irradiance where only a single photon hits the retina.

The rods have better low-light sensitivity, but are not good chromatic differentiators. The foveal vision area has a high density of cones, and therefore has high acuity. Furthermore, if the cones are activated by enough photons, there will be chromatic (color) sensing.

If I were trying to capture the high dynamic range of your example, I would consider frame stacking with varied exposures. That way you could create a composite image, combining a lower-sensitivity moon capture with a higher-sensitivity shot for the rings, or the stars.

The shot you are trying to take is a difficult one (I have tried with various methods), but it is made easier with some of the tools available today. In olden times, the overexposed moon would be dodged out to allow the rings, stars or whatever to come through. A lot of guesswork was needed because there were no good instruments to control the exposures, either in the camera or in the darkroom. And registration of the stacked shots was problematic.

Life is also better today for that shot, because if you have colored halos around the moon, the camera will render the colors with more vividness than film would have (exposure reciprocity in the film), and most likely better than your eyes will with respect to color rendering.

There are numerous packages available for framestacking, including Deep Sky Stacker. You may want to look at others.

Frame stacking can also be used to average out atmospherics, reduce blur and remove other "artifacts." Oh, and as another poster already recommended, please capture RAW.
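As a rough illustration of the averaging idea (this is my sketch, not the poster's workflow or any particular package; the frame data here is synthetic), a mean stack of pre-aligned frames in Python/NumPy looks like:

```python
import numpy as np

def mean_stack(frames):
    """Average a list of aligned frames (H x W arrays).
    Random noise falls off roughly as 1/sqrt(N) for N frames,
    while the steady scene content is preserved."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Synthetic example: a flat "scene" plus per-frame random noise.
rng = np.random.default_rng(0)
scene = np.full((8, 8), 100.0)
frames = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(64)]
stacked = mean_stack(frames)
```

Real stacking tools like Deep Sky Stacker also register (align) the frames first; with varied exposures you would scale or tone-map each frame before combining.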
Jan 3, 2020 16:52:20   #
Here is what I do with my photographs and my drone video footage, and why.

First, hard drives are not a reliable archival store. I use a RAID configuration where one drive can fail and the data remains intact and can be automatically recovered.

I use two RAID NAS units, a primary and a secondary.

Optical media is not necessarily of archival quality, although many people argue otherwise. I have not yet lost optical media once it is written and verified.

SD cards are far from archival-quality storage, so I get the data off of them, and once that data is secure, I blank and fully reformat the SD card, not in the camera but on a Linux workstation.

MD5SUM is a fast checksum tool, used to validate that one read of a file is identical to another. (MD5 is no longer considered cryptographically secure, but it is fine for detecting accidental corruption.)

1. Read the SD card to the workstation, and write to the primary RAID NAS.
2. Re-read the SD card to the workstation, and write to the secondary RAID NAS.
3. Create MD5SUMs of all transferred image or video files. Store them on the RAIDs (they are small) and on the workstation where I catalog images.
4. When all MD5SUMs are made and compared, blank and fully reformat the SD card; if it passes the format validation, it is pressed back into service. I have had three SD card failures in the last several years: one on a 16 GB, one on a 32 GB and one on a 256 GB card. The vast majority of my SD cards are 256 GB. That size works in my drone and my camera, and I seldom shoot more than 256 GB in a day.
5. Then I have scripts which package photos or drone video as data and save them on DVDs or Blu-ray. I use both rewritable and write-once media. The MD5SUMs are stored on the media, which helps anyone validate the media later.
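A minimal sketch of the checksum comparison in steps 1-4 (my illustration; the actual scripts, paths and directory layout are the poster's own and are not shown here):

```python
import hashlib
from pathlib import Path

def md5_of(path, chunk=1 << 20):
    """Stream a file through MD5 so large video files never
    need to fit in memory at once."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def verify_copies(primary_dir, secondary_dir):
    """Compare MD5s of same-named files on the two NAS copies;
    return the names of any files that are missing or differ."""
    bad = []
    for p in sorted(Path(primary_dir).iterdir()):
        s = Path(secondary_dir) / p.name
        if not s.is_file() or md5_of(p) != md5_of(s):
            bad.append(p.name)
    return bad
```

An empty list from `verify_copies` means both copies read back identically; the hex digests themselves are what get stored alongside the files and burned to the optical media.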

From my perspective, the cost to do all of this is low compared to the cost of reproducing a good shot that is lost. The drives I use in the RAIDs are WD Red, 4 or 6 TB, six in total. I have a spare drive or two, should the need arise. I have two SD card readers. I happen to have two Blu-ray RW drives which can burn optical media. I have three reasonably high-powered Linux workstations. The whole data-security setup costs less than one nice lens.

Blu-ray is used for offsite storage, as it is reasonably compact. Working copies of images are stored locally on the workstations. A client with data rights gets DVD-R data copies, possibly formatted for their ease of file access.

Since the heavy and repetitive work is scripted, and media is cheap, I can make as many copies as I wish, and any copy can be independently validated.
Dec 31, 2019 15:27:31   #
A Nikkormat FT2, a 50mm f/1.2, and, as an employee at Kodak Research Labs, all the lenses I could borrow, and an awesome camera club.
Dec 29, 2019 05:32:25   #
Turning off the camera for any lens swap is best practice, and if done each time it will protect the lens's VR components from damage.
Dec 28, 2019 20:46:36   #
Jim, you are getting some good feedback. I will add a tiny bit more. Segregate the comments, and apply the exposure, composition and other categories of feedback to all your work, selectively and deliberately. It takes time and effort, just like playing a musical instrument. Have fun.
Dec 28, 2019 20:28:39   #
Best practice would be to turn off VR on the camera, then turn off the camera, and then take the lens off. But it's good enough if you simply turn off the camera and then remove the lens. I would probably wait 3 seconds before taking the lens off.
Dec 28, 2019 18:46:39   #
I purchased this lens earlier this year, and was somewhat skeptical about the VR. So handheld, I targeted a distant window frame, and traced the edges with the VR on and off. What a difference! At 300 mm the image stabilization was impressive.

I hope she has a lot of fun with this lens. I am only starting the fun with mine.
Dec 23, 2019 14:47:48   #
Very slight binding, not noticed during a photo shoot, but noticed when doing a cinematic zoom while shooting video. I checked a plastic Nikon 70-300 lens I have, and it is similar but probably an order of magnitude better. I will be contacting Nikon on Monday. I can live without the lens while they inspect it.

Thanks for the comments confirming that this is not normal.
Dec 23, 2019 08:40:01   #
Always worked the same.
Dec 23, 2019 03:59:28   #
A more useful metric, commonly used in the design of sensor systems, would be the spatial resolution and the quantum efficiency of the sensor. The spatial resolution helps drive the well size, which impacts the gamut. The quantum efficiency gives, loosely, the ratio of electrons generated per incident photon.
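To make the "electrons per photon" ratio concrete (the numbers below are illustrative, not measurements of any particular sensor):

```python
def electrons_collected(photons, quantum_efficiency):
    """Loosely: quantum efficiency (QE) is the fraction of incident
    photons that are converted to electrons in the pixel well."""
    return photons * quantum_efficiency

# Illustrative: 10,000 photons on a pixel with a QE of 0.5
# yield about 5,000 electrons. A larger well (more electrons
# before saturation) supports finer tonal gradation.
print(electrons_collected(10_000, 0.5))  # 5000.0
```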
Dec 23, 2019 03:42:40   #
It is a tangent, but the human visual system (HVS) resolution in a snapshot is about 10 megapixels, plus or minus 5, and normally decreases with age.

This is because the eye has many defects, like the blind spot at the optic nerve, and while the sensor density is high in the foveal area, it drops off rapidly off-axis.

The apparent resolution of the eye is enhanced by the natural dithering of the eye across the field of view, and that time sequence of information is processed by the visual cortex to provide a sustained and updated view.

If allowed to dwell on a scene, the visual information might equate to an estimated 600 megapixels. But that sampling is far from a single view.

Finally, there is the issue of what defines a pixel. The HVS will equate higher gamut (equivalent to more bits of quantization) with greater spatial resolution (more apparent pixels). So changing the gamut, which varies with age and medical conditions, will necessarily change the apparent spatial resolution.

The D850 is only an order of magnitude off from the dwell resolution of the human eye, but it captures in a short instant what would take a couple of seconds of dwell by the eye.

So in some ways cameras like the D850 are approaching or even exceeding some of the capabilities of the human eye coupled with the rest of the human visual system.
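A quick arithmetic check of the "order of magnitude" comparison, using the D850's 45.7-megapixel sensor against the rough 600-megapixel dwell estimate above:

```python
d850_mp = 45.7       # D850 effective megapixels
eye_dwell_mp = 600   # rough dwell estimate discussed above
ratio = eye_dwell_mp / d850_mp
print(round(ratio, 1))  # ~13x, i.e. roughly one order of magnitude
```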
Copyright 2011-2024 Ugly Hedgehog, Inc.