Ugly Hedgehog - Photography Forum
Posts for: Mongo
Mar 1, 2020 19:47:33   #
rjriggins11 wrote:
I build camera drones. That's part of the hobby for me (although some of the drones are not just for hobby). I hate seeing it regulated into dust, but it's not that inconvenient if you really enjoy drone photography. And there are ways around some of the regulations, like flying within a state or federal park. As long as you are controlling the craft from outside the park, it's completely legal. Of course, it's difficult to get that across to the drone hawks.


You are correct under the current federal regulations. State regulations may vary, so I would double-check. Another thing to keep in mind is protection around infrastructure.

For example, a park with a lake and a dam might pose an issue if you loiter near the dam. It's kinda the domestic-spy paranoia. One can buy satellite data unfettered, which contains digital elevation models and things a hobby drone is unlikely to be able to create without another level of sophistication. However, loitering over the dam may get people making phone calls.

So one good approach is to make sure they call you first. For something like a dam, I will write a letter to the facility manager, sometimes with a copy of my FAA Certificate. I make sure they know when I expect to be there, and that they have my cellphone number and are encouraged to call me with any question at any time. Sometimes they do. But not when I am there. They call me when other people are doing things or are stupid about what they are doing. And I explain to them what is and is not acceptable, and will even give them the inspector's name at the FAA Flight Standards District Office if I think talking with an inspector will help them.

So back to the park issue...one has to be line of sight, and able to see the drone. The words are visual contact. For some people that's about 1600 feet. There are eagles out there who can see further. Contrast is your friend. Actually it is a combination of contrast and the subtended-angle acuity of the eye. Blue is harder to differentiate than yellow against a green canopy. I can see my white drone on the other side of a finger lake which is over a mile away. But I have to carefully track where it is, or I can lose sight of it. My suggestion is that if one is at the limit of visual contact, it is time to retreat. There are some other things you can do. Put blaze orange or other bright tape on your drone. I like the yellow/green tape, except in the fall. You can also put a strobe on the drone, which helps not only in maintaining contact with it, but also in reacquiring it if you lose sight of it.

Just in case I lose sight of it, or lose the drone, I put stickers with the registration number on the drone, as well as my name and cellphone number. My phone number is also on the SD card.

One final topic while it occurs to me. In my state and several others, the environmental people are going after drone pilots for disturbing wildlife. This can be a real stinger, and I know one drone operator who has been chased down a couple of times. Some have rather strict interpretations of wildlife disruption, like, "anything that causes a change in the behavior of an animal." One issue that several pilots have gotten letters on involves animals like bear or deer which notice the drone and usually move away, or birds like geese which fly away after noticing a drone.

Several FAA inspectors cruise YouTube videos and have gone after people who have posted drones doing things against the regulations. Frankly, I see violations on YouTube all the time, and hope that the people using the drones don't get caught and made an example of. Some of the footage is fantastic and I would hate to see the pilot/photographer get discouraged.
Mar 1, 2020 15:27:46   #
Email or file transfer are the only foolproof universal ways I have found.

Between my own phones I use FTP of the file to a local server, and then FTP to the destination phone.

There are many other methods, but email will normally transfer the image without substantial quality reduction. MMS (text) will reduce the resolution and greatly compress the image.

There are apps which promise to do the same, but my wife gets apped out, so we stick with email, or the FTP game.
Mar 1, 2020 15:20:24   #
A slightly different story...when vacationing last year, I took my drone, and saw that the resort we were at prohibited drone flying. I talked with the director of security, who is a photographer. I told him what I wanted to do, and why I thought I was legal and lawful, and why I did not think he could prevent me from doing so.

He had been interested in drones, and was about to buy one. So it was a good conversation. I did fly the drone at the resort and around the lake, and shot some pictures of cottages and estates at the lake. But I endeavored to do so in an unobtrusive manner.

The director of security told me after our initial conversation that I was welcome to fly there, and if there were issues, we agreed to address them, and exchanged cellphone numbers. There were no issues.
Mar 1, 2020 15:13:54   #
I have a similar drone, and admit discouragement over the increasing legislation, often encouraged by industry consortiums.

When I was young we used a Cessna 182 to fly the Finger Lakes and some of the other lakes around New York, and shot pictures of cottages. We sold hundreds and it gave us a fun thing to do on the weekends. Since I had a commercial airplane rating, I went on and got an instructor's certificate. I have been using airplanes and helicopters for nearly 5 decades taking pictures. So I got a drone.

In addition to flying various full sized aircraft, and teaching in airplanes, I also got the Part 107 certification for drones the second day it became available. The training would be good for any serious drone pilot to take, particularly ones who will travel with their drone and want to fly it in places other than their back yard. Part 107 training only scratches the surface of what a serious drone user should know, but it is a good start.

I am and have been for 40 years, a volunteer with the FAA, to promote safety. For years my efforts were with people engaged in primarily non-commercial operations. About 7 years ago, I started also providing safety seminars on drone education. The FAA has had several significant violations by drone operators. Some seem petty, and there are quite a few which could have serious complications.

The driving force in the industry is that there are companies which want to do package delivery, mapping, agricultural operations, pipeline and powerline monitoring as well as governmental and public safety. These forces will work towards their goals, and the individuals who want to take aerial shots recreationally will continue to get squeezed.

Recently, I provided basic training to almost 20 mostly teenage hobbyists. Then the rules changed again. Frustrating. But the rules also changed when the automobile started becoming more ubiquitous in the early 1900s.

With regard to drone pilots...keep in mind that the FAA considers drones aircraft, and thus subject to their regulation. One should not interfere with a drone pilot while flying, even if you think he might be spying on your daughter. And don't go shooting drones down. The outcome may not be a favorable one. If one thinks someone is being a peeping Tom, report it to the local police. They are able to work with the FAA, and most states have suitable statutes against that type of activity.

If there are any drone or regulation oriented questions, I will do my best to answer PMs on the topic. FWIW, I consider RC aircraft different from drones, but they are lumped together, much the way large freight aircraft have similar regulations to large passenger aircraft.
Feb 28, 2020 19:11:29   #
Years ago, shooting film, I shot in -40F and -50F temperatures. Film exhibited thermal reciprocity effects and had to be color corrected. CCD and CMOS sensors do as well, but the corrections are part of the camera architecture for the cameras you use.

My checklist starts with warm body, and includes two pair of gloves, and a pair of mittens which fit over the gloves. I would probably take hand warmers now, but back 40 years ago, my metabolism was a lot higher, and I never really thought I needed them. But my photo buddies carried them and bought them by the large box.

Perhaps others will chime in, but here are the essentials, from my list:

1. Dry bag. Usually a large freezer ziploc bag with silica gel in it to allow the camera to warm up without frosting up. This is serious because water in the electronics is no fun. Also, if you can take multiple bodies rather than change lenses, that helps manage moisture issues and snow getting inside things. Aside from your own safety, this is the most important thing to manage.

2. UV filter. There will be many who will argue this. But outside even in real cold temps, it is easy to get a lens frosted. You might even breathe on it funny and have that happen. With the UV filter on, you can clean it easily, and you won't have the crevices to clean out.

3. Battery vest. Take a vest with pockets, and put it somewhere in your layers of clothes, so you can get to it without freezing your chest hairs. I put it on after the long sleeved T and thermal shirt, but before the polar fleece layers. Your batteries need to be warm. If not, they may not operate the camera. Expect that you may have to swap batteries into and out of the camera every 10 or 15 minutes, especially if taking video or using the Live View display. I carry 6 or more batteries per camera for each day outing. Plan on rewarming them to squeeze energy out of them.

With the old film cameras, the biggest issue would be that the meter battery would give up, even the silver ones, after getting soaked to -50F.

It didn't help all that much when it's windy, but my wife serged a polar fleece muff, which the camera fit into, and it kept it a little warmer if it was just outside for 30 or 40 minutes.

If anyone has some more arctic experience, I would love to hear about it, as I expect some winter travels in a year or so.

My current experience is with a Canon EOS-1 and a Nikon D7200, and earlier with an F2 and a Nikkormat FT2. The Nikkormat FTn was not good, as the different battery chemistry made the meter unreliable. So carry a different meter or guess. I guessed.
Feb 24, 2020 10:48:29   #
controversy wrote:
You may want to check out LRTimelapse (http://lrtimelapse.com/) --- it's the be-all, do-all, end-all solution to creating timelapse videos with varying lighting transitions - including the night-to-day / day-to-night "holy grail" timelapses.

Here's a link to an example and tutorial video about creating one: http://lrtimelapse.com/


I will check that out. Thanks.
Feb 24, 2020 10:47:48   #
SonyBug wrote:
Set the ISO to a fixed value.


It was fixed.
Feb 23, 2020 19:16:31   #
The Nikon vivid setting may help. It increases color saturation. However, I have been taking some time lapse sunsets, and I like to set the color to 5000K and then tweak in post processing if needed. FLW filters vary between manufacturers, and I like the control of post processing. Having said that, I seldom have any tweaks in the final product, but then I have only done a couple of dozen sunset shots.

I suppose you could also have the Nikon correct for fluorescent light, which would likely be in the ballpark of using a filter.

Sorry I don't have concrete recommendations because I normally do things differently.

What I can definitively say, is experiment.
Feb 23, 2020 18:54:11   #
The time lapse referenced below is the second time lapse I attempted with one of my D7200s. It was taken with the lens set to 18mm, a frame every 3 sec, 1/320 sec, f/5.6, and ISO 100. This one was done in video mode.

A previous one was done with successive stills, full resolution, then converted to jpeg. To get a movie out of it, I used ffmpeg on a Linux platform.

Both show variability in apparent illuminance of frames.

I was surprised that I observed that variance in this shoot, which was done in movie mode. I would like to get the mechanics down before I run off to specific locations to recreate the sunrise with more interesting subjects.

Any ideas why the variability in apparent illuminance in some frames?

https://youtu.be/ewVHqi0_NGY
Feb 16, 2020 18:06:05   #
I will never know how many shutter actuations are on my Nikkormat FT2, but I know that the machining marks have long since worn off the film guides. I do not know anyone who needed to replace a Copal Square shutter. By today's standards it is kind of clunky, but it worked for me. I did test it a couple of times, since we had a shutter tester at work, but it was always within 10%, and within 3% over 90 percent of the time. The greatest drift was at slow speeds.

By the time you get to 250,000, there will likely be another camera you might be interested in.
Feb 10, 2020 20:13:59   #
dsmeltz wrote:
Basketball has too much change in direction of the subject so the odds are always going to be better with single shot combined with a solid knowledge of the game. But without the knowledge, I could see where a burst might help a struggling shooter.


I have excuses! First, my shots today are not held to the high standards they once were. The moderator of the photo club would have me wind a roll of Tri-X, which would be about 44 shots, plus or minus. I had to get some from the JV game and more from the Varsity game. More often than not, I would leave the Varsity game before it was done, go develop the roll of film, and have a couple of 8x10s printed and in the basketball sports case before the band started playing at the post-game dance.

Sometimes the media would ask for a copy and the athletic director would open the case and give away the prints before the dance was over.

Today, I head home and copy the video to a DVD, and print 3 for the coaches. One of them will get the DVDs early on the morning before practice. If the kids are tired, the last part of the practice will be watching their game. Few consider that fun, and all will see mistakes they made. The DVDs will not have the excitement that a Saturday or Sunday morning print on the front of Section D in the paper has.

Second, the winder on my Minolta was a knob on top, and it took painfully long to wind to the next frame. I didn't get a decent camera until my school photographer gig was nearly over, and I worked at a local optics company and made enough money to get a Nikkormat.

Third, I only had 44 shots, plus or minus, and I needed to get at least 5 to 8 printable shots out of that. So it was the land of scarcity, and one could not take 500 shots and see if there were 5 usable ones. I am sure that the digital revolution in photography has changed the constraints and allowed higher production. But back then, if I wanted the assignment of the rival school BB game, I had to produce more and please more than the next kid who wanted some "free" film.

Last, there were only 44 shots, and the gym needed lights; but the school was marginally financed, so lights weren't happening while I went there. And Diafine normally gave me ASA 2000.

So that's what you get with an old dog, who learned some of the tricks of the time, and manages to survive shooting some photos today. But he is really cheap on the click button. The picture better work out.
Feb 9, 2020 17:26:39   #
selmslie wrote:
One approach is a simple mathematical technique.

If ten bits can record 1,024 brightness levels (more than the eye can discern) then we can ignore the last four bits of a binary number that begins with "1". If it begins with "01" we can ignore the last three, with "001" the last two and with "0001" the last bit. There will be no visible loss of image quality in the brightest portions of the image.

This might be a method that would work with lossless compression. The difference between lossless and lossy compression might then be only a matter of degree.

I wonder whether anyone could actually demonstrate the difference in a double blind test - without knowing the size of the raw files.


Actually, what you describe has been tested, and it doesn't work nearly as well as one might think. Also, the HVS (human visual system) is much more sensitive than 1,024 discrete levels. A good subject-matter example of this is radiographic displays. In the 1990s it was thought, based upon limited studies, that radiologists could only discern 900 levels of grey. Someone started giving them displays with more levels of grey, which presumably they would be unable to distinguish. They did distinguish them. Today it is common to have 16-bit displays that are truly capable of 16 bits (65,536 shades of grey), and radiologists will seldom settle for less than 12 bits. There are similar stories with the displays used by the military and three-letter agencies for geoint analysis. In the tents on the battlefield, 12-bit displays are standard, with some higher ones sometimes available in the harsh conditions of the field.

If someone were really interested in trying such an algorithm, it would be easy to code up and see the effects.
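Here is a minimal Python sketch of the truncation scheme described in the quoted post, assuming 10-bit values; the function name and the bits-to-drop rule are my own illustrative choices, not an established algorithm:

```python
def truncate_low_bits(v, total_bits=10):
    """Zero out low-order bits in proportion to brightness.

    For 10-bit values: codes starting with binary "1..." lose 4 bits,
    "01..." lose 3, "001..." lose 2, "0001..." lose 1, and anything
    dimmer is left untouched.
    """
    if v == 0:
        return 0
    msb = v.bit_length() - 1                 # position of the leading 1 bit
    drop = max(0, msb - (total_bits - 5))    # how many low bits to zero
    return (v >> drop) << drop

# Worst-case absolute error is 15 (for values just above 512), a relative
# error under about 3 percent in the brightest stop.
worst = max(v - truncate_low_bits(v) for v in range(1024))
```

A side effect worth noting: zeroed low bits make runs of identical values more likely, which in turn makes a generic lossless compressor more effective on the result.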

A primitive example of a compression method is RLE. In Run-Length Encoding, a run of adjacent pixels with the same value is coded as that value plus a repeat count. It is lossless, and before the days of the kind of computers we have now, it was easy to implement with discrete logic or bit-slice processors. And it works fast.
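The run-length idea fits in a few lines of Python (function names are mine, for illustration):

```python
def rle_encode(pixels):
    """Collapse runs of identical pixel values into (value, count) pairs."""
    runs = []
    for value in pixels:
        if runs and runs[-1][0] == value:
            runs[-1] = (value, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((value, 1))               # start a new run
    return runs

def rle_decode(runs):
    """Expand (value, count) pairs back into the original pixel sequence."""
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out
```

Because every step is reversible, the decode is bit-exact: RLE is lossless. It shines on images with large flat areas (skies, scanned documents) and does poorly on noisy ones, where runs are short.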

However the discrete wavelet transforms utilized in JPEG 2000 provide a much more robust tool for implementing compression and other features, and that is why they are used.
Feb 9, 2020 15:32:30   #
Mosaicing and Compression

The business of demosaicing is perhaps played up to be more than it is. Whether a raw image file needs to be demosaiced to strip out a Bayer filter or an X-Trans (6x6) filter is an issue of camera architecture and design.

For example, I have worked on multispectral cameras where there are 5 filters, loosely referred to as IR, near-IR (NIR), R, G, B, and even sometimes UV filters. Some architectures have a separate sensor for each spectral band. This allows other benefits, such as doping for higher quantum efficiency in each band.

A raw image file is a very loose generic term, and perhaps most of us agree that the format is selected by the designer. My point is that the format can have interwoven RGB, or it could have a frame of R, a frame of G, and a frame of B. One could even have two frames of G, which is what Bryce Bayer had in mind with his so-called "Bayer" filter. The filter was a compromise between having square pixels and providing greater spatial information in the band where the HVS has the greatest spatial resolution.

For what it is worth, yes, there IS a difference when the image is rendered with two green photosites on the resulting media. It does appear to have more clarity. I remember looking at jungle vegetation, where there was a distinctive difference between the combined RGB and the RGGB rendered image.

So if one is working with a mosaiced image, it is noteworthy that there are quite a few ways to demosaic the image. The results of each favor applications. For example, RCD, a ratio corrected demosaic method, works better when handling rounded objects. It is used commonly in astrophotography.

Mosaics, as mentioned above, are different. Some Sony cameras, and a few Pentax ones, use a pixel shift methodology, where a circular offset shift is introduced into each frame, and then the four resultant frames are saved in one raw file. When demosaicing images from these cameras, it is possible to filter the movement out and create clearer images, while reducing the four frames to one image. The Fujifilm cameras using X-Trans can be processed with either a 3-pass or a 1-pass filtering, which yields different sharpness, most notably on low-ISO images. While not covering all methodologies here, it is worth noting one more, and that is the variable number of gradients method. Certain lenses, especially wide angle lenses, may result in crosstalk between photosites, due to the ray geometry of the light. (This is more frequently seen in mirrorless consumer cameras with an adapter for a wide angle lens.) There are algorithms for best handling these artifacts, and VNG4 is an example of such an algorithm.
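To make "demosaicing" concrete, here is a deliberately naive bilinear sketch in plain Python for an RGGB Bayer mosaic. Real methods (RCD, VNG4, the X-Trans pipelines) are far more sophisticated; this function is purely illustrative:

```python
def bilinear_demosaic(raw):
    """Naive bilinear demosaic of an RGGB Bayer mosaic (list of lists).

    Each output pixel is an (r, g, b) tuple; a missing channel value is
    the average of that channel's samples in the 3x3 neighborhood.
    """
    H, W = len(raw), len(raw[0])

    def channel_of(y, x):
        # RGGB pattern: R at (even, even), B at (odd, odd), G elsewhere
        if y % 2 == 0 and x % 2 == 0:
            return 0
        if y % 2 == 1 and x % 2 == 1:
            return 2
        return 1

    out = []
    for y in range(H):
        row = []
        for x in range(W):
            px = []
            for c in range(3):
                # gather this channel's samples from the clipped 3x3 window
                vals = [raw[j][i]
                        for j in range(max(0, y - 1), min(H, y + 2))
                        for i in range(max(0, x - 1), min(W, x + 2))
                        if channel_of(j, i) == c]
                px.append(sum(vals) / len(vals))
            row.append(tuple(px))
        out.append(row)
    return out
```

A flat grey mosaic comes back as flat grey, but along edges this kind of naive averaging produces exactly the color fringing that the fancier algorithms exist to suppress.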

So how does this tie in with compression, one might ask? Quite a bit. There are several places in the image chain where compression might be applied. If one had separate frames for each spectral band, merely replicating compression hardware for each band would be an easy way to speed up compression. Fortunately that is not done anymore. These days, compression takes into account the spatial characteristics of the image, the color gradients in the image, and other factors. The JPEG 2000 standard utilizes wavelet transforms which can be applied against several different factors.

There is all kinds of literature which covers this at various levels of detail. The link provided gives an introduction to wavelet (DWT or discrete wavelet transform) and scratches on the surface of applications. Keep in mind that there are MANY ways to apply compression to an image. It can be in color domains, spatial domains, temporal domains (common for motion imaging), and others.

If you take a look at this link, page through it and look at the pictures. We're photographers, right? Ignore the math, unless you are a math geek, in which case you will like it, and you might find a few mistakes (grin). Do read the words which describe general attributes, as they are understandable by most system users (photographers).

http://eeweb.poly.edu/~yao/EL6123_s16/Wavelet_J2K.pdf

Finally, there are data compression methods and image specific data compression methods. Lossless compression is a no-brainer. It's easy to imagine that any compressor/decompressor which can deterministically get output identical to input can be easily used on images. The real challenge comes when trying to obtain higher compression rates, or output streams which are more tolerant of data errors in rendering. Image specific, perhaps lossy, compression can offer that. Not only that, but the processing cost for decompression is far less when temporal parameters are included, which helps keep streaming video data rates or media size minimized, and reduces complexity of player/renderers.
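As a toy illustration of the wavelet idea, here is a single level of the Haar transform, the simplest wavelet. JPEG 2000 actually uses the 5/3 and 9/7 biorthogonal filters, so treat this purely as a sketch of the average/detail decomposition:

```python
def haar_1d(x):
    """One level of an (unnormalized) Haar transform.

    Splits a sequence into pairwise averages (the coarse signal) and
    pairwise half-differences (the detail to quantize or discard).
    """
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    diff = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, diff

def inv_haar_1d(avg, diff):
    """Perfect reconstruction: a = avg + diff, b = avg - diff."""
    out = []
    for a, d in zip(avg, diff):
        out += [a + d, a - d]
    return out
```

Keeping all the detail coefficients gives a lossless round-trip; quantizing or discarding the small ones is where the lossy compression, and the bit savings, come from.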
Feb 8, 2020 11:47:25   #
selmslie wrote:
This can be explained with very simple math. Think about the distribution of values within each file.

An 8-bit image can display a range from black to white using 256 values. For nine steps that's about 28 values per step if they are evenly distributed - 28x9=252 is just about the whole range.

The values in a 14-bit raw file are not evenly distributed. The brightest step has 8,192 possible values and the step nine stops darker only 16. Generating 28 highlight values from 8,192 is clearly overkill but generating 28 values from 16 involves some compromising. The middle tone has 512 raw values that can easily create the 28 8-bit steps needed within each stop.

So we can compress some of the information in the brightest raw steps without seeing any visible impact on the JPEG. In other words, lossy compression does no visible harm. Not appreciating this, many photographers opt for what they feel is a safer option - lossless compression or no compression at all. They pay a penalty with larger files.

But an 8-bit image has already lost most of what it's going to lose in the first step, even if it is an 8-bit lossless TIFF. If it drops a few crumbs of quality with subsequent edits and saves it's not going to be as much as it lost initially.


The JPEG standard is published, and a co-worker chaired the JPEG 2000 standards committee. The JPEG 2000 standard actually uses wavelet methods, and one can think of wavelets as a mini-FFT. The method does not arithmetically break grey levels down, but rather looks at the spatial distribution in a hierarchical manner. It is possible to set the algorithm for no compression, in which case the resultant image can be restored to the original image. Mappings of compressed images can take into consideration the color responsiveness and spatial characteristics of the human visual system. This adds to the ability of JPEG images to retain reasonable perceptual quality even when they are compressed, or even highly compressed.

Not sure what was being explained above, but a 14-bit image would have the ability to represent 16,384 values.
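The arithmetic in the quoted post is easy to restate in a few lines of Python (a sketch of the linear-encoding argument only, nothing camera-specific):

```python
# In a linear 14-bit raw file, the brightest stop uses half of all code
# values, and each stop down halves the count again.
bits = 14
total = 2 ** bits              # 16,384 code values
per_stop = []
remaining = total
for _ in range(10):
    half = remaining // 2
    per_stop.append(remaining - half)   # code values within this stop
    remaining = half

# An 8-bit JPEG spread over roughly nine stops has about 256 / 9 ≈ 28
# values per stop, evenly distributed.
jpeg_per_stop = 256 // 9
```

The top stop's 8,192 values dwarf the ~28 a JPEG needs, which is why lossy compression of raw highlights can be visually harmless, while the stop nine down has only 16 values and no room to spare.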

JPEG 2000 can handle thousands of terapixels, with up to 38 bits/sample (pixel), with or without tiling, and has other features. It was designed with medical imaging, geospatial imaging, pre-press and more conventional photography in mind.
Feb 8, 2020 10:36:15   #
cedymock wrote:
My first question would be who are you taking these photos for, family or media publication (for school, newspaper, magazine) I feel there is a difference. Media (smallest depth of field) family (most depth of field light will allow). My reasoning came after I took photos (for family) of my oldest grandson when he was 5 years old.


Very good point!
Copyright 2011-2024 Ugly Hedgehog, Inc.