quenepas wrote:
Dear Hog Colleagues,
Question on best option for using self-timer release with family group shots.
My latest grandchild was born a few weeks ago so we all went to see the baby, mom and dad when they came home from the hospital.
I set up the Nikon D850 on a tripod to shoot 5 pictures per sequence. In hindsight, I should have made some adjustments, like waiting for the strobe to recharge between shots and using manual focus. (The issue is that when I touched the screen in Live View to take the shots, the camera started to focus and sometimes didn't get it quite right.) A copy of one group shot is attached.
And therein lies my question. Should I have used Live View, tapped the screen, and taken my place in the group? Or simply set up the scene, composed, focused manually, verified correct exposure, pressed the shutter-release button, and taken my place?
With some things in life you just can't go back and re-do them, but I want to make sure that the next time I do this, I get it right.
Thanks in advance for your comments.
Val
Dear Hog Colleagues, Question on best optio... (show quote)
Just curious: why did you want to shoot a 5-picture sequence?
Were you trying to capture some kind of action?
There is such a thing as taking a convenience function and trying to make
it do too much.
To try to make this discussion a little more interesting, I'd like to say a few words
about the history of self-timers. Then I'll try to tie it in with current trends
in digital cameras.
Originally, all self-timers were driven by clockwork. You cocked a lever,
which wound a spring. When you depressed the shutter, it released a
gear train powered by the spring. The gear train was restrained by a
very simple escapement, usually just friction or air paddles. When it
ran down to a certain point, it tripped the shutter mechanism.
The first (?) camera with an electronic self-timer was the Minolta XG-M
SLR, introduced in 1981. It contained a 555 timer chip (one of the oldest ICs,
and still in production). This is a small, simple, inexpensive analog chip
that did nothing but run the self-timer.
It worked very well, but there was a wrinkle: the camera had an
electro-mechanical shutter. An electromagnet held the shutter open.
This meant that long exposures tended to drain the battery. So the
camera was no good for night photography -- but otherwise it was
a good camera. I still have one, somewhere, which still works.
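To give a sense of just how simple that dedicated timer is: in the usual monostable hookup, a 555's delay is set by nothing but one resistor and one capacitor (t = 1.1 x R x C). The component values below are my own illustration of a roughly 10-second delay, not Minolta's actual circuit:

/* Illustration only -- not the real XG-M circuit values. */
#include <stdio.h>

int main(void)
{
    double r = 910e3;        /* 910 kilohm timing resistor (assumed value) */
    double c = 10e-6;        /* 10 microfarad timing capacitor (assumed value) */
    double t = 1.1 * r * c;  /* classic 555 monostable delay formula */
    printf("self-timer delay: about %.1f seconds\n", t);
    return 0;
}

That is the entire self-timer: no firmware, no menus, nothing to interact with anything else in the camera.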
The Canon AE-1 was the first microprocessor-equipped SLR.
It had limited features but sold very well. Canon got the cost
down by automating production. It was pretty reliable for a
cheap, plastic camera.
When digital cameras became commercially available in the late 1980s,
they all had microprocessors to run the LCD display and
menu functions. So the microprocessor also ran the self-timer.
So for a number of years, cameras using all three implementations --
mechanical, analog IC, and microprocessor -- were being sold
at the same time. None of the delays were very accurate, but
accuracy is not a requirement for self-timers.
Both the clockwork timer and the 555 IC were dedicated to the self-timer
function, and so were very simple. But the microprocessor timer
is just a subroutine called by the main firmware program, so
other camera features (menus, etc.) may not work while the
self-timer is running. Such firmware programs are extremely
large and complex.
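Here's a toy sketch in C of what I mean -- the function names are invented, and real camera firmware is nothing this simple, but it shows how a self-timer written as a blocking call freezes everything else in the main loop:

#include <stdio.h>
#include <unistd.h>

/* Stand-ins for real firmware routines -- the names are made up for this sketch. */
static int  shutter_button_pressed(void) { return 1; }         /* pretend it was pressed */
static void handle_menu(void)  { puts("menus are responsive"); }
static void fire_shutter(void) { puts("click"); }

int main(void)
{
    for (int i = 0; i < 3; i++) {        /* a few trips around the main loop */
        handle_menu();                   /* normal UI work */
        if (shutter_button_pressed()) {
            sleep(10);                   /* 10 s self-timer as a blocking wait:
                                            nothing else runs until it returns */
            fire_shutter();
        }
    }
    return 0;
}

A dedicated clockwork or 555 timer never has this problem, because it doesn't share a processor with the rest of the camera.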
Adding features to the self-timer, such as the ability
to trigger sequences of photos, makes the firmware even more
complex. And every year, hundreds of new features get added
to digital cameras.
Digital cameras have only been in production for a little over
40 years, but they have already become extremely complex
embedded systems. If current trends continue, one wonders
how complex they will be in another 40 years.
Complexity both creates bugs and makes it hard to find bugs
by testing. If you wanted to test every state that a digital camera
could be in, it would take hundreds if not thousands of years.
(Just think about testing every combination of pixels on the sensor!)
Any state transition that is not tested could be broken -- no one knows.
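Just as a back-of-the-envelope illustration (the numbers are made up for the sake of argument, not measured from any real camera): forget the sensor entirely and imagine a camera with only 64 independent on/off settings. Exhaustive testing is already hopeless:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double states  = pow(2.0, 64);                 /* 64 independent on/off settings */
    double per_sec = 1e6;                          /* a generous million tests per second */
    double years   = states / per_sec / (365.25 * 24 * 3600);
    printf("states to test: %.3g\n", states);      /* about 1.8e19 */
    printf("years required: %.3g\n", years);       /* roughly 600,000 years */
    return 0;
}

And that's before you even get near every combination of pixels on the sensor.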
Complexity always seems to "leak" out of the box -- as the original
post illustrates. Self-timers used to do one thing: trip the shutter.
But now there are several shutter modes. Complexity multiplies.
There is no limit to the size of a firmware control program except the
amount of ROM (to hold the executable) and RAM in the box.
But long before that limit is reached, cameras will have become too
complicated to implement, test, or use.
The public seems to be blissfully unaware of the dangers of complexity,
until a fly-by-wire airliner crashes or a nuclear reactor melts down.
And users have no idea of the internal complexity of the electronic
devices they use every day. Some (e.g., radios) are pretty simple. Others,
particularly computers and embedded systems, are enormously complex.
Prior to the introduction of the IBM PC in 1981, home computers were
a hobbyist niche. Now your camera is a computer. But computers have
not gotten more reliable since 1981 -- they have gotten much less reliable,
because they have gotten vastly more complicated. DOS was almost
bug-free. Every new version of Windows is a bug fest, and many of the
bugs never get fixed.
But at least a computer has a monitor and a keyboard. It may also have
system log files, self-test routines, program debugger software, and even
an OS kernel debugger. At the very least, it will print error messages on
the console.
But an embedded system is a black box. If it's having problems, it's like
a sick cat: you can't ask it where it hurts. Most of the time, you have
no clue as to what's wrong with a sick digital camera, or what to do about
it. The system is too complex, and there aren't enough diagnostic tools
or information available.
Computer programmers have a saying: "K.I.S.S.: keep it simple, stupid."
But consumers view complexity as a good thing: more "advanced", more
"high tech". More features are always better, right? Wrong. A feature you
do not use can hurt you.
Yes, it is possible to build a talking AI camera. But only an idiot would want one.
By any reasonable standard, today's digital cameras are already way, way too
complex. And they grow more complex with each new release.
So do smartphones -- but smartphones have much larger development budgets,
since they sell by the millions. So they can afford to be more complex than a
camera can -- especially since the market for digital cameras shrank for almost
ten years before appearing to bottom out in 2017. There is no revenue to
pay for ever-increasing R&D budgets.
"Something that cannot go on forever, will stop." --Gertrude Stein
The question is: will it stop before digital cameras become completely
unusable, or after?