Bipod wrote:
Now pray tell: how does it distinguish between a bird in flight and a shuttlecock in flight?
The former is an example of wildlife photography, the latter of sports photography--quite different.
We use all our knowledge to tell the difference -- we know that things that eat are animals, and
that things that have feathers are birds. We know that birds chirp and make messes on cars.
But the camera just sees patterns of light.
This is the problem of the "unrestricted knowledge domain": one can photograph... anything.
The camera sees only patterns of color -- in general it doesn't know what it is looking at.
If you try to write an algorithm to recognize "restaurants", you will discover that a McDonald's
looks a lot like a Jiffy Lube, and Antoine's looks nothing like either.
It helps that we've seen thousands of restaurants before, and know all the different kinds (drive-throughs,
fast food, diners, cafes, pizza parlors, white-tablecloth joints, etc.). This vast amount of knowledge allows
us to decide which features are significant and which aren't in deciding whether something is a restaurant.
There is far too much information in 24 MP for even a supercomputer to record statistics on
every pixel. Some programmer has to decide what the camera will track -- limit the number of
independent variables. For a particular task -- recognizing letters of the alphabet (OCR) -- this is
possible (though with far more errors than a human would make). In general, it's an
unsolved problem.
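To make "limiting the independent variables" concrete, here is a toy sketch (my own illustration, not anything a real camera firmware does): instead of tracking all ~72 million RGB values in a 24 MP frame, a recognizer might first collapse the image into a handful of hand-picked features -- say, a coarse brightness histogram. The function name and the bucket count are made up for the example.

```python
def brightness_histogram(image, buckets=8):
    """Collapse an image (rows of (r, g, b) tuples) into a
    fixed-length histogram of pixel brightness -- millions of
    independent variables reduced to `buckets` numbers."""
    hist = [0] * buckets
    for row in image:
        for (r, g, b) in row:
            brightness = (r + g + b) // 3               # 0..255
            hist[min(brightness * buckets // 256, buckets - 1)] += 1
    return hist

# A 2x2 "image": two dark pixels, two bright ones.
tiny = [[(0, 0, 0), (10, 10, 10)],
        [(250, 250, 250), (255, 255, 255)]]
print(brightness_histogram(tiny))  # -> [2, 0, 0, 0, 0, 0, 0, 2]
```

The point of the sketch: the programmer, not the camera, chose brightness (and 8 buckets) as the thing worth tracking -- and everything not captured by that choice is invisible to the algorithm.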
So what about these amazingly smart computers?
After ten years of development and millions of dollars, IBM was finally able to build a specialized
roomful of computers that could beat one middle-aged man at a board game. Board games are a
perfect example of a limited domain: nothing matters in chess except the movement of the pieces
and the rules of the game.
In truth there has been very little progress in AI in recent years. But computers have gotten faster
and RAM and off-line storage have gotten cheaper, which looks like progress, but isn't. Board
games and quiz shows are nothing like the decisions a photographer has to make.
ANY device can claim to use AI. There is no accepted definition of intelligence except "what
IQ tests measure". So far, no computer has completed an IQ test above idiot level.
Computers have a really difficult time answering questions such as "Who is buried in Grant's Tomb?"
You know, because you understand how English possessives (genitives) work, that someone's tomb
is where they are buried, and that "Grant" is the name of a person.
You know the difference between "Time flies like an arrow" and "Fruit flies like a banana" --
but the syntax in each sentence is exactly the same. Stuff like that gives computers a headache.
"Artificial intelligence" is marketing talk-- buzz words -- like "disruptive", "innovative", "wholistic"
"paradigm shifting", "wellness", etc. Manufacturers of high-end systems that acutally use AI -- such
as automated attendant and voice-response telephone systems -- carefully avoid using the term,
since they don't want to be lumped with a bunch of sleazy hucksters and Silicon Valley stock swindles.
Remember when lawyers and doctors were supposed to be replaced by "expert systems"?
They weren't.
Would you believe in an AI paintbrush for artists? No, you'd laugh. But it is the same thing.
Don't be fooled: "AI" is just another way of saying "Buy or invest now!"
Now pray tell: how does it distinguish between one...
Somehow, Bipod - this - whilst interesting... no - fascinating... doesn't really address my carefully written explanation of how Olympus uses AI in its E-M1X to zero in on recognized subjects it stores in memory after the very first time the user applies certain behaviors to its CPU...