Ugly Hedgehog - Photography Forum
Main Photography Discussion
Photographic Creativity
Page <<first <prev 10 of 10
Dec 6, 2023 17:12:55   #
Donhisself
 
Kinda. The newest version offers a few things, but creativity is in the mind of the person holding the camera. If you want to freeze a hummingbird's wings, blur a waterfall, or shoot a time-lapse, well, iPhones won't ever do that. That is where real eye-catching photos come from: the right equipment that can get the job done as you envisioned it in your mind. I think mine's blown....

Reply
Dec 7, 2023 21:18:29   #
jaredjacobson
 
R.G. wrote:
Even when AI is being apparently original it's just shuffling and re-mixing the ingredients that we taught it about.


In this regard it's entirely unlike humans.

Reply
Dec 7, 2023 22:49:38   #
Blenheim Orange Loc: Michigan
 
larryepage wrote:
Every image I have ever captured of railroad equipment or operation is important and significant to me. Very few of them are of particular interest to 98% of the people I know.


That is an important point. The images I capture are of subjects that are important and significant to me, not necessarily of particular interest to most others.

Reply
Dec 8, 2023 01:33:07   #
R.G. Loc: Scotland
 
jaredjacobson wrote:
In this regard it's entirely unlike humans.


It's difficult for anybody - or any thing - to be 100% original, but AI still has to work within exactly the same bounds that we have to work within. What AI can claim is that it has a larger database to access and it can do its searching and filtering much more quickly than we can as humans. But AI will never be, and never can be, 100% original, and it would be a mistake to credit it with that capability. Just like it would be a mistake to credit AI with 100% reliability.

Reply
Dec 8, 2023 07:59:29   #
Jamie C Loc: Indialantic, Florida
 
R.G. wrote:
It's difficult for anybody - or any thing - to be 100% original, but AI still has to work within exactly the same bounds that we have to work within. What AI can claim is that it has a larger database to access and it can do its searching and filtering much more quickly than we can as humans. But AI will never be, and never can be, 100% original, and it would be a mistake to credit it with that capability. Just like it would be a mistake to credit AI with 100% reliability.


I believe that expecting 75% reliability from AI is extremely optimistic.

From a B.C. comic strip (John Hart Studios) a long long time ago (as best I recall)
Frame 1 - BC (to Peter while watching ants): "Do you ever wonder if ants think about us?"
Frame 2 - Peter: "You idiot! Ants can't think."
Frame 3 - BC: "Do you ever think that ants wonder about us?"

Reply
Dec 8, 2023 23:50:11   #
jaredjacobson
 
Jamie C wrote:
I believe that expecting 75% reliability from AI is extremely optimistic.

From a B.C. comic strip (John Hart Studios) a long long time ago (as best I recall)
Frame 1 - BC (to Peter while watching ants): "Do you ever wonder if ants think about us?"
Frame 2 - Peter: "You idiot! Ants can't think."
Frame 3 - BC: "Do you ever think that ants wonder about us?"


In my experience, expecting 75% reliability from most humans is also extremely optimistic. There are shining examples, of course. :-)

In many respects, reliability is the antithesis of creativity. Creativity favors new and different, which is often not what people are looking for when they want reliable.

Reply
Dec 9, 2023 03:34:28   #
R.G. Loc: Scotland
 
Jamie C wrote:
...."Do you ever think that ants wonder about us?"


Maybe we should ask AI (if you think it's reliable enough....).

Reply
Dec 9, 2023 03:40:58   #
R.G. Loc: Scotland
 
jaredjacobson wrote:
....In many respects, reliability is the antithesis of creativity.....


Interesting point. We definitely need reliability, but it could be argued that reliability needs an antithesis. Randomness fits that description, and some types of creativity could be described as a purposeful and selective adaptation of randomness.

Reply
Dec 9, 2023 09:04:15   #
Jamie C Loc: Indialantic, Florida
 
jaredjacobson wrote:
In my experience, expecting 75% reliability from most humans is also extremely optimistic. There are shining examples, of course. :-)

In many respects, reliability is the antithesis of creativity. Creativity favors new and different, which is often not what people are looking for when they want reliable.


Failure is not an option; failure IS required.

We, well most of us anyway, learn from making mistakes. One of the first things we learn is natural and logical consequences, a primer on causality. We learn that some mistakes "hurt" and thus avoid repeating them, a primer for critical thinking.

If AI makes a mistake, will it hurt? Will it learn?

My rather luddite understanding of AI and its dangers involves "goal-based reasoning." The machine is given a goal, something to produce or create, as its function. The machine has access to incomprehensible amounts of data that it sorts using sub-goals. It's the sub-goals that concern me.

What if the goal we give our machines is to "eliminate global warming"? It's quite possible that a sub-goal could achieve this easily, subgoal: ELIMINATE HUMANS. While this is pretty far-fetched, it is a valid concern voiced by a very prominent AI developer/researcher.

Will AI ever hurt or feel regret? No. Those are human, and what makes us human, for better or worse. It's also the source of most creativity.

Reply
Dec 10, 2023 10:58:11   #
jaredjacobson
 
Jamie C wrote:
Failure is not an option; failure IS required.

We, well most of us anyway, learn from making mistakes. One of the first things we learn is natural and logical consequences, a primer on causality. We learn that some mistakes "hurt" and thus avoid repeating them, a primer for critical thinking.

If AI makes a mistake, will it hurt? Will it learn?


Well, yes, actually. Neural networks are trained by providing a large input data set and a set of training metrics, or in other words a judgment on the quality of the output. The metrics give the neural net a way to guide its learning.

I expect that when you give feedback to generative AI, for example by requesting more images related to a prompt (negative feedback, because it implies that you weren't satisfied with the results) or choose an image (positive feedback, because it implies that you were satisfied with the result), the software uses the feedback in training the AI so it will produce better images faster. That's how I would code it, anyway.
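The feedback loop described above can be sketched in miniature. The example below is a toy, hypothetical illustration (not the code of any real generative model): a single weight is adjusted by gradient descent on a squared-error metric, which plays the role of the "judgment on the quality of the output" that steers learning.

```python
# Toy sketch of metric-guided learning: one weight w is nudged in the
# direction that reduces a quality metric (mean squared error).
# All names here are illustrative, not from any real library.

def train(samples, steps=500, lr=0.1):
    """Fit y = w * x by gradient descent on mean squared error."""
    w = 0.0
    for _ in range(steps):
        # The "training metric" turned into a gradient: how should w
        # move so the outputs get judged as better next time?
        grad = sum(2 * (w * x - y) * x for x, y in samples) / len(samples)
        w -= lr * grad  # the error signal does the teaching
    return w

# Learn the hidden rule y = 3x purely from examples.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]
w = train(data)  # w converges toward 3.0
```

Real systems fit millions of weights against far richer metrics (including human feedback), but the principle is the same: a score on the output, propagated backward, reshapes the model.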

Quote:
My rather luddite understanding of AI and its dangers involves "goal-based reasoning." The machine is given a goal, something to produce or create, as its function. The machine has access to incomprehensible amounts of data that it sorts using sub-goals. It's the sub-goals that concern me.

What if the goal we give our machines is to "eliminate global warming"? It's quite possible that a sub-goal could achieve this easily, subgoal: ELIMINATE HUMANS. While this is pretty far-fetched, it is a valid concern voiced by a very prominent AI developer/researcher.


Also a concern voiced by science fiction writers and movie makers going back to at least the 1950s. There are stories of human-constructed semi-autonomous things threatening humanity going back to at least 1748 with Rabbi Jacob Emden's description of the golem: "As an aside, I'll mention here what I heard from my father's holy mouth regarding the Golem created by his ancestor, the Gaon R. Eliyahu Ba'al Shem of blessed memory. When the Gaon saw that the Golem was growing larger and larger, he feared that the Golem would destroy the universe. He then removed the Holy Name that was embedded on his forehead, thus causing him to disintegrate and return to dust. Nonetheless, while he was engaged in extracting the Holy Name from him, the Golem injured him, scarring him on the face."

As is often said in engineering and computer science, "garbage in, garbage out." Terrific power and unintended consequences figure in every story where a wish is granted. Surely the phrasing of such a goal should be something more along the lines of, "eliminate global warming while providing for the comfortable existence and happiness of at least 12 billion people and promote increasing biodiversity for a period of at least one million years."

Reply
Dec 10, 2023 12:12:34   #
larryepage Loc: North Texas area
 
Jamie C wrote:
Failure is not an option; failure IS required.

We, well most of us anyway, learn from making mistakes. One of the first things we learn is natural and logical consequences, a primer on causality. We learn that some mistakes "hurt" and thus avoid repeating them, a primer for critical thinking.

If AI makes a mistake, will it hurt? Will it learn?

My rather luddite understanding of AI and its dangers involves "goal-based reasoning." The machine is given a goal, something to produce or create, as its function. The machine has access to incomprehensible amounts of data that it sorts using sub-goals. It's the sub-goals that concern me.

What if the goal we give our machines is to "eliminate global warming"? It's quite possible that a sub-goal could achieve this easily, subgoal: ELIMINATE HUMANS. While this is pretty far-fetched, it is a valid concern voiced by a very prominent AI developer/researcher.

Will AI ever hurt or feel regret? No. Those are human, and what makes us human, for better or worse. It's also the source of most creativity.

In December 1914, an explosion of nitrate film in the film lab of Edison's plant in New Jersey started a fire that ended up destroying 10 buildings...more than half of his entire plant. There are many stories about this event. Most attest to Edison's eternally positive attitude. A couple are undeniably humorous. But one relates a moment when he was in a very pensive, almost sad frame of mind. Someone asked him if he was sad because of the loss of all of his achievements. His reply was no...he was sad to have lost the records of all of the failures.

As for the question about feeling pain or regret...my worry is not whether the systems actually feel those things. My worry is that the systems will convince us somehow that they feel them.

Reply
Dec 10, 2023 15:42:04   #
Jamie C Loc: Indialantic, Florida
 
"My worry is that the systems will convince us somehow that they feel them."

We are so ready to apply our own feelings to all things around us. It's called anthropomorphism, and it's evident in most of our daily culture, especially in advertisements.

"The path is clear, though no eyes can see"
"The course laid down long before"
"And so with gods and men, the sheep remain inside their pen"
"Though many times they've seen the way to leave"
...
"The sands of time were eroded by"
"The river of constant change"

Thanks and apologies to Peter Gabriel, Anthony Banks, Phil Collins, Steve Hackett and Mike Rutherford

Reply
Dec 10, 2023 18:31:02   #
brentrh Loc: Deltona, FL
 
I feel it is going strong; digital sensors and processing software have sped up production time. Look at all the attention AI is getting from those purists that seek to restrict creativity.

Reply
Dec 10, 2023 22:57:13   #
Mac Loc: Pittsburgh, Philadelphia now Hernando Co. Fl.
 
brentrh wrote:
I feel it is going strong; digital sensors and processing software have sped up production time. Look at all the attention AI is getting from those purists that seek to restrict creativity.


AI has nothing to do with Photographic Creativity.

Reply
Dec 11, 2023 15:01:11   #
NickGee Loc: Pacific Northwest
 
Mac wrote:
AI has nothing to do with Photographic Creativity.


Even less.

Reply
Copyright 2011-2024 Ugly Hedgehog, Inc.