RAISR is a well-trained AI that learns by comparing many photo pairs: an out-of-focus photo against an in-focus photo of the same scene. In searching, I found that Google builds it into a phone!
Question: Has anyone incorporated RAISR into their existing phone or PC?
Where would I find a PC download of RAISR? I can imagine how transformative it would be for our digital photo world.
dpullum wrote:
Here are a few Raiser threads. Please add your com... (show quote)
If you type "RAISR" (no "e") into Bing it comes up; there is an implementation available on GitHub that runs in Python. Python is free, and so should the implementation be. I don't know whether it includes a database of photos. If no photo database is included, you would need to either find one or use your own (to my knowledge, training requires high- and low-res photos of the same thing). There are many photo tasks such A.I. might be useful for, but unless you are designing software for sale, it could be a time-consuming/costly experiment if you can't find free photos. I worked with some of this type of thing (so-called big data) and read a lot, and it can take a great deal of input data to get good results and avoid problems.
E.g., one medical effort on tissue images taught the A.I. that a ruler in the image = cancer!
Topaz has amazed me in that it works well on astro photos, something completely different from what it was most probably trained on.
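The training setup described above (matched high- and low-res photos of the same thing) can be sketched in a few lines. This is only an illustration of the pairing idea, not the actual RAISR pipeline: the random array stands in for a real photo, and the block-average downscale and nearest-neighbor upscale are stand-ins for real resampling.

```python
import numpy as np

# Sketch of building one super-resolution training pair:
# downscale a "high-res" image, cheaply upscale it back, and pair
# the degraded result with the original. A model then learns the
# mapping from the degraded version to the original.

def downscale2x(img):
    """Average 2x2 blocks -> half-size image (simulated low-res capture)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upscale2x_nearest(img):
    """Cheap 2x upscale (nearest neighbor) back to the original size."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

rng = np.random.default_rng(0)
high_res = rng.random((8, 8))                       # stand-in for a real photo
low_res_input = upscale2x_nearest(downscale2x(high_res))

# One training pair: model input -> model target
pair = (low_res_input, high_res)
```

With thousands of such pairs cut from free or personal photos, the training data problem mentioned above is exactly the question of where those high-res originals come from.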
JBRIII wrote:
If you type " RAISR" no "e" in... (show quote)
I checked, and it appears both photo datasets used by the above implementation are available on the web, apparently open to anyone developing such software.
Thanks for starting this thread; this is a great site for learning such things.
Jim
dpullum wrote:
Here are a few Raiser threads. Please add your com... (show quote)
Nvidia has a program on its website that uses AI learning to do the same thing. Sorry, I'm at work and don't have the link.
Drbobcameraguy wrote:
Nvidia has a program on its website that uses AI learning to do the same thing. Sorry, I'm at work and don't have the link.
NVIDIA uses Python programming... My 1050 Ti was purchased for that reason, so that I could install Google DeepDream directly on my computer... which I have not yet done... only 3 years!!! Good intentions... but my programming education was long, long ago... 1960!! The world has changed!! The main computer used vacuum tubes!!! Programs were written on punch cards. The technology was similar to the Jacquard loom of 1804, itself based on a 1725 invention.
Searching this AM, I found an easy way, hopefully, to install DeepDream, my favorite AI-[gone-insane] program, on my computer:
https://github.com/GermanEngineering/DeepDream
I did find a download for artistic-minded painters... you make a hump and a photo-quality mountain appears!!
There is a Beta download:
https://www.nvidia.com/en-us/studio/canvas/
MiniPaint also looks interesting, free for personal use.
https://github.com/viliusle/miniPaint
Rongnongno suggested:
https://www.uglyhedgehog.com/t-713741-1.html
Thanks, Rongnongno, this is an interesting, informative link.
We all must realize that AI is just in its infancy. A few years from now, we will look back and chuckle, or be alarmed that we have been replaced by an AI photo world and are of no use: your drone will decide where to go to shoot, from a waterfall photo to autumn leaves.
See the dangers of facial recognition, which your Google phone especially does quite well... perhaps too well... Facial recognition named many of the 1/6/21 people "touring" the Capitol building.
https://www.uglyhedgehog.com/t-714862-1.html#12625168
dpullum wrote:
Rongnongno suggested https://www.uglyhedgehog.com/... (show quote)
I read an interesting article on AI and what is called deep learning. I tried to find it to share with you but have failed so far. It basically stated that what we call "learning" is really the ability to take a lot of data and process it fast; it is not actual learning as we think of it. The computer cannot think of new solutions, only solutions based on the data it has. I will try to find the article again. It was very interesting, and it explains how we are still trying to produce a true artificial intelligence that can actually think and learn.
dpullum wrote:
We all must realize that AI is just in its infancy
A field that saw its first results in the 1950s (Newell, Simon, and Shaw's "Logic Theorist";
https://en.wikipedia.org/wiki/Logic_Theorist) cannot be thought of as "in its infancy". It may still have a long way to go, but AI has already seen a lot of progress since Newell and Simon's breakthrough.
Logic Theorist was a program that developed proofs of a majority of the theorems in Whitehead and Russell's "Principia Mathematica", including at least one proof considered more elegant than other known proofs. (See the Wikipedia article for more information.)
Drbobcameraguy wrote:
The computer cannot think of new solutions.
That is clearly not true. My prior post about Logic Theorist (the first AI program, created in 1956) mentions that this program came up with a new proof, never before published, for one of the theorems in Principia Mathematica.
Someone earlier mentioned that A.I. does not really work as our brains do on anything. That had been the accepted view until very recently, but a new article on scaling found that the bigger the neural net gets, the more it seems to work the way we do. They talk about the number of connections possible; Google, I believe, has a project to scale to 1 trillion connections, but that still pales next to us, at roughly 150 trillion.
Personally (not my field), I think we all fail to appreciate how much data we have processed in a lifetime to produce all those connections we call "knowledge of the world". Just think how many images we have seen and processed by the time we are only a few years old, much less by the time we are adults.
Drbobcameraguy wrote:
I read an interesting article on AI and what is called deep learning.
A little more history. "Deep learning" is a recent relabeling of "neural networks". The term "deep" might have been coined to emphasize that the neural networks being described were much deeper (had many more layers) than previously.
The idea of neural networks goes back to the late 1940s, but most early work dates to the late 1950s. At that time, neural networks were very simple, constrained to a single layer; the idea of multi-layer networks was still in the future. In 1969, Minsky and Papert, two of the best-known AI researchers of the time, published a book ("Perceptrons") in which they showed that a single-layer neural network could never model certain non-linear relationships, such as XOR, rendering it useless for anything beyond linearly separable problems.
Publication of the Minsky-Papert book killed almost all research into the development of neural networks. After all, the experts had deemed it a dead end. (Remember this the next time you encounter somebody who argues, "but the experts say...")
It wasn't until almost 15 years later that several researchers developed the backpropagation algorithm, which, most importantly, made multi-layer neural networks practical. Backprop is an algorithm by which a multi-layer network can learn. Once a network has at least two layers (with non-linear units), it is no longer hamstrung by the linearity constraint Minsky and Papert identified, and as a result such networks can model very complicated relationships.
What we have today, relabeled as "deep learning" is a neural network with many layers.
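The linearity limit and its multi-layer fix are easy to see concretely. Below is a tiny two-layer network that computes XOR, the classic relation a single-layer perceptron cannot represent. The weights here are hand-picked for illustration, not learned; a real network would arrive at something equivalent via backpropagation.

```python
# A hand-built two-layer network computing XOR.
# Each unit is a linear sum of inputs passed through a threshold:
# no single such unit can compute XOR, but two layers can.

def step(x):
    """Threshold activation: fires (1) when the input exceeds 0."""
    return 1 if x > 0 else 0

def xor_net(a, b):
    # Hidden layer: one unit computes OR, the other AND
    h_or = step(a + b - 0.5)    # fires if at least one input is 1
    h_and = step(a + b - 1.5)   # fires only if both inputs are 1
    # Output layer: OR-but-not-AND is exactly XOR
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))   # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```

The hidden layer re-maps the four input points so that the output unit's single linear threshold can separate them, which is precisely what one layer alone cannot do.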