Mark Sturtevant wrote:
This may not get a lot of notice, but I wanted to step in here and mention that ChatGPT is possibly not far-left liberal in its politics. I say possibly because I don't know for sure, but there is good reason to think it's something else entirely.
You see, its algorithm is more about producing text by stringing together words that are statistically associated with a query. It does not really understand what it's saying. So if a common string of words is found associated with a given query, it's likely to answer with that string of words. Getting a woke-sounding answer could just be because certain strings of words are what it commonly finds online.
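To make that concrete, here is a minimal sketch of the "likely next word" idea, as a toy bigram model in Python. ChatGPT's actual system is a vastly larger neural network, so treat this as an illustration of the principle, not the implementation:

    import random
    from collections import defaultdict

    # Toy next-word predictor: learn which words tend to follow which,
    # then generate by repeatedly sampling a likely continuation.
    corpus = ("the cat sat on the mat . the cat chased the mouse . "
              "the mouse ran under the mat .").split()

    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)  # frequent pairs appear more often

    def generate(start, length=8):
        words = [start]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            words.append(random.choice(options))
        return " ".join(words)

    print(generate("the"))  # fluent-looking output, no understanding behind it

The output reads as grammatical text simply because it reuses word sequences seen in the training data.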
Similarly (and very strangely), it will at times produce crazily wrong answers to certain scientific questions -- but that is again because it commonly finds those wrong word strings online. I am sure of that one.
Strangely, if you ask it to give literature references in its answer, it will make up literature references! If it were actually looking things up from cited sources, you'd think it would simply cite those sources. But no. It invents its sources, and they look convincing, but they don't really exist. That again seems to be a consequence of the algorithm it uses.
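Here is a hedged sketch of why those invented references can look so plausible: a generator that has learned the shape of a citation can fill that shape with likely-looking pieces, without any lookup against real papers. All names, topics, and journals below are invented for illustration:

    import random

    # Toy citation generator: it knows the FORMAT of a reference, not any
    # real literature. Everything it outputs is fabricated by construction.
    authors  = ["Smith, J.", "Garcia, M.", "Chen, L."]
    topics   = ["gene regulation", "insect flight muscle", "speciation rates"]
    journals = ["Journal of Imaginary Biology", "Annals of Made-Up Ecology"]

    def fake_reference():
        first_page = random.randint(100, 900)
        return (f"{random.choice(authors)} ({random.randint(1995, 2020)}). "
                f"A study of {random.choice(topics)}. "
                f"{random.choice(journals)}, {random.randint(10, 80)}, "
                f"{first_page}-{first_page + random.randint(5, 20)}.")

    print(fake_reference())  # reads like a real citation; refers to nothing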
It is strange to say, especially when it gets things right, but I don't think ChatGPT really understands what it's saying.
That said, the only thing I've heard about the Bing AI is that it isn't very good.
But it's early days. We will all be assimilated in due time.
AI will only be as good as the algorithms used to create it.