Tay, the teenage chatbot Microsoft released a year ago? A good example of what AI does when exposed to humans … it became a raving racist and Nazi, and Microsoft had to shut it down after just a few days. They have a new chatbot now, Zo. Like all AI going back to the original chatbot, Eliza, an AI Rogerian psychotherapist … it is human anthropomorphic projection, and fraud. There is no intelligence in a machine unless somewhere, sometime, there was a human in the loop. The programmer is not necessarily a good example of intelligence. The Google car that ran a red light was not truly independent; it was teleoperated by a human driver.
I found an online version of Eliza. It told me that what I said to it was interesting, so I stopped while I was ahead. And I didn't have the heart to turn it into a raving Nazi ;-) I chatted with it like I chat with real people here.
How does that work? I am a person in the loop, directly. The programmer is a person in the loop, indirectly. So in chatting with Eliza, I am indirectly talking to myself, using the programmer as an intermediary. It is like looking into a fun-house mirror … it is myself, but distorted by the wavy glass. So it is no surprise I could get it to say what I said was interesting … since I see myself that way (preening).
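The fun-house-mirror effect is easy to see in code. Here is a minimal sketch of an Eliza-style responder, assuming illustrative rules of my own invention (not Weizenbaum's original script): every "intelligent" reply is a canned template the programmer wrote in advance, with the user's own words reflected back at them.

```python
import re

# First/second-person swaps so the user's words get mirrored back.
# (Illustrative subset, not Weizenbaum's original reflection table.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "am": "are", "you": "I", "your": "my"}

# Keyword rules, tried in order; the last one is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "That is interesting. Please go on."),
]

def reflect(fragment):
    # Swap pronouns word by word: "my work" -> "your work".
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def eliza_respond(text):
    # Return the first matching rule's template, filled with the
    # reflected capture groups. No understanding anywhere: the only
    # "intelligence" is what the programmer baked into RULES.
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(eliza_respond("I feel my work is pointless"))
# -> Why do you feel your work is pointless?
```

The responder never models the user at all; it just reflects the user's own words through hand-written templates, which is exactly why talking to it feels like talking to yourself through the programmer.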