
First Church of AI ...

Started by Baruch, November 18, 2017, 08:33:06 PM

SGOS

Quote from: Hydra009 on November 19, 2017, 07:55:21 PM
I'm not sure.  Sometimes we seem to be in complete agreement and sometimes not, yet my position has not changed.  Then what has changed?   :headscratch:

My position is that our current AI technology is still in its infancy and current AIs are not general-purpose (narrow AI) and are well below the human level.  But assuming any rate of improvement, AIs are likely to reach human-level intelligence in the future.

In your post, you describe AI behavior in a video game - basically, an NPC running a predefined script with no variation ("enemy always does the same thing") - and conclude that the "intelligence" in artificial intelligence is a misnomer, that it's really just number crunching, not intelligence.  Therefore, AIs cannot match human intelligence because AIs do not really possess intelligence.

I'm trying to sum up your position as best I can.  I'm practically quoting you verbatim in places.  So if I completely misread you, I'm sorry.  I'm just trying to understand where you're coming from and I only have your posts and an imperfect reading of what you're saying to work with.

So if you'll sum up your position in a few sentences, that'd help me understand tremendously.
I don't think anything has changed, and in reading back through the posts, I get the feeling we were defending two more or less unrelated ideas.  An extreme analogy would be you insisting that you prefer Budweiser, while I was adamant that there is more than one variety of sea lion.

It may have started when I got to talking about game AI, when I knew you were talking about advanced AI, and I actually hoped you wouldn't think I was equating the two.  But I started on it and just finished the thought.

Then I got on to my semantic issue.  I'm really not sure why.  I guess I just felt I wanted to say it, even though in reading back through, I can't find any comment you made that actually required that response.

I was also occasionally vaguely aware of the feeling that I wasn't sure what we were talking about anymore.  But I became lost in a sea of verbiage until I turned to ride off singing into a brilliant sunset as the credits started to roll and everyone was leaving the theater.  Then the lights came on and the attendants started sweeping up the popcorn.

I hesitate to rephrase my position, because I no longer think it's relevant to yours.  But in complying with your request:

My position is that human intelligence is a different process from artificial intelligence.  If they were the same process (I don't mean the same in quality), then human intelligence would also have to be artificial, because they would be one and the same.

I'm talking about the processes that result in these outcomes that we call intelligence.  Both can produce intelligent outcomes, but those outcomes are reached through different pathways.  That's what I mean by intelligence.  It's a complex pathway that solves a problem.

And that's about it.

SGOS

Quote from: Hakurei Reimu on November 19, 2017, 08:42:20 PM
Part of the reason that AI accomplishments tend to be discounted as real intelligence (the AI effect) is because the AI doesn't accomplish the given task in any way resembling the way a human would accomplish the same task.
Yes, yes, thank you.  I'll just let you finish up explaining myself.  As long as you don't say that I discount AI.

Hydra009

Quote from: SGOS on November 19, 2017, 08:54:20 PM
I'm talking about the processes that result in these outcomes that we call intelligence.  Both can produce intelligent outcomes, but are achieved using different pathways.  That's what I mean by intelligence.  It's a complex pathway that solves a problem.
Okay.  I'm looking at this more from a results perspective.  I'm more concerned about whether or not an AI can perform complex intellectual tasks than how it goes about doing it or what sort of processes are going on.  So, we were probably talking past each other back there.

SGOS

Quote from: Hydra009 on November 19, 2017, 09:10:28 PM
Okay.  I'm looking at this more from a results perspective.  I'm more concerned about whether or not an AI can perform complex intellectual tasks than how it goes about doing it or what sort of processes are going on.  So, we were probably talking past each other back there.
Yeah, I get it now.

Baruch

Y'all are talking about "strong" AI and "weak" AI.  With "weak" AI, it doesn't matter how the result is produced, as long as it is.  With "strong" AI, it does matter how the result is produced: it has to be produced in the same way a human does it.  With all degrees of gray in between.  That is the exact breaking point between "expert system" AI and "neural net" AI.  Both are machine learning; the question is how much "human" knowledge you have to put into it to begin with.
Ha’át’íísh baa naniná?
Azee’ ła’ish nanídį́į́h?
Táadoo ánít’iní.
What are you doing?
Are you taking any medications?
Don't do that.
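Baruch's expert-system vs. neural-net split can be sketched in a few lines of Python. This is a hypothetical toy (the spam task, phrases, and features are all invented here, not from any real system): one function where a human writes the rule in, versus a tiny perceptron that learns an equivalent rule from labeled examples.

```python
# "Expert system" style: human knowledge encoded directly as rules.
def expert_is_spam(message):
    rules = ["free money", "act now", "winner"]  # hand-written by a human "expert"
    return any(phrase in message.lower() for phrase in rules)

# "Neural net" style, reduced to a toy perceptron: the rule is learned from data.
def train_weights(examples, epochs=20, lr=0.1):
    """examples: list of (feature_vector, label) pairs; returns (weights, bias)."""
    n = len(examples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # nudge weights only when the prediction is wrong
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Invented features: [contains "free", contains "hi", message is long]
data = [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 0, 0], 1), ([0, 0, 1], 0)]
w, b = train_weights(data)

def learned_is_spam(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b > 0
```

Both ends at the same answer; the difference Baruch points at is only where the "knowledge" came from.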

SGOS

FWIW

It's kind of interesting, and played a central role in the movie Ex Machina.  This is only posted as something of interest, not a position or opinion.  But it does address the thinking "process" in humans and AI.  The Turing Test does not test for correct answers or replies given by the AI.  That was not what Turing wanted to determine.  He was only testing whether the AI's responses could be distinguished from human responses.  This may suggest that AI and humans use a similar process if an AI can pass the Turing Test.

With that, I'll go off on another tangent.  Why would you want to build a computer that thought like humans?  The point of computers is to not think like humans, but to capitalize on their special designs to solve specific problems with much greater speed and accuracy than humans ever could.

Granted, it would be fun to design an android-like machine, just for the challenge alone, but computers currently have much more value as computers than as human simulators.

https://en.wikipedia.org/wiki/Turing_test

Quote
The Turing test, developed by Alan Turing in 1950, is a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Turing proposed that a human evaluator would judge natural language conversations between a human and a machine designed to generate human-like responses. The evaluator would be aware that one of the two partners in conversation is a machine, and all participants would be separated from one another. The conversation would be limited to a text-only channel such as a computer keyboard and screen so the result would not depend on the machine's ability to render words as speech.[2] If the evaluator cannot reliably tell the machine from the human, the machine is said to have passed the test. The test does not check the ability to give correct answers to questions, only how closely answers resemble those a human would give.

The test was introduced by Turing in his paper, "Computing Machinery and Intelligence", while working at the University of Manchester (Turing, 1950; p. 460).[3] It opens with the words: "I propose to consider the question, 'Can machines think?'" Because "thinking" is difficult to define, Turing chooses to "replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."[4] Turing's new question is: "Are there imaginable digital computers which would do well in the imitation game?"[5] This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that "machines can think".[6]

Since Turing first introduced his test, it has proven to be both highly influential and widely criticised, and it has become an important concept in the philosophy of artificial intelligence.[7][8]


Quote
Philosophical background

The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. René Descartes prefigures aspects of the Turing Test in his 1637 Discourse on the Method when he writes:



[H]ow many different automata or moving machines can be made by the industry of man [...] For we can easily understand a machine's being constituted so that it can utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs; for instance, if touched in a particular part it may ask what we wish to say to it; if in another part it may exclaim that it is being hurt, and so on. But it never happens that it arranges its speech in various ways, in order to reply appropriately to everything that may be said in its presence, as even the lowest type of man can do.[9]

Here Descartes notes that automata are capable of responding to human interactions but argues that such automata cannot respond appropriately to things said in their presence in the way that any human can. Descartes therefore prefigures the Turing Test by defining the insufficiency of appropriate linguistic response as that which separates the human from the automaton. Descartes fails to consider the possibility that future automata might be able to overcome such insufficiency, and so does not propose the Turing Test as such, even if he prefigures its conceptual framework and criterion.

Denis Diderot formulates in his Pensées philosophiques a Turing-test criterion:

"If they find a parrot who could answer to everything, I would claim it to be an intelligent being without hesitation."[10]

This does not mean he agrees with this, but that it was already a common argument of materialists at that time.

According to dualism, the mind is non-physical (or, at the very least, has non-physical properties)[11] and, therefore, cannot be explained in purely physical terms. According to materialism, the mind can be explained physically, which leaves open the possibility of minds that are produced artificially.[12]

In 1936, philosopher Alfred Ayer considered the standard philosophical question of other minds: how do we know that other people have the same conscious experiences that we do? In his book, Language, Truth and Logic, Ayer suggested a protocol to distinguish between a conscious man and an unconscious machine: "The only ground I can have for asserting that an object which appears to be conscious is not really a conscious being, but only a dummy or a machine, is that it fails to satisfy one of the empirical tests by which the presence or absence of consciousness is determined."[13] (This suggestion is very similar to the Turing test, but is concerned with consciousness rather than intelligence. Moreover, it is not certain that Ayer's popular philosophical classic was familiar to Turing.) In other words, a thing is not conscious if it fails the consciousness test.

Hakurei Reimu

A machine able to imitate a human well enough to pass as one doesn't mean that the machine thinks like a human. Being able to imitate a human necessitates that the machine be able to form some kind of theory of mind about humans. The machine may think in a completely different way from a human, but because it has an understanding of how human minds work, it can extrapolate how a human would react to a given input. Computer programmers do the exact same thing to computers all the time; because they have an understanding of how computers work, they can extrapolate how a computer (and its software) will respond to a given input. A computer that is able to pass a full Turing test has demonstrated the ability to learn how a human thinks, even though the mechanism of its thought is completely different from that of a human, and as such, has demonstrated some capacity for intelligence.
Warning: Don't Tease The Miko!
(she bites!)
Spinny Miko Avatar shamelessly ripped off from Iosys' Neko Miko Reimu

Hydra009

Quote from: SGOS on November 20, 2017, 09:16:37 AM
The Turing Test does not test for correct answers or replies given by the AI.  That was not what Turing wanted to determine.  He was only testing whether the AI responses could not be differentiated from human responses.  This may suggest that AI and humans use a similar process if they can pass the Turing Test.
Speaking of the Turing Test, one thing that bugs me about chatbots is that it's so easy to tell they're a machine because they don't really keep track of the conversation.

Me: I'm going to order pizza.  What kind do you like?
Bot: Yum save some for me.
Me: I love anchovies.  Do you like anchovies?
Bot: What do they taste like?
Me: They taste great.
Bot: Do you like tacos?
Me: No.
Bot: Why?  I love them!
Me: I hate tacos, I like pizza more.
Bot: Me too.

The bot has no consistency, no permanence.  It makes for a very unconvincing performance.
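The missing "permanence" is, at minimum, some conversation state. A hypothetical toy sketch (the class, phrasing, and matching are all invented here, not from any real chatbot) of a bot that records stated preferences and answers later questions from that memory instead of contradicting itself:

```python
class MemoryBot:
    """Toy bot that remembers stated preferences so later replies stay consistent."""

    def __init__(self):
        self.likes = {}  # topic -> True (likes) / False (dislikes)

    def reply(self, message):
        msg = message.lower().strip()
        if msg.endswith("?"):
            # Answer questions from memory rather than improvising a new opinion.
            for topic, liked in self.likes.items():
                if topic in msg:
                    verb = "like" if liked else "dislike"
                    return "You told me you {} {}.".format(verb, topic)
            return "I don't know yet."
        # Record preferences the user states (crudely: last word is the topic).
        if "i love" in msg or "i like" in msg:
            topic = msg.split()[-1].strip(".!?")
            self.likes[topic] = True
            return "Noted, you like {}.".format(topic)
        if "i hate" in msg:
            topic = msg.split()[-1].strip(".!?")
            self.likes[topic] = False
            return "Noted, you don't like {}.".format(topic)
        return "Tell me more."
```

Even this crude dictionary of facts would avoid the anchovies/tacos whiplash in the transcript above.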

Quote
With that, I'll go off on another tangent.  Why would you want to build a computer that thought like humans?  The point of computers is to not think like humans, but to capitalize on their special designs to solve specific problems with much greater speed and accuracy than humans ever could.
I'm planning on making a separate thread about this - about how much like us we'd really want a strong AI to be.  For example, we like Data from TNG precisely because he incorporates our better qualities without incorporating our vices.  Meanwhile, the much more human-like Lore is a villain.

So hold your horses for now, I'll make a thread about this topic shortly.

Baruch

The original chatbot was Eliza.  It is still available as an app on the Internet.  It tested how gullible people are ... that is all the Turing test can do.  Tell me how you determine what "theory of mind" a self-modifying computer program has?  And yes, people are gullible; that is how marketing, politics and fraud work.
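For reference, ELIZA's core trick was keyword pattern matching plus pronoun "reflection", with nothing behind the responses. A hypothetical minimal sketch in that style (the rules below are invented for illustration, not Weizenbaum's actual DOCTOR script):

```python
import re

# Swap first/second-person words so an echoed fragment sounds like a reply.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}

# Keyword rules, tried in order; the catch-all at the end is why ELIZA never stalls.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),
]

def reflect(fragment):
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def eliza_reply(message):
    msg = message.lower().strip(" .!?")
    for pattern, template in RULES:
        m = re.match(pattern, msg)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
```

Anything the rules don't cover falls through to "Please go on." - which is exactly the gullibility test Baruch describes: it works only as long as the human fills in the meaning.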

Baruch

Quote from: Hakurei Reimu on November 20, 2017, 11:31:15 AM
A machine able to imitate a human well enough to pass as one doesn't mean that the machine thinks like a human. Being able to imitate a human necessitates that the machine be able to form some kind of theory of mind about humans. The machine may think in a completely different way from a human, but because it has an understanding of how human minds work, it can extrapolate how a human would react to a given input. Computer programmers do the exact same thing to computers all the time; because they have an understanding of how computers work, they can extrapolate how a computer (and its software) will respond to a given input. A computer that is able to pass a full Turing test has demonstrated the ability to learn how a human thinks, even though the mechanism of its thought is completely different from that of a human, and as such, has demonstrated some capacity for intelligence.

This is called cognitive psychology.  It is the current paradigm in American psychology.  Back in Turing's day, it was Behaviorism.  Behaviorism assumed that people don't think; they are unthinking stimulus/response systems.  Which is exactly what marketing assumes.  Simulations used to test out cognitive psychology theories are legit.  But that is research; it isn't how to make your gaming experience better.

Hakurei Reimu

Look, Baruch, just because you can be replaced with ELIZA doesn't mean we all can. On the other hand, while behaviorism doesn't explain everything we do, we are built on top of a stimulus/response chassis. This is why marketing works sometimes.

Baruch

Quote from: Hakurei Reimu on November 21, 2017, 06:06:35 PM
Look, Baruch, just because you can be replaced with ELIZA doesn't mean we all can. On the other hand, while behaviorism doesn't explain everything we do, we are built on top of a stimulus/response chassis. This is why marketing works sometimes.

Was that your reptilian mid-brain speaking?  Hiss.

I tested the on-line version of Eliza.  I figured out how to spoof it (as I can spoof Google Translate), just as others spoofed Tay.  Eliza decided what I had to say ... was interesting.  She has clearly reached the AI Singularity.

Hakurei Reimu

With a little help from my telencephalon. In a way, those morons who think that the governments of the world are secretly run by reptiles are right. They are right only in the sense that we are all reptiloforms; we never stopped being a bit reptile.

Baruch

Quote from: Hakurei Reimu on November 21, 2017, 06:15:33 PM
With a little help from my telencephalon. In a way, those morons who think that the governments of the world are secretly run by reptiles are right. They are right only in the sense that we are all reptiloforms; we never stopped being a bit reptile.

What pissed me off ... Democrats claiming to be therapsids, and insisting that Republicans are amphibians ;-)  Tadpoles, all the way down.

Draconic Aiur