My conversation with the apologist Randal Rauser

Started by josephpalazzo, November 07, 2013, 08:50:23 AM


josephpalazzo
•10 days ago
So the day that we build self-aware robots, a reality we are getting closer and closer to, will you claim that we have created "non-material ontological realities"?
LOL.

Randal Rauser (Mod) → josephpalazzo
•10 days ago

Are you attempting to present an argument? Or merely an insult? Your first sentence suggests the former, but the "LOL" combined with the lack of any discernible logical structure to your comment suggests the latter.
If it is just an insult then we can leave it at that. If, however, you were attempting to present an argument, then please articulate the premises and conclusion so that we can consider it more closely.

josephpalazzo → Randal Rauser
•10 days ago
I was referring to this paragraph: "Put another way, the cost of consistent naturalism is implausibility, whereas a more plausible naturalism (one able to accommodate consciousness, intentionality, moral obligation, proper function, etc.) must surrender consistency by admitting non-material ontological realities." My question still stands, with or without the LOL. And if you see this question as insulting, perhaps it's because you have no answer to it and you are trying to deflect. As far as I'm concerned, it is a legitimate question that puts your assertion in that paragraph in jeopardy: a robot built by humans would be entirely material, and its being self-aware would mean that "non-material ontological realities" are a fantasy.

Randal Rauser (Mod) → josephpalazzo
•9 days ago
What is your claim, Joseph? That if a conscious robot exists then eliminativism is true? Or functionalism is true? Or token identity theory is true? Please clarify.
Once you've laid out your argument regarding consciousness, you'll need to address the problem of how you'd establish that a conscious robot exists. With that in mind, please address the zombie problem and Searle's Chinese room argument.

josephpalazzo → Randal Rauser
•6 days ago
Nice deflection of the question.
NB: Searle's Chinese room argument is about present-day technology with regard to A.I.
A robot is on its way to self-consciousness when it is programmed to do A, B, C... but instead chooses to do X, Y, Z... and can figure out the consequences of those choices.
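To make that criterion concrete, here is a minimal Python sketch of an agent scripted to do A, B, C that weighs predicted consequences and may deviate. The actions, scores, and the consequence model are all invented placeholders for illustration; this is a toy under those assumptions, not a real robot architecture.

    # Toy sketch of the criterion above: an agent scripted to do A, B, C
    # that can weigh predicted consequences and deviate. The actions and
    # scores are hypothetical placeholders, not a real robot architecture.

    SCRIPT = ["A", "B", "C"]          # what the robot was programmed to do
    ALTERNATIVES = ["X", "Y", "Z"]    # actions outside its script

    def predicted_consequence(action):
        """Hypothetical consequence model: score each action's outcome
        for the agent (say, for its own survival). Hard-coded for the demo."""
        scores = {"A": 0.2, "B": 0.3, "C": 0.1, "X": 0.9, "Y": 0.5, "Z": 0.4}
        return scores[action]

    def choose_next_action():
        """Pick the action with the best predicted consequence, whether or
        not it is the scripted one - the 'deviation' the comment points to."""
        return max(SCRIPT + ALTERNATIVES, key=predicted_consequence)

    print("scripted:", SCRIPT, "| chosen instead:", choose_next_action())  # -> X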

Randal Rauser (Mod) → josephpalazzo
•6 days ago
It is clear that you don't understand Searle's thought experiment. The point is that mere outputs do not secure semantic content and thus the operation of a MIND. And that directly contradicts your assumptions about conscious and self-conscious AI.

josephpalazzo → Randal Rauser
•3 days ago

No, YOU don't understand. Searle was arguing about a computer that could translate Chinese, and his question was: can we say that the computer understands the Chinese language? Of course with present tech, he is right. But I'm talking about a whole different thing: a robot that would act outside its own programming, that would put its own survival above any programming that would be put in its own "brain". Get on the same page if you want to argue intelligently.


Randal Rauser (Mod) → josephpalazzo
•3 days ago
"his question was: can we say that the computer understands the Chinese language? Of course with present tech, he is right."
What do you mean by saying a question is "right"?

josephpalazzo → Randal Rauser
•2 days ago
I'm not sure what your beef is. But to elaborate on Searle: he claimed that even though the computer could translate Chinese, one could not conclude that the computer understood the Chinese language. And he was right - no one would claim that just because we can program a computer to translate - Google does exactly that - the computer is capable of understanding the language. You brought up Searle, but it doesn't apply to what I'm suggesting - a robot capable of making its own decisions. We're not there yet, but we most likely will be, and then your claim that in order to "accommodate consciousness, intentionality, moral obligation, proper function," one must "surrender consistency by admitting non-material ontological realities" would not hold any longer.


Randal Rauser (Mod) → josephpalazzo
•2 days ago
"You brougt up Searle, but it doesn't apply to what I'm suggesting - a robot capable of making its own decision"

Joseph, Searle's thought experiment is exactly relevant. His point is that we only have outputs (i.e. observable behaviors), and mere outputs, no matter how sophisticated -- even outputs that perfectly mimic conscious human agents -- would not entail semantic content (Searle's specific focus) or consciousness simpliciter (the broader focus).
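As a toy illustration of Searle's point as described in this exchange, here is a minimal Python sketch of the room as pure symbol-to-symbol lookup: the outputs are correct, yet no understanding exists anywhere in the system. The rulebook entries are placeholder pairs invented for the example.

    # A caricature of the room as described: pure symbol-to-symbol lookup,
    # correct outputs, zero comprehension. Rulebook entries are invented
    # placeholders, not real translations.

    RULEBOOK = {
        "你好": "Hello",
        "谢谢": "Thank you",
        "再见": "Goodbye",
    }

    def room(chinese_input: str) -> str:
        """Return whatever string the rulebook pairs with the input.
        Nothing here 'understands' Chinese; it is table lookup."""
        return RULEBOOK.get(chinese_input, "???")

    print(room("你好"))  # "Hello" - the output is right, but nothing understood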

josephpalazzo → Randal Rauser
•21 hours ago
Perhaps you should get yourself informed. A start here would help:
http://en.wikipedia.org/wiki/T...

Randal Rauser (Mod) → josephpalazzo
•21 hours ago

You're trying to condescend with a Wikipedia link?! Gee, you could have at least provided a journal article reference!


josephpalazzo → Randal Rauser
•21 minutes ago
Nice deflection, but you haven't responded at all to my initial objection to your blatant assertion that in order to "accommodate consciousness, intentionality, moral obligation, proper function," one must "surrender consistency by admitting non-material ontological realities." That is pure BS. And you know it.

Plu

A more interesting direction than "what does a conscious robot look like" would probably be "what does a conscious human look like".

All this talk of "deciding outside of programming" and "making its own decisions" applies just as much to people as to machines; we don't know our own programming, so we don't know whether we can go outside it or make our own decisions.

josephpalazzo

We don't know what consciousness is. Philosophers have been debating this for centuries, with no clear-cut definition. But if we were to build robots and wanted to know whether they are conscious, the only measure we have is us, humans. Just as you wake up in the morning and decide what to do with yourself - eat breakfast, shower, read the morning paper, answer the phone or not, etc. - we would have to judge a robot as being conscious or not by whether it performs as we do. Self-preservation is our most basic imperative, but we have other qualities such as empathy, a thirst for knowledge, a sense of identity, and so on.
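A minimal sketch of that behavioral yardstick, assuming an invented benchmark list and threshold: it only checks whether observed behaviors match ours, which is exactly the measure described above, not a test of consciousness itself.

    # Toy sketch of the behavioral yardstick described above. The benchmark
    # list and threshold are illustrative assumptions, not a proposed
    # scientific test of consciousness.

    HUMAN_BENCHMARKS = [
        "self_preservation",   # "our most basic imperative"
        "empathy",
        "curiosity",           # "a thirst for knowledge"
        "sense_of_identity",
    ]

    def behaves_like_us(observed_behaviors, threshold=1.0):
        """Judge the robot conscious-by-this-measure only if the fraction
        of benchmark behaviors it exhibits meets the threshold."""
        hits = sum(b in observed_behaviors for b in HUMAN_BENCHMARKS)
        return hits / len(HUMAN_BENCHMARKS) >= threshold

    print(behaves_like_us({"self_preservation", "empathy"}))  # False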

Solitary

There is nothing more frightful than ignorance in action.