
Chinese Room
 
 

I can’t believe nobody ever created a thread regarding the Chinese Room!

Ok, so here we go!  I’m opening a massive ‘kettle of fish’ or ‘can of worms’ (for those of you not in the west, this means roughly “a complicated or controversial issue”).

For starters, we have:

http://plato.stanford.edu/entries/chinese-room/

I’m a strong functionalist (as I know Hans Peter is *painfully* aware smile ), simply because that position makes sense to me, and I don’t think anyone has ever shown that the “Systems Reply” is invalid. Probably none of these replies can be proven, and for that matter even the original Chinese Room argument itself can’t be proven… but what the heck, let’s indulge smile

I liked the point argued in the above document (under “Syntax and Semantics”) that computers aren’t really doing symbolic processing, and it is true! CPUs only know about voltage levels changing and whether current is passing from emitter to collector (or from source to drain) of a given transistor inside the microprocessor. So really, arguments resting on the claim that “the computer is only manipulating symbols” are perhaps invalid. Perhaps the AI could go right from voltage transitions straight to semantics; who knows.
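
A toy Python sketch of that “levels of description” point, just for illustration: what we read as the symbol ‘A’ is, to the hardware, nothing but a particular bit pattern held as voltage levels.

symbol = 'A'                        # to us: a symbol
bits = format(ord(symbol), '08b')   # to the machine: the pattern 01000001
print(symbol, '=', bits)            # prints: A = 01000001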

What side of this argument are you on?  John Searle’s original argument, or one of the replies to the argument?

 

 
  [ # 1 ]

... also I got a chuckle out of the argument, in support of the idea that any system ‘manipulates symbols’ only under an interpretation by an observer, that says a toaster changes the state of an untoasted piece of bread (logic ‘0’) to a toasted piece of bread (‘1’).  smile
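
A toy Python sketch of the toaster-as-computer joke, just for illustration; note that the labels belong to the observer, not to the toaster:

UNTOASTED, TOASTED = 0, 1       # the observer's symbols, not the toaster's

def toaster(bread):
    # The toaster's whole 'computation': flip one bit of state.
    return TOASTED if bread == UNTOASTED else bread

print(toaster(UNTOASTED))       # -> 1; 'symbol manipulation' only to us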

 

 
  [ # 2 ]

changes state of an untoasted piece of bread (logic ‘0’) to a toasted piece of bread (‘1’).

Erwin, Edit button please.

 

 
  [ # 3 ]

Searle also seemed to reject the idea that ‘symbol grounding’, even if achieved, would yield true understanding of Chinese: the ‘sensory inputs’ would be converted to data, and that data would just be “more symbols”… just more work for the man in the room!

 

 
  [ # 4 ]

Victor, are you trying to pad your post count? raspberry

Actually, I think that I’ve recently read (either in a thread here, or possibly somewhere else) posts that mention the “Chinese Room”, but that didn’t get into specifics about it. I must admit to only the briefest, barest familiarity with that particular thought experiment.

 

 
  [ # 5 ]

I’ve brought up the Chinese Room argument before, as have others (I think). But you asked for it; I’m working on my first official research paper and it is in fact about the Chinese Room argument. This is also why I brought up the ‘symbol grounding problem’.

So here goes: Searle tries to invalidate the Turing test by bringing up the Chinese Room argument. He fails to invalidate the TT because the Chinese Room argument is in itself invalid. However, and this is what my paper is about, the reason that the Chinese Room argument is invalid is ALSO the reason why the Turing test is invalid. Of course, trying to invalidate the possibility of strong-AI by invalidating the Turing test is prone to fail, because the TT is not capable of proving strong-AI in the first place.

For the exact argument and my substantiation of it you’ll have to wait for my paper (it’s just a few weeks away), but for those really interested in everything around the Chinese Room, I have some links for you:

http://consc.net/papers/qualia.html
http://www.philosophyonline.co.uk/pom/pom_functionalism_chinese_room.htm
http://newempiricism.blogspot.com/2009/02/symbol-grounding-problem.html
http://consc.net/mindpapers/6.1c
http://cogprints.org/1573/1/harnad89.searle.html
http://plato.stanford.edu/entries/chinese-room/
http://www.psych.utoronto.ca/users/reingold/courses/ai/cache/searle.html
http://www.loebner.net/Prizef/TuringArticle.html
http://www.human-nature.com/articles/chalmers.html

smile

 

 
  [ # 6 ]

Ah, I see that Victor’s link is in my list as well. I just took the links from my research database and didn’t check smile

 

 
  [ # 7 ]
Hans Peter Willems - Mar 18, 2011:

Ah, I see that Victor’s link is in my list as well. I just took the links from my research database and didn’t check smile

No problem Hans Peter.

Yes, I did realize you mentioned the CR and the SG problem, so I thought it was time for the Chinese Room to have a home smile

Dave, I highly suggest reading all about this most interesting argument against strong AI.


I would certainly like to read this paper of yours in the next few weeks, Hans Peter.

 

 

 
  [ # 8 ]

I really don’t know if there will ever be a resolution to the Chinese Room argument.

I mean, say a program passes the TT. “No, I don’t buy that it understands; even though it passes the TT, its symbols aren’t grounded, so it doesn’t count.” Ok, ok, so we ground the symbols… oh, you still don’t buy that it understands?

Ok… it has FULL-BLOWN consciousness. Hmm… well, how would you really know it is conscious? Is there a “test” for “machine consciousness”?

Hmm, let me Google it… “tests for machine consciousness”

Ok, everything I find seems to suggest a test, and what do tests require?

You got it: when you test X, then X is required to DO SOMETHING…

... do something… so, forgive me, it seems we are right back to, you guessed it, functionalism.

 

 

 
  [ # 9 ]

From the “Tests” section at http://en.wikipedia.org/wiki/Consciousness#Tests :

As there is no clear definition of consciousness and no empirical measure exists to test for its presence, it has been argued that due to the nature of the problem of consciousness, empirical tests are intrinsically impossible.

However, several tests have been developed that attempt an operational definition of consciousness and try to determine whether computers and non-human animals can demonstrate through behavior, by passing these tests, that they are conscious.

 

 

 
  [ # 10 ]

Testing (as in ‘the right way of testing’) is very simple.

In the next argument I use ‘Intelligence’ as shorthand for ‘intelligence/consciousness/knowledge’:

1. Strong Artificial Intelligence = Human-like Intelligence
2. if 1. is true, then Strong-AI performance = HI performance
3. if 2. is true, then Result(testing Strong-AI ‘P’) = Result(testing HI ‘P’)

Ergo, Strong-AI = Human-like-I is true when Result(testing Strong-AI ‘P’) = Result(testing HI ‘P’).

Quod erat demonstrandum smile
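
A minimal Python sketch of that test, just for illustration; the names are mine (‘probes’ stands for the battery ‘P’, and subjects are callables that answer probes), not anything from the forthcoming paper:

def results(subject, probes):
    # Result(testing subject 'P'): the subject's response to each probe.
    return [subject(p) for p in probes]

def human_equivalent(strong_ai, human, probes):
    # Premise 3: Strong-AI = HI exactly when the two result sets coincide.
    return results(strong_ai, probes) == results(human, probes)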

 

 
  [ # 11 ]

But what if…

1. Strong Artificial Intelligence != Human-like Intelligence


Out of curiosity, would you all consider a computer intelligent if it mimicked a dog perfectly?* Dogs certainly have a form of intelligence. And yet how could you tell the program wasn’t just composed of clever tricks? The advantage of a human-like intelligence is that we can directly compare it against ourselves. But that doesn’t mean it’s the only “way” in which a computer can be smart. Consider our earlier discussion on the importance of language and symbolic representation to the way people process ideas. There’s no reason a computer need be limited to chunking ideas together into single concepts before processing complex thoughts.

* Or even better, a mentally handicapped person?

 

 
  [ # 12 ]

YAY!!! The power’s back on!!! smile (we lost our electricity for a few hours, due to the very high winds that we’ve been experiencing. NOT the ideal way to spend one’s birthday)

CR has touched upon one of my arguments about how we classify intelligence. We, as humans, seem to use only one “yardstick” with which to measure intelligence, and that is ourselves. It’s my considered opinion that this view is terribly short-sighted. Granted, if an entity can pass a test (or many, for that matter) that demonstrates human intelligence, we should have no trouble saying that this same entity, be it a computer AI system, an alien from another planet, or even a species of animal right here on Earth, is intelligent. But there are certainly other ways to measure intelligence that may fall outside the human realm. The old joke about aliens coming to visit, but leaving because they couldn’t find any “intelligence” here, could have some faint grain of truth if the tests used were too narrow. This, I fear, is our current situation. We may well be judging by insufficient (or incorrect) criteria.

 

 
  [ # 13 ]

Dave, happy birthday! 
Hans Peter: that sounds like a test of functionality.
CR: Yes, I would consider a virtual dog intelligent if it could mimic a real dog.  Dogs have a certain intelligence.

 

 
  [ # 14 ]

If my AI performs better than ANY human, is it a strong AI?
What if it has a better vocabulary than a 5-year-old? What if, like most humans, it excels in one area but has limitations in another?

As time passes and each area is mastered will it have to perform better than ALL humans before it is a strong AI?

 

 
  [ # 15 ]

For we humans, what we call “intelligence” is innate even in a new-born baby. But that same “intelligence” emerges slowly, over time, to various degrees, and in various ways. I don’t see any reason why the same can’t apply to AI as well. At which stage in an AI system’s development, though, can we point and say “Here is where it became intelligent”? Any point we “assign” is just going to be arbitrary, I think. In my opinion, any AI system that can demonstrate the same intellectual skills and abilities as, say, a three-to-five-year-old child is certainly “intelligent”, but again, that’s just an arbitrary point.

 
