

Chinese Room versus the Turing Test

Here’s a short and informative video which explains the Chinese Room thought experiment. This is an idea put forward by the philosopher John Searle in 1980 which supposedly disproves the notion that a machine could ever be proven to be as intelligent as a human being.

Maybe I’m mistaken, but I think there is one glaringly obvious flaw in this argument. Certainly the entity in the room could not be said to be intelligent because it is just following instructions, the meaning of which it doesn’t understand. But what about the “book” of instructions itself? I would have thought that it would qualify as being intelligent!
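To make the "book of instructions" idea concrete, here is a toy sketch of the room as a pure lookup table. The symbol pairs are invented placeholders, not a real rule book; the point is only that the person in the room needs no understanding, since any apparent intelligence lives in the rules themselves.

```python
# Toy Chinese Room: the person mechanically matches incoming symbol
# shapes against a rule book and copies back the paired response.
# The entries below are invented for illustration.

RULE_BOOK = {
    "你好吗": "我很好",      # hypothetical rule: this question maps to this answer
    "你是谁": "我是一个房间",  # another invented question/answer pair
}

def person_in_room(symbols: str) -> str:
    """Look up the symbols without understanding them; fall back when no rule matches."""
    return RULE_BOOK.get(symbols, "不明白")

print(person_in_room("你好吗"))
```

The function is trivially mechanical, which is Searle's point about the person; the question raised above is whether `RULE_BOOK` itself, scaled up enough to pass the Turing Test, would deserve to be called intelligent.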


  [ # 1 ]

If we think about the book of instructions, there are a few possible interpretations:

1- The book is in your language, so obviously you are performing an intelligent task.
2- The book is not in your language and you are just comparing the shapes of the letters in the question/answer system.
In that case you have two possibilities:
a) It's not intelligent
b) It's intelligent

If you say that just comparing shapes is not intelligence, and that animals in nature that are able to choose between differently shaped tools to pick up food are not intelligent, then the first alternative is correct.

But if you say (and that is what I believe, as an evolutionist) that comparing shapes or tools is a real demonstration of intelligence in the animal world, and if we take a human perspective on the process of learning languages and symbols, then the answer is yes: it can be called an intelligent process.
We must remember that when a child learns an alphabet, whether Western or Chinese, for example, the child is memorizing the letters at first. The child is not really reading the letters a, b, c; the child is only repeating information whose meaning she or he is not sure about.
It takes a little time before written language really becomes a natural skill.

I saw a presentation by a biologist colleague a few days ago; he is working with dogs.
Dogs don't really have a language center, but they can make inferences and comparisons, so he is working with memory. He shows the dogs plates with written commands (e.g. sit, stand up, roll…). He doesn't need to speak the command aloud.

Dogs are not able to take the memory of the letters and put it into a truly logical sequence because (remember) they don't have a language center, but they can memorize the shapes of the written commands, so they are able to "read" the commands even if they can't really transform the information into language as we humans do.
But that doesn't exclude the possibility that in some thousands of years dogs could develop a kind of language center, just as happened in the evolution of our species (this is just a playful hypothesis about talking dogs)...

But aren't comparison and memorization the first steps of human learning in childhood?
So if a being is able to make inferences, comparisons, and memorizations, there is a kind of understanding of what is being done. Is such a being intelligent or not?


  [ # 2 ]

I was thinking of the agent in the room being like a computer and the book of instructions being like an intelligent program which is running on the computer. However your description involving dogs is even better.

It reminds me of a conversation that I had once with a religious man who was adamant that humanity must be the centre of the universe and that nothing else could be thought of as intelligent. I told him about my two dogs who are very clever. One of them had learned to pick up a stick in its mouth and use it to knock on the door when it wanted to be let in.

My friend said that that didn't mean the dog was intelligent, because it had probably just seen a human doing it and was merely copying them! Well, that sounds mighty intelligent to me. smile


  [ # 3 ]

The ai-class videos about machine translation (Unit 1, Unit 22) reminded me of the Chinese Room argument, since Google seems to be building the Chinese Room. Even if it doesn’t understand and isn’t conscious, it is becoming a more and more useful tool!

To get a better idea of whether it's conscious or not, I think we could interrogate the program about what it's doing. So: "why did you translate wonton into those characters?" Note: if the program does it the way Norvig does it in the Unit 1 videos, it will miss the fact that "wonton" is an anglicization of the Chinese word and is represented by two characters in Chinese (not one, as Norvig finds).
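A toy sketch of what interrogating such a program might look like: a phrase-table translator that records which rule produced each piece of output, so it can answer "why did you translate it that way?" The phrase-table entries and trace format are invented for illustration, not Norvig's actual method.

```python
# Hypothetical phrase-table translator that keeps a trace of which rule
# fired for each word, so its choices can be questioned afterwards.
# The table entries are invented for illustration.

PHRASE_TABLE = {
    "wonton": "馄饨",  # two characters, as noted in the post above
    "soup": "汤",
}

def translate(sentence):
    """Translate word by word, recording (source, target, reason) for each choice."""
    output, trace = [], []
    for word in sentence.lower().split():
        target = PHRASE_TABLE.get(word, word)  # pass unknown words through unchanged
        reason = "phrase-table lookup" if word in PHRASE_TABLE else "no rule; copied as-is"
        output.append(target)
        trace.append((word, target, reason))
    return "".join(output), trace

translated, why = translate("wonton soup")
for source, target, reason in why:
    print(f"{source} -> {target}: {reason}")
```

Of course, answering "phrase-table lookup" is a long way from understanding; the interesting step would be a program that can revise its table in response to natural-language criticism, as suggested in post #5 below.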


  [ # 4 ]

I think we need a new definition of "understanding." I would say that if an AI understands every aspect of an entity, including the meta standpoint, then it understands. Humans tend to put their experience into definitions that bias the definition. Just because a computer doesn't have that little feeling of the "lightbulb coming on" doesn't mean it is unaware.

However… we don't know if translating languages is a fully mechanical event that can be encapsulated in a set of rules. As far as my brain can determine, there needs to be awareness of culture, the outside world, math, physics, and colloquialisms for a translation to be done correctly.
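A quick way to see why purely local rules fall short: a word-for-word translator mangles idioms, because the rules carry no cultural context. The English-to-French dictionary below is invented for illustration.

```python
# Toy word-for-word translator, illustrating that per-word rules
# miss idioms and agreement that depend on wider context.
# The dictionary entries are invented for illustration.

WORD_RULES = {
    "it": "il", "is": "est", "raining": "pleut",
    "cats": "chats", "and": "et", "dogs": "chiens",
}

def word_for_word(sentence):
    """Replace each word by its dictionary entry, ignoring all context."""
    return " ".join(WORD_RULES.get(w, w) for w in sentence.lower().split())

print(word_for_word("it is raining cats and dogs"))
# prints "il est pleut chats et chiens" — gibberish; a human would render
# the idiom as something like "il pleut des cordes"
```

Statistical approaches mitigate this by scoring whole phrases rather than single words, but the question of whether any rule set, however large, amounts to understanding is exactly the Chinese Room issue.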


  [ # 5 ]

Yeah, Google’s trying to falsify that hypothesis. Here’s your second paragraph translated into French:

Cependant ... nous ne savons pas si la traduction des langues est un événement entièrement mécanique qui peut être encapsulé dans un ensemble de règles. Autant que mon cerveau peut déterminer qu’il doit y avoir prise de conscience de la culture, le monde extérieur, les mathématiques, la physique, des expressions familières pour une traduction à faire correctement.

I see two obvious mistakes: take out the "qu'" in "qu'il doit y avoir…", and "à faire" is just wrong; maybe change the end of the sentence to "pour faire une traduction correctement" or "afin d'effectuer une traduction correcte" or something similar.

I would like to ask the translation program why it added “qu’” and tell it to take it out, because in the English, “determine” has no elided “that” following it. And I want to tell it, “à faire” is not the only way to translate “to be done”; in this context “to be done” indicates the passive voice, so you need a French construction that does that.

If the program could answer my questions, learn from my natural-language input, and try to apply it to other, similar contexts, then I would say it was well on its way to understanding :)

For more on the statistical machine translation approach:

