
Chinese room
 
 
  [ # 16 ]

Oops! I'm sorry, Victor. I missed your birthday wishes. Thank you very much. Even sans power, I spent an enjoyable day, giving all of my family members who were present a certain modicum of grief. All in good fun, of course! smile

 

 
  [ # 17 ]

The whole point is that the field of AI research has set a goal, attaining strong-AI, and a measure to go with it: being 'indistinguishable from a human'.

Getting to the level of 'Dog-like AI' would already be very impressive, and it can be tested in exactly the same way I described in my previous post here. Just replace the human benchmark with a dog benchmark and take it from there.

Dave Morton - Mar 19, 2011:

For we humans, what we call “intelligence” is innate in even a new-born baby. But that same “intelligence” emerges slowly, over time, to various degrees, and in various ways. I don’t see any reason why the same can’t apply to AI as well. At which stage, though, in an AI system’s development can we point to and say “Here is where it became intelligent”?

I would say that, staying with the premise that strong-AI is human-like AI, a conscious AI at any stage of development similar to human development CAN be tested in exactly the same way I described.

This is exactly what my paper is about: it really doesn't matter what level of intelligence/consciousness we use as the measurement, as long as we use something we can agree is at least 'at some level of intelligence/consciousness', and then test our AI with every test available for that 'standard' and require it to match.

As soon as you have dog-AI passing all conceivable tests for a real dog, you have attained dog-level AI. As soon as you have a baby-like AI passing all tests proving a human-baby level of development (mind you, those tests do of course exist), you have attained baby-like AI.

Now, if we stay with ‘human-like-AI = strong-AI’ then this simply implies that ‘baby-human-like-AI = baby=strong-AI’. I don’t see any problem with that, and in this case my argument (that I made in my previous post here) still stands.
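
A minimal sketch of that "agreed standard, all available tests" idea, in Python. The standards, test names, and results here are purely hypothetical and only meant to illustrate the argument, not to represent any real benchmark:

```python
# Hypothetical sketch: pick an agreed-upon standard (dog, human baby,
# adult human), gather every available test for that standard, and
# declare the level attained only when the AI passes them all.
# All test names below are invented for illustration.

TESTS_BY_STANDARD = {
    "dog": {"object_permanence", "command_response", "scent_tracking"},
    "human_baby": {"object_permanence", "face_recognition", "social_smiling"},
    "adult_human": {"object_permanence", "language_use", "abstract_reasoning"},
}

def attained_level(standard: str, passed_tests: set) -> bool:
    """True only if the AI passed every known test for the chosen standard."""
    return TESTS_BY_STANDARD[standard] <= passed_tests  # subset check

# Swapping the human benchmark for a dog benchmark is just a change of argument:
print(attained_level("dog", {"object_permanence", "command_response", "scent_tracking"}))  # True
print(attained_level("adult_human", {"object_permanence", "language_use"}))                # False
```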

 

 
  [ # 18 ]

Happy BDay Dave.
I think I'm with CR and Dave on this. I mean, what exactly is 'understanding'? It's not like there is a standard for it, like the 'kilo' or the 'meter'. I think many people consider themselves 'understanding' and 'intelligent' while not feeling that way at all about other humans, let alone AI. This, I think, will be an open debate for a long, long time.

 

 
  [ # 19 ]
C R Hunt - Mar 18, 2011:

But what if…

1. Strong Artificial Intelligence != Human-like Intelligence

This point is of course academic, as the field of AI research has simply set the name AND the measure to go with it.

The whole discussion about 'what IS strong-AI' is mainly fueled by certain philosophers who argue that, because we don't completely understand human intelligence, it is impossible to understand (or 'create', for that matter) artificial human-like intelligence.

My argument holds its ground in this light as well, because 'all available tests to measure human development/abilities' are, and always will be, calibrated to our current understanding of human intelligence/consciousness.

 

 
  [ # 20 ]

Just to prevent any misinterpretation, this:

Hans Peter Willems - Mar 19, 2011:

‘baby-human-like-AI = baby=strong-AI’

... should of course be: baby-human-like-AI = baby-strong-AI.

smile

Note: whatever happened to the edit button?

 

 
  [ # 21 ]
Hans Peter Willems - Mar 19, 2011:

My argument holds its ground in this light as well, because 'all available tests to measure human development/abilities' are, and always will be, calibrated to our current understanding of human intelligence/consciousness.

We don't subject humans to "all" available tests, so why should we need to subject AIs to them? Would all humans pass all tests?

 

 
  [ # 22 ]

I agree that the only practical way of testing an AI designed to mimic a living system is to have it “pass the same tests”. That is, to have it respond the same way under the same conditions (or at least, as close to them as you can get). But given the variation between members of the same species, this is difficult in practice.

This is especially true as the intelligence of the animal increases, since the factors and circumstances that cause its behavior multiply along with it. The "same conditions" criterion becomes difficult to meet. When one accounts for the fact that an animal-like intelligence won't be housed in an animal-like body for quite some time (until the tech catches up*), suddenly many useful metrics are off the table.

At any rate, we’ve strayed far afield of the topic at hand. When I first heard the problem of the Chinese room, the following resolution occurred to me immediately: it is not the man in the room that is the intelligence. The intelligence is all in the algorithm he carries out on the symbols. Just as our neurons and their varied connections and neurotransmitters need not themselves possess intelligence in order for the whole system—our brain—to have it. So the symbol processing is done via an intelligent agent—the algorithm that governs it!

*Of course, given the current pace of things, the robotics might beat the “brain” development.

 

 
  [ # 23 ]
Hans Peter Willems - Mar 19, 2011:

Note: whatever happened to the edit button?

I’m not certain, but I think that there may be a minor glitch with the forum script. If that’s the case, then I’m sure the team is working on it. I’ll ask Erwin when I get the opportunity.

 

 
  [ # 24 ]

Yup, I think so too. The fast reply box is just linking to the advanced reply page as well.

Happy (belated) birthday, by the way!

 

 
  [ # 25 ]

Thanks for the birthday wishes, CR and Jan. smile I appreciate it.

 

 
  [ # 26 ]
C R Hunt - Mar 19, 2011:

I agree that the only practical way of testing an AI designed to mimic a living system is to have it “pass the same tests”. That is, to have it respond the same way under the same conditions (or at least, as close to them as you can get). But given the variation between members of the same species, this is difficult in practice.

It is not difficult in practice for humans; we use tests to determine IQ, level of development (e.g. the SAT), etc. We don't test humans to determine whether they are, or are not, human. We test them to see where they score on a given scale for that test, and how far they have developed relative to a common scale of development.

So when testing a strong-AI (let's hypothesize that it exists), it is conceivable that this AI scores very well on some tests and not so well on others. That is not a problem at all; it will just have a 'more or less developed' level of 'human-like' intelligence. However, should it score on any given test BELOW the minimum score that any real human is known to be able to attain, then it fails.

Now you could bring up the argument that there are humans who fail these tests. That is not relevant, because we don't test humans to see whether they are 'human'. So when we test a strong-AI we are likewise not testing to see whether it is human (no test needed for that: it isn't), but to see how it measures up to the capabilities of a typically rated human taking the same tests.
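
A minimal sketch of that pass/fail criterion, in the same hypothetical spirit as the earlier sketch. The test names, scores, and human minimums are all made up for illustration:

```python
# Hypothetical sketch of the criterion above: the AI may score better or
# worse on individual tests, but it fails outright if it scores below the
# minimum score any real human is known to attain on that test.
# All names and numbers below are invented for illustration.

HUMAN_MINIMUM = {
    "language_comprehension": 40,
    "spatial_reasoning": 35,
    "social_inference": 30,
}

def meets_human_baseline(ai_scores: dict) -> bool:
    """True if the AI scores at or above the known human minimum on every test."""
    return all(ai_scores.get(test, 0) >= minimum
               for test, minimum in HUMAN_MINIMUM.items())

# Strong on two tests, just at the floor on the third: still passes.
print(meets_human_baseline({"language_comprehension": 85,
                            "spatial_reasoning": 35,
                            "social_inference": 60}))   # True

# Below the human floor on one test: fails, however impressive elsewhere.
print(meets_human_baseline({"language_comprehension": 99,
                            "spatial_reasoning": 20,
                            "social_inference": 99}))   # False
```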

C R Hunt - Mar 19, 2011:

At any rate, we’ve strayed far afield of the topic at hand.

Surely not. The Chinese Room argument is at the heart of the discussion about how to test for strong-AI, as it uses the premise that because you can invalidate such a test, you cannot perform such a test at all. From that point on, Searle argues that because you cannot perform such a test, you cannot 'have' strong-AI (this premise is of course an inverted argument and is therefore invalid by default).

C R Hunt - Mar 19, 2011:

When I first heard the problem of the Chinese room, the following resolution occurred to me immediately: it is not the man in the room that is the intelligence. The intelligence is all in the algorithm he carries out on the symbols. Just as our neurons and their varied connections and neurotransmitters need not themselves possess intelligence in order for the whole system—our brain—to have it.

This has of course been argued by several philosophers as well, and several successful counter-arguments have already been made. The main counter-argument against your premise is this: either it means that intelligence has to be a property of at least one of the smaller parts of the system, and we know this is not the case in humans because we don't have individual 'intelligent' neurons (as you point out yourself), or it means that 'intelligence' is an emergent property of the total combined system and not something that can be described in 'programming'.

C R Hunt - Mar 19, 2011:

So the symbol processing is done via an intelligent agent—the algorithm that governs it!

That, unfortunately, brings you back 'into the Chinese room': in Searle's argument, the man in the room IS the algorithm (or 'intelligent agent'), and the argument demonstrates that the algorithm can process symbols seemingly 'intelligently' without actually being 'intelligent' (and I must say that in this specific regard, his argument is hard to invalidate).

The only real solution to the Chinese Room argument is of course the 'Actor argument' I've brought up before: the man in the room is 'acting' like something he is not. However, this does not invalidate the fact that he IS actually intelligent; we are simply testing him with the wrong test. From that point it is easy to argue that the Chinese Room can actually be legitimately 'intelligent'. However, determining whether the Chinese Room is really intelligent (I prefer 'conscious', as it raises the bar to where 'strong-AI' really is, because of the 'hard problem') is a completely different matter altogether (hence my 'all tests' argument).

Arguing for or against the Chinese Room argument is somewhat like navigating a minefield, but fun nonetheless smile

 

 
  [ # 27 ]

Btw, thanks to Victor for starting this topic, and to the contributors so far for striking up a serious debate. This is helping me tremendously in preparing my research paper, so keep it coming smile

 

 
  [ # 28 ]

Let me add my birthday wishes to you, Dave. I hope your birthday present includes a berth in the top 10 of the CBC.

CR - Maybe we can just strap an AI onto "Big Dog" and start animal testing right away.
http://www.bostondynamics.com/robot_bigdog.html

 

 
  [ # 29 ]

I have to admit that, while this has been one of the more “serious” debates of late, it’s also been the most informative, insightful and instructive. And since we haven’t had any hurt feelings, it’s also been one of the most fun, from my perspective. smile Well done, folks!

 

 
  [ # 30 ]
Merlin - Mar 19, 2011:

Let me add my birthday wishes to you, Dave. I hope your birthday present includes a berth in the top 10 of the CBC.

CR - Maybe we can just strap an AI onto "Big Dog" and start animal testing right away.
http://www.bostondynamics.com/robot_bigdog.html

Thanks, Merlin. I saw those DARPA videos the other day, and I’m quite impressed. But I was somewhat disappointed to learn that it can’t “roll over” and “play dead”. smile

 
