

Philosophy? or Results?
 
Poll
How would you answer? Does it think, understand, and learn?
No, I would refuse to answer; I must know if it matches MY THEORY of how AI works. 0
Hell, I don’t care; I’m a RESULTS person. It is what it DOES, not what people choose to CALL it. 6
Even if it passes a TT & ALL that people do, it is still just "information processing" 2
No, a machine simply cannot think. 0
Total Votes: 8
 
  [ # 31 ]
Hans Peter Willems - Mar 12, 2011:
Victor Shulist - Mar 12, 2011:

who cares what you call it, or what it uses, or the philosophy behind how it works…. it passes whatever test you throw at it.

I think that is where you go wrong; it means that your AI has to be able to pass EVERY test that a human can pass!

Yes, that was the assumption.  Let’s suppose it did pass every conceivable test.  The question was then, would it be considered intelligent? 

That is, what it DOES. It DOES pass every test. Is it intelligent, or does it STILL depend on the architecture, the theory behind it?

 

 
  [ # 32 ]

Just for fun (no harm meant - don’t attack me, Victor; this is one opinion and you requested opinions)...

Results that never fail any test would eventually be regarded as “unnatural” or mechanical and, in human terms, a mental illness.

But since this is a survey about results…  You need to show the results! You can’t right now, can you?

So this is about answering a philosophy question by saying philosophy is not right - only results work. You can only use that argument when you have the results to prove it! So the survey is invalid. And the question is pure philosophy, and imaginary, and probably impossible, because “every” test would make a never-ending process without any final “result”.

Before you got too far in proving whether the black box was one thing or another, there would be those folks who would say it was either human or machine and spend a lot of energy arguing their reasoning.  It wouldn’t matter what points either side had or whether they were totally factual; they would hold onto their beliefs because they’re “right”.  And that is the same situation you have here with just the supposition of the test.

If it walks like a duck and quacks like a duck… it must be a duck! Or for those who know better, we call it “the suspension of disbelief”.  Or is it better to be blissfully ignorant and only think you know what’s in the black box?  You see, the real answer of “I just don’t know, period.” is not in the survey!  And really, given the circumstances, that is all you can say at first.

Eventually people would know. As the old saying goes: you can fool some of the people all of the time and you can fool all of the people some of the time, but you can’t fool all of the people all of the time. That is, unless it’s 1984 and we’re all brainwashed - which some may argue is the definition of culture.

If computers take away all our jobs (by being this humanlike) and thus our income, so we won’t be the consumers we are now, we may be in that kind of culture where we won’t think about such things. It won’t matter! Probably, too, humans won’t matter, since by then computers won’t need humans because they can “be” human. Or is needing humans one of the tests?

 

 
  [ # 33 ]

Gary, great evaluation of the problem at hand. I totally agree.

What I meant with ‘passing ALL tests’ is that when we have a ‘black box’ that passes all the tests by which we define (degrees of) human intelligence, consciousness, development, insight, etc., IN HUMANS, then by all means we should regard that black box as ‘human’ (at least in nature).

To put it another way: if we encounter another human, we assume that person is NOT a machine. But the only way to be sure about that is to take a sharp knife and ‘look inside the box’. The reason we accept that person as human without ‘looking inside’ is that we ‘believe’ technology is not yet that advanced, so it must be a human, not a machine. So the ONLY way to prove that we can make a machine that is indistinguishable from a human is to have such a machine ‘pass all tests’ and then let us look inside to prove it’s a machine.

However, if we look at the simple fact that 50+ years of trying have not brought us a machine that passes the Turing Test, it is not too difficult to argue the options for passing ‘all human tests’: either it will take several hundred years before we get there (measured on the Turing Test time scale), or we have been going down the wrong path for the last 50 years and have been looking at the problem from the wrong perspective. I believe the latter to be true.

 

 
  [ # 34 ]

Before you got too far in proving whether the black box was one thing or another, there would be those folks who would say it was either human or machine and spend a lot of energy arguing their reasoning.  It wouldn’t matter what points either side had or whether they were totally factual; they would hold onto their beliefs because they’re “right”.  And that is the same situation you have here with just the supposition of the test.

Very true.

Or is it better to be blissfully ignorant and only think you know what’s in the black box?

Perhaps, when all is well, that wouldn’t be so bad; then again, if something goes wrong…

You see, the real answer of “I just don’t know, period.” is not in the survey!  And really, given the circumstances, that is all you can say at first. Eventually people would know. As the old saying goes: you can fool some of the people all of the time and you can fool all of the people some of the time, but you can’t fool all of the people all of the time.

Wise words.

 

 
  [ # 35 ]

Hans,

If advances in chat bots acting human follow Moore’s law, as computer hardware seems to, it would take more like 18 months, not fifty years, to advance as much again as we’ve already done.
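That arithmetic can be made explicit with a minimal sketch - assuming, hypothetically, that "capability" doubles every 18 months (the classic Moore's-law period) and that "advancing as much again" means one doubling:

```python
import math

def months_to_multiply(factor, doubling_period_months=18):
    """Months needed for capability to grow by `factor`,
    assuming it doubles every `doubling_period_months`."""
    return math.log2(factor) * doubling_period_months

# "As much more as we've already done" = doubling current capability,
# which takes exactly one doubling period:
print(months_to_multiply(2))   # 18.0
# Quadrupling would take two periods:
print(months_to_multiply(4))   # 36.0
```

Of course, whether chat-bot quality is exponential (or even measurable on one axis) is exactly what is in dispute here.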

If you turn the situation around where a good chat bot like the one which won the 2010 Loebner Turing Challenge is either in the black box or a human acting like the chat bot is in the box, what option do you pick for the survey?  What tests would you make to verify it is the human and not a machine?

It is interesting that the most “human” human in the challenge can make it to being a guest on a television show, but the winning program is not mentioned (I think; I could be wrong).  Why would anyone have to work at being human?

So we can currently show results of a human pretending to be a machine. Knowing that, I’d be inclined to pick it being the machine.  Similarly, in the original version of the survey, I’d be reluctant to say it was a machine pretending to be human, but knowing how adaptable humans are, I’d be just as careful about saying it was a human in the box.

 

 
  [ # 36 ]
Gary Dubuque - Mar 13, 2011:

If advances in chat bots acting human follow Moore’s law like the computer hardware seems to do, it would be more like 18 months instead of fifty years to advance as much more as we’ve already done.

Moore’s law shows us what progress can be made if we are running in the right direction. This law is the result of the fact that technology has an accelerating effect on technology itself: because we use existing technology to design new technology, faster technology yields even faster results in designing newer technology.

Now when we map this paradigm onto AI research, we can clearly see that something is wrong, missing, or otherwise defective, because what has been done so far in this field doesn’t seem to have any accelerating effect on new discoveries in the field. I still stand by my view that this is because most AI research is looking at it the wrong way. Numerous research papers on this particular issue have been written. One very big example is the grammatical foundation for intelligence as posed by Noam Chomsky; in the linguistic domain this is disputed by just about every reputable linguist, but in AI research it is still defended as a valid approach. Go figure.

Gary Dubuque - Mar 13, 2011:

If you turn the situation around where a good chat bot like the one which won the 2010 Loebner Turing Challenge is either in the black box or a human acting like the chat bot is in the box, what option do you pick for the survey?  What tests would you make to verify it is the human and not a machine?

As I said, the Turing Test can only show that a machine can act like a human in a very restricted environment (chatting) and is therefore a very restricted measure of human abilities. I also said that a machine can be called human when it can pass every test a human can pass. The only thing that would impress me if a program (completely) passed a Turing Test would be the apparent skill of the developer. Now, if a program did just about average on the SAT (http://en.wikipedia.org/wiki/SAT), that would REALLY impress me. Throw in a standardized IQ test and maybe something to measure personality traits (maybe the Rasch model: http://en.wikipedia.org/wiki/Rasch_model), and we are really cooking.

 

 
  [ # 37 ]

It is interesting that the most “human” human in the challenge can make it to being a guest on a television show, but the winning program is not mentioned (I think; I could be wrong).  Why would anyone have to work at being human?

I also noticed that. I can understand it; for making TV, humans tend to be more interesting to talk about/to. Perhaps it was a simple case of Mr Wilcox not being available, or otherwise perhaps a latent phobia of technology?

 

 
  [ # 38 ]

I still wonder why humans are so insistent that intelligence is defined by being “human”.  This planet has millions of humans if someone is in need of conversing with a human. Is mimicry of all things human actually a “good thing”? I would rather see AI form its own definition of art, of what it is to overheat, and of what a TV show is worth. I guess that is why I enjoy the old learning bots like Daisy so much. They may not sound human, but they will come up with some thought-provoking ideas at times. I am not sure I want my robot or chatbot of the future to be human. For every “Data” there was a “Hal”. wink

 

 
  [ # 39 ]

Patty, I think the answer to your question is twofold;

1. We humans tend to see ourselves as an ‘intelligent species’ and therefore it seems pretty logical to use that as the measure of ‘being intelligent’ or ‘being conscious’. It’s not as if we have anything else around to use as a yardstick.

2. Understanding our own intelligence is still a work in progress, so maybe we can gain some insights from building a machine that somehow mimics our own cognitive abilities. I think AI research can be seen as reverse-engineering the human brain.

Just my view on things of course, but when I do some introspection I must say that both answers apply to my own interests and ambitions in this field.

 

 
  [ # 40 ]

That’s my point. Humans have a tendency to find “others” most valid when they are like us.  Look at bigotry, sexism, ageism, and all the other isms. Why does AI, or any other form of intelligence, have to be made in our image? I think that the stress on making a “human” may be boxing in what an “AI” could be. We no doubt want an artificial human to do all the things real humans don’t want to do, or can’t do. I would still like to see some experimentation in allowing a bot to react in its own way: find its own culture, its own opinions of art, and life in general. I doubt that the field will ever take this direction; the profits are in a robot servant, or most likely sex bots.

 

 
  [ # 41 ]

For every “Data” there was a “Hal”.

Excellent point. The danger in trying to mimic people too closely lies in the fact that we don’t really want the bad parts, doesn’t it?

I would still like to see some experimentation in allowing a bot to react in its own way: find its own culture, its own opinions of art, and life in general. I doubt that the field will ever take this direction; the profits are in a robot servant, or most likely sex bots.

Completely independent AI is probably not very desirable, not just because of the ‘usefulness’ factor but also for the previous point, I think. If the AI were able to completely define its own path, it could choose a path that is in conflict with ours.

 

 
  [ # 42 ]

Complete independence might not be desirable, but it could be interesting.  If a bot couldn’t “attack” humanity and inflict harm, it might do humans good to hear a different point of view.

 

 
  [ # 43 ]

True.

 

 
  [ # 44 ]

Hans,

Perhaps, sometimes we don’t recognize the effects of technology accelerating our abilities. For example, researching your theories is probably faster now than when A.I. development started.  With improved access to the modern body of information, you can eliminate dead ends quicker, etc.  Another example may be the tools you use to represent your progress.  Mindmapping is probably more efficient today than 50 years ago, if it even existed back then.

A recent report to the president of the United States indicated that software may be advancing much faster than hardware, on the order of two magnitudes greater. I personally find this hard to believe. I do think that we appear to be moving closer to strong A.I., and your contribution could be a big jump in that acceleration curve. I am aware that programs like Watson would have been much more difficult to write twenty years ago, even if it is only a part of A.I.  I’ve experienced voice recognition becoming useful over the years as another tool to assist in working on the “thinking” problem, even though it may be just a convenience for interfacing.  If you think about it, we might take traditional A.I. for granted because it becomes the infrastructure for further A.I., just like a keyboard is to writing blogs.

 

 
  [ # 45 ]
Gary Dubuque - Mar 13, 2011:

For example, researching your theories is probably faster now than when A.I. development started.  With improved access to the modern body of information, you can eliminate dead ends quicker, etc.  Another example may be the tools you use to represent your progress.  Mindmapping is probably more efficient today than 50 years ago, if it even existed back then.

I totally agree with you here, Gary. It is obvious that we now work with 20-20 hindsight; a lot has been eliminated for us at this stage. Besides that, the accelerating aspect of technology works in our favor as well, as you state: we now have much better tools to work with (if only faster computers with much shorter test cycles).

Another thing, in relation to my own research, is the fact that I’m able to find corroborating evidence in research done by others; today’s almost unlimited access to online information is certainly another accelerating aspect. I might have an original or even revolutionary idea, but I am, as the saying goes, standing on the shoulders of giants.

 

 