Philosophy? or Results?
 
Poll
How would you answer? Does it think, understand, and learn?
No, I would refuse to answer; I must know if it matches MY THEORY of how AI works. 0
Hell, I don’t care, I’m a RESULTS person. It is what it DOES, not what people choose to CALL it. 6
Even if it passes a TT & ALL that people do, it is still just "information processing" 2
No, a machine simply cannot think. 0
Total Votes: 8
 
  [ # 46 ]
Gary Dubuque - Mar 13, 2011:

But since this is a survey about results…  You need to show the results! You can’t right now, can you?

Did I say I had the results? Do you have results? I haven’t seen any sample I/O from your work.

 

 
  [ # 47 ]

Gary, I’m not sure if I understood you correctly.

 

 
  [ # 48 ]

Damn it, Erwin... why did you remove the Edit button again?!

 

 
  [ # 49 ]
Gary Dubuque - Mar 13, 2011:

So this is about being able to answer a philosophy question by saying philosophy is not right - only results work.  You can only use that argument when you have the results to prove it!

Really? Why is that?

This is a hypothetical question. If I ask a schoolchild, ‘You have 2 apples and take away 1 apple; how many are left?’, do I really need the actual apples? No, I don’t.

So please explain why I need to have the results myself, right now, to ask this question.

 

 
  [ # 50 ]

You mentioned that the survey is invalid because I do not yet have all the “results”. Why is that? Why can I not ask the question, even if it is hypothetical? I’d say your rejection of the survey’s validity depends on YOUR results, of which I have seen ZERO so far in any thread.

 

 
  [ # 51 ]

Hello again, everyone… First off, I have to apologize for the fragmented responses. I was multi-tasking like crazy, checking chatbots.org in between and trying to catch up on all the posts, so apologies for not waiting until I had a good chunk of time to post one big reply!

So, we have a lot of talent involved in this thread, that’s awesome.

Now, Gary, you made a very entertaining post there with your attempt to turn the whole poll into a kind of ‘Halting Problem’ where you argued I was using philosophy to conclude philosophy was invalid… that was cute.

So, just to clarify: I wasn’t suggesting that I would say, ‘OK, the results of the poll are in, and everyone says they prefer results (and they don’t care about the philosophy of the design of the bot), THUS I conclude (philosophically) that philosophy is invalid.’ No, because like you say, that was a cute little trick to turn the thing into a self-referential logical puzzle like ‘I always lie’ or ‘The following sentence is true. The preceding sentence is false.’ Novel little approach.

So, just to clarify things for you: I set up the poll for only ONE purpose. I simply wanted to know, for each person on chatbots.org: given a bot whose design you were not allowed to know, WOULD YOU, being happy with the results of testing, consider it intelligent (if in fact it passed all the tests you could give it), *OR* would you say, ‘NO, I *must* see that code, I must have a look at the design’, and refuse to answer unless it fits YOUR own approach to how an intelligent bot should be designed? In short: would you a) refuse to answer unless you knew how the bot worked, or b) be happy with TESTING of its actual abilities?

So again, the poll was to get an idea of which members of this site are more focused on what the bot can achieve when deciding whether it should be called intelligent, and which would demand to know how it works, its design, etc.

My approach was not to use philosophy to invalidate philosophy.  I don’t think I stated that anywhere.  So I don’t really see how the poll/survey is ‘invalid’.

Also, regarding the point of the number of tests being infinite: is that really so? Are there really an infinite number of tests you would have to do? I doubt this.

I think that is similar to the argument that, if you could only travel half the remaining distance at a time between your starting position and your destination, you’d never get there. For PRACTICAL PURPOSES, I think you would get there, and with enough tests of a bot, for practical purposes, you’d have enough to decide: ‘Hmm, do I consider this bot intelligent since it passed 35 million tests, or do I want a look at that code and design before I make a decision?’ So that is the purpose of the poll: to simply find out each member’s thoughts on that. Would you #1 do enough tests and base your conclusion on that, or #2 say NO right from the start: ‘If I can’t see the design principles of this thing, I refuse to answer’?
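
(For what it’s worth, that ‘half the distance’ puzzle is Zeno’s dichotomy, and the usual resolution is that the infinitely many half-steps add up to a finite total:

\[ \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \dots = \sum_{k=1}^{\infty}\left(\frac{1}{2}\right)^{k} = 1 \]

By the same token, a finite battery of tests can be enough to make a practical call, even though the space of possible tests is unbounded.)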

Clear? And yes, I did notice you stated ‘just for fun’... I think the whole ‘philosophy-invalidating-philosophy’ thing was you just having fun, right? That’s cool. But hey, even if I *was* using philosophy to invalidate philosophy, is that necessarily invalid, really? Perhaps there is ‘meta-philosophy’... OK, OK, no more of that for me! smile

 

 
  [ # 52 ]

... so, the purpose of the poll:

        to get each member’s preference as to how they would evaluate a given bot (YES/NO: would you be happy with only testing the bot, without access to its innards, or would you refuse to answer?)

NOT…

        to conclude, ‘OK, most people said philosophy doesn’t matter, THUS use philosophy to invalidate philosophy.’
(If you’re still not clear on that, perhaps send me an email, so we don’t drag the whole discussion down with silly self-referential logic tricks.)

 

 
  [ # 53 ]

Whatever you say, Victor. It is your survey. Cheez, my opinion is not acceptable, huh?

You know the question becomes a whole different thing if it ever came to pass. No doubt, you would look inside and check if it was human and then wonder, how did it do that? And you would feel cheated if the magic wasn’t there (like someone remotely giving the machine the answers, which is exactly the trick they played on me back in college some 40 years ago).

As for what you aim for in the survey, you can fool some of the people… because you are right, of course, blissfully correct in your comfortable world when the computers take over (results, man, that’s what you’re proposing). Unless, of course, the machine can’t understand and learn and think, etc. And so what are you creating a chatbot for? And how could it pass the tests of being humanlike, all the tests as you stated, and still be a fake? Come on now.

 

 
  [ # 54 ]
Gary Dubuque - Mar 14, 2011:

You know the question becomes a whole different thing if it ever came to pass.  No doubt, you would look inside and check if it was human and then wonder, how did it do that?

That was exactly my point; show a machine that is (by measure of all the tests applied) human-like, and then show that it’s actually a machine and not someone faking it in some way.

So the real proof is in looking inside the box!

 

 
  [ # 55 ]

To follow on from a human trying hard to be the most human… One of my test questions for the black box: describe your experience of witnessing the Mona Lisa. If the box can’t see, then describe your experience of hearing Ode to Joy. If the box can’t hear, then what does a summer day feel like? etc. Stories are full of vivid imagery, so can the magic box create a good (interesting) story?

The bottom line: a picture is worth a thousand words. Art sometimes cannot be expressed using only words. Right-side and left-side brains both make up a person. Which comes first, the mind or the language the mind uses? And since you brought it into the discussion, what is thinking, really? Don’t we need to know that before we can answer your survey? Or are we all talking about something different, our own opinions, perhaps?

 

 
  [ # 56 ]

Thinking is internal, right? But how can it not be thinking when it looks like it is thinking?

Do calculators think? They must, because they do the math! And they learn, because they don’t always display the same answer; they provide feedback based upon input. They must understand, because they seem to give the right answer. Is this what the survey demonstrates? You are absolutely right! A calculator fits the theory of…

 

 
  [ # 57 ]

Although I get the feeling your question was not directed at me, I’d like to respond smile

Gary Dubuque - Mar 14, 2011:

One of my test questions for the black box: describe your experience of witnessing the Mona Lisa. If the box can’t see, then describe your experience of hearing Ode to Joy.  If the box can’t hear, then what does a summer day feel like? etc.  Stories are full of vivid imagery, so can the magic box create a good (interesting) story?

I agree that such questions could only be answered by someone or something with a certain level of consciousness, perception and reasoning. However, the ability to perceive those impressions is not a prerequisite for intelligence or consciousness, as that would mean that a blind and/or deaf person would not be able to have consciousness.

As for what comes first, mind or language: the whole idea that intelligence and knowledge are built on top of language seems totally foreign to me. In my view, language is a learned skill and a symbolic representation of our reality. A toddler starts exploring concepts first and starts talking later. Besides, some languages use a completely different system (e.g. iconic vs. grammatical) than others, meaning that you cannot interpret them in the same way as other languages. Translating in these cases automatically implies understanding of the underlying concepts, because direct symbolic mapping from one language to the other doesn’t work.

 

 
  [ # 58 ]
Gary Dubuque - Mar 14, 2011:

Whatever you say, Victor. It is your survey. Cheez, my opinion is not acceptable, huh?

It is perfectly acceptable. Apologies if I seemed to come across that way.

 

 
  [ # 59 ]

IMHO, I think it is simply a matter of preference whether you go with a results-based (black-box) evaluation of a bot or a white-box analysis. That’s just me smile
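
To make that concrete, here is a rough sketch in Python of what a purely results-based, black-box evaluation could look like; the bot and the test cases are made up purely for illustration, not anyone’s actual system. The harness only ever sees questions and replies, never the bot’s internals:

# Hypothetical black-box test harness: it never inspects the bot's internals,
# it only feeds in questions and checks the replies.
def black_box_score(bot_reply, test_cases):
    """bot_reply: any callable taking a question string and returning a reply string.
    test_cases: list of (question, check) pairs, where check is a predicate on the reply.
    Returns the fraction of tests passed."""
    passed = 0
    for question, check in test_cases:
        reply = bot_reply(question)  # only I/O crosses the boundary of the sealed box
        if check(reply):
            passed += 1
    return passed / len(test_cases)

# Toy usage with made-up tests:
def toy_bot(question):
    return "2" if "1 + 1" in question else "I don't know."

tests = [
    ("What is 1 + 1?", lambda r: "2" in r),
    ("Describe a summer day.", lambda r: len(r.split()) > 5),
]
print("Passed {:.0%} of the tests".format(black_box_score(toy_bot, tests)))

A white-box analysis would, on top of that, open up toy_bot and judge its design; the harness above deliberately never does.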

 

 
  [ # 60 ]
Hans Peter Willems - Mar 14, 2011:

As for what comes first, mind or language: the whole idea that intelligence and knowledge are built on top of language seems totally foreign to me.

I half agree: I agree that intelligence is not based solely on language. Language is a tool of knowledge. But I disagree with the second half; that is, I think knowledge is (at least in part) based on language, especially declarative knowledge. Procedural knowledge, no; there are other elements besides language at work. But certainly written language helps immensely in learning procedural knowledge.

 
