
Will you answer no to this question?
 
 
  [ # 16 ]

Although... even with “Either yes or no as an answer will invalidate this answer; you presented a paradox. YOU go figure it out yourself”, in that case the answer was still No (since you replied with an explanation of the logic). But... again, you can’t say it!! Grrrrrrrr!!

But yes, that is what the bot needs to do, Hans. And the ability to discuss it if the user asks “Can you explain more?”

 

 
  [ # 17 ]
Victor Shulist - Mar 16, 2011:

Do you do what I do?  Keep a notebook and pencil beside the bed ?

I do have a notebook beside my bed, but I hardly use it anymore. These days I’m working in my sleep, waking up with completely worked-out ideas. It’s weird, I know. It’s also pretty tiring, as it seems I don’t really rest when this happens, and I wake up more or less a wreck (but a wreck with answers).

 

 
  [ # 18 ]

Well... that sounds very promising! Not from a good night’s rest point of view, but from a project point of view it certainly does!

 

 
  [ # 19 ]
Victor Shulist - Mar 16, 2011:

And ability to discuss if user asks “Can you explain more”?

I guess that’s the important part in this case; the AI should be able to give evidence of understanding the concept of ‘paradox’ and why, in this case, it actually is a paradox.

I think it will be a serious challenge (but a ‘fun’ one) to teach the concept of ‘paradox’ using my mind-model. How do you explain what a paradox is? This goes directly to the ‘symbol grounding problem’. However, things like this (e.g. a paradox) are exactly the things I need to be able to validate my mind-model on.

Maybe this: paradox = logical impossibility.

But then we need to define ‘logical’ and ‘impossible’.... and so on and so forth.

 

 
  [ # 20 ]

Yes, I was also thinking this over... all the way home from work yesterday, and I figured that the AI would need to explore freely.

We certainly don’t want to hard code the IF/THEN structure.

The AI needs to be able to figure out how to run experiments.

In the other example:

“The following sentence is true.  The preceding sentence is false”.

The AI needs to structure its own approach on this.

Part of the challenge, I guess, will be figuring out the structure of the problem space.

Then it has to figure out for itself how to explore that space by experimentation.

By that I mean, it has to know to reason along these lines:

“Hmm, I’m going to assume sentence 1 is true. OK, what are the consequences of that? Well, then sentence 2 is true. What are the consequences of that? Well, then that would mean sentence 1 is false. Ding!! Ding!! Contradiction detected!! Sentence 1 is now both true and false at the same time! Can’t have that!”

Sentence 1 is true (by assumption), but from that assumption the “chain” of consequences has resulted in sentence 1 being proven the opposite of our assumption.

This is all great, but the real problem is: how do we get it to do that exploration by itself?
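Roughly, that exploration could look like the following minimal Python sketch (not from any actual bot; the two sentences and their consequence rules are hard-coded here purely for illustration):

```python
from itertools import product

# Sentence 1: "The following sentence is true."   -> s2 must equal s1
# Sentence 2: "The preceding sentence is false."  -> s1 must equal (not s2)
def consequences(a):
    """Truth values forced on each sentence by the assumed assignment."""
    return {"s2": a["s1"], "s1": not a["s2"]}

def consistent(a):
    """An assumption survives only if its consequences don't contradict it."""
    return all(a[s] == v for s, v in consequences(a).items())

# Try every possible assumption, exactly like the "Ding!! Ding!!" walk-through.
assignments = ({"s1": v1, "s2": v2} for v1, v2 in product([True, False], repeat=2))
solutions = [a for a in assignments if consistent(a)]

if not solutions:
    print("Every assumption leads to a contradiction: the pair is a paradox.")
else:
    print("Consistent assignments:", solutions)
```

Since no assumption survives its own consequences, the sketch reports the pair as a paradox; the hard part, as noted above, is getting the AI to build the consequence rules and the search for itself instead of having them handed to it.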

 

 
  [ # 21 ]

Now there are several existing theorem provers out there.  But the cool thing will be linking this to complex natural language.

 

 
  [ # 22 ]

Another thought I had was regarding ‘continual reflection’.

Meaning, your program is asked question Q, and by more than one algorithm (or rule, or learned information, whatever you want to call it), replies with:

Yes
No

Say rule-1 outputs Yes to Q, but rule-2 outputs No.

Before the AI responds, it should look at all the outputs it would have sent to the user and first pump them into a “level 2” analysis.

A “level 2” analyzer would see... “hmm, we can’t output both Yes and No”... so that “level 2” rule would take the input, do whatever processing, and output something; call it R1. BUT what if there were other level-2 rules that generated R2 and R3?

So NOW the AI should ask: can I output THIS list to the user (R1, R2, R3)? *OR* does that need further processing? Try all “level 3” rules... and *IF* no level-3 rule outputs anything (there are no level-3 rules that have anything to say), only THEN does it know it should stop processing and respond with R1, R2, R3... but if there WERE outputs, the process goes on and on.
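That reflection loop might look roughly like this minimal sketch (the rule names and the list-of-levels representation are assumptions for illustration, not anyone’s actual engine):

```python
def reflect(responses, rule_levels):
    """rule_levels: lists of rules for level 2, level 3, ... Each rule takes
    the current response list and returns a replacement list, or None if it
    has nothing to say about it."""
    for rules in rule_levels:
        fired = False
        for rule in rules:
            result = rule(responses)
            if result is not None:
                responses = result
                fired = True
        if not fired:
            break  # no rule at this level spoke up, so stop and respond
    return responses

# Example level-2 rule: notice that Yes and No can't both go out the door.
def no_contradictory_answers(responses):
    if "Yes" in responses and "No" in responses:
        return ["Either answer would contradict the other, so I can't answer plainly."]
    return None

print(reflect(["Yes", "No"], [[no_contradictory_answers]]))
```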

Another idea ....  we need to convert events that are happening into data that can be processed.

“Will you answer no to this question?”

the bot is unsure what you mean and asks:

“What question?????”

But then. . . the very EVENT of it answering should be (somehow) converted to a statement.

It needs to realize... “Hey!! I didn’t answer NO to that question”... and that event-to-statement conversion results in a fact.

So now we have:

        Fact: the user wants to know if I will answer “no” to his question
        Fact (converted from the actual “event”): “I did not answer no to his question”

........and deduce ......

You get the idea.
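A toy sketch of that event-to-statement conversion (the fact tuples and the single deduction rule are invented purely to illustrate the idea, not taken from any actual bot):

```python
facts = set()

# Fact taken from the user's utterance.
facts.add(("user_asks", "will_I_answer_no_to_this_question"))

# Fact converted from the actual EVENT: the bot replied "What question?",
# i.e. it did not answer "no".
facts.add(("event", "I_did_not_answer_no"))

# A toy deduction over the two facts: the bot can now reason about its own
# behaviour as if it were ordinary data.
if (("user_asks", "will_I_answer_no_to_this_question") in facts
        and ("event", "I_did_not_answer_no") in facts):
    facts.add(("deduced", "so_far_I_have_not_answered_no_to_that_question"))

for fact in sorted(facts):
    print(fact)
```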

 

 
  [ # 23 ]

Also... could an AI get this joke?

User:  Want to know how to keep a dummy in suspense??

AI : sure

User:  ok, I will tell you tomorrow

AI : you dirty &%#$%#%@  !!!!!!!!!!!!!!!!!!!!!!!!!!!

Again, it has to convert what is happening in the conversation into statements which can be reasoned about.

 

 
  [ # 24 ]

Just from the human side of things: I have fallen for the “how do you keep a dummy in suspense” joke several times. I still fall for the YouTube videos that have a monster jump out. I would no doubt agree to answer ‘no’ to the “will you answer no” question if I am in a good mood... or sit like a dummy in suspense waiting for the punch line.

 

 
  [ # 25 ]

“Also... could an AI get this joke?” - If you ask Skynet-AI to tell you a joke, you may get one similar to this.

What you want is to have the bot use the “Laws of Thought” to respond that the statement is not logical.
http://en.wikipedia.org/wiki/Laws_of_thought

I believe this would fall under “the law of non-contradiction”.
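As a rough sketch of how that check could work as a response filter (the proposition names are assumptions; answering “no” commits the bot both to “my answer is no”, the act, and “my answer is not no”, the content):

```python
def violates_non_contradiction(commitments):
    """commitments: set of (proposition, truth_value) pairs an answer entails.
    The law of non-contradiction is violated if any proposition appears with
    both truth values."""
    return any((p, not v) in commitments for p, v in commitments)

# Answering "no" to "Will you answer no to this question?" commits the bot to
# "my answer is no" (the act of saying it) and "my answer is not no" (what the
# answer asserts), so the check fires.
commitments = {("my_answer_is_no", True), ("my_answer_is_no", False)}

if violates_non_contradiction(commitments):
    print("That question has no consistent answer: it breaks the law of non-contradiction.")
```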

 

 
  [ # 26 ]

While Laybia is currently still just a rather dim PHP-modified AIML bot, she does seem to give a reasonable answer (a request for more information).

Gödel: Will you answer no to this question
Laybia: What was the question?

 

 
  [ # 27 ]

That’s not bad!

 

 
  [ # 28 ]
Victor Shulist - Mar 16, 2011:

(snip) I have some ideas on how this could be accomplished, has anyone else thought of these types of things for their bots to tackle ?

I have ideas to solve the “recursive problem” also, and several first-order logicians have tackled this issue. I’m not quite sure enough of myself (or my math) to provide all of you with details, but I would be interested in hearing what you have to say about it, Victor.

Raymond

 

 
  [ # 29 ]

Hi Raymond

I hope your recovery is coming along well.

Yes... these ideas I have are fairly fuzzy at this point. But the basic premise is to have the system continually examine its own results with whatever rules it has at its disposal. Basically, each response deduced is added to a list. But before the final list is presented to the user, that list is fed back into the system, and this keeps happening until no further conclusions can be drawn. I know a lot of logic languages do this already, for example PROLOG, but I’m not sure how much integration they have with natural language at this point.
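A bare-bones sketch of that fixed-point loop, i.e. naive forward chaining over toy string facts (the rule format here is an assumption for illustration, not any particular logic engine):

```python
def forward_chain(facts, rules):
    """facts: set of strings. rules: list of (premises, conclusion) pairs.
    Re-apply every rule to the growing fact set until nothing new is added."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (["sentence 1 is true"], "sentence 2 is true"),
    (["sentence 2 is true"], "sentence 1 is false"),
    (["sentence 1 is true", "sentence 1 is false"], "contradiction"),
]

print(forward_chain({"sentence 1 is true"}, rules))
```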

 

 
  [ # 30 ]

Unless I’m unaware of something (anyone, jump in and correct me if I’m wrong), I’m fairly certain that you still have to manually “encode” these logic statements into something the logic engine can understand, rather than expressing them in NL itself.

 
