
Will you answer no to this question?
 
 
  [ # 31 ]

And also… unless I’m really out of date, I think you also have to give the objective, or provide the goal, for the logic engine; that is, it cannot determine that on its own. What I mean is, it cannot simply “experiment” and realize “ha, I see a contradiction”. I think this can be done with Prolog, but in a rather limited sense, and probably not with arbitrarily complex natural language statements.
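(A minimal sketch of that limitation, in Python rather than Prolog, with made-up facts: a toy rule engine will happily chain whatever rules it is given, but it only “notices” a contradiction because we explicitly hand it that objective as a test.)

```python
# Toy forward-chaining sketch (made-up facts; not Prolog or anyone's engine).
# It chains whatever rules it is given, but it only "notices" a contradiction
# because we explicitly supply that objective as a separate test.

def forward_chain(facts, rules):
    """Derive everything the rules imply; the loop has no goal of its own."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def find_contradictions(facts):
    """The objective we hand it: any fact asserted both plainly and negated."""
    return sorted(f for f in facts if "not " + f in facts)

rules = [({"p"}, "q"),            # if p then q
         ({"q", "r"}, "not s")]   # if q and r then not-s
told = {"p", "r", "s"}            # s was stated directly

print(find_contradictions(forward_chain(told, rules)))   # ['s']
```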

 

 
  [ # 32 ]
Victor Shulist - Mar 18, 2011:

What I mean is, it cannot simply “experiment” and realize “ha, I see a contradiction”.

Indeed, for that to happen the AI has to comprehend what a ‘contradiction’ is. So it has to have a larger ‘frame of reference’ that gives meaning to ‘contradiction’. This goes directly to the ‘symbol grounding problem’, and it is specifically what I intend to solve with my core-concepts model.

The only other way to do it is, as you say, to write the logic into code. The only (big) problem with that approach is of course that the AI will not be able to learn new things in relation to this code without the developer writing that new logic into code as well.

The way I see it, ‘logic’ is an emergent property, not something that is in our base system. E.g. some people cannot see the logic in a certain statement but others can. However, if someone doesn’t see the logic, that person can actually learn to see it. Ergo, logic is a learned property.

Some info: http://en.wikipedia.org/wiki/Symbol_grounding

 

 
  [ # 33 ]

Yes, I do see your point of view regarding providing the AI with the logic.

In my model, there is a very heavy dependency on the data (the ontology). The ontology is what gives meaning to some parse trees. Parse trees that ONLY contain grammar knowledge are basically discarded: unless the bot sees a chain of semantic connections between the words of a given parse tree, that parse tree is ignored.

Later the bot will update its ontology KB from conversations.
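(Roughly, that filtering could look like the sketch below; the ontology entries and word lists are invented for illustration, not taken from CLUES. A parse is kept only when its content words are connected through the ontology.)

```python
# Rough sketch of the parse-filtering idea; the ontology entries and the
# word lists are invented for illustration, not taken from CLUES.

from collections import deque

# Toy ontology as an undirected graph of semantic relations.
ontology = {
    "dog":    {"animal", "bark"},
    "bark":   {"dog", "sound"},
    "animal": {"dog"},
    "sound":  {"bark"},
    "purple": {"color"},
    "color":  {"purple"},
}

def semantically_connected(words, graph):
    """Keep a parse only if its content words reach each other via the ontology."""
    known = [w for w in words if w in graph]
    if len(known) < 2:
        return bool(known)
    seen, queue = {known[0]}, deque([known[0]])
    while queue:
        for neighbour in graph.get(queue.popleft(), ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return all(w in seen for w in known)

print(semantically_connected(["dog", "bark"], ontology))    # True  -> keep parse
print(semantically_connected(["dog", "purple"], ontology))  # False -> discard parse
```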

Now, regarding your point about not hard-coding logic: I do agree with this. I had mentioned in another thread that, to me, there seem to be two “levels” of “logic” at work, which are completely independent of each other.

I call them ‘executive logic’ and ‘knowledge logic’.

The knowledge logic is something like telling a child, “If you touch the stove element when it is red, you will hurt yourself.”

Now, this (knowledge) logic does NOT tell the child what to do. What I mean is, when the child takes that input in, it does not actually TAKE CONTROL and totally dictate what the child actually does.

Now, executive logic is what makes the bot ‘tick’. Executive logic takes all inputs, considers their semantics, and matches them up, correlates them. This correlation is done using even more detailed knowledge logic.

For example: a stove element, when red, is hot. Touching the stove element with a finger means heat would be coming into contact with human flesh. Even more detailed (knowledge) logic says ‘excess heat (perhaps within a specified temperature range) applied to human skin can damage it’. Damage to human skin means injury, injury means pain, and pain means ‘hurt yourself’, etc.
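(Here is that chain written out purely as data, with invented predicate strings rather than anything from CLUES: the knowledge logic lives entirely in the rule list, and the little function that chains it knows nothing about stoves.)

```python
# The stove chain written purely as data (invented predicate strings,
# not CLUES code); the chaining function knows nothing about stoves.

knowledge = [
    ({"element is red"},                       "element is hot"),
    ({"touch element with finger"},            "heat contacts skin"),
    ({"element is hot", "heat contacts skin"}, "skin is damaged"),
    ({"skin is damaged"},                      "injury"),
    ({"injury"},                               "pain"),
    ({"pain"},                                 "you hurt yourself"),
]

def derive(facts, rules):
    """Chain the knowledge rules until nothing new follows."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

told = {"element is red", "touch element with finger"}
print("you hurt yourself" in derive(told, knowledge))   # True
```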

Now, it is important, very important, to understand that in CLUES this (executive) logic makes no assumptions: it absolutely doesn’t care about the form of the (knowledge) logic or what it is saying. Executive logic only gives the bot the means to ‘thought experiment’ with these (knowledge) logics.

The bot would EXECUTE; that is, what the bot does has nothing to do with the (knowledge) logic. The (executive) logic allows it to experiment by combining these (knowledge) logics, which could end up rejecting one of them with:

“Well, if that rule were true, then given fact1 & fact2 it would mean fact3… which says the opposite of fact4.”

where fact1,2 & 4 were facts you directly told it.  And fact2 is the output from the rule (ie argument) provided to it.

So the “rules” are not hard-coded logic in the engine; they are rules that it can acquire by communicating with it in natural language.
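(A toy sketch of that thought experiment, again with invented names, and taking fact3 as the rule’s output as per the correction in the next post: the executive part provisionally adopts a rule stated in conversation, checks what it would imply from the directly told facts, and rejects it if the implication contradicts another told fact.)

```python
# Toy sketch of the executive 'thought experiment' (invented representation).
# fact1, fact2 and fact4 were told directly; the candidate rule yields fact3.

def implied(facts, rule):
    """Return the rule's conclusion if all of its premises are among the facts."""
    premises, conclusion = rule
    return conclusion if premises <= facts else None

def contradicts(a, b):
    """Very crude negation test on string-encoded facts."""
    return a == "not " + b or b == "not " + a

told_facts = {"fact1", "fact2", "not fact3"}    # 'not fact3' plays the role of fact4
candidate_rule = ({"fact1", "fact2"}, "fact3")  # a rule acquired in conversation

fact3 = implied(told_facts, candidate_rule)
if fact3 and any(contradicts(fact3, f) for f in told_facts):
    print("Rejecting rule: given fact1 & fact2 it would mean", fact3,
          "which says the opposite of a fact I was told directly.")
```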

 

 
  [ # 34 ]

Correction: in the above, it should be…

where fact1,2 & 4 were facts you directly told it.  And fact3 is the output from the rule (ie argument) provided to it.

Erwin… we need an Edit button.

 

 
  [ # 35 ]
Hans Peter Willems - Mar 18, 2011:

Some info: http://en.wikipedia.org/wiki/Symbol_grounding

Yes… isn’t that a fascinating read?!

 

 
  [ # 36 ]

Just another comment there… so I, too, want to give the bot the smallest amount of Executive Logic required.

Just enough to bootstrap it smile

 

 
  [ # 37 ]

USER: Will you answer no to this question?
AI: I can’t answer that.

 

 
  [ # 38 ]

USER: “The following sentence is true.  The previous sentence is false.”
AI: Duh. Whatever.

(I guess the bot could have a little attitude towards these things… They seem to be in the same vein as cussing at the bot for no reason, or spouting jabberwocky word salad. “Just wanted to see what the bot would do…” And then there’s “jumbo shrimp”, “found missing”, “unbiased opinion”, “act naturally”, “same difference”, “constant variable”, “clearly confused” or “seriously funny”, etc.)

 

 
  [ # 39 ]
Gary Dubuque - Mar 19, 2011:

USER: Will you answer no to this question?
AI: I can’t answer that.

USER: Will you answer no to this question?
AI: I can’t answer that.
USER: You just did.

 

 
  [ # 40 ]

USER: Will you answer no to this question?

AI: I can’t answer that.

USER: You just did.

AI: Ah, true enough.

USER: And did you answer ‘no’ to it?

AI: No.

USER: So shouldn’t you have answered ‘no’?

AI: No, because then I’d actually be answering ‘no’, yet also be saying ‘no, I’m not going to answer no to this question’, which would be contradictory!
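(The bot’s check in that last line can be sketched as a tiny consistency test over the two possible answers; this is a toy model of the reasoning, not anyone’s actual bot code.)

```python
# Toy consistency check for "Will you answer no to this question?";
# a sketch of the reasoning above, not any bot's actual code.

def consistent(answer):
    claims_it_will_answer_no = (answer == "yes")   # what the answer asserts
    actually_answers_no      = (answer == "no")    # what the answer does
    return claims_it_will_answer_no == actually_answers_no

for answer in ("yes", "no"):
    verdict = "consistent" if consistent(answer) else "self-contradictory"
    print(answer, "->", verdict)
# yes -> self-contradictory
# no  -> self-contradictory
# ...so the only safe reply is something like "I can't answer that."
```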

 

 
  [ # 41 ]

This discussion reminds me of Lewis Carroll’s “Through the Looking Glass” and the movie Labyrinth, both of which have sections that deal with a similar type of question.

Also, I personally get a lot of the following dialog (and yet, nobody seems to have learned from them):

Sis: I have a question.
Me: I have an answer.
Sis: Can you help me move some furniture tomorrow?
Me: Purple!
Sis: What?
Me: I never said that the answer would match the question.
Sis: (enter colorful language of your choice)!

Yes, the above conversation did, indeed, occur. SEVERAL times. And yes, I’m a smarty-butt. smile

 

 
  [ # 42 ]
Dave Morton - Mar 19, 2011:
Sis: I have a question.
Me: I have an answer.
Sis: Can you help me move some furniture tomorrow?
Me: Purple!
Sis: What?
Me: I never said that the answer would match the question.
Sis: (enter colorful language of your choice)! 

That would be an interesting little project to add to Morti.

 

 
  [ # 43 ]

I’m tempted, but I think I’ll wait till after I get back from my road trip at the end of this month. I don’t want to p1$$ off the judges for the CBC. raspberry

 

 
  [ # 44 ]

http://subbot.org/essays/liar/lie.html is my attempt to talk to my bot about the liar’s paradox.

“To get my bots to answer both yes and no to the question, “is this statement false?”, I found I needed four propositions and an if-then rule (which implies an understanding of material implication). Modus Ponens and the Reductio Ad Absurdum method of proof are also necessary; these are built into the programs.”
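(A rough rendering of that recipe, using my own toy encoding rather than the subbot.org program: assuming either truth value for “this statement is false” forces the opposite value, so a reductio is available in both directions and the bot can defend answering yes as well as no.)

```python
# Rough sketch of the reductio argument for "is this statement false?"
# (my own toy encoding, not the subbot.org program itself).

def forced_value(assumed):
    """The sentence asserts its own falsehood, so assuming it true forces
    it to be false, and assuming it false forces it to be true."""
    return not assumed

for assumption in (True, False):
    if forced_value(assumption) != assumption:
        print(f"Assume the statement is {assumption}: it would then have to be "
              f"{not assumption} -- a contradiction, so by reductio it is {not assumption}.")

# Both assumptions refute themselves, which is why the bot can defend
# answering yes as well as no.
```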

 
