
Anaphoric (backward reference)
 
 

Ok, so everybody likes Wikipedia, so we’ll start with the URL

http://en.wikipedia.org/wiki/Anaphora_(linguistics)

to save you one step of googling it. Oh, chatbots.org doesn't know "googling" (or "chatbots.org", lol; Erwin/Dave, update your word database).

OK… So, to avoid trouble, I have decided to open each topic on a VERY specific issue.

Today I am finally at the point of coding the logic to handle anaphoric references. I am first tackling the 'back reference' kind.

Example

1. "We gave the monkeys the bananas because they were hungry"

2. "We gave the monkeys the bananas because they would have gone to waste anyway"

So in #1, 'they' refers to the monkeys (obvious to you and me), and in #2, 'they' refers to the bananas.

But that is a more advanced example.

The kind I am writing the semantic rules for today in my bot is:

3. Jack stopped by because he owes me money.

4.  Jack called Tom because he owes me money.

In #3, when asked "Who owes me money?", it is obvious: "he" maps back to "Jack". But in #4, "he" is ambiguous.

Right now, I am tackling this as follows:

When asked "What did Jack do?", the bot searches all facts to find one where the subject = Jack. So the connection is made with the subject of the *main clause*. Then, we know from the parse tree that we also have a subordinate clause, "he owes me money".

Now, the way I'm doing this (and I am interested in how all of you are doing it, because of the vast differences in our approaches) is:

The bot sees a subordinate clause and checks its subject. The subordinate clause's subject is "he". It then gets the subject of the main clause, "Jack", and checks its database to see if that represents a male name. (The same is true for "she": it then checks whether the main-clause subject is a female name.) If that checks out, it goes on to the next step.

The next step—and this is where you have to watch—we can’t just jump to conclusions here.  We can’t just map ‘he’ to ‘Jack’. 

The reason being: what if there is a prepositional phrase, or perhaps a predicate noun, containing a person's name, such as in #4 above?

So the bot scans the main clause and asks itself: do I see any references to people's names (other than, of course, the subject)?

In #3 above, it sees none (nothing other than the subject is a person's name).

Thus, it is obvious. We may map 'he' to 'Jack'.

In #4, the 'scan' routine comes back with the node in the parse tree for 'Tom'. Since a non-empty list is returned, it means "Yes, there are other people mentioned in the main clause". Thus, the bot is confused. BUT, the good thing is, it knows it is confused. It knows it must ask: "OK, who owes you money, Jack or Tom?"
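In rough Python terms, the rule above might look something like this. It's a simplified sketch only, not actual bot code: the clause dictionaries and the hard-coded MALE_NAMES / FEMALE_NAMES / PERSON_NAMES sets stand in for a real parse tree and name database.

MALE_NAMES = {"Jack", "Tom"}
FEMALE_NAMES = {"Grace"}
PERSON_NAMES = MALE_NAMES | FEMALE_NAMES

def resolve_back_reference(main_clause, sub_clause):
    """Map a pronoun subject of the subordinate clause back to the main-clause
    subject, or return None when the bot should ask a clarifying question."""
    pronoun = sub_clause["subject"]        # e.g. "he"
    candidate = main_clause["subject"]     # e.g. "Jack"

    # Step 1: gender check on the main-clause subject.
    if pronoun == "he" and candidate not in MALE_NAMES:
        return None
    if pronoun == "she" and candidate not in FEMALE_NAMES:
        return None

    # Step 2: scan the rest of the main clause (objects, prepositional
    # phrases, predicate nouns, ...) for other person names.
    others = [w for w in main_clause["other_words"]
              if w in PERSON_NAMES and w != candidate]

    if not others:
        return candidate                   # unambiguous: map 'he' -> 'Jack'

    # Ambiguous, as in #4: the bot knows it is confused and must ask.
    print("OK, who do you mean, %s?" % " or ".join([candidate] + others))
    return None

# Example #3: "Jack stopped by because he owes me money."  -> "Jack"
resolve_back_reference({"subject": "Jack", "other_words": ["stopped", "by"]},
                       {"subject": "he"})

# Example #4: "Jack called Tom because he owes me money."  -> asks the user
resolve_back_reference({"subject": "Jack", "other_words": ["called", "Tom"]},
                       {"subject": "he"})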

What is everyone else’s approach?

 

 
  [ # 1 ]

In my model (no software yet) the AI will scan for previous phrases in similar contexts (= experience) and make an educated guess based on that (= an assumption). It might get it wrong, exactly like a human might get it wrong. If the found 'assumption' is the wrong one, then it leads to further discussion of the topic: learning new facts and attaining new experiences. Through this process the AI will gain new insights that might help next time it is faced with a similar input.
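One toy way to picture that guess-from-experience idea (purely an illustration of the paragraph above, not Hans's actual design; the word-overlap similarity, the flat experience list, and the 0.3 threshold are all assumptions):

def similarity(context_a, context_b):
    """Crude context similarity: word overlap between two phrases."""
    a, b = set(context_a.lower().split()), set(context_b.lower().split())
    return len(a & b) / max(len(a | b), 1)

experience = []   # remembered (context, pronoun, referent) triples

def guess_referent(context, pronoun):
    """Make an educated guess from the most similar past context, if any."""
    scored = [(similarity(context, c), r)
              for (c, p, r) in experience if p == pronoun]
    if not scored:
        return None
    score, referent = max(scored)
    return referent if score > 0.3 else None   # threshold is arbitrary

def learn(context, pronoun, referent):
    """A correction from further discussion becomes new experience."""
    experience.append((context, pronoun, referent))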

 

 
  [ # 2 ]

No pronoun handling yet. I'm not going to bother until I have a (domain-specific) knowledge base built up, because the pronoun-identifying method will rely largely on factual knowledge about what objects can/can't do and what they've done so far in the text. The trouble is, too often pronouns aren't identified within the sentence in which they are used. In example 4, it may very well be George who owes you money, and Jack and Tom are conferring to figure out what to do about it!

However, I think your approach of handling pronouns within a sentence will be a necessary part of any such scheme.

 

 
  [ # 3 ]
C R Hunt - Feb 12, 2011:

In example 4, it may very well be George who owes you money, and Jack and Tom are conferring to figure out what to do about it!

Yes, for example #4, Grace won't waste her time; she'll think, the heck with it, and just ask!

@Hans - yes, the most important abilities for a bot are: a) knowing it is confused (the fact that it gets confused is no problem; humans do all the time), b) asking a question, and c) (the hard part) understanding that response and getting 'un-confused'.

Of course, yes, this can lead to any depth of recursion: asking a question about the clarifying question, and a question about the answer to that question... to infinity.

Except I think it won't go to infinity. I want my bot to keep asking, and learning. Just like with a child, it will take a LOT of training!!
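As a sketch of that ask-and-get-un-confused loop (just an illustration of the idea above, not Grace's code; the ask_user callback and the candidate list are made-up stand-ins, and the depth limit is what keeps it from actually going to infinity):

def resolve_or_ask(candidates, ask_user, max_questions=3):
    """Return the single referent, asking clarifying questions when confused.
    The depth limit keeps the recursion from going on forever."""
    if len(candidates) == 1:
        return candidates[0]                  # not confused
    if max_questions == 0:
        return None                           # give up gracefully
    answer = ask_user("Who do you mean, %s? " % " or ".join(candidates))
    narrowed = [c for c in candidates if c.lower() in answer.lower()]
    # If the answer itself was unclear, ask again, one level deeper.
    return resolve_or_ask(narrowed or candidates, ask_user, max_questions - 1)

# Example: resolve_or_ask(["Jack", "Tom"], input) prints the question and
# reads the user's answer from the console.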

 

 
  [ # 4 ]

I’m not yet handling all of them, only in some situations. But the general approach is always the same:
1. I delay resolving the pronouns as late as possible. This saves me processing time: if a parse path was invalid from a grammatical point of view, I don't even bother trying to resolve the pronouns. Instead, all pronouns are handled in the semantic stage.
2. When there is a previous question, that gets precedence at the moment in my system.
Ex: Question = "What is your name?" Answer = "That is Jan." ('that' will be replaced with 'you'.)
3. If there are multiple candidates to replace the pronoun, I do a split, so that it appears again as if there is only one (keeps the code simple); by weighting each path against 'known' things, one of those paths takes the lead and will eventually be executed (a rough sketch of that idea follows below).
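A possible reading of that split-and-weight step, in very rough Python (an illustration only, not the actual system; the path dictionaries and the scoring callback are assumptions):

def split_and_weight(parse_path, pronoun, candidates, score_against_known):
    """Clone the parse path once per candidate referent, weight each clone
    against what is already known, and return them best-first; the first
    path is the one that would eventually be executed."""
    weighted = []
    for candidate in candidates:
        resolved = dict(parse_path)           # one path per candidate
        resolved[pronoun] = candidate         # as if there were no pronoun
        weighted.append((score_against_known(resolved), resolved))
    weighted.sort(key=lambda pair: pair[0], reverse=True)
    return [path for _, path in weighted]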

 

 
  [ # 5 ]

I actually don't resolve pronouns at all during *storage* of the fact. That is, "he" or "she" stays as is when the statement is stored. The bot only worries about it during questioning.
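In other words, something along these lines (a toy sketch of that store-now/resolve-later idea, not the actual implementation; the fact format and the resolver callback are made up):

facts = []

def store_fact(subject, verb, obj):
    """Statements are stored verbatim, pronouns and all."""
    facts.append({"subject": subject, "verb": verb, "object": obj})

def answer_who(verb, obj, resolve_pronoun):
    """Only at question time do we try to resolve 'he'/'she'."""
    for fact in facts:
        if fact["verb"] == verb and fact["object"] == obj:
            subject = fact["subject"]
            if subject in ("he", "she"):
                subject = resolve_pronoun(fact) or "someone (not sure who)"
            return subject
    return None

store_fact("he", "owes", "money")                        # 'he' stored untouched
print(answer_who("owes", "money", lambda fact: "Jack"))  # resolved on demand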

 

 
  [ # 6 ]

The problem with a bot "knowing" what Jack did is that a bot is totally objective and has only the memories, and no opinions on Jack. A human would think: Wow Jack is a loser for calling Tom when he owes you money. I don't care why he fed the monkeys; he should not have wasted his money on all those bananas, etc.
The perfect solution is to be able to identify Jack as the Jack you talk about, the one that owes money, and to be able to add opinions about the character of Jack.

 

 
  [ # 7 ]

Yes, your comments are valid. However, I believe EVERYTHING possible should be taken into consideration, from the lowest level to the highest level.

So yes, Grace will consider higher abstractions like the ones you spoke of. 

But one thing at a time. smile

Patti Roberts - Feb 13, 2011:

A human would think: Wow Jack is a loser for calling Tom when he owes you money.

Not if it was well known that Tom owes Jack a lot of money.

Also, the point here really is that the bot realizes it has to resolve a backward anaphoric reference. Just the realization is important, and, after applying other reasoning (the "Tom is a loser" reasoning, for example), if it cannot resolve it, it should know to ask.

For right now, I have it working so that if Grace sees no other name in the main clause, then she knows for sure she can map "he" back to Jack, as in "Jack went to his closet and took out his new suit".

If she DOES see another name (and it is not a female name, if we're trying to map "he"), then she knows she has to set out on a quest to figure out who 'he' is.

Now I know a lot of times people won’t bother with going on this ‘quest’ ... Most people would just ask.

 

 
  [ # 8 ]

I think that is where the actual personality of the bot, or human, would come in. Humans don't wait to find out if Tom owes Jack before they pass judgement. Some humans might feel sorry for Jack being broke and in debt; others condemn him for being a loser.
I wonder if it is necessary in AI for a bot to be more subjective and rational in its reasoning than a human. Would a human ask "Is this the same Jack that owes you money?" Perhaps the bot could just jump to conclusions. You could just map 'he' to 'Jack' until corrected. What if the bot just jumped on the user for being owed money?

 

 
  [ # 9 ]
Patti Roberts - Feb 13, 2011:

I think that is where the actual personality of the bot, or human, would come in.

Oh, you are BANG ON the mark there!!! Yes, I highly agree with that. I know some people who, no matter how clear it seems that someone did something (intentionally) wrong, don't pass judgement until they are 100% sure, whereas I myself did. I try to be as objective as I can, but I'm human.

Yes, this is where I think it will be a very nice experience to chat with a bot (I mean one that can really understand your input). I want Grace to behave exactly like Mr. Data.

 

 
  [ # 10 ]
Patti Roberts - Feb 13, 2011:

I wonder if it is necessary in AI for a bot to be more subjective and rational in its reasoning than a human. Would a human ask "Is this the same Jack that owes you money?" Perhaps the bot could just jump to conclusions.

Patti, this is exactly where I'm going with my model. Not to be snotty here, but I think many AI researchers are hung up on making AI much more like a 'perfect human' than 'just like a human'. In my perception it is completely acceptable that the AI doesn't know 'everything' and has a 'narrow view' on things.

 

 
  [ # 11 ]
Hans Peter Willems - Feb 13, 2011:

Patti, this is exactly where I'm going with my model. Not to be snotty here, but I think many AI researchers are hung up on making AI much more like a 'perfect human' than 'just like a human'.

 

I'm not hung up on making mine at all like a human. Understanding humans, sure, but not thinking or behaving like humans.

Hans Peter Willems - Feb 13, 2011:

In my perception it is completely acceptable that the AI doesn’t know ‘everything’ and has a ‘narrow view’ on things.

Exactly.  It must start out that way.  All that matters is the ability to learn, and expand its knowledge.

 

 
  [ # 12 ]

Actually, from birth to death, all humans have a narrow view. If you consider everything there is to know in the universe smile a VERY narrow view.

I imagine that, compared to all that is knowable, the increase in knowledge from birth to death is very, very small.

 

 
  [ # 13 ]

I have to admit that, as a human, I have biases and prejudices. Take the case of "Jack": the only Jack I know is a loser, and the name sends chills down my spine. I automatically attach a negative connotation to the name Jack until I get new info. Yet a bot is penalized if it starts having the same assumptions and preconceived notions as a human.
Most people seem to want a chat bot that is something like “Data” from Star Trek with a complete knowledge of the universe
I am always amused when someone asks my bot a question like "What is String Theory M?" After my bot's response, they say
“your stupid”
On a good day he will remind them it's "you're" wink

 

 
  [ # 14 ]
Patti Roberts - Feb 13, 2011:

Most people seem to want a chat bot that is something like “Data” from Star Trek with a complete knowledge of the universe

I think when we work from the premise that the AI has to learn things to be able to 'understand' (in a human fashion) what it is learning, by constantly deducing new concepts from the 'knowledge' that is being fed to it, the AI will eventually speed up its learning process through use of its data-interface and become something 'like' Data.

I can see how I have to 'teach' the 'basics of life' (or the AI equivalent of that) up to a point where I can just start feeding 'information' through a data-connection. The AI will not simply 'store' that information but will use it in a reasoning process against its already stored 'knowledge' to actually 'learn' from it.

The funny thing in your reference to Data from Star Trek is that, iirc, he also didn't just absorb information by storing it, but clearly went through a process of 'digesting the information' before giving his analysis of a situation or the answer to a complex question. He also didn't have every piece of information available inside his brain, and on several occasions had to 'download' information for processing smile

 

 
  [ # 15 ]

you’re right in your interpretation smile

 
