AI Zone Forum
GRACE/CLUES
 
 

A small copy-and-paste of the first I/O from the project:

victor@r2d2:/clues$ ./clues

GRACE version 0.1 
(on CLUES version 0.8)

<user says> What did Jack do

<Grace says> Sorry, I wasn't able to come up with an answer to that question.

<user says> Jack went to his closet and took out his new suit

Yes, I believe I understand!

<user says> What did Jack do

<Grace says> jack went to his closet and took out his new suit

<user says> What did Jefferson Davis Hogg do

<Grace says> Sorry, I wasn't able to come up with an answer to that question.

<user says> Jefferson davis hogg went to his closet and took out his new suit

Yes, I believe I understand!

<user says> what did jefferson davis hogg do

<Grace says> jefferson davis hogg went to his closet and took out his new suit

<user says>

*Note 1 - this very first sample I/O does not illustrate the power of the engine, since a very simple script with regular expressions could do this.

*Note 2 - the above is NOT using simple templates/pattern matching.  Believe it or not, the bot has a complete understanding of what was said by first generating many parse trees… for *BOTH* the ‘fact’ *and* the question.

Among the many parse trees, the 'fact' parse tree Grace chose was:

pos = simple-sentence
subject.num-noun = 1
subject.noun.1.val = jack
num-predicate = 2
predicate.1.num-verb = 1
predicate.1.verb.1.val = went
predicate.1.verb.1.num-prep-phrase = 1
predicate.1.verb.1.prep-phrase.1.num-prep = 1
predicate.1.verb.1.prep-phrase.1.prep.1.val = to
predicate.1.verb.1.prep-phrase.1.num-noun = 1
predicate.1.verb.1.prep-phrase.1.noun.1.val = closet
predicate.1.verb.1.prep-phrase.1.noun.1.num-adjective = 1
predicate.1.verb.1.prep-phrase.1.noun.1.adjective.1.val = his
predicate.2.num-verb = 1
predicate.2.verb.1.val = took
predicate.2.verb.1.num-prep-phrase = 1
predicate.2.verb.1.prep-phrase.1.num-prep = 1
predicate.2.verb.1.prep-phrase.1.prep.1.val = out
predicate.2.verb.1.prep-phrase.1.num-noun = 1
predicate.2.verb.1.prep-phrase.1.noun.1.val = suit
predicate.2.verb.1.prep-phrase.1.noun.1.num-adjective = 2
predicate.2.verb.1.prep-phrase.1.noun.1.adjective.2.val = new
predicate.2.verb.1.prep-phrase.1.noun.1.adjective.1.val = his
predicate-list-type = and

For the question input, Grace also generates parse trees and, again using semantic inference, decides which one the user really meant:

pos = simple-sentence
num-question = 1
question.1.val = what
subject.noun.1.val = jack
subject.num-noun = 1
predicate.1.num-verb = 1
predicate.1.verb.1.val = do
num-predicate = 1
predicate.1.verb.1.num-auxiliary-verb = 1
predicate.1.verb.1.auxiliary-verb.1.val = did

So with the "QPT" (question parse tree), Grace goes in and tries to find an "FPT" (fact parse tree) which can answer the question.

In the above, Grace matches the subject in the FPT (subject.noun.1.val, which is "jack") with the subject mentioned in the QPT.
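To make the matching step concrete, here is a minimal sketch (not the actual CLUES code, which is C++) that treats the flat dotted-key trees shown above as Python dictionaries and checks whether a fact tree can answer a "what did X do" question tree. The function names and the exact matching rule are my own assumptions:

```python
def subject_of(tree):
    """Return the head noun of the subject, e.g. 'jack'."""
    return tree.get("subject.noun.1.val")

def answers(fpt, qpt):
    """A fact tree can answer a 'what did X do' question tree when
    the subjects match and the question's main verb is 'do'."""
    return (subject_of(fpt) == subject_of(qpt)
            and qpt.get("predicate.1.verb.1.val") == "do")

# Abbreviated versions of the two trees shown in this post
fpt = {
    "pos": "simple-sentence",
    "subject.noun.1.val": "jack",
    "predicate.1.verb.1.val": "went",
    "predicate.2.verb.1.val": "took",
}
qpt = {
    "pos": "simple-sentence",
    "question.1.val": "what",
    "subject.noun.1.val": "jack",
    "predicate.1.verb.1.val": "do",
    "predicate.1.verb.1.auxiliary-verb.1.val": "did",
}

print(answers(fpt, qpt))  # -> True
```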

Since this is all done with FULL grammatical and semantic inference, and not by simple pattern matching, the system will evolve and learn to cope with extremely complex sentences and ambiguity.

For example, in the above, note that Grace knows there is one question, but we may ask : “Who or why did….” (in that case, num-question would be 2).

Next, I will be working on tests like:

“Jack went to his closet and took out his new suit because he was going to a dance given by his company”

and ask: 

Where was jack going?
Did jack take out his new suit? why?
Where did jack go?
Did jack go to a dance?
Has jack ever been at a dance?

Stay tuned! Much work ahead of me!

 

 
  [ # 1 ]

Awesome work! Look forward to seeing more.

 

 
  [ # 2 ]

Thanks Garrett… I will probably put in some really long hours this weekend... I'm very anxious to find out how it fares with really complicated sentences and ambiguity, like the examples I gave above. Crossing my fingers! So far, so good!

 

 
  [ # 3 ]

I love semantics. Keep us posted, Vic, and let us know when a URL is available to test GRACE.

 

 
  [ # 4 ]

Absolutely, Erwin. I can't make any promises about when a URL will be ready; it will be a while.

It is just that I would have to take time out of core development to do web GUI work. I may decide to use some pre-built software that could interface to Grace's C++ core, perhaps by having Grace listen on a TCP port, or by exchanging SOAP messages. There must be some framework available that has all that web GUI work already done.
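As a rough illustration of the TCP option, a line-based wrapper might look like the sketch below (Python used for brevity; `grace_answer()` is a stand-in for the real C++ core, and the port/protocol details are invented):

```python
import socket
import socketserver
import threading

def grace_answer(text):
    """Stand-in for the real engine: echo the input back."""
    return "you said: " + text

class GraceHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One request per line; reply with one line per request.
        for raw in self.rfile:
            reply = grace_answer(raw.decode().strip())
            self.wfile.write((reply + "\n").encode())

# Demo: start on an ephemeral port, talk to it once, then shut down.
server = socketserver.TCPServer(("127.0.0.1", 0), GraceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
with socket.create_connection(server.server_address) as conn:
    conn.sendall(b"what did jack do\n")
    print(conn.makefile().readline().strip())  # -> you said: what did jack do
server.shutdown()
```

A web front end would then only need to open a socket to this port, which keeps the GUI completely separate from the core.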

 

 
  [ # 5 ]

There must be some framework available that has all that web GUI work already done.

I’d also be interested in that.

 

 
  [ # 6 ]

Very cool, Victor. smile

Out of curiosity, how many parse trees did CLUES develop for each input? Did the correct parse tree rank first? Was the ranking due purely to hard-coded rules, or did CLUES call on experience with other sentences that had the same grammatical structure? If it called on previous experience, how varied is that experience? (That is, how easily could CLUES be led astray by similar inputs with different parses? This might not be applicable to the example you gave.)

 

 
  [ # 7 ]

For

    “Jack went to his closet and took out his new suit”

it generated 2 trees,

but for

    “Jack went to his closet and took out his new suit because he was going to a dance given by his company.”

it generated 4.

*BUT* that is only because I have just half of the PTGENs (parse tree generators) in this new V3 (the C++ version of the core engine). V1 and V2 (versions 1 and 2 of this engine, written in Perl) have 110 rules, and those two prior versions, since they have many more PTGENs, generate some 862 trees for

    “Jack went to his closet and took out his new suit because he was going to a dance given by his company.”

Yes, the correct tree for the FPT (fact parse tree) and QPT (question parse tree) DID successfully rank first. No, the semantic rules are not learned; they are explicitly provided. Later, Grace will learn things via NL conversations, which will let her update the information used to filter out parse trees, so the data will be dynamic; but for the foreseeable future the logic will be manually created. When I get to that point, I will decide whether it is really even necessary to allow Grace to update her own logic.
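The hand-written ranking idea could be sketched like this (the rules and scores here are invented for illustration; the real semantic rules live in the engine):

```python
def score(tree):
    """Score a candidate parse tree with explicit, hand-written rules."""
    s = 0
    if tree.get("pos") == "simple-sentence":
        s += 2          # prefer a complete simple-sentence analysis
    if "subject.noun.1.val" in tree:
        s += 1          # prefer trees with an explicit subject
    return s

# Two hypothetical candidate trees for the same input
candidates = [
    {"pos": "fragment"},
    {"pos": "simple-sentence", "subject.noun.1.val": "jack"},
]

best = max(candidates, key=score)
print(best["subject.noun.1.val"])  # -> jack
```

Because the rules are explicit functions rather than learned weights, the ranking stays deterministic and debuggable, which matches the "manually created logic" approach described above.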

I know this much: she will come up with her own combination of IFLOs. Basically, after she understands what you meant (by selecting the proper parse tree), she determines what script to run (think of an IFLO as "domain-specific logic and data"). So she first knows what you are talking about (and knows that she knows), then she runs the script I told her to run, and that script could do anything: connect to some third-party SQL database, access the web, Google, Wikipedia, even run an external program and collect its output.

IFLO stands for "Input, Fact, Logic, Output". The <I>nput is an "I-tree": if the user's input produces a parse tree matching the I-tree of an IFLO, then, provided the given F-trees (fact trees) are satisfied, Grace runs that IFLO's <L>ogic and produces its <O>utput.

Now, if a given IFLO doesn't have all of its fact trees in the static database, she will try to deduce each missing fact by... you guessed it... the <O>utput of another IFLO. She will chain these together, effectively writing her own program.
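That chaining idea is essentially backward chaining. A minimal sketch, with invented facts and rules (the `prove()` function and the rule format are my own, not the actual IFLO representation):

```python
def prove(goal, facts, rules, depth=0):
    """Backward-chain: a goal holds if it is a known fact, or if some
    rule outputs it and all of that rule's required facts can be proved."""
    if goal in facts:
        return True
    if depth > 10:          # guard against runaway recursion
        return False
    for required, output in rules:
        if output == goal and all(
                prove(f, facts, rules, depth + 1) for f in required):
            return True
    return False

# Hypothetical static database and rules
facts = {"henry owns a suit"}
rules = [
    ({"henry owns a suit"}, "henry has clothes"),
    ({"henry has clothes"}, "henry can dress for a dance"),
]

print(prove("henry can dress for a dance", facts, rules))  # -> True
```

Each rule here plays the role of one IFLO's <O>utput filling in another IFLO's missing fact tree.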

 

 
  [ # 8 ]

* * *

Short demo showing Grace’s ability to deal with synonyms and complex sentences simultaneously :

* * *

GRACE version 0.1 (on CLUES version 0.8)

(running in mode: sqa-test)

<user says> what did henry do

<Grace says> I understood your question, even ran some code, but sorry, I wasn’t able to come up with an answer.

<user says> henry went to his closet and took out his new suit since he was going to a dance given by his company

Yes, I believe I understand!

<user says> what did henry do

<Grace says> henry went to his closet and took out his new suit (because he was going to a dance given by his company)


I can use either “because”, or “since”, or even internet lingo “cuz” (yes, bad, no such word, but people use it).

I have her place the REASON (if she knows it) in parentheses smile
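One simple way to get that synonym behavior is to normalize reason connectives to a canonical token before parsing. This sketch uses an invented lookup table, not Grace's actual mechanism:

```python
# Map every reason connective to one canonical token (illustrative table).
REASON_WORDS = {"because": "because", "since": "because", "cuz": "because"}

def normalize(sentence):
    """Replace synonym connectives with their canonical form."""
    return " ".join(REASON_WORDS.get(w, w) for w in sentence.split())

print(normalize("henry took out his suit cuz he was going to a dance"))
# -> henry took out his suit because he was going to a dance
```

After normalization, the parse tree generators only ever have to handle the canonical form.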

 

 
  [ # 9 ]

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

<user says> henry went to his closet and took out his new suit

Yes, I believe I understand!

<user says> what did henry do

<Grace says> henry went to his closet and took out his new suit

<user says> where did henry go

<Grace says> to his closet

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Here we can see proof of Grace's understanding and her ability to know what portion of the input answers the question of WHERE henry went. That is, he only went "to his closet", but when asked WHAT henry did, she replies with "henry went to his closet and took out his new suit".

Note: a simple template/regex-based system might have responded with:

Where did henry go?

to his closet and took out his new suit

because of a simple match on: henry went to (*)
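The difference between the two answers can be sketched with the flat tree keys shown earlier in the thread: a "where" question extracts only the prepositional phrase of the motion verb, while "what did X do" would regenerate the whole predicate. This is hypothetical code, not CLUES itself:

```python
# Abbreviated fact parse tree in the flat dotted-key format from this thread
fpt = {
    "subject.noun.1.val": "henry",
    "predicate.1.verb.1.val": "went",
    "predicate.1.verb.1.prep-phrase.1.prep.1.val": "to",
    "predicate.1.verb.1.prep-phrase.1.noun.1.adjective.1.val": "his",
    "predicate.1.verb.1.prep-phrase.1.noun.1.val": "closet",
}

def answer_where(tree):
    """Answer a WHERE question from just the prepositional phrase."""
    p = "predicate.1.verb.1.prep-phrase.1."
    return " ".join(tree[p + k] for k in
                    ("prep.1.val", "noun.1.adjective.1.val", "noun.1.val"))

print(answer_where(fpt))  # -> to his closet
```

Because the answer is assembled from the tree rather than sliced out of the raw string, the "and took out his new suit" clause never leaks into the WHERE answer.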

 

 
  [ # 10 ]

Victor, it looks like you are coming along quite well with your new C++ clues version.
Has it given you the speed improvement you thought you would achieve?

Here we can see proof of Grace's understanding and her ability to know what portion of the input answers the question of WHERE henry went. That is, he only went "to his closet", but when asked WHAT henry did, she replies with "henry went to his closet and took out his new suit".

Would you also get the correct answer to:

What did henry take out of his closet?
and
Did he take an old suit out of his closet?

 

 

 

 
  [ # 11 ]

The speed increase? Yes. I don't have an exact number for you right now, but here are some preliminary results:

1) old version, with 110 PTGens (parse tree generators): a 22-word input took 18 seconds

2) new version, with 50 PTGens (I haven't converted them all to the new V3 format yet): with the same 22-word input, it's done as soon as I press Enter!

Once I get the other 60 rules converted to the new V3 format, I will know for sure, but I doubt it will slow down (sure hope not).

Those other questions you mentioned… not yet, but yes, it will be able to answer them. In fact, thanks for the suggestions; I will work on the logic for it to answer those questions!

 

 
  [ # 12 ]

I have decided to discontinue posting CLUES progress updates here; further postings are available at the CLUES trueBot blog at:

http://truebot.blogspot.com/2011/02/graceclues.html

 

 
  [ # 13 ]

Ah, I wish you would continue posting here. I enjoy keeping track of your progress—CLUES has so much potential. At least post update notices?

 

 
  [ # 14 ]

Actually, that is a good idea, CR. That is exactly what I'll do.

The blog will have its purpose and this forum will have its purpose as well.

I will have the blog list ONLY conversation examples, showing the latest capabilities. So I won't really take feedback there; I only want the discussion here (on chatbots.org).

So anyone can say “Regarding the post of <date>, I think…...”

Now, I will be advertising for Erwin, which I think he'll like: if someone posts a comment on the blog, I will remove it and paste it here instead, and let that person know that they should come to chatbots.org to actually provide feedback.

This way, the example conversations and documentation about CLUES is kept separate from feedback, which I always wanted to do.

Also, once someone mentions a new "puzzle" for CLUES (a complex set of natural language statements ending in a question), I can have a section in the blog to keep track of that.

This will be well organized, and will help with our ENDLESS list of requirements for a bot that REALLY UNDERSTANDS a conversation. In due time, months or years, it will satisfy everyone's requirements smile

 

 
  [ # 15 ]

Yes, that’s a good idea. It helps to have a place with all your notes and progress collected. I’ve also built a site for keeping track of the layout of ALEX and its development. (The site isn’t online yet. So far I simply use it as my own notes. I plan to put it up once I’m done with my latest round of updates.)

 
