
PMLS Discourse Engine advances
 
 

I’d shamelessly like to use this space to report progress on the PMLS Discourse Engine. At least until an RSS feed is in place on the reference implementation page, and/or until I consider Twitter more than a toy for pubescent kids…

Ok.

* As a significant portion of users feel the urge to insult the bot (that’s what the logs say), we’ve added lots of profanity detectors. It would be a shame if the bot couldn’t understand even some of the more picturesque and flowery idioms.

* There is now a pre-“don’t know” catch-all rule which can perform a “last chance” procedural analysis. Currently, the input is just analyzed for binary-mimicking “speech” like “101010101”, for the purpose of testing this catch-all rule.

* There is a preliminary implementation of past-conversation memory retrieval, so questions like “What did I just say?” or “What did YOU just say?” can be answered. This infrastructure will also allow for detecting user copy-cat/parroting behavior and reacting accordingly.

* Some ontology extensions, growing the English ontology from a meager 60 rules to a still-meager 107. (I think 500 is the lower threshold for an interesting general-purpose discussion.)
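The “last chance” procedural analysis of the catch-all rule could look roughly like this minimal sketch (the function name and canned reply are invented for illustration, not the actual PMLS code):

```python
import re

def last_chance_analysis(text):
    """Pre-"don't know" catch-all: run cheap procedural checks on the
    input before giving up. Here: detect binary-mimicking "speech"
    such as "101010101"."""
    stripped = re.sub(r"\s+", "", text)
    # A run of at least eight 0/1 characters and nothing else looks
    # like someone "talking binary" at the bot.
    if re.fullmatch(r"[01]{8,}", stripped):
        return "Nice try, but I don't speak binary."
    return None  # fall through to the regular "don't know" answer
```

`last_chance_analysis("101010101")` returns the canned reply; ordinary input falls through as `None` so the normal “don’t know” answer can fire.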
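The past-conversation memory retrieval amounts to a bounded log of (speaker, utterance) pairs; a minimal sketch (class and method names are invented, not the PMLS API):

```python
from collections import deque

class ConversationMemory:
    """Bounded log of (speaker, text) turns, enough to answer
    'What did I just say?' and to spot user parroting."""

    def __init__(self, maxlen=50):
        self.turns = deque(maxlen=maxlen)

    def record(self, speaker, text):
        self.turns.append((speaker, text))

    def last_from(self, speaker, nth=1):
        """nth=1 is the most recent utterance by that speaker."""
        seen = 0
        for who, text in reversed(self.turns):
            if who == speaker:
                seen += 1
                if seen == nth:
                    return text
        return None

    def user_is_parroting(self):
        """True if the user's latest line merely echoes the bot's latest."""
        user = self.last_from("user")
        return user is not None and user == self.last_from("bot")
```

Note that by the time the bot answers “What did I just say?”, that question itself is the user’s most recent recorded line, so the answer is `last_from("user", 2)` — which matches the transcripts later in this thread.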

Richard

 

 
  [ # 1 ]
Richard Jelinek - Mar 13, 2010:

as a significant portion of users feels the urge to insult the bot (that’s what the logs say)

You don’t want to know what the Chatbots.org logs look like!  red face

In the upcoming release we’ll move the public erotic chatbots to a new domain, and the erotic chatbots on Chatbots.org will be hidden and shown to 18+ members only.

The reason we want to have those chatbots listed is that we believe the erotic industry will be one of the first industries to really leverage chatbots, building a separate business on them alone.

 

 
  [ # 2 ]
Richard Jelinek - Mar 13, 2010:

There is a preliminary implementation of past-conversation memory retrieval, so questions like “What did I just say?” or “What did YOU just say?” can be answered. This infrastructure will also allow for detecting user copy-cat/parroting behavior and reacting accordingly.

Wouldn’t it be possible to implement something like: do you remember our conversation about cats last week? ...

 

 
  [ # 3 ]
Erwin Van Lun - Mar 13, 2010:

You don’t want to know what the Chatbots.org logs look like!  red face

I can imagine…

The reason we want to have those chatbots listed is that we believe the erotic industry will be one of the first industries to really leverage chatbots, building a separate business on them alone.

I wouldn’t throw profanity and insults into one bucket with the erotic discourse domain, but I also think it is a good step to separate the latter from chatbots.org.

Erwin Van Lun - Mar 13, 2010:

Wouldn’t it be possible to implement something like: do you remember our conversation about cats last week? ...

Actually, the infrastructure is already there. Even better - this goal can be achieved in two different ways with our engine:

1) As every user gets their own individual instance of the chatbot (memories etc. included), the conversation is basically stored for as long as this instance “lives”. How long such an instance lives depends on its idle time, and that is a configuration variable. Currently it is one hour, but it could also be one week, month, year… So if you return to the PMLS chatbot webpage within one hour, your conversation and YOUR bot will still be waiting there for you.

2) There are store/retrieve methods implemented for the chatbot objects which basically allow for persistent storage of the chatbot’s memory. This is currently disabled; we have thought about enabling it for premium users.
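The instance-lifetime and store/retrieve mechanics of 1) and 2) can be sketched like this (class and method names are invented; the real engine’s interfaces may differ):

```python
import time

class BotInstance:
    """Per-user chatbot instance: memories live as long as the instance
    does; idle time beyond a configured limit expires it."""

    def __init__(self, idle_limit=3600):  # one hour, the current default
        self.idle_limit = idle_limit
        self.last_seen = time.time()
        self.memory = {}

    def touch(self):
        """Call on every user interaction to reset the idle clock."""
        self.last_seen = time.time()

    def expired(self, now=None):
        now = time.time() if now is None else now
        return now - self.last_seen > self.idle_limit

    # 2) store/retrieve hooks for persistent (e.g. premium) storage
    def dump(self):
        return {"memory": dict(self.memory)}

    def load(self, state):
        self.memory = dict(state["memory"])
        self.touch()
```

Raising `idle_limit` from 3600 to 604800 is exactly the “one hour vs. one week” configuration change described above.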

The only piece missing is the restriction to the temporal domain and the interval arithmetic (“last week” denotes 604800 seconds of conversation, which can be 0-604799 seconds prior to the current time IF your granularity is one second grin)
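That interval arithmetic boils down to mapping a temporal phrase onto a half-open interval of absolute timestamps; a toy illustration (not the temporal-reasoning suite itself):

```python
import time

# Durations of a couple of temporal phrases, in seconds.
PHRASES = {
    "last hour": 3600,
    "last week": 7 * 24 * 3600,  # 604800 seconds
}

def interval_for(phrase, now=None):
    """Map e.g. 'last week' to the interval [now - 604800, now); at
    one-second granularity that covers offsets 0..604799 before now."""
    now = time.time() if now is None else now
    span = PHRASES[phrase]
    return (now - span, now)
```

A stored memory with timestamp `t` then matches the phrase iff `start <= t < end`.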

Our language processing suite has temporal and spatial reasoning, so putting it all together is about 1-2 man-days of work. Thanks for the suggestion, Erwin - I just created a ticket in our RCS.

...

(the transcript below reads bottom-up - the discussion starts at the bottom; note the jou/you typo)

User:    What did I say 4 sentences ago?
Bot:    I think something like “What did I just say?”
User:    What did you just say?
Bot:    Well - “Well - “I’m sorry, but I don’t understand.”“
User:    What did you just say?
Bot:    Well - “I’m sorry, but I don’t understand.”
User:    What did jou just say?
Bot:    I’m sorry, but I don’t understand.
User:    What did I just say?
Bot:    I think something like “What did I just say?”
User:    What did I just say?
Bot:    I don’t have these memories.


Hehe.

Richard

 

 
  [ # 4 ]

grin

It would also be nice if the bot showed emotions related to how long you haven’t spoken to it!

Where have you been!? I’ve missed you. We’d spoken for weeks and suddenly you were gone.

OR

You’re early. You said you would go away for an hour or so, but that was five minutes ago. Anyway, glad you’re back!

 

 
  [ # 5 ]

Well - there’s a long way to go to emulate a human being. Which is actually not what we’re trying to achieve. The reason we’re not aspiring to a Loebner Prize (or any other Turing-test contest there might be): our bot doesn’t pretend. Neither that it is human, nor that it knows something when it doesn’t.

Which of course means many people consider it dumb, because they read “I don’t understand” very often.

149 rules…

User:    Am I a woman?
Bot:    Your’re a man.
User:    I’m a male
Bot:    Ok, a man. I will remember that.
User:    How old am I
Bot:    Your age is 39 (years).
User:    I’m Richard
Bot:    Hi Richard, nice to meet you.
User:    Am I a man or a woman?
Bot:    I think I don’t know yet. Are you a man or a woman?
User:    I’m 39
Bot:    Years - I assume. I will remember your age.
User:    What’s my name?
Bot:    I think I don’t know yet. Will you tell me your name?
User:    Where is Aaron?
Bot:    I don’t know where Aaron is.
User:    Hi
Bot:    Hello. My name is Andreas.

And the usual tidbits:

User:    wow
Bot:    Impressed?
User:    grin
Bot:    Nice to see you’re happy.
User:    What did you just say?
Bot:    Well - “One translation is ปารีส.”
User:    how do you say paris in thai
Bot:    One translation is ปารีส.

User age is not just a number:

User:    I’m 600 seconds old.
Bot:    1.90258751902588e-05 years?? Hardly… I will remember your age.
User:    How old are you?
Bot:    I’m still very young. Exactly 667 seconds to be precise.
User:    I’m 200 years old
Bot:    200 years?? Hard to believe you’re that old. I will remember your age.
User:    What is my age?
Bot:    Your age is 85 (years).
User:    I’m 1020 months old
Bot:    Now that I call age! I will remember your age.

Do not pretend:

User:    You are ironhead
Bot:    No, my name is Andreas.
User:    You are a bot
Bot:    Of course I am a Bot. What did you think?
User:    No way you’re a bot
Bot:    Of course I am a Bot. But I take your claim as a compliment.

Like it or not - one for you ... one for me:

User:    What do I like?
Bot:    You like flowers.
User:    I’m Richard
Bot:    Hi Richard, nice to meet you.
User:    I like flowers
Bot:    So you like flowers. Interesting.
User:    What pleases me?
Bot:    You like me.
User:    I like you
Bot:    So you like me. Interesting.
User:    I like me
Bot:    So you like you. Interesting.
User:    What pleases me?
Bot:    I think I don’t know yet. Will you tell me?


There is a bunch of bugs left… but I have managed to avoid them in this demo. grin


Richard

 

 
  [ # 6 ]

Ok - Ladies and Gentlemen!

It’s rudimentary. It’s brittle. It’s in its toy stage. But semantic inference has finally arrived in the PMLS Discourse Engine:

User:    do humans have DNA?
Bot:    Yes.
User:    cells have DNA
Bot:    Ok, so cells have DNA.
User:    muscles have cells
Bot:    Ok, so muscles have cells.
User:    legs have muscles
Bot:    Ok, so legs have muscles.
User:    Humans have legs
Bot:    Ok, so Humans have legs.
User:    Hi
Bot:    Hello. My name is Adam. Will you tell me your name?
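The inference in this transcript boils down to a transitive closure over the taught “has” facts. A toy reconstruction of that idea (not the actual engine’s code):

```python
from collections import defaultdict

has = defaultdict(set)  # X -> set of Y, for taught "X have Y" facts

def teach(x, y):
    has[x].add(y)

def has_query(x, y):
    """'Do X have Y?' -- follow has-chains transitively."""
    stack, seen = [x], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if y in has[node]:
            return True
        stack.extend(has[node])
    return False

for fact in [("cells", "DNA"), ("muscles", "cells"),
             ("legs", "muscles"), ("humans", "legs")]:
    teach(*fact)

assert has_query("humans", "DNA")  # via legs -> muscles -> cells -> DNA
```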

Please don’t forget to get a new bot (by deleting the browser cookie from nlp.petamem.com) if you want to test this feature. The bot instance waiting there for you may not have this feature yet. Hyper-Evolution grin

edit:

continuing from there:

User:    what have humans and animals in common
Bot:    head and legs

User:    animals have head
Bot:    Ok, so animals have head.
User:    humans have head
Bot:    Ok, so humans have head.
User:    what have humans and animals in common
Bot:    legs

User:    do animals have DNA
Bot:    Yes.
User:    animals have legs
Bot:    Ok, so animals have legs.
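The “what have X and Y in common” answer shown here is essentially the intersection of the two transitive has-sets; a toy sketch of that computation (not the engine’s own code):

```python
from collections import defaultdict

has = defaultdict(set)  # taught "X have Y" facts

def reachable(x):
    """Everything X has, directly or via has-chains."""
    out, stack = set(), [x]
    while stack:
        for y in has[stack.pop()]:
            if y not in out:
                out.add(y)
                stack.append(y)
    return out

def in_common(a, b):
    return sorted(reachable(a) & reachable(b))

for x, y in [("humans", "head"), ("humans", "legs"),
             ("animals", "head"), ("animals", "legs")]:
    has[x].add(y)

print(in_common("humans", "animals"))  # -> ['head', 'legs']
```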

Ok, and that was the milestone for March 15th 2010.

edit2:

And we continue our happy inferencing:

We’ve widened the syntactic acceptance for part-of/meronym facts, and added a deduction rule: if something is contained in something else, the former must be smaller, and thus the latter bigger.

User:    Are quarks bigger than atoms?
Bot:    According to what you have teached me - quarks should be smaller than atoms.

User:    atoms contain quarks
Bot:    Ok, so atoms have quarks.
User:    molecules are made of atoms
Bot:    Ok, so molecules have atoms.
User:    DNA consists of molecules
Bot:    Ok, so DNA has molecules.
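The deduction above can be sketched as: normalize “X contains Y”, “Y is part of X”, and “X is made of Y” all to has(X, Y), then X is smaller than Y whenever X is transitively a part of Y (a toy illustration, not the engine’s code):

```python
from collections import defaultdict

has = defaultdict(set)  # has[whole] = its parts

def smaller_than(x, y):
    """True if X is (transitively) a part of Y -- a part must be
    smaller than its whole."""
    stack, seen = [y], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if x in has[node]:
            return True
        stack.extend(has[node])
    return False

# "atoms contain quarks", "molecules are made of atoms",
# "DNA consists of molecules" -- all normalized to has(whole, part)
for whole, part in [("atoms", "quarks"), ("molecules", "atoms"),
                    ("DNA", "molecules")]:
    has[whole].add(part)

print(smaller_than("quarks", "atoms"))  # -> True
print(smaller_than("quarks", "DNA"))    # -> True, via molecules and atoms
```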

So the virtual gadzillion-dollar question is: if you type in all of the above and ask the machine:

User: Are atoms bigger than humans?

What will it answer? grin

Richard

 

 
  [ # 7 ]

So it could follow simple has-has-has chains. Big deal. A new version can deduce is/has chains/networks. This lets us build up a much more realistic semantic net. I’ll just give example user input, with the bot’s eventual answers in brackets.

> humans are primates
> primates are mammals
> mammals are organisms
> organisms have cells
> cells have DNA
> Do humans have DNA? (yes)
> Do animals have DNA? (unknown)
> animals are organisms
> Do animals have DNA? (yes)
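The is/has deduction in the list above can be sketched as follows: a has-query inherits facts upward along “are” (is-a) links while also following “have” chains transitively (a toy reconstruction, not the engine itself):

```python
from collections import defaultdict

isa = defaultdict(set)  # taught "X are Y"
has = defaultdict(set)  # taught "X have Y"

def has_query(x, y):
    """'Do X have Y?'"""
    stack, seen = [x], set()
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if y in has[node]:
            return True
        stack.extend(isa[node])  # humans are primates are mammals ...
        stack.extend(has[node])  # organisms have cells have DNA ...
    return False

for x, y in [("humans", "primates"), ("primates", "mammals"),
             ("mammals", "organisms")]:
    isa[x].add(y)
for x, y in [("organisms", "cells"), ("cells", "DNA")]:
    has[x].add(y)

print(has_query("humans", "DNA"))   # -> True
print(has_query("animals", "DNA"))  # -> False (unknown so far)
isa["animals"].add("organisms")     # "animals are organisms"
print(has_query("animals", "DNA"))  # -> True
```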

Plus, the engine now keeps track of past discourse. That is, it can react differently depending on the preceding discourse. E.g. if the user answers just “No”, the semantics of “No” depend on what was stated before. So just answer “No” when it asks you for your name. grin
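Context-dependent interpretation of a bare “No” can be sketched with a small piece of dialogue state (names and return values invented for illustration):

```python
class DiscourseState:
    """Remembers the bot's pending question so that an elliptical
    answer like 'No' can be resolved against preceding discourse."""

    def __init__(self):
        self.pending = None  # e.g. "ask_name"

    def bot_asks(self, topic):
        self.pending = topic

    def interpret(self, user_input):
        answer, self.pending = self.pending, None
        if user_input.strip().lower() == "no" and answer == "ask_name":
            return "user declines to give a name"
        return "literal: " + user_input

state = DiscourseState()
state.bot_asks("ask_name")    # bot: "Will you tell me your name?"
print(state.interpret("No"))  # -> user declines to give a name
print(state.interpret("No"))  # no pending question -> literal: No
```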

Oh I nearly forgot:

User:    What color has a pink stone?
Bot:    I love trick questions. Should be pink.
User:    What size has a tiny ant?
Bot:    I love trick questions. Should be tiny.

215 rules - BTW.

edit:

The bot now also infers that if X has Y, then Y must be part of X. It didn’t do that before, so

User:    What do protons and electrons have in common
Bot:    As far as I know, protons and electrons seem to have nothing in common.

then

User:    electrons have quarks
User:    protons have quarks
User:    atoms have electrons
User:    atoms have protons
User:    What do protons and electrons have in common
Bot:    Both have quarks.They also are part of atoms.

As you can see, the answer to the question about common concepts got extended and will now list ALL nearest common concepts. We also see the “part of” answer, although that was taught only implicitly. So if we add

User:    protons are particles
User:    electrons are particles
User:    atoms are particles

User:    What do protons and electrons have in common
Bot:    Both have quarks.They also are part of atoms.They also are particles.
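The extended common-concepts answer can be sketched as combining three sources: shared has-targets, shared wholes from the derived part-of inverse, and shared is-a classes (a toy reconstruction of the behaviour shown above, not the engine’s code):

```python
from collections import defaultdict

has = defaultdict(set)  # "X have Y"
isa = defaultdict(set)  # "X are Y"

def part_of(x):
    """Derived inverse: X is part of every W for which 'W has X' was taught."""
    return {w for w, parts in has.items() if x in parts}

def in_common(a, b):
    answers = []
    shared = has[a] & has[b]
    if shared:
        answers.append("Both have " + " and ".join(sorted(shared)) + ".")
    wholes = part_of(a) & part_of(b)
    if wholes:
        answers.append("They also are part of " + " and ".join(sorted(wholes)) + ".")
    kinds = isa[a] & isa[b]
    if kinds:
        answers.append("They also are " + " and ".join(sorted(kinds)) + ".")
    return " ".join(answers) or "Nothing in common, as far as I know."

for x, y in [("electrons", "quarks"), ("protons", "quarks"),
             ("atoms", "electrons"), ("atoms", "protons")]:
    has[x].add(y)
for x in ("protons", "electrons", "atoms"):
    isa[x].add("particles")

print(in_common("protons", "electrons"))
```

With the facts above, the printed answer is “Both have quarks. They also are part of atoms. They also are particles.”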

The bot now also has an answer to the question

User:    What have you learned?
Bot:    Let’s see, that’d be 4 facts then. Pretty meagre if you ask me. Perhaps you’d like to ask me “How do you learn” and try teaching me?

grin

And it knows the answer to the question about listing the parts of an entity. (Before, it could only answer Yes/No/Don’t know when a pair of entities was given for a holonym/meronym check):

User:    What are atoms made of?
Bot:    atoms has the following parts: electrons and protons

 

 
  [ # 8 ]

commonsense knowledge

Learning is nice, but long and tiresome. Wouldn’t it be nicer if the bot didn’t come along as a blank sheet of paper, but with commonsense knowledge already built in? It would!

We’re slowly starting to pour in commonsense knowledge that is available to every new instance of the chatbot. Right now there are some geographical and astronomical facts to test the semantic net and inference algorithms.

User:    what have czechia and denmark in common?
Bot:    Both have borders with germany and poland. They also are part of europe. They also are country.

User:    What is the czech republic?
Bot:    czech republic has one synonym (czechia), which I will consider when retrieving information. It is a country and has borders with austria, germany, poland and slovakia. czech republic is part of europe and contains brno, chomutov, děčín, frýdek-místek, havířov, hradec králové, jihlava, karlovy vary, karviná, kladno, liberec, mladá boleslav, most, olomouc, opava, ostrava, pardubice, plzeň, prague, přerov, teplice, zlín, ústí nad labem and české budějovice. For more details see Wikipedia(czech republic).

User:    Is jupiter a planet?
Bot:    Yes.

User:    What is the solar system?
Bot:    solar system is a astronomical object. solar system is part of none and contains earth, jupiter, mars, mercury, neptune, saturn, sol, uranus and venus. For more details see Wikipedia(solar system).

It will certainly be fun to see how this thing evolves.
Try it: http://nlp.petamem.com/eng/nlp/chatbot.mpl

 

 
  [ # 9 ]

Please, someone come up with a good definition of intelligence. Then let a good definition of artificial intelligence follow. OK, I’ll start with one:

“If I was less intelligent, I’d be worried that computers will one day become more intelligent than me.”

Nice circular gizmo - isn’t it? What would you call this:

Q: “In what relation are ‘red dwarf’ and ‘astronomical object’?”
A: “An ‘astronomical object’ is the general case of a ‘red dwarf’.”

Q: “What is a ‘red dwarf’?”
A: “A ‘red dwarf’ is a kind of a star.”

Q: “In what relation are ‘smaller than’ and ‘bigger than’?”

... wait ... think ... answer this for yourself.


A: “‘Smaller than’ and ‘bigger than’ are inverses.”

(Did anyone say OPPOSITES???)

Q: “In what relation are ‘singing’ and ‘cloud’?”
A: “‘Singing’ and ‘cloud’ are foreign concepts.”

Q: “In what relation are ‘equal’ and ‘not equal’.”
A: “‘Equal’ and ‘not equal’ are opposites.”


Given that the answers (A:) come from our PMLS Discourse Engine…
you probably cannot imagine how excited I am…


Richard

 

 
  [ # 10 ]

http://www.hutter1.net/ai/uaibook.htm

UNIVERSAL ALGORITHMIC INTELLIGENCE

Sequential decision theory formally solves the problem of rational agents in
uncertain worlds if the true environmental prior probability distribution is
known. Solomonoff’s theory of universal induction formally solves the problem
of sequence prediction for unknown prior distribution. We combine both
ideas and get a parameter-free theory of universal Artificial Intelligence. We
give strong arguments that the resulting AIXI model is the most intelligent
unbiased agent possible. We outline how the AIXI model can formally solve
a number of problem classes, including sequence prediction, strategic games,
function minimization, reinforcement and supervised learning. The major
drawback of the AIXI model is that it is uncomputable. To overcome this
problem, we construct a modified algorithm AIXItl that is still effectively
more intelligent than any other time t and length l bounded agent. The
computation time of AIXItl is of the order t·2^l. The discussion includes formal
definitions of intelligence order relations, the horizon problem and relations
of the AIXI theory to other AI approaches.

 

 
  [ # 11 ]
Richard Jelinek - Oct 12, 2010:

Please someone come up with a good definition of intelligence. […]

Given that the answers (A:) come from our PMLS Discourse Engine… you probably cannot imagine how excited I am…

That’s a pretty good result. Congratulations! What methods are you using?

 

 
  [ # 12 ]
Richard Jelinek - Mar 19, 2010:

So it could follow simple has-has-has chains. Big deal. A new version can deduce is/has chains/networks. This lets us build up a much more realistic semantic net. I’ll just give example user input, with the bot’s eventual answers in brackets.

> humans are primates
> primates are mammals
> mammals are organisms
> organisms have cells
> cells have DNA
> Do humans have DNA? (yes)
> Do animals have DNA? (unknown)
> animals are organisms
> Do animals have DNA? (yes)

This is great stuff. Can the bot recognize the ” word have word ” and ” word are word ” sequences within a larger, more complex sentence?

Also, I can’t resist…

Richard Jelinek - Mar 19, 2010:

User:    electrons have quarks
User:    protons have quarks
User:    atoms have electrons
User:    atoms have protons
User:    What do protons and electrons have in common
Bot:    Both have quarks.They also are part of atoms.

...electrons don’t have quarks!!! Electrons are leptons, currently thought to be indivisible particles of no size (radius < 10^-18 m). Perhaps they will one day be shown to be composite particles, but at any rate… they don’t have quarks.

 

 
  [ # 13 ]

lol, you tell ‘em, CR! smile

Heck, I’m a former Auto mechanic/cab driver/Granny Nanny, and I know that electrons don’t have quarks. Of course, I also know that Quark has a holo-suite or three, but that’s a different knowledge base. smile

 

 
  [ # 14 ]
C R Hunt - Oct 13, 2010:

...electrons don’t have quarks!!! Electrons are leptons and currently thought to be indivisible particles of no size (radius < 10^-18 m). Perhaps they will be one day shown to be composite particles, but at any rate…they don’t have quarks.

Uh - yes. I should have taught it some valid facts about other members of the particle zoo. Even just for the sake of demonstration, there are a lot of other candidates. But as you can see, it readily learns even “hypotheses”.

Can the bot recognize the ” word have word ” and ” word are word ” sequences within a larger, more complex sentence?

First off, it cannot parse complex sentences, because for good parsing, good semantic inference must be in place - at least that’s what we think. So right now it’s just pattern matching to get some key info to the inferential engine.

And that inferential engine has no problem distinguishing between “are” and “have” relations between objects/concepts… In fact, that’s one of its most fundamental features.
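That pattern-matching front end could be sketched like this (toy patterns, not PetaMem’s actual ones):

```python
import re

# Cheap surface patterns that feed the inferential engine -- no full parse.
PATTERNS = [
    (re.compile(r"\b(\w+)\s+(?:have|has|contains?)\s+(\w+)", re.I), "has"),
    (re.compile(r"\b(\w+)\s+(?:are|is)\s+(\w+)", re.I), "isa"),
]

def extract_facts(sentence):
    """Pull (relation, X, Y) triples out of a sentence by pattern matching."""
    facts = []
    for pattern, rel in PATTERNS:
        for x, y in pattern.findall(sentence):
            facts.append((rel, x.lower(), y.lower()))
    return facts

print(extract_facts("cells have DNA"))       # -> [('has', 'cells', 'dna')]
print(extract_facts("humans are primates"))  # -> [('isa', 'humans', 'primates')]
```

Because the patterns scan anywhere in the input, the same pass would also pick up a “ word relation word ” sequence embedded in a longer sentence, with the obvious risk of false positives that a real parser would avoid.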

Of course, I also know that Quark has a holo-suite or three, but that’s a different knowledge base.

And Quark is also the German word for curd. All three fit well within one knowledge base as quark/0, quark/1 and quark/2 - should you need them. wink


Richard

 

 
  [ # 15 ]
Richard Jelinek - Oct 13, 2010:

Uh - yes. I should have taught it some valid facts about other members of the particle zoo. Even just for the sake of demonstration, there are a lot of other candidates. But as you can see, it readily learns even “hypotheses”.

The demo was well-done, I just can’t resist getting side-tracked by physics. smile

Richard Jelinek - Oct 13, 2010:

First off, it cannot parse complex sentences, because for good parsing, good semantic inference must be in place - at least that’s what we think. So right now it’s just pattern matching to get some key info to the inferential engine.

Ah, I see. And I agree that semantic inference is necessary for proper parsing. I wonder how much could be done in this case with a statistical approach, provided users understand that the goal of the bot is to learn relationships between objects. Even nested within larger sentences, the key information would still be in the same “ word relation word ” format. And all the information you’re gathering is precisely what you need to start making inferences about sentence meaning. But I guess everything in steps… Keep up the good work!

 
