

Introduction
 
 
  [ # 91 ]

Just wonder, Hans, what are you going to code this in?

Since I have made the move to C++, if I spend 5 years coding I have to keep an eye on the new C++0x standard that is coming out. I wish they would just leave well enough alone!!

 

 
  [ # 92 ]

Hans, starting from a base of core assumptions sounds like a great plan. Have you considered yet the boundary between the core with learned stuff and the current conversation in progress? Once a topic is established in a conversation, you can use the core knowledge to predict the next inputs (roughly) in the dialog and maybe jump past all the hurdles Victor and C R are warning you about. Probably your guess of finding some (forecasted) keywords will be enough to make it work. So part of your approach has to generalize the specific instances of knowledge found in the exchanges to add that to the core. I think this is Victor’s (and C R’s) main point, that is, being able to generalize what is said so it is more useful to the program. Their only drawback is that they haven’t taken the text processing to the level of generalization that you need for your bot’s “personality” (I think, at least that’s what I hear them saying.)

Anyways, good luck. Keep the focus on the bot’s ideas instead of the mechanics of interfacing to humans. You can improve the interface later (maybe with Victor’s help.) It sounds like a great adventure!

 

 
  [ # 93 ]
Victor Shulist - Feb 6, 2011:

user: a ham sandwich is better than nothing
user: nothing is better than a million dollars
ai : thus, a ham sandwich is better than a million dollars

This is a great test. Of course, my logic agent fell into the trap. I can sort of get around it by introducing “X is not Y” statements, but it is not generic enough:

—-

> a sandwich is better than nothing
> nothing is better than a million dollars
> is a sandwich better than a million dollars?
Yes, a sandwich is better than a million dollars.

> why is a sandwich better than a million dollars?
a sandwich is better than a million dollars because: a sandwich is better than nothing, and nothing is better than a million dollars

> a sandwich is not better than a million dollars
> is a sandwich better than a million dollars?
No, a sandwich is not better than a million dollars

> a dinner is better than a sandwich
> is a dinner better than a million dollars?
Yes, a dinner is better than a million dollars.

> a dinner is not better than a million dollars
> is a dinner better than a million dollars?
No, a dinner is not better than a million dollars

—-
A more generic way of handling it is to include a preprocessing rewrite rule. I think the “nothing” in “X is better than nothing” can be interpreted as an elision of “having nothing”.

Ideally I would like to be able to add a preprocessing rewrite rule to the logic agent at runtime, but I currently have only postprocessing rule capability, and I ran into problems trying to use that. So I had to introduce a simple if-then rule into the source code (which I hate doing, so I need to figure out a way to add preprocessing rules at runtime). But for now the agent can handle this dialog:

—-

> a sandwich is better than nothing
Okay, a sandwich is better than having nothing.

> nothing is better than a million dollars
Okay, nothing is better than a million dollars.

> is a sandwich better than nothing?
Yes, a sandwich is better than having nothing.

> is a sandwich better than a million dollars?
I have no knowledge that a sandwich is better than a million dollars.

>
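A preprocessing rewrite rule of this kind could be sketched as follows. This is a toy, not the actual logic agent’s code: the fact store, the rewrite table, and the naive transitive “better than” query are all assumptions made for illustration.

```python
import re

# Hypothetical sketch: a preprocessing rewrite pass that disambiguates
# "nothing" before facts reach the inference engine.
REWRITES = [
    # "X is better than nothing" is read as an elision of "having nothing"
    (re.compile(r"^(.*\S) is better than nothing$"),
     r"\1 is better than having nothing"),
]

def preprocess(sentence):
    for pattern, repl in REWRITES:
        sentence = pattern.sub(repl, sentence)
    return sentence

facts = set()

def tell(sentence):
    facts.add(preprocess(sentence))

def better(x, y, seen=None):
    """Naive transitive 'better than' query over the stored facts."""
    seen = seen if seen is not None else set()
    if f"{x} is better than {y}" in facts:
        return True
    for fact in list(facts):
        m = re.match(rf"^{re.escape(x)} is better than (.+)$", fact)
        if m and m.group(1) not in seen:
            seen.add(m.group(1))
            if better(m.group(1), y, seen):
                return True
    return False

tell("a sandwich is better than nothing")
tell("nothing is better than a million dollars")
print(better("a sandwich", "a million dollars"))  # False: the rewrite breaks the chain
print(better("a sandwich", "having nothing"))     # True
```

Because the rewrite fires before the fact is stored, the two premises no longer share the term “nothing”, so the fallacious transitive step never becomes available.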

 

 
  [ # 94 ]
Victor Shulist - Feb 7, 2011:

Just wonder, Hans, what are you going to code this in?

Not sure yet, but the choice of programming environment will mainly be based on the available options to quickly prototype stuff. For now I’m leaning towards Python, but even PHP might be a viable option. Speed is not an issue, at least not at this point in the development. The main technical hurdle to overcome is to develop a data-model that can represent my AI-mind. Although the data-model will be fairly simple (I think), the data-model is where the real mojo will take place.

 

 
  [ # 95 ]
Gary Dubuque - Feb 7, 2011:

Hans, starting from a base of core assumptions sounds like a great plan. Have you considered yet the boundary between the core with learned stuff and the current conversation in progress? Once a topic is established in a conversation, you can use the core knowledge to predict the next inputs (roughly) in the dialog and maybe jump past all the hurdles Victor and C R are warning you about. Probably your guess of finding some (forecasted) keywords will be enough to make it work. So part of your approach has to generalize the specific instances of knowledge found in the exchanges to add that to the core.

Any conversational input will be used for learning of course. My main mind-model is based on concepts, context and experience; anything that happens (in a conversation) is going to be mapped to one or more concepts, placed in a context (i.e. the ‘topic’ of the conversation) and stored as ‘experience’. The AI will build up ‘experience’ and use that to ‘predict’ the flow of the conversation, or better yet be able to influence the flow of the conversation.

As soon as an AI can actually influence or steer the conversation in directions other than anticipated by the user (within the boundaries of common conversation and ‘acceptable’ flow), then we might get close to strong-AI. And of course I’m not talking ‘canned responses’ here (my model doesn’t have any of those); I’m talking about the AI making assumptions based on input, deciding what would be an appropriate response based on ‘experience’, and then formulating the response, again based on ‘experience’.
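The concept/context/experience loop described here could be sketched, very roughly, as a table of observed transitions: each exchange is reduced to a concept, filed under the current context (topic), and the accumulated counts are used to predict the likely next concept. The context and concept names below are invented purely for illustration; this is not the actual mind-model.

```python
from collections import defaultdict, Counter

# (context, concept) -> counts of which concept followed it in past dialogs
experience = defaultdict(Counter)

def observe(context, concept, next_concept):
    """Store one exchange as 'experience'."""
    experience[(context, concept)][next_concept] += 1

def predict(context, concept):
    """Predict the most likely next concept, or None if no experience yet."""
    counts = experience[(context, concept)]
    return counts.most_common(1)[0][0] if counts else None

observe("weather", "greeting", "ask_forecast")
observe("weather", "greeting", "ask_forecast")
observe("weather", "greeting", "small_talk")
print(predict("weather", "greeting"))  # "ask_forecast"
```

A real model would of course generalize across contexts rather than keep raw counts, but the shape of the idea — experience accumulated per context and replayed as prediction — is the same.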

Something I didn’t mention yet is that I’m also incorporating a model to describe emotions and personality for the AI. I think that to pass something like a Turing Test, this is also important. My current choice for that is the PAD-model: http://www.kaaj.com/psych/ai.html
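For reference, a minimal sketch of how a PAD-style state might be tracked: the mood is a point in Pleasure-Arousal-Dominance space, nudged by events and labelled by octant. The octant labels follow Mehrabian’s temperament types; the event deltas are invented for illustration.

```python
# Octant labels from Mehrabian's PAD temperament model, keyed by the
# sign of (Pleasure, Arousal, Dominance).
LABELS = {
    (+1, +1, +1): "exuberant", (-1, -1, -1): "bored",
    (+1, +1, -1): "dependent", (-1, -1, +1): "disdainful",
    (+1, -1, +1): "relaxed",   (-1, +1, -1): "anxious",
    (+1, -1, -1): "docile",    (-1, +1, +1): "hostile",
}

class PadState:
    def __init__(self):
        self.p, self.a, self.d = 0.0, 0.0, 0.0   # neutral baseline

    def nudge(self, dp, da, dd):
        """Shift the mood by an event's deltas, clamped to [-1, 1]."""
        clamp = lambda x: max(-1.0, min(1.0, x))
        self.p = clamp(self.p + dp)
        self.a = clamp(self.a + da)
        self.d = clamp(self.d + dd)

    def label(self):
        sign = lambda x: 1 if x >= 0 else -1
        return LABELS[(sign(self.p), sign(self.a), sign(self.d))]

mood = PadState()
mood.nudge(0.4, 0.3, 0.2)   # e.g. the user pays the bot a compliment
print(mood.label())         # "exuberant"
```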

And indeed, it is already a great adventure smile

 

 
  [ # 96 ]
Hans Peter Willems - Feb 7, 2011:

Something I didn’t mention yet is that I’m also incorporating a model to describe emotions and personality for the AI. I think that to pass something like a Turing Test, this is also important. My current choice for that is the PAD-model: http://www.kaaj.com/psych/ai.html

And indeed, it is already a great adventure smile

It’s funny you should mention adding emotion to your bot. With all this discussion about instinct vs learned behaviors, I was reminded of a project I’ve been letting languish. (I think I’ve mentioned it on this site before, but not in great detail.) It’s a little program that maps emotions onto words and vice versa, starting with 7 basic emotions to define my “emotion space”. All other “higher” emotions are some linear combination of these emotions (a vector in emotional space, if you will). I got the idea from some psychology mumbo jumbo asserting some emotions to be instinctual and others socially learned. I don’t really buy it, but the concept is useful.

My bot (I call him EMO for Emotion MOdule smile ) can only accept restricted input of the forms “* is *”, “emotion * is *”, or “*”. The first two prompt the bot to learn either a new word or a new emotion, respectively. The words and emotions listed to the right of the “is” are used to deduce the emotional vector of the new word. For example,

“sunshine is warm sunny yellow wonderful happy daytime”

would prompt the bot to sum up the known emotional vectors to define a new emotional vector for sunshine. If that emotional vector is (roughly) associated with a named emotion, it would then express that it feels that emotion when reminded of sunshine.

Of course, one can quickly see the limitations of such a scheme. “Sunshine is hot sunburn cancer drought,” as well. Context is key. Eventually I’m planning to expand EMO to associate larger chunks of text with emotions and integrate it with my main chatbot project.
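EMO’s summing scheme could look roughly like this. This is a sketch, not EMO’s actual code: the seven basis labels, the lexicon values, and the named emotion “delight” are all invented for illustration.

```python
import math

BASIS = ["joy", "sadness", "anger", "fear", "surprise", "disgust", "trust"]

# Known words and their vectors in the 7-dimensional "emotion space"
lexicon = {
    "warm":      [0.6, 0, 0, 0, 0.0, 0, 0.4],
    "sunny":     [0.8, 0, 0, 0, 0.1, 0, 0.0],
    "happy":     [1.0, 0, 0, 0, 0.0, 0, 0.2],
    "wonderful": [0.9, 0, 0, 0, 0.3, 0, 0.1],
}
named_emotions = {"delight": [1.0, 0, 0, 0, 0.2, 0, 0.2]}

def learn_word(word, definition_words):
    """Handle '<word> is <w1> <w2> ...': sum the known emotional vectors."""
    vec = [0.0] * len(BASIS)
    for w in definition_words:
        for i, v in enumerate(lexicon.get(w, [0.0] * len(BASIS))):
            vec[i] += v
    lexicon[word] = vec
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def nearest_emotion(vec, threshold=0.9):
    """Name the emotion whose direction the vector is (roughly) closest to."""
    best = max(named_emotions, key=lambda e: cosine(vec, named_emotions[e]))
    return best if cosine(vec, named_emotions[best]) >= threshold else None

v = learn_word("sunshine", ["warm", "sunny", "happy", "wonderful"])
print(nearest_emotion(v))  # "delight"
```

The drought-and-sunburn problem noted below shows up immediately: feed the same word a different definition and its vector (and hence its nearest named emotion) changes, with nothing in the scheme to reconcile the two readings.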

 

 
  [ # 97 ]

How would you classify the emotion in the following passage?

If reality was simple, then yes, X-is-Y, X-has-a-Y and all this “Socrates is a man, all men are mortal, thus Socrates is mortal” stuff would be enough for AI, but it isn’t.  Those simple systems have FAILED [...]

I would classify it as smug? Vindictive? Perhaps a little hurt, because of bad experiences with logic teachers in school? smile

For me, the words “all this ... stuff” seek to trivialize what comes between them, with a smugness that obviates providing examples. It’s the textual equivalent of a hand-waving, deprecatory gesture; meaning to persuade by emotional force, not logic smile Then the capitalized “FAILED” indicates vengefulness, as if the writer is underscoring his words by figuratively knocking on a table to reinforce his absolute confidence in what he’s saying; like, say, Bush might have done when asserting that Iraq had WMD…

 

 
  [ # 98 ]
Hans Peter Willems - Feb 7, 2011:

Not sure yet, but the choice of programming environment will mainly be based on the available options to quickly prototype stuff. For now I’m leaning towards Python, but even PHP might be a viable option. Speed is not an issue, at least not at this point in the development. The main technical hurdle to overcome is to develop a data-model that can represent my AI-mind. Although the data-model will be fairly simple (I think), the data-model is where the real mojo will take place.

I absolutely adore python. It’s a great language for hacking together code quickly (perfect for prototyping). There are also many natural language tools built for python, which you may find useful. I use the Natural Language Toolkit (see nltk.org). It has a module called WordNet that I use as a glorified dictionary, though it is more powerful than that. There are also statistical parsers and stemmers bundled with NLTK that you might find handy. (There are WordNet implementations in other languages, including php I believe, though I’m not familiar with them.)

Edited to add: Very interesting analysis, Robert. LOL Well, Victor, how long before we see bots that can do similar meta-reading?

 

 
  [ # 99 ]
Robert Mitchell - Feb 7, 2011:

How would you classify the emotion in the following passage?

If reality was simple, then yes, X-is-Y, X-has-a-Y and all this “Socrates is a man, all men are mortal, thus Socrates is mortal” stuff would be enough for AI, but it isn’t.  Those simple systems have FAILED [...]

I would classify it as smug? Vindictive? Perhaps a little hurt, because of bad experiences with logic teachers in school? smile

Ok, ok, my bad.

 

 
  [ # 100 ]
Victor Shulist - Feb 7, 2011:

But I think it will be very useful, even if not considered ‘thought’, if the program can figure out which combinations of logic modules (in this example, the only “logic module” was basic ohms law).  But later i want logic modules, not only with electronics, but with any other topic.

What I’m looking forward to is the bot taking basic modules, like an “algebra module”, and building more complex modules, such as an “ohm’s law” module, on its own based on natural language instructions. How cool would that be? smile
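As a toy illustration of that kind of composition (purely hypothetical, not anyone’s actual system): a generic “algebra module” that solves a·b = c for whichever value is unknown, with an “Ohm’s law module” defined on top of it just by naming the quantities.

```python
def product_rule(a, b, c):
    """Algebra module: solve a*b = c, given exactly two of the three
    values (the unknown is passed as None)."""
    if c is None:
        return a * b
    if a is None:
        return c / b
    if b is None:
        return c / a
    raise ValueError("exactly one unknown expected")

def ohms_law(voltage=None, current=None, resistance=None):
    # V = I * R is just the product rule with domain-specific names
    return product_rule(current, resistance, voltage)

print(ohms_law(current=2.0, resistance=6.0))   # 12.0 (volts)
print(ohms_law(voltage=12.0, resistance=6.0))  # 2.0 (amps)
```

The point of the thread’s “how cool would that be?” is that the second function would be derived by the bot itself from a natural-language statement of Ohm’s law, rather than written by hand as it is here.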

 

 
  [ # 101 ]

ooooooooooooooooooh Now you’re talking !!!!!!!!!!!!!!!!!

Yes, and this goes to the heart of the matter in my previous post. What if the system can go from very abstract ‘logic modules’ into more specific ones? So now we’re talking about combining logic modules at the same level, *and* at different levels (the concept of “dividing” is, say, level 0; then Ohm’s law, level 1). Now what if it can do that across hundreds of levels and thousands of connections, and make those connections on its own, so that the solution is the result of it networking those logic modules together? Will that still be just info processing? Perhaps so, perhaps not; perhaps it is simply a matter of your own interpretation. Either way, whatever you call it, it will be seriously cool, AND POWERFUL, not to mention fun to talk to !!

@Gary

Sorry, I forgot to head my posting above (Posted: Feb 7, 2011 [ # 109 ]) with your quote.

 

 
  [ # 102 ]

I’m new here and this thread caught my eye. Very interesting opinions are expressed here. I’d like to offer a definition of language acquisition.

Language acquisition is the process of associating mental representations with oral, written, graphic and gestured communication in an agreed upon mutual language that allows comprehension and expression. This process includes accepting communication and propositions that express intent, inferring grammar and style rules, and testing the rules.

 

 
  [ # 103 ]

Hiya, Toborman! Thanks for breathing some new life into one of my favorite threads. smile

I like that definition for language acquisition. I’m not so sure about the “agreed upon mutual language” part, but I’m currently unable to offer an alternative phrase. The next step is to convert this definition into a set of logical “rules” that can further be manipulated into code. smile

 

 
  [ # 104 ]

Dave Morton said:

I like that definition for language acquisition. I’m not so sure about the “agreed upon mutual language” part, but I’m currently unable to offer an alternative phrase.

I agree that the definition needs to be worked a bit more. Try this on for size:

For “agreed upon mutual language”, let’s read: mutually acceptable communication.

For “grammar and style rules”, let’s understand: general communication patterns.

For “test rules”, let’s understand: using patterns, generate new instances in subsequent communications to see if they are mutually acceptable.

 

 
  [ # 105 ]

Dave Morton said:

The next step is to convert this definition into a set of
logical rules that can further be manipulated into code.

Let me offer the following as one possible method (i.e., inductive generalization) for inferring some general patterns.

The following is a “learn by example” English language acquisition process for a simple-sentence written-dialog protocol.
1. Using a known sentence structure, parse the input sentence.
2. Compare the components to English language patterns:
   o Case 1 - If a 100% pattern match is found, then exit (to interpret the sentence).
   o Case 2 - If a literals-only match is found, then:
      1. For each unmatched element, infer that the class of the element is probably consistent with the pattern.
      2. Exit (to interpret the sentence).
   o Case 3 - If no match is found, then for each stored sentence in past dialogs:
      1. Parse the sentence into words and delimiters using sentence structure.
      2. Compare the input components to the dialog sentence components.
      3. If no match is found, then go to the next dialog sentence.
      4. If a match is found, then infer a pattern for the related sentences from the matched components, using inductive generalization.
      5. Compare each non-matching component to the class elements.
      6. If a match is found, then go on to compare the next non-matching component.
      7. If no match is found, then establish a class variable for the non-matching component.
      8. Remember the class variables.
      9. Remember the English language pattern.
      10. Exit (to interpret the sentence).
   o Case 4 - If a variables-only match is found, then:
      1. Assume the unmatched components are new literals.
      2. Create a probable new pattern.
      3. Remember the English language pattern.
      4. Exit (to interpret the sentence).

Test Cases:
Case 1 – 100% match.
Input: Mary went to the doctor’s office.
Dialog entries: Mary went to the store. Alice went to the doctor’s office.
English Language Pattern: person went to the place.
Class: person (Mary, Alice)
Class: place (store, doctor’s office)
Sentence structure (words + delimiter)

Case 2 – literals only match.
Input: Jake went to the gym.
Dialog entries: Mary went to the store. Alice went to the doctor’s office.
English Language Pattern: person went to the place.
Class: person (Mary, Alice)
Class: place (store, doctor’s office)
Sentence structure (words + delimiter)
New elements: Assume that Jake is probably a person and gym is probably a place.

Case 3 – no match.
Input: Jake went to the gym.
Dialog entries: Mary went to the store. Alice went to the doctor’s office.
English Language Pattern: none
Class: person (Mary, Alice)
Class: place (store, doctor’s office)
Sentence structure (words + delimiter)
New elements: Assume that Jake is probably a person and gym is probably a place.
New elements: Assume that “person went to the place” is a new pattern.

Case 4 – variables match.
Input: Mary went to the doctor’s office.
Dialog entries: Mary went to the store. Alice went to the doctor’s office.
English Language Pattern: person was at the place.
Class: person (Mary, Alice)
Class: place (store, doctor’s office)
Sentence structure (words + delimiter)
New elements: Assume that “person went to the place” is a new pattern.
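Cases 1 and 2 above can be sketched in a few lines, using a regular expression for the stored pattern and sets for the class variables. The representation is an assumption chosen for brevity; a fuller version would induce the pattern itself (Case 3) rather than hard-code it.

```python
import re

# Class variables and their known members, as in the test cases above
classes = {
    "person": {"Mary", "Alice"},
    "place": {"store", "doctor's office"},
}
# Stored English language pattern: "person went to the place."
PATTERN = re.compile(r"^(?P<person>.+) went to the (?P<place>.+)\.$")

def match(sentence):
    m = PATTERN.match(sentence)
    if not m:
        return "Case 3/4: no pattern match"   # fall through to induction
    inferred = []
    for cls in ("person", "place"):
        value = m.group(cls)
        if value not in classes[cls]:
            classes[cls].add(value)           # Case 2: infer class membership
            inferred.append(f"{value} is probably a {cls}")
    if inferred:
        return "Case 2: " + "; ".join(inferred)
    return "Case 1: 100% pattern match"

print(match("Mary went to the doctor's office."))  # Case 1: 100% pattern match
print(match("Jake went to the gym."))              # Case 2: infers Jake/person, gym/place
```

After the second call, “Jake” and “gym” have joined their classes, so the same sentence would next time be a Case 1 match — which is exactly the learning effect the test cases describe.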

 

 
