Cause and effect behavior
 
 
  [ # 16 ]

I had to comment on this thread!

Some of you may remember that, almost two years ago now, I made a few posts regarding a project called ALF that my colleagues and I were working on. Since my rather long absence, for one reason and another, that project has been scrapped, restarted and scrapped a number of times, evolving each time.

Our current tech is, I believe, at the very forefront of what is currently possible, and while I will of course keep the details fairly private (until the patent is granted, at least), Cause & Effect plays a part in our system.

So, Toborman, while I was aghast that someone else had (finally) stumbled across this important part of the puzzle (I was hoping that we were the only ones), I also salute you. :)

If you (or anyone else, for that matter) would like to converse, feel free to hit me up. :)

I will at some point post a new thread about the current incarnation of what was ALF, but as it is now a commercial project with bean-counting investors, I have to be sure about what I post. :)

 

 
  [ # 17 ]

I’m happy to hear of your success, Dan, and am looking forward to seeing how you have implemented cause and effect in your upcoming product. I’m sure you already know that my description has some missing elements and lacks detail. I would appreciate your critique of any part that doesn’t violate your current agreements. Three extensions I’m developing are reasoning about a language, problem-solving methods, and explanation and argumentation. If you have an interest in any of these areas, I would be happy to share my thoughts. :D

 

 
  [ # 18 ]

Thanks. :) Our first commercial foray will be in the domain of Q&A, as this best fits our current development position; we are hoping to have it online in the next 4-6 months. After that, additional systems and modules will be brought online, along with the respective services to suit.

C&E plays a part in them all, both in terms of factual and hypothetical result sets. The model provides a great deal of power, especially in the domains of inference and deduction. This carries over heavily into problem solving, to such a degree that CAESAR (as it is now called) is able to predict the outcome(s) of a set of inputs and also work in reverse: given an outcome, determine a possible set of inputs that may lead to it. This is highly beneficial in medical applications and the diagnosis of symptoms and diseases.
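To make the forward/backward idea a little more concrete without giving anything away, here is a purely illustrative Python sketch (toy rules I have just made up, nothing like our actual implementation) of a cause-and-effect table being run in both directions:

# Purely illustrative: a tiny cause->effect table run forward (predict
# outcomes from inputs) and in reverse (given an outcome, list the cause
# sets that could lead to it).
CAUSES = {                               # hypothetical toy rules
    ("fever", "cough"): "flu",
    ("fever", "rash"): "measles",
    ("flu",): "fatigue",
}

def predict(inputs):
    """Forward chain: keep firing rules whose causes are all present."""
    facts = set(inputs)
    changed = True
    while changed:
        changed = False
        for causes, effect in CAUSES.items():
            if set(causes) <= facts and effect not in facts:
                facts.add(effect)
                changed = True
    return facts

def possible_inputs(outcome):
    """Reverse direction: which cause sets could produce this outcome?"""
    return [set(causes) for causes, effect in CAUSES.items() if effect == outcome]

print(predict({"fever", "cough"}))   # fever, cough, flu, fatigue (order varies)
print(possible_inputs("flu"))        # [{'fever', 'cough'}]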

Language is handled differently than in most other systems, I believe, in the sense that it cares not about the words at all, only that the words map to the concept trees which are constructed.

Parsing text in any language is a set of rules (complex and ambiguous at times), and the words themselves are nothing more than a mapping to a concept in a human mind; an ID or association, if you like.

E.g. if you have no mapping for a particular sequence of letters (or sounds), then that is likely a word, or even a concept, that you do not know. At that point, you would request further information about the “concept” that the word describes. The information request continues until you are sure that you have all the relevant components of that concept for that word mapping: you understand the concept and its relation to other concepts that you do know, well enough to explain it to another party that may not have that word->concept mapping either. At this stage, you can understand what the other party in the conversation is trying to convey.

For all intents and purposes, a dog, in Caesar’s mapping, could be represented by anything, and it matters not, so long as whenever Caesar encounters the sequence for a dog in any language, it maps to the correct concept.

Different languages are mapped to corresponding concepts, and so long as the system is directed that the sequence “chien” in French is equivalent to “dog” in English, they both map to the same concept.
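In spirit (hypothetical names here, nothing from Caesar’s internals), the mapping amounts to something like this:

# Hypothetical names: words in any language are just keys that resolve to
# the same language-neutral concept token.
CONCEPT_MAP = {
    ("en", "dog"):   "CONCEPT_CANINE",
    ("fr", "chien"): "CONCEPT_CANINE",
    ("en", "cat"):   "CONCEPT_FELINE",
}

def to_concept(language, word):
    """Return the concept token for a word, or None if the word is unknown
    (the point at which the system would ask for more information)."""
    return CONCEPT_MAP.get((language, word.lower()))

assert to_concept("en", "dog") == to_concept("fr", "chien")   # same concept
print(to_concept("de", "Hund"))   # None -> unknown word; request a definition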

How are you tackling these problems in your own systems?  Are we aligning on more ideas? :D

 

 
  [ # 19 ]

C&E plays a part in them all, both in terms of factual and hypothetical result sets. The model provides a great deal of power, especially in the domains of inference and deduction. This carries over heavily into problem solving, to such a degree that CAESAR (as it is now called) is able to predict the outcome(s) of a set of inputs and also work in reverse: given an outcome, determine a possible set of inputs that may lead to it. This is highly beneficial in medical applications and the diagnosis of symptoms and diseases.

Sounds like a dynamic expert system. Is the Q&A “Template based”, so you can move from one domain to another easily?

Language is handled differently than in most other systems, I believe, in the sense that it cares not about the words at all, only that the words map to the concept trees which are constructed.

Parsing text in any language is a set of rules (complex and ambiguous at times), and the words themselves are nothing more than a mapping to a concept in a human mind; an ID or association, if you like.
E.g. if you have no mapping for a particular sequence of letters (or sounds), then that is likely a word, or even a concept, that you do not know. At that point, you would request further information about the “concept” that the word describes. The information request continues until you are sure that you have all the relevant components of that concept for that word mapping: you understand the concept and its relation to other concepts that you do know, well enough to explain it to another party that may not have that word->concept mapping either. At this stage, you can understand what the other party in the conversation is trying to convey.

For all intents and purposes, a dog, in Caesar’s mapping, could be represented by anything, and it matters not, so long as whenever Caesar encounters the sequence for a dog in any language, it maps to the correct concept.

Different languages are mapped to corresponding concepts, and so long as the system is directed that the sequence “chien” in French is equivalent to “dog” in English, they both map to the same concept.

How are you tackling these problems in your own systems? Are we aligning on more ideas? :D

The use of Concept tokens is a great approach.

Harry has a language file for each language with rules for interpretation and translation of patterns learned during the language acquisition process. The knowledge base is a semantic web with terms in English used as tokens. During comprehension, input statements are converted to “standard” terms. During expression, “standard” terms are translated into the target language using the language file rules.
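Very roughly, and with made-up rules rather than Harry’s actual file format, the round trip looks something like this:

# Hypothetical rules, not Harry's actual format: per-language rules map
# surface words to "standard" terms on the way in, and back out on the way
# to the target language.
LANG_RULES = {
    "fr": {"chien": "dog", "est": "is", "un": "a"},
}

def comprehend(language, text):
    """Convert an input statement to standard terms."""
    rules = LANG_RULES.get(language, {})
    return " ".join(rules.get(w, w) for w in text.lower().split())

def express(language, standard_text):
    """Translate standard terms back into the target language."""
    reverse = {v: k for k, v in LANG_RULES.get(language, {}).items()}
    return " ".join(reverse.get(w, w) for w in standard_text.lower().split())

print(comprehend("fr", "rex est un chien"))   # rex is a dog
print(express("fr", "rex is a dog"))          # rex est un chien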

Our approaches seem to be similar.

 

 

 
  [ # 20 ]

There are no templates as such; it’s very fluid, and it doesn’t matter what domain you are currently in. The domains aren’t strictly secluded from each other anyway, so you need to be able to traverse various domains that may be related in some way to get a decent set of results. Unfortunately, I can’t give out much more than that at the present time without giving away pure gold, lol. :)

I can state that we employ Horn clauses quite heavily (as this is pretty public research info), so I would advise having a look at them if you want to improve your inference and deduction abilities.
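As a purely textbook illustration (standard logic-programming material, nothing proprietary), a Horn clause is simply “head if body”, and a goal can be proved by backward chaining through a set of them:

# Textbook sketch: propositional Horn clauses ("head :- body"), with a goal
# proved by backward chaining. No cycle handling; illustration only.
RULES = {
    "ugly(charlie)":   [{"mean(charlie)"}],
    "mean(charlie)":   [set()],                  # a fact: empty body
    "mortal(charlie)": [{"human(charlie)"}],
}

def prove(goal):
    """True if the goal follows from the Horn clauses."""
    for body in RULES.get(goal, []):
        if all(prove(subgoal) for subgoal in body):
            return True
    return False

print(prove("ugly(charlie)"))     # True: mean(charlie) is a fact
print(prove("mortal(charlie)"))   # False: human(charlie) is not derivable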

It seems your language system is indeed similar to ours in a number of ways, in the sense of language in -> parse rules -> mapping rules -> concept/token. I believe it’s the only way to do it correctly; that way you are not limited to any one language. You could express yourself in pure number format if you wanted to, or even programming language syntax (which we tried briefly), providing the correct rule set to “read” it is in place.
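The same chain in miniature (invented rules, nothing from our actual rule sets): the parse rules only decide where the tokens are, the mapping rules decide which concepts they point at, and numbers work just as well as words:

# Illustrative pipeline only: language in -> parse rules -> mapping rules
# -> concept tokens.
import re

PARSE_RULE = re.compile(r"[a-zA-Z]+|\d+")     # crude tokeniser standing in for parse rules
MAPPING = {"dog": "CONCEPT_CANINE", "dogs": "CONCEPT_CANINE", "2": "CONCEPT_TWO"}

def read(text):
    words = PARSE_RULE.findall(text.lower())                    # parse rules
    return [MAPPING.get(w, f"UNKNOWN({w})") for w in words]     # mapping rules

print(read("2 dogs"))   # ['CONCEPT_TWO', 'CONCEPT_CANINE']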

 

 
  [ # 21 ]

I have general inference rules for deduction, induction, and abduction in an inference file. These rules “fire” whenever new input statements pass the refutation and opposition challenge. In conjunction with propositions from the knowledge base, new propositions are inferred.

In a “discovery” system, rules used during reasoning may be cited as an explanation. To validate the inference rules, I ask Harry to tell me what he is thinking. During a dialog he tells me what he inferred, what method he used, and which rule fired. Here is an example.

Harry : hello Tom.
Tom : charlie is a human.
Harry : I will remember charlie is a person.
Tom : chad is a human.
Harry : I will remember chad is a person.
Tom : charlie is mean.
Harry : I will remember charlie is mean.
Harry : using affirm particular, I deduced remember some humans are mean.
Harry : I will remember some humans are mean.
Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.
Harry : using affirm universal, I induced remember humans are mean.
Harry : I will remember humans are mean.
Harry : using modus ponens, I deduced remember charlie is ugly.
Harry : I will remember charlie is ugly.
Harry : using affirm particular, I deduced remember some humans are ugly.
Harry : I will remember some humans are ugly.
Harry : using deny negative universal, I induced forget humans are not ugly.
Harry : I will forget that humans are not ugly.
Harry : using affirm universal, I induced remember humans are ugly.
Harry : I will remember humans are ugly.
Harry : using reverse modus ponens, I abduced remember charlie is possibly mean.
Harry : I will remember charlie is possibly mean.
Harry : using remove uncertainty, I deduced forget charlie is possibly mean.
Harry : I will forget that charlie is possibly mean.
Tom : chad is not mean.
Harry : I will remember chad is not mean.
Harry : using negate particular, I deduced remember some humans are not mean.
Harry : I will remember some humans are not mean.
Harry : using deny affirmative universal, I induced forget humans are mean.
Harry : I will forget that humans are mean.
Harry : using affirm negative universal, I induced remember humans are not mean.
Harry : I will remember humans are not mean.
Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.
Tom : describe charlie.
Harry : charlie is a mean ugly human.
Tom : describe chad.
Harry : chad is not mean.
Harry : chad is a person.
Tom : describe humans.
Harry : some humans are mean.
Harry : some humans are ugly.
Harry : humans are ugly.
Harry : some humans are not mean.
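For anyone curious how a couple of those firings might look mechanically, here is a rough Python sketch (hypothetical data structures, not Harry’s actual code) of “affirm particular” and “modus ponens” applied to the charlie facts above:

# Rough guess at the mechanics, not Harry's actual code.
facts = {("charlie", "is mean"), ("charlie", "is a human")}
conditionals = [("is mean", "is ugly")]        # if X is mean then X is ugly

new = set()

# affirm particular: X is P and X is a human  =>  some humans are P
for subject, predicate in facts:
    if (subject, "is a human") in facts and predicate != "is a human":
        new.add(("some humans", predicate))

# modus ponens: if P then Q, and X is P  =>  X is Q
for antecedent, consequent in conditionals:
    for subject, predicate in facts:
        if predicate == antecedent:
            new.add((subject, consequent))

for subject, predicate in sorted(new):
    print(f"I will remember {subject} {predicate}.")
# I will remember charlie is ugly.
# I will remember some humans are mean.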

 

 
  [ # 22 ]

That’s pretty good! :)

How do you generate these inference rules? Do you input them manually, or are they part of an unsupervised learning process?

 

 
  [ # 23 ]

How do you generate these inference rules? Do you input them manually, or are they part of an unsupervised learning process?

The inference rules are input manually.  I’m looking forward to the time when Harry can use inductive generalization and universal instantiation to create his own rules.

 

 
  [ # 24 ]

I observed some flaws in the explanations in the dialog above. The following changes might make the explanations better.

Observation 1
This form of explanation may work for the developer, but lacks some information that the user might find enlightening. It would be more informative to show the rule with the actual values used.

Tom : chad is not mean.
Harry : I will remember chad is not mean.
Harry : using negate particular, I deduced remember some humans are not mean.
Harry : chad is not mean. chad is a human. Therefore, some humans are not mean.

Observation 2
The second line is redundant and could be eliminated.

Harry : using deny negative universal, I induced forget humans are not mean.
Harry : I will forget that humans are not mean.

Observation 3
Induction and abduction are, by definition, non-monotonic and should use a qualifier to express this. Also, the quantifier should be explicit.

Harry : using affirm universal, I induced remember probably all humans are mean.

Observation 4
It would be more informative to show the rule with the actual values used.
This time we couldn’t see the rule in the knowledge base.

Harry : using modus ponens, I deduced remember charlie is ugly.
Harry : if “A” is mean then “A” is ugly. Charlie is mean. Therefore charlie is ugly.
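A small sketch of what I mean (a hypothetical helper, not Harry’s code): keep the fired rule together with the values that instantiated it, and render both in the explanation.

# Hypothetical helper: render the fired rule plus the values that bound it.
def explain_modus_ponens(subject, antecedent, consequent):
    rule = f'if "A" is {antecedent} then "A" is {consequent}.'
    instance = f"{subject} is {antecedent}. Therefore {subject} is {consequent}."
    return f"Harry : {rule} {instance}"

print(explain_modus_ponens("charlie", "mean", "ugly"))
# Harry : if "A" is mean then "A" is ugly. charlie is mean.
# Therefore charlie is ugly.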

 
