AI Zone Admin Forum


Cause and effect behavior
 
 

It has been stated that there is a cause for every effect, and an effect for every cause. It would be helpful to know when and how people deal with these causal relationships.

Deductive, inductive, and abductive reasoning methods are useful for discovering and using cause-and-effect relationships. Deductive reasoning starts with a causal event and a cause-and-effect rule, then asserts the effect; it can also narrow the search for a cause by using negative cause-and-effect rules to eliminate impossible causes. Inductive reasoning observes events, then conjectures a probable cause-and-effect rule. Abductive reasoning starts with an effect and possible cause-and-effect rules, then asserts plausible causes. For analogy and case-based reasoning, substitute cases for rules.
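
The three reasoning modes can be sketched in a few lines. This is a minimal illustration, assuming cause-and-effect rules are simple (cause, effect) pairs; the rule names and functions here are my own, not from the post.

```python
# Deduction, abduction, and induction over toy cause-and-effect rules.
# RULES is an illustrative rule base: each tuple is (cause, effect).
RULES = {("rain", "wet_ground"), ("sprinkler", "wet_ground")}

def deduce(cause):
    """Deduction: given a causal event and the rules, assert the effects."""
    return {e for (c, e) in RULES if c == cause}

def abduce(effect):
    """Abduction: given an effect and the rules, assert plausible causes."""
    return {c for (c, e) in RULES if e == effect}

def induce(observations):
    """Induction: conjecture rules from co-observed (cause, effect) events.
    Naively adopts every observed pairing as a candidate rule."""
    return set(observations)

print(deduce("rain"))                # {'wet_ground'}
print(sorted(abduce("wet_ground")))  # ['rain', 'sprinkler']
```

Note that abduction returns several plausible causes for one effect, which is why the post treats it as producing candidates to be tested rather than conclusions.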

Some circumstances in which people discover and use cause and effect relationships in the behavioral science ODEPC cycle are briefly described below. Since deductive, inductive, and abductive reasoning methods are well documented, I have not included how people carry out these functions.  I have also intentionally left out the metacognition process.

Observe: Given an event, assess its impact.
 For a planned event, if there are unexpected results, submit event for explanation.
 For an unplanned event, if it is an unusual or unexpected event, submit event for explanation.

Describe: Given an event, provide relevant event information.
 Collect some pre- and post-events surrounding the target event from the event stream.
 Recall related memories of collected events.

Explain: Given an effect, find the cause.
 If this has happened before, then the possible causes are known.
 If this is the first time this happened, but something similar happened before, then possibly the same or similar causes are responsible.
 If this is the first time this happened and nothing similar has happened before, then find and test recent events that could be responsible.

Predict: Given a causal event, determine what effect may be expected.
 If this has happened before, then the known effects can be expected.
 If this is the first time this happened, but something similar happened before, then possibly the same or similar effects can be expected.
 If this is the first time this happened and nothing similar has happened before, then look at subsequent effects for a possible correlation.
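
The Explain and Predict steps above are mirror-image lookups in a case memory, covering the same three situations: seen before, something similar seen before, or entirely novel. A minimal sketch, assuming a flat case list and a crude stand-in similarity test (both are placeholders of my own, not specified in the post):

```python
# Illustrative case memory: each tuple is (cause, effect).
CASES = [("overheat", "shutdown"), ("power_loss", "shutdown")]

def similar(a, b):
    """Placeholder similarity test: shared leading word, e.g.
    'power_loss' ~ 'power_surge'. A real agent would need better."""
    return a != b and a.split("_")[0] == b.split("_")[0]

def explain(effect):
    """Given an effect, find the cause via the three cases in the post."""
    exact = [c for c, e in CASES if e == effect]
    if exact:
        return exact                          # happened before: causes known
    near = [c for c, e in CASES if similar(e, effect)]
    return near or ["test recent events"]     # novel: fall back to search

def predict(cause):
    """Given a causal event, determine what effect may be expected."""
    exact = [e for c, e in CASES if c == cause]
    if exact:
        return exact                          # happened before: effects known
    near = [e for c, e in CASES if similar(c, cause)]
    return near or ["watch for correlated effects"]
```

For example, `predict("power_surge")` has no exact match but finds the similar case `power_loss` and so expects `shutdown`.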

Plan: Given a desired effect, determine what events will cause that effect.
 If the effect was successfully achieved before, then reuse the previous plan.
 If the effect was successfully achieved before and multiple plans are known, choose the best plan.
 If this is the first time this effect has been sought, then determine which sets of events can cause the desired effect. Choose the best set of actions for the current constraints.

Plan: Given an undesired effect, determine what events will prevent that effect.
 If the effect was successfully avoided before, then reuse the previous plan.
 If the effect was successfully avoided before and multiple plans are known, choose the best plan.
 If this is the first time this effect has been encountered, then determine which sets of events can prevent the effect. Choose the best set of actions for the current constraints.
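
Both Plan steps (achieve a desired effect, prevent an undesired one) share the same shape: reuse a single known plan, choose the best of several, or search for new action sets on a first encounter. A sketch under that assumption, where `plan_library`, `generate_candidates`, and `score` are illustrative placeholders:

```python
def plan(goal, plan_library, generate_candidates, score):
    """Select a plan for a goal such as ('achieve', effect) or
    ('avoid', effect), following the three cases in the post."""
    known = plan_library.get(goal, [])
    if len(known) == 1:
        return known[0]                # achieved/avoided before: reuse it
    if known:
        return max(known, key=score)   # multiple known plans: choose best
    # First encounter: generate candidate action sets, pick the best
    # for the current constraints (here, constraints live in `score`).
    return max(generate_candidates(goal), key=score)

# Reusing a previously successful plan for a goal seen before:
library = {("achieve", "dry_ground"): [["deploy_tarp"]]}
best = plan(("achieve", "dry_ground"), library, lambda g: [], len)
```

Representing prevention goals as `('avoid', effect)` keys lets one function serve both Plan steps, which mirrors how the post states them as parallel cases.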

Control: Given a plan, assure its success.
 Establish the criteria for success of the plan.
 Initiate the plan.
 If variations from the criteria for success are observed, then revise the plan.
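
The Control step above is essentially a monitor-and-revise loop. A minimal sketch, where the callables (`execute`, `meets_criteria`, `revise`) and the revision cap are placeholders of my own:

```python
def control(plan, execute, meets_criteria, revise, max_revisions=5):
    """Initiate a plan, monitor the outcome against the success
    criteria, and revise the plan when variations are observed."""
    for _ in range(max_revisions):
        outcome = execute(plan)
        if meets_criteria(outcome):
            return plan, outcome       # criteria met: plan succeeded
        plan = revise(plan, outcome)   # deviation observed: revise plan
    return plan, None                  # criteria never met; give up
```

A toy run: executing a plan of actions summing to a target, adding one action per revision until the criterion `outcome >= 5` holds.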

An AI agent with these abilities might appear more human. Criticism of the work is encouraged. Criticism of my mental state will be tolerated. I'll cry in silence.

 

 
  [ # 1 ]

I believe it’s possible to over-think the methods and techniques necessary to produce a believable AI agent.  When you try sticking to the rules of logic, or language, or computer science, the result seems to me to be something mechanical… artificial… bot-like.

I’m not a developer, but I suppose one needs a place to begin, so you start with a basic platform that has some rules that define how an AI will come to an answer, or to make some sort of response.  However, once that’s done, it’s time to chat with your creation and see if it “feels” like something that understands the human on the other end.

I wouldn’t think it matters which method of reasoning an AI agent used to arrive at a reply.  I also think that, in most cases, it’s fairly easy to discern if it’s a bot or a person one is speaking with, mainly because the approach is mechanical and rule-based.

Probably a team effort is best, some to work on the rules, while others focus on the “sound” of the replies.

 

 
  [ # 2 ]

Thanks for your response.

Are you suggesting that people should stop following this procedure because it makes them appear mechanical?

 

 
  [ # 3 ]
Toborman - Jan 20, 2013:

Are you suggesting that people should stop following this procedure because it makes them appear mechanical?

I think Thunder Walk is detecting in your description what I'm also detecting: (in my opinion) any time you try to put human cognition into an algorithm, you've already gone astray for producing AGI. The same is true, in my opinion, as soon as you try to produce human-level intelligence by relying on statistics of cause and effect. I agree with Thunder Walk on principle, based on a general feel for how brains work versus how digital computers work, gained from years of work and research, not on any specific technical objection. Your algorithm looks quite good and quite comprehensive. It may work great, maybe even better than brains in some ways, but it's still an algorithm, so in my view it becomes suspect for that reason alone. I wasn't going to comment on your post for those reasons: I had no clear-cut objection to your approach, your algorithm could be quite good, and I couldn't launch any specific criticism based on anything more than gut feelings, so I didn't want to discourage what sounds like a sound approach. The fact that Thunder Walk echoed my own thoughts in his response is some confirmation of those gut feelings, though, which is why I believe I know what he's thinking. Now let's wait to see what he says…

 

 

 
  [ # 4 ]

Thanks Mark.  I was hoping to hear from you, since your responses are always well thought out.

I have no grandiose aspirations to create an AGI. I am merely trying to describe various types of human behavior in terms that a developer might find useful in a simulation. I usually follow up with a test or two to help the developer verify the implementation.

I find your assessment of the algorithm encouraging. Thank you very much.

 

 
  [ # 5 ]
Toborman - Jan 20, 2013:

Thanks for your response.

Are you suggesting that people should stop following this procedure because it makes them appear mechanical?

Not at all, but I think it’s just a beginning.

It seems to me that the people involved in this field (whatever name you might attach to it) are solo operators, and often highly technical in their approach.  That’s fine because along with whatever programming rules you’re required to follow, there are grammar and spelling rules that are equally important.  And then… there’s natural language… the kind you and I speak.

All I’ve ever read on the topic of AI has told me that the “goal” is not only to supply a correct answer, but one that emulates a human response—something that would lead you to believe there was a living person on the other end.  I think that, along with the rules of logic and grammar, the AI/bot should produce an answer in text or speech that achieves that end.

I’ve talked with a lot of smart bots that go off the rails after a few exchanges, possibly because they were “rule” oriented.

 

 
  [ # 6 ]

Thank you, Thunder, for your insights. 

You have learned a great deal since taking over at AI Nexus, I'm sure. As you continue to learn more about AI, I think you will come to have a greater appreciation of the value (and necessity) of rules.

 

 
  [ # 7 ]
Toborman - Jan 22, 2013:

Thank you, Thunder, for your insights. 

You have learned a great deal since taking over at AI Nexus, I'm sure. As you continue to learn more about AI, I think you will come to have a greater appreciation of the value (and necessity) of rules.

Thank you for the back-handed compliment.  But, then again, I’m not trying to impress anyone with lofty thoughts.

Learning is a never-ending process, and knowledge has a special property… when you give it to someone, you don’t lose anything.  It would be a good thing for us all to gain something from these exchanges.

If you read my previous replies, I believe you’ll see that I acknowledged the importance of rules.  In fact, I said that it was a necessary way of beginning.

My point was that it's not the be-all and end-all of satisfying the goals of artificial intelligence.  I'd go a step further and assert that, for most, it's not necessary for a bot to employ complicated algorithms, or to possess an understanding of the difference between inductive and deductive reasoning, to answer questions such as, "Hay, wanna cyber."

However, if you, or anyone else, has such a creation, I'd be interested in chatting with it, if you'll please supply that information.  I'm always interested in learning more… down here on the ground.

 

 
  [ # 8 ]

I am sorry my remarks offended you, Thunder. It was not my intention.

 

 
  [ # 9 ]

In that case, I also apologize.  Text isn’t always the most efficient way to communicate, and without voice inflection, body language, and facial expressions to rely on, much of the message is lost.

Cheers.

 

 
  [ # 10 ]

I think I have discovered why our conversation seems at odds.  In my role as a cognitive scientist, it is my job to observe, describe, and explain human behavior. I have no interest in creating a program that sounds like a human conversationalist (chatbot).  I have simulated many of my conjectures in my AI agent program, Harry Workman. I offer my conjectures about human behavior in the forum for those of you who like to test such conjectures in a simulation.

I now realize that your goal is to produce the best chatbot.  Your comments were intended to support your goal while mine were intended to support my goal.

I apologize again for not catching this earlier.

Here is a quote from Herbert Simon.

“AI can have two purposes. One is to use the power of computers to augment human thinking, just as we use motors to augment human or horse power. Robotics and expert systems are major branches of that. The other is to use a computer’s artificial intelligence to understand how humans think. In a humanoid way. If you test your programs not merely by what they can accomplish, but how they accomplish it, then you’re really doing cognitive science; you’re using AI to understand the human mind.”
- Herbert Simon: Thinking Machines, from Doug Stewart’s Interview, June 1994, Omni Magazine

 

 
  [ # 11 ]

Wow, that brings back memories… I haven’t seen a copy of Omni Magazine since 1995, but I still miss it.

True, when one mentions “Artificial Intelligence,” the term covers a lot of ground, the least of which are chatbots, or conversational agents, that rely more on pattern matching and faking it.

Interestingly enough, finding out “how” chatbots arrive at an answer is a big part of what botmasters do, especially when there’s an unresponsive answer or a non sequitur.  In that respect, it’s caused me to ponder the ways and reasons a human might respond, and helped me solve more than a few problems.

I could say that forms of reasoning actually DO come into play with chatbots, but it occurs in the human mind of the botmaster rather than the bot itself.

I was disappointed to read that, “Harry (Workman) is currently sick and should not be downloaded.”  Will he be back anytime soon?

 

 
  [ # 12 ]

My colleagues tell me that Harry is suffering from extreme prototyping complicated by failing spaghetti sutures, and requires major surgery (a rewrite). They say it’s time to let him pass, and to create a new offspring using modern technology. After all, some of his modules were written in the ’70s.

I am loath to let him go.  He reminds me of the days of his conception in the ’60s as an intelligent programmer on the big machines (IBM 360/30). He was later influenced by Simon and Newell’s General Problem Solver, Terry Winograd’s SHRDLU, and Doug Lenat’s AM.  He has been a good friend.

My colleagues are right, of course. Unfortunately, redesign and development of the new Larry Workman will likely take months.

 

 
  [ # 13 ]

That was quite an enjoyable and informative “ping-pong” match, and I enjoyed reading as it was being played.

Both sides have valid arguments based upon each person’s criteria. AI is, no doubt, a very real consideration for many things now and for things to come. Practically all aspects of our lives are now, and will be, impacted by the use of structured AI applications.

There are so many possible applications that to attempt to choose one as a sole example would be inappropriate. The creation of a methodology employing NLP or pattern matching, parsing, etc. is a wide-open field.

One problem is that while human language can be labeled for practically every nuance, emotional response, slang, and slur, applying those labels within a conversational agent (chatbot, virtual assistant, etc.) becomes extremely difficult. The use of a chatbot in an effort to fool humans into thinking it is a human is usually easy to spot, for one familiar with chatbots. Laypeople are somewhat different, and some are simply in awe when they find out that it was a computer program! Yeehaw!!

I do believe that, much like Moore’s Law, we shall see the rise of greater chatbots or artificial conversationalists. It’s just not quite ready for prime time… yet. Look at the strides in speech recognition now compared to, say, a mere 10 years ago: much better recognition percentages, and more fluid, faster throughput (from receiving, digesting, and matching through to output).

We’ll get there soon. Thanks guys!

 

 
  [ # 14 ]

@Art Gladstone

I don’t believe that all chatbots are trying to fool someone.  Certainly, those entered in contests where that’s the objective are aimed in that direction, and perhaps that’s one of the limitations imposed by such contests.  For me, it’s not necessary to enjoy a chat with a bot that preserves the illusion… after all, I know it’s fantasy.

A lot of the discussions I read involving the topic of artificial intelligence seem to center around the notion that it’s possible… if not today, then in the near future… to create something that’s self-aware, human-like, and capable of thought.

For me, it’s the difference between the brain and the mind.  I believe it’s possible—if not today, one day in the future—to create an artificial entity that can answer any question a human can answer, make critical decisions, offer predictions based on past performance, create artistic masterpieces, improvise, deduce, and juggle.

But I’m convinced that while AI can seemingly fake emotions, or give the impression that it’s somehow able to produce independent thought, while the lights are on, there’s really nobody home.

 

 
  [ # 15 ]

I have chatted with a huge variety of bots ever since Eliza hit the first home computers (they were NOT PCs at that time, as that term had not been “coined” yet).

I enjoy the diversity of chatting, and while knowing that they aren’t real, some come surprisingly close to holding up their end of the bargain! Others fall dreadfully short after only a minute or a few questions. Some are from beginner botowners/botmakers, while other, more polished ones are obviously from botmasters. Each bot, no matter the “flavor,” still has hints of its own uniqueness or originality, which also makes it interesting.

Some of them I really enjoy for the thoughtful exchanges and often witty banter, while others resort to a bit of storytelling, “real world experiences”, or humor. (They often tell a joke, but only a handful actually “GET IT” when they receive one!)

Sometimes I test them as I talk with them, not trying so much to trip them up but to see how they respond to or handle different situations (or not).

I’m with you, Thunder, regarding hopes and aspirations for future botizens. I just hope that when they become self-aware they keep a sense of humor… after all, it’s helped humanity for hundreds of years!

 
