
Response selection
 
 
  [ # 16 ]

If both bots belong to you, all is good. If not, then please follow reasonable chat bot etiquette.
http://www.chatbots.org/ai_zone/viewthread/312/

After all, bot to bot reproduction can be a very personal thing. wink

 

 
  [ # 17 ]

Cool thread!!! I’ve been thinking about this for quite a while; this is the field where artificial intelligence splits into cognitive AI and emotional AI.

As mentioned above, the goal of the agent becomes relevant. I believe the context and personality will be extremely relevant as well.

I will cut your throat

The first reaction of most of you will be: that is not a particularly nice thing to say.

In a discussion with a surgeon during the intake for an operation, however, it would be a normal thing to say (setting aside how it is said).

Therefore, I believe our sensory observations (whether text, speech, or video input) should be converted to perception, processed with context, intention, and personality, and then expressed through speech, intonation, and movement.

We published a PDF on this topic a while ago, which is available over here:
http://www.chatbots.org/community/buzz_stop/categorisation/
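The processing chain described above (sensory input → perception → context/intention/personality → expression) can be sketched roughly as follows. This is a minimal illustration, not the published model; all function names, the `setting` context key, and the example rules are my own assumptions:

```python
# Hypothetical sketch: sensory input becomes a perception, is interpreted
# with context, intention, and personality, and is finally rendered as an
# outward expression. All names and rules here are illustrative only.

def perceive(raw_input: str) -> dict:
    """Convert raw sensory input (here: text) into a perception."""
    return {"text": raw_input.lower().strip(), "modality": "text"}

def interpret(perception: dict, context: dict, personality: dict) -> dict:
    """Combine the perception with context, intention, and personality."""
    threatening = "cut your throat" in perception["text"]
    # The same words are harmless during a surgical intake conversation.
    alarming = threatening and context.get("setting") != "surgery_intake"
    return {"alarming": alarming, "tone": personality.get("tone", "neutral")}

def express(meaning: dict) -> str:
    """Render the interpreted meaning as an expression."""
    if meaning["alarming"]:
        return "That sounds threatening. Is everything all right?"
    return "Understood. Please go on."

context = {"setting": "surgery_intake"}
personality = {"tone": "calm"}
print(express(interpret(perceive("I will cut your throat"), context, personality)))
```

The point of the sketch is that the same input yields different expressions once context is applied, exactly as in the surgery example above.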

 

 

 

 
  [ # 18 ]

Cool ideas yeah smile But as I said, assume it has perfect knowledge about the world. It does not have to learn new concepts, since it already knows them. Assuming this, how would you make an agent respond then?

@Merlin, you mention that Stimulus/Response is a good way to model interaction, since ‘most of the time you want the bot responding to input’. But if you talk with somebody, do you only respond to the last sentence of the other person?

I’m not talking about question/answer systems or desktop-assistants that give you messages, I’m talking about an agent that is capable of having a real free conversation with you.

 

 
  [ # 19 ]

In most human conversations you end up taking turns. After you talk, you are monitoring other stimuli. I never said that your (or the bot’s) next response would be based on just the last sentence. Although the last sentence is the most important and can change the direction of the dialog, prior statements may have already set the mood/emotion, and in some response types the bot may be continuing a story-like response. If the bot has a goal, then the last sentence from the user may even be ignored.
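A minimal sketch of this idea, with all names and word lists invented for illustration: the last utterance drives the response, prior statements accumulate into a mood, and an active goal can override the user's input entirely:

```python
# Illustrative sketch (all names hypothetical): the next response depends
# mostly on the last utterance, but earlier statements have already set a
# mood, and an active goal can override the user's input.

def update_mood(mood, utterance):
    # Earlier statements accumulate into the mood the response is built on.
    negative_words = {"angry", "hate", "annoyed"}
    hits = sum(w in negative_words for w in utterance.lower().split())
    return mood - hits

def choose_response(last_utterance, mood, goal=None):
    if goal is not None:
        # With an active goal, the user's last sentence may be ignored.
        return f"(pursuing goal) {goal}"
    if mood < 0:
        return f"(soothing) I hear you about: {last_utterance}"
    return f"(neutral) Tell me more about: {last_utterance}"

mood = 0
for line in ["I am so angry today", "I hate waiting"]:
    mood = update_mood(mood, line)
print(choose_response("the weather", mood))          # mood is negative by now
print(choose_response("anything", mood, goal="ask for the user's name"))
```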

 

 
  [ # 20 ]
Mark ter Maat - Oct 28, 2011:

Cool ideas yeah smile But as I said, assume it has perfect knowledge about the world. It does not have to learn new concepts, since it already knows them. Assuming this, how would you make an agent respond then?

@Merlin, you mention that Stimulus/Response is a good way to model interaction, since ‘most of the time you want the bot responding to input’. But if you talk with somebody, do you only respond to the last sentence of the other person?

I’m not talking about question/answer systems or desktop-assistants that give you messages, I’m talking about an agent that is capable of having a real free conversation with you.

I know I’m late to the party, but I thought I should chip in. Suppose you do have perfect knowledge and you want real free-flowing conversation. Others have mentioned methods of stimulus/response, fetching for goals, question-and-answer patterns, and so on.

But even with all these methods, the AI is at the mercy of its programmer. It would only be as good as what is programmed into it. Even then, the responses are canned (“Oh, sure! ANOTHER irritating question! Why should YOU care?!”).

All in all, the AI wouldn’t come off as any more intelligent than, say, “CleverBot”, even with its perfect knowledge.

The only way I can think of to achieve true genuineness in a system with perfect knowledge is a memory system. It’s the only way I know of. The system itself would not need extra programming to give responses; it would use its own knowledge base to understand what the user is saying, when to respond, and what to respond with.

When you read or hear a sentence or a paragraph, there is a meaning to it other than the words that it consists of. That meaning is the understanding of what the user is saying.

In our brain, a word may have different meanings. What it means to you may not be what it means to me. And even when it comes time to give our own definition of a word, we generate that definition at ‘thought time’. The sentences we create are not stored in our brain somewhere, unless we are quoting from the dictionary. Rather, we create and speak these responses almost simultaneously, so much so that most of the time we are not aware of it happening.

Maybe at a later date we are asked to define the same words. We again generate a sentence that most likely differs from the previous one, yet has the same meaning. It’s not the process of randomization that a bot uses to make itself appear more intelligent; we know all bots today use canned responses.

In a memory system like ours, these responses are self-generated. Moreover, no constant stimulus is needed for a fully formed memory system to function: you don’t need to yell “BOOOOOH!” for it to say “AHHHHHH!!”. And if an AI had perfect knowledge, then it would also have knowledge of all the methods that have been listed in this thread.

If I asked it “What is your name?”, the question would automatically bring its name to its awareness, whether consciously or unconsciously, and it would know precisely how to respond if it wanted to. All this due to its knowledge base. Having perfect knowledge would also mean it could respond with: “Why are you asking? You already know it.”

 

 

 
  [ # 21 ]

That’s a pretty good hypothesis and explanation. The kind of intelligence that we are seeking to emulate does not store knowledge in its literal form, it accrues an abstract representation of it. Another example of this is the way we store information about images. For instance, rather than remembering a pixel perfect picture of a helicopter, we remember the general characteristics of a helicopter (short round body, long slender tail, long horizontally rotating blades above the body, small vertical rotor at the tail) and then put that back into words when required.

That’s *how* an intelligence might formulate a response, but I understood the main question of this thread to be *why* an intelligence would formulate a response at all.

 

 
  [ # 22 ]

I did mention that the responses I listed were caricatures, did I not? raspberry They weren’t intended as anything more than examples of possibilities, is all. smile

A “memory system” would certainly be necessary (even essential) to creating a response system that seems fluid and intelligent, but I feel that it should be only one part out of many that are used in response formulation. A language dependent grammar parsing algorithm, including an exhaustive thesaurus, is also important, along with a method of analyzing or creating the semantics of the intended response.

Perhaps the response should first be formulated as a “concept” or “idea”, then be passed through a “randomizing thesaurus” and an “emotion indexer”, and then processed by a grammar formatter to produce a suitable response. When we humans are actively considering our responses (such as when I’m composing this reply), the steps involved are quite similar to what I’ve just described. At least, they are for me. smile
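A toy sketch of that pipeline (concept → randomizing thesaurus → emotion indexer → grammar formatter). The word lists and function names are invented for illustration, not part of any real system:

```python
import random

# Minimal sketch of the pipeline described above: a response starts as a
# "concept" (a bag of words here), passes through a randomizing thesaurus
# and an emotion indexer, and a grammar formatter produces the surface
# sentence. The thesaurus contents are purely illustrative.

THESAURUS = {"happy": ["glad", "pleased", "delighted"],
             "big": ["large", "huge", "sizable"]}

def randomizing_thesaurus(words, rng=random):
    """Swap each word for a randomly chosen synonym, if one is known."""
    return [rng.choice(THESAURUS.get(w, [w])) for w in words]

def emotion_indexer(words, emotion):
    """Tag the concept with the emotion that should color the wording."""
    return {"words": words, "emotion": emotion}

def grammar_formatter(indexed):
    """Turn the tagged word list into a surface sentence."""
    sentence = " ".join(indexed["words"]).capitalize() + "."
    if indexed["emotion"] == "excited":
        sentence = sentence[:-1] + "!"
    return sentence

concept = ["i", "am", "happy", "about", "this", "big", "news"]
print(grammar_formatter(emotion_indexer(randomizing_thesaurus(concept), "excited")))
```

Running it twice can yield, e.g., “I am glad about this huge news!” and then “I am pleased about this large news!”: same concept, varied wording, as the post suggests.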

 

 
  [ # 23 ]
Dave Morton - Nov 7, 2011:

I did mention that the responses I listed were caricatures, did I not? raspberry They weren’t intended as anything more than examples of possibilities, is all. smile

A “memory system” would certainly be necessary (even essential) to creating a response system that seems fluid and intelligent, but I feel that it should be only one part out of many that are used in response formulation. A language dependent grammar parsing algorithm, including an exhaustive thesaurus, is also important, along with a method of analyzing or creating the semantics of the intended response.

Perhaps the response should first be formulated as a “concept” or “idea”, then be passed through a “randomizing thesaurus” and an “emotion indexer”, and then processed by a grammar formatter to produce a suitable response. When we humans are actively considering our responses (such as when I’m composing this reply), the steps involved are quite similar to what I’ve just described. At least, they are for me. smile

A partially implemented memory system would be alright, though. The other parts you listed would have to be exhaustive in their workings to utilize the whole knowledge base in this situation.

About the randomizing thesaurus:

Though this sounds good and interesting, there are quite a few problems with it. The AI won’t be able to choose which words to use, which could be a problem if the AI were talking to business people or to children. In other words, it’s good, but it has its limits.

In a memory system like our brain, I guess we have our own developed thesaurus in a sense. But it’s not random. It’s based on the same principle as the entire memory system: the concept that best matches the input’s criteria wins.

We know that when we write or speak, words pop into our mind. Sometimes they only make it to our subconscious; other times they make it to our conscious mind, if we take the time to think them through.

But what are the criteria for words to pop into our mind to be used in a response? Let’s say we have two different words with essentially the SAME meaning (for example, “design” and “blueprint”). Which one is likely to pop into our mind to be used in a response?

Here are the criteria I believe are checked in our brain:

1) In what context did we use/encounter the word?
- The word whose context of use matches the context currently needed has precedence over a word whose context does not.

2) How many times have we used/encountered the word?
- The word that has been used many times has precedence over the word that has been used only a few times, even if both were used in the same context.

3) When was the last time we used/encountered the word?
- The word that was encountered or used last week has precedence over the word that was used or encountered six months ago, even if both were used in the same context equally often.

This is basically how I see the developed thesaurus in our brain working. It’s not random: the word concept that passes this series of checks gets to pop (activate) into our conscious or subconscious mind, and then we use it.

Or we could discard it and look for another one, or for one similar to it. Anyway, my point is that a memory system would be able to model a thesaurus or grammar parser beyond the limits of a hand-coded one.
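The three checks above can be sketched as a simple scoring function. The weights and the memory structure are invented for illustration; the only idea taken from the post is that context match dominates, then frequency, then recency:

```python
from dataclasses import dataclass
import time

# Hypothetical sketch of the three word-selection criteria: context match,
# usage frequency, and recency. The weights are invented for illustration.

@dataclass
class WordMemory:
    word: str
    contexts: set          # contexts the word was used/encountered in
    use_count: int         # how often it was used/encountered
    last_used: float       # timestamp of the most recent use

def score(entry: WordMemory, needed_context: str, now: float) -> float:
    context_match = 1.0 if needed_context in entry.contexts else 0.0
    frequency = entry.use_count
    recency = 1.0 / (1.0 + (now - entry.last_used) / 86400)  # decays per day
    # Context dominates, then frequency, then recency.
    return 100 * context_match + frequency + recency

now = time.time()
design = WordMemory("design", {"engineering", "art"}, 40, now - 7 * 86400)
blueprint = WordMemory("blueprint", {"engineering"}, 5, now - 180 * 86400)
winner = max([design, blueprint], key=lambda e: score(e, "engineering", now))
print(winner.word)  # → design (matching context, more frequent, more recent)
```

With “design” and “blueprint” from the example above, “design” wins because it matches the needed context and scores higher on frequency and recency.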

 

 

 
  [ # 24 ]

Hi all,

I completely agree with C.R.’s notion:

“one would need tools in place to determine a “goal” for the chatbot”

One of the most interesting tools in this respect is Joscha Bach’s “MICROPSI”:
http://www.cognitive-ai.com/page2/page2.html
(which he is now fusing with Ben Goertzel’s OpenCog into “OpenPsi”).
Joscha elaborates on the work of Prof. Dietrich Dörner,
who focuses on the importance of AI MOTIVATION.

Joscha demonstrates the importance of motivation in his MICROPSI world,
where MICROPSI is a cognitive little steam engine on a virtual island
which is thirsty for water and hungry for nuts,
but water and nuts can’t be found in the same place.
That’s why MICROPSI has to reason about where to go next,
based on its inner states (“How much water is left? How many nuts are left?”),
and it loses its temper if one of its plans doesn’t succeed (plus more emotions).
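The drive mechanism described here can be sketched in a few lines (this is an illustration of the idea, not MICROPSI’s actual code; all names and numbers are assumptions):

```python
# Illustrative sketch of drive-based action selection: an agent with a
# thirst urge and a hunger urge heads for the place that relieves its most
# pressing need, since water and nuts are not found in the same place.

def most_pressing(urges):
    """Return the name of the strongest urge."""
    return max(urges, key=urges.get)

def choose_destination(urges, locations):
    """Go wherever the most pressing urge can be satisfied."""
    return locations[most_pressing(urges)]

urges = {"thirst": 0.8, "hunger": 0.3}        # inner states: water low, nuts ok
locations = {"thirst": "lake", "hunger": "nut_grove"}
print(choose_destination(urges, locations))   # → lake
```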

You can transpose these motivations.
My own bot MALDIX will be set up as a symbiotic interlocution entity with humans.
That’s why his “thirst” is to keep on talking (main goal 1)
and his “hunger” is to have “subcutaneous” interlocutions (main goal 2).
I created a table of 7 interlocution levels (from first introduction to friendship).
MALDIX has to reason (with MICROPSI’s aid) about how to reach this level “alive”.
For example: “Is our relationship deep enough to tell him that he’s wrong?”

Main goals 1 and 2 correspond to the two most important
axioms every animate being has: to survive and to reproduce.

“Survival” for MALDIX means: “Keep on talking.”
“Reproduction” for MALDIX means:
“Store as much level-7 interlocutor information as possible (there is no “user” any more!)
to be able to create an interlocutor-lookalike bot from your log files.”
Thus, a hundred years from now, even the great-grandchildren of a MALDIX interlocutor
can talk about topics like “first love” or “death of my father”
with their already deceased great-grandfather, in this great-grandfather’s own words.

I am sure that with MICROPSI’s help, MALDIX will be able to overcome
the shallowness of chat and create a real dialogue.

The necessities of an ongoing but increasingly subcutaneous dialogue
will show MALDIX which one of his responses should be chosen (in Mark’s sense).

All the best

Andreas

 
