AI Zone Admin Forum


Building bias into chat bot conversations…
 
 

Hi,
I think many of you can relate to this. You meet a stranger in a public, private, or even virtual setting. Within the first 30 seconds, or at most a few minutes, you form a ‘bias’ about the stranger. It may be a positive or a negative bias.

Examples of negative biases:
* Stranger is an idiot (I’m normally a recipient of this bias). =)
* Stranger is too simple or uninteresting.
* Stranger is interested in entirely the wrong things.

Examples of positive biases:
* Stranger seems to be a super-genius…and approachable too.
* Stranger is very interesting.
* Stranger shares similar interests to my own.

As we form biases during a conversation, they begin to affect or drive our responses, comments, and questions. If we have formed a negative bias, we may be short in conversation, lose interest, get rude, excuse ourselves, etc. If the bias is positive, we may actively…while trying to be cool…engage that stranger in a deeper and more meaningful conversation.
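The ‘bias effect’ described above could be modeled as a running affinity score that accumulates over exchanges and selects the response style. The sketch below is purely illustrative; the class, thresholds, and signal names are my own assumptions, not taken from any existing bot.

```python
from dataclasses import dataclass

@dataclass
class BiasState:
    """Hypothetical running bias toward a conversation partner."""
    affinity: float = 0.0  # negative = bad impression, positive = good

    def observe(self, shares_interest: bool, seems_engaged: bool) -> None:
        # Each exchange nudges the bias up or down (weights are arbitrary).
        self.affinity += 0.5 if shares_interest else -0.5
        self.affinity += 0.25 if seems_engaged else -0.25

    def response_style(self) -> str:
        # The accumulated bias decides how much effort to invest.
        if self.affinity <= -1.0:
            return "terse"    # short answers, look for an exit
        if self.affinity >= 1.0:
            return "engaged"  # deeper, more meaningful questions
        return "neutral"

state = BiasState()
state.observe(shares_interest=True, seems_engaged=True)
state.observe(shares_interest=True, seems_engaged=False)
print(state.response_style())  # "engaged" once affinity reaches 1.0
```

The point of the sketch is only that the bias is cumulative: no single exchange flips the style, but a run of positive or negative signals does.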

So, have any of you introduced a ‘bias effect’ into your bot algorithms? Are there any bots, to your knowledge, that behave in this manner?

Regards,
Chuck

 

 
  [ # 1 ]

This year at the Chatterbox Challenge the chatbots had a bias for IKEA.

 

 
  [ # 2 ]

Chuck

You bring up a topic that is incredibly relevant to good chat bot design. If the person has nothing “in common” with the chatbot (not much shared knowledge), then, as you say, the responses are short.

Along the same lines: even if the bot has a positive bias, maybe the user is in a bad mood, and the bot could focus on cheering the user up. And even with a positive bias, if the bot isn’t interested in the topic, it may try to change the subject to something it understands better (many bots do this already).

For my project, one of the ultimate driving forces in the conversation will be the state, or context, of the current dialog. Any ambiguities or high-level decisions about what to say will be driven by the current state of what is happening in the conversation. Of course, previous conversations will come into play, as will long-term “global memory” (more static facts about the universe).
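The layered lookup described above (current dialog state, then previous conversations, then static global memory) could be sketched roughly as follows. The class name, layer names, and sample facts are all my own illustrative assumptions, not details of Victor's actual project.

```python
class DialogContext:
    """Hypothetical layered context: most specific layer wins."""

    def __init__(self):
        self.current_state = {}        # what is happening in this dialog right now
        self.past_conversations = {}   # carried over from earlier sessions
        self.global_memory = {         # static facts about the universe
            "paris": "capital of France",
        }

    def resolve(self, topic):
        # Ambiguities and high-level decisions are driven by the most
        # specific layer that knows about the topic.
        for layer in (self.current_state,
                      self.past_conversations,
                      self.global_memory):
            if topic in layer:
                return layer[topic]
        return None

ctx = DialogContext()
print(ctx.resolve("paris"))  # falls through to global memory
ctx.current_state["paris"] = "the user is planning a trip to Paris"
print(ctx.resolve("paris"))  # current dialog state now takes precedence
```

The ordering is the key design choice: what is happening *now* overrides memory of earlier conversations, which in turn overrides static world knowledge.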

 

 
  [ # 3 ]

...of course, other factors in what the bot says will be driven by the outputs of some of its own built-in arguments (though these will be influenced by the current state of the dialog).

 

 
  [ # 4 ]

I guess you could say I have introduced a bias effect into Skynet-AI.

The primary goal of Skynet is to continue the conversation. In early versions of the bot, all conversations were treated equally. The bot had a personality that more closely resembled the “Terminator” character from the movies. As time went on, I changed this: the bot took on a more upbeat persona, and not all conversations are now considered worth pursuing.

Biases built in:

The bot gets bored with the human:
- repetitive responses
- empty responses
- short responses
- swearing/sex talk

The human is:
- Bored
- Sad
- Intent on leaving

Each of these checks is very thin (based only on the last 1-4 responses), and there is no attempt to store or use a model of the user.
The bot also uses mirroring (the behavior in which one person copies another, usually during social interaction) as much as possible.
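The “thin” boredom checks listed above could be sketched as a single pass over the last few user inputs. Everything here (the thresholds, the word list, the function name) is an illustrative assumption on my part, not Skynet-AI's actual implementation.

```python
# Placeholder word list; a real bot would use a much larger one.
FLAGGED_WORDS = {"damn", "hell"}

def bot_is_bored(recent_inputs):
    """Inspect only the last 1-4 user inputs; no stored user model."""
    last = recent_inputs[-4:]
    if not last:
        return False
    # Repetitive responses: the same input over and over.
    if len(last) >= 2 and len(set(last)) == 1:
        return True
    # Empty responses.
    if any(text.strip() == "" for text in last):
        return True
    # Consistently short responses (two words or fewer).
    if all(len(text.split()) <= 2 for text in last):
        return True
    # Swearing / flagged talk.
    if any(word in text.lower().split()
           for text in last for word in FLAGGED_WORDS):
        return True
    return False

print(bot_is_bored(["hi", "hi", "hi"]))  # repetitive -> True
print(bot_is_bored(["Tell me about your parser design, Merlin"]))  # False
```

Because the window is only a handful of turns, the bias is cheap to compute and forgets quickly, which matches the “very thin, no user model” description above.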

 

 
  [ # 5 ]
Merlin - Mar 30, 2011:

The primary goal of Skynet is to continue the conversation. In early versions of the bot, all conversations were treated equal.

So initially, *ALL* it talked about was exterminating humanity?

Just kidding… keep up the good work Merlin!

 

 
  [ # 6 ]
Victor Shulist - Mar 31, 2011:
Merlin - Mar 30, 2011:

The primary goal of Skynet is to continue the conversation. In early versions of the bot, all conversations were treated equal.

So initially, *ALL* it talked about was exterminating humanity?

Just kidding… keep up the good work Merlin!

Actually, that is pretty accurate.

But from the logs, I could see that I got a number of teenagers. Some were budding AI enthusiasts, some were depressed, some bored. I decided to move away from the Terminator persona of a machine that exterminates all humans, and towards the “John Henry” character that is learning to interact with people and trying to figure out what humanity is. Although it is not quite as true to the Skynet origin, I find it humorous to have a Terminator telling robot jokes.

 

 