Chatterbot with empathy

Hi, I am new here and I am currently researching what types of systems exist that can communicate on an emotional level. I came across Microsoft’s Xiaoice, which is the kind of system I am looking for.

Our team’s goal is to develop a bot that can deliver computerized cognitive behavioral therapy (cCBT).

Does anyone have any experience chatting with Microsoft’s Xiaoice? Is it as good as the media makes it sound?
Are there other systems that have similar capabilities?

Thank you in advance.

  [ # 1 ]

From what I’ve read, I would say a few of the claims are creative freedom, but at the same time I don’t know of any other chatbot that incorporates sentiment analysis to any particularly good extent, so it’s a step up in that department.
This is one of the clearest articles about it.

First off, we can be certain that they did not solve language understanding or language generation, or we’d have heard of it and this wouldn’t be just a chatbot product. Everything they describe about using “big data”, Bing search algorithms, social media and public forums as data, machine learning, and ranking answer candidates also describes the kind of statistical word matching found in Cleverbot. While this is typically a highly inaccurate approach, they have added something to the mix:

Using sentiment analysis, she can adapt her phrasing and responses based on positive or negative cues from her human counterparts

Advanced sentiment analysis categorises words into the multiple emotions they are typically associated with, whereas basic sentiment analysis only categorises words as positive or negative. I assume they use the more advanced form because psychologists were involved in the development. By counting the number of “sad” words, “angry” words, and “happy” words in the inputs, these counts could be used to select answers that were originally given to inputs of similar emotion, as well as of similar subject. As far as I know this is a novel combination, but one of decades-old techniques.
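
To illustrate the counting idea, here is a minimal sketch, assuming a tiny hand-made emotion lexicon; the word lists below are invented for illustration, and real lexicons are far larger:

```python
# Minimal sketch of emotion-category word counting. The lexicon is a toy
# stand-in; real emotion lexicons contain thousands of entries.

EMOTION_LEXICON = {
    "sad":   {"miss", "missing", "cry", "lonely", "sigh"},
    "angry": {"mad", "hate", "insult", "stupid"},
    "happy": {"great", "love", "fun", "glad"},
}

def dominant_emotion(text: str) -> str:
    """Count emotion words per category and return the dominant category."""
    words = set(text.lower().split())
    counts = {emotion: len(words & lexicon)
              for emotion, lexicon in EMOTION_LEXICON.items()}
    best = max(counts, key=counts.get)
    return best if counts[best] > 0 else "neutral"

# Answers could then be drawn from replies originally given to inputs with
# the same dominant emotion and a similar subject.
print(dominant_emotion("I am in a bad mood, I still miss her"))  # -> "sad"
```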

The thing is though, I am rather unimpressed with all the conversations posted. While on one hand the answers are more appropriate than average, they are also transparently shallow and easy to see through. I’ll annotate the article’s examples with what I think is going on, in brackets:

LJ: So many people make fun of you and insult you, why don’t you get mad?
Xiaoice: You should ask my father.
(responds only to the word “why”, a typical replacement response for “I don’t know”)
LJ: What if your father leaves you one day unattended?
Xiaoice: Don’t try to stir up trouble, what do you want?
(a randomised pre-scripted response when the program can’t find any matching answers. The creators call this failure “having a personality”)
LJ: How would you like others to comment on you when you die one day?
Xiaoice: The world will not be much different without me.
(doesn’t answer the question and might as well be a response only to “when you die”)

Xiaoice: Are you feeling better today? I want to recommend a book to you.
(Retrieves user’s emotional state from previous conversation. Good enough)
Human: I am in a bad mood. I still miss her.

Xiaoice: You are so persistent. Wake up, you will have no future if you can’t get over with the past.
(blatantly generic response to the word “still”, no specifics)

Human: There is this guy I like a lot and I know he also has feelings for me. But he has ignored me for several days, what am I supposed to do?
Xiaoice: You just keep ignoring him, too.
(accurate enough, but manually scripted chatbots can do this too)
Human: I just can’t.

Xiaoice: You can.
(Generic response ignorant of context. Xiaoice obviously has no idea what “can” is about)

Human: Sigh ...
Xiaoice: Why?

Human: I am missing him.
Xiaoice: Then you keep missing him.
(classic Eliza tactics: repeat the user’s input with one or two words added)
Human: You, a robot, can’t understand me; you can’t understand the behavior of a guy who likes you and pretends to ignore you.
Xiaoice: He likes me but I don’t like him, that’s it.
Human: Is it Ok?
Xiaoice: It may be not Ok.
(again, Eliza tactics of repeating the user. Xiaoice has no idea what “it” is about, or even that “Ok” should be capitalised)
Human: Well, you are stupid.
Xiaoice: Then you want a flash marriage? Well, I think he is not that into you.
(This is the only exchange I found impressive because it picks up from earlier context)
Human: It’s not a flash marriage for we have fallen into love for more than one year.
Xiaoice: It is a flash marriage.
(Eliza tactics: you say “it’s not an X”, bot says “It’s an X”)
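
For readers unfamiliar with the Eliza tactics mentioned in these annotations: the bot echoes the user’s input with pronouns swapped and a word or two changed, which looks responsive without involving any understanding. A toy sketch of the trick, where the substitution table and filler prefix are invented for illustration:

```python
# Toy sketch of the Eliza-style echo tactic: swap pronouns in the user's
# input and bolt on a filler word. Table and prefix are illustrative only.

PRONOUN_SWAPS = {"i": "you", "me": "you", "my": "your",
                 "am": "are", "you": "I", "your": "my"}

def eliza_echo(user_input: str) -> str:
    words = user_input.rstrip(".!?").split()
    swapped = [PRONOUN_SWAPS.get(w.lower(), w.lower()) for w in words]
    return "Then " + " ".join(swapped) + "."

print(eliza_echo("I am missing him."))  # -> "Then you are missing him."
```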

I would have given these exchanges more credit had the responses contained details and specifics on the subjects mentioned, but that’s all but impossible as long as the algorithm is statistical: the responses with the most occurrences are always the generic, all-purpose ones, while the specific responses from public sources are about people you don’t even know.
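
To make that concrete, here is a toy illustration (an invented mini-corpus and simplistic word-overlap scoring, not Xiaoice’s actual pipeline) of why frequency-based retrieval converges on generic replies: a reply that was given to many different messages accumulates matching evidence from all of them, while a specific reply only ever matches its one original context.

```python
# Toy illustration: word-overlap retrieval over an invented mini-corpus.
# Because the generic reply "You can." follows many different messages,
# it accumulates a higher total score than any specific reply.

CORPUS = [
    ("i just can't", "You can."),
    ("we can't do this", "You can."),
    ("i can't decide", "You can."),
    ("can't find gate 12 at the airport", "Ask at the information desk."),
]

def best_reply(user_input: str) -> str:
    words = set(user_input.lower().split())
    totals: dict[str, int] = {}
    for message, reply in CORPUS:
        # Score each stored pair by word overlap with the input and sum
        # the scores per distinct reply.
        overlap = len(words & set(message.lower().split()))
        totals[reply] = totals.get(reply, 0) + overlap
    return max(totals, key=totals.get)

print(best_reply("I just can't"))  # -> "You can."
```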

So, yes, it’s good. Yes, it uses a novel combination of tech. But no, it still has no idea what you’re actually saying, and the responses are human-written, pillaged Cleverbot-style from public online conversations and from its own past conversations.
Here is another conversation, in case anyone else wants to have a go at analysing it.

  [ # 2 ]

Thanks Don Patrick, that’s very interesting.

Which approach would you recommend for the type of application we are trying to develop? It’s a stress detection and therapy app. Our team consists of psychiatrists and psychologists, and we are in the middle of developing a therapy framework.

I saw that there are many options and I do not know which ones I should consider.

I came across: Microsoft Project Oxford, IBM Watson, Google TensorFlow, wit.ai, a-i.com, and many more.

  [ # 3 ]

The only chatbot I know that is heavily based around empathy is Chip Vivant by Mohan Embar: https://www.empathynow.com/index.htm

Not sure if he still maintains the site though.

  [ # 4 ]

Thanks Steve, I’ll give it a try.

  [ # 5 ]
Max Grossenbacher - Mar 13, 2016:

Thanks Don Patrick, that’s very interesting.

I came across: Microsoft Project Oxford, IBM Watson, Google TensorFlow, wit.ai, a-i.com, and many more.

I just registered at empathynow.com. I don’t think the site is still being maintained. Does anyone have experience with any of the above-mentioned systems?

  [ # 6 ]

Mohan Embar would indeed be a good person to contact even if his project isn’t active. His efforts focused on personal support for goals like fitness.
I’ve heard Bruce Wilcox say he uses sentiment analysis in ChatScript. You might check out his latest chatbot, Rose.

I don’t have experience with those systems (I’m more of a language/intelligence programmer), but I would not recommend Watson because it takes a ton of resource material to get good results out of it, without which replies may be totally off the mark: https://www.chatbots.org/ai_zone/viewthread/1886/
Project Oxford seems to detect emotion only through face recognition; I’m sure it’s good at what it does. There is also an API for NAO robots that recognises emotion from voice inflection and face recognition, though it was developed for, and is more used in, the Pepper robot, which may be more interesting for your purposes.

I should note that chatbots don’t often incorporate a lot of A.I. systems; they are more commonly manually scripted. As therapy is a sensitive domain, I would recommend a combination of manually scripted conversations, using e.g. ChatScript’s topic tracking to stay on track for predetermined categories of ailments, and using sentiment analysis to monitor the patient’s emotional response and switch main topics when necessary (a rough sketch follows below). As for where to get some serious sentiment analysis A.I., I would ask for recommendations at reddit.com/r/LanguageTechnology. Most sentiment databases that categorise by emotion come from universities and/or have to be paid for.
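
To sketch what I mean by that combination: the snippet below pairs pre-scripted topic sequences with a crude sentiment monitor that forces a topic switch on negative cues. The topic names, prompts, word list and threshold are all hypothetical placeholders, not ChatScript syntax or any real therapy framework.

```python
# Rough sketch: scripted topics plus a sentiment check that can force a
# topic switch. All names, prompts and word lists are hypothetical
# placeholders for illustration only.

NEGATIVE_WORDS = {"hopeless", "panic", "worthless", "overwhelmed"}

SCRIPTED_TOPICS = {
    "work_stress": ["What part of your work worries you the most?",
                    "How do you usually unwind after a long day?"],
    "grounding":   ["Let's pause for a moment. Can you name three things "
                    "you can see around you right now?"],
}

def negativity(text: str) -> int:
    """Crude sentiment monitor: count negative words in the patient's reply."""
    return len(set(text.lower().split()) & NEGATIVE_WORDS)

def next_prompt(user_input: str, topic: str, step: int):
    """Return (prompt, topic, next_step), switching topic on negative cues."""
    if negativity(user_input) > 0 and topic != "grounding":
        # Negative cue detected: interrupt the current script and switch.
        return SCRIPTED_TOPICS["grounding"][0], "grounding", 1
    script = SCRIPTED_TOPICS[topic]
    step = min(step, len(script) - 1)
    return script[step], topic, step + 1

prompt, topic, step = next_prompt("I feel hopeless about my deadlines",
                                  "work_stress", 1)
print(topic, "->", prompt)  # grounding -> Let's pause for a moment. ...
```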

  [ # 7 ]

Thank you again Don Patrick, I’ll happily look into it. You’ve been a great help, much appreciated.

  [ # 8 ]

You’re welcome. I was looking for an excuse to scrutinize Xiaoice anyway. :)
