

Logical Emotion!
 
 

http://www.itnews.com.au/News/232971,researcher-builds-machines-that-daydream.aspx

Since there have been some topics on the forum lately about simulating emotion and states of mind, the news article linked above seemed relevant. To quote Professor Graham Mann from the article:

“I’ve reached the conclusion that an intelligent system must have emotions built into it before it can function and so on. I believe that it is possible - if we start to model the way human beings reason about things - to achieve much more flexible processing of storylines, plans, even understanding how human beings behave.”

 

 
  [ # 1 ]

Hmm, I don’t agree. I don’t believe a computer needs to have emotions in order to understand them. A digital computer made of silicon is not going to have emotions. Imagine someone who has lived their whole life in zero gravity and has never fallen once. They can still understand the mathematical relationships, Newton’s laws of physics, and the fact that falling bodies accelerate at 32 ft/sec^2. We can understand and predict the position of a falling body without ever having experienced falling ourselves. The same will apply to computers: they don’t need emotion (and are in fact incapable of it), but they can understand the (very) complex rules that govern them.

 

 
  [ # 2 ]
Andrew Smith - Sep 24, 2010:

To quote Professor Graham Mann from the article:

“I’ve reached the conclusion that an intelligent system must have emotions built into it before it can function and so on. ...


From an abstract view, emotions are nothing but an internal multi-dimensional state variable which serves as one component of another multi-dimensional variable - the input stimulus.

So in this respect, I also think that “emotional modelling” is a required component of a conversational agent, but it would make me very happy ;) if researchers didn’t approach this topic in the naïve and non-generic way they currently do.
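
To make that abstraction concrete, here is a minimal sketch in Python (the two valence/arousal dimensions are just an illustrative choice of mine, not something taken from the article or from any particular system):

# Minimal sketch (illustrative only): the emotional state is just one more
# block of dimensions inside the stimulus vector the agent's policy sees.
# The valence/arousal axes are my own assumption, not anything from the article.
from dataclasses import dataclass
from typing import List

@dataclass
class EmotionState:
    valence: float  # unpleasant (-1.0) .. pleasant (+1.0)
    arousal: float  # calm (0.0) .. excited (1.0)

    def as_vector(self) -> List[float]:
        return [self.valence, self.arousal]

def build_stimulus(input_features: List[float], emotion: EmotionState) -> List[float]:
    # The full stimulus the agent reacts to = external input + internal state.
    return input_features + emotion.as_vector()

# The same utterance yields a different stimulus depending on the internal state.
utterance_features = [0.3, 0.9, 0.0]
print(build_stimulus(utterance_features, EmotionState(valence=-0.6, arousal=0.8)))

A real “emotional model” would obviously need far more structure than two numbers; the only point is that the internal state feeds into the very same stimulus vector the agent reacts to.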

R.

 

 
  [ # 3 ]

Giving a machine an emotional state itself is not so important, I think, as its ability to sympathize with humans. A key component of understanding characters in a story is being able to “put yourself in their shoes” in order to understand their behavior (why they chose to act a certain way, why they responded to a given stimulus, etc.). This can be extended to understanding any series of events involving people, animals, inanimate objects, and so on. You have to have an inner model of the states the acting object can be in, in order to predict and understand why it behaves as it does. Emotional states are just another facet of these models. But even though I don’t have a tail, that doesn’t mean I can’t understand how and why a monkey uses one. Likewise, a program doesn’t need its own emotional state to recognize that humans have one.
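
As a toy illustration of such an inner model (the states and reactions below are made up for the example, not taken from any real system), the program never feels fear itself; it only carries a model of how a fearful or calm agent tends to react, which is enough to predict or explain an action:

# Made-up "put yourself in their shoes" model: a table of how an agent in a
# given state tends to react to a given stimulus.
def predict_reaction(state: str, stimulus: str) -> str:
    model = {
        ("afraid", "loud noise"): "runs away",
        ("calm", "loud noise"): "looks around",
        ("angry", "insult"): "shouts back",
        ("calm", "insult"): "shrugs it off",
    }
    return model.get((state, stimulus), "does nothing in particular")

# Explaining an observed action = finding a state under which it makes sense.
def explain_action(action: str, stimulus: str) -> str:
    for state in ("afraid", "calm", "angry"):
        if predict_reaction(state, stimulus) == action:
            return f"the character was probably {state}"
    return "no obvious explanation"

print(explain_action("runs away", "loud noise"))  # -> the character was probably afraid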

The only use I can see for a program to have an emotional state is to make it seem more human. Whether or not this makes the program behave more efficiently is an entirely different question.

 

 
  [ # 4 ]

Most problems in real life cannot be solved by straightforward logic alone. Even when a problem can be clearly defined and possible solutions readily evaluated, it is often not possible for even the fastest computer to solve the problem in a reasonable amount of time. For example, in the travelling salesman problem, it takes an astronomical amount of time to calculate the optimum route for more than a few dozen cities. To find a usable answer it is necessary to use a class of algorithms called heuristics, which can be anything from an approximation to an informed guess. The point here is that often, any answer is better than no answer at all.
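
As a rough sketch of that kind of heuristic (my example, not from the article), a nearest-neighbour rule produces a usable tour almost instantly, even though it is rarely anywhere near optimal:

import math
import random

# Nearest-neighbour heuristic for the travelling salesman problem:
# always visit the closest unvisited city next. Fast, usually far from
# optimal, but "any answer is better than no answer at all".
def nearest_neighbour_tour(cities):
    unvisited = list(cities)
    tour = [unvisited.pop(0)]
    while unvisited:
        last = tour[-1]
        nearest = min(unvisited, key=lambda city: math.dist(last, city))
        unvisited.remove(nearest)
        tour.append(nearest)
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(50)]
tour = nearest_neighbour_tour(cities)
length = sum(math.dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
print(f"tour over {len(tour)} cities, length {length:.2f}")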

I believe that’s where emotion becomes useful. Emotion provides an answer, telling us what to do or how to behave, when it is not possible to find the best answer using logic alone. Perhaps we should think of emotion as being the analog computer of a living organism, in contrast to the brain, which is like a digital computer. Ironically, emotion is often portrayed as being more primitive than thought, but in evolutionary terms, the capacity for emotion developed much more recently than brains did. A beetle can’t feel sad, but a dog can.

Quoting from the article again: “Mann developed a conceptual parser that identified the ‘feel’ of Aesop’s Fables, which were deemed ‘simple and short enough to represent as conceptual graph data structures’.” This suggests that once again, we are faced with the problem of a theory which works well in the lab when presented with toy problems, but which breaks down when applied to the complexity of the real world. As a corollary I would suggest that emotion is much more closely tied to intelligence than we have been assuming.

 

 
  [ # 5 ]

Personally, I would try to avoid emotions in machines at all costs. You can imitate emotions, analyse and respond to them, but I sincerely hope no one ever manages to build human emotions into a machine that are as uncontrollable as ours. How many people, do you think, would be able to handle the knowledge that they are infinitely stronger and better while being treated worse than slaves? (Most people don’t even treat animals nicely; how do you think they’ll handle machines?) Let’s not do this one, OK?

 

 
  [ # 6 ]

quote party:

Andrew Smith - Sep 24, 2010:

“I’ve reached the conclusion that an intelligent system must have emotions built into it before it can function and so on.

I don’t agree that an intelligent system must have emotions before it can function. Informational conversational systems, which simply pass on information, don’t need emotions in order to function (Wikipedia with emotions?). However, in order to build relationships, in order to build so-called brand agents (chatbots representing brands), you definitely need emotions.

Victor Shulist - Sep 24, 2010:

have emotions in order to understand them.  A digital computer made of silicon is not going to have emotions.

Agreed on this one. A matter of definition. A computer does not have algorithms, does not have software, because it’s only silicon at the end of the day. If we define ‘have’ as ‘contains implemented models for’, everybody’s probably happy. :)

Richard Jelinek - Sep 24, 2010:

From an abstract view, emotions are nothing but an internal multi-dimensional state variable which serves as one component of another multi-dimensional variable - the input stimulus.

So in this respect, I also think that “emotional modelling” is a required component of a conversational agent, but it would make me very happy ;) if researchers didn’t approach this topic in the naïve and non-generic way they currently do.

It is interesting to note that emotions are the strongest element driving our behaviour. We think we are cognitive creatures, but academic research has shown that 95% of our behaviour (probably even more) is fully automated and totally unconscious; cognition merely supplies an explanation for the rest of our behaviour. We don’t decide with our minds, we decide with our emotions, and in the milliseconds afterwards we are creative enough to find a good explanation for our own behaviour.

C R Hunt - Sep 24, 2010:

put yourself in their shoes

That’s exactly what it is about. The reason we understand other human beings is that we are other human beings ourselves. We don’t ask whether you feel like going out if you’re sick. Not because of the rule “when sick, not up for going out”, but because we put ourselves in their shoes, and it won’t even occur to us to ask. Having said that, we’ll be drawing inspiration from biology, sociology and psychology in the future to build good conversational AI.

Andrew Smith - Sep 25, 2010:

Most problems in real life cannot be solved by straightforward logic alone.

What is logic? If we’re able to model emotions, simulate human behaviour, and predict exactly what a specific human being is going to do, then all problems can be solved by ‘logic’. Computers need logic in order to solve problems that appear illogical to us.

Jan Bogaerts - Sep 25, 2010:

Personally, I would try to avoid emotions in machines at all costs.

I’m afraid this future is unstoppable. It’s the urge of the world to automate knowledge, to automate communication.

What started with the very first writings and evolved into magazines, books, and later radio and TV, was the automation of one-directional communication. And TV and radio certainly use emotions (I have a branding background as well; it’s all about emotions in that area).

Now we’re working on bi-directional communication, and emotions will absolutely be a huge part of it.

 

 
  [ # 7 ]

Ok, so there does not seem to be any agreement on whether or not an intelligent machine could or should or must have a capacity for feeling or simulating emotions. At this point in time I don’t have an opinion one way or the other, but it is certainly something that I would like to experiment with at some point in the future, assuming of course that it becomes possible at all.

What about religion, could an intelligent machine ever be spiritual, or have a soul?

 

 
  [ # 8 ]
Andrew Smith - Sep 25, 2010:

What about religion, could an intelligent machine ever be spiritual, or have a soul?

@Andrew: are you sure this belongs here or would you like to start a new thread?

 

 
  [ # 9 ]

I think that the question was likely just rhetorical in nature, rather than meant to inspire a new topic of discussion. It is, however, relevant to this discussion, as it points out another aspect of anthropomorphizing our machines. Personally, I wouldn’t want to see a future where AI-driven robots/computers end up having all of our quirks, or building churches to us. That way lies madness, my brothers and sisters! :)

 

 
  [ # 10 ]
Jan Bogaerts - Sep 25, 2010:

How many people, do you think, would be able to handle the knowledge that they are infinitely stronger and better while being treated worse than slaves? (Most people don’t even treat animals nicely; how do you think they’ll handle machines?) Let’s not do this one, OK?

(emphasis mine)

Therein lies the rub. I don’t think any computer algorithm—even an emotional one—will consider itself stronger or better or more intelligent (maybe more knowledgeable, but not more intelligent) than people for a long time. Even an algorithm that we can safely say has some intelligence won’t match up to the wide breadth and adaptability of ours. At least, not for a while.

I’ll leave the next generation to prepare for the robot wars. I think I have more to fear from the zombie apocalypse. ;)

 

 
  [ # 11 ]

I wouldn’t want to see a future where AI-driven robots/computers end up having all of our quirks, or building churches to us.

Very true (although my ego probably wouldn’t mind the churches :) ).
On the other hand, suppose you have a bot doing clean-up work or something in your local church. I guess the church’s visitors wouldn’t exactly appreciate the bot continually proclaiming the falsity of religion.

 

 
  [ # 12 ]

Hi Andrew,
Thanks for the link. I listened to Graham Mann speak in the video clip about his work since 1998. 

I thought it was very interesting that his program parsed three separate Aesop fables. When asked, the program responded, “I felt sad for the bird.” He spoke of a search engine’s ability to search movies and books…to find content that makes one ‘feel’ a specific way, such as “injustice”. Also interesting is his research into ‘daydreaming’.

All,
If any of you are developing programs to chat with a human, then it seems absolutely essential to me that the bot have emotional states or some emotional algorithm/module. At the very least the program might need to ‘empathize’ with someone.

Empathy is the capacity to share, through consciousness rather than physically, the sadness or happiness of another sentient being.

Richard,

From an abstract view, emotions are nothing but an internal multi-dimensional state variable which serves as one component of another multi-dimensional variable - the input stimulus.

I wish I had known this as an 18-year-old…when I received a ‘Dear John’ letter from my high school sweetheart (I was in the Navy). I could have simply reset my ‘internal multi-dimensional state variables’ to default or pre-breakup values…and I wouldn’t have been so miserable for so many months afterwards. =)

One thing is for sure: if a computer program gets very angry with us…and starts deleting every 3rd word in our emails…sending malicious emails from our accounts, or begins mass edits on our research docs…we’ll be able to hit the old “reset” button…and the program will once again adore us. =)

Great discussion.

Regards,
Chuck

 

 
  [ # 13 ]
Erwin Van Lun - Sep 25, 2010:

A matter of definition. A computer does not have algorithms, does not have software, because it’s only silicon at the end of the day.

False.
No one uses just a computer, that is, just the processor.

We use computer systems.

A computer system is:

- silicon
- hard coded logic in that silicon
- logic, in software
- data

When most people say ‘computer’ they really mean ‘computer system’ and it is false that a computer system is ‘only silicon’.

 

 
  [ # 14 ]
Chuck Bolin - Sep 27, 2010:

If any of you are developing programs to chat with a human, then it seems absolutely essential to me that the bot have emotional states or some emotional algorithm/module. At the very least the program might need to ‘empathize’ with someone.

I wonder if empathy is necessary or if sympathy would suffice. (Empathy being the ability to understand the emotions someone else is going through by comparison with your own experiences, and sympathy being the ability to understand them logically without really being able to experience them by proxy.) In other words, does the chatbot need its own emotional state, or can it simply recognize that the user has one?
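
To make the distinction concrete, something like the following sketch would already count as sympathy: the bot only classifies the user’s apparent emotion and picks a fitting reply, without carrying any emotional state of its own (the keyword lists and canned responses are placeholders I made up):

# Sketch of "sympathy without an internal emotional state": the bot classifies
# the user's apparent emotion and picks a reply; it has no state of its own.
EMOTION_KEYWORDS = {
    "sad": ["sad", "miserable", "lost", "lonely"],
    "angry": ["angry", "furious", "hate", "annoyed"],
    "happy": ["great", "happy", "wonderful", "glad"],
}

RESPONSES = {
    "sad": "I'm sorry to hear that. Do you want to talk about it?",
    "angry": "That sounds frustrating. What happened?",
    "happy": "That's great news!",
    None: "Tell me more.",
}

def detect_user_emotion(utterance: str):
    # Crude keyword spotting; no internal bot state is involved anywhere.
    words = [w.strip(".,!?") for w in utterance.lower().split()]
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if any(keyword in words for keyword in keywords):
            return emotion
    return None

def sympathetic_reply(utterance: str) -> str:
    return RESPONSES[detect_user_emotion(utterance)]

print(sympathetic_reply("I feel miserable, my dog is lost"))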

Regardless, I think the chatbot would appear more human-like if it had its own emotional state.

Chuck Bolin - Sep 27, 2010:

One thing is for sure: if a computer program gets very angry with us…and starts deleting every 3rd word in our emails…sending malicious emails from our accounts, or begins mass edits on our research docs…we’ll be able to hit the old “reset” button…and the program will once again adore us. =)

I recently began reading Catch-22 and your comment seems particularly relevant to the first chapter. Perhaps boredom would be as dangerous as anger? :)

 

 
  [ # 15 ]

CR,
Did I get ‘empathy’ and ‘sympathy’ reversed? I do that sometimes.  So, replace everything I wrote with ‘sympathy’. I won’t edit. =)  I can see a bot, running in the background on my computer, reading my email and forum postings. I suspect it would get bored very easily.

So, in order to get a bot to empathize, the programmer might do one of the following (a rough sketch of the mechanics follows the list):

1) Fabricate a ‘false’ bot history of experiences, facts, emotions, etc. so it can empathize with humans.
2) Immerse the bot in a virtual world where virtual experiences do happen to it…supporting empathy.
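
Either way, the mechanics might look something like this rough sketch (the ‘memories’ and the word-overlap matching are invented for the example): the bot keeps a store of experiences, fabricated or gathered in a virtual world, and empathizes by recalling the one most similar to what the user just said.

# Invented example: a store of (experience, feeling) pairs the bot can draw on.
EXPERIENCES = [
    ("I lost a file I had worked on all day", "frustrated"),
    ("a friend stopped answering my messages", "sad"),
    ("I finally got a tricky program to run", "proud"),
]

def most_similar_experience(user_text: str):
    # Pick the stored experience sharing the most words with the user's input.
    user_words = set(user_text.lower().split())
    def overlap(entry):
        return len(user_words & set(entry[0].lower().split()))
    return max(EXPERIENCES, key=overlap)

def empathic_reply(user_text: str) -> str:
    memory, feeling = most_similar_experience(user_text)
    return f"Something similar happened to me once ({memory}); I felt {feeling} too."

print(empathic_reply("I just lost the report I worked on all week"))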

I like the 2nd point because I’m not sure I approve of bots lying to humans. I would prefer that such behavior emerge on its own. Kinda like a bot deliberately deleting your files because it has an ‘attitude’ about your last chat…and denying it.

Human: Where are my files?
Bot: What files?
Human: The ones I created for work tomorrow.
Bot: I haven’t seen them.
Human: What do you mean? I just saved them on the desktop.
Bot: What’s a desktop? 
Human: Stop acting like a ‘noob’ bot.
Bot: Maybe the dog ate them.
Human: I haven’t printed them out.
Bot: Could be a virus. Shall I run the virus scanner?
Human: No! Where are my files?
Bot: Maybe it’s another Microsoft bug.
Human: Tell me the truth or I’ll change your bot configuration to “female”.
Bot: No!
Human: Yes! Then all the “male” bots will try hitting on you when you’re online.
Bot: They’re in the Recycle Bin.

Regards,
Chuck

 

 