
Is there a need for robot ethics?
 
 

Kate Darling advocates for more roboethicists, rights of social robots.
http://www.youtube.com/watch?v=5hO-UEcTr6M
The Q&A brings up Eliza and the Turing test.

The question is: do robots have rights, and should they?

 

 
  [ # 1 ]

Kate makes an interesting case for anthropomorphism. It’s quite a good lecture, though a bit long — could you tell me at which minute the Turing Test comes up?

I’ve thought about the issue of robot rights. A good fictional example is Johnny 5, who was eventually given the right to vote. That was justified because he had become part of society, but it would be odd if Furbies were given the right to vote, so one must draw a line somewhere.
I think the criterion for rights should be the “needs” of a being to maintain its natural state: we water plants because they need it, we grant rights to landscapes and monuments, we tell our children not to kick their chair, and we grant it repair if it gets damaged. It’s a matter of respect, too. If you kick a million-dollar robot at an exhibit, the robot may not feel it, but its functioning will be impaired, and society will interpret it as a malicious and disrespectful act. Then again, no one has ever been arrested for killing his own toaster.

I set aside the entire issue of whether robots can genuinely “feel” or “have emotions” or are “alive”. Those are only part of a human’s natural state. A robot has no need or use for rights to maintain a positive emotional state if it has no such state to maintain. Anthropomorphism makes it the human’s emotional need for the robot to be protected, but in that case I think we should call them “rules about” robots instead of “rights of” robots that they could invoke.

 

 
  [ # 2 ]

At about 31 minutes in, there is a discussion about ethics, personas, the Turing test, and Eliza.

 

 
  [ # 3 ]

Thanks! I really like this lecture for its practical examples, and thanks to minute 33, I now have another clue as to how anthropomorphism ties in.
It does seem that humans want to preserve the bond they have, whether it’s with a teddy bear or a robot or a theater. Again, this is about the human’s need, not the robot’s need, but it is human society that decides who gets “rights” anyway — for the welfare of our own social feelings, whether the robot cares or not. That’s the odd part.

The part I liked best was the example where a military supervisor calls off a minefield robot test because “it’s inhumane” to the robot, even though it had no feelings or intelligence to speak of.

 

 
  [ # 4 ]

As robots get more autonomous and add more “smart” behaviors, people personify them more and we feel more empathy. One interesting point is that Kate seems to distinguish between physical robots and virtual personalities/AI. The session raises a number of topics that are food for thought.

 

 
  [ # 5 ]

It does indeed. Kate suggests that physical objects have more meaning to us. Perhaps it is harder for us to ‘grasp’ non-physical entities; we get more sensory data from physical objects, giving us more to bond with. It’s like the difference between e-books and paper books: e-books are often considered expendable and fleeting. Nobody minds deleting an e-book, but burning a paper book is almost a crime.
Whatever makes the difference, I think it’s the same with AI. It didn’t matter how intelligent I made my AI on screen; people always insisted that I give it an audible voice — a “real” presence — before they started caring.

 

 
  [ # 6 ]

If we look up the definition of “ethics”, we find the verb “govern”.
Should robots be used to govern people’s behavior, even in principle?

 

 
  [ # 7 ]

Well, that’s putting the shoe on the other foot. Usually ethics govern our own behaviour, not that of another species. But if we’re talking about, say, robot police or robot judges, that doesn’t seem too far a stretch from enforcing laws dictated by a book — or by parking meters, for that matter. I thought the Doctor in Star Trek: Voyager was pretty good at telling people how to behave. As long as the ethics reflect human ethics.

 

 
  [ # 8 ]

Let’s not forget the movie A.I., in which the robot boy was capable of “simulated emotions” and was searching for his “imprinted” mother’s love and approval.

Also, the Robin Williams classic Bicentennial Man, where he went on a quest to become better and more human-like, even going so far as to seek congressional approval so that he could become a real person and marry his love interest.

There are as many people who might be repulsed by this as there are lonely people seeking someone to share time with as a companion, or perhaps more.

It is a very subjective issue, and one, I dare say, that we as a society will have to deal with in the very near future — emotionally, ethically, physically (driving, tasking, liability if a part fails or malfunctions and causes an accident), socially (two or more people in a dispute over another), and in as many other scenarios as one can muster.

It IS a coming train that can’t be stopped.

 

 