Robots have power to significantly influence children’s opinions
 
 

A study found that robots have the power to significantly influence children's opinions. The study, conducted at the University of Plymouth, compared how adults and children respond to an identical task when in the presence of peers and when in the presence of humanoid robots.

It showed that while adults regularly have their opinions influenced by peers, they are largely able to resist being persuaded by robots. Children aged between seven and nine, however, were more likely to give the same responses as the robots, even when those responses were obviously incorrect.

When the children were alone in the room, they scored 87% on the test, but when the robots joined in, their score dropped to 75%. Of the wrong answers, 74% matched those of the robots.


https://www.youtube.com/watch?v=n8d_liSKT5g

 

 
  [ # 1 ]

This is just another AI scare story. Children are influenced by everything from cartoon characters to YouTubers. It's up to the adults to teach them right from wrong.

 

 
  [ # 2 ]

I used to write pattern recognition software (for semiconductor wafer inspection) and learned how easy it is to become self-hypnotized by an incorrect algorithm. It goes something like this: you want to detect a “ring” shape, so you write a reasonable algorithm for it. Then you start testing, find false positives and false negatives, and add samples of each to try to train the algorithm to do better. On the one hand, this leads to oversampling the boundary between “ring” and “not ring”. On the other hand, it leads to many cases where I found myself looking at something that was not particularly ring-like but wanting it to be a “ring”, since that’s what the algorithm called it.

It is only natural to start tolerating something not so ring-like in order to preserve one’s belief in one’s own algorithm. It is a pernicious problem. I called it PRADHS, for “Pattern Recognition Algorithm Developer’s Hypnosis Syndrome”.
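To make the loop concrete, here is a minimal toy sketch in Python. Every name and threshold in it is invented for illustration; it is nothing like the real inspection code. It pairs a crude, deterministic “ring” detector with a labeling step that defers to the detector on borderline samples, and the measured accuracy climbs across rounds while the true accuracy never moves:

```python
import random

random.seed(0)

# Each sample has a true "ringness" and a fixed measured "appearance" that
# the detector sees; the gap between the two is where the detector makes
# consistent mistakes. All names and thresholds here are made up.
samples = []
for _ in range(500):
    ringness = random.random()
    appearance = ringness + random.gauss(0, 0.15)
    samples.append((ringness, appearance))

truth = [r >= 0.5 for r, _ in samples]   # what the samples really are
labels = list(truth)                     # the developer's working labels

def detect_ring(appearance, threshold=0.5):
    """Toy deterministic detector: thresholds the measured appearance."""
    return appearance >= threshold

preds = [detect_ring(a) for _, a in samples]

for round_no in range(3):
    measured = sum(p == l for p, l in zip(preds, labels)) / len(samples)
    actual = sum(p == t for p, t in zip(preds, truth)) / len(samples)
    print(f"round {round_no}: measured accuracy {measured:.2f}, "
          f"true accuracy {actual:.2f}")
    # The self-hypnosis step: on borderline samples, defer to the algorithm
    # and quietly relabel the "ground truth" to match its output.
    for i, (r, _) in enumerate(samples):
        if abs(r - 0.5) < 0.15 and preds[i] != labels[i]:
            labels[i] = preds[i]
```

Run it and the measured number drifts up after the first relabeling pass, even though the detector itself never changes; only the labels have moved to agree with it.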

The robot article highlights what I believe is one of the most real threats from AI: poor algorithm design having negative social impact. If I think of myself as a “weenie” sitting up late at night snacking on Cheetos, then what right does this Cheeto-snacking weenie have to dream up an arbitrary black box that determines social outcomes?

 

 