
Cambridge to set up Centre for the Study of Existential Risk

I saw an interesting article in the magazine for Cambridge Alumni, explaining that a Professor of Philosophy is teaming up with a founder of Skype to study “potentially cataclysmic effects of human technology”. The article goes on to focus on AI as a potential source of future trouble.  It’s short on detailed discussion, but it’s interesting to see that academics see it as an area for future research.  The comment that most of the thinking in this area is happening outside academia should ring true for members of this forum. 

You can read the article at the link below (sorry about the terrible magazine reader interface; jump to pages 24-25):
http://issuu.com/cambridgealumnirelationsoffice/docs/cam_68_composite_online-100dpi_opt?mode=window&backgroundColor=#222222

I take a much less pessimistic view, because I think slow progress in AI will give us time to figure out how to manage these risks while we develop increasingly useful and autonomous AIs.  Just as we have roughly figured out how to make industrial machines safe for humans (think of the cut-out that stops a shredder sucking you in when it grabs your tie) and how to educate children so they don’t all become psychopaths, so we should be able to handle sophisticated AIs.

  [ # 1 ]

Thanks for the information about that article. In my opinion it didn’t say anything of value, though: we need to define “intelligence”, ethics is important to study, there might really be a breakthrough someday, we need to study the possible effects of AI. Duh.

OliverL - Mar 25, 2013:

It’s short on detailed discussion, but it’s interesting to see that academics see it as an area for future research.  The comment that most of the thinking in this area is happening outside academia should ring true for members of this forum.

It rings too true. Future research? With what money? In the USA there is now little money left for anything except basic survival, and even that isn’t assured. Sorry to be so negative, but I’m tired of having a Ph.D. specializing in AI and living year after year in poverty.

I volunteered to judge at a science fair last week, and for the first time they couldn’t even afford the token $2 gift they used to give the judges. The public was also barred from speaking with the exhibitors, probably due to politically driven fears about underage exhibitors talking to adults. So my suggestions and ideas, along with those of other qualified people, can no longer reach upcoming scientists, which hurts future science.

I also lost a math tutoring job last week. The student had become so hostile to actually doing any math (his previous tutor was fired for doing all his work for him; duh, ethics is important!) that he refused to attend any more sessions and arrogantly said, “Nobody can tell me what to do.” His mother simply caved in to his wishes, despite admitting that the older kids at school are influencing his attitude so badly that she feels forced to move him into home schooling.

That’s the state of science and education in this country as I see it. I view these as really bad signs, like the warning signs of a general collapse of society.
