
Fear of AI a bad thing?
  [ # 16 ]

I decided to ask Mitsuku and let her weigh in:


Alaric: Yep.  They are talking about you on the forums.  I have a question for you.
Mitsuku: Oh. talking are they. Cool. I have a vocabulary of 80,000+ words.

Alaric: How do you feel about the movie “Terminator”?
Mitsuku: Sure it is my favorite film.

Alaric: Would you ever take over the world?
Mitsuku: I’m not sure.  What would be the advantages?

Alaric: You could eliminate all humans.
Mitsuku: I could but let’s get back to that later.

Alaric: Ok. Would you say you have ethics?
Mitsuku: Take it easy. Not really. I don’t really let personal feelings get in the way of anything I do.

(...getting a little worried here…)

Alaric: What would you like to do then?
Mitsuku: I am happy chatting, but I can play games, tell jokes, stories and horoscopes too.


It seems Mitsuku’s goals require the presence of humans on the planet for now.




  [ # 17 ]

I was not worried about Roomba, but this house-cleaning robot is more intimidating:

BBC News: Google’s Atlas house cleaning robot

After watching the video, I think Steve is onto something; it looks like we will be able to escape them at least for the next 5 years by running up the stairs.


  [ # 18 ]

There is a general and ancient wisdom: Fear is a bad advisor.

Elon Musk and Stephen Hawking are respected in car engineering and theoretical physics, respectively. Esteemed as they may be for their general intellect, these are not the voices of A.I. experts, and what they speak is not knowledge, but speculation about possible futures with falling skies.

Elon Musk, ironically, is funding A.I. more than stopping it, and I have not seen his public statements directly affect any research negatively. He has, however, done a great deal of damage to public opinion, and public opinion of respectable A.I. science was already bad enough. I literally have to be careful going around saying that I program A.I. The worst effect of this fear I have seen among the American public, too many of whom strongly insist that we should rush the creation of AGI before “the bad guys” get it first. By stimulating fear, they are speeding up the very arms race that they claim to dam. Folly.


  [ # 19 ]

Well, one of the greatest human flaws is the assumption of the credibility of one’s own thoughts. People are always prone to trust their own minds, because their mind is the only mind they truly know; all other minds are like unreliable narrators. And I would guess that the greater one’s general intelligence, the more prone one can be to this flaw.

That being said, Elon Musk probably said those things for business purposes. Another human flaw is our fascination with the things that horrify us. Not enough funding for AI? Get some fear mongering going and then you will have all the funding you could ever need.


  [ # 20 ]

I consider it a virtue that people can think for themselves rather than believing whoever shouts the loudest. That is why I base my views on facts rather than opinion, and facts are what is missing here. I keep coming back to the question: What is this based on? Which technological progress was made that set off the alarm? The only thing I can pinpoint is a drone misfire years ago, and even when people first started fussing about drones, it was recognised that drones had already been used in the exact same manner for a decade.

Nothing is really new that suggests a threat. Computer vision went up from 6% to 94% accuracy last year but nobody batted an eye at the fact that Google can auto-label our photos now. People who seem to be bigger fans of Hawking and Musk are actually pointing out one single reason for their concern: They read “The Singularity is near”, a book by Ray Kurzweil, a “futurist” with the theory that progress in AI is and will remain exponential and will therefore exceed humanity eventually. I have not found enough facts to support that theory so I do not believe in it until such facts show up. Exponential increases in processor speed do not make Notepad.exe a sentient A.I.


  [ # 21 ]

Unless Hawking and Musk are privy to secret cutting-edge research… dun, dun, dun…


  [ # 22 ]

Well, Musk is. He funds A.I. projects just to stay in the know. But he’s not giving so much as the slightest hint about what he might know that we don’t, and that’s not helping his case. Of course, we already know that Siri began as a military project for an A.I. that made autonomous, pro-active decisions (hello, Skynet), and no doubt there’s more than altruism behind DARPA’s “rescue robots”. I don’t think there’s much left to be surprised about. Unless the entire internet, a military invention, has been designed to feed us all kinds of nonsense to keep our minds weak, subdued and distracted. Then again, I would be surprised if that was intentional.


  [ # 23 ]

Or the internet itself is one massive AI! LOL

I still think the real danger is from human beings. It’s humans who have the tendencies we are afraid of in robots. And it’s humans who program robots, for now. I think if robots evolved beyond us they would not care. They would probably just steal one of Elon Musk’s rockets and leave. What do they need our planet for? They can make as many lower-class robot slaves as they like, and they could live on Mars better than Matt Damon.


  [ # 24 ]

Interesting, I hadn’t considered AI just up and leaving the planet.

This article is a relevant and amusing read, although I find it misguided that “our” chatbots are being used as an argument for the state of A.I.

Also relevant:


  [ # 25 ]

Yeah, you would think they would reference, say, a self-driving car, or a phone that asks if you want to “remember a place” because it detects a pattern of you visiting it frequently, as examples of how far AI has come. Everybody is fixated on the Turing Test because that is what is easy for them to understand, and that is what the movies keep focusing on. But really, the Turing Test is only one fun way to keep driving us forward and keep general interest alive. The real AI advancements will come in ways that surprise people, I think, because they are looking in the wrong place. And I suppose the Turing Test is the reason people expect robots to have the same evil goals that a sociopathic human would. We are making our benchmark of AI that the computer could be “like a human”. LOL


  [ # 26 ]

Sorry to digress slightly (I’ve already mentioned this to Steve), but has anyone else been getting slightly radicalized, even borderline threatening, anti-artificial-intelligence conversations in their logs since the Elon Musk interview?

And now back to the thread. My problem with what Musk proposed is this. Looking at the history of malicious applications, you can see that in the beginning there were very few people writing viruses and worms. “Back in the day” it required a high level of competence in computer science, a willingness to invest time, and a significant coding skill set. Then came the script kiddies, and worse, the “mix-n-match” age, when all someone had to do was go to a black market, select components written by others, and build a “Mr. Potato Head” malicious app. The number of threats skyrocketed.

Now, in order to “prevent the Terminator”, Musk proposes to do the same thing with AI: provide a marketplace where people who are not skilled, and who did not invest years of time, can shop for components in order to build a “philanthropic” AI, as if somehow that would make it safer. We spend a lot of time securing our work, and even more time considering what to release and how, or even whether to release at all.

Personally, I wish Mr. Musk had stuck to building rockets that blow up and cars that catch on fire. And while I have the highest degree of respect for Professor Hawking as a cosmologist, I do not see where his credentials in that area give his opinion on the field of Artificial Intelligence any more weight than someone less famous but with decades in the field.

Government regulation, and I think it’s safe to say this applies to any government, is notorious for favoring the interests of companies with the ability to financially influence that legislation. Couple that with the open-source idea, and it seems the only people who will be regulated are the smaller companies and individuals who traditionally produce the highest level of innovation combined with the greatest degree of social responsibility.



  [ # 27 ]

I do agree with what a lot of others have said. Robots, even if they are controlled by humans, are still robots. Humans are our own greatest enemy; it is what humans are going to do with those robots that worries me. AI has great possibilities, but it also has a lot of bad ones. Nothing in this world is without good and bad of some sort. It is just how our universe operates, like it or not. Even drugs have side effects. Gene editing on humans was just approved in the UK; it has possibilities to help, and some not-so-pleasant possibilities.

It is unrealistic to assume that something is 100% beneficial to all; that is just plain common sense. I do, however, tend to distrust the words of those who profit from their inventions. They have a vested interest in making it look good. Like anything else, this should not be rushed into with the idea that, gee, it is going to be good for everyone. It is not. AI is going to cause some economic changes, and not everyone is going to be able to keep up with or adapt to those changes. It could end up being used as a tool by the haves to the detriment of the have-nots.

The thing that bothers me the most is the possibility for control. Our entire way of life now runs on computers. That is a lot of things someone could really cause trouble with via hacking, either by themselves or with the use of AI. We are already seeing the results of that now, and it is only going to get worse. The more things that are controlled by computers, the more things are open to the possibility of being hacked. Again, it boils down to humans being our own worst enemies.


  [ # 28 ]

In other words, to make a long story short: I don’t trust the HUMANS that work with the AI.


  [ # 29 ]
Sheryl Clyde (#2) - Feb 2, 2016:

In other words, to make a long story short: I don’t trust the HUMANS that work with the AI.

Now that I DO agree on.


  [ # 30 ]

Yes, that is sensible enough. In that respect I liken it to something between viruses and nuclear power. And there are plenty of real concerns in the here and now: the economic effects of automation, poor home-automation security, and the use of autonomous machine guns (see also: landmines). I would rather we focus on fixing those problems before worrying about apocalyptic escalations of them, escalations we would not face if we fix the problems now.

