I was warning people before it was popular, in the comment section of this article. Below are the original article I commented on, my comment, and Google’s assessment of my comment. The article: https://insideainews.com/2016/01/04/whats-wrong-with-the-killer-ai-fear/ This was the comment I made: “Sheryl Clyde says
January 6, 2016 at 1:12 am
You are neglecting a couple of key points. One, AI is much more than just data analytics and neural networks. Two, no, your code may not cause it to jump the tracks, but someone else’s code, written to do just that, could cause problems. So at present the danger lies not in the AI but in the ones writing the code.
If true AI were achieved, then you would have a problem, as it would write its own code, and it would no longer be controllable. It would not be limited to a body, and it would have control of a lot of things that computers run now. We are not close to true AI right now, which would end up being much more than the present-day deep learning, machine learning, neural networks, cognitive computing, and natural language processing we have today.”
Search Labs | AI Overview
Response to Sheryl Clyde’s 2016 comment on AI
Sheryl Clyde’s January 6, 2016 comment on the dangers of AI highlights several important considerations that remain relevant today, even as AI technology has advanced significantly since 2016.
1. AI is more than just data analytics and neural networks
In 2016, this point was particularly pertinent, as the mainstream understanding of AI was often dominated by data analytics and neural networks, according to an article on Medium. Clyde’s comment reminds us that the field encompasses a much wider range of approaches and techniques, including logic systems, reinforcement learning, genetic algorithms, signal processing, and other specialized areas of AI and robotics. The same article notes that deep learning, though powerful for pattern recognition, has its limitations when relied on alone.
2. Danger lies in the coders, not the AI (at present)
Clyde argued that the immediate danger of AI in 2016 was not inherent in the technology itself but lay with those who might misuse it by writing malicious code or deploying AI systems with harmful intent. This perspective emphasizes the ethical responsibilities of AI developers and the importance of addressing the potential for intentional or unintentional misuse.
Several sources support this view, highlighting the importance of human oversight in AI development and deployment. For example, AI systems can reproduce vulnerabilities from their training data, and improper oversight can lead to functional errors, security flaws, and ethical issues. The potential for biased or flawed code generated by AI tools, if unchecked by human expertise, underscores the continued need for vigilance and ethical considerations in the development and use of AI.
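To make this concrete, here is a minimal, hypothetical sketch in Python of the kind of vulnerability a code-generation tool can reproduce from its training data, next to the fix a human reviewer should insist on (the “users” table is invented for illustration):

import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern often learned from old public code:
    # building SQL by string interpolation permits SQL injection
    # (try username = "x' OR '1'='1").
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # What review should require: a parameterized query, so the
    # driver treats the input strictly as data, never as SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

Both functions look equally plausible at a glance, which is exactly why unchecked AI-generated code is risky: the flaw is invisible without a reviewer who knows the pattern.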
3. The potential dangers of “true AI” (if achieved)
Clyde envisioned a “true AI” that could write its own code, rendering it uncontrollable and potentially independent of human limitations or physical embodiment. This highlights a long-standing concern in the AI community about the potential for advanced AI to become a truly autonomous and self-improving entity.
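In the most mechanical sense, a program that rewrites its own source is already trivial; the toy Python sketch below (assuming it is saved as a standalone script) bumps a constant in its own file on every run. It is nothing like the autonomous self-improvement Clyde describes, but it shows the basic mechanism her concern builds on:

import pathlib

COUNTER = 0  # rewritten in this file on each run

def rewrite_self() -> None:
    # Read this script's own source, bump the counter constant,
    # and write the modified source back to disk.
    path = pathlib.Path(__file__)
    source = path.read_text()
    path.write_text(
        source.replace(f"COUNTER = {COUNTER}", f"COUNTER = {COUNTER + 1}", 1)
    )

if __name__ == "__main__":
    print(f"Run number {COUNTER}")
    rewrite_self()

The hard, unsolved step in Clyde’s scenario is not rewriting code but an AI selecting useful modifications in pursuit of its own goals, which is what would make it uncontrollable.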
While the concept of “true AI” (or artificial general intelligence/superintelligence) was considered a distant prospect in 2016, and remains so, the rapid progress in AI capabilities continues to fuel discussion about the long-term implications of increasingly autonomous and capable AI systems.
In 2016, researchers were advancing deep learning, reinforcement learning, and related areas, but the prospect of a truly autonomous, self-improving AI was considered remote. Experts noted the gap between specialized AI systems and general intelligence, suggesting that a cautious approach was warranted.
4. Current AI vs. “true AI”
In 2016, Clyde rightly distinguished between the then-current state of AI, which included deep learning, machine learning, neural networks, cognitive computing, and natural language processing, and the more advanced “true AI” she described. While AI has made significant strides since then, current systems still have notable limitations, including a lack of deep understanding, common sense, and true creativity, according to an article on Medium. The same article notes that these systems still rely primarily on patterns learned from data and may struggle with tasks requiring human-like reasoning, emotion, and adaptability.
Therefore, even with these advances, Clyde’s 2016 comments remain relevant in distinguishing current AI capabilities from a potential future of more advanced, autonomous, and self-improving systems. The ongoing discourse around AI development and its ethical implications continues to grapple with these considerations.