
Computers are designed to be programmable and not evolvable

It is anticipated that computers, if properly programmed, can achieve human-level artificial intelligence and eventually superintelligence. However, the computer architectures we have today are designed for maximum programmability. Source code or a hardware design is fed to a compiler or synthesis tool, which then produces an executable or circuit logic. The output is a programmed system that can only execute its code or instructions. The computer is designed for maximum execution speed; in other words, the computer/IC is designed to possess speed intelligence only.

What does this imply for achieving true artificial intelligence? The requirement of artificial intelligence is that it has to learn, not just execute instructions; ideally, it should rewrite its own code. The system has to possess quality intelligence, not just speed intelligence, as Nick Bostrom defines the terms in his popular book “Superintelligence”. What happens now is that the best we can do on these programmable computers is compile source code that emulates learning on top of frozen code, right? My opinion is that some aspects of machine learning are innately inefficient on von Neumann architectures and other conventional hardware, and performance may worsen with every increase in machine-learning complexity. In other words, machine learning cannot be efficient because the computer does not see the program as intelligence, but as a finite set of instructions with maximum flexibility.

There’s a chance that adding features of more generalized intelligence will create a computational bottleneck that greatly reduces “speed intelligence” to compensate for “quality intelligence”, cancelling out the combined effect of the two kinds of intelligence mentioned above. Or, to a lesser extent, the expected speed of machine intelligence could be lower by a couple of orders of magnitude, since general learning would be done in an emulated environment. An example is the convolutional neural network model of visual recognition, which requires intensive computational power partly because of the inefficiency of the underlying hardware. A fortunate immediate solution is to use GPUs, which increase computation by an order of magnitude. This is an architectural fix at the hardware level, but it may still not be enough for higher cognitive tasks.

The bottom line is: for the envisioned artificial general intelligence to evolve, assuming the correct AGI model is found and simulated, what immediately follows is an architectural redesign of our computer systems aimed at evolvability or “true learning”, not just programmability. This would combine the speed intelligence that computers exhibit today with the quality intelligence that humans possess.

What are your opinions on this? I want to know whether the hardware development of computers will ultimately be the stumbling block for the most intensive AI research today. Quantum computers and memristors are under development to push computers to better performance, but I think a more direct approach is needed. Or is it possible that if evolvability is integrated into a system, speed will naturally and significantly suffer?


  [ # 1 ]

Computer speed is not currently the problem. CPU power, multiprocessing and the cloud have eliminated this issue.
Computer languages with on-the-fly compilation enable things to be learned and added into a program in real time.

As an example, Skynet-AI is easily 50x faster than a human in responding to items it knows (people are constantly suggesting to slow down bots to make them seem more human). Most of the advanced chatbots have some ability to ‘learn’. AIML has a learn file; Skynet-AI writes its own code during the conversation. Items learned are easily integrated into the conversational stream.
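
The kind of runtime "learning" described above can be sketched in a few lines (a hypothetical illustration, not Skynet-AI's actual code): the bot generates source code for a new handler during the conversation, compiles it on the fly with `exec`, and integrates it immediately.

```python
# Hypothetical sketch: a bot that "writes its own code" at runtime.
# New handlers are generated as source text and compiled on the fly;
# the host program is never recompiled.

handlers = {}  # topic -> callable, built up during the conversation

def learn(topic, reply):
    # Generate source for a new handler and compile it immediately.
    src = f"def _handler(name):\n    return {reply!r}.format(name=name)"
    namespace = {}
    exec(src, namespace)  # on-the-fly compilation of the generated code
    handlers[topic] = namespace["_handler"]

def respond(topic, name):
    handler = handlers.get(topic)
    return handler(name) if handler else "I don't know about that yet."

learn("greeting", "Hello, {name}!")
print(respond("greeting", "Arjeus"))  # Hello, Arjeus!
print(respond("weather", "Arjeus"))   # I don't know about that yet.
```

Anything learned this way slots straight into the conversational stream, because the newly compiled handler is an ordinary function like any other.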

No one would say these learning systems are all that is needed for AGI. But, computational power is not the bottleneck.


  [ # 2 ]
Arjeus Guevarra - May 1, 2015:

It should ideally rewrite its own code.

This is the main assumption that keeps everyone looking in the wrong direction (for solutions). There is no need to rewrite any code; we don’t need to rewrite the operating system of a computer to make it facilitate new applications. What the system SHOULD be capable of is rewriting its data (knowledge and experiences), like every DBMS on the planet currently is. When you get this, you will understand that there is NO bottleneck wink
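
The distinction can be made concrete with a minimal sketch (my own illustration, assuming nothing about Hans's actual system): the code below never changes, yet the system "learns", because learning is just a data write, exactly as in a DBMS.

```python
# Sketch: fixed code, mutable data. The functions below never change;
# all "learning" happens as writes to the knowledge store, the same way
# a DBMS learns new rows without anyone rewriting the DBMS itself.

knowledge = {}  # mutable data: knowledge and experiences

def store(fact, value):
    knowledge[fact] = value          # learning = a data write

def recall(fact):
    return knowledge.get(fact, "unknown")

store("capital of France", "Paris")
print(recall("capital of France"))   # Paris
print(recall("capital of Mars"))     # unknown
```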


  [ # 3 ]

@Merlin, which computer languages have on-the-fly compilation? How efficient are these programs in terms of program execution?

The point here is that the human brain consumes orders of magnitude less power than a supercomputer. There is a good chance that today’s computers have fundamental limitations, forcing us to use increased computational power and energy just to reach good ANI levels, and not yet AGI level. There is a good chance that an architectural (and possibly even more fundamental) redesign of computers is required to facilitate optimal learning, and not just programmability as we know it today.

What do you think? The post above is a rather pessimistic opinion, and I hope we don’t have to wait for very advanced hardware to achieve AGI.

@Hans, do you mean hardware that is highly efficient at rewriting its “experiences” and executes them with no sacrifice in execution time?

Or if you mean a fixed code base, that is an emulated learning system on top of frozen code. This will slow a system down by some magnitude, unlike freshly rewritten code, which, once compiled, executes very fast, with every instruction going directly to hardware. The computer still relies heavily on serial operations and is only beginning to explore parallel capabilities, which the brain is very good at. Plus, the brain can change its architecture in a few days. Anyway, the slowdown from a simulation might still be faster than a human, so there is a good chance this may work.

Thanks guys for your replies!


  [ # 4 ]
Arjeus Guevarra - May 9, 2015:

@Merlin, which computer languages have on-the-fly compilation? How efficient are these programs in terms of program execution?

Many modern computer languages include “Just-In-Time” (JIT) compilation.

JIT compilation can be applied to a whole program, or can be used for certain capacities, particularly dynamic capacities such as regular expressions. For example, a text editor may compile a regular expression provided at runtime to machine code to allow faster matching – this cannot be done ahead of time, as the data is only provided at run time. Several modern runtime environments rely on JIT compilation for high-speed code execution, most significantly most implementations of Java, together with Microsoft’s .NET Framework. Similarly, many regular expression libraries (“regular expression engines”) feature JIT compilation of regular expressions, either to bytecode or to machine code.
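
The regular-expression case can be illustrated in Python (which compiles patterns to an optimized internal form at runtime rather than to machine code, but the principle is the same): the pattern only exists at run time, so it cannot be compiled ahead of time.

```python
import re

# The pattern arrives at run time (e.g. typed by a user), so it cannot
# be compiled ahead of time. re.compile turns it into an optimized
# compiled form once; repeated matching then reuses that compiled form.
user_pattern = r"\b\w+ing\b"
compiled = re.compile(user_pattern)

text = "Learning and programming are both interesting."
print(compiled.findall(text))  # ['Learning', 'programming', 'interesting']
```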

The AI language that I invented (JavaScript Artificial Intelligence Language - JAIL) takes advantage of this capability. It has proven to be very efficient in terms of program execution and is much faster than traditional approaches.


  [ # 5 ]
Arjeus Guevarra - May 9, 2015:

the brain can change its architecture in a few days.

This is where your perception of how the brain works is wrong; the brain is not a Turing machine, as it cannot run any code other than the code that is implemented. That is because the code is implemented in the ‘hardware’ of the brain (mostly referred to as ‘wetware’). And because the brain’s operating system is implemented in its hardware, it can definitely NOT change its architecture. What you are pointing at is the brain’s capability to rewire information pathways, which is definitely not its architecture but much more synonymous with the ‘database’.

To build a real AGI (which I’m doing, by the way), you need to figure out a data model that can represent the brain’s ability to create any data structure it needs to represent things (I have done this), and then engineer the operating system that handles base cognitive functionality on top of this data structure (working on it wink ).
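
One classic data model with this "represent anything" property (a generic illustration, not a description of Hans's actual model) is a triple store: (subject, relation, object) facts can encode hierarchies, graphs, and attributes without ever changing the schema.

```python
# Illustration: a triple store can represent arbitrary structures --
# hierarchies, graphs, attributes -- with one fixed schema. Learning a
# new kind of structure never requires changing the code.

triples = set()

def assert_fact(subj, rel, obj):
    triples.add((subj, rel, obj))

def query(subj=None, rel=None, obj=None):
    # None acts as a wildcard, so any pattern of facts can be retrieved.
    return [t for t in triples
            if (subj is None or t[0] == subj)
            and (rel is None or t[1] == rel)
            and (obj is None or t[2] == obj)]

assert_fact("cat", "is-a", "mammal")
assert_fact("mammal", "is-a", "animal")
assert_fact("cat", "has", "whiskers")
print(sorted(query(subj="cat")))
```

The cognitive machinery then becomes fixed code that reads and writes these triples, which is exactly the "rewrite data, not code" point from earlier in the thread.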


  [ # 6 ]

@Merlin, yeah, I researched a bit and it is a feature of several programming languages. My language of choice now is Python with the Anaconda distribution, and it supports JIT compilation.
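
In the Anaconda stack, the JIT capability comes from the Numba library. A small sketch (with a no-op fallback in case Numba is not installed, so the example still runs either way):

```python
# Sketch: JIT-compiling a numeric loop with Numba (shipped with the
# Anaconda distribution). The function is compiled to machine code on
# its first call; later calls run at native speed.
try:
    from numba import jit
except ImportError:
    # Fallback so the example runs without Numba: a no-op decorator.
    def jit(nopython=True):
        return lambda f: f

import numpy as np

@jit(nopython=True)
def dot(a, b):
    total = 0.0
    for i in range(len(a)):
        total += a[i] * b[i]
    return total

x = np.arange(1000, dtype=np.float64)
print(dot(x, x))  # 332833500.0
```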

