How to build a conscious AI-mind
 
 

I’m starting this topic to put some of my ideas up for discussion from a more philosophical viewpoint. I think it is possible (and not even that difficult) to actually engineer a conscious AI, and I have some insights to share on the matter:

1. Building a conscious AI is probably not that difficult. Most AI-research is looking at very complex solutions to a problem that is fairly simple in nature: how to create a machine that can do what our brain can do. The (overly) complex solutions focus either on imitating the biological complexity of the brain (neural nets) or on emulating each and every feature of the brain in a dedicated software-process (e.g. NLP). However, the main functions of our brain, memorization and processing, are already emulated in today’s computers. Turing invented the Turing Machine as the proposed digital equivalent of a human brain and the Von Neumann Architecture was formulated (and implemented) to actually build a Universal Turing Machine. The real complexity of a human brain already has its counterpart in the electronic complexity (integrated circuits) that underlies today’s computers. There is no reason why we should replicate this complexity AGAIN in the software that is running the AI-mind. So it stands to reason that the software only needs to replicate the intrinsic processes that are working in our own brains, and everything else (including consciousness) will stem from that.

2. Our brain can be seen as a biological Von Neumann Architecture that is running an operating system and a (very comprehensive) database management system. There is also an IO-subsystem for communication with the ‘outer world’. The brain operating system is not constantly being rewritten; just like a computer OS, it is fairly static. Of course there is an ‘upgrade’ sometimes, but that can be seen as ‘evolution’ (and we do actually see software versions as the evolution of a computer program). Our ‘brain OS’ mainly implements the ‘plumbing’ (similar to a computer OS); all learning, experience and therefore consciousness is held in the data-model.

3. One of the most important aspects of ‘consciousness’ is that we are actually ‘driven’ by instinct, desires and ambition. Instinct is again fairly static (i.e. it only changes through evolution) and can therefore be programmed, but desires and ambition are forces that drive themselves. Because of this I believe that the mind-OS needs one or more feedback-loops, because a feedback-loop is the equivalent of a self-amplifying system. I think that feedback-loops are essential for implementing AI-consciousness.
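
To make point 3 concrete, here is a minimal sketch (Python, with invented numbers) of what I mean by a feedback-loop: the output of each cycle is fed back in as the next input, so the ‘drive’ amplifies itself.

# Minimal sketch of a self-amplifying drive. The constants
# (gain, ceiling) are illustrative only.
def feedback_loop(drive=0.1, gain=1.05, ceiling=1.0, steps=20):
    history = []
    for _ in range(steps):
        drive = min(drive * gain, ceiling)  # output feeds back as input
        history.append(drive)
    return history

print(feedback_loop())  # the drive strengthens itself toward the ceiling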

Looking forward to the replies on this one. :)

 

 
  [ # 1 ]

Interesting metaphor.

The operating system can be safely eliminated.
So you can bootstrap to your AI-consciousness.

 

 
  [ # 2 ]
Hans Peter Willems - Mar 7, 2011:

[...]  Turing invented the Turing Machine as the proposed digital equivalent of a human brain and the Von Neumann Architecture was formulated (and implemented) to actually build a Universal Turing Machine.

http://en.wikipedia.org/wiki/Von_Neumann_architecture is (correct me if I am wrong) not a massively parallel (“maspar”) architecture like our human brains are. Therefore in the Mentifex AI Minds such as http://www.scn.org/~mentifex/AiMind.html and in MindForth, the human maspar brain-mind is merely simulated, not re-implemented.

Hans Peter Willems - Mar 7, 2011:

[...] So it stands to reason that the software only needs to replicate the intrinsic processes that are working in our own brains, and everything else (including consciousness) will stem from that.

Sounds reasonable. In fact, MindForth and AiMind.html both rely heavily on breaking down the process of spreading activation into a http://code.google.com/p/mindforth/wiki/ConSciousness tier where concepts “crest” briefly at a maximum activation, and a http://code.google.com/p/mindforth/wiki/SubConscious tier where concepts linger as available building blocks of new thought while gradually degrading down towards zero activation.
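
A rough toy version of those two tiers might look like this (my own illustrative Python sketch, not the actual MindForth code; the names and thresholds are invented):

# Toy sketch of a two-tier activation scheme: a concept "crests"
# at maximum activation (the conscious tier), then lingers in the
# subconscious tier while its activation decays toward zero.
CREST = 1.0   # activation at which a concept crests into consciousness
DECAY = 0.8   # per-cycle decay factor in the subconscious tier

activations = {"dog": 1.0, "cat": 0.6, "bone": 0.3}

for cycle in range(5):
    conscious = [c for c, a in activations.items() if a >= CREST]
    print(f"cycle {cycle}: conscious={conscious}")
    # after cresting, a concept drops back and decays with the rest,
    # lingering as a building block for new thought
    activations = {c: a * DECAY if a < CREST else CREST * DECAY
                   for c, a in activations.items()}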

Hans Peter Willems - Mar 7, 2011:

[...]
I think that feedback-loops are essential for implementing AI-consciousness.

Yes, so do I, and I must confess that the theory of consciousness which I put forth sounds perhaps too simplistic because it relegates consciousness mainly to being the “searchlight of attention”, but I feel strongly that not much more is required for the dynamic process of consciousness—the feeling that, wherever we shift our attention, we perceive our own process of perceiving.

Meanwhile Dr. Ben Goertzel and his OpenCog group, after two years of gearing up, have finally published http://opencog.org/roadmap as their “Roadmap to transhuman AGI by 2023”, so the race to human-level AI and beyond is really heating up now.

 

 
  [ # 3 ]

It seems that prediction is the main mind process, the more I read lately. It’s the only model I’ve seen that makes sense. I guess the question is: how does the brain handle a non-predicted event? Therein lies the essence of intelligence.
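
As a toy illustration of that question (a hypothetical Python sketch): a predictor that expects the last event to repeat, and flags any non-predicted event as a surprise that would need deeper processing.

# Toy predictor: predict that the next event repeats the last one;
# anything non-predicted is a "surprise" demanding real intelligence.
def watch(events):
    last = None
    for event in events:
        if last is not None and event != last:
            print(f"surprise! expected {last!r}, got {event!r}")
        last = event

watch(["sunrise", "sunrise", "sunrise", "eclipse", "sunrise"])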

 

 
  [ # 4 ]

Test-Driven Development tells us to write tests first, then write the code to pass them. Then, as you write more code, if a test fails you go back and fix the code. Wouldn’t it be great if the program could make hypotheses and try out ways of passing the tests on its own?
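
Something like this crude generate-and-test loop, perhaps (an illustrative Python sketch; the ‘hypotheses’ are hand-listed here, where a real system would have to generate them):

# Generate-and-test: try candidate "hypotheses" (functions)
# against the tests until one passes.
def passes(f):
    return f(2, 3) == 5 and f(10, -1) == 9

hypotheses = [
    lambda a, b: a * b,
    lambda a, b: a - b,
    lambda a, b: a + b,   # this candidate will pass
]

for i, h in enumerate(hypotheses):
    if passes(h):
        print(f"hypothesis {i} passes the tests")
        break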

 

 
  [ # 5 ]

Hans Peter Willems - Mar 7, 2011:

1. Building a conscious AI is probably not that difficult. [...]

If this were true, don’t you think that people would have been able to do this 50 years ago?

 

 
  [ # 6 ]
8PLA • NET - Mar 7, 2011:

Interesting metaphor.

The operating system can be safely eliminated.
So you can bootstrap to your AI-consciousness.

I think we still need the OS (i.e. the plumbing) to run the ‘data-model’. But I do argue that this ‘brain-OS’ does not ‘implement’ things like intelligence and consciousness; it merely supplies the base functionality for intelligence and consciousness to emerge in the data-model.
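
To picture that split (an illustrative Python sketch, invented names): the ‘brain-OS’ offers only a few generic, static operations, and everything learned accumulates in the data it operates on.

# The "OS" is static plumbing; all learning lives in the data-model.
class BrainOS:
    def __init__(self):
        self.memory = {}              # the data-model: this is what grows

    def store(self, key, value):      # plumbing: never rewritten
        self.memory.setdefault(key, []).append(value)

    def recall(self, key):            # plumbing: never rewritten
        return self.memory.get(key, [])

mind = BrainOS()
mind.store("fire", "hot")
mind.store("fire", "dangerous")
print(mind.recall("fire"))  # knowledge lives in data, not in code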

 

 
  [ # 7 ]
Arthur T Murray - Mar 7, 2011:

http://en.wikipedia.org/wiki/Von_Neumann_architecture is (correct me if I am wrong) not a massively parallel (“maspar”) architecture like our human brains are. Therefore in the Mentifex AI Minds such as http://www.scn.org/~mentifex/AiMind.html and in MindForth, the human maspar brain-mind is merely simulated, not re-implemented.

I think this is one of those ‘complexity-pitfalls’ that most AI-researchers run into; the parallel processing that is going on in our brain is just the way that ‘processing’ is implemented in the human brain. I see no reason that ‘processing’ needs to be implemented in a certain way to be able to substantiate consciousness. In a computer we already have ‘processing’ covered. Parallelism is (in this case) merely a solution to speed bottlenecks, and although we do have to deal with the ‘Von Neumann bottleneck’, I still don’t see that as a barrier to AI-consciousness. Maybe the first AI-conscious machines will be slow thinkers, but they will be conscious nonetheless.
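
To illustrate why parallelism need not be a barrier (a toy Python sketch): a serial loop can emulate one ‘parallel’ update step by computing all new activations from the old state before committing them. Slower than parallel hardware, but functionally the same.

import random

# 8 "neurons", fully connected with random weights (invented sizes)
neurons = [random.random() for _ in range(8)]
weights = [[random.uniform(-1, 1) for _ in range(8)] for _ in range(8)]

for step in range(3):
    # compute every neuron's next value from the *current* state...
    nxt = [sum(w * n for w, n in zip(row, neurons)) for row in weights]
    # ...then commit all of them at once, as parallel hardware would
    neurons = [max(0.0, min(1.0, v)) for v in nxt]

print(neurons)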

 

 
  [ # 8 ]
Toby Graves - Mar 8, 2011:

It seems that prediction is the main mind process, the more I read lately. It’s the only model I’ve seen that makes sense.

I totally agree that Jeff Hawkins is on to something. :)

Toby Graves - Mar 8, 2011:

I guess the question is: how does the brain handle a non-predicted event? Therein lies the essence of intelligence.

Not sure if that IS the essence of intelligence, but great food for thought.

 

 
  [ # 9 ]
Robert Mitchell - Mar 8, 2011:

Test-Driven Development tells us to write tests first, then write the code to pass them. Then, as you write more code, if a test fails you go back and fix the code. Wouldn’t it be great if the program could make hypotheses and try out ways of passing the tests on its own?

What you describe is the continuous process of deduction, deliberation and decision, i.e. reasoning. Clearly a conscious AI should be able to do that. However, I state again that this is not ‘code’ that needs to be rewritten and tested; it is data that is being evaluated and either validated or discarded, or something in between, like ‘not sure yet’.
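
In code terms I imagine something like this (an illustrative Python sketch with made-up evidence values): each proposition is data carrying a status, not code to be rewritten.

# Reasoning as evaluation of data: each proposition ends up
# validated, discarded, or the in-between "not sure yet".
propositions = {
    "fire is hot": 0.99,
    "the moon is cheese": 0.01,
    "it will rain tomorrow": 0.5,
}

def evaluate(evidence):
    if evidence > 0.9:
        return "validated"
    if evidence < 0.1:
        return "discarded"
    return "not sure yet"

for claim, evidence in propositions.items():
    print(f"{claim!r}: {evaluate(evidence)}")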

 

 
  [ # 10 ]
Jan Bogaerts - Mar 8, 2011:

If this were true, don’t you think that people would have been able to do this 50 years ago?

Surely not. By your reasoning that would mean that something as simple as the paperclip would have ‘emerged’ the moment that we were able to make small pieces of metal and bend them.

Most great inventions have proven to be pretty simple in nature, once someone has thought of them. And afterwards the collective mantra is ‘why didn’t we think of that, it’s so OBVIOUS’.

Couple that with the human tendency to keep following the path once chosen, putting all effort into defending that choice instead of constantly looking for the small reason that could invalidate it.

In my view, the fact that we have 50 years of AI research behind us means that, instead of still trying to validate those 50 years of research, we should be thinking about why those 50 years have NOT brought us conscious AI. I think there are only two possible answers to that question: either conscious AI is impossible (which is already disputed by many scientists, like David Chalmers), or we are barking up the wrong tree. I vote for the last option: we seriously need to rethink AI-research (as several people, e.g. Jeff Hawkins and David Chalmers, are already doing).

 

 
  [ # 11 ]

I recommend this book (“Self Comes to Mind: Constructing the Conscious Brain” by Antonio Damasio) to you guys!

 

 
  [ # 12 ]
Hans Peter Willems - Mar 7, 2011:

1. Building a conscious AI is probably not that difficult. [...]

Most great inventions have proven to be pretty simple in nature, once someone has thought of them. And afterwards the collective mantra is ‘why didn’t we think of that, it’s so OBVIOUS’.

Sort of like the “Egg of Columbus”.

Hans Peter Willems - Mar 7, 2011:

3. One of the most important aspects of ‘consciousness’ is that we are actually ‘driven’ by instinct, desires and ambition. Instinct is again fairly static (i.e. it only changes through evolution) and can therefore be programmed.

Maybe the problem is that although instinct could be programmed, “what exactly to program” still escapes us.  If you agree with scientists who estimate that humans branched off from our common ancestor with chimpanzees about 5–7 million years ago, then maybe a million years from now they will be talking about how AI life branched from humans in our time.

How do we compress a million years of evolution into a decade or two? We also don’t want just random evolution, but guided evolution that will result in intelligent entities that can communicate with us. Animals had the benefit of an external environment to help shape the evolutionary path. With AI we must build both the animal and the environment. We must decide how to feed our evolving AI “artificial experiences” if it is to learn/evolve/progress.
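
As a toy picture of guided evolution (a hypothetical Python sketch), where we supply both the ‘animal’ and the ‘environment’ that scores it:

import random

# The environment we designed: a target the "animal" must match.
TARGET = [1, 0, 1, 1, 0, 1, 0, 1]

def fitness(genome):               # the artificial experience we feed it
    return sum(g == t for g, t in zip(genome, TARGET))

population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]      # selection: guided, not random
    population = [
        [bit if random.random() > 0.1 else 1 - bit   # mutation
         for bit in random.choice(parents)]
        for _ in range(20)
    ]

print("best fitness:", max(fitness(g) for g in population), "of", len(TARGET))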

 

 
  [ # 13 ]

Most great inventions have proven to be pretty simple in nature, once someone has thought of them. And afterwards the collective mantra is ‘why didn’t we think of that, it’s so OBVIOUS’.

Perhaps you mean something like resonance?

By your reasoning that would mean that something as simple as the paperclip would have ‘emerged’ the moment that we were able to make small pieces of metal and bend them.

Well, it’s not exactly like the world’s greatest minds spent their entire careers looking for a (as in any) working solution to put 2 or more papers together using a small metal bar, did they?

 

 
  [ # 14 ]

maybe a million years from now they will be talking about how AI life branched from humans in our time.

Interesting. That’s something I’d like to know too: how will this turn out? Ah well, I guess we are not meant to know.

 

 
  [ # 15 ]
Merlin - Mar 8, 2011:

Maybe the problem is that although instinct could be programmed, “what exactly to program” still escapes us.

We might not know ‘everything’ that goes into instinct, but there are some prime suspects that are obviously seated in it, for example self-preservation and the drive to procreate. There are several others that, I’m sure, can easily be identified. I think that by using thought-experiments, several more things might be discovered that can be preprogrammed much like instinct.

Again, there is not much use in emulating instinct ‘exactly’ as it works in humans. It’s much more useful to look at the function of instinct (i.e. preprogrammed behavior that can bootstrap other things) and emulate that. Ask yourself ‘what does it do in a human brain’ and then think about ‘how can we create that effect in a software program’.
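
For example (an illustrative Python sketch, invented names): a small table of hard-wired drives, fixed at ‘birth’, that other behavior can bootstrap from.

# Instinct as a static table of preprogrammed drives; only
# "evolution" (a new program version) would ever change it.
INSTINCTS = {
    "self_preservation": lambda state: state["danger"] > 0.5,
    "procreation":       lambda state: state["energy"] > 0.8,
}

def act(state):
    for drive, triggered in INSTINCTS.items():
        if triggered(state):
            print(f"instinct fired: {drive}")

act({"danger": 0.9, "energy": 0.2})   # -> self_preservation fires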

 
