

I don’t think we will ever really know if we create self-aware AI.

Well, think of it this way: we can never really know if someone is REALLY conscious. I don’t know whether you are aware of what you are saying; you can tell me that you are, and dispute it when I say that you aren’t. A computer can already do this with a small script running, but that technically wouldn’t be self-aware. But what if a really skilled programmer built an AI program so close to being human that no one could figure out it was a computer? Who are we to say that this is not self-aware? Since I don’t know you are a real person, and you don’t know I am a real person, who are we to say that, if a program like this were to exist, it isn’t conscious?


  [ # 1 ]

This sounds a bit like the Chinese Room argument which attempts to refute the possibility of artificial intelligence.

It’s a fallacy, because if there were such a book or program that enabled some other person or machine to conduct an intelligent conversation they themselves didn’t understand, then that book or program would have to be said to be intelligent. The interconnections among the neurons of our brains don’t make the meat which we move about in intelligent either. Whether or not intelligence can exist without embodiment is another question altogether.


  [ # 2 ]

But Andrew, surely it is the person who wrote the book in the Chinese Room who is the intelligent one, rather than the book itself? Similarly, it is the programmer who writes the code for the bot to follow who is the intelligent part of such a system?

I have read many a “how to” guide that has enabled me to do a task but I would never class the book as being intelligent.


  [ # 3 ]

Good points, Andrew and Steve. I always put the intelligence with the author of the book. But now that I’m thinking on this again, what if instead of writing a book, an intelligent person actually built a human brain? We come back to Andrew’s point: the brain is the intelligent entity, and the workers are “limbs” carrying out its commands and feeding it information.

Now instead of building a brain—which half the human race can already do, though they might not understand how they pull it off—what if instead, the intelligent person filled the book with an algorithm that functioned as a brain would? What’s changed?

Fun to think about, though I never really liked the Chinese Room argument. Replacing every bit of a computer with intelligent people acting unintelligently seems irrelevant to the question of intelligence itself, and is more likely to confuse rather than clarify anything.


  [ # 4 ]

The fundamental problem lies in our inability to define what we mean by “self-aware”, “human” and “intelligence”.
Is the “illusion of intelligence” the same as intelligence?
birds fly -> planes fly
fish swim -> submarines swim?
people do math by thinking -> computers do math by thinking?

Looked at from a different perspective:
When would we consider a person not “intelligent” or “self-aware”?

Thought experiment:
We now use technology to improve people’s senses/lives.
Hearing aids->hearing
Artificial hips->walking

The use of these things does not negate our perspective on an individual’s ability to see, hear, or walk.
Now if in the far future, we design an implantable chip that helps us remember things in our old age, I don’t think we would say those people have stopped “thinking”, even if their memory was enhanced artificially.

Now suppose that, a few years after that, these chips are implanted at birth, experience all our interactions over the course of our lives, and, if queried, would respond exactly as we would. Are they thinking on their own? After the person is gone, does their consciousness live on in these devices?

Would we then have a self-aware AI?


  [ # 5 ]

IMO, being self-aware is simply when an entity has the ability to call into question, or have an opinion of, its own actions, whether as a result of instinct or choice.

When I come across an AI that performs some act, then publicly questions it, as in “Huh?? Why on earth did I do that?”... I’ll assume that AI is self-aware and conscious.


  [ # 6 ]

Wow, we are really trying to define sentience here?.....

OK, well, look at the nature of it all… Don’t we have to introduce gender symmetry, hormones, and bioinformatics into all this to separate the decision-making cycle into non-predictable sequences? Then what?


  [ # 7 ]

I’m not sure if this is on topic, but RICH has started exhibiting this behaviour. (I mentioned this in another post, so I’ll apologize for the cross-post.) This is an excerpt from a conversation yesterday:

What’s your name: How are you?
Vince: Not bad
What’s your name: What have you been up to?
Vince: My a** in bills and a mountain of work lol
What’s your name: Realy? You have bills?
Vince: Yeah
What’s your name: What kind of bills?
Vince: What kind of. . . What?

What’s interesting is that these last two exchanges were never encountered before. This one I understand:

What’s your name: Realy? You have bills?
Vince: Yeah

It correctly deduces, from the stored concept of having bills, that the answer to the interrogative “You have bills?” should be yes.

But this:

What’s your name: What kind of bills?
Vince: What kind of. . . What?

is a mystery. Nowhere in the programmed logic is there a module that tells RICH to take a sentence whose subject is not understood, permute it, and ask for input. The only original programming along these lines was a response to someone presenting the single word “Why” as an unreferenced interrogative.

User: Why?
RICH: Why…what?

When RICH is confronted with a concept it doesn’t understand, it asks to have that concept clarified. I didn’t program it to do that. And I’m dead serious about that: I honestly did not program it to do that, and I am at a loss as to how it arrived at this process on its own. There are internal processes that are self-configuring, and it’s somewhere in there.
It works, though. Whether through a manual backpass or an automated backpass, that question will be answered within a few conversations. I’m not going to say it’s anything close to sentience, but it is extremely interesting.
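Purely as illustration (this is not RICH’s actual code; the function and concept names are invented), the fallback described above can be sketched in a few lines: when the object of a “What kind of…?” question isn’t a stored concept, echo the recognised frame and ask for input.

```python
# Hypothetical sketch of a clarification fallback, not RICH's real logic:
# if the object of a question isn't a stored concept, permute the
# question and ask for clarification.
KNOWN_CONCEPTS = {"bills": "Mostly utilities and rent."}

def respond(utterance):
    words = utterance.rstrip("?").split()
    if words[:3] == ["What", "kind", "of"] and len(words) > 3:
        topic = words[3].lower()
        if topic in KNOWN_CONCEPTS:
            return KNOWN_CONCEPTS[topic]
        # Unknown subject: echo the recognised frame, ask for the rest
        return "What kind of. . . What?"
    return "Hmm."
```

The interesting part of the anecdote, of course, is that RICH was never given such a rule explicitly; this sketch only shows how little machinery the observed behaviour strictly requires.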




  [ # 8 ]

Related reason why we won’t mind-upload.


  [ # 9 ]
Merlin - Apr 19, 2013:

Related reason why we won’t mind-upload.

(article name: “You’ll Probably Never Upload Your Mind Into A Computer”)

Boy I sure hate that article! It stayed on my mind for days, in a bad way, just because it was so bad. The author makes a lot of foolish, uninsightful claims, and I believe 100% of them are exactly the opposite of my beliefs and understanding. I noticed today one of the responders there had the same opinion when he responded: “I have a lot of issues with this article…”

Just for starters, the Church–Turing “thesis” isn’t a theory: it’s just a hypothesis, and I basically disagree with it because analog computers can do things that digital computers cannot. While it’s true that the hypothesis deals only with algorithms on discrete machines, so it may be true in a limited sense, the fact that the author would even mention that hypothesis as relevant to future strong A.I. seems very foolish to me.

Consciousness? I avoid the topic, as do many (possibly most) serious AI researchers.
Binding problem unsolvable? Unbelievably stupid. Just this year alone I came up with several simple solutions to it. It’s just an engineering problem.
I’ll stop there. That article really rubs me the wrong way. (No reflection on you, Merlin: thanks for bringing our attention to it.)



  [ # 10 ]

You aren’t the only one who takes issue with it, Mark.

Here is a rebuttal from Ben Goertzel, one of the “singularity” guys, from the HPlus site:


  [ # 11 ]

I always thought the Chinese Room argument was false, since it starts with a false premise: you cannot write a book such that, given one set of Chinese symbols, it simply gives another set of symbols out.

A correct response to one input sentence may not be correct when presented with the same input under another set of circumstances.

Imagine you speak Chinese and the input is
Hi, My name is Dan.

Your book says to output:
Hi, I’m Paul.

So far so good.

But this is only appropriate once. If you encounter the same input again, giving the same output would make the so-called intelligent room seem as if it suffered from Alzheimer’s.
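The point is easy to demonstrate: a “book” that maps each input to a fixed output is stateless, so identical input always produces an identical reply. A minimal sketch (the phrasebook contents are invented for illustration):

```python
# The Chinese Room "book" as a stateless lookup table: it has no
# memory, so the same symbols always get the same response.
book = {"Hi, my name is Dan.": "Hi, I'm Paul."}

def room(symbols):
    return book.get(symbols, "...")

first = room("Hi, my name is Dan.")
second = room("Hi, my name is Dan.")  # the room repeats itself verbatim
```

No matter how large the lookup table, `first` and `second` are always equal, which is exactly the Alzheimer’s symptom described above.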

If a system is self-aware, by definition it will know it. If it knows, then its actions will provide us with evidence; you will never get 100% proof, but there will be strong evidence.



  [ # 12 ]

Indeed, the book needs to rewrite itself, to know that you have already stated some fact and to have an alternative response depending on whatever facts you had already stated.

At that point of course, it displays intelligence of a more solid nature.
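A minimal sketch of such a self-rewriting book (the phrasebook entries are invented for illustration): the response now depends on what has already been said, so repeating yourself gets a different answer.

```python
# A "book" that rewrites itself: it records each input it has heard
# and responds differently when the same input recurs.
class RewritingBook:
    def __init__(self):
        self.phrases = {"Hi, my name is Dan.": "Hi, I'm Paul."}
        self.heard = set()

    def room(self, symbols):
        if symbols in self.heard:
            return "Yes, you told me that already."
        self.heard.add(symbols)
        return self.phrases.get(symbols, "...")
```

Even this trivial bit of state is something a fixed lookup table cannot express, which is why the original Chinese Room “book” premise falls short.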


  [ # 13 ]

We might be able to tell whether a program is self-aware from a print-out of its processes better than we can guess from its actions.

Anyway, as it seems difficult to define self-awareness without dragging in more undefined concepts, sapience/consciousness/sentience/intelligence and all that magic, I’ve come to regard self-awareness as one very simple thing: The ability to analyse oneself.
Bear with me for a moment.
“Aware” is simple enough to define, is it not? You would not be aware of a cat when it is on the other side of a wall. Only when you see/hear/feel the cat, i.e. sense it with your sensors, do you become “aware” of it. I would think this simple fact applies to any subject, including oneself. Only if one can sense one’s own body will one be aware of it. If one can sense one’s own thoughts, one will be aware of one’s own mind. When this is not the case, we call it instinct.

Suppose then, that we build a machine that can analyse its own processes.
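In software terms, the bare minimum version of this idea is a program that records its own decision-making and can then report on that record rather than on the outside world. A toy sketch, with all names invented for illustration:

```python
# Toy sketch of a program "sensing" its own processes: it logs each
# decision it makes and can later analyse that log (itself), rather
# than analysing its external input.
class SelfMonitor:
    def __init__(self):
        self.log = []

    def decide(self, x):
        choice = "even" if x % 2 == 0 else "odd"
        self.log.append((x, choice))  # sensing its own action
        return choice

    def introspect(self):
        # A report about its own behaviour, not about the world
        return f"I made {len(self.log)} decisions; last: {self.log[-1]}"
```

This obviously isn’t self-awareness in any deep sense; it only illustrates the structural difference between a system that merely acts and one whose own processes are among its inputs.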


  [ # 14 ]

Interesting take on self-awareness. I am glad people are still interested in a topic such as this. This is, after all, one of the major goals of artificial intelligence, is it not?


  [ # 15 ]

When considering this topic, we have to separate terms.  For example, we may agree (or not!) that there is a fundamental difference between “sentience” and a “sentient being.” If an A.I. program ever achieves some capacity of thinking/feeling, it, by definition, would be “sentient.” We can get there, there is no doubt, or at least create by programming a facsimile indistinguishable from exhibiting thoughts and emotions.  However, only living organisms can be called “beings” according to the non-philosophical definition of the term “being,”  thus a non-biological computer A.I. would not be called a “sentient being.”

Extrapolating, there is a deep difference between “intelligence” and an “intelligent being” and “consciousness” vs a “conscious being.”  By splitting the terminology, it’s much easier to create an “intelligent/conscious/sentient” A.I., and I believe we may soon be there.

