
Do computers think ???
 
 
  [ # 16 ]

Going back to the original topic, just because a program searches for a template before displaying it doesn’t mean it is thinking. Would you say Notepad thinks when you do a search and replace?
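
For illustration, that kind of template lookup amounts to little more than this (a toy Python sketch; the patterns and responses are invented, not taken from any real bot):

```python
# Toy sketch of pattern -> template lookup (hypothetical patterns, not from
# any real AIML set). The "bot" normalizes the input, finds a matching key,
# and returns the stored response, much like a search-and-replace.
templates = {
    "HELLO": "Hi there!",
    "WHAT IS YOUR NAME": "My name is Demo Bot.",
}

def respond(user_input: str) -> str:
    key = user_input.strip().rstrip("?!.").upper()
    return templates.get(key, "I do not have a template for that.")

print(respond("What is your name?"))  # -> My name is Demo Bot.
```

Whether a loop like that “thinks” is exactly the question on the table.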

 

 
  [ # 17 ]

Reading the topic from the start, I was just about to post a certain question… but Steve already steered us back towards the original topic wink

The way I see it, it is possible for computers to ‘think’ in a way that looks very similar to how humans think. However, for that to happen, the computer must employ the same thinking processes that we humans use when we think. So ultimately it comes down to cognition, which in turn needs things like episodic memory, emotional responses, autonomic goal setting, and a whole lot more. And if you think that would look a lot like a ‘conscious machine’, then you are right grin

 

 
  [ # 18 ]
Dave Morton - Jul 2, 2012:

Respectfully, I have to disagree at this point in an absolute sense, mainly because of the nature of how AIML works. For the most part, if you have multiple conversations with a typical AIML chatbot with a given user typing in the same inputs (bear with me here, there’s a point), you have a better than 90% chance of getting the same responses.

Wow, then maybe Skynet-AI DOES think. It is virtually impossible to have the same conversation with it two times in a row. wink

About all that I ever hoped for was the illusion of intelligence. In the last version I added extra features which allow it to write more of its own code. It generates new responses and solves math problems by adding new neurons/program code on the fly. Is that thinking?
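
As a rough illustration of what generating and running new code at runtime can mean in general (a hypothetical Python sketch, not Skynet-AI’s actual code), one way is to build the source for a new function from the parsed problem, load it, and run it:

```python
# Hypothetical illustration of generating and running new code at runtime.
# Not Skynet-AI's actual mechanism; just the general pattern of "writing" a
# new function from a parsed problem and then executing it.
def solve_by_generated_code(expression: str) -> float:
    source = f"def generated():\n    return {expression}\n"
    namespace = {}
    exec(source, namespace)          # load the freshly written function
    return namespace["generated"]()  # run the code we just wrote

print(solve_by_generated_code("(21 + 23) / 2"))  # -> 22.0
```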

 

 
  [ # 19 ]

Artificial Intelligence = Artificial thinking wink

 

 
  [ # 20 ]

http://www.scn.org/~mentifex/mindforth.txt (updated 1 July 2012) thinks in English.

 

 
  [ # 21 ]

No it doesn’t, Arthur. It is just following a load of IF…THEN…ELSE… commands.

 

 
  [ # 22 ]

The question is not whether this or that software thinks or does not think; it is not a simple binary yes/no. It is more a matter of how much, to what granularity, or to how many levels of indirection of abstract processing a human, animal, or piece of software is capable.

If a human has to think to accomplish X, and a machine can do X, then the machine can be said to think.

And yes, a normal calculator thinks then, albeit in an extremely limited domain. BUT if the full general-purpose and almost infinitely deep abstraction that the human mind is capable of is, say, ‘level 1,000,000,000’, then a calculator ‘thinks’ at, say, level 1, or perhaps 10. smile

Now add pattern recognition and ‘conditional branching’, which a computer has and a calculator doesn’t, and perhaps you are at level 100 (of 1,000,000,000).

‘Thinking’ is not a binary yes/no; it is a question of what depth, or what level of generality of abstract processing (i.e. thought), this entity (man, machine, animal, alien) is capable of.

Intelligence is directly related to how general or broad the set of problem types that can be solved is. So a modern digital calculator is more intelligent than an abacus, a computer is more intelligent than a calculator, an ape is more intelligent than a computer, and a human is more intelligent than an ape.

People will never see computers as ‘thinking’, though, until they can perform ‘thinking’ (i.e. abstract processing) in an extremely varied set of domains. For that, they will have to understand the world. To do that, they will have to partake in natural language understanding smile

Now computers won’t ever REALLY understand the world the way we do, simply because we have senses that they do not (and probably vice versa eventually), but language will be a kind of ‘common denominator’ for us to communicate with them and solve real-world problems.

 

 
  [ # 23 ]

With that said, now the question is: is it possible to program a piece of software that can take the “level 100” computer up to, say, level 1000?

 

 
  [ # 24 ]

Um… Yeah… What he said. cheese (meaning Victor, of course)

While I agree with you, Victor, I think there’s a component of “breadth” to consider, as well. Even a calculator can surpass the average human in depth, within the very limited domain of “simple math”, and a computer, likewise, can do so in a broader sense, though still in a fairly limited fashion. However, what we lack in depth, we more than make up for in breadth, and that’s where even the so-called “lower life forms” (e.g. chimpanzees, gorillas, etc.) far exceed the capabilities of computers, and where we humans surpass these “lower life forms”.

 

 
  [ # 25 ]

Well, that is where two ‘schools of thought’ come in. 

School ‘1’ says: let’s take all these ‘narrow AI’ agents and write a kind of ‘wrapper’ that will ‘call’ each at the appropriate time. A kind of central ‘general’ core, which calls specific modules. So here, you start with a more detailed focus (specific algorithms) and sort of ‘add generality’ as you go.

School ‘2’ says: let’s start from the ground up with general algorithms. So this school says let’s be as general as we can, and have the system learn the more specific things. You’re going in the reverse direction from School 1.
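
As a concrete (and entirely hypothetical) picture of School 1, the ‘general’ core is basically a dispatcher that tries each narrow module in turn; something like this Python sketch, with made-up module names:

```python
import re

# Hypothetical School-1 sketch: a general core that routes each input to
# whichever narrow-AI module claims it. The module names are invented.
def math_module(text):
    m = re.search(r"(\d+)\s*\+\s*(\d+)", text)
    return str(int(m.group(1)) + int(m.group(2))) if m else None

def greeting_module(text):
    return "Hello!" if "hello" in text.lower() else None

NARROW_MODULES = [math_module, greeting_module]

def general_core(text):
    # Try each specialist in turn; the first one with an answer wins.
    for module in NARROW_MODULES:
        answer = module(text)
        if answer is not None:
            return answer
    return "No module handled that."

print(general_core("what is 21 + 23"))  # -> 44
```

School 2 would instead start at the general core and try to have the system learn the specialists, rather than hand-coding them.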

Some members on Chatbots.org are going with School 1. I’m trying School 2. Who knows which will turn out to be successful… perhaps both. I sure hope it is one of them! Who is to say? Time will tell smile

 

 
  [ # 26 ]
Dave Morton - Jul 3, 2012:

I think there’s a component of “breadth” to consider, as well. Even a calculator can surpass the average human in depth…

Agreed… two things the computer has over all humans are:

[1] speed of processing

[2] freedom from error

These two things mean that once understanding is achieved (for example in the form of language; and yes, other forms of understanding count as well: visual, auditory, etc.), once the computer has some kind of understanding of the world in whatever form, its speed and ‘error-freeness’ will turn it into an incredible force!

 

 

 
  [ # 27 ]

I’d have to debate the concept of “error free”, but I would accept a concept of “minimal error”, instead. smile

 

 
  [ # 28 ]
Victor Shulist - Jul 3, 2012:

If a human has to think to accomplish X, and a machine can do X, then the machine can be said to think.

I disagree. By that definition, when we think about how to do something, then write an algorithm for it and put it in a machine so the machine can do it, the machine would be ‘thinking’. This is why I would argue that a calculator does not ‘think’ in any way or by any definition; the ‘thinking’ part in a calculator is still human, it’s the thinking of the programmer who made the algorithm. This is also the case with (current) chatbots; the ‘thinking’ of the chatbot is that of the programmer who put his/her thoughts into the algorithms.

In my perception, ‘thinking’ is very close to ‘consciousness’; it has connotations of ‘self-generation’ (of the thoughts). I would say that real thinking is thinking ‘by itself’. If ‘something’ is not ‘thinking by itself’, then it is not really thinking at all; it’s following its programming. We have this in humans as well; we do not see ‘instinct’ as the equivalent of ‘thinking’. Instead, we see instinct as ‘the things we do without thinking about it’.

Come to think of it, by that definition a calculator does not think but does have instincts. And indeed, I see human instincts as preprogrammed algorithms.

 

 
  [ # 29 ]

Interesting, Hans, and I think (no pun intended) that you are largely right, or at least there’s not much wrong with your perception of the current state of affairs.

However, what would you say about a system like this:

http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?_r=2&partner=rssnyt&emc=rss

Where the programmer gives the machine the means, but the machine teaches itself…

 

 
  [ # 30 ]

Hans - I would say that real thinking is thinking ‘by itself’. If ‘something’ is not ‘thinking by itself’, then it is not really thinking at all; it’s following its programming.

I would say thinking “by itself” (without input) is linked to creativity. But do you believe that thinking by itself, with input, is “thinking”?

Let me try a very limited-domain thought experiment (1 input, 1 output) and see what everyone “thinks”:

USER: What is the number between twenty one and twenty three?
AI: Twenty two

This is an actual input/output pair from Skynet-AI. The numbers 21, 23, and 22 do not exist anywhere in Skynet-AI’s programming or database. Neither do the words “twenty one”, “twenty two” or “twenty three”. The AI writes its own code to understand the natural language input, solve the problem, and produce natural language output.
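
For comparison, here is roughly what that pipeline (number words in, arithmetic in the middle, number words out) looks like when written out by hand; this is a toy Python sketch, not Skynet-AI’s actual code, and it only covers a narrow range of numbers:

```python
import re

# Toy version of the pipeline described above: parse the number words,
# solve the problem numerically, and answer in number words again.
# Only handles roughly 21-59 for brevity; not Skynet-AI's code.
UNITS = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5,
         "six": 6, "seven": 7, "eight": 8, "nine": 9}
TENS = {"twenty": 20, "thirty": 30, "forty": 40, "fifty": 50}

def words_to_number(words: str) -> int:
    return sum(UNITS.get(w, TENS.get(w, 0)) for w in words.split())

def number_to_words(n: int) -> str:
    tens, unit = divmod(n, 10)
    ten_word = {v: k for k, v in TENS.items()}[tens * 10]
    unit_word = {v: k for k, v in UNITS.items()}.get(unit, "")
    return f"{ten_word} {unit_word}".strip()

def answer(question: str) -> str:
    a, b = re.search(r"between (.+?) and (.+?)\?", question).groups()
    midpoint = (words_to_number(a) + words_to_number(b)) // 2
    return number_to_words(midpoint).capitalize()

print(answer("What is the number between twenty one and twenty three?"))
# -> Twenty two
```

The difference is that Skynet-AI generates its equivalent of those steps at runtime instead of having them written out in advance like this.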

So let me ask you, in this example, does Skynet-AI “think”?

 

 
