Do computers think ???
 
 
  [ # 46 ]
Hans Peter Willems - Jul 4, 2012:

I would say that my own ‘thinking about a cat’ involves a lot more than ‘identifying the pattern that represents it’. That is ‘seeing’ instead of ‘thinking’.

I ‘see’ (as in identifying the pattern) a lot of things each day without thinking about them. If I were to think about everything I saw every day I would go bonkers. We are pretty good at filtering the input from our senses, to be able to focus on what we deem important. Seeing is a ‘peripheral’ process; thinking about something is not.

Just because the part of your brain that talks doesn’t keep tabs on the part that determines what objects are in your field of view does not mean that no thinking is required to do it. When my “conscious” self wants to picture a cat pouncing, it is not the “conscious” self that decides the cat is black or that I’m seeing it in profile. (Though I can certainly “request” something different afterwards.) But just because I’m not aware that these decisions are being made, and that my mind is digging through memories of cats, doesn’t mean that they aren’t; nor does it mean that my ability to generate such a mental image is not a sign of thinking/intelligence.

In fact, as I recall, one standard measure on IQ tests is to have the person name as many animals, places, and so on as come to mind in a set amount of time. These items just seem to “pop” into our heads, but they are still taken as a sign of our ability to think quickly and creatively.

 

 
  [ # 47 ]
Merlin - Jul 4, 2012:

Birds fly, airplanes fly.

Yes, but airplanes do not fly ‘the way birds do it’... and that’s where the point of discussion about thinking lies wink

 

 
  [ # 48 ]

Is this enough like a bird for you, Hans?

http://www.youtube.com/watch?v=nnR8fDW3Ilo

wink

 

 
  [ # 49 ]

I’m going to back out of this discussion. It’s clearly (again) about how we can make something that ‘seems’ to do something (intelligence?), instead of building something that actually does it. I know this comes down to personal perception, but when a discussion winds down into semantics for the sake of discussion, I’m out.

I’ve made this statement here before (about a year ago or so): the real test is not in the ‘appearance’ of ‘thinking’, or of other things like consciousness or just intelligence for that matter. The real test is in ‘looking under the hood’. We can actually look inside an AI system to see whether things are faked or instead handled like, or close to, how humans do it. And yes, we can look under the hood with humans as well; we are humans ourselves, and we are capable of introspection (some better than others, for sure).

So have fun, I’m done here grin

 

 
  [ # 50 ]

Or maybe this : http://www.youtube.com/watch?v=NuD1WKHsggs

Flip through to about halfway…

Enjoy smile

 

 
  [ # 51 ]

Hans, you are no fun, ducking out just when we were getting an idea of how you think. Oh well, there’s nothing wrong with your opinions either; it’s a shame you feel that way.

Cya around…

 

 
  [ # 52 ]
Roger Davie - Jul 4, 2012:

Is this enough like a bird for you Hans ?

http://www.youtube.com/watch?v=nnR8fDW3Ilo

wink

Hey Roger, your post merits a reply grin

I have another video of the Festo bird in my library; it’s totally beautiful. There are other similar projects as well.

However, it has little to do with the discussion on ‘thinking’. And ‘planes’ still don’t fly the way birds do wink

 

 
  [ # 53 ]

Oh, thanks for the reply. I was just interested in what you thought, and it seemed like a good time to drop that in.

I’ll duck out too now, as I’m heading off topic…

Cya, Hans, and don’t be a stranger; everyone is entitled to an opinion smile

 

 
  [ # 54 ]
Roger Davie - Jul 4, 2012:

Hans, you are no fun, ducking out just when we were getting an idea of how you think.

Most people here already know how I think. The ‘problem’ (just to name it) is that this is actually the wrong forum for me. I’m working on a real AGI engine, and I’m obviously the only one here. My research is simply on another level (currently with active contacts at almost a dozen universities around the world) than what everyone else here is doing. For my project I’m knee-deep in cognitive science, behavioral sciences, psychology and a whole lot of IT tech. I’m being coached by several professors, and I’m in serious talks with a robot manufacturer in the USA who is pretty convinced that I have the final solution for their biggest problem. It’s too bad that it is just a little too early, or I would have been a contestant in the upcoming DARPA robot challenge.

So I just don’t belong here wink

 

 
  [ # 55 ]
Hans Peter Willems - Jul 4, 2012:

I’m going to back out of this discussion. It’s clearly (again) about how we can make something that ‘seems’ to do something (intelligence?), instead of building something that actually does it.

All of my comments/questions have been aimed at discerning what actually qualifies as “doing it”.

It’s funny; I’m definitely a hobbyist in the area of chatbots/functional AI/what have you. But I’m a member of the academic community in another field. And anyone in cond mat who felt compelled to give the little speech you just gave concerning your laurels would be laughed out of the room. LOL

 

 
  [ # 56 ]
C R Hunt - Jul 4, 2012:

And anyone in cond mat who felt compelled to give the little speech you just gave concerning your laurels would be laughed out of the room. LOL

Of course! There is only one big difference: I’m NOT part of the academic community, and all these people have approached me to work with me, not the other way around. You see, there ARE in fact academic researchers who understand that I’m ‘on to something’. It even happened (just a few weeks ago) that an AI researcher with a PhD emailed me, asking me for advice on which books to study on ‘consciousness’ for AGI research (this baffled even me).

So I’m pretty sure that your condescending comment here doesn’t concern me.

 

 
  [ # 57 ]
Hans Peter Willems - Jul 4, 2012:

The ‘AI-effect’ is indeed a fallacy, because it is wrongly used by people in discussion, exactly the way you do here.

Take 2 seconds and read what the AI Effect means and you’ll see I used it correctly.

From Wikipedia:
The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
Pamela McCorduck writes: “It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’.”[1] AI researcher Rodney Brooks complains: “Every time we figure out a piece of it, it stops being magical; we say, ‘Oh, that’s just a computation.’”

If you still don’t understand, you’re obviously just trying to misunderstand.

Hans Peter Willems - Jul 4, 2012:

Yes, but airplanes do not fly ‘the way birds do it’... and that’s where the point of discussion about thinking lies wink

Alan Turing didn’t think so: functionality is the bottom line. Sorry, but if it’s a toss-up between the world’s most famous computer scientist, basically the father of computing, and some ill-defined ideas, then no offense, but I think I’ll go with his idea of a functional test (the Turing Test). A bird starts from the ground, goes into the air, stays there for whatever amount of time, then lands. An airplane starts from the ground, goes into the air, stays there for whatever amount of time, then lands. Function accomplished, period.
A human carries a conversation and is declared to be a thinking entity. If a machine does the same, the machine is thinking, period. Arguing HOW it does it, or what it is made of, is utterly inconsequential. But if you say, yes, but it also doesn’t do X, and we try, and it fails at DOING X, then sure, perhaps it doesn’t think. Arguing with hazy, misty fluff-speak like consciousness, though, is completely useless. I can see two vendors of AI in the future… vendor A has a system that combines powerful grammar skill and world knowledge and passes the TT; vendor B’s product doesn’t really do much of anything, but vendor B says “but, but… it has consciousness!” lol.

Hans Peter Willems - Jul 4, 2012:

This is also where I believe that purely grammar-based systems will never actually ‘think’, because in the end it is just grammar, not ‘understanding’.

Duh! Grammar plays a big role in language; not the whole role, but a big one. World knowledge, though, is the most vital ingredient. Without knowledge of the world, there is no way to disambiguate language. World knowledge provides the content, the meaning; grammar provides the structure of that meaning, which IS meaning in and of itself.
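
To make that concrete, here is a minimal sketch in Python. The word senses and the tiny knowledge table are my own hypothetical illustration, not anyone’s actual system: both test sentences parse identically, so grammar alone cannot choose a sense, and only the associated world knowledge settles it.

# A toy world-knowledge table: each sense of an ambiguous word is linked
# to concepts that tend to co-occur with it. (Hypothetical data.)
WORLD_KNOWLEDGE = {
    "bass": {
        "fish": {"river", "caught", "fishing", "lake"},
        "instrument": {"played", "band", "amplifier", "concert"},
    },
}

def disambiguate(word, sentence):
    # Score each sense by how many of its associated concepts
    # appear in the surrounding context, then pick the best one.
    context = set(sentence.lower().replace(".", "").split())
    senses = WORLD_KNOWLEDGE.get(word, {})
    return max(senses, key=lambda s: len(senses[s] & context), default=None)

print(disambiguate("bass", "He caught a bass in the river."))  # -> fish
print(disambiguate("bass", "She played bass in the band."))    # -> instrument

Both sentences have the same syntactic structure; only the knowledge table distinguishes them, which is the point being argued above.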

 

 
  [ # 58 ]
Hans Peter Willems - Jul 4, 2012:
Merlin - Jul 4, 2012:

Birds fly, airplanes fly.

Yes, but airplanes do not fly ‘the way birds do it’... and that’s where the point of discussion about thinking lies wink

I agree that discussion of the process is important. But of course biological and non-biological systems will never perform the task in exactly the same way. So some people could validly say that computers are incapable of ever thinking the way people do. I think your earlier quote sums it up best:

Hans Peter Willems - Jul 2, 2012:

The way I see it, it is possible for computers to ‘think’ in a way that looks very similar to how humans think. However, for that to happen, the computer must involve the same thinking-processes that we humans use when we think. So ultimately it comes down to cognition, which in turn needs things like episodic memory, emotional responses, autonomic goal setting, and a whole lot more. And if you think that would look a lot like a ‘conscious machine’ then you are right grin

To move to AGI, there needs to be much more than a pattern-based stimulus/response approach. Although I don’t necessarily agree that direct emulation of the way humans think is the best method for an AGI to accomplish the same task, there are a number of parts of the human experience that can be used as a model for “bot grounding”. It may be possible, though, like in my toy math example, to eat an elephant in bite-sized chunks.
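
As a rough illustration of that contrast (hypothetical code, not Skynet-AI’s actual implementation): a pure stimulus/response bot can only replay stored pairs, while even one small computational “bite” generalizes to inputs it has never seen.

import re

# Stimulus/response: one stored pattern, one canned reply.
CANNED = {"what is 2+2?": "4"}

def pattern_bot(utterance):
    # Pure lookup: anything outside the stored patterns draws a blank.
    return CANNED.get(utterance.lower(), "I don't know.")

def computing_bot(utterance):
    # One bite-sized chunk: recognize the arithmetic sub-problem
    # in the sentence and actually compute it instead of looking it up.
    m = re.search(r"(\d+)\s*([+*-])\s*(\d+)", utterance)
    if not m:
        return "I don't know."
    a, op, b = int(m.group(1)), m.group(2), int(m.group(3))
    return str({"+": a + b, "-": a - b, "*": a * b}[op])

print(pattern_bot("What is 7+5?"))    # I don't know. (no stored pattern matches)
print(computing_bot("What is 7+5?"))  # 12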


http://www.stanford.edu/~phinds/PDFs/Stubbs-et-al-IEEE.pdf
http://mitpress.mit.edu/books/chapters/0262290758chap130.pdf

 

 
  [ # 59 ]
Hans Peter Willems - Jul 4, 2012:

The ‘problem’ (just to name it) is that this is actually the wrong forum for me. I’m working on a real AGI engine, and I’m obviously the only one here. My research is simply on another level (currently with active contacts at almost a dozen universities around the world) than what everyone else here is doing. For my project I’m knee-deep in cognitive science, behavioral sciences, psychology and a whole lot of IT tech. I’m being coached by several professors, and I’m in serious talks with a robot manufacturer in the USA who is pretty convinced that I have the final solution for their biggest problem.

Hans, I have great respect for your work, but I think you are being a bit too dismissive of everyone on this forum. I, for one, have spent a lot of time looking at cognitive science and how to emulate human interaction with technology. The difference between a real AGI engine and what might come out of the future of chatbot technology has yet to be seen, and as I have said before, I will be satisfied with the “Illusion of Intelligence”.

But, at least in the example I gave and in any way that matters, Skynet-AI really “thinks”.

 

 
  [ # 60 ]
Merlin - Jul 5, 2012:

Hans, I have great respect for your work, but I think you are being a bit too dismissive of everyone on this forum.

Merlin, I do have a clear appreciation of several members on this board. For obvious reasons, those are the ones who engage me in constructive discussion. Having said that, many people here like to throw age-old ideas like the ‘AI effect’ into a discussion, or point to Turing (for whom I have the deepest respect) as the ‘man who knew it all in advance’, while most of AI academia dismissed the Turing test as a means to test for machine intelligence years ago. The A(G)I world has moved on since. Turing had some great ideas, but that doesn’t mean he was right about everything, and I don’t worship his ideas (or those of any other great scientist, for that matter). It’s healthy to stay skeptical; it feeds innovation.

Over the last five to ten years an incredible amount of research has been done worldwide on conscious machines, affective computing and emotion-based reasoning. I’m not some idiot with a different idea or perspective; rather, I’m currently moving at the edge of today’s AGI research. That is why there has been pretty much an avalanche of interest in my project over the last few months, since the website went online.

Yet still, here on the forum, most people like to show that they know what the Turing test is, and some even have the idea that chatbot technology could evolve into full AGI. That’s simply not going to happen. So from my perspective, debating AGI technology with people who firmly believe that a pattern-based challenge-response system is the way to go seems pretty pointless.

Another view on the matter: chatbots are basically ‘expert systems for language’. And expert systems were already ruled out as real AI decades ago, not because of the ‘AI effect’, but because it became clear that in an expert system there’s no thinking involved, no real intelligence at work. In the words of Steve, it’s just a whole bunch of IF-THEN-ELSE patterns.
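
For the record, a minimal sketch of the structure being dismissed here (hypothetical, of course; no real chatbot is quite this bare) shows how such a chain can produce plausible replies with no state, no inference, and no model of what the words mean:

def rule_bot(utterance):
    # The entire "mind" of the bot: a fixed chain of pattern tests.
    words = set(utterance.lower().rstrip("?!.").split())
    if words & {"hello", "hi"}:
        return "Hello! How can I help you?"
    elif "weather" in words:
        return "I hear it's lovely outside."
    elif utterance.endswith("?"):
        return "Good question. What do you think?"
    else:
        return "Tell me more."

print(rule_bot("Hello there"))          # matched by the first pattern
print(rule_bot("Do computers think?"))  # only the generic "?" branch fires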

 
