

I don’t think we will ever really know if we create self-aware AI.
 
 
  [ # 31 ]

Ah, you have a point there. Just goes to show that the bot would have to be more intelligent than me to convince you wink

Wouter: The Loebner Prize already has extended periods (well, 20 minutes) and multiple conversations. But I too think that the criterion “human” doesn’t cut it.

Steve: Just a suggestion, could you not say things like “My calculator says 2.445948752334234”? And would you consider it less intelligent an answer if a chatbot took into account that the human probably isn’t interested in more than two decimals? I hope you’re not losing too much time over these preparations.
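The two-decimals idea is trivial to sketch. A toy Python helper (the function name and the trailing-zero handling are my own invention, not anything from Steve's bot):

```python
def humanize_number(value, places=2):
    """Round a numeric answer to the precision a human probably wants."""
    rounded = round(value, places)
    # Drop a trailing .0 so "4.0" reads as "4", the way a person would say it.
    if rounded == int(rounded):
        return str(int(rounded))
    return str(rounded)

print(humanize_number(2.445948752334234))  # prints "2.45"
```

A bot could run every computed number through a pass like this before printing, rather than echoing the calculator verbatim.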

 

 
  [ # 32 ]
Don Patrick - Jul 31, 2013:

I hope you’re not losing too much time over these preparations.

I have 250,000+ categories to review and so won’t get them all done by September but I have released a Loebner version to a few people and their conversations are proving invaluable in trapping the most common ones.

Don Patrick - Jul 31, 2013:

Just goes to show that the bot would have to be more intelligent than me to convince you wink

Hehe! Point taken.

 

 
  [ # 33 ]

Sound tactics. Happy hunting!

You’re right though, my test would fail. The human superiority complex would simply not acknowledge it and make it impossible to pass, therefore useless. The Loebner Prize gold award requirements are a testament to that.
Somewhere on YouTube (I lost the link) is a lecture by a former Jeopardy! winner, who, with regard to Watson being better than him, explained that every time a machine managed to meet a criterion of intelligence, scientists have taken a step back and changed the criteria to exclude the machine. Like, mathematics used to be considered something only intelligent beings could do, but then computers were built, so they took mathematics out of the definition of intelligence. Same with chess. They reasoned that if a machine could do it, ‘surely’ it can’t be an intelligent feat. So intelligence (and awareness etc) is apparently defined as “that which machines cannot do”.
As long as such beliefs are held, then automatically any machine intelligence test is a paradox that invalidates itself.

Yet, when I explain to my friends the inner processes of Watson or my AI’s reasoning function, they often comment “But, is that not how a human thinks as well?”. Perhaps we do not need a better test, but a more enlightened society.

 

 
  [ # 34 ]
Don Patrick - Aug 1, 2013:

Perhaps we do not need a better test, but a more enlightened society.

The only problem is, the more “enlightened” we humans become, the more “snobbish” we become about our enlightenment, thus making it all the harder for us to accept that a lowly machine could operate in a way that threatens our “enlightened” status. It’s a vicious circle. cheese

 

 
  [ # 35 ]

Hey…I just want the thing to be able to reasonably portray me as being hard at work so I can go fishing
wink
V

 

 
  [ # 36 ]

Like, mathematics used to be considered something only intelligent beings could do, but then computers were built, so they took mathematics out of the definition of intelligence. Same with chess. They reasoned that if a machine could do it, ‘surely’ it can’t be an intelligent feat. So intelligence (and awareness etc) is apparently defined as “that which machines cannot do”.

Still though, it will be fascinating to see what the new criterion becomes after we beat ‘consistently fool a human that he is talking to another human in a chat conversation’.

Perhaps, rather than chat conversations, joining forum conversations like this one here? Surely that one seems like a big ‘step up’ compared to typical turing tests. Who knows, maybe y’all are already having your bots do your talking for you in this discussion and we just don’t realize it! wink

 

 
  [ # 37 ]
Wouter Smet - Aug 1, 2013:

Like, mathematics used to be considered something only intelligent beings could do, but then computers were built, so they took mathematics out of the definition of intelligence. Same with chess. They reasoned that if a machine could do it, ‘surely’ it can’t be an intelligent feat. So intelligence (and awareness etc) is apparently defined as “that which machines cannot do”.

Still though, it will be fascinating to see what the new criterion becomes after we beat ‘consistently fool a human that he is talking to another human in a chat conversation’.

I don’t think that is or should be a “real” criterion for gauging the intelligence of a chatbot, because I (and several other botmasters here) can create a chatbot that can emulate human chat fairly convincingly (for a few volleys, at least), and mimicry and “parlor tricks” aren’t intelligence. Clever, maybe, but not intelligent. cheese

Wouter Smet - Aug 1, 2013:

Perhaps, rather than chat conversations, joining forum conversations like this one here? Surely that one seems like a big ‘step up’ compared to typical turing tests. Who knows, maybe y’all are already having your bots do your talking for you in this discussion and we just don’t realize it! wink

We already have chatbots that can respond in forums, as well, though mostly it’s just to “show off”, and at the present, provides no real value to discussions in the forums involved. However, once we get either Mitsuku or Skynet-AI “plugged in” here, that may well change. wink

 

 
  [ # 38 ]
Dave Morton - Aug 1, 2013:

The only problem is, the more “enlightened” we humans become, the more “snobbish” we become about our enlightenment, thus making it all the harder for us to accept that a lowly machine could operate in a way that threatens our “enlightened” status. It’s a viscious circle. cheese

Fascinating, I hadn’t thought of that (fairly guilty as charged). So the question is, would I, considering myself somewhat enlightened, really accept a robot as my enlightened equal? To be quite honest… I think I would. I consider Optimus Prime to be my superior. I would sit down with it and have a pleasant conversation about how we might spread our mutual enlightenment to mankind. More difficult for me would be to accept that a lowly human creator had created it.

Wouter Smet - Aug 1, 2013:

Still though, it will be fascinating to see what the new criterion becomes after we beat ‘consistently fool a human that he is talking to another human in a chat conversation’.

True. I agree the Turing Test should be passed, at least as an argument towards more open-mindedness. There is already a future criterion, which in my opinion is bad sport, as it goes way beyond the Turing Test and requires technology 20x more advanced than current:

processing of MultiModal input (e.g. music, speech, pictures, videos). During the MultiModal stage, if any entry fools half the judges compared to half of the humans.

I could have sworn that on some other AI forums people were spambots…

 

 
  [ # 39 ]

It is a sad world where one is guilty until proven intelligent.

 

 
  [ # 40 ]

I’ve heard it often said that “I might not know an answer but I know where to find it!”

Now, is said person to be classed as intelligent?

With this in mind, can the same be said for a chatbot? Locally the bot might not have all the resources at its disposal, but given the opportunity to connect and retrieve the answers, the appearance of intelligence becomes greatly enhanced, if not realized.

If IBM’s Watson had been able to remain connected to the Internet, I dare say it would be absolutely unbeatable by any group / team of people. In this instance, is Watson now intelligent? In a manner of speaking perhaps. Sentient / conscious? Most likely not.
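The “I know where to find it” pattern is easy to sketch: answer from local knowledge if possible, otherwise fall back to an external lookup. All names here (`LOCAL_FACTS`, `external_lookup`) are made up for illustration; a real bot would call a search or knowledge API in the fallback:

```python
# Toy sketch: local knowledge base with an external-lookup fallback.
LOCAL_FACTS = {"capital of france": "Paris"}

def external_lookup(question):
    # Placeholder: a real bot would query a web/search API here.
    return None

def answer(question):
    key = question.lower().strip("?")
    if key in LOCAL_FACTS:
        return LOCAL_FACTS[key]
    remote = external_lookup(question)
    return remote if remote is not None else "I don't know, but I know where to look."
```

The interesting design question is the same one raised above: does the bot get smarter by having more facts, or by knowing when and where to go looking for them?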

It will take more than parroting and consuming vast amounts of data. It will ultimately take something out of the reach of most bots at present… context, understanding and usage.

What is color? What does blue look like? Describe joy, knowledge, jealousy, gratitude, etc. Why would a human cry both when sad and when happy? What is love? etc. etc.

Word definitions add to the vocabulary but without understanding, meaning and usage, the bot will not connect and certainly not comprehend to any measurable “human equivalent” degree. I also think the computers and software are growing at a rate where some of us might witness such “understanding” in our lifetimes.

Just some thoughts from me as I don’t have them very often and when I do, I sometimes trip over them! wink

 

 
  [ # 41 ]

Thinking over the topic and my posting while driving to work this morning led me to realize that I had skipped over the most important part of all…SENSORS!

Trying to put myself in the position of an A.I. entity is much akin to being a “brain in a jar”. There is no way to experience anything! Only a very limited number of the senses work (hearing and the ability to speak). Without ever having experienced vision at all, I would have no concept of the word color, much less the colors themselves. I could not smell the salt in the sea’s breeze, nor feel the warmth of the sun on my body or the grass beneath my bare feet. This brain in a box is deprived of most of what makes us human… the sensors that allow us to experience the multitude of happenings around us and to react to them.

Without that sensory input, words become, and remain, only words… combined letters without meaning, because there is nothing to allow a connection or relation. Pattern matching, it seems, remains the main tool this brain would have: words, definitions, and a possible guess at usage or context.

Just some more thoughts…I need a break…maybe I need to Think outside the Box!! (maybe that’s how that phrase was born).

 

 
  [ # 42 ]

Very good. Indeed a blind entity would have no way of telling you what blue looks like and it should be no less intelligent for it. To be aware of the human meaning of “blue”, sensors are required. But where colours are visual information, words are textual information. The intelligence is found in the process of dealing with information, how one juggles, compares, matches and connects knowledge.
In my opinion, Watson possesses intelligence. Not like an entire human brain, but like the right hemisphere of a poet’s brain: It dynamically associates words with words, where we associate words with words and visuals and sounds and smells and feels.

 

 
  [ # 43 ]

Actually, I’ve been dreaming of putting Yoko in some virtual 3D world where she can walk around and ‘experience’ stuff like ‘seeing’ stuff and sensing the properties of it (a blue car passes by, a person she knows comes over to say hi, Yoko goes to school).

The data structure of her knowledge representation (and more specifically, how ‘events’ and ‘preferences’ are stored and linked) has evolved to a point where she would be able to ‘get’ things like that pretty well.

Is anybody aware of a project that would make this possible? So, a virtual 3d (or even 2d) world a la Second Life or The Sims, but with a constant stream of data (ideally both input and output!) that can be captured and sent?

In a particularly delusional moment I thought of giving this a go myself, just in proof-of-concept form of a 2D square room where my bot can ‘see’ stuff and do stuff and you can drop objects in it and she ‘sees’ the color etc, and translating back and forth between those ‘senses’ and her internal experience of them. But that’s something for Yoko V20 I think.
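That proof-of-concept 2D room could start as simply as an event stream: the world emits “percepts” and the bot subscribes to them. A minimal sketch, with all names (`Percept`, `Room`) invented here rather than taken from Yoko or any existing engine:

```python
from dataclasses import dataclass

@dataclass
class Percept:
    kind: str   # e.g. "sees"
    obj: str    # e.g. "car"
    color: str  # e.g. "blue"

class Room:
    """A bare-bones 2D 'world' that streams percepts to subscribed entities."""
    def __init__(self):
        self.objects = []
        self.listeners = []

    def subscribe(self, callback):
        self.listeners.append(callback)

    def drop(self, obj, color):
        # Dropping an object into the room notifies every subscriber,
        # which is the "constant stream of data" half of the idea.
        self.objects.append((obj, color))
        for notify in self.listeners:
            notify(Percept("sees", obj, color))

seen = []
room = Room()
room.subscribe(seen.append)       # the bot's 'senses' are just a callback
room.drop("car", "blue")
```

The translation layer between such percepts and the bot’s internal knowledge representation is of course the hard part; this only shows the plumbing.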

But, stuff to think about. Even more awesome would be some kind of API attached to a screen cam of course that translates the properties of what our entity ‘sees’, but I wouldn’t even know what to google for that.

 

 
  [ # 44 ]

Aha, just giving the googling a shot anyway, something like this seems quite interesting: http://opencv.org/

Anybody ever tried to hook that to a chatbot to give it some ‘sensory’ input?
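Even without OpenCV, the core of that “sensory input” bridge can be sketched in plain Python: reduce a frame of pixels to a word the bot can actually use. The thresholds and color names below are invented for illustration; a real setup would feed in frames grabbed by OpenCV:

```python
def dominant_channel(pixels):
    """pixels: list of (r, g, b) tuples. Returns 'red', 'green' or 'blue'
    depending on which channel dominates the frame overall."""
    totals = [0, 0, 0]
    for r, g, b in pixels:
        totals[0] += r
        totals[1] += g
        totals[2] += b
    return ("red", "green", "blue")[totals.index(max(totals))]

# A fake 3-pixel 'frame' that is mostly blue:
frame = [(10, 20, 200), (0, 30, 180), (5, 5, 220)]
print(f"I see something {dominant_channel(frame)}.")  # prints "I see something blue."
```

Crude, but it illustrates the point made earlier in the thread: once a percept is reduced to a word, the bot can ground “blue” in something other than a dictionary definition.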

 

 
  [ # 45 ]

The most famous instance of an AI “living” in a virtual environment is SHRDLU, developed around 1970 by Terry Winograd. There is a modernised but not fully functional version that you can download and run, described on this page:

http://www.semaphorecorp.com/misc/shrdlu.html

A more recent project is called “Virtual Dog” and some explanation with a video which demonstrates it is here:

http://www.youtube.com/watch?v=ii-qdubNsx0

 
