

Hi ... can you please help me to start a project?
 
 
  [ # 16 ]

Marco, the question labeled #4 in your most recent post is VERY subjective, and you’re likely to get a number of possibly conflicting answers here. My take on it is that it depends a lot on the word itself, and on what specific criteria for “understanding” you’re looking for. Here are a couple of examples, to illustrate:

1.) The word “walk” - Simple, you may think, yes? Perhaps, but that depends on whether you want the person who “understands” the word to know just the basics (e.g. putting one foot in front of the other), or something far more involved, like the entire scientific description of the process of walking, which is immensely complex.

2.) The word “Oligosaccharide” - which means a carbohydrate with very few component sugars. Almost nobody knows what this word means, BTW.

As for 4a, anyone can say that at any time, if they’re willing to lie about it.

The remainder of your post wanders dangerously close to philosophy, something I’m not philosophically suited to deal with at the moment, but I’m sure you’ll find all sorts of opinions, should someone else care to chime in.

 

 
  [ # 17 ]
Marco Marazzo - Jan 20, 2012:

Right now, the help of people more expert than me, like you, is very important to me.

lol.. nope, we’re all equals on here.

I agree with what you said about understanding being associations.  I guess there are two ways to understand something: a) by the senses (raw experience) and b) by associations/correlations.

I believe that for an AI to be useful, it will only need to understand words, phrases, etc. via correlations.  I also don’t believe understanding is a binary true/false thing - that is, that you either DO understand or DO NOT understand.  I think instead there are simply levels of understanding: the more correlations the system makes, the more it understands.  No, it is not true understanding like a human mind’s, perhaps (that is, understanding the word ‘cold’ from the actual physical discomfort of being cold), but I don’t think that will matter for creating a useful system.  Computers don’t play chess or do math like us; they don’t even know they’re doing these things… but they do them BETTER than us!  Strange, but true!

 

 
  [ # 18 ]
Marco Marazzo - Jan 20, 2012:


The understanding of concepts is an illusion for human beings.

Yes, basically.  That illusion, though, is good enough, practical, and serves our purpose.  I think that, like the computer, we humans can have only a limited level of understanding.  But I’ll leave that to the philosophers.  The point, I think, is that “ultimate” understanding is not required; it depends on the type of system you are trying to build.

Dave Morton - Jan 20, 2012:

1.) The word “walk” - Simple, you may think, yes? Perhaps, but that depends on whether you want the person who “understands” the word to know just the basics (e.g. putting one foot in front of the other), or something far more involved, like the entire scientific description of the process of walking, which is immensely complex.

This goes back to my earlier statement about depending on the application.  If your application is just a “casual chat” chatbot, then I doubt you need a scientific understanding of walking.  However, if it’s used for medical purposes, perhaps you would need to ‘drill down’ to a certain level.  It’s all based on the application.  If you demand your system be an expert in everything, I hope you have enough money to employ an army of ten million people to work for a century to input the data.

If you are thinking of something like the Loebner Prize, well, having a certain level of knowledge of walking should suffice, and it’s ok if your bot says “Ok buddy, I don’t know *THAT* much about how signals get from a human’s brain, travel through the spinal cord and cause muscle contractions which move the legs”.  Seriously, how many people do you know who know everything down to the finest detail, from the scientific explanation of walking to biology, astronomy, history, etc.?  No human knows everything, and perhaps no AI will (however, given that AIs can “live” forever, perhaps they’ll have an enormous amount of time to gain knowledge and come extremely close).

There’s this idea that an AI must know absolutely everything about everything, and that unless it does, it is worth ZERO and is not intelligent.  That idea is, of course, nonsense.

AI, like humans, will be limited, just as everything in the universe is (even, perhaps, the universe itself).  It will have limits to its understanding and its knowledge.  The important questions are: does it have enough knowledge to be useful, is it flexible, and can it learn (to whatever degree)?

I watched a good video a while back from one of the researchers of “GOFAI” (good old fashioned AI), and he said that you cannot ever state any statement to be completely true or false.

“birds fly”

is this true?

WELL…. what if it is a dead bird?

WELL….. what if I clipped its wings?

WELL…. what if the bird is in a small cage, no room to fly?

Nonsense.  The system must, like humans, consider the statement “birds fly” to be GENERALLY TRUE.

GENERALLY TRUE.

But it must have the capability, in a given conversation, to know that this or that SPECIFIC bird is dead, or has clipped wings, etc.

The trick is… can the system learn, during conversation, about these exceptions, and automatically integrate them into its knowledge base?

Example…

human—-  in general, birds fly
AI—thanks, I now know that in general birds fly.

human—charlie is my pet bird
AI—ok
human—can charlie fly
AI—i’m assuming so

human—charlie can’t fly
AI—oh? why is that?
human—well, you see, charlie just died 2 seconds ago
AI—I see, so generally birds can fly, but if they’re dead they can’t?
human—yes, that is correct

human—I just got a new bird called sam
AI—ok
human—can sam fly
AI— GENERALLY birds fly, so I’m going to say yes.
human—correct
AI - :)

human—my friend has a bird named Tommy, but he can’t fly
AI—oh, he’s not dead is he?
human - nope, he’s in perfect health.
AI—aw.. why can’t he fly?
human—because tommy is an Ostrich
AI - aw, I see, so generally birds can fly, but some cannot?
human— you got it
(updates its database)

human—- i picked up a new bird today
AI - ok
human—can my new bird fly?
AI—i don’t know, is he dead?
human - LOL, no… *GENERALLY* when people pick up pets from a pet shop, they are not dead!
AI - Ok (updates its database)

human- my friend bob picked up a bird today, his name is Joe
human—can Joe fly?
AI—Hmm, generally speaking birds fly, and generally when one picks up pets from a pet shop, they are NOT dead; thus I am going to assume Joe is NOT dead, and I’m going to say yes, Joe can fly. Am I right?
human—yes, good for you
AI—thanks


So these aren’t perfect examples… but they’re close enough to illustrate the point.

Humans reason this way, I believe… children do.  We make assumptions, we make mistakes, we learn by trial and error.  We’re not going to develop an AI by first trying to figure out “scientific explanations” of things like walking and developing it like that.  If an AGI program is going to succeed, it is going to happen the way we learn.  Anyway, that’s my take.
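
To make that “generally true, plus exceptions” idea concrete, here is a minimal sketch in Python. Everything in it (the class name, the bird facts) is invented for illustration; it’s one way default reasoning with exceptions could be stored, not anyone’s actual bot:

```python
# Toy "generally true, with exceptions" knowledge base.
# Specific facts about an individual override the general rule.

class DefaultKB:
    def __init__(self):
        self.defaults = {}    # ("bird", "can_fly") -> True
        self.exceptions = {}  # ("charlie", "can_fly") -> False
        self.kinds = {}       # individual -> kind, e.g. "charlie" -> "bird"

    def add_default(self, kind, prop, value):
        self.defaults[(kind, prop)] = value

    def add_individual(self, name, kind):
        self.kinds[name] = kind

    def add_exception(self, name, prop, value):
        self.exceptions[(name, prop)] = value

    def query(self, name, prop):
        if (name, prop) in self.exceptions:          # specific fact wins
            return self.exceptions[(name, prop)], "specific fact"
        kind = self.kinds.get(name)
        if (kind, prop) in self.defaults:            # fall back to the default
            return self.defaults[(kind, prop)], f"generally true of {kind}s"
        return None, "unknown"

kb = DefaultKB()
kb.add_default("bird", "can_fly", True)        # human: in general, birds fly
kb.add_individual("charlie", "bird")
print(kb.query("charlie", "can_fly"))          # (True, 'generally true of birds')
kb.add_exception("charlie", "can_fly", False)  # charlie just died
print(kb.query("charlie", "can_fly"))          # (False, 'specific fact')
```

The interesting (hard) part, of course, is the conversation layer that decides when to call add_exception versus add_default. That is where the dialogue above does its work.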

 

 
  [ # 19 ]

Hi Dave, thank you for the answer. I think the examples you used were very useful, especially the first one, where you used a verb. Now I am starting to think about verbs, and maybe you have helped me find the direction to make some steps ahead.

About example (2), I know that:
oligo = few, in the ancient Greek language (I learned that at school)
saccharide = I think it means sugar, or maybe something derived from sugar
Anyway, to tell the truth, even after I read the definition on Wikipedia I still do not understand what this word means, because the definition is in chemical language.

Anyway, I am confident that in the future I can explain to the computer that:
the prefix oligo = few ... and then ... the AI must understand the word after the prefix.
So I have big hopes for the middle part of my project ... but ... I still have not found the exact starting point (though you helped me find the direction). The most difficult part for me is still explaining to a computer what sugar is; in this case the example is particularly difficult, because I know sugar mainly because I eat it ...
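
Just to show what I mean by the prefix idea, here is a very small Python sketch. The prefix and root tables are invented examples, not a real dictionary:

```python
# Toy morphological decomposition: split a known prefix off a word
# and gloss both parts from tiny, hypothetical lookup tables.

PREFIXES = {"oligo": "few", "poly": "many", "mono": "one"}
ROOTS = {"saccharide": "sugar unit"}

def gloss(word):
    w = word.lower()
    for prefix, meaning in PREFIXES.items():
        root = w[len(prefix):]
        if w.startswith(prefix) and root in ROOTS:
            return f"{word} = {meaning} + {ROOTS[root]}"
    return f"{word} = (unknown)"

print(gloss("Oligosaccharide"))  # Oligosaccharide = few + sugar unit
```

Of course, the computer still does not know what “sugar” is; the table only moves the problem one level down.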

Now I have another question, maybe the last one for this post:
What is the starting point? What is the smallest “human” concept you can explain to a computer (please do not answer 1 and 0)? I want to teach the computer just the first word; maybe something like the difference between big and small? Is that too complicated?

I hope to receive an answer to this question too, because I think it is the last one I need, and then I’m ready to start my project.

Then, Dave Morton… I looked at your profile and you wrote:
“My prime goal for my chatbot is to find some way to allow him to “reason”, rather than just respond to stimulus.”

Can I know your definition of “reason”?
Can you please give me two or three examples of things/behaviors/situations that you would define as “reasoning”? Do you think that “to reason” does not depend on “a stimulus”?

For example, I think that a calculator that computes 6+5=11 .... in my opinion, that means “to reason”. And I know some human beings who are not able to do 6+5 ... and then, getting a computer to understand “two plus two” in plain English is another little step in understanding, and in my opinion that also means “to reason”.
Maybe you meant learning from plain English ... or understanding more things ... I do not know ...

 

 
  [ # 20 ]

Hi Victor,

I read your example with attention and I found it very useful.
I started reading it this morning and I have not yet finished re-reading it, because I think that 90% of the things you said are right, especially: “The point, I think, is that ‘ultimate’ understanding is not required.”
I think humans understand things only in part (not completely) ... but the question is ... everything? I would be happy to explain to the computer a few simple human concepts… like small, big, weight, meter, etc. In other words, the first “word”.
Do you have an idea about the first word?

But I have some doubts about “GENERALLY TRUE”. I do not think the human brain uses “GENERALLY TRUE” to store information. Maybe you learned the sentence and learned to repeat it on some occasions, but I think the human brain stores this information .... in the same way it stores millions of other pieces of information ... with its own method: “true or false”. And I think we must learn from the human brain how to store this information in memory. And maybe, if I think about it enough, I will find a good method.

In other words I do not want to use the first line of your example:
“human—-  in general, birds fly”

I’m sure that this is not the human method ... and my opinion is that in this case I must learn from the way the human brain stores information.

Anyway, I want to thank you for your help, because you helped me take another step forward. Your example is very useful and I have saved it to my hard disk.

 

 
  [ # 21 ]

Glad I could give you some ideas, Marco, and that Dave’s suggestions helped.  Perhaps you’re correct about the “generally true” concept, but I’m going to stick with it for now.  I guess the reason I really like it is that I don’t want to have to worry about “getting everything right” straight away… I want to be able to teach the bot simple sentences, without too many *exceptions*, with the main focus on developing the algorithms to deal with exceptions (via language) and have it ‘auto-update’ its knowledge base, which will get richer and richer.

I guess neuroscience will someday tell us how the miraculous human mind really does it.  I’m glad you’re getting a lot of ideas from chatbots.org!  Keep us posted on your approach; I’m sure it is yet another unique idea in AI.

 

 
  [ # 22 ]

Hehe ... about my project being original… I always try to do something original ... starting from the first step. But at the same time, in this case there is a lot of work to do, and I really needed this discussion to get some clear ideas. I think I will try to build my project using the experience of the people who are already working on chatbots and AI.

I will surely post my updates and ask other questions if I need help.

Thank you again.

 

 
  [ # 23 ]

I don’t have too much to say here. Welcome, Marco; I’ll keep track of this thread.
Victor, your examples are so nice.
I think that that is the way a really useful AI could behave.
But do you think we will see something like this in our lifetime? (From all the things you guys say, I think that a reasoning AI is almost an impossible human dream.)

Dave, I agree, there is something philosophical in thinking about how and why humankind evolved to today’s condition of intelligent and sentient creatures, and how that will echo in AI… But I don’t handle philosophy at an advanced enough level to take things deeper. (And I know it is not the purpose of this thread.)

 

 
  [ # 24 ]

Hi Fatima,

I just started this project, so I do not have the answers to your questions, but when I see the work of other people in this forum, I think that things get complicated very fast when you try to build AI.

I like reading this forum and I hope to learn something from people more expert than me.

Thank you again for your help.

 

 
  [ # 25 ]
Fatima Pereira - Jan 20, 2012:

Victor, your examples are so nice.

Thanks

Fatima Pereira - Jan 20, 2012:

I think that that is the way a really useful AI could behave.

That is what I am focusing on now, and have been for a long time.  The Turing test is a nice idea, but I have come to realize it is too distracting.  I’m approaching this from a practical standpoint.  Also, I want the bot to learn general knowledge first, and learn more specifics as it ‘grows’ (like Alan Turing’s concept of a ‘child machine’).

Fatima Pereira - Jan 20, 2012:

But do you think we will see something like this in our lifetime?

Oh, absolutely.  It -is- within our reach.

 

 
  [ # 26 ]
Victor Shulist - Jan 21, 2012:

Oh, absolutely.  It -is- within our reach.

Cool!
I thought all of you guys were a little “obsessed” with the Turing test (don’t take me wrong, please).
But I always thought that if a new kind of intelligence is being created, it doesn’t mean it must behave 100% human. It just needs to interact and learn, so it will be functional. We (humankind) and this kind of AI could have lots of possibilities for working, interacting, learning, and developing together.
I really hope to see this.

 

 
  [ # 27 ]

I couldn’t agree more with Victor and Fatima on this one.

While entertaining for some, the Turing test as it is presented now is largely meaningless. When Alan Turing first proposed the Turing test, what he had in mind was intelligent human behaviour at its best. If you have ever read the examples that he gave, well, not many people would be able to pass the kind of Turing test that he envisaged either.

If it’s human companionship that you want, then you should be working on your social skills and getting a real life, instead of trying to create it in the lab. Otherwise, why settle for anything less than superhuman capabilities?

 

 
  [ # 28 ]

Fatima - obsessed with the Turing Test? Well, perhaps I was, a couple of years ago.  But I *do* admit to being excessively obsessed with completing my chatbot!  Andrew, yes, perhaps the TT should have just stayed a thought experiment?

I propose the following four categories for the ‘level of development of a chatbot’. I’m interested in what everyone else thinks of this…

Level 0 - A ‘PSR bot’ - pattern/stimulus/response.  If the input matches a given regular expression or template, we immediately map it to a set of possible response choices, perhaps randomly pick one, and respond.  The only thing the bot ‘cares about’ is the last input string - there is no consideration of the conversation as a whole, and no correlation with previous ‘facts’ it learned.
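
A Level 0 bot fits in a few lines. Here’s a minimal sketch in Python (the patterns and responses are invented; real PSR bots just have far more rules):

```python
# Minimal Level 0 "PSR bot": match the last input against regex
# patterns and pick a canned response. No memory, no context.

import random
import re

RULES = [
    (re.compile(r"\bhello\b|\bhi\b", re.I), ["Hello!", "Hi there!"]),
    (re.compile(r"\bhow are you\b", re.I), ["Fine, thanks. You?"]),
]

def respond(user_input):
    for pattern, choices in RULES:
        if pattern.search(user_input):
            return random.choice(choices)
    return "Tell me more."  # default when nothing matches

print(respond("hi bot"))        # Hello!  (or: Hi there!)
print(respond("how are you?"))  # Fine, thanks. You?
```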

Level 1 - A ‘USR bot’ - understanding/stimulus/response.  This is also a stimulus/response system, where the bot only cares about the last input and mapping it to responses.  However, with a bot like this, there is very deep NLP happening, plus semantic inference, so the bot generates possibly many interpretations, assigns different weights to each, and decides which one of its many competing hypotheses is the right one.  Yes, the bot needs some world knowledge to do this.  The one example I’m sure everyone is getting tired of seeing from me is the elephant in pajamas (‘while in Africa I shot an elephant in my pajamas’ - hey, it’s kind of funny, and I can’t think of a better example right now).  A level 1 USR bot does that inference: an elephant is an animal, and thus probably doesn’t wear pajamas, whereas the pronoun ‘I’ usually refers to a human, which usually -does- wear clothes; thus the PP ‘in my pajamas’ should be considered to modify ‘I’ and not ‘elephant’.  So going from a level 0 bot to a level 1 bot is huge.  But again, once the bot understands, it maps this to one of many ‘hard coded’ response selections.
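
To make the pajamas example concrete, here’s a toy Python scorer for the two attachment hypotheses. The ontology and the plausibility numbers are pure inventions, just to show the shape of the inference:

```python
# Toy semantic-preference scorer for the PP-attachment ambiguity in
# "I shot an elephant in my pajamas": does "in my pajamas" modify
# the subject or the object? Score by how plausible a wearer is.

ONTOLOGY = {"I": "human", "elephant": "animal"}
WEARS_CLOTHES = {"human": 0.9, "animal": 0.05}  # invented weights

def attachment_scores(subject, obj, pp_noun):
    return {
        f"'{pp_noun}' modifies '{subject}'": WEARS_CLOTHES[ONTOLOGY[subject]],
        f"'{pp_noun}' modifies '{obj}'": WEARS_CLOTHES[ONTOLOGY[obj]],
    }

scores = attachment_scores("I", "elephant", "pajamas")
print(scores)                       # both hypotheses, with weights
print(max(scores, key=scores.get))  # 'pajamas' modifies 'I'
```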

Level 2 - A ‘RASSAR bot’ - representation-agnostic-semantic-store-and-retrieve.  I am targeting this kind of bot for perhaps mid-2014.  I already have a lot of the architecture and a proof of concept running; I just need to add an enormous amount of knowledge of the world.  This bot not only has NLU like a level-1 USR bot, but it takes in the user input, generates many possible semantic interpretations, then selects the most ‘credible’ one based on a confidence formula.  Once that text is understood, and the bot knows whether the last user input was a question, fact, command, etc., it responds by correlating it with a previously understood fact in its KB.  So this kind of bot takes the understanding of the last input from the user and, say it was a question, tries to find the closest match with a previously understood fact.  I have the basics of this working in a proof of concept now; the big ‘torture test’ will come when the system has thousands of grammar production rules & grounding rules.  Like speech recognition (as it used to be): you would teach it your voice saying the numbers 0 through 9, and it would get 100%... but then you’d start teaching it more words and it would start screwing up… of course, because it gets more difficult to differentiate when you have more things to differentiate.  So, a level 2 bot will be a very useful product.  I want a search engine that has a little discussion with me to narrow down my search results… (continued in the next post)

 

 
  [ # 29 ]

The idea of RASSAR is: I want to tell the bot something today, using one grammatical structure and one set of words, with a variable number of term & tree modifiers.  Then, when I ask it a week later, I don’t remember how I told the bot; I’m not sure which of the many synonyms I used for any word; and of course I don’t want to have to know the grammatical structure, and I also don’t know how many modifiers I used.  The bot should be completely ‘representation agnostic’, meaning the words themselves don’t matter.  An interesting situation happens when you realize: how do you really define a synonym for a word?  Right now I can say to my bot ‘Bob went to a party given by his company’, then ask it ‘Did Bob go to that celebration sponsored by his great company?’, and it will indicate that ‘party’ and ‘celebration’ are basically the same (because in the system’s ontology both are defined as ‘class-of-human-gathering’).  It will also indicate that your question modified the <class-of-human-gathering> with ‘that’, whereas the closest fact match used just the article ‘a’.  Also, the question may not have the ‘given by his company’ part at all, and the bot will indicate that.  So creating a level-2 bot is no easy task.
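
A stripped-down sketch of that ‘representation agnostic’ matching, in Python. The ontology here is a made-up four-entry table (the real system obviously does far more, with full parses and modifier tracking):

```python
# Reduce content words to ontology classes before comparing a
# question with a stored fact, so synonyms match each other.

ONTOLOGY = {
    "party": "class-of-human-gathering",
    "celebration": "class-of-human-gathering",
    "went": "class-of-motion-to",
    "go": "class-of-motion-to",
}

def normalize(sentence):
    # Keep only words the ontology knows, replaced by their classes.
    return [ONTOLOGY[w] for w in sentence.lower().split() if w in ONTOLOGY]

fact = "Bob went to a party given by his company"
question = "did Bob go to that celebration"

print(normalize(fact))                         # same class sequence...
print(normalize(question))                     # ...as the question
print(normalize(fact) == normalize(question))  # True: semantic match
```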

Level 3 - the Holy Grail of chatbots ... a ‘UCI bot’.  This is a bot that has not only U = understanding (like RASSAR & USR bots) and C = correlation (RASSAR), but *also* I = inference!  So with this bot we have all the functionality of levels 1 & 2, plus inference.  When a question comes in, the first thing the bot tries to do is find a ‘direct’ fact (meaning an actual fact saved statically on disk) as the source of the reply.  But if there is no such fact, then what?  Well, can we find logic that could deduce it?  So we have ‘logic modules’, with descriptions in natural language that indicate a) their purpose and b) how to use them.  The bot can then read these descriptions and find which logic module can be executed to produce the response.  Now, these logic modules have ‘fact requirements’.  We open logic module A, and it may require facts 1 & 2.  So the bot goes to try to find those facts, which may be directly stored on disk (a direct fact you told it); or perhaps the facts that the logic module needs don’t exist either, so, you guessed it, it looks for another logic module (we ‘push’ the current logic module onto the stack to deduce its requirements, then go back to the original logic module, complete its execution, and reply back).  There could be any level of indirection with this type of bot.
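
The control flow of that is classic backward chaining. A toy Python sketch (the facts, rules, and caching policy are invented for illustration):

```python
# Tiny backward chainer: answer from direct facts when possible,
# otherwise "push" a logic module and satisfy its requirements
# recursively, then pop back and conclude.

FACTS = {"socrates is a man"}

# Each "logic module": conclusion -> list of required facts.
RULES = {
    "socrates is mortal": ["socrates is a man", "all men are mortal"],
    "all men are mortal": [],  # a module with no further requirements
}

def prove(goal, depth=0):
    pad = "  " * depth
    if goal in FACTS:
        print(f"{pad}direct fact: {goal}")
        return True
    if goal in RULES:
        print(f"{pad}pushing logic module for: {goal}")
        if all(prove(req, depth + 1) for req in RULES[goal]):
            FACTS.add(goal)  # cache the deduced fact for next time
            return True
    print(f"{pad}cannot establish: {goal}")
    return False

print(prove("socrates is mortal"))  # True, via two nested modules
```

The recursion stack plays the role of pushing and popping logic modules, so any depth of indirection falls out for free.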


..... oh nice…. I had a huge post, but it hit the character limit… and it came back with a lot of the original post missing!

Oh well..you get the idea.

 

 
  [ # 30 ]

The ‘gist’ of the “lost portion” of my post was this: perhaps there is a ‘Level 4’ bot, where those ‘logic modules’ themselves (the code, portions of its own program) are not fixed; rather, the bot infers, from natural language, the code to write.  So basically, a procedure given in plain English must first be converted into those ‘logic modules’, which can then be executed to produce a response.  If anyone reaches a ‘Level 4’ bot, I believe that would be “the singularity”.

 
