
Is it possible to generate knowledge within the knowledgebase?
 
 

Hi all

I am going to post a thought which just crossed my mind.

Suppose we have a knowledge base, or a database: is it possible to create new knowledge within the existing database, without talking to anyone else?

What I mean is that a chatbot, when not talking to anyone, runs a script which evaluates its existing knowledge and generates more knowledge out of it…

It’s a bit like what some people do in meditation: they evaluate themselves and try to reconstruct their behaviour, or even infer new conclusions from experience.

Is it possible to implement this in chatbots? (It definitely is…) But how far-reaching will the results be?
Where’s the limit?
Would anyone like to share opinions on this random thought?

 

 
  [ # 1 ]

I don’t know about the whole meditation thing in a chatbot, lol… but you CAN do certain deductive and inductive reasoning operations to produce more information from given information.  Doing this with free-form natural language is certainly not ‘low-hanging fruit’, though smile
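The kind of deductive operation described above can be sketched as a tiny forward-chaining pass over a triple store, run during idle time. The facts and the single transitivity rule below are illustrative assumptions, not any particular bot’s design:

```python
# Tiny forward-chaining "dream cycle": derive new facts from existing
# (subject, relation, object) triples during idle time. The facts and
# the single transitivity rule are illustrative assumptions.

facts = {
    ("Socrates", "is_a", "man"),
    ("man", "is_a", "mortal"),
    ("Rex", "is_a", "dog"),
    ("dog", "is_a", "animal"),
}

def dream_cycle(facts):
    """One pass: if A is_a B and B is_a C, infer A is_a C."""
    inferred = set()
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == r2 == "is_a" and b == c:
                inferred.add((a, "is_a", d))
    return inferred - facts  # only genuinely new knowledge

new = dream_cycle(facts)
# yields ("Socrates", "is_a", "mortal") and ("Rex", "is_a", "animal")
```

Running it repeatedly until no new triples appear gives the transitive closure; a real system would of course need typed relations and consistency checks on top of this.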

 

 
  [ # 2 ]

Would it be possible for a bot to conclude something like

User> Will you answer no to this question?

—- AI thinks to itself: hmm, if I answer “yes”, then I would be saying “Yes, I will answer no to this question”… but that doesn’t make sense, because I answered yes, not no.  What about no? Then I’d be saying “No, I will not answer no to this question”, but that doesn’t make sense either, because I *did* answer no.  Thus both yes and no answers are no good.  What if I say “I don’t know”? Well, then I am not answering no, so no would seem to be the right answer.  Conclusion: there is an answer, but it cannot be stated, because by stating it you change the validity of the answer.  This is similar to the halting problem.  The bot could also realize it is a self-referential question.  These are the ultimate functionalities we dream of in a strong AGI system.
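A bot could at least recognize the trap before trying to answer. Here is a crude, purely illustrative sketch (the pattern and the canned reply are my own assumptions, not a general paradox detector) that flags this specific self-referential yes/no question:

```python
import re

# Crude check for the self-referential yes/no trap discussed above.
# The regex pattern and canned reply are illustrative assumptions.

def is_self_referential_polar(question: str) -> bool:
    q = question.strip().lower()
    # matches e.g. "Will you answer no to this question?"
    return bool(re.match(r"will you answer (no|yes) to this question\??$", q))

def answer(question: str) -> str:
    if is_self_referential_polar(question):
        return "That question is self-referential; any yes/no reply refutes itself."
    return "Yes."  # placeholder for the normal answering pipeline

print(answer("Will you answer no to this question?"))
```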

 

 
  [ # 3 ]

I’ve read about bots that have implemented a “dreaming” phase for processing information during downtime.

I do know that Ultra Hal on Twitter http://twitter.com/UltraHal is always learning from conversations on Twitter without participating in them.

 

 
  [ # 4 ]

Yeah! This is quite an advanced thing. Implementing a chatbot that answers this will make chatbots stronger than some human beings…

But that won’t be natural, though… For example, expert systems can answer precisely and accurately, but humans can’t.

The question “Will you answer no to this question?” even puzzled, for a while, some of the people I asked…
Some instantly answer it with yes or no, but an intelligent chatbot that detects the condition won’t be human-like anymore. It will be better than humans, and hence not natural… We won’t be creating true intelligence, because the intelligence we need in chatbots is not about answering accurately; it’s about perceiving, thinking, and answering…

A chatbot should be more creative; i.e., maybe it should be able to program itself… just like humans do. Not every human is programmed the right way, but they do program themselves.

So, to conclude: it is not about making chatbots accurate, it’s about making them more human-like…
What do you say?

 

 
  [ # 5 ]

@ Marcus Wow! I will have a look at it and see what it’s about… But learning from Twitter is again an external knowledge source…
I was talking about learning from within.

 

 
  [ # 6 ]

Skynet-AI Response:
USER:Will you answer no to this question?
AI: Answer no to this question? Would you do my bidding if I said I would?
wink

 

 
  [ # 7 ]

IBM Watson would be an example of “internal”, and I believe they had to do a tremendous amount of “pre-processing” to create knowledge from data.  Whereas, Apple Siri might be an example of “external”, where there is supposedly crowd-sourced cloud-learning happening all the time.


The Zabaware Ultra Hal Assistant, by Robert Medeksza, has a “Dream Machine” plugin called “Dream Out Loud” that enables “talking dream mode”.  Apparently, “it keeps track of the topics you talk about then at some idle point will look up those topics and find sentences and or phrases about that topic and will use them to talk to itself” (sort of a sub-conscious).


Someone has actually taken out a patent on “robot dreams” (see link below), but when I contacted them about it, I got no reply.

=> http://www.meta-guide.com/home/bibliography/google-scholar/robot-dreams


Vinu Arumugham (first link below) has created FemtoThnk (second link below), a cybernetic behavioral model featuring higher-level brain functions that include a version of dreaming.

=> http://home.comcast.net/~vinucube/femtothnk.html
=> http://sourceforge.net/projects/femtothnk/


And here is another, more recent example (below) from the “HARUMI project”.

=> http://jeffelson.e-monsite.com/

>>When Harumi is not solicited by the user for more than 10 minutes, she will start to “dream”, meaning she will randomly explore answers available in her database, then feed those answers back as stimuli if they contain a word not present in her entries. This means that she restructures the organisation of her memory by herself in order to be more efficient.<<
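That quoted mechanism could be sketched roughly like this; the `entries` structure and the indexing rule are assumptions on my part, not HARUMI’s actual code:

```python
import random

# Sketch of the "dreaming" idea quoted above. `entries` maps known
# stimulus words to stored answers (an assumed data structure). During
# idle time the bot picks a random stored answer and, if it contains a
# word with no entry yet, re-indexes the answer under that word,
# creating a new association in its memory.

entries = {
    "weather": "It often rains in autumn.",
    "autumn": "Autumn leaves turn red.",
}

def dream(entries, rng=random.Random(0)):
    answers = list(entries.values())
    answer = rng.choice(answers)
    unknown = [w for w in answer.lower().rstrip(".").split() if w not in entries]
    if unknown:
        # index the answer under its first unknown word
        entries[unknown[0]] = answer
    return unknown

dream(entries)
```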


= = =


BTW, I’ve never really liked the term “chatterbots”, though it was certainly an apt description of a generation of conversational agents; however, in the immediate future I expect bots to be more practical, task oriented, and useful to people, hence the current term “assistant”.  For this reason, I’ve always felt that the Turing test was a red herring.  We don’t need bots that can fool people; we need bots that can help people, along the lines of augmented intelligence.

 

 
  [ # 8 ]
Marcus Endicott - May 2, 2012:

IBM Watson would be an example of “internal”, and I believe they had to do a tremendous amount of “pre-processing” to create knowledge from data.

I recall watching a video about Watson where one of the engineers stated, “Watson ITSELF doesn’t understand a sentence directly”.
What that means exactly, I’m not sure.  But I would imagine that if it COULD, without, as you say, a lot of pre-processing, we would KNOW ABOUT IT!  Operating systems would come bundled with an NLP interface and Google would be history!  Unless they bought them, of course.

Marcus Endicott - May 2, 2012:

BTW, I’ve never really liked the term “chatterbots”, though it was certainly an apt description of a generation of conversational agents; however, in the immediate future I expect bots to be more practical, task oriented, and useful to people, hence the current term “assistant”.  For this reason, I’ve always felt that the Turing test was a red herring.  We don’t need bots that can fool people; we need bots that can help people, along the lines of augmented intelligence.

 

100% agreement.  Chatbot, to me, is a ‘cheap’ name.  I also like assistant, or ACE (artificial conversational entity).

Merlin - May 2, 2012:

Skynet-AI Response:
USER:Will you answer no to this question?
AI: Answer no to this question? Would you do my bidding if I said I would?
wink

So he did NOT answer no… thus ‘no’ was the right answer, but again, Skynet couldn’t actually say that; saying it would have made the answer incorrect.

Muhammad Kashif Shabbir - May 2, 2012:

So, concluding it, that is to say that it is not about making chatbots accurate its about making them more human like…
What do you say?

Well I guess that depends on your objective.  To pass a Turing Test, yes.  For me though, I want accuracy - I’m thinking more like Marcus is.

                                      ~  ~  ~  ~


Thinking about this a bit more… the whole accuracy thing.  I think I will have options for how accurate the bot will be.  Right now Grace can be told things like:

User> John went to a celebration.
Grace> Understood.
User> Did John go to a big party?
Grace> Yes.

and it will, based on context, allow ‘celebration’ to be used interchangeably with ‘party’.  (And yes, it can control whether or not it points out adjectives; if ‘big’ modifying party/celebration is important, see the Grace/Clues thread for some examples.)

Now, some people I know are rather ‘anal retentive’ (consult Urban Dictionary, lol), and to them, there is a fine distinction between almost any two words.

Thus, it should probably be up to the user.  Some users are fine with ‘liquid’ and ‘fluid’ being the same (I’m one of them), but a physicist in a lab would want there to be a clear distinction.

Thus, what is it about?  It’s about the end user: know your audience.  A ‘Saturday night, just for fun’ bot will be (or should be) much more forgiving about interchanging words, and won’t care much about accuracy.  But in a scientific context, you want almost infinite degrees of clarity.
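That user-selectable strictness could be sketched with a simple synonym table and a flag; the groups below are illustrative assumptions, not Grace’s actual data:

```python
# Sketch of user-configurable word interchangeability. The synonym
# groups are illustrative assumptions.

SYNONYM_GROUPS = [
    {"party", "celebration"},
    {"liquid", "fluid"},
]

def same_meaning(a: str, b: str, strict: bool = False) -> bool:
    """Strict mode: only exact matches count (the physicist's setting).
    Casual mode: words in the same synonym group are interchangeable."""
    if a == b:
        return True
    if strict:
        return False
    return any(a in g and b in g for g in SYNONYM_GROUPS)

# A "Saturday night, just for fun" bot vs. a lab assistant:
same_meaning("liquid", "fluid")               # casual: interchangeable
same_meaning("liquid", "fluid", strict=True)  # strict: distinct
```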

 

 
  [ # 9 ]

Victor, the statistical confidence scoring of IBM Watson would take care of this accuracy issue; and, there are already APIs in the pipeline that could make that kind of statistical confidence scoring available to “street level botmasters”.  ;^)

I certainly want bots (assistants) that are much smarter than people.  ;^)  See my recent Quora answer to, “How would a society benefit from an AI that passes the Turing test?”:

=> http://www.quora.com/Artificial-Intelligence/How-would-a-society-benefit-from-an-AI-that-passes-the-Turing-test/answer/Marcus-L-Endicott

 

 
  [ # 10 ]

Yeah, I agree assistants would be more helpful, and if we create accurate systems they will be more like expert systems: assistants smarter than humans at a particular task.

For me, I would love to see chatbots in NPCs (non-player characters) in games; it would be fun talking to them. Current games let you choose one of a few responses to say to an NPC, but it’s again quite obvious that nothing is going to change. Open-world games would be much more fun if chatbots got smarter and more human-like.

Even spam would be less spam-like, or at least less annoying, if smart chatbots were made to carry it out grin

@Marcus The dreaming thing is interesting; I will go through the links you provided. It seems good.
Thanks for sharing.

 

 
  [ # 11 ]
Victor Shulist - May 2, 2012:

Now, some people I know are rather ‘anal retentive’ (consult Urban Dictionary, lol), and to them, there is a fine distinction between almost any two words.

Thus, what is it about?  It’s about the end user: know your audience.  A ‘Saturday night, just for fun’ bot will be (or should be) much more forgiving about interchanging words, and won’t care much about accuracy.  But in a scientific context, you want almost infinite degrees of clarity.

Currently I am taking the Stanford NLP course (the final programming assignment is to build a “Watson”), and the current lecture is on semantics. In it, they point out that synonyms rarely have precisely the identical meaning, especially as they are used in common practice. “Water” and “H2O” are used as an example: you typically wouldn’t use “H2O” to describe your beautiful day in the mountains looking at a running stream.

 

 

 
  [ # 12 ]

Hmm, well, I won’t claim to be a Watson expert, that’s for sure.  But I think the sort of statistical accuracy you’re referring to deals more with ‘global’ information, and not with the specifics of understanding a particular conversation with a user.  Once the data is in a structured form, perhaps.  But can Watson, right now, if I say to it

Joe went to New York last friday evening with Henry and Tom to discuss the new marketing strategy.

versus

Joe went to New York on a business trip.

will it know which is more accurate?  And not just by looking at the length of the text, because it is of course very possible that the longer one would, in some cases, be more general than the shorter, for example:

Joe went to New York a few days ago with 2 friends to do something related to their work.
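As a toy illustration of that “which statement is more specific” judgement, here is a crude heuristic sketch (entirely my own assumption, not anything Watson actually does) that counts concrete cues such as proper nouns, numbers, and weekdays instead of raw length:

```python
# Crude specificity heuristic: count concrete cues (proper nouns,
# numbers, weekdays) rather than text length. Purely illustrative.

WEEKDAYS = {"monday", "tuesday", "wednesday", "thursday",
            "friday", "saturday", "sunday"}

def specificity(sentence: str) -> int:
    score = 0
    for i, word in enumerate(sentence.replace(",", "").split()):
        bare = word.strip(".")
        if bare.lower() in WEEKDAYS or bare.isdigit():
            score += 1
        elif i > 0 and bare[:1].isupper():  # mid-sentence capital ~ proper noun
            score += 1
    return score

a = "Joe went to New York last friday evening with Henry and Tom to discuss the new marketing strategy."
b = "Joe went to New York on a business trip."
specificity(a) > specificity(b)  # here the longer sentence really is more specific
```

It would of course be fooled easily (a long sentence stuffed with names but no real content scores high), which is exactly why the question of judging accuracy is hard.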

 

 
  [ # 13 ]

Watson is not a “conversational” program. If anything you can think of it as a narrow domain search engine. How it evaluates and scores document fragment retrieval is interesting and could be applicable to a conversational AI, but in many ways it has the same characteristics as Google. After I am finished with the NLP course I may do a little write-up on what I think are the limitations of traditional/current NLP approaches in doing conversational AI.

 

 
  [ # 14 ]

Yes, I too am taking that Stanford NLP course.  Open a new thread with your comments; looking forward to comparing notes.

 

 
  [ # 15 ]
Merlin - May 2, 2012:

After I am finished with the NLP course I may do a little write-up on what I think are the limitations of traditional/current NLP approaches in doing conversational AI.

I’m very interested to see what you come up with as well smile

 

 