
Applied Problems, an AI evaluation standard for chatbots.
 
 
  [ # 16 ]
Victor Shulist - Sep 7, 2010:

A ham sandwich is better than nothing.
    Nothing is better than a million dollars.
    Thus, a ham sandwich is better than a million dollars.

This is getting humorous. Can computers create humor?

http://www.highbeam.com/doc/1G1-210033064.html

As far as I remember, I once read an interesting article about computer-generated humor in AI Magazine. I’ll come back to this!

 

 
  [ # 17 ]

There are scenarios, you know, where the sentence “A ham sandwich is better than a million dollars” is absolutely correct. Picture yourself stranded in a remote, desolate location, with no transportation, no means of outside contact, and nothing but water to drink. I’m reasonably certain that, at some point, the sandwich would be MUCH better than a million bucks! smile

 

 
  [ # 18 ]

Thus, common-sense knowledge about the context?

 

 
  [ # 19 ]

OK, one thing I have learned about NLP and the way humans seem to understand it is that we evaluate sentences, at first anyway, under normal circumstances.

That is, for the most part we would all agree, in general, that elephants don’t wear pajamas…

“While in Africa, I shot an elephant in my pajamas.”

Humans evaluate this, and IN GENERAL people don’t think of elephants wearing pajamas.

I’m going to stress again,  IN GENERAL.

This seems to be the way humans parse and understand sentences.

Thus, a human will assume right away that “in my pajamas” means that you were WEARING the pajamas, and not the elephant.

Now, you could have a strange friend who has a pet elephant that he puts in pajamas, sure, but in general it is not really true.

Humans use knowledge that applies for the most part.

In general, under normal conditions a million dollars is better than a ham sandwich.

But you’re right, given a specific EXTRA-ordinary situation like being stranded on an island, then the sandwich is better.

But people don’t assume you are on an island, they assume normal conditions.

Then you say “Oh.. but I didn’t mention, you’re on an island with no food and no hope of getting rescued”

Then you get the “ooohhhh… ok… I’ll go with the sandwich.”

You see, that “ohhhhh… ok” means they have switched from the normal-conditions context to this specific case.

But it would be unproductive to create a chatbot that takes user input like that and brings up 500 options… asking you if you are on an island, asking you this, asking you that, asking 1,000,000 out-of-the-ordinary things before it evaluates the sentence. That’s a waste of time; assume standard conditions until told otherwise.
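For illustration, that “assume standard conditions until told otherwise” idea could be sketched like this (a toy Python example; the plausibility table, scores, and function names are invented for the illustration, not taken from any real engine):

DEFAULT_PLAUSIBILITY = {
    # (entity, asserted relation) -> plausibility under "normal conditions"
    ("speaker", "wear pajamas"): 0.9,
    ("elephant", "wear pajamas"): 0.01,
}

def attach_pp(candidates, relation, context_overrides=None):
    """Pick which candidate a prepositional phrase most plausibly attaches to.

    candidates        -- entities the phrase could attach to ("speaker", "elephant")
    relation          -- what the attachment would assert, e.g. "wear pajamas"
    context_overrides -- facts stated earlier in the conversation, which trump
                         the normal-conditions defaults
    """
    context_overrides = context_overrides or {}

    def score(entity):
        if (entity, relation) in context_overrides:   # explicit context wins
            return context_overrides[(entity, relation)]
        return DEFAULT_PLAUSIBILITY.get((entity, relation), 0.1)

    return max(candidates, key=score)

# "While in Africa, I shot an elephant in my pajamas."
print(attach_pp(["speaker", "elephant"], "wear pajamas"))            # -> speaker

# ...unless the user already told us about the pet elephant in pajamas.
print(attach_pp(["speaker", "elephant"], "wear pajamas",
                {("elephant", "wear pajamas"): 0.95}))               # -> elephant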

As for a computer generating humor, absolutely!!! I wrote programs within 3 months of learning programming that had me on the floor gasping for breath!

 

 
  [ # 20 ]

This can be done and has been - check out the STUDENT program: http://en.wikipedia.org/wiki/STUDENT_(computer_program)

 

 
  [ # 21 ]
Victor Shulist - Sep 8, 2010:

Sorry, but I very much disagree with you on this point. Just because the natural language (NL) statements deal with mathematical concepts doesn’t mean there is a fixed number of them that you could create a template/pattern to match against. There are probably just as many NL statements in math as there are about any other topic (an astronomical number). I could create a complex/compound sentence which links together any number of NL sentences with conjunctions and any depth of nested NL statements tied together with subordinate conjunctions and subordinate clauses that deal with mathematical problems. Good luck parsing that with simple pattern-matching techniques.

Come on now. I’m sure that for a sufficiently diabolical example, word problem solving could be turned into as difficult a problem as general NLP. (In fact it becomes the same problem.) But that’s not what I’m referring to. Go through a typical algebra textbook and tell me that most word problems there (and most problems we encounter in real life) aren’t phrased in a limited number of ways. There is little ambiguity in expressing an equation in natural language, as there must be.
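For example, a handful of templates along these lines already covers a surprising share of textbook phrasings (a rough illustrative sketch in Python; the patterns are invented for the example and nowhere near exhaustive):

import re

# phrasing template -> equation template (backreferences keep the operands)
TEMPLATES = [
    (r"the sum of (\w+) and (\w+) is (\w+)",     r"\1 + \2 = \3"),
    (r"the product of (\w+) and (\w+) is (\w+)", r"\1 * \2 = \3"),
    (r"(\w+) is (\w+) more than (\w+)",          r"\1 = \3 + \2"),
    (r"(\w+) is (\w+) times (\w+)",              r"\1 = \2 * \3"),
]

def to_equation(sentence):
    s = sentence.lower().rstrip(".")
    for pattern, equation in TEMPLATES:
        match = re.fullmatch(pattern, s)
        if match:
            return match.expand(equation)
    return None   # fell outside the known phrasings

print(to_equation("The sum of x and 5 is 12."))   # x + 5 = 12
print(to_equation("y is 3 times x."))             # y = 3 * x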

Victor Shulist - Sep 8, 2010:

umm.. I thought of that 20 years ago actually, and that is no easy programming task.  Perhaps someday, someone will code something like that, and I’d like to see it in action, but me, personally, I’m taking a much more direct approach.

I agree that for a generalized chatbot this would be at best a slow development approach, and at worst an exercise in futility. But for a more limited goal such as solving algebraic word problems, I think this stands a chance. But as I said, I haven’t tried.

Victor Shulist - Sep 8, 2010:

There are so many issues with NLP.  One of the most annoying for chatbot developers is when humans shorten a sentence, leaving out important words, and just using the word “is”.  I see it so much I’m getting sick of it…

Yes, I’m realizing that as I pile on grammar rules, I’m restricting myself more and more. I don’t notice how much I rely on proper and crystal-clear grammar until I try to feed my bot sentences from other sources (Wikipedia, newspapers, etc.). And if those are giving me trouble, chatting, which is so informal, will be a nightmare.

I believe you (but correct me if it was someone else) commented elsewhere that it will be easier for a bot to understand a grammatically incorrect sentence if it knows proper grammar and can identify how the sentence deviates from this or that rule. This is the hope I’m working off of.

The important thing is not to bite off too much too fast. Even getting a bot to properly turn grammatically correct sentences into a structured database is a lofty enough goal for now!
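Even a minimal version of that goal might look something like this (the “parse” is faked by hand here; the structured storage is the point of the sketch, and the schema is just one guess at what could work):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (subject TEXT, verb TEXT, object TEXT, modifier TEXT)")

def store(parsed):
    """Insert one parsed sentence as a row of the knowledge table."""
    conn.execute("INSERT INTO facts VALUES (?, ?, ?, ?)",
                 (parsed["subject"], parsed["verb"],
                  parsed.get("object"), parsed.get("modifier")))

# Pretend the grammar stage produced this for "Elephants do not wear pajamas."
store({"subject": "elephant", "verb": "wear", "object": "pajamas", "modifier": "not"})

for row in conn.execute("SELECT * FROM facts"):
    print(row)   # ('elephant', 'wear', 'pajamas', 'not')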

 

 
  [ # 22 ]
Rob Lockhart - Sep 8, 2010:

This can be done and has been - check out the STUDENT program: http://en.wikipedia.org/wiki/STUDENT_(computer_program)

Ha! Fantastic!

Edit: Here’s Bobrow’s thesis: http://dspace.mit.edu/bitstream/handle/1721.1/5922/AIM-066.pdf

Edit^2: Of particular interest is page 9, which begins the discussion of how the sentence is broken down into variables and operations.
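A loose, modern paraphrase of that breakdown might look like the toy sketch below (my own reconstruction of the idea, not Bobrow’s actual procedure; SymPy handles the solving step, and the substitution list is only illustrative):

from sympy import Eq, solve, sympify

# operator words -> symbols; "is" plays the role of "="
SUBSTITUTIONS = [
    ("plus", "+"), ("minus", "-"), ("times", "*"),
    ("divided by", "/"), ("is", "="),
]

def solve_word_problem(sentence):
    s = sentence.lower().rstrip(".?")
    s = s.replace("what", "x")                   # the unknown being asked for
    for word, symbol in SUBSTITUTIONS:
        s = s.replace(f" {word} ", f" {symbol} ")
    left, right = s.split("=")
    return solve(Eq(sympify(left), sympify(right)))

print(solve_word_problem("What is 3 times 7 plus 2?"))   # [23]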

 

 
  [ # 23 ]

I read a little of his thesis (http://dspace.mit.edu/handle/1721.1/6903). I seriously respect the pioneering work he did about 50 years ago.

However, I don’t think we can say the problem has been solved. His work is based on manual formats and rules. At the time, he believed computer memory was the limitation (end of page 103). Actually, the cost of building formats and rules is the main limitation. His system is a demo with limited formats and rules that showed the possibility of solving the problem by computer, but it is far from solving the problem.

As a programmer and researcher, I am ashamed that we have not solved this problem in 50 years. We are still working on it.

Besides, when we attempt to solve this problem, we need to design a general knowledge representation model for both chatting and word problem solving. We need the ability to solve word problems in a chatting context.

Actually, if we had a benchmark system for word problems, his system could be a good baseline.

Rob Lockhart - Sep 8, 2010:

This can be done and has been - check out the STUDENT program: http://en.wikipedia.org/wiki/STUDENT_(computer_program)

 

 
  [ # 24 ]
C R Hunt - Sep 8, 2010:
Victor Shulist - Sep 8, 2010:

Sorry, but I very much disagree with you on this point. Just because the natural language (NL) statements deal with mathematical concepts doesn’t mean there is a fixed number of them that you could create a template/pattern to match against. There are probably just as many NL statements in math as there are about any other topic (an astronomical number). I could create a complex/compound sentence which links together any number of NL sentences with conjunctions and any depth of nested NL statements tied together with subordinate conjunctions and subordinate clauses that deal with mathematical problems. Good luck parsing that with simple pattern-matching techniques.

Come on now. I’m sure that for a sufficiently diabolical example, word problem solving could be turned into as difficult a problem as general NLP. (In fact it becomes the same problem.)

 

I’m not sure what you mean here. If you are saying word problems (math word problems) can be as tough as general NLP, then YES, I agree, and that is what I was trying to point out.

 

 
  [ # 25 ]
C R Hunt - Sep 8, 2010:

 

I believe you (but correct me if it was someone else) commented elsewhere that it will be easier for a bot to understand a grammatically incorrect sentence if it knows proper grammar and can identify how the sentence deviates from this or that rule. This is the hope I’m working off of.

 

Yes, it was me.

So your bot engine is based on grammar also?

My bot’s grammar rules will have ‘crystal clear’ grammar, as you say, and also common bad grammar. Also fuzzy grammar rule matching.
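As a rough illustration, fuzzy rule matching could work something like this (a toy sketch only; the tag sets, rules, and threshold are invented for the example and not taken from any actual bot):

from difflib import SequenceMatcher

# rule name -> the part-of-speech sequence that rule expects
RULES = {
    "statement": ["DET", "NOUN", "VERB", "DET", "NOUN"],
    "question":  ["AUX", "DET", "NOUN", "VERB", "DET", "NOUN"],
}

def best_rule(tags, threshold=0.7):
    """Return the closest-matching rule, or None if nothing is close enough."""
    scored = {name: SequenceMatcher(None, tags, pattern).ratio()
              for name, pattern in RULES.items()}
    name, score = max(scored.items(), key=lambda item: item[1])
    return (name, score) if score >= threshold else (None, score)

# "the dog chased a cat" -> exact fit
print(best_rule(["DET", "NOUN", "VERB", "DET", "NOUN"]))   # ('statement', 1.0)
# "dog chased a cat" -> determiner missing, but still recognizably a statement
print(best_rule(["NOUN", "VERB", "DET", "NOUN"]))          # ('statement', ~0.89)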

 

 
  [ # 26 ]
Rob Lockhart - Sep 8, 2010:

This can be done and has been - check out the STUDENT program: http://en.wikipedia.org/wiki/STUDENT_(computer_program)

Damn, at work now, but can’t wait to get home and try this smile

 

 
  [ # 27 ]
Nathan Hu - Sep 9, 2010:

I read a little of his thesis (http://dspace.mit.edu/handle/1721.1/6903). I seriously respect the pioneering work he did about 50 years ago.

However, I don’t think we can say the problem has been solved. His work is based on manual formats and rules. At the time, he believed computer memory was the limitation (end of page 103). Actually, the cost of building formats and rules is the main limitation. His system is a demo with limited formats and rules that showed the possibility of solving the problem by computer, but it is far from solving the problem.

I highly agree. If he had ‘done it’, I think we’d know! We wouldn’t be pointing and clicking; we’d have products in our OS that let us talk to our computers like on Star Trek by now if the STUDENT program were the end-all!

Even if it couldn’t pass the Turing test, I think that program would be a lot more popular if it were ‘it’. smile

I think Google would have bought the algorithm from him, and we wouldn’t just be using keyword searches; we’d ask Google in NL for the info we want. smile

 

 
  [ # 28 ]
Victor Shulist - Sep 9, 2010:

I’m not sure what you mean here. If you are saying word problems (math word problems) can be as tough as general NLP, then YES, I agree, and that is what I was trying to point out.

What I’m trying to say is that solving word problems can potentially be as difficult as general NLP, but in the vast majority of cases isn’t. In fact, by virtue of the stiff formalism for representing equations, the natural language representation of an equation tends to take a small number of forms.

Actually, my opinion about this was reinforced last night, when my brother called asking for help on some ODE word problems for a class he’s taking. All of them had the same general grammatical structure.* Which is the idea, I imagine. Students are meant to learn commonalities between the problems so that the technique used to solve that particular type of problem can be reinforced and learned.

*(Agh, I never noticed this stuff before I started working on NLP…)

 

 
  [ # 29 ]
Victor Shulist - Sep 9, 2010:

Yes, it was me.

So your bot engine is based on grammar also?

My bot’s grammar rules will have ‘crystal clear’ grammar, as you say, and also common bad grammar. Also fuzzy grammar rule matching.

Yes, the first thing my bot does is try to analyze the grammar of its input. Every other stage in “learning” relies on a correct grammatical interpretation.

Actually, I’ve recently become interested in “how wrong” the grammar can be and still yield an effectively correct knowledge base. How many parts of speech can be combined into more general categories while still storing the knowledge effectively?

What inspired this reflection was (a) having been recently reminded that German does not distinguish between adverbs and adjectives and (b) realizing serendipitously that the best way to store conjunctions in my knowledge base was the same way I store prepositional phrases. I didn’t really expect that, but it has worked out amazingly well.
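For illustration, one shape such a shared representation could take (a simplified toy sketch with invented names, not the actual data structure):

from collections import namedtuple

# one link type covers both constructions: head --connector--> dependent
Link = namedtuple("Link", ["head", "connector", "dependent"])

knowledge = [
    Link("book", "on", "table"),     # "the book on the table"  (prepositional phrase)
    Link("bread", "and", "butter"),  # "bread and butter"       (conjunction)
]

def related_to(word):
    """Everything linked to a word, regardless of which construction linked it."""
    return [link for link in knowledge if word in (link.head, link.dependent)]

print(related_to("book"))   # [Link(head='book', connector='on', dependent='table')]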

 

 
  [ # 30 ]
C R Hunt - Sep 9, 2010:
Victor Shulist - Sep 9, 2010:

I’m not sure what you mean here. If you are saying word problems (math word problems) can be as tough as general NLP, then YES, I agree, and that is what I was trying to point out.

What I’m trying to say is that solving word problems can potentially be as difficult as general NLP, but in the vast majority of cases isn’t. In fact, by virtue of the stiff formalism for representing equations, the natural language representation of an equation tends to take a small number of forms.

To me, one of the most effective ways to measure intelligence is how well a bot or human can take representations of things expressed in non-standard ways and cope with them.

Just because most textbooks follow a consistent way of representing math problems doesn’t mean it isn’t effective to test a chatbot’s ability to handle the more complex and, as you say, diabolical examples.

Those complex, non-textbook cases are the ones we want to use to develop and judge our bots’ abilities and milestones.

Also, don’t forget that it may not be simple sentences, but rather an entire interactive dialog between the user and the AI just to determine the problem itself, with clarifying questions asked by the AI. So we’re developing that functionality as well. Think of the bot as, perhaps, a mathematics instructor for students.

How far have you come with your bot so far? Do you have any sample conversations, or any output samples of how it parses? Not many members on this site are focusing much on grammar in their bots.

Also, does your bot generate many parse trees in cases where words can have multiple parts of speech? And is one grammar rule applied to the whole sentence, or does the bot figure out on its own which combination of grammar rules to apply to the entire sentence?
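To make that last question concrete, here is the kind of combinatorial blow-up I mean (toy lexicon and toy grammar, purely for illustration):

from itertools import product

# each word's possible parts of speech
LEXICON = {
    "time":  ["NOUN", "VERB"],
    "flies": ["VERB", "NOUN"],
    "like":  ["PREP", "VERB"],
    "an":    ["DET"],
    "arrow": ["NOUN"],
}

# the tag sequences this toy grammar accepts
ACCEPTED = {
    ("NOUN", "VERB", "PREP", "DET", "NOUN"),   # time(N) flies(V) like(P) an arrow
    ("NOUN", "NOUN", "VERB", "DET", "NOUN"),   # time-flies(N) like(V) an arrow
}

sentence = ["time", "flies", "like", "an", "arrow"]
candidates = list(product(*(LEXICON[word] for word in sentence)))
parses = [tags for tags in candidates if tags in ACCEPTED]

print(len(candidates))   # 8 tag sequences to consider
print(parses)            # the 2 the toy grammar keeps as parse candidates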

 
