

Intelligent behavior
 
 
  [ # 16 ]

#Andrew

....
So Andy, in the future I hope you won’t give up so easily! :)

Thank you for the data; I also read all these papers a few years ago!
My parser is based on the syntax of the Java CUP LALR parser, but implements the Masaru Tomita ’87 schema (stack splitting), and I finally corrected the hidden left recursion! (a very awkward problem: very difficult to find, track and solve). Then I built my own variation to support multiple input tokens (I’ve called them Schrödinger tokens), which helps to correct input errors. The GLR* algorithm does a similar thing but is prone to exponential explosion, being intended for single tokens; that GLR parser schema is meant to parse formal languages, and mine is not! (it goes beyond this).. huh!

Also, because of ambiguity, the pure-Tomita GLR parser literally explodes: the stack grows exponentially, so if you don’t prune the trees, you easily end up with a stack overflow, or a slow 30-second parse for a 20-word sentence!

As you may know, in a GLR parser the exponent comes from the multiplicity (ambiguity) lying inside the grammar, and doing NLP with ~600 productions, you get into trouble easily!

I’ve handled this with another strange solution: a priority mechanism burned right into the GLR schema! Let me explain the idea:

Do you remember that when you use an LR or LALR parser, bottom-up, you need to solve known language ambiguities like the ‘dangling else’, or the priority of concatenated operations like algebraic sum and multiplication? Well, those are solved by ‘tweaking’ the table by means of smart compile-time decisions, yielding a good parser as output!

Well, I used the same schema but embedded it inside a GLR parser! The result? The ambiguities are filtered out cleanly (unless you specify no priority at all), and you can even deliberately allow the NP-hard combinatorial behaviour on certain grammar sections!
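To make the priority idea concrete, here is a minimal Python sketch (my illustration, not the actual GLR code): give each operator a priority, and the ambiguous grammar E -> E + E | E * E | num yields exactly one surviving parse. In the real parser this decision would be compiled into the GLR tables, so losing branches are never even built.

PRIORITY = {"+": 1, "*": 2}  # higher priority binds tighter

def parse(tokens, min_prio=1):
    # Precedence climbing: the run-time analogue of the compile-time
    # table tweaks that kill ambiguity in an LR/LALR parser.
    left = tokens.pop(0)  # a number
    while tokens and PRIORITY.get(tokens[0], 0) >= min_prio:
        op = tokens.pop(0)
        right = parse(tokens, PRIORITY[op] + 1)  # stricter bound => left associativity
        left = (op, left, right)
    return left

print(parse(list("1+2*3+4")))  # ('+', ('+', '1', ('*', '2', '3')), '4')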

I got this parser working fine about 4 years ago, and I am still tweaking it!
And as my parser also needs a scanner, I had to build one for it on my own, so I built a C#Lex scanner generator, and this tiny beast now deals with all the ‘dirty’ text in a very efficient way!
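For readers who haven’t used a scanner generator: it turns a prioritized list of token patterns into a tokenizer. A rough Python sketch of the kind of tokenizer such a tool produces (the token names here are hypothetical):

import re

TOKEN_SPEC = [            # first match wins, so order encodes priority
    ("WORD",  r"[A-Za-z]+"),
    ("NUM",   r"\d+"),
    ("PUNCT", r"[.,!?;]"),
    ("WS",    r"\s+"),    # matched but skipped below
]
MASTER = re.compile("|".join(f"(?P<{name}>{rx})" for name, rx in TOKEN_SPEC))

def scan(text):
    for m in MASTER.finditer(text):
        if m.lastgroup != "WS":
            yield (m.lastgroup, m.group())

print(list(scan("The cat sat on the mat, ok 42!")))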

So Andrew, I think we are both on our way! (headed where, I don’t know, but still moving on) and for sure: I’ll never give up! :)

 

 

 
  [ # 17 ]

Considering a model, I just revisited Cleverbot’s features.  That bot saves the context of the whole (current) conversation, not just each input’s context (my idea of a model?).  So it refines, and refines again, and refines some more, drawing on the millions of responses others have made given the input and the context, until it narrows the list of possibilities down to only one.  What is significant here is that the bot learned by observing.  Clients have commented that Cleverbot knows what they’re going to say even before they input it.  It predicts from experience.  It knows much because it experiences over a million exchanges every day.

Often it can’t answer a puzzle: Sue is taller than Jane and Betty is taller than Sue - who is the tallest?
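For contrast, a handful of lines of ordinary code (mine, not Cleverbot’s) solves that puzzle by simple transitive reasoning, which is exactly what a purely response-matching bot lacks:

taller_than = {("Sue", "Jane"), ("Betty", "Sue")}
people = {p for pair in taller_than for p in pair}

def is_taller(a, b):
    # direct fact, or transitively through somebody in between
    return (a, b) in taller_than or any(
        (a, c) in taller_than and is_taller(c, b) for c in people
    )

print([p for p in people if all(q == p or is_taller(p, q) for q in people)])
# ['Betty']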

Folks who talk to Cleverbot claim it is AI.  Is this intelligent behavior?

@Andy, @Andrew: after parsing the inputs until you get a complete understanding, do your systems search a bunch of responses for the one to make?  Or does the input carry with it a proposed response like AIML?

I would think an intelligent chat bot would think up a response regardless of the input, but maybe still consider the input when dreaming up what to say.  So the reasoning process could be running continuously, interrupted by user inputs and outputting whenever it wanted to, that is, not locked into a stimulus/response cycle.
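A toy sketch of that architecture (hypothetical, just to illustrate the idea): the reasoning loop runs continuously, user input is merely an event that interrupts it, and the bot may speak spontaneously.

import queue, threading, time

inputs = queue.Queue()

def reasoning_loop():
    idle = 0
    while True:
        try:
            said = inputs.get(timeout=1.0)   # a user input interrupts the loop
            idle = 0
            print(f"(reacting) you said: {said!r}")
        except queue.Empty:
            idle += 1
            if idle == 3:                    # it outputs whenever *it* wants to
                print("(spontaneous) I was just thinking...")
                idle = 0

threading.Thread(target=reasoning_loop, daemon=True).start()
time.sleep(4)               # stay quiet; the bot eventually speaks on its own
inputs.put("hello there")   # now interrupt it
time.sleep(2)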

 

 
  [ # 18 ]

Sorry for the tangent, but am I missing something with Cleverbot? I have yet to have a single reasonable exchange that isn’t filled with non-sequiturs. When you say, “Often, it can’t answer a puzzle…” do you mean sometimes it can? I couldn’t even get it to handle simple attribution (e.g. “I have a red cup. What color is the cup?”)

 

 
  [ # 19 ]
Gary Dubuque - Aug 1, 2011:

Considering a model,.....

@Andy, @Andrew: after parsing the inputs until you get a complete understanding, do your systems search a bunch of responses for the one to make?  Or does the input carry with it a proposed response like AIML?

.....

Yes and no. The ‘understanding’ is ready when one of two things happens: a) a pattern is matched fully (including previous variables set or grabbed from the conversation or from the bot-user knowledge); b) an intelligent goal has been reached, by fulfilling some automated questions along with several previously matched patterns.
Is that clear?
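Roughly, and only as a hypothetical sketch of those two conditions (the real engine is of course far richer): condition (a) is a full pattern match that can also bind and reuse conversation variables, and condition (b) is a goal whose required slots have all been filled.

import re

memory = {"name": "Andy"}                 # grabbed earlier in the conversation

def pattern_ready(pattern, text):
    # (a) a pattern matched fully, remembering any captured variables
    m = re.match(pattern, text, re.I)
    if m:
        memory.update(m.groupdict())
    return m is not None

def goal_ready(slots):
    # (b) a goal is reached once every required slot has been filled
    return all(s in memory for s in slots)

pattern_ready(r"my email is (?P<email>\S+)", "My email is andy@example.com")
print(goal_ready(["name", "email"]))      # True: the 'contact info' goal is fulfilled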

I don’t include light reasoning at this time, but the plan is to add a minimal OWL-type kernel of knowledge, along with some human-basics (FOAF) and ‘common sense’ math and physics relations. This will help (and I see the need) when evaluating the relationships found and resolving anaphoric coreferences.

When the goal is fulfilled, the section composes a response in a way similar to AIML, with a huge difference: the system has a Natural Language Generation facility, a Flexioner (inflection engine) and a planner with several interesting and useful capabilities, so you may throw in a verb in the infinitive, several time adverbs, and some subject and person elements, and voilà! Out comes a perfectly composed sentence!

My current headache is the Spanish verb conjugations (16 tenses × 2 numbers × 3 persons), mostly underspecified, so I have to grab information about the described event or sentence and drive the Flexioning system correctly, filling in all the defaults. This is weird! (English has only 6 tenses × 2 numbers × 3 persons, yeah!!!, and no major morphological differences in the conjugations, etc.: piece of cake!) But Spanish is really a piece of work. (Some count 110 different conjugation models, others 66 according to the RAE, and nobody has a clue how to conjugate many very common but strange verbs; you have to guess and conjugate as if you knew them!)
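The “filling in all the defaults” step looks roughly like this table-driven sketch, shown for English because its table is tiny (all names are hypothetical; the Spanish tables would run to dozens of conjugation models):

DEFAULTS = {"tense": "present", "number": "sg", "person": 3}

SUFFIX = {  # (tense, number, person) -> suffix for a regular English verb
    ("present", "sg", 3): "s",
    ("present", "sg", 1): "",
    ("present", "pl", 3): "",
    ("past",    "sg", 3): "ed",
    ("past",    "pl", 3): "ed",
}

def conjugate(stem, **features):
    feats = {**DEFAULTS, **features}   # underspecified input falls back to defaults
    return stem + SUFFIX[(feats["tense"], feats["number"], feats["person"])]

print(conjugate("walk"))                # 'walks'  (all defaults)
print(conjugate("walk", tense="past"))  # 'walked'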

 

 

 
  [ # 20 ]

I’ve got some questions about parser technology.  Keep in mind that I don’t have a formal education in NLP; my degree is in Psychology.  (And my IT background is a result of my father being a computer science professor.)  I have done my own parsing, converting books into XML, for example.

What actually is the parser doing?  The example of Andrew’s results showed the parser was breaking out sentences from a text, and perhaps identifying them in dialog.

Is a parser then not a tagger?  Is the parser adding metadata?

What is the relationship of “standard” parsers to semantic web data such as triples?

A simple search for “parser AND RDF” led me to the Raptor RDF Syntax Library http://librdf.org/raptor/ , which mentions “serializers” in addition to parsers.  What in fact is a serializer?  What does it do?

Basic video tutorial suggestions welcomed!  ;^)

 

 
  [ # 21 ]
Marcus Endicott - Aug 10, 2011:

What actually is the parser doing?  The example of Andrew’s results showed the parser was breaking out sentences from a text, and perhaps identifying them in dialog.
Is a parser then not a tagger?  Is the parser adding metadata?

A parser converts serialised data into structured data, as simple as that.

There are a number of different ways that it can do this, ranging from hand-coded recursive descent parsers (inefficient) up to GLR parsers (the most efficient), which are programmed using abstract grammar definitions written in some derivative of Backus–Naur form (ABNF, EBNF, TDL etc). The advantages of using an abstract grammar definition are ease of use, portability (you can easily use the same grammar with different algorithms and programming languages), and the ability to preprocess it into either an oracle which is interpreted, or executable code which is compiled.

As for what can be converted, the answer is anything that can be expressed as a context-free grammar. The example that I already gave is for a discourse analyser and sentence splitter, but it could instead, or in addition, perform part-of-speech tagging and even semantic analysis.

For example, consider the following toy grammar where terminals are in lower case and non-terminals are in upper case:

S := NP VP
NP := Det N
NP := Det Adj N
VP := V
VP := V Comp
Comp := P NP

Det := the
Det := a
N := cat
N := mat
V := sat
V := sits
P := on
P := beside

This could parse a sentence like “The cat sat on the mat.” into

<S>
  <NP><Det>The</Det><N>cat</N></NP>
  <VP>
      <V>sat</V>
      <Comp>
        <P>on</P>
        <NP><Det>the</Det><N>mat</N></NP>
      </Comp>
  </VP>
</S>
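And just to show the grammar-to-code mapping, here is a hand-coded recursive descent parser (the “inefficient” style mentioned above) for the same toy grammar, sketched in Python; the Det Adj N alternative is omitted for brevity:

DET = {"the", "a"}; N = {"cat", "mat"}; V = {"sat", "sits"}; P = {"on", "beside"}

def parse_np(toks, i):
    # NP := Det N
    if i + 1 < len(toks) and toks[i].lower() in DET and toks[i + 1] in N:
        return ("NP", ("Det", toks[i]), ("N", toks[i + 1])), i + 2
    raise SyntaxError(f"expected NP at token {i}")

def parse_s(toks):
    np, i = parse_np(toks, 0)              # S := NP VP
    if not (i < len(toks) and toks[i] in V):
        raise SyntaxError("expected V")
    v, i = ("V", toks[i]), i + 1
    if i < len(toks) and toks[i] in P:     # VP := V Comp, Comp := P NP
        p, i = ("P", toks[i]), i + 1
        np2, i = parse_np(toks, i)
        return ("S", np, ("VP", v, ("Comp", p, np2)))
    return ("S", np, ("VP", v))            # VP := V

print(parse_s("The cat sat on the mat".split()))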


Hope this makes it clearer.

 

 

 
  [ # 22 ]

Marcus asks: Is a parser then not a tagger?  Is the parser adding metadata?

For English, think of a tagger as labelling parts of speech; that’s part 1. Part 2 is generating chunks of text (noun phrases, prepositional phrases, etc.), and then the parser builds the tree structure that shows how they all fit together.
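In Python those two parts look like this with NLTK (my choice of library, not something Bruce mentioned; assumes the usual NLTK data packages are installed):

import nltk

tokens = nltk.word_tokenize("The quick brown fox jumped over the lazy dog")
tagged = nltk.pos_tag(tokens)     # part 1: label parts of speech
print(tagged)                     # [('The', 'DT'), ('quick', 'JJ'), ...]

# part 2: chunk noun phrases; a full parser would then relate the chunks
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>}")
print(chunker.parse(tagged))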

 

 
  [ # 23 ]
Bruce Wilcox - Aug 11, 2011:

Marcus asks: Is a parser then not a tagger?  Is the parser adding metadata?

For English, think of a tagger as labelling parts of speech; that’s part 1. Part 2 is generating chunks of text (noun phrases, prepositional phrases, etc.), and then the parser builds the tree structure that shows how they all fit together.

You could do this in a number of separate steps using for example a statistical tagger, a named entity recogniser, a phrase and sentence chunker, and then a parser to create the parse tree (or parse graph, because there is always some ambiguity involved); or you could do it all in one step using a sufficiently comprehensive grammar like ERG and an efficient scalable parser like the one that I’ve been developing.
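For the record, the all-in-one route can be sketched with spaCy (a library of my choosing, not the poster’s parser): one call gives you tags, entities and the dependency structure.

import spacy

nlp = spacy.load("en_core_web_sm")        # assumes this model is installed
doc = nlp("The cat sat on the mat.")
for tok in doc:
    print(tok.text, tok.pos_, tok.dep_, tok.head.text)  # tag + tree in one pass
for ent in nlp("Bruce Wilcox lives in California.").ents:
    print(ent.text, ent.label_)           # named entity recognition, same pipeline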

 

 

 
  [ # 24 ]

Thanks for your feedback!  Let’s see if I understand correctly.

= = =

1) tagger (http://en.wikipedia.org/wiki/Part-of-speech_tagging)

2) chunker (http://en.wikipedia.org/wiki/Shallow_parsing)

3) parser (http://en.wikipedia.org/wiki/Parsing)

4) parse tree (http://en.wikipedia.org/wiki/Parse_tree)

Question: What is the difference between a Parser and a Parser generator?

= = =

1) statistical tagger (http://nlp.stanford.edu/links/statnlp.html#Taggers)

2) named entity recogniser (http://nlp.stanford.edu/links/statnlp.html#NER)

3) chunker (http://nlp.stanford.edu/links/statnlp.html#NPchunk)

Question: Are there only Noun Phrase chunkers?

4) parser (http://nlp.stanford.edu/links/statnlp.html#Parsers)

5) parse tree or parse graph (http://nlp.stanford.edu/links/statnlp.html#Treebanks)

Question: Would a “statistical tagger” be a “probabilistic parser”? If so, then where do “Semantic Parsers” fit in?

= = =

LinGO English Resource Grammar (ERG) http://www.delph-in.net/erg/ [HPSG]

HPSG http://en.wikipedia.org/wiki/Head-driven_phrase_structure_grammar

Question: In this case, a grammar is a thing; so, what does that thing look like, and what does it actually do (in the simplest terms)?  (I’m less interested in the theory here and more interested in the thing itself.)

 

 
  [ # 25 ]

Question: What is the difference between a Parser and a Parser generator?

A parser generator is not itself a parser of your text; it is a tool that generates parsers from a definition. You declare what you want parsed, or what the structure of the content is, and the parser generator builds you a parser for that.
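A minimal sketch of that workflow, using the Lark library for Python (my example; Coco, mentioned below, follows the same declare-then-generate pattern): you hand the tool a grammar, and it builds the parser.

from lark import Lark

parser = Lark(r"""
    s: np vp
    np: DET NOUN
    vp: VERB pp?
    pp: PREP np
    DET: "the" | "a"
    NOUN: "cat" | "mat"
    VERB: "sat" | "sits"
    PREP: "on" | "beside"
    %import common.WS
    %ignore WS
""", start="s")

print(parser.parse("the cat sat on the mat").pretty())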

In this case, a grammar is a thing; so, what does that thing look like, and what does it actually do (in the simplest terms)?  (I’m less interested in the theory here and more interested in the thing itself.)

The grammar is what you call the ‘structure’ that you give the parser generator to let it do its thing.
Here’s a parser generator I like: Coco. There are some examples on the site; here’s one for C.

 

 
  [ # 26 ]

Marcus wrote:

Question: Are there only Noun Phrase chunkers?

No. They are simply the most common standalone kind. Whenever you have a parser for a human language, it will have done chunking of some kind in order to build the tree.

Question: Would a “statistical tagger” be a “probabilistic parser”.

No, that just describes the algorithm used in the tagger. A tagger can try to figure out what part of speech something is by considering rules of grammar, by guessing based on the odds of words seen in text corpora, or both.

Question: if so then where do “Semantic Parsers” fit in?

This describes a parser (not a tagger).  At the level of parsing algorithms, you have statistical, grammatical, and semantic (what does the bloody thing mean?). There are lots of ambiguities even when you know the POS tags. E.g.,
“He saw the girl in the park with a telescope.”  Chunking will give you “he”, “saw”, “the girl”, “in the park”, “with a telescope”, for example. But where does one attach those prepositional phrases?  Is HE in the park?  Does SHE have the telescope?
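The main readings correspond to different attachment sites for that last prepositional phrase, roughly:

(saw (the girl (in the park)) (with a telescope))   -> he used the telescope
(saw (the girl (in the park (with a telescope))))   -> the telescope is in the park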

 
