

Conversational Expert Systems

Regarding my concepts of “Open Chatbot Standards” for a “Modular Chatbot Framework” ..

I’ve been seized by the idea that a dialog system (chatbot) need not incorporate an expert system, so long as it can call out to consult an expert system, for instance via API.

For example, if we call Apple Siri a dialog system and IBM Watson an expert system, then the availability of a Siri SDK and a Watson API would allow for integration.

At this point, I don't believe there is sufficiently open and robust "AI middleware" available, much less cloud-based middleware, to facilitate "rapid application development", not to mention the lack of "interoperability" among currently available components.

Mark your calendars: I'm planning to expand on these concepts on March 31 in Philadelphia at Chatbots 3.2 ..


  [ # 1 ]

Do any of these provide the kinds of services that you are looking for?


  [ # 2 ]

Indeed, Siri already uses WolframAlpha to answer questions such as these:


  [ # 3 ]

I agree Marcus. From the beginning Skynet-AI has used outside resources to complement the internal knowledge base. It makes little sense to have every bot contain a dictionary, Wikipedia, etc. The only limitation is the effort it takes to interface all the APIs. Each knowledge source requires its own special API interface. Maintaining and updating them all is time consuming and does little to add to the overall knowledge that the bot contains.


  [ # 4 ]

My system too has a Wikipedia agent, Google calc agent, Wolfram Alpha agent, Link Grammar agent, etc. It had a Google translator agent too, but then Google eliminated their free API, so I have to modify the agent, possibly to work with Bing? Ideally, another agent would be able to automate the process of handling changing APIs…

Sketch of a possible learning, scraping agent: teach it by telling it what you're looking for on a specific query; it then figures out how to extract that answer, and generalizes to finding answers to other queries in the same HTML context. If it makes mistakes, you teach it some more, or tell it where it's going wrong.
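The teach-by-example loop above could be sketched roughly like this, using only the standard library. This is a minimal illustration, not an existing API: `PathFinder`, `learn_path`, and `extract` are hypothetical names, and a real agent would need to cope with far messier pages.

```python
# Sketch of a "learning scraper": given one example page and the answer the
# user points out, record the tag path to that answer, then reuse the path
# to extract answers from other pages with the same layout. (All names here
# are hypothetical illustration.)
from html.parser import HTMLParser

class PathFinder(HTMLParser):
    """Records the tag path to every piece of text in a page."""
    def __init__(self):
        super().__init__()
        self.stack = []   # currently open tags, e.g. ['html', 'body', 'div']
        self.texts = []   # (path, text) pairs seen so far

    def handle_starttag(self, tag, attrs):
        self.stack.append(tag)

    def handle_endtag(self, tag):
        if tag in self.stack:
            # pop back to the matching open tag (tolerates sloppy HTML)
            while self.stack and self.stack.pop() != tag:
                pass

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.texts.append(("/".join(self.stack), text))

def learn_path(page, answer):
    """Teaching step: find which tag path holds the known answer."""
    p = PathFinder()
    p.feed(page)
    for path, text in p.texts:
        if answer in text:
            return path
    return None

def extract(page, path):
    """Generalization step: pull whatever sits at the learned path."""
    p = PathFinder()
    p.feed(page)
    return [text for tpath, text in p.texts if tpath == path]

# Teach on one page, then apply to another with the same layout:
page1 = "<html><body><div><span class='a'>42 kg</span></div></body></html>"
page2 = "<html><body><div><span class='a'>17 kg</span></div></body></html>"
path = learn_path(page1, "42 kg")   # -> 'html/body/div/span'
print(extract(page2, path))         # -> ['17 kg']
```

The "teach it some more" step would amount to re-running `learn_path` on a page where the old path failed, and keeping a set of learned paths per query.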


  [ # 5 ]

It’s already been done. Be sure to watch all the videos of this system in action.

The goal of PLOW (Procedure Learning on the Web) is to build a system with which a user can teach the computer to perform tasks on the web. PLOW learns from explicit demonstration of the task combined with natural language instruction. The natural-language play-by-play provides key information that allows rapid and robust learning of complex procedures, including conditionals and iteration, in one short session. PLOW demonstrates the power of an integrated approach to learning, combining deep natural language understanding, reasoning, and machine learning.


  [ # 6 ]

I haven’t come across PLOW (Procedure Learning on the Web) before.

I found some interesting videos about PLOW, including “PLOW iphone”, here:

I’ve also put together a webpage analyzing the Google Scholar references to PLOW here:

PLOW seems to be classed together with “Web Macro” systems.


  [ # 7 ]

Yeah, I’d like to use PLOW to teach an agent to parse the TRIPS parser output automatically… (Note that it gets “Fruit flies like a banana” wrong.)

But PLOW doesn’t seem to be available for download. Neither does the TRIPS parser.

So maybe I can start with the Stanford Lexparser:

> parse: John ate an apple.
(ROOT
  (S
    (NP (NNP John))
    (VP (VBD ate)
      (NP (DT an) (NN apple)))
    (. .)))

> The subject is John.
[Agent searches for “John” in the parse tree and associates the tags next to it with “subject”]

> What is the subject of “John ate an apple”?
[Agent tells itself to “parse: John ate an apple”, then searches through the parse tree for the tags associated with “subject”, and hypothesizes that this is the answer to the question.]

> What is the subject of “The philosopher ate an apple”?
[Agent doesn’t find the same sequence of tags associated with “subject”, so it gets the answer wrong.]

> The subject of “The philosopher ate an apple” is “The philosopher”
[Agent now associates the DET-NN tags with “subject”.]
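A minimal sketch of this teach-and-ask loop over the bracketed parse might look like the following. The helper names are hypothetical, and the regexes are deliberately naive; notably, they reproduce the failure described above when the learned tag sequence doesn't match a new sentence.

```python
# Sketch: learn which tag sequence wraps a taught answer word in a bracketed
# parse, then search for that same sequence in later parses. (Hypothetical
# helper names; a real agent would need something more robust than regexes.)
import re

def learn_tags(parse, answer_word):
    """Teaching step: record the tag sequence that wraps the answer word."""
    m = re.search(r"((?:\(\S+ )+)" + re.escape(answer_word) + r"\)", parse)
    return re.findall(r"\((\S+)", m.group(1)) if m else None

def apply_tags(parse, tags):
    """Question step: find whatever word the learned tag sequence wraps."""
    pattern = "".join(r"\(%s " % re.escape(t) for t in tags) + r"([^\s()]+)\)"
    m = re.search(pattern, parse)
    return m.group(1) if m else None

john = "(ROOT (S (NP (NNP John)) (VP (VBD ate) (NP (DT an) (NN apple))) (. .)))"
phil = ("(ROOT (S (NP (DT The) (NN philosopher)) "
        "(VP (VBD ate) (NP (DT an) (NN apple))) (. .)))")

tags = learn_tags(john, "John")   # -> ['ROOT', 'S', 'NP', 'NNP']
print(apply_tags(john, tags))     # -> John
print(apply_tags(phil, tags))     # -> None: DT-NN subject, needs re-teaching
```

Re-teaching would just mean calling `learn_tags` again on the failed sentence and keeping both tag sequences for "subject".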


Actually, it would probably be easier to use Stanford’s dependency relations to extract subject-verb-object:

> dependencies: John ate an apple.
nsubj(ate-3, John-2)
root(ROOT-0, ate-3)
det(apple-5, an-4)
dobj(ate-3, apple-5)

Now the agent could associate “subject” in a query with “nsubj”.

If first taught “The verb in ‘John ate an apple’ is ‘ate’”, the agent could associate “what is the verb” with the “root” tag.
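That association could be sketched as a simple lookup from query words to relation tags. `parse_deps`, `LEARNED`, and `answer` below are hypothetical names for illustration; in a real agent the `LEARNED` table would be filled in by the teaching dialogue rather than hard-coded.

```python
# Sketch: parse Stanford-style typed-dependency lines with a regex, then map
# query words ("subject", "verb") to the relation tags the agent has been
# taught ("nsubj", "root"). Hypothetical illustration, not an existing API.
import re

# One typed-dependency line, e.g. "nsubj(ate-3, John-2)"
DEP_LINE = re.compile(r"(\w+)\((\S+)-\d+, (\S+)-\d+\)")

def parse_deps(output):
    """Turn dependency lines into (relation, head, dependent) triples."""
    triples = []
    for line in output.splitlines():
        m = DEP_LINE.match(line.strip())
        if m:
            triples.append(m.groups())
    return triples

# What the agent has been taught so far, as described above:
LEARNED = {"subject": "nsubj", "verb": "root", "object": "dobj"}

def answer(query_word, dep_output):
    """Return the dependent of the relation associated with the query word."""
    rel = LEARNED.get(query_word)
    for relation, head, dependent in parse_deps(dep_output):
        if relation == rel:
            return dependent
    return None

deps = """nsubj(ate-3, John-2)
root(ROOT-0, ate-3)
det(apple-5, an-4)
dobj(ate-3, apple-5)"""

print(answer("subject", deps))  # -> John
print(answer("verb", deps))     # -> ate
```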

I don’t know how to correct the following misparse, though:

> dependencies: Fruit flies like a banana.
nsubj(flies-3, Fruit-2)
root(ROOT-0, flies-3)
det(banana-6, a-5)
prep_like(flies-3, banana-6)

Actually, I have an idea, but it would involve totally rewriting the parser’s output. I tried something like this with Link Grammar, and it was so complicated I don’t think I could make it work now. So I’m trying to find a way that’s easier to remember months (or years) later…

