AI Zone
Task-oriented AI versus conversational
  [ # 16 ]
Raymond Lavas - Nov 29, 2011:
I’m not sure if anyone else has seen this, but we have talked about this here.


Reminds me of Marcus Hutter’s AIXI model.

Laura Patterson - Nov 29, 2011:

The bottleneck in processing multiple agents is data acquisition. Take Apple’s Siri: every query triggers a request to the database server. For Siri to perform a multiple-agent query and process the output into an intelligent reply would require multiple requests, with multiple data packets returned to the client application.

Cloud-based knowledge bases are not practical when parallel processing is involved.

I think services like Google, Siri, and Watson prove that large-scale parallel processing can be a good approach. The problem is the vast amount of resources required.

Without those resources, a more nimble approach can lead to better results.
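To make the multiple-agent query concrete, here is a minimal sketch (in Python, with hypothetical stand-in agents) of fanning one user query out to several back ends in parallel and collecting the replies, rather than issuing the requests one after another:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for remote knowledge services; in a real system
# each of these would be a network request to a different back end.
def weather_agent(query):
    return "Sunny, 22C"

def calendar_agent(query):
    return "Meeting at 3pm"

def query_all_agents(query, agents):
    """Fan a query out to every agent in parallel and collect the replies."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = {name: pool.submit(fn, query) for name, fn in agents.items()}
        return {name: f.result() for name, f in futures.items()}

agents = {"weather": weather_agent, "calendar": calendar_agent}
replies = query_all_agents("what's my day like?", agents)
```

The slow part in practice is exactly the point made above: each agent call is a round trip to a server, so the total latency is bounded by the slowest agent rather than the sum of all of them.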


  [ # 17 ]


Yes, I am very aware of some technologies that could be applied to the communication problem. The Unified Field is the network of the universe, and Quantum Entanglement would be a great carrier in which to synchronize the data.

Am I getting warm? smile


  [ # 18 ]
Laura Patterson - Nov 29, 2011:


Am I getting warm? smile

I knew it… The felines have won the race. Dr. Lisa Randle and a host of other females are ahead on this. I feel a bit impotent when I look around and see a bunch of good-looking gals taking the lead on artificial intelligence and bioinformatics…

But then we men take a lot of time to let the other half of our species THRIVE….
it’s looking good for the ladies now, so let’s just work together… If the Arabs will just loosen-up a bit and let their better half vote, drive and go to school, it will look better for all of us in the end.

Personally, I have decided that gender issues/non-issues should become the first and cardinal criterion in A.I.
The chicken comes before the egg.

That I’m sure of now.

Best wishes to your success Laura.
Go FORWARDS and be strong.



  [ # 19 ]

@Robert and Laura: Good point. I must admit that I haven’t encountered that yet, since I don’t use web services as modules. However, there are some ways to work around this problem.

Last Saturday I heard a talk about the quality of web services. To calculate this quality, they used (among other factors) the quality of the results, but also the processing speed. If a module provides good results but is always very slow, its overall quality score will be lower.
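One simple way to formalize that trade-off is a weighted blend of result quality and speed. The weights and the latency cutoff below are illustrative assumptions, not values from the talk:

```python
def service_quality(feedback_score, latency_s, max_latency_s=5.0, w_feedback=0.7):
    """Hypothetical quality metric: blend result quality (0..1) with speed.
    A module with good answers but poor speed still scores lower overall."""
    speed_score = max(0.0, 1.0 - latency_s / max_latency_s)
    return w_feedback * feedback_score + (1 - w_feedback) * speed_score

# Same answer quality, very different response times:
fast = service_quality(feedback_score=0.9, latency_s=0.5)
slow = service_quality(feedback_score=0.9, latency_s=4.5)
```

With these weights the fast module scores 0.90 and the slow one 0.66, which matches the intuition that a consistently slow module should rank lower even when its answers are good.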

A way around this could be incremental processing: the system does not wait for the input to be complete, but starts processing it incrementally, piece by piece as it becomes available. The computer can then do useful work while the user is still typing.
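The idea can be sketched in a few lines (Python; the keyword set and the eager tagging are illustrative, a real system would launch lookups for each partial hypothesis):

```python
def incremental_parse(token_stream):
    """Process input piece by piece as it arrives, instead of waiting
    for the full utterance. Here we just tag topic keywords eagerly;
    each iteration is work done while the user is still typing."""
    keywords = {"weather", "time", "news"}
    tokens = []
    found = []
    for token in token_stream:   # token_stream yields words as they arrive
        tokens.append(token)
        if token.lower() in keywords:
            found.append(token.lower())   # e.g. start a lookup here already
    return " ".join(tokens), found

utterance, topics = incremental_parse(iter(["What", "is", "the", "weather", "today"]))
```

By the time the final word arrives, the "weather" lookup could already be in flight, hiding most of the back-end latency behind the user's typing time.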

(By the way, sorry for the late response here, but I’ve been kinda busy lately. As of two days ago I’m officially a Doctor, after successfully defending my PhD thesis, yay :D)


  [ # 20 ]
Mark ter Maat - Dec 2, 2011:

(By the way, sorry for the late response here, but I’ve been kinda busy lately. As of two days ago I’m officially a Doctor, after successfully defending my PhD thesis, yay :D)

Congratulations Mark, that’s brilliant!

Your subject was something to do with human-computer interaction, wasn’t it?



  [ # 21 ]

Supposedly, one can cobble together a “supercomputer” using Amazon’s cloud services.

I’m still in the middle of investigating cloud databases.  Right now, I’m leaning toward semantic databases.  However, the best (free) ones are still in beta, so I haven’t had the chance to actually test them yet.

I’ve got this concept of being able to cobble together a “Watson Junior” in the cloud.  This concept involves my “Open Chatbot Standards” or “Open Chatbot Framework”.

I’m now looking for AI “middleware” in the cloud.  The closest things I’ve found are GateCloud and JadexCloud (which doesn’t exist yet).  With cloud-based AI middleware, one could theoretically cobble together many different cloud APIs and thereby distribute a multi-agent architecture throughout the cloud.  Actually, I believe Siri is built in this way.

Sooo, I need a simple dialog system (interpreter) with a basic stock personality, in the cloud — in other words, SaaS — that allows API access into (and out of) its knowledge base, in order to reach the semantic expert system.

I would call this configuration a “Conversational Expert System”.

Please let me know of any potential problems you might see with this!

You can check my progress at .


  [ # 22 ]
Mark ter Maat - Nov 15, 2011:

I’m a great fan of your third method: splitting the input into parallel processes. This makes it modular (you can simply add or remove a module), and since each module is specialized in its own task, each one is much simpler.

What could perhaps be useful is that each module can suggest a response to the user’s question, along with a confidence score that tells you how sure it is that this really answers the user’s question. For example, if the user’s question matches a stored question-answer pair exactly, the confidence will be very high, and vice versa. This does mean that each component needs some extra code to determine its confidence score.

More difficult still (but highly useful) is to add context to the confidence score.  For example, if the conversation just prior to “What time is it”/“What is time” involves metaphysics, then confidence should be higher in an answer of “Time is a system to measure…”, however if the prior conversation was more general, such as “what are you doing right now” or “I am going to the store” then confidence should be higher in an answer of “It’s 12:40pm.”

One could argue, however, that “What time is it” is a completely different question than “What is time” based purely on the grammar.  The first is a question in which only an answer like “It’s 12:40pm” would make sense.  However if your algorithms for determining the meaning of a sentence aren’t solid enough, then you might have to employ a “trick” to discern meaning like the ones we’re describing in this thread.

What I describe above probably wouldn’t work well for a Turing Test questioner, as a Turing conversation can be somewhat random since its purpose is not necessarily “to have a conversation”, but I think context-based confidence scoring is useful/important in real conversations.
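The confidence-scored modules described above, including the context adjustment for the “What time is it” / “What is time” example, can be sketched like this (Python; the modules and scores are illustrative assumptions, not anyone’s actual implementation):

```python
def time_module(question, context):
    """Proposes a clock answer; less confident in an abstract discussion."""
    conf = 0.9 if question.lower().startswith("what time") else 0.2
    if "metaphysics" in context:
        conf -= 0.3   # a clock reading fits less well after metaphysics talk
    return "It's 12:40pm.", conf

def definition_module(question, context):
    """Proposes a definition; more confident in an abstract discussion."""
    conf = 0.8 if question.lower().startswith("what is time") else 0.1
    if "metaphysics" in context:
        conf += 0.1
    return "Time is a system to measure...", conf

def answer(question, context, modules):
    """Each module proposes (response, confidence); pick the most confident."""
    return max((m(question, context) for m in modules), key=lambda rc: rc[1])

modules = [time_module, definition_module]
reply, conf = answer("What is time", ["metaphysics"], modules)
```

With a metaphysics context, “What is time” selects the definition; the same framework hands “What time is it” in a neutral context to the clock module — the arbitration logic never changes, only the scores do.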




  [ # 23 ]

I have been working on a module that pre-qualifies the user input to determine whether it should be processed as a task or just a conversational reply. This has been challenging, since there are several levels of processing and parsing involved. As suggested by Ntate, referencing the user’s previous input can be a great help in determining the context of the question. This is also true when the bot needs to change to a relevant topic.

The chat log is readily available in the bot’s active memory and can be scanned and parsed as needed for any reference that may be useful in this determination. The longer the chat has continued, the more useful this data becomes. This is the advantage of client-side processing, since variables are updated constantly and are easily accessible to the script.
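As a rough illustration of such a pre-qualifier (a sketch in Python, not Laura’s actual module — the verb list and the follow-up rule are assumptions), the input is first tested directly, then the chat log is consulted to resolve ambiguous follow-ups:

```python
TASK_VERBS = {"set", "remind", "play", "search", "call", "open"}

def classify(user_input, chat_log):
    """Decide whether input is a task request or conversational.
    chat_log is a list of (utterance, label) pairs, newest last."""
    words = user_input.lower().split()
    if words and words[0] in TASK_VERBS:
        return "task"
    # Ambiguous follow-ups like "do it again" inherit the previous label,
    # which is exactly where scanning the chat log pays off.
    if words[:2] == ["do", "it"] and chat_log:
        return chat_log[-1][1]
    return "conversation"

log = [("set a timer for 5 minutes", "task")]
label = classify("do it again", log)
```

This also shows why a longer chat log helps: the more labeled history is available, the more follow-ups can be resolved without asking the user to restate the request.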


  [ # 24 ]
Laura Patterson - Dec 29, 2011:

The chat log is readily available in the bot’s active memory and can be scanned and parsed as needed for any reference that may be useful in this determination.

I take this to mean that your program does not perform the full range of its input analysis upon first receiving the input. How does it determine whether the analysis methods it has performed are sufficient to generate an output?

Or do you mean that the bot can re-analyze an input, using later input to inform the analysis? If so, in what specific ways? Pronoun referencing? Setting the topic of conversation? Determining whether a previous input was a request for action? Can you provide a list of which methods are available and how it determines when/whether to employ them?

