Output Macros / Interacting with system
 
 

A thought occurred to me as I was working through the tutorial again: let’s say the user asks, “How are you?”

NORMALLY, ChatScript will respond with whatever I have scripted for it. Is it possible (of course it is, but is it possible without a major rewrite or addition to the code), when asked that question or one like it, for CS to poll the sensors on a robot and come up with an environment-based response instead of a canned one? I skimmed the output macros documentation and didn’t see anything that really lets CS talk to the system in a case like this, but again, I only skimmed.

Curiouser and curiouser, Richard

 

 
  [ # 1 ]

Also along these lines, I’m trying to wrap my head around multiple users in the case of a robot. Let’s say the robot is in a room with more than one person and there’s a conversation going on. I THINK the answer is running CS as a server, but I’m looking for confirmation, and I’ve posted it here because it’s another example of interacting with the system, as in the original post.

So Inga (the robot) is in a room with Richard, Bruce and Dave. Inga has facial recognition, maybe voice identification, and definitely speech recognition and text-to-speech. Bruce asks, “How are you?” Inga recognizes that the voice is Bruce, the “BruceUser” part of the server kicks in, and CS responds appropriately to Bruce. Then Dave says to Inga, “What have you been up to lately?” and CS recognizes that the voice (or face) is Dave and checks the Dave logs before responding. So we’re back to interacting with the system before responding, but CS also “sees” three different conversations occurring: one with Bruce, one with Dave, and one with Richard.

I’m wondering whether there is a way, or should be a way, to use ChatScript on a robot so that it treats all conversation as one conversation, as opposed to separate conversations (as when using CS on the web to serve multiple users individually), with CS just “remembering” what has or hasn’t been said to each person. Y’know, kind of like we silly humans do.

regards, Richard

 

 
  [ # 2 ]

I use the word NORMALLY in CS documentation because simple control scripts of the ilk I supply do normal things. Since one can write one’s own control scripts, however, how the system behaves can be quite unusual if you want.

It is easy to script what you asked. But I suppose you are also asking HOW. I have uploaded a new doc to SourceForge: ChatScript External Communications. It splits off the material on this topic from the advanced manual.

If you wrap ChatScript as a subroutine called from an app, then the custom is to send out-of-band information (commands to the avatar) as leading [ ..... ] text, with or without any actual message response text. The wrapper app decodes and executes it.
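The decode step on the wrapper side is simple. Here is a minimal sketch in Python; the command names inside the brackets (nod and so on) are placeholders, since the actual contents are whatever your own script emits:

```python
# Minimal sketch of the wrapper-side decode: split a ChatScript reply into
# out-of-band commands and spoken text, assuming any OOB data appears as a
# single leading [ ... ] block.

def split_oob(reply: str):
    """Return (oob_commands, spoken_text) from a ChatScript reply."""
    reply = reply.strip()
    if reply.startswith("["):
        end = reply.find("]")
        if end != -1:
            oob = reply[1:end].strip()       # e.g. "nod wave"
            text = reply[end + 1:].strip()   # e.g. "Hello, Bruce."
            return oob.split(), text
    return [], reply

commands, speech = split_oob("[ nod wave ] Hello, Bruce.")
for cmd in commands:
    print("execute avatar command:", cmd)    # hand off to the robot layer
if speech:
    print("say:", speech)                    # hand off to text-to-speech
```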

If ChatScript is the dominant partner, then you use the external calls: ^system(), ^import(), ^export(), ^popen(), ^tcpopen().
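For example, with ^popen() a rule can run an external program and read back whatever it prints, line by line. Below is a hypothetical sensor-reading helper such a rule might invoke; the file name readsensors.py, the sensor fields, and the ^ReadLine macro mentioned in the comment are all invented for illustration, so check the External Communications doc for the exact calling form:

```python
# readsensors.py -- hypothetical helper a ChatScript rule might run,
# e.g. something along the lines of ^popen("python readsensors.py" '^ReadLine),
# where ^ReadLine is an output macro of your own that stores each printed
# line into ChatScript variables for use in a reply to "how are you?".

import random   # stand-in for real sensor drivers on the robot


def read_sensors():
    # Replace with actual hardware polling.
    return {
        "battery": random.randint(20, 100),   # percent
        "temperature": 21.5,                  # degrees C
        "people_in_view": 3,
    }


if __name__ == "__main__":
    for name, value in read_sensors().items():
        # One "name value" pair per line keeps the script-side parsing trivial.
        print(name, value)
```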

The :functions debug command displays all of the API routines CS has, and at the bottom of the list is a section entitled External Access, which is the area you are concerned with.

 

 
  [ # 3 ]

As for multiple users in a room, it’s entirely up to you whether the lone bot communicates via separate user logins and so keeps the conversations distinct, or whether it munges them all together as coming from a single conversation. You can add out-of-band information about the human source, so you can choose in script whether or not to care who said it.
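From the client side, the two choices look roughly like this. This is only a sketch, assuming the usual server volley format of username, botname, and message separated by null bytes and the default port; check the client/server documentation for your version, and note that the [ speaker=... ] format is made up, since the bracket contents are whatever your control script expects:

```python
import socket

CS_HOST, CS_PORT = "localhost", 1024   # assumed default ChatScript server port


def volley(user: str, bot: str, message: str) -> str:
    """Send one volley to the ChatScript server and return its reply."""
    payload = f"{user}\0{bot}\0{message}\0".encode("utf-8")
    with socket.create_connection((CS_HOST, CS_PORT)) as s:
        s.sendall(payload)
        return s.recv(4096).decode("utf-8")


# Option 1: separate conversations -- each recognized speaker gets a
# distinct login, so CS keeps their histories and facts apart.
print(volley("Bruce", "Inga", "How are you?"))
print(volley("Dave", "Inga", "What have you been up to lately?"))

# Option 2: one shared conversation -- a single login, with the recognized
# speaker passed as out-of-band data the script can use or ignore.
print(volley("room", "Inga", "[ speaker=Dave ] What have you been up to lately?"))
```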

 

 
  [ # 4 ]

Great! Thanks for the new doc; I’ll go pull it as soon as I’m done here. In “thinking out loud” mode, after reading your second post I wondered whether it really matters who said what: why not have one continuous conversation with a general “user” that can be more than one person? But now that I think about it a little more, it DOES matter who said what. If I’m in a conversation with someone I’ve known for years and someone I’ve just met, then my statements, questions and responses are all contingent on whether I’m speaking to one or the other or both. This is a curious challenge that requires more thought on my part.

regards, Richard

 

 
  [ # 5 ]

I see myself using a combination of both embedding and calls. Not having really spent any time with the planning part yet, at the moment I think the best implementation for me is for CS to be embedded in my “master program” (the AI), because there will be many things for Inga to do besides chat that require the AI I have already mostly written; when chat is appropriate, a call from the main program to CS will do the trick. I’d also want to make calls from CS for things like “How are you?”, which would require a poll of the sensors and the “mood” environment set by the AI.
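Roughly the shape I’m picturing, just to make the division of labour concrete. Everything here is a placeholder: chat_with_cs() would wrap whichever interface I settle on (embedded call or server volley), and looks_like_chat() stands in for the AI’s own intent decision:

```python
def chat_with_cs(user: str, text: str) -> str:
    """Placeholder for invoking ChatScript (embedded call or server volley).
    On the CS side, a rule for "how are you?" could call out to a sensor
    helper and fold the readings and mood into its reply."""
    return "[ ] canned reply"   # stub


def looks_like_chat(text: str) -> bool:
    return True                 # stand-in for the AI's intent classification


def do_robot_task(text: str) -> str:
    return "task acknowledged"  # navigation, manipulation, etc.


def handle_input(user: str, text: str) -> str:
    """Master-program dispatch: chat goes to CS, everything else stays
    with the AI that already exists."""
    if looks_like_chat(text):
        return chat_with_cs(user, text)
    return do_robot_task(text)


print(handle_input("Bruce", "How are you?"))
```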

Again, this requires a great deal of further thought on my part and some proof of concept work on the integration with the AI.

regards, Richard

 

 
  [ # 6 ]

Embedding probably makes the most sense. For “how are you?”, that could be done from cached info in CS: just use the out-of-band mechanism to pass in state info as it changes (assuming slow change), and the bot can remember it.
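In other words, the host pushes state in only when it changes, and the script side caches it in variables it can mention later. A rough sketch of the host half; send_to_cs() stands in for whatever transport you use, and the battery/mood fields are invented:

```python
# Push slowly changing state into CS as out-of-band data whenever it
# actually changes, so the bot can answer "how are you?" from cached values.

def send_to_cs(message: str) -> str:
    """Placeholder for the embedded call or server volley."""
    return ""

last_state: dict = {}

def push_state_if_changed(current: dict) -> None:
    global last_state
    if current != last_state:
        oob = " ".join(f"{k}={v}" for k, v in current.items())
        send_to_cs(f"[ {oob} ]")   # OOB only, no visible user text
        last_state = dict(current)

# On the script side, an OOB-handling rule would copy these into user
# variables (e.g. $battery, $mood) that a "how are you?" reply can use.
push_state_if_changed({"battery": 83, "mood": "cheerful"})
```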

 

 