
ChatScript 3.0
 
 

BEFORE YOU BEGIN CONVERSION, BACK UP YOUR EXISTING SYSTEM COMPLETELY (including exe, dict, livedata, etc). That way, if you get stuck partway through and are awaiting an answer from me, you won’t be dead in the water.

Some esoteric functionality has been removed and/or added. Most users are unlikely to notice this, because they use basic ChatScript. But most users will probably see compile errors on their old code, because the :build command is much more stringent now.

In particular, #xxxx comments will trigger errors because the comment text must now be separated from the #. Most fixes needed to update to the new system amount to a global edit changing all instances of something at once, and should be easy. Pay attention to warning messages involving missing concepts: perhaps the concept has been renamed, or you never noticed that you were referring to something erroneously. If you get a missing-concept warning, run the system and think of a word that would belong to that concept. Type :up xxxx and look at the sets shown for that word, and see which is likely your missing concept.
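To illustrate the stricter comment handling, here is a sketch of what now passes and fails under :build (the exact error message may differ):

```chatscript
# this comment compiles: whitespace separates the text from the marker
#this old-style comment now triggers a :build error
```

And for a missing-concept warning, typing something like :up orange at the prompt lists the sets containing that word, which usually reveals the concept you meant.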

Also, for now, the Part-of-speech/Parser code is not present. It was not highly reliable so few could have depended on it. It will be upgraded and re-released later. Currently the system will report all part-of-speech potentials for a word, which is what it would have done when it failed to parse correctly.

I am not going to enumerate what has been removed, added, or renamed. RTFM: you likely haven’t been reading the manual as updates happened, and so have missed a bunch of features along the way. Now is the time to refamiliarize yourself with ChatScript and discover its new abilities. I will point out that the system now has “rule tags”, which support greater introspection, and an embedded debugger.

Because this is a significantly changed release, there are likely new bugs or old bugs rejuvenated. Faster service and less noise will happen if you directly email me a bug report. If you have a previous bug you were waiting to be fixed and it still exists in this version, email me about it.

If you have built a modified version of the ChatScript engine for your own purposes, you may have difficulty swallowing this one. Maybe I can give advice if you tell me what you’ve done, but mostly I don’t worry about what happens to people who are not staying with the open source version.

My internet connection is currently flaky and I’m in transition from Italy to the UK, so don’t expect rapid responses this week.

Bruce

 

 
  [ # 1 ]

I’m loving the “quibble”

 

 
  [ # 2 ]

Good to see the new release

 

 
  [ # 3 ]

Looking forward to checking out the new release Bruce. Thanks for all of your work.

regards, Richard

 

 
  [ # 4 ]

Wonderful to see that everything is prepared for a German ChatScript,
especially via the new flags in dictionarySystem.h.

A very good new introduction is the ARBOR paper,
which makes the work clearer for creative writers like me.

I will study the new files and documentation for some days
and will be back with some gifts in a few days, too.

Best

Andreas

 

 
  [ # 5 ]

Andreas, what is the ARBOR-paper to which you refer?

regards, Richard

 

 
  [ # 6 ]

Hi Richard,

You can find the ARBOR paper in the ChatScript 3.0 Documentation folder.

Regards
Andreas

 

 
  [ # 7 ]

Bruce,
I enjoyed the “Making it real” paper and background on Angela.

 

 
  [ # 8 ]

I understand from the documentation that ChatScript uses WordNet information for its ontology. Does this allow ChatScript to understand, e.g., that if I scripted a response for “what does this widget cost”, a question like “how much do I have to pay for this widget” should also get that answer?
I hope my example made sense.

 

 
  [ # 9 ]

Sadly, no. WordNet means it knows the parts of speech of words, and an ontology of what words might imply, but USING that information and the idioms of meaning requires you to script what meanings you hunt for. Nothing comes automatically.

 

 
  [ # 10 ]

Having a dictionary does not give you meaning automatically. Consider that meaning is only an agreement on what words and their order mean. We agree that “dog” is tied to a physical class of objects, and it has a whole lot of other meanings in WordNet as well, including an ugly-looking woman. We agree that words should follow the grammar of English, but we accept things when they don’t. We agree on special meanings given to specific phrases called idioms. And depending on what we intend to do with the response to an input, we might have “I like juice” mean something different from “I really absolutely like juice”. So you have to decide for yourself which inputs you are going to handle, what they mean, and what you will do with them.

For questions like “what is your favorite”, I have a topic with all sorts of ways of phrasing that question, which can be invoked for anything. “Which city in France do you prefer” is something that would be handled by that topic, using a table lookup for “French city”. “What color is your hair” is handled by a hair-color macro, given the argument of whose hair is involved, so different rules would just invoke that macro as their pattern and have unique answers as their output.

Similarly, “what does xxx cost” would either be represented as a topic doing fact lookup, or as a macro that different rules could call, and in either case all the forms I could think of that mean “what is the price of” would be represented. But it’s not automatic. It’s scripting the concept.
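To make the macro approach concrete, here is a rough sketch in ChatScript script (the names ~price_words and ^price_of are my own illustrations, not part of the distribution):

```chatscript
# hypothetical concept grouping ways of asking about cost
concept: ~price_words ( cost price charge "pay for" )

# hypothetical output macro that different rules can share as their answer
outputmacro: ^price_of ( ^item )
  The ^item currently sells for ten dollars.

# a responder whose pattern catches one phrasing and reuses the macro
u: ( what * ~price_words * widget ) ^price_of(widget)
```

Each phrasing you want to cover gets its own pattern, but they can all funnel into the same macro or fact-lookup topic.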

 

 
  [ # 11 ]

Just for my understanding.

E.g. the word Orange can have multiple meanings like a city, a fruit, a colour

If the engine saw a sentence saying “I live in Orange”, it could infer from the association with the word “live” that in this case Orange is most likely the city, not the fruit or the colour. If it could understand this concept, it could match the input to, e.g., responses in a topic about cities instead of the topic about colours.

Certainly some customisation would be necessary depending on what a bot is supposed to do, but for most bots the majority of those general English associations should stay the same. This should increase the bot’s precision. But from your description, ChatScript doesn’t do that. May I ask why? Is what I said just theory, with little impact in practice?

 

 
  [ # 12 ]

ChatScript does not use probabilities. Once you open that door, you need to return multiple choices (the different probabilities), which slows things down dramatically. You can already match things with appropriate responders: i.e., if you have a list of cities, then u: (I * ~live * ~city) can match. Patterns supply the context for how to interpret “orange”. This means YOU define the likelihood of a meaning by putting the word in a pattern context.
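Fleshing out that responder, a minimal topic might look like this (the topic name and city list are illustrative, and ~live here stands for a scripted concept of residence verbs):

```chatscript
# illustrative concept so Orange can match as a place name
concept: ~city ( Orange Paris London "New York" )

topic: ~hometown ( live city hometown )

# _ memorizes the matched city; '_0 echoes the user's original word
u: ( I * ~live * _~city ) So you live in '_0 . What is it like there?
```

The pattern, not a probability model, is what disambiguates “Orange” as a city here.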

 

 
  [ # 13 ]

What do you mean by having to return different probabilities? To the end user, or just internally in the engine, to evaluate which one is most likely and output that to the user?

I understand it can be nice to have the ability to define everything, but it also means one has to define everything, which is a lot of work. I have now seen two commercial products that provide this probability evaluation and do it with enough speed even on a standard computer. Admittedly, they aren’t open source, or cheap in general.

 

 
  [ # 14 ]

Let’s go back to your example. Orange has multiple meanings. How are you going to decide, in the engine itself, the probability that
“I live in Orange” means it is a city? Does the engine know that Orange is a city? If it does, does the engine know all the words that involve cities, such that it can create a probability that Orange is a city? Feel free to describe how the engine would compute the probability that we are talking about the city Orange.

 

 
  [ # 15 ]

What 2 commercial products were you referring to?

 
