IBM Watson would be an example of “internal”, and I believe they had to do a tremendous amount of “pre-processing” to create knowledge from data.
I recall watching a video about WATSON where one of the engineers stated “Watson ITSELF doesn’t understand a sentence directly”.
What that means exactly I’m not sure. But I would imagine that if it COULD, without, as you say, a lot of pre-processing, we would KNOW ABOUT IT! Operating systems would come bundled with an NLP interface and Google would be history! Unless they bought them, of course.
BTW, I’ve never really liked the term “chatterbots”, though it was certainly an apt description of a generation of conversational agents; however, in the immediate future I expect bots to be more practical, task oriented, and useful to people, hence the current term “assistant”. For this reason, I’ve always felt that the Turing test was a red herring. We don’t need bots that can fool people; we need bots that can help people, along the lines of augmented intelligence.
100% agreement. “Chatbot”, to me, is a ‘cheap’ name. I also like assistant, or ACE (artificial conversational entity).
USER: Will you answer no to this question?
AI: Answer no to this question? Would you do my bidding if I said I would?
So it did NOT answer no… thus ‘no’ was the right answer, but again, Skynet couldn’t actually say that; doing so would have made the answer incorrect.
So, to conclude: it is not about making chatbots accurate, it’s about making them more human-like…
What do you say?
Well I guess that depends on your objective. To pass a Turing Test, yes. For me though, I want accuracy - I’m thinking more like Marcus is.
~ ~ ~ ~
Thinking about this a bit more… the whole accuracy thing. I think I will have options for how accurate the bot will be. Right now Grace can be told things like:
User> John went to a celebration.
User> Did John go to a big party?
and it will, based on context, allow ‘celebration’ to be used interchangeably with ‘party’ (and yes, control whether or not it points out adjectives; if ‘big’ modifying party/celebration is important, see the Grace/Clues thread for some examples).
Now, some people I know are rather ‘anal retentive’ (consult Urban Dictionary lol), and to them there is a fine distinction between almost any two words.
Thus, it should probably be up to the user. Some users are fine with ‘liquid’ and ‘fluid’ being the same, and I’m one of them, but a physicist in a lab would want there to be a clear distinction.
Thus, what is it about? It’s about the end user: know your audience. A ‘Saturday night just for fun’ bot will be, or should be, much more forgiving about interchanging words, and not care much about accuracy. But in a scientific context, you want almost infinite degrees of clarity.
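The per-user strictness idea above could be sketched roughly like this. To be clear, this is a minimal illustration and not Grace’s actual implementation: the `SYNONYM_GROUPS` table, the closeness scores, and the `interchangeable` function are all hypothetical names invented for the example.

```python
# Hypothetical sketch of a per-user "strictness" setting that controls
# whether near-synonyms (e.g. 'celebration'/'party', 'liquid'/'fluid')
# are treated as interchangeable. All names are illustrative only.

# Hand-written synonym groups, each with a rough "closeness" score (0.0-1.0).
SYNONYM_GROUPS = [
    ({"party", "celebration"}, 0.8),
    ({"liquid", "fluid"}, 0.7),
]

def interchangeable(word_a: str, word_b: str, strictness: float) -> bool:
    """Return True if the two words may be swapped at this strictness level.

    strictness near 0.0 = a forgiving 'Saturday night just for fun' bot;
    strictness near 1.0 = a lab/scientific context (exact words only).
    """
    if word_a == word_b:
        return True
    for group, closeness in SYNONYM_GROUPS:
        if word_a in group and word_b in group:
            # Allow the swap only when the words are at least as close
            # as the user's demanded level of precision.
            return closeness >= strictness
    return False

# A casual user's bot accepts 'celebration' for 'party'...
print(interchangeable("party", "celebration", strictness=0.5))  # True
# ...but a physicist's bot refuses to equate 'fluid' and 'liquid'.
print(interchangeable("liquid", "fluid", strictness=0.9))  # False
```

A real system would presumably derive the closeness scores from context or a lexical resource rather than a hand-written table, but the user-facing knob would look the same: one number that decides how fine a distinction the bot insists on.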