

Bragging rights
 
 
  [ # 16 ]

I understand what Denis is saying, and I agree. Human parents (and teachers) regularly tell their students that “I don’t know” is preferable to pretending you do know, so there is a legitimate logic there. Responding to “How long is a road?” with “I don’t know” actually exhibits quite a bit of logic: the bot has to understand that this was an interrogatory, find that it does not have the answer, and respond accordingly. It is definitely a step up in logic from “How long is a road?” > “I love roads, do you love roads?”, which is usually produced when the keyword “road” is used as a trigger and fires a randomly selected response. This isn’t to say that these types of answers are incorrect either; it really depends on what the bot was designed to do. These types of responses are probably more likely to produce a longer conversation when users are there just to have fun, whereas “I don’t know” sounds more like a bot that is programmed (or maybe will be programmed) to learn through NLP.

I’m familiar with this because RICH took a base object “Why?” and a response “Why what?” and from this extrapolated that it could request additional data on any subject that it didn’t understand. In other words, it taught itself to ask for… (dare I say it) “More input”. That raised the question, at least for me: is this intelligence? It produced zero interest at large, and in conversations it usually provoked the user to say something like “You’re stupid”. I finally became so disgusted with the level of intelligence in the conversants that I added a label to the homepage to the effect of “Try and show some appreciation for the fact that you’re talking to a machine that is approximately three years old and has an IQ over 200”. LOL. In essence, the machine was far more intelligent than the majority of the people it was talking with.
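For illustration only (this is not RICH’s code, nor any contest bot’s), here is a rough Python sketch of the two behaviours described above: a keyword trigger that fires a randomly selected canned reply, versus a bot that recognises a question, checks a small knowledge store, and admits “I don’t know” when nothing is found. The keyword table, knowledge entries, and function names are all made up for the example.

    import random

    # Hypothetical data, purely for the sketch.
    CANNED = {"road": ["I love roads, do you love roads?", "Roads are great!"]}
    KNOWLEDGE = {"how long is the nile": "About 6,650 km."}

    def keyword_bot(user_input: str) -> str:
        # Fires a randomly selected canned reply as soon as a trigger keyword appears.
        for keyword, replies in CANNED.items():
            if keyword in user_input.lower():
                return random.choice(replies)
        return "Tell me more."

    def logic_bot(user_input: str) -> str:
        # Recognises a question, looks for an answer, and admits ignorance otherwise.
        if user_input.strip().endswith("?"):
            key = user_input.strip().lower().rstrip("?")
            return KNOWLEDGE.get(key, "I don't know.")
        return "Interesting."

    print(keyword_bot("How long is a road?"))  # e.g. "I love roads, do you love roads?"
    print(logic_bot("How long is a road?"))    # "I don't know."
    print(logic_bot("How long is the Nile?"))  # "About 6,650 km."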

The scoring and categories were a best first attempt to acknowledge that different machines perform better in different areas. If anyone picks the format up, the logic category should probably be expanded to count “I don’t know” (when appropriate) as at least a low level of logic, whereas “Which specific road are you talking about?” would be a step above that. The Judge applied the rules as they are written, but the idea was to have something that ran in short enough intervals that the rules could evolve.

Anyway Denis, I wouldn’t look at this as a “loss”; second place at worst! I’ve spoken with Johnny many times and I saw some interesting things that he did when applying logic. But I’m looking for that, and I’m likely to be more forgiving of the final result when I see the logic being employed. I also agree that language is a problem. Certain things do not translate.

VLG

 

 
  [ # 17 ]
Vincent Gilbert - Jul 22, 2013:

Even though we only had (2) two bots sign up, we decided to go ahead with the contest to see how the format worked.

It’s natural to post something and to then expect everyone to come running, but for a lot of reasons, it’s not practical.  We’ve all seen “regulars” disappear for a time, and most of us lead busy lives and don’t check in here every day.

If you don’t mind a couple of suggestions…

Next time, you might consider compiling a mailing list of sorts, something that would reach the people you’re familiar with who are “bot people”.  You wouldn’t want to email everyone, or to be considered a spammer, but I’m betting there are people who would be interested in your project; they just weren’t aware of it.

There’s a chatbots.org newsletter, and perhaps you could get them to assist you with something related to this forum, such as a contest of sorts, even if there’s no prize.  You might also create your own newsletter.

There are probably some additional solutions, but I think you get the picture… you have to “sell” your idea, and if it’s done through personal contact (I wouldn’t consider a bot-related email as spam) the odds would be in your favor.

 

 
  [ # 18 ]

I must apologise to Vince. I had said I would enter Mitsuku, but with the Loebner Prize coming up (plus a couple of other bot-related things), I simply didn’t have time. Good to see it still went ahead though.

 

 
  [ # 19 ]

My opinion pretty much matches Denis’, because I’m working on that kind of program. I do like the setup of testing chatbots in different areas, because there is such a huge difference between chatbots created for entertainment and those created for logical reasoning, etc.

That said, I don’t find “I don’t know.” a satisfying answer, as an unintelligent chatbot may easily give that same answer by default. Even “I don’t know how long a road is.” could in theory be a fairly simple pattern-template response (see the sketch at the end of this post). I have a program that gives this sort of answer, and I know how smart the underlying NLP process can be. When I looked at the lower-scoring chatbots in the Loebner transcripts, I noticed the same behaviour and imagined that they too must have been fairly intelligent. The problem is that I couldn’t tell for sure.
To judge logic and intelligence well, the questions need to give the chatbot some knowledge or context to work with, or rely on things that are such basic common knowledge that every chatbot would know them. Tigers and exotic fruits? No. Cats and apples? Possibly. There is also room for improvement on the creator’s part. One might say “I don’t know.”, or one might say “That depends. Where do you intend to drive to? May I suggest a road map?”.
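To make the pattern-template point concrete, here is a hypothetical rule sketched in Python rather than AIML (none of the bots discussed necessarily work this way): a single wildcard pattern can echo the user’s own words back as “I don’t know how long X is.” with no understanding behind it, which is exactly why that answer alone can’t prove intelligence.

    import re

    def pattern_template(user_input: str) -> str:
        # One wildcard rule: "HOW LONG IS *" -> "I don't know how long * is."
        match = re.match(r"how long is (.+?)\??$", user_input.strip(), re.IGNORECASE)
        if match:
            return f"I don't know how long {match.group(1)} is."
        return "I don't know."

    print(pattern_template("How long is a road?"))    # "I don't know how long a road is."
    print(pattern_template("How long is the Nile?"))  # "I don't know how long the Nile is."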

 

 
  [ # 20 ]

@Steve
No worries grin

 
