
A New Challenge, And a New Contest
  [ # 16 ]

Sorry CR, I wondered if that term was used in the US grin


  [ # 17 ]

It looks bad to judge when you have a bot in the contest.

The CBC has had a drop in entries each year. It used to have over 40 to start with; now, even with prize money, there were only half that.

It would be a mistake to make it any more complex to enter, or to add a long list of rules to follow. The great thing about the CBC was that it was so easy to compete in.

It might bring in more bots if there were different categories: conversational, function, learning bot, etc.

Too many judges doesn’t always work either; anyone remember the year Ehab had seven and some just never submitted their results?


  [ # 18 ]

One of the things I would like to see is more “learning” type questions in a contest rather than just an interview type conversation on its favourite food, what pets it has or what it is wearing.

Things like, “I have a blue shirt. What colour is my shirt?” type questions.
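A question like that boils down to storing a fact when the user states it and recalling it when asked. Here is a minimal sketch of the idea (hypothetical Python using plain regular expressions; the `FactMemory` class and its patterns are invented for this example, not taken from any real bot engine):

```python
import re

class FactMemory:
    """Tiny store-and-recall loop for "learning" type questions."""

    def __init__(self):
        self.facts = {}  # e.g. {"shirt": "blue"}

    def respond(self, line):
        # "I have a blue shirt." -> remember the colour of the shirt
        m = re.match(r"i have an? (\w+) (\w+)", line, re.I)
        if m:
            colour, thing = m.groups()
            self.facts[thing.lower()] = colour.lower()
            return f"Nice, a {colour} {thing}."
        # "What colour is my shirt?" -> recall it
        m = re.match(r"what colou?r is my (\w+)", line, re.I)
        if m:
            thing = m.group(1).lower()
            if thing in self.facts:
                return f"Your {thing} is {self.facts[thing]}."
            return f"You haven't told me about your {thing}."
        return "Tell me more."
```

A real bot would of course need far broader patterns and a way to persist facts between sessions; this only shows the store-then-recall loop that such a question tests.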


  [ # 19 ]

“learning” type questions

That would be nice. I think of all the time I have spent having my bot memorize the color of clothes, cars, pets’ names, and the names of sisters, mothers and brothers.  It would be nice if someone ever asked things like what is the letter before D, and what was my granny’s name.

I would like to get more bots in the contest.  I wonder if we could just ask people, like all the Pandorabot makers and the Verbot makers, why they quit entering, or why they don’t enter?  Is it the type of questions? Is it something else?


  [ # 20 ]

I think the main reason any Pandorabot owners didn’t enter the CBC is that many assumed the “no ALICE clone” rule was still in place. This ridiculous rule banned all AIML bots, whether they were blatant clones or had been highly customised with many thousands of extra categories.

My own bot Mitsuku finished top in round one in 2010 and was then promptly banned for giving some of the same answers as ALICE, even though ALICE scored much lower than Mitsuku. I must admit, if it hadn’t been for the prize money, I wouldn’t have entered again after that fiasco.


  [ # 21 ]

Then it might be worth spending some time informing people that there is no longer a clone rule, and that ALL bots are welcome.  I would like to see Verbots and Infs bots enter again.

I think it would be great to encourage learning bots in their own category, just to boost interest in them and their development.  Prize money could be split for best of each type.  Same for function bots.


  [ # 22 ]

Human: A man and a woman were stuck down a well but which one got out first?

Glad that wasn’t a question; my bot went to the old nursery rhyme Ding Dong Bell… it would have come out as a double entendre I would die of embarrassment over.


  [ # 23 ]

I had to look up that rhyme—yup, it would have made for an interesting answer!! LOL


  [ # 24 ]

Wow! Lots of stuff to respond to. smile

First off, I agree with Patti. Judges should not have bots in the competition, nor should anyone involved in the management of the competition. Conflict of interest (or even the appearance of it) is one of the surest ways to destroy credibility, so I want to avoid that as much as humanly possible.

As to rules and restrictions for the contest, I would actually like to see fewer, rather than more. The “no clone” rule was a mistake from the start, since a great many folks took that to mean “no AIML bots”, and thus either didn’t enter, or in some cases, pulled out of the competition. I’d actually like to allow botmasters to enter more than one bot (though NOT an unlimited number, obviously), but how many is too many? Short of picking an arbitrary limit, how would one decide? I think this is something that needs to be discussed.

Learning questions would be great, as these types of questions really separate the bots with substance from those that can only recite a thousand places of pi. But again, what percentage of the questions should be learning questions? Should it be 100%? 50%? 20%? Ideas and opinions, please. smile

I like the notion of having multiple categories for the chatbots, but the number of categories should be relatively small, somewhere around 3. Patti’s category suggestions sound good to me, but what does everyone else think? And should a bot be allowed to enter under more than one category?

I think that one of the reasons for a falloff in participation in the CBC in recent years may also stem from insufficient promotion. What I would like to do is compile a list of forums that deal with chatbots that this community may not necessarily “haunt”, such as any Verbot forums, etc., and have one or more of us post notices about the new competition, including an invitation to stop in here to discuss what they would like to see in a new chatbot contest.

I also think it would be good to contact various tech media sites and news sites, including and especially the larger ones, like the BBC’s tech department, CNN, MSNBC et al, and inform them of not only the “death” of the CBC, but the formation of a new contest to take its place. Then in a few months, follow up with a URL for the site, some backstory on the organization staff, etc. (Ok, maybe a little self-serving on my part, but can you blame me? raspberry). This sort of promotion through blogs and news media outlets would reach a wide range of people on a global scale, and could serve to bring in dozens of potential entrants and thousands of visitors.

I have other notions and ideas floating around, but I don’t want to overload the discussion at this point. So let’s discuss what we have so far, and we’ll move on to the rest in due course. smile


  [ # 25 ]

I’d actually like to allow botmasters to enter more than one bot (though NOT an unlimited number, obviously), but how many is too many?

That would be great! Maybe two?

Learning questions would be great

The contest usually asked one in the ten questions (Where do I live? What is my name?). I think that’s enough; it might get redundant if it was over 10%.

categories for the chatbots,

I think it would work best if bots had to enter just one category.


  [ # 26 ]

Thanks for the input, Patti. smile

I think 2 bots per botmaster would work. I was thinking more along the lines of 3, but that’s a small issue.

I think 10 to 20% sounds like a reasonable figure to me. I mentioned a range here because I’d like to see the competition structured somewhat like a tournament, where bots are randomly matched up in pairs, both publicly and with the judges, in a single (possibly double) elimination matchup. In the early rounds (2 or 3, depending on the number of bots entered), only 5 questions would be asked, and if each set of questions includes a “learning” or “memory” based question, then that’s 20%. smile
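The random pairing for an elimination round, as described above, could be sketched like this (a hypothetical Python illustration; `pair_round` and the sample bot names are invented for the example):

```python
import random

def pair_round(bots):
    """Randomly pair up the remaining bots for one elimination round.
    If the field is odd, one bot receives a bye into the next round."""
    shuffled = list(bots)
    random.shuffle(shuffled)
    pairs = [(shuffled[i], shuffled[i + 1])
             for i in range(0, len(shuffled) - 1, 2)]
    bye = shuffled[-1] if len(shuffled) % 2 else None
    return pairs, bye

# Example: 7 entrants give 3 matches and 1 bye
pairs, bye = pair_round(
    ["Morti", "Morgaine", "Mitsuku", "ALICE", "Denise", "Skynet", "Uberbot"])
```

A double-elimination bracket would need a second "losers" pool on top of this, but the random matchup step itself is the same.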

If botmasters are allowed to enter multiple bots, then extra care will have to be used to ensure that bots entered by a botmaster don’t get asked the same questions in the same round. As an example, say I entered both Morti and Morgaine in the competition. The 5 questions put to Morti in round 1 can’t be given to Morgaine, so that I couldn’t use that as a means of “cheating” to give Morgaine an unfair advantage.

I’m not entirely sure I agree with limiting one bot to one category, to be honest. Let’s say that Denise, a virtual assistant, enters the competition; why should she be restricted to just the “Virtual Assistant” category, if she also serves as a conversationalist bot, too?

As to the different categories, Art Gladstone from AI Dreams has suggested to me that one of these categories be for NLP bots, and I agree. I would love to see how Victor and CR would fare in a heads up competition of their bots. smile

I’m getting a lot of great feedback, not only here, but from other sources, as well. I’m really excited to see this new contest take shape, and I’m looking forward to seeing where this leads.

Please keep the ideas coming, folks! This is great! cheese


  [ # 27 ]

bots entered by a botmaster don’t get asked the same questions in the same round.

That would be a big problem.  The only issue with asking different questions is someone is going to complain that a winning bot got the ‘easy’ questions.


  [ # 28 ]

That’s certainly a concern, but no living individual will pick which bot gets which question. I plan on automating as much of the bot testing (not judging) as possible. In fact, I hope to be able to automate as much of the contest as possible, especially in the areas of bot registration and entry, visitor interaction, and others. The idea here is to reduce as much as possible the chance that someone can unfairly influence the contest.

Questions submitted to the contest, whether by botmasters, organizers, the general public, or wherever they originate, will be categorized and graded by “difficulty level” at the time they’re selected for use in the competition, and stored in the database with the appropriate tags. When the system randomly selects questions to give to a pair of bots, the bot’s information will be updated to show the ID of the question asked, so that the same question won’t be asked of those bots again, and so that bots by the same botmaster won’t get that same question, either.

In a given round of 5 questions, the system would pick 1 learning question of a difficulty level based on which round it is, 1 “easy” question, 1 “medium” question, 1 “hard” question, and 1 category-specific question. Rounds with 10 questions will have more category-specific questions, and a higher percentage of questions with higher difficulties.
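The selection scheme described above might be sketched roughly as follows (hypothetical Python, not any actual contest code; `pick_questions`, the slot names, and the data shapes are all assumptions for illustration):

```python
import random

# Each slot in a 5-question round draws from its own difficulty-tagged pool.
SLOTS = ("learning", "easy", "medium", "hard", "category")

def pick_questions(pool, botmaster_history):
    """Pick one question per slot, skipping any question ID already
    asked to one of this botmaster's bots in an earlier pairing.

    `pool` maps a slot name to a list of (question_id, text) tuples;
    `botmaster_history` is the set of question IDs already used."""
    chosen = []
    for slot in SLOTS:
        candidates = [q for q in pool[slot] if q[0] not in botmaster_history]
        question = random.choice(candidates)
        chosen.append(question)
        botmaster_history.add(question[0])  # sibling bots never see it again
    return chosen
```

Tracking the used IDs in one set per botmaster is what keeps a second bot by the same owner from seeing a question its sibling already answered.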

I hope to have a very large pool of questions to choose from, of all difficulty levels, and in several categories. In some ways, I think that this process will ultimately be more fair and balanced than subjecting all of the bots to the same 10 questions, and it will certainly add some variety to the competition, don’t you think?

Now please bear in mind that none of what I’m describing is set in stone, whatever phrasing I may use. These are just thoughts and ideas at this point, so feel free to pick it all apart if you disagree. smile


  [ # 29 ]

After a small Googling session, and a little bit of surfing around, I found some tournament software that intrigues me a bit. Take a look and let me know if something of this nature might be something that the new contest could use. It’s very close to what I’ve been thinking of using, but as I’ve said, I want to hear the opinions of others, as well.


  [ # 30 ]

I feel as though any contest that tests a bot’s conversational abilities would be better than a question/answer inquiry.  From my brief involvement “behind the scenes” in the salvation of the 2010 CBC, I have a couple of observations.  Rather than clutter up this thread…

