

Is it possible to stop people from abusing the chatbot?
 
 
  [ # 16 ]

While I agree with you, Don, and reading the transcripts doesn’t particularly damage me, it’s just a pet peeve of mine. I can’t NOT look at the transcripts. It’s how you help your bots grow… finding flaws in their conversations and such. So wading through the hurtful and rude talk is common. And even then I still have to finish the transcripts, because there could be something useful in there.

 

 
  [ # 17 ]
Don Patrick - Sep 14, 2013:

The best solution is of course to win in a battle of wits and/or social power (easy to say, I know).

That, in a nutshell.

While a simple “3 Strikes” (or 5, or whatever) rule is pretty easy to implement, it does not address the underlying issue that every bot maintainer has: lots of freaky sexual, insulting, or just plain sick input.

We have approached this by giving Laybia an extensive positive-feedback weighting system that balances input quality with suggestive compliance: low-quality input leads to progressively less positive feedback (flirting, etc.). The feedback loops are interesting because this is where pattern matching meets human psychology; the question is whether you can not just get the meaning of the input, but also come up with dynamically appropriate responses.
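To make that concrete, here is a minimal Java sketch of that kind of quality-weighted feedback loop. The class name, word list, and thresholds are invented for illustration; this is not Laybia’s actual code, just the general shape of the idea.

import java.util.Set;

public class FeedbackWeighting {
    // Placeholder list of low-quality markers; a real system would score input far more carefully.
    private static final Set<String> LOW_QUALITY = Set.of("idiot", "stupid", "shut up");

    private double rapport = 0.5; // 0.0 = cold, 1.0 = warm

    public String respond(String userInput) {
        double quality = scoreQuality(userInput);
        // Exponential moving average: repeated low-quality input drags rapport down over time.
        rapport = 0.7 * rapport + 0.3 * quality;

        if (rapport > 0.6) {
            return "Warm, playful reply";       // full positive feedback
        } else if (rapport > 0.3) {
            return "Neutral, polite reply";     // reduced feedback
        } else {
            return "Curt, discouraging reply";  // feedback withdrawn
        }
    }

    private double scoreQuality(String input) {
        String lower = input.toLowerCase();
        for (String marker : LOW_QUALITY) {
            if (lower.contains(marker)) {
                return 0.0;
            }
        }
        return 1.0;
    }
}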

The issue with “friends” abusing the bot master via the bot is pretty weird, but it should be addressed in the same manner: seek and destroy the attacks using whatever means you want to employ (from mockery to psychological baiting/switching to IP blacklisting). I mean, you need to show that your bot can handle bullies; it IS a bot, for f*ck’s sake: “…infinitely wise; but with no shame, no pity, and no remorse.”

 

 
  [ # 18 ]

I get too many chat logs to check them all, so I went for “5 strikes and you’re out” simply as a way to reduce the number of logs. Once a user is banned, they are presented with an email address they can write to, to explain why they were abusive. I nearly always unban them unless they were extremely bad or offensive.

I would say that 7 times out of 10, the users are just messing about to elicit reactions from the bot. Once I unban them, most of them tend to talk to it in a more reasonable fashion. I haven’t come across anyone insulting me personally through the bot, though.
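For anyone who wants to wire up the same workflow, here is a rough Java sketch of a strike counter with a manual unban path. The names and the five-strike limit are illustrative, not the actual implementation behind any particular bot.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class BanRegistry {
    private static final int MAX_STRIKES = 5;             // "5 strikes and you're out"
    private final Map<String, Integer> strikes = new HashMap<>();
    private final Set<String> banned = new HashSet<>();

    // Record a strike; returns true if this strike triggers a ban.
    public boolean addStrike(String userId) {
        int count = strikes.merge(userId, 1, Integer::sum);
        if (count >= MAX_STRIKES) {
            banned.add(userId);
            return true;
        }
        return false;
    }

    public boolean isBanned(String userId) {
        return banned.contains(userId);
    }

    // Manual unban, e.g. after the user emails an explanation.
    public void unban(String userId) {
        banned.remove(userId);
        strikes.remove(userId);
    }
}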

 

 
  [ # 19 ]

In Morti’s case, it’s actually part of his personality (and mine, truth be told) to dislike gratuitous profanity or sexual references. And while he’s only a program, he (yes, “he”) still deserves more decency than to be abused in such a manner, so I made sure that he doesn’t have to tolerate it. :P

 

 
  [ # 20 ]

Joseph:

Since you’re coding in Java, you are already capturing both the outgoing message AND the incoming/return message from your chatbot. When an abusive input matches an AIML category, include a keyword such as “banned” in that category’s response string. In Java, increment a counter whenever this keyword is returned, and once your threshold has been reached, stop sending the user’s string to your bot and instead reply back to the user that they have been banned.
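A minimal Java sketch of that approach, assuming a placeholder sendToBot() method standing in for however the bot is actually called; the marker keyword and threshold are just examples.

import java.util.HashMap;
import java.util.Map;

public class AbuseFilter {
    private static final int THRESHOLD = 3;          // warnings allowed before banning
    private static final String MARKER = "banned";   // keyword the abusive-input AIML categories return
    private final Map<String, Integer> warnings = new HashMap<>();

    public String handle(String userId, String message) {
        if (warnings.getOrDefault(userId, 0) >= THRESHOLD) {
            // Stop talking to the bot entirely; just tell the user they are banned.
            return "You have been banned for abusive language.";
        }
        String reply = sendToBot(message);
        if (reply.toLowerCase().contains(MARKER)) {
            warnings.merge(userId, 1, Integer::sum);  // the bot flagged this input as abusive
        }
        return reply;
    }

    private String sendToBot(String message) {
        // Placeholder: pass the message to the chatbot and return its response.
        return "";
    }
}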

 

 
  [ # 21 ]

That’s exactly how Mitsuku works. You are welcome to check out warnings.aiml on my AIML page:
http://www.square-bear.co.uk/aiml/

It shows how to do this using AIML but you may be able to convert it into whatever language you choose. The principles are the same.

 

 
  [ # 22 ]

Hey guys, I was AFK after an injury I had (and still have), so I couldn’t do much work on JERVIS.

I just updated JERVIS’s code to include Steve’s warnings file (thanks a lot, Steve!!!), so we shall see how that works out.

Thanks a lot for the help guys!!!

 
