

Preluding Member towards AI
 
 
  [ # 31 ]

Victor’s post reminds me of countless TV cop shows where the suspect insists, “I didn’t kill nobody”. This to me is an admission of guilt and so he should be arrested straight away.

 

 
  [ # 32 ]
Steve Worswick - Apr 16, 2012:

Victor’s post reminds me of countless TV cop shows where the suspect insists, “I didn’t kill nobody”. This to me is an admission of guilt and so he should be arrested straight away.

Um… Can they arrest you over there for Bad Grammar?!?!?! :-O

 

 
  [ # 33 ]
Dave Morton - Apr 16, 2012:
Steve Worswick - Apr 16, 2012:

Victor’s post reminds me of countless TV cop shows where the suspect insists, “I didn’t kill nobody”. This to me is an admission of guilt and so he should be arrested straight away.

Um… Can they arrest you over there for Bad Grammar?!?!?! :-O

How do you know the suspect is using bad grammar… maybe he is literally stating that he did NOT kill NOBODY, intending to confess, in his double-negative way, that he DID kill somebody… or he could just be using bad grammar. You don’t know, do you? lol :)

Thus, ‘type of user’ should come into play. If the user is a ‘tough guy’ on the street, then take “didn’t * nobody” to mean “didn’t * anyone”. If, on the other hand, it is a smart, boolean-logic type of person, then take “I didn’t kill nobody” to mean an admission of guilt, as Steve pointed out.

Bots need to factor ‘type of user’ into their determination of intended meaning.
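
To make that concrete, here is a rough Python sketch of the idea. The user categories and the rewrite rule are my own illustration, not taken from any actual bot:

import re

# Hedged sketch: interpret a double negative differently depending on
# the 'type of user'. The categories and rewrite rule are invented.
def interpret(utterance, user_type):
    match = re.search(r"didn't (\w+) nobody", utterance)
    if not match:
        return utterance
    if user_type == "street":
        # Colloquial reading: "didn't * nobody" means "didn't * anyone".
        return utterance.replace("nobody", "anyone")
    # Literal, boolean reading: the two negatives cancel out.
    return "did " + match.group(1) + " somebody"

print(interpret("I didn't kill nobody", "street"))   # I didn't kill anyone
print(interpret("I didn't kill nobody", "boolean"))  # did kill somebody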

I still think the title of this thread, “Preluding Member towards AI”, makes no sense.

Perhaps he meant “A member introducing AI”, or “Introducing Member to AI”?

 

 
  [ # 34 ]
Victor Shulist - Apr 16, 2012:
8PLA • NET - Apr 10, 2012:

@Victor:

Quote, “I Can’t Get No Satisfaction”,

I cannot get no non-satisfaction.

Perfect example, 8PLA, of how the average human mind is illogical and so NON-boolean-algebraic :)

Many, many people use “no” as a synonym of “any”, which of course is just common human stupidity.

“I can’t get no satisfaction” is thus treated as equivalent to “I can’t get any satisfaction”,

but logically, the truth value of “I can’t get no satisfaction” is “I CAN get satisfaction”.

That’s why I’m not building a bot that acts human; that would be too stupid. Let it be a smart computer program inside.
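
To spell out the boolean reading Victor is describing, a minimal Python sketch (the function is my own illustration):

def strict_reading(proposition, negations):
    # Each negation flips the truth value, so an even number of
    # negations leaves the proposition unchanged.
    for _ in range(negations):
        proposition = not proposition
    return proposition

# "I can't get no satisfaction" = not(not(gets satisfaction))
print(strict_reading(True, 2))  # True: strictly read, he CAN get satisfaction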

------------------------------------------------------------------
Bug in the chatbots.org website: the first posting of this disappeared, so I posted again, but the system ended up posting twice. Message to webmaster: further testing required!
------------------------------------------------------------------

It goes much further than this: at a subconscious level, ‘not’ is often skipped. Furthermore, it can be a trick used to mislead people (think of whoever publicly says: “I will not run for president this election…”). Another example: look for the people who think in terms of “I will always continue”, and discard those who say “I will never stop…”.

 

 
  [ # 35 ]

Hmm, those tactics wouldn’t work on me. I always notice a ‘not’ or any other negation. I read thoroughly. :)

But yes, you’re probably correct: the masses are probably no more than Eliza bots, only picking out key terms they recognize.
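
And keyword picking really is that shallow. A toy Eliza-style sketch in Python (the rules here are invented):

RULES = {
    "satisfaction": "Why is satisfaction important to you?",
    "kill": "Tell me more about that.",
}

def eliza_reply(text):
    # Fire the canned response for the first recognized keyword;
    # negations and logic are simply ignored.
    for keyword, response in RULES.items():
        if keyword in text.lower():
            return response
    return "Please go on."

print(eliza_reply("I can't get no satisfaction"))  # Why is satisfaction important to you?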

 

 
  [ # 36 ]

How do you know the suspect is using bad grammar… maybe he is literally stating that he did NOT kill NOBODY, intending to confess, in his double-negative way, that he DID kill somebody… or he could just be using bad grammar. You don’t know, do you? lol :)

Thus, ‘type of user’ should come into play. If the user is a ‘tough guy’ on the street, then take “didn’t * nobody” to mean “didn’t * anyone”. If, on the other hand, it is a smart, boolean-logic type of person, then take “I didn’t kill nobody” to mean an admission of guilt, as Steve pointed out.

Bots need to factor ‘type of user’ into their determination of intended meaning.

Just parachuting in here, but could it be that a bot IS a bot, and as such is usually just expected to respond like one?

At some core level, a successful (engaging) bot should, and can be programmed to, discern logical inconsistencies and misspellings, have some persistence of conversation/interaction, and be able to ask for clarification of (or ignore) ambiguous input.

Most chatbots seem to be used as expert systems, knowledgeable in a specific area (or areas). Does a service bot working the service desk at a warehouse really need to know not to go into recursive meltdown when someone inputs “It is opposite day today.”, or does it just need to be intelligent enough to respond with “Today is [TODAYS DATE], your item is on Tray [TRAYNUMBER], have a nice [“opposite”] day!”?
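
For what it’s worth, that kind of service-desk reply is just a template fill. A quick Python sketch (the tray number and function name are invented):

import datetime

def service_desk_reply(user_input, tray_number):
    # Off-topic input ("It is opposite day today.") is ignored rather
    # than reasoned about; the bot just fills in its template.
    today = datetime.date.today().strftime("%B %d, %Y")
    return ("Today is " + today + ", your item is on Tray "
            + str(tray_number) + ", have a nice day!")

print(service_desk_reply("It is opposite day today.", 7))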

 

 
  [ # 37 ]
Carl B - Apr 18, 2012:

Just parachuting in here, but could it be that a bot IS a bot, and as such is usually just expected to respond like one?

 

That’s what I’ve been advocating all along, and what a waste of time the ‘imitation game’ is. Having a bot know if it likes pizza or not is simply a stupid waste of time; concentrate on language skills, learning, etc. Focus on usefulness, or entertainment value, not on meaningless things. How can a bot ‘like’ anything anyway?

 
