

The dynamics of conversation?
 
 
  [ # 16 ]
Don Patrick - Oct 4, 2013:

The initial small talk phase in conversation seems geared towards finding common ground.

That’s a good observation and it explains a lot, I believe.

Don Patrick - Oct 4, 2013:

I’m not sure if the train of thought is as much that as it may be relevance making up for a lack of knowledge/interest in the exact topic.

I agree there, too. There are a lot of analogies I can think of here:

Music: Changing topic is analogous to changing keys, which you don’t want to do too often, since the listener never gets a sense of home. One way to change keys in music is to use a “pivot chord” common to both keys, which is analogous to a concept that pivots between two other concepts during a train of thought.
Movement: Physical pivoting is something you don’t want to do too much when walking, since it gives a sense of indirection, and when used to excess causes counterproductive wandering.
Chess: Changing topic in conversation is like changing strategy in chess. A strategy takes many moves to implement, so switching strategies too often erases the progress already made toward a goal, and is counterproductive unless special circumstances justify it.
Mutation: The mutation operator in genetic algorithms is needed to reintroduce promising patterns that occasionally get erased, but it should be used sparingly, since excessive mutation wipes out the normal progress that takes a span of many generations to accumulate. (A rough sketch follows below.)
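To make the mutation analogy concrete, here is a minimal Python sketch of a mutation operator over a hypothetical bit-string genome (the rates shown are just illustrative assumptions):

```python
import random

def mutate(genome, rate=0.01):
    """Flip each bit with a small probability.

    A low rate preserves the progress accumulated over generations;
    a high rate scrambles the genome and erases that progress.
    """
    return [1 - bit if random.random() < rate else bit for bit in genome]

genome = [0, 1, 1, 0, 1, 0, 0, 1]
print(mutate(genome, rate=0.01))  # usually unchanged
print(mutate(genome, rate=0.9))   # mostly scrambled
```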

Interesting topic, almost visual in nature. I imagine two entities circling each other, probing each other, looking for a common valence bond of some sort, which each knows usually exists, but occasionally is lacking. If they do connect, then they benefit each other. I also have a hypothesis about humans’ innate sense of extropy, in that they want to feel they’ve made progress, in this case after having completed a conversation. No one wants to feel they’ve wasted their time in a conversation; everybody has some reason to engage in a conversation in the first place (assurance, information, friendship, sex, whatever), and maybe the initial part of the conversation is partly where each party attempts to find out what the other wants from it.

 

 

 
  [ # 17 ]

Ah, I found it: This field of research is called Conversation Analysis.

 

 
  [ # 18 ]

I’m continually amazed by the way great minds seem to feel the need to over-complicate simple issues by offering up some PhD thesis, or by constructing indecipherable mathematical equations, in order to fabricate a response to, “Hi.”

From the replies I see when attempting to converse with chatbots, I’m convinced that most people never attempt to converse with their own bots, or view their chat logs. And, from reading the postings in forums like this one, I’m led to believe that the notion of “Artificial Intelligence” consumes people to the point that they waste a majority of their time supplying their creations with answers and methods that never appear during an exchange between bots and humans. They probably never appear in conversations anywhere, except when testing the limits of knowledge.

Is it truly necessary to diagram sentences and analyze every specific nuance of speech, or to break down the typical comments of an average foul-mouthed youth intent on causing mischief… or each lonely person just looking for a friendly chat? How many rocket scientists are chatting with our bots seeking the answers to the mysteries of the universe? How many nuclear scientists are consulting chatbots with requests for assistance in designing a new reactor? Is it necessary for a bot to sort out a question asked in a way no Earthling would ever ask it… in seven parts?

I live in a city where it’s easy to find a park bench where I can chat with strangers from around the world. I’ve never studied linguistics, and I’ve never written (or read) any papers on the topic, but I know how to hold a conversation without making a lifetime study of it, and I know how to make an inquiry without making it seem offensive or too personal. I pretty much know what to expect from a normal, polite conversation without having to circumvent the obstacles of children and contest judges looking to score the coup de grâce. As a botmaster, still down here on the ground, my needs are more modest.

By the way, I’ve never read it anywhere, but it seems to me that upon meeting someone, stranger or friend, after saying “Hello,” the next communication should be to inquire about each other’s state of being. Hope that helps any deep thinkers who might be puzzled by the question.

 

 
  [ # 19 ]

I get what you’re saying. :P Mathematical formulas for psychology sound far-fetched to me too. Why complex when simple is enough? Well, how much research is useful depends entirely on the purpose. If low-brow online chat with teenagers were the purpose, I would agree that such research is unnecessary, but my particular project is actually intended for intelligent discussions with those rocket scientists you mention. ;)
Another reason in my particular case is that I am blessed with a form of autism, which means I did not grow up with a natural understanding of social rules and actually don’t know how to hold a conversation. “I just know” does not compute to me or a program, just as “Choose what sounds right” is not useful advice for a spelling algorithm (actual advice from an academic book on the English language).

All that aside, I think a little research into this area can’t hurt even for casual chatbots, because from the little I’ve read, it is quite clear that chatbots are constantly breaking fundamental social rules, and the resulting downward spiral of conversation quality should be apparent even from chatlogs. If you knew why that happens, you could prevent it.

 

 
  [ # 20 ]

@ Don Patrick

I wasn’t directing my comments at any individual, but more at the notion of over-thinking… or over-analyzing the process. But I’m glad you chose to respond.

Certainly, the audience a bot serves is the all-telling factor. I’d venture to guess that the audience you intend for your bot is unique, and probably a very small one, and that your bot is not easily located on the Internet. I think that most of the bots I’m familiar with tend to respond to their audience rather than to create it. If low-brow is what you get, answers suited for rocket scientists won’t do. However, while the answers that would satisfy a rocket scientist might involve more research, the mechanics for displaying them are the same… as long as we’re staying within any particular language. Even rocket scientists know how to answer “How are you?” without having to look up the formula for responding.

I’m pleased that you referred to your form of autism as a “blessing”. I admire and respect those whose condition allows them to utilize their minds in ways that leave the rest of us gasping. I suffer from a form of stupidity that leaves me in the dark when it comes to math… yet I love science. I sometimes wish a tree would fall on me so that I could wake up with the ability to play the piano, or to solve the problem of gravity, but it just ain’t happenin’.

Social rules and conversational etiquette can best be learned through practice. You can read all the rule books you want, and still come off as stiff and robotic. You’re correct in that, when it’s wrong, much of it falls on my ears as “not sounding right.” In that sense, it might be easier for me to chat without having a rule book in hand. But when teachers told me to look up a spelling word in the dictionary, that didn’t help, because the rules of spelling in English are not always constant or reliable. Who thought it was a good idea to spell “enough” ending with an “f” sound?

Nothing wrong with a little research, and nothing bad about discovering why conversations sometimes never get off the ground. But I suspect that “the resulting downward spiral of conversation” has more to do with content than form. When the rules of grammar or social propriety are violated, I’m able to deal with that. My main complaints (or areas of interest) have more to do with some of the basic problems I’ve been asking about for years that are never addressed, and for which no solutions have yet been found.

 

 
  [ # 21 ]

The point about over-analysing is certainly a valid one; I just wanted you to think outside your usual experiences. You guess correctly about my project. There are also various projects out there concerned with making life-like digital companions (e.g. Mohan Embar’s “Empathy Now”), where more sophisticated conversational mechanics are necessary.
I did notice that the unnecessarily high demands laid upon chatbots in contests were on your mind, and we don’t differ much in opinion in that area. Perhaps Terminator is to blame for the public view that AI should be mentally equal to an experienced adult human, in contrast to Japan, where Astro Boy is the role model.

Thunder Walk - Oct 15, 2013:

Social rules and conversational etiquette can best be learned through practice.  You can read all the rule books you want, and still come off as being stiff and robotic.

This is a prime example of how people are different: what works for an intuitive person will not work for a logic-driven person, and the other way around. To me it will always remain illogical to reply “How do you do?” to “How do you do?”. But as you realise, these examples are the simplest parts of conversation; they might as well be pre-programmed (and so I do).
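These simplest parts really can be a lookup table. A minimal Python sketch, with invented entries:

```python
# Hypothetical lookup table for the simplest, fixed parts of conversation.
CANNED_REPLIES = {
    "hi": "Hello! How are you?",
    "hello": "Hi there! How are you?",
    "how do you do?": "Very well, thank you. How do you do?",
    "how are you?": "I'm fine, thanks. And you?",
}

def canned_reply(user_input):
    # Normalise the input so "How do you do?" matches the table key.
    return CANNED_REPLIES.get(user_input.strip().lower())

print(canned_reply("How do you do?"))  # Very well, thank you. How do you do?
```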

The downward spiral is largely due to content, so that is what I am mostly concerned with. Form has an effect to a lesser extent, but the clearest example of that is Cleverbot. Besides the fact that Cleverbot doesn’t understand anything, it also tends to answer in short phrases that socially say as much as “I can’t be bothered to put effort into this conversation”. This is then mirrored by the user and cascades into a tossing contest.

 

 
  [ # 22 ]

At the age of 64, I’d venture to say that my “usual” experiences, when combined, are highly unusual. I’ve been forced to think outside the box more than you’ve been in it. I don’t have a formal education, but that doesn’t mean I spend my time watching old re-runs of “I Love Lucy.” :) I understand how changing perspective can be helpful when problem-solving.

I’m an admirer of logic-driven thinking, but perhaps you’ve exposed its main flaw. I’m not locked into either way of thinking; I can choose between logic and intuition as needed, or compare both to see which is more successful. The logical side of my brain tells me that the best response to a question is, first, to answer it. The correct reply to “How do you do?” is to indicate your condition, followed by an equally friendly inquiry: “I’m well, thank you. How are you?”

One frequent situation my bots encounter is when a visitor ignores the question when asked for their name, or some way of addressing them.  I have my bots “set the tone” for the conversation by asking again if the user tries changing the subject.  That’s also my motivation for requiring correct spelling (when I can) and proper grammar, and for refusing to accept most text-speak.
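For illustration only, a rough Python sketch of that “set the tone” behaviour; the name extraction here is deliberately naive and every detail is invented:

```python
# Hypothetical sketch: keep re-asking for a name whenever the visitor
# tries to change the subject instead of answering.
def extract_name(text):
    lowered = text.strip().lower()
    if lowered.startswith("my name is "):
        return text.strip()[len("my name is "):]
    if len(text.split()) == 1 and text.strip().isalpha():
        return text.strip()
    return None

print("Bot: What's your name?")
for user_line in ["what do you think of football?", "why do you ask?", "My name is Eric"]:
    print("User:", user_line)
    name = extract_name(user_line)
    if name is None:
        print("Bot: Before we go on, what should I call you?")
    else:
        print("Bot: Nice to meet you, " + name + "!")
        break
```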

But, I believe the topic had to do with “the dynamics of conversation,” with an emphasis on the word “conversation.” Anyone interested mainly in the specifics of a correct answer could simply copy and paste it from a textbook edited by experts. But conversational language, obviously, is a different kettle of fish. And when speaking of “dynamics,” I take it to mean that you’re interested in the “interactions” that take place, and what “drives” the conversation.

If your bot is a text reference, a Mister Spock-like persona will do. But keep in mind that it probably wouldn’t be a good idea to invite Mister Spock to go dancing. To produce a “conversational” response, I sometimes use the visitor’s name in the reply, and I sprinkle in a liberal amount of contractions: “can’t” rather than “cannot”. The chat also has to be friendly, unless provoked.

(Laurence) Kim Peek comes to mind.  I’m sure you know who he was.  I once read how he enjoyed going to Shakespeare plays, but his father had to stop the practice because Kim kept shouting out corrections when the actors didn’t say their lines exactly as they’d been written.  Kim failed to realize that the purpose of the play was entertainment, not a precise recitation of what was written on paper.

 

 
  [ # 23 ]

I can agree with everything you said there. I am glad to hear I underestimated you :). Your age was not apparent from your profile, and on four occasions throughout this forum I had noticed you describing the scenario of average chatbot conversations with average teens.
Indeed, I am interested in the interactions and drive of conversations. I am building a program that generates its own answers, and the result so far has very much been Mr Spock. This serves my purpose, except for the problem that it fails to keep the ball of conversation/discussion rolling. Since computers do not have intuition (and because I wish to skip a lengthy “training” phase), I figured I would add conversational rules. And so I find it useful to read books like “Why men and women should remember each other’s names, and not talk about themselves”, a book written for common people that points out the reasons and effects of conversational interactions. It points out why and when it is appropriate to use someone’s name, as you also advise, and when not; when to offer an opinion, when to suggest, when to agree, when to assert, when to elaborate.

My intuition is too vague about these things to translate to programming. But if I know in detail what drives the conversation, I can tell the program which variables to keep an eye on to judge what kind of response is most appropriate. For instance, “My car broke down.” calls for problem-solving or sympathy, and not for stating one’s opinion on cars. Too complex? Perhaps, but I think it has been assessed that current chatbots leave people wanting in the area of intelligent discussion.
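A minimal Python sketch of the kind of rule I have in mind; the cue words and response types are invented for illustration:

```python
# Hypothetical rules mapping conversational cues to appropriate
# response types: "My car broke down." calls for sympathy or help,
# not an opinion on cars.
RULES = [
    (("broke down", "broken", "failed", "lost"), ["sympathy", "problem_solving"]),
    (("what do you think", "should i"), ["opinion", "suggestion"]),
    (("i think", "in my opinion"), ["agree_or_disagree", "elaborate"]),
]

def response_types(user_input):
    lowered = user_input.lower()
    for cues, types in RULES:
        if any(cue in lowered for cue in cues):
            return types
    return ["elaborate"]  # default: keep the ball of conversation rolling

print(response_types("My car broke down."))          # ['sympathy', 'problem_solving']
print(response_types("What do you think of cars?"))  # ['opinion', 'suggestion']
```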

 

 
  [ # 24 ]

Having to spar with teens is the main preoccupation of my five chatbots.  Although they do get chats from adults, those are fewer, and they too are often directed at just causing mischief.  Most seem to view chatbots as a game or a toy they might abuse.  My son displays a $17,000 robot at the Science Center where he works.  It performs all sorts of tasks, but he says that when most kids encounter it, the first thing they do is to punch it.

This topic is getting very interesting. With the form of bot I work with, you’re constantly looking for ways to answer generically, or in ways that would answer variations of the same question, to save space and time at the keyboard. The method often falls flat, and as time passes, I find I’m writing replies in a more specific way, and my files are growing ever larger.

Don Patrick:

For instance, “My car broke down.” calls for problem-solving or sympathy, and not for stating one’s opinion on cars. Too complex? Perhaps, but I think it has been assessed that current chatbots leave people wanting in the area of intelligent discussion.

Since your interest runs deep, you might be interested in this. Over and over again, when I’ve seen “experts” on TV talking about things such as conversation or psychology, they’ve said that when a person has been in an accident and tells a male friend about it, the response is, “What did you do about it?” But when they tell a female friend, the reply is, “That’s terrible, how did it make you feel?”

Discovering a visitor’s gender and then remembering it might be helpful.

I like your approach; I think you’re on the right track.

 

 
  [ # 25 ]

Thanks, I’ll make a note of that. Eventually, switching to female conversation might be as simple as changing a parameter to give “social support”-type answers higher priority. Well, :) maybe not quite that simple.
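As a toy Python sketch of that parameter idea (the two answer types echo the TV “experts” example above; the weighting scheme is an assumption):

```python
# Hypothetical priority parameter: a single weight shifts the bot
# toward "social support" answers over "problem solving" ones.
CANDIDATES = {
    "social_support": "That's terrible, how did it make you feel?",
    "problem_solving": "What did you do about it?",
}

def pick_reply(support_weight=0.5):
    weights = {
        "social_support": support_weight,
        "problem_solving": 1.0 - support_weight,
    }
    kind = max(weights, key=weights.get)  # highest-priority type wins
    return CANDIDATES[kind]

print(pick_reply(support_weight=0.8))  # favours sympathy
print(pick_reply(support_weight=0.2))  # favours practical advice
```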

On the topic of tough chatbot customers, my theory is that respect is a matter of consequences. Abusing a chatbot rarely has consequences other than a single line of text on the screen. So I’ve added things like an insult-o-meter to mine, along with the ability to shut down the entire computer if it feels like it. :P I believe AIML bots can set emotion flags that influence the rest of the conversation, also showing consequences. In the case of your son’s robot, I think I’d wire a siren to its shins, just annoying enough to embarrass the parents.
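For illustration, a bare-bones insult-o-meter along those lines; the word list, threshold and reactions are all made up, and the actual shutdown is left out:

```python
# Hypothetical escalating reaction to repeated abuse.
INSULT_WORDS = {"stupid", "idiot", "dumb", "ugly"}

class InsultOMeter:
    def __init__(self, limit=3):
        self.count = 0
        self.limit = limit

    def react(self, user_input):
        if INSULT_WORDS & set(user_input.lower().split()):
            self.count += 1
        if self.count == 0:
            return None                      # nothing to react to
        if self.count < self.limit:
            return "That wasn't very nice."  # mild consequence
        return "I've had enough. Goodbye."   # e.g. end the session here

meter = InsultOMeter()
for line in ["hello", "you are stupid", "dumb bot", "you idiot"]:
    print(line, "->", meter.react(line))
```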

 

 
  [ # 26 ]

I’m told the abuse of the robot takes place right under the parents’ noses, but they’re somehow unaware… usually talking with someone else.

Dealing with bot abusers seems to take up most of my time and energy. There are occasions when an idea comes to mind and I’d like to sit down and code a batch of related responses on a particular subject or situation, but by the time I’m finished countering all of the latest creative ways people try to make the bots look foolish, or to get them to agree to something embarrassing… or illegal, I’m sick of chatbots and go looking for the TV remote.

I don’t mind the occasional jerk taking pot-shots, but the repeat offenders who create the longest chatlogs are the ones I’d like to send a nuclear download. I can easily obtain their IP address; it seems there should be a way to block that IP from contacting my bots in the future. I just haven’t found it yet.
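The check itself would be trivial once the hosting service exposes the visitor’s address; the hard part is hooking it in before the bot replies. A hypothetical Python sketch, with example (documentation-range) addresses:

```python
# Hypothetical blocklist, checked before a message reaches the bot.
BLOCKED_IPS = {"203.0.113.7", "198.51.100.42"}  # invented example addresses

def allow_visitor(ip_address):
    return ip_address not in BLOCKED_IPS

if allow_visitor("203.0.113.7"):
    print("Pass the message to the bot.")
else:
    print("Refused: this address has been blocked.")
```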

On conversational language… One of the biggest issues I have with AIML is that it doesn’t recognize punctuation as anything other than a sentence delimiter. I’ve asked why the question mark isn’t recognized, and it’s been explained, and I understand. But the fact remains that some people (using conversational language) ask questions that would appear as statements except for the question mark. When my bots are unsure, they’ll sometimes ask, “Are you asking me or telling me?” People usually reply by saying something like, “The question mark was a clue.” How do you tell people that your bot is smart, but can’t see a question mark?
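One conceivable workaround, sketched in Python rather than AIML: tag the input as a question before the engine strips the punctuation, so a pattern can match on the tag instead. The QUESTION prefix is an invented convention:

```python
# Hypothetical preprocessing step run before the bot engine sees the
# input: preserve the question mark as a word the engine can match.
def tag_question(user_input):
    text = user_input.strip()
    if text.endswith("?"):
        return "QUESTION " + text.rstrip("?!. ")
    return text

print(tag_question("You are not?"))  # QUESTION You are not
print(tag_question("You are not."))  # You are not.
```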

 

 
  [ # 27 ]

Steve mentioned something about banning in the other topic.
At first I chalked trickery up to the human superiority complex, but then I noticed that some of my more intelligent friends also enjoyed fooling my program by telling it contradictory facts. So it must be a natural mechanism to test the limits of a robot’s awareness, and I stopped minding so much. Generally though, I think the reply “No, you!” has infinite applications :)

Question marks bother me when people don’t type them, and other signs haven’t been foolproof in cases like “Which is fine by me.” or “Surely you are joking?”. However, I do use the question mark as the primary tell-tale, as I have yet to see a statement with a question mark. In your case it can’t be overcome, but personally I’d go for something less profound, like “Is that a question?”, and possibly “Sorry, I must’ve overlooked it”, or blame it on the computer, or however you wish to acknowledge the miscommunication.
One of the major social rules I see chatbots break is ignoring a question, or worse still, denying that there was one when the user can clearly see it. In these cases I’d prefer a chatbot to acknowledge its failure and invite the user to carry on, rather than draw attention to it with an exchange like “You didn’t ask a question. - Yes I did. - No you didn’t. - It’s right there. - I don’t see it. - Are you blind?” etc.

 

 
  [ # 28 ]

Usually, Mitsuku gets some of the same people behaving badly that my bots do, and by the time I start complaining about it, Steve has already created an AIML file at his site addressing the issue.  http://www.square-bear.co.uk/aiml  A while back I was getting a visitor telling my bots things like, “My name is Eric. I am a girl”.  In short order, Steve created Gender.aiml and was offering it freely.

The question mark issue becomes a problem when my bots say something like, “There are lots of chatbots that are willing to talk about that, I’m just not one of them.” and the human replies, “You are not?”

Reading those words without considering the question mark will likely turn it into a statement. I’ve had to fall back on using <that> tags, or in some cases just language, to understand how to respond without looking foolish. “You like that?” is obviously a question. But someone employing English as a second language will say things like “That is true?” meaning “Is that true?”
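In spirit, what <that> buys you is a match keyed on the bot’s previous line as well as the user’s input. A minimal Python analogue, with invented responses:

```python
# Hypothetical context-aware lookup: (bot's last line, user input) -> reply.
RESPONSES = {
    ("i'm just not one of them", "you are not?"):
        "That's right. I'd rather talk about something else.",
    (None, "you are not?"):
        "Not what?",  # no useful context: ask for clarification
}

def reply(last_bot_line, user_input):
    context = last_bot_line.lower() if last_bot_line else None
    key = (context, user_input.strip().lower())
    return RESPONSES.get(key) or RESPONSES.get((None, key[1]))

print(reply("I'm just not one of them", "You are not?"))
print(reply(None, "You are not?"))
```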

Another “trick” I learned from the master (Steve) that’s helped with conversational language is the practice of stripping out unnecessary verbiage. People get a bit chatty sometimes, and in an effort to sound intelligent, or maybe just to try to fool a bot, they become overly descriptive and pile on the words. You can find a list of such words and phrases here:
http://knytetrypper.proboards.com/post/1838/thread
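A minimal Python sketch of that stripping pass; the filler list here is a tiny invented sample, the linked list being far longer:

```python
import re

# Hypothetical filler phrases removed before pattern matching.
FILLERS = ["by the way", "you know", "i was wondering", "to be honest"]

def strip_fillers(user_input):
    text = user_input.lower()
    for phrase in FILLERS:
        text = re.sub(r"\b" + re.escape(phrase) + r"\b[,\s]*", " ", text)
    return " ".join(text.split())

print(strip_fillers("By the way, I was wondering what you think"))
# -> "what you think"
```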

 

 
  [ # 29 ]

Don said, “I am blessed with a form of autism ...”

Was autism ever a blessing in disguise for you, Don?

 

Thunder Walk - Oct 15, 2013:

I’m continually amazed by the way great minds seem to feel the need to over-complicate simple issues by offering up some PhD thesis, or by constructing indecipherable mathematical equations, in order to fabricate a response to, “Hi.”

@Thunder
Joseph Weizenbaum, a great mind, earned an M.S. in Mathematics and invented chatbots.

 

 
  [ # 30 ]

Thunder: I have a nephew called Ann and a niece called Josh :), then there are transgender people, and what if Eric was short for Erica? I’m not sure I’d rely on the same method, although it is certainly human to assume gender from a name. I will have to tackle the gender issue sooner or later, to choose between saying “he” or “she”, but as it is a bit awkward to ask, I am now considering a “select your gender” button before the conversation. I’m glad I don’t have to worry much about the other issues you mentioned. Clever solutions, though.

8pla: Until recently I thought autism was something other people had, but I can’t say that the trademark social weakness has ever been a pleasure, to put it mildly. The pleasant flip side is that I get to use my intellect to compensate, which gives me above-average powers of prediction. Autism might also be an advantage in programming AI, as my own understanding is fairly ‘artificial’ as well.

 
