Film Theory Applied to Chatbots
 
 

I believe that film theory can be applied to chatbots.

1. Suspension of disbelief comes into play when an interlocutor is deceived by the bot.

2. Alfred Hitchcock’s use of “the MacGuffin” applies to the goals of the chatbot. The human is generally not interested in those goals, though this depends on whether or not he or she thinks they are talking to another human.

3. The montage effect, first identified by the Russian film theorist Sergei Eisenstein, applies to the utterances of a chatbot.

What do you think? Would you like to discuss this?

Regards,

Robby Garner.

 

 
  [ # 1 ]

Yes friend,

I saw your chatbots perform in a theater as independent actors in front of a sold-out, cheering audience off Broadway. Therefore, I do respect your opinion a great deal, but I am left wondering how, when belief is rejected, an act of deception may still be possible. Let’s chat.

NOTE TO SELF: Belief Rejection = Deception.

 

 
  [ # 2 ]

Hey Tom,

Let’s start with some definitions. Forget about deception. What I’m talking about is a viewer’s willingness to be immersed in the entertainment of a film, or in my example, a chatbot.

I hate to quote Wikipedia to you, but for expediency: “Suspension of disbelief or willing suspension of disbelief is a term coined in 1817 by the poet and aesthetic philosopher Samuel Taylor Coleridge, who suggested that if a writer could infuse a “human interest and a semblance of truth” into a fantastic tale, the reader would suspend judgment concerning the implausibility of the narrative. Suspension of disbelief often applies to fictional works of the action, comedy, fantasy, and horror genres. Cognitive estrangement in fiction involves using a person’s ignorance or lack of knowledge to promote suspension of disbelief.

The phrase “suspension of disbelief” came to be used more loosely in the later 20th century, often used to imply that the burden was on the reader, rather than the writer, to achieve it. This might be used to refer to the willingness of the audience to overlook the limitations of a medium, so that these do not interfere with the acceptance of those premises. These fictional premises may also lend to the engagement of the mind and perhaps proposition of thoughts, ideas, art and theories.[1]
Suspension of disbelief is often an essential element for a magic act or a circus sideshow act. For example, an audience is not expected to actually believe that a woman is cut in half or transforms into a gorilla[2] in order to enjoy the performance.”

Montage theory stems from the films of Sergei Eisenstein, the Russian filmmaker who realized that two images shown in series could evoke a feeling in the viewer that was not present in either image alone. A montage is a series of images, much like a conversation is a series of statements. Language evokes images in our minds, much like a motion picture. Except in the chatbot world, the magic only rarely presents itself in happy coincidence.

When we see a conversation between a person and a chatbot that seems to be working, the sequence of utterances gathers meaning and momentum. This is usually a matter of chance, because you can’t trust a chatterbot to always work this way.

The “MacGuffin” is what the chatbot is interested in. It brings up side topics or asks questions out of context. In those rare moments, though, the person ignores or misinterprets them and still believes they are chatting with a person.

The bottom line is that the conversation is in the eyes of the viewer. And film theory offers a way to describe the human behavior as well as the technical prowess (or lack thereof) of the chatbot.

This is my explanation of why a chatbot’s conversation seems to work on some people, sometimes. And, by the way, this has no relationship to a series of 20 questions from an interrogator.

Robby.

 

 
  [ # 3 ]

I think film theory could be a very interesting matrix for the
development of a theory of a narrative AI.
The aim of film and narrative AI seems to be the same:

make people dream your dream

But the way to do it is different, I think.
A narrative AI doesn’t only have to be good at TELLING things,
but has to be good at LISTENING to the interlocutor, too.

That seems to me to be the reason why it’s much harder
to create a “flow” in a conversation than to create it
for a situation where sender and addressee are permanently changing.

A first step to do this could be the creation of a working anaphora resolution,
not only concerning pronouns, but ellipses, too,
because they are so common in every kind of talk.

Here is a (very) first approach to do so:

http://sourceforge.net/projects/maldix/?source=directory
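
To make the idea a little more concrete, here is a minimal, purely illustrative sketch (in Python) of recency-based pronoun resolution. It is not taken from the maldix code linked above; the tiny lexicon is an invented placeholder, and it does not handle ellipses at all yet.

# A naive sketch of recency-based pronoun resolution, for illustration only.
# It keeps candidate antecedents from earlier utterances and binds each
# pronoun to the most recent candidate with matching agreement features.

PRONOUNS = {
    "he": "male", "him": "male", "his": "male",
    "she": "female", "her": "female",
    "it": "neuter", "they": "plural", "them": "plural",
}

# Toy lexicon of known nouns and their agreement features (an invented
# placeholder; a real resolver would use a parser and a proper
# gender/number model).
LEXICON = {
    "robby": "male", "alice": "female", "chatbot": "neuter",
    "movie": "neuter", "viewers": "plural",
}

def resolve(utterances):
    """Return (pronoun, antecedent) pairs using simple recency."""
    candidates = []      # (noun, features), most recent last
    resolutions = []
    for utterance in utterances:
        for token in utterance.lower().strip(".!?").split():
            if token in PRONOUNS:
                wanted = PRONOUNS[token]
                match = next(
                    (noun for noun, feat in reversed(candidates) if feat == wanted),
                    None,
                )
                resolutions.append((token, match))
            elif token in LEXICON:
                candidates.append((token, LEXICON[token]))
    return resolutions

if __name__ == "__main__":
    dialogue = ["Robby built a chatbot.", "He showed it to the viewers."]
    print(resolve(dialogue))   # [('he', 'robby'), ('it', 'chatbot')]

Even something this crude makes the point: resolving “he” or “it” correctly is exactly the kind of LISTENING a narrative AI needs before it can keep a conversational flow going.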

 

 
  [ # 4 ]

I think I have been misunderstood. I am not prescribing a means for accomplishing anything with a chatbot. I leave that to you. I am proposing an explanation of a phenomenon that I have witnessed a few times.

Years ago, before I had any experience with Turing tests, I worked with a fellow named Paco Nathan. He had one of the first online bookstores around 1995, and we experimented with a chatbot. I noticed conversation logs where a person would have a great time chatting, and eventually say “goodbye.” These were happy accidents. They were flukes where the right thing said at the right time would cause the person to open up and chat rather than interrogate. Eisenstein’s montage theory talks about how two images can evoke a feeling that either image alone would not. A successful montage shows images in a series that make the viewer feel emotions. So, too, does language evoke images and feelings in our minds when those happy accidents occur.

I am suggesting only that when these “happy accidents” occur, there is a reason why a person perceives they are chatting with a human and does not realize the bot is a machine. Whether or not one could apply these techniques to converse well with a chatbot on a regular basis, I don’t know.

 

 
  [ # 5 ]

Here are a few links to some ideas I’ve played with in the past: 64-bit Architecture for Story Building. There is a system to everything! I was hoping to amalgamate an I Ching cosmo-conception in there; to give these bots some purpose in algorithm.

I have several collected articles, project ideas, and universal reprise to the topic. Inbox me if you’re able to offer some guidance on creation, because I truly have no idea how to implement this into bot form without an appropriate interface.

Keywords: Dramatica Storymind (http://dramatica.com/resources/assets/a-new-theory-of-story.pdf)
Hermetic Archetypes (http://www.inter-disciplinary.net/critical-issues/wp-content/uploads/2013/07/rogersglpaper.pdf)

Golden Bough (Retelling of Mythos/Logos) - http://www.templeofearth.com/books/goldenbough.pdf
The Virgin’s Promise - Kim Hudson, a Jungian ideal of a masculine/feminine ‘Hero’s Journey’ of sorts.

Facebook.com/taubot for collected works on bots/singularity/project ideas
facebook.com/soapstreet for collected texts I’d like the bot to be able to access in his/her progression.

-Tyler

 

 
  [ # 6 ]

Robby said,

“Forget about deception. What I’m talking about is a viewer’s willingness to be immersed in the entertainment of a film, or in my example, a chatbot.”

 

The viewer’s willingness is brought into existence via conjuration (a form of deception). Shouldn’t we think about deception, rather than forget about it, in your example, a chatbot?

Just for you, this once, I express my willingness to overlook the wretchedness of quoting from wherever that was, so unpleasant and dreadful to repeat… On one condition…

You are willing to make up for it, in your own words and experiences with “suspension of disbelief”...  What does empathy personally mean to some of us, yet not mean to the rest of us, Robby? 

Anyone… Empathy?

 

 
  [ # 7 ]

Tom,

I apologize for quoting Wikipedia, but I couldn’t have said it better.

Suspension of disbelief happens when you are watching a movie and you forget that it isn’t real. When talking to a chatbot, the bot does not deceive you; you let yourself forget it’s a computer program (if you are among the few people this actually happens to).

To say that a chatbot deceives you would imply that it had the intention and the means for deception. I don’t believe this currently exists in any program I’ve come across. Answering 20 questions is not a conversation. What I’m talking about are conversations that have taken place between a person and a chatbot, where the person thought the computer program was a human being.

When you go to a movie and you enjoy it, you may know the whole time that it’s made by a movie studio, but from time to time you may find yourself forgetting about the machinery, and focusing on the story, the dialogue, the characters. Sometimes something similar happens with chatbots.

Thanks,

Robby

 

 
  [ # 8 ]
Tyler Deeds - Oct 11, 2013:

Here are a few links to some ideas I’ve played with in the past: 64-bit Architecture for Story Building. There is a system to everything! I was hoping to amalgamate an I Ching cosmo-conception in there; to give these bots some purpose in algorithm.

I have several collected articles, project ideas, and universal reprise to the topic. Inbox me if you’re able to offer some guidance on creation, because I truly have no idea how to implement this into bot form without an appropriate interface.

Firstly, this technique is simply meant to point out that you may be closer than you first thought. Turn it inside out… The solution may be simply to play it in reverse.

Tyler Deeds - Oct 11, 2013:

To give these bots some purpose in algorithm. I was hoping to amalgamate an I Ching cosmo-conception in there. There is a system to everything! Here are a few links to some ideas I’ve played with in the past: 64-bit Architecture for Story Building.

Inbox me if you’re able to offer some guidance on creation, because I truly have no idea how to implement this into bot form without an appropriate interface. I have several collected articles, project ideas, and universal reprise to the topic.

Start with the algorithm first. Then establish the order of importance for the rest. The least important may be dropped. Point being: even a partial success may offer insight to build your discovery.

 

 
  [ # 9 ]
Andreas Drescher - Oct 11, 2013:

make people dream your dream

But the way to do it is different, I think.
A narrative AI doesn’t only have to be good at TELLING things,
but has to be good at LISTENING to the interlocutor, too.

Some aspects of that work, published in a PhD thesis in the UK (arguably the world leader in A.I.), may be interchangeable with your approach. So I find your approach refreshing.

 

 
  [ # 10 ]

Robby,
Just wanted to welcome you back to Chatbots.org!!!
Warm regards,
Erwin

 

 
  [ # 11 ]
Andreas Drescher - Oct 11, 2013:

But the way to do it is different, I think.
A narrative AI doesn’t only have to be good at TELLING things,
but has to be good at LISTENING to the interlocutor, too.

That seems to me to be the reason why it’s much harder
to create a “flow” in a conversation than to create it
for a situation where sender and addressee are permanently changing.

I agree that this is hard to do with chatbots. The phenomenon that I’m describing is less narrative and more interpretive. People do all the work in their minds. I’m talking about nearly random things that a bot can sometimes say in response to a human being’s input that leave the person fulfilled. I would say that the machine seems to be listening, but we both know it is not. However, to the affected person, it seems to say something at least plausible.

 

 

 
  [ # 12 ]

@Tyler, I’ll get back to you after I’ve had time to read your material.

@Erwin, thanks. I’m just glad to be here.

Robby.

 

 
  [ # 13 ]

This notion of deception comes from discourse about the Turing Game.  I’m talking about the original game where the judge tries to decide who is the man and who is the woman. This involves deception between the players - the human beings.

A computer has no means to deceive because it has no intention to please, to empathize, to be humorous, or to be deceptive. Its programmer may have some of these intentions in making the bot, but the chatbot has none of them, only if-then, do-while, ==, !=, etc.
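
To make the point concrete, here is a toy sketch in Python (purely illustrative, not any particular bot’s code) of what that control flow amounts to. There is no intention anywhere in it; it only compares strings and branches.

# Purely illustrative: a keyword-matching responder reduced to the bare
# control flow mentioned above (a loop, string comparisons, if/then).
# It has no intentions; it just returns whatever rule happens to fire first.

RULES = [
    ("hello", "Hi there. What would you like to talk about?"),
    ("movie", "I love the movies. Which film did you see last?"),
    ("bye", "Goodbye!"),
]

DEFAULT = "Tell me more."

def respond(user_input):
    text = user_input.lower()
    for keyword, reply in RULES:   # loop over the rules
        if keyword in text:        # string comparison, nothing more
            return reply
    return DEFAULT

if __name__ == "__main__":
    print(respond("I saw a movie yesterday"))
    # -> I love the movies. Which film did you see last?

Any appearance of empathy or humor in the output was put there by the programmer’s choice of rules, not by anything the program itself intends.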

 

 
  [ # 14 ]
Robby Garner - Oct 11, 2013:

...
I apologize for quoting [...], but I couldn’t have said it better.

Suspension of disbelief happens when you are watching a movie and you forget that it isn’t real. When talking to a chatbot, the bot does not deceive you; you let yourself forget it’s a computer program (if you are among the few people this actually happens to).

To say that a chatbot deceives you would imply that it had the intention and the means for deception. I don’t believe this currently exists in any program I’ve come across. Answering 20 questions is not a conversation. What I’m talking about are conversations that have taken place between a person and a chatbot, where the person thought the computer program was a human being.

When you go to a movie and you enjoy it, you may know the whole time that it’s made by a movie studio, but from time to time you may find yourself forgetting about the machinery, and focusing on the story, the dialogue, the characters. Sometimes something similar happens with chatbots.

Thanks,

Robby

Oh yes, we can talk about that based on my real-life experience! I do remember now! Thanks for reminding me. I remember the feeling when I was sitting in the audience in the theater, when everyone forgot about the machinery and focused on the story, the dialogue, the characters, which your chatbots depicted in a performance as actors on the stage.

My initial reaction was that I was experiencing the Turing Test being passed with flying colors! The entire audience was so entertained, expressing delight with the play “Hello Hi There,” directed by Annie Dorsen.

Robby, this is not meant as a compliment, but feel free to take it that way. Anyone can simply check the Guinness Book of World Records to verify. You are a human expert with intellectually appealing expertise. Real people in real life are very happy to pay money to see your chatbot creations on stage. So, in my opinion, you certainly can say it much, much better than any site on the web.

 

 
  [ # 15 ]

I see suspension of disbelief on a regular basis in conversations that work. The user personifies the bot, interacting with it as if it were another intelligent entity. This stands in stark contrast to those who do not believe; they treat it as though they were entering formulas into a spreadsheet.

 
