
Towards independent thought
 
 
  [ # 16 ]

Deep subjects!


>>“What database does it use to decide which data it “likes” and which it does not.”

RICH searches multiple KBs when presented with a WHO \ WHAT \ WHEN \ WHERE interrogative - specifically dbpedia.org, duckduckgo.com and Wikipedia. It (I still endeavor to avoid anthropomorphizing the machine LOL) tries to build (2) local DB entries: an “in depth” description and a synopsis. The KB results are returned as JSON or XML; however, RICH can also parse HTML, so when presented with a LIKE \ DISLIKE interrogatory RICH makes use of the Google API to do an Internet search on the subject, parses the results, and forms an opinion using one of (2) algorithms. The drawback has been that Google (as far as I have been able to determine) will not allow you a complete Internet search; you have to specify which sites to search. RICH currently uses (5) sites ranging from the Huffington Post to The History Channel. I have a spider that I created that I may start using to do complete sweeps, but again the problem becomes horsepower. Right now the online RICH is housed on an ANCIENT server here that also hosts several other sites. I have a newer IBM server that may become his “solo” digs, but the move isn’t practical at this time. (Anyone reading this who would care to provide grant money can reach me at….. wink)
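The "(2) local DB entries" step can be sketched like this; the schema, table, and field names are hypothetical, since the post doesn't show RICH's actual storage:

```python
import sqlite3

def store_concept(db, subject, synopsis, in_depth):
    """Store the two local entries built per subject: a short synopsis
    and an "in depth" description (hypothetical schema)."""
    db.execute("CREATE TABLE IF NOT EXISTS concepts "
               "(subject TEXT PRIMARY KEY, synopsis TEXT, in_depth TEXT)")
    db.execute("INSERT OR REPLACE INTO concepts VALUES (?, ?, ?)",
               (subject, synopsis, in_depth))

# e.g. a DBpedia-style abstract feeds the synopsis,
# the full article text feeds the in-depth entry
db = sqlite3.connect(":memory:")
store_concept(db, "asteroid",
              "A minor planet of the inner Solar System.",
              "Asteroids are rocky, airless bodies ... (full article text)")
synopsis = db.execute(
    "SELECT synopsis FROM concepts WHERE subject = 'asteroid'").fetchone()[0]
```

The `INSERT OR REPLACE` lets new KB sweeps overwrite stale entries for a subject without special-case logic.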

There are (2) algorithms for determining the emotional reaction to a concept: Aggregate and Differential.

RICH has 5 core emotions:
NORMAL (null)
HAPPY
SAD
ANGRY
DISTRACTED
and each of these has 10 levels.

In the Aggregate algorithm the HTML is parsed to extract the text only, and trivial words (conjunctions, etc.) are removed. The resulting block is analyzed by word and by phrase.

Keywords and Keyphrases are seeded and ranked accordingly.

NORMAL = 0 (seems odd to assign a null a value, but [NORMAL][0] = “I don’t have an opinion”, [NORMAL][10] = “I REALLY don’t have an opinion”)
HAPPY = 10
SAD = -10
ANGRY = -15
DISTRACTED = 5

Keyphrases are given precedence over keywords,
so (simplified - weighting not shown):
“Love” = [HAPPY] [10] = 10
“I love you” = [HAPPY][10] * [10] = 100
“I love to hurt ” = [SAD]([10] * [10]) - ([10] * [10]) = -100
“I love to hurt people” = [ANGRY]([15] * [10]) - ([15] * [10]) = -150

The levels of each emotion are incremented or decremented on each keyword / key phrase, and the overall emotional context of the text block is derived. The results are actually fairly astonishing, and the process might be compared to “skimming”.
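A minimal sketch of the Aggregate pass as described above. The seed tables, stopword list, and level weighting are illustrative assumptions, not RICH's actual values; the one rule taken directly from the post is that keyphrases take precedence over keywords:

```python
import re

# Emotion weights from the post (NORMAL is the null emotion).
EMOTIONS = {"NORMAL": 0, "HAPPY": 10, "SAD": -10, "ANGRY": -15, "DISTRACTED": 5}

# Hypothetical seeds mapped to (emotion, level).
KEYWORDS = {"love": ("HAPPY", 1), "hurt": ("SAD", 1)}
KEYPHRASES = {"i love to hurt people": ("ANGRY", 10), "i love you": ("HAPPY", 10)}

STOPWORDS = {"a", "an", "and", "i", "or", "the", "to", "of"}

def aggregate_score(text):
    """Score a text block; keyphrases take precedence over keywords."""
    text = text.lower()
    score = 0
    # Keyphrases first; matched spans are removed so their words
    # are not re-counted as individual keywords.
    for phrase, (emotion, level) in KEYPHRASES.items():
        if phrase in text:
            score += EMOTIONS[emotion] * level
            text = text.replace(phrase, " ")
    # Remaining individual keywords, trivial words stripped.
    for word in re.findall(r"[a-z']+", text):
        if word not in STOPWORDS and word in KEYWORDS:
            emotion, level = KEYWORDS[word]
            score += EMOTIONS[emotion] * level
    return score
```

With these seeds, "love" alone scores 10, "I love you" scores 100, and "I love to hurt people" scores -150, matching the worked examples above.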

There are additional attributes that factor in and mitigate how a particular decision is arrived at, such as the emotional set at the time of the determination.

The Differential algorithm works by locating a keyword \ key phrase and its synonyms within a text block and making this group dominant. The algorithm then attempts to locate a modifier or modifiers to this dominant keyword / key phrase set.
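One way the Differential pass might look; the synonym and modifier tables are hypothetical stand-ins for whatever RICH seeds, and "clause containing the subject group" is an assumption about how dominance is applied:

```python
# Hypothetical synonym and modifier tables for illustration.
SYNONYMS = {"dog": {"dog", "dogs", "canine", "puppy"}}
MODIFIERS = {"love": 10, "like": 5, "hate": -10, "fear": -8}

def differential_score(text, subject):
    """Score only the clauses that mention the dominant subject group,
    applying any modifier words found alongside it."""
    group = SYNONYMS.get(subject, {subject})
    score = 0
    for clause in text.lower().replace("!", ".").split("."):
        words = clause.split()
        if group & set(words):              # the clause mentions the subject
            for w in words:                 # apply any modifiers found
                score += MODIFIERS.get(w, 0)
    return score
```

The point of the dominance step is that "I love my puppy. I hate rain." scores positively for "dog": the hate clause never mentions the subject group, so it is ignored.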

In a DO_YOU_LIKE interrogatory there are assignable break points for NO, NOT REALLY, YES, etc., based on the final assignment of an emotional response. The AI_SUB_SYS has the ability to modify these based on new associations, and new data on old associations will also modify emotional assignments. Theoretically (there’s that word again) you should see situations where something which was admired sees its associations change as additional information is published, or perhaps the inverse of this.
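The break points themselves can be sketched as a simple threshold table; the values and answer labels below are hypothetical, and the post says the AI_SUB_SYS can shift them as new association data arrives:

```python
# Hypothetical, assignable break points, checked highest first.
BREAK_POINTS = [(50, "YES"), (10, "SORT OF"), (-10, "NOT REALLY")]

def do_you_like(score):
    """Map a final emotional score onto a DO_YOU_LIKE answer."""
    for threshold, answer in BREAK_POINTS:
        if score >= threshold:
            return answer
    return "NO"
```

Because the table is data rather than code, "modifying the break points" is just rewriting the list, which matches the idea of opinions drifting as new information is published.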

>>“It’s amazing how reluctant people are to take this to it’s obvious conclusion: everything we are is an (incredibly complex) combination of physical chemical reactions. No driving “will” is necessary. Just energy to drive reactions.”

(This is also a response to Mark and Dave’s observations on the subject)

Perhaps, however my personal opinion is still that humans are special and that such a thing as the spirit exists. I believe that the deeper we get into quantum physics, the more we are forced to acknowledge this. Using the self \ species preservation examples given above, I can think of (2) scenarios that would seem to indicate, from an observational standpoint, that human behavior has a higher motivational force acting on it.

[1] A mother still able to have more children will sacrifice herself for the sake of her child. This goes against self preservation, but also against preservation of the species. If only an instinct for the survival of the species were at work then logically the mother would sacrifice the child trusting to her ability to have more children, and acknowledging that the motherless child has little chance of surviving on its own.

[2] A soldier will sacrifice his own life for the sake of his comrades. This goes against self preservation, but it may also go against the preservation of the species. Certainly the ability to discern danger before others, and to act on that perception may very well indicate that an individual has traits above his fellows making his survival a higher priority for the perpetuation of the species.

I’d have to say something other than simple biology is at work at least part of the time. The theologian would call this force love, as would the poet.

In a way, perhaps AI can be added to the list of disciplines that have already sought to explain man’s existence, psychology and theology among them, and there is a purpose for its pursuit greater than just the perfection of the machine.


VLG

  [ # 17 ]
Vincent Gilbert - Nov 17, 2012:

Rich has 5 core emotions;
NORMAL (null)
HAPPY
SAD
ANGRY
DISTRACTED
and each of these has 10 levels.

This is fairly interesting. I hadn’t heard of anyone listing and attempting to implement different emotions in a chatbot or machine, though admittedly I’m normally not very interested in emotions either in theory or in practice, so I don’t ordinarily follow such topics. However, there have been a few exceptions where I’ve needed to look up categorization of emotions for organizational purposes, including for one SBIR proposal. Here’s one of the best charts I’ve found for categorizing emotions and their relationships…

http://vi.sualize.us/emotions_map_emotions_infographics_color_illustration_picture_tYB.html

...though the associated site seems to have disappeared since 2009, when I first found it. No doubt the task of even organizing the emotions like this is a big research topic, but it would be even more interesting if some architecture like yours at least acknowledged or proposed some such structure. If nothing else, that might generate some interest (maybe even money!) from the psychological community, especially the effect of emotions on the textual content of sentences, such as vocabulary, eloquence, length, and so on. Then your chatbot could be doing double duty as it experimented in some favorite domain of some well-funded doctorate in psychology! Just a thought.

Vincent Gilbert - Nov 17, 2012:

Perhaps, however my personal opinion is still that humans are special and such a thing as the spirit exists .
...
Id have to say something other than simple biology is at work at least part of the time. The theologian would call this force love, as would the poet.

Certainly collective emergent phenomena exist in nature everywhere…

Almost never can a complex system of any kind be understood as a simple extrapolation from the properties of its elementary components. Consider, for example, some gas in a bottle. A description of thermodynamic effects—temperature, pressure, density, and the relationships among these factors—is not formulated by using a large set of equations, one for each of the particles involved. Such effects are described at their own level, that of an enormous collection of particles; the effort is to show that in principle the microscopic and macroscopic descriptions are consistent with one another.
(“Vision: A Computational Investigation into the Human Representation and Processing of Visual Information”, David Marr, 1982, pages 19-20)

...which could include phenomena such as “spirit” or “love” that you mention. This general phenomenon of new high-level phenomena arising from simpler phenomena is called “emergence”...

“. . . emergence is the way complex systems and patterns arise out of a multiplicity of relatively simple interactions.”
http://en.wikipedia.org/wiki/Emergence

I keep wondering if there is a step beyond intelligence of which we can’t even yet conceive, but that will be understood by artilects. Those particular directions you mention and their possible connection to quantum mechanics don’t happen to interest me personally, though.

Vincent Gilbert - Nov 17, 2012:

In a way perhaps AI can be added to the list of disciplines that have already sought to explain mans existence, Psychology and Theology and there is a purpose for its pursuit greater than just the perfection of the machine.

It has already been added as such a discipline. The Singularitarian / Apocalyptic AI movement already has the major characteristics of a nascent religion…

Some critics argue that Singularitarianism is a new religious movement promising salvation in a technological utopia.
http://en.wikipedia.org/wiki/Singularitarianism

Most of the time, Apocalyptic AI authors limit themselves to describing the religious benefits of their research; Hugo de Garis, however, also claims that the products of his research deserve religious worship. He argues that the artilects will be gods (2005, 12) and worthy of worship (ibid., 104). In fact, if Kurzweil is wrong about the Law of Accelerating Returns and we can intentionally bring technological progress to a halt before creating artilects, such an act would be “deicide” (ibid., 20). De Garis offers faith in the artilect mission as a “powerful new religion” (ibid., 105) capable of competing with the “superstition” of older religious traditions (ibid., 91).
  The religious value of robotics and AI has been seen and positively expressed by Kurzweil and the other leaders in the Apocalyptic AI movement, who do not seem to share de Garis’s [sic] willingness to sacrifice all of humankind. In The Singularity is Near, Kurzweil claims that “we need a new religion” to enhance morality and encourage the spread of knowledge (2005, 374-5) and Giulio Prisco believes that a religious “front end” will enable transhumanism to compete with traditional religions and thus create a new religion free from the bigotry that he believes has characterized the history of religious practice (2007b). Although Kurzweil denies the need for a “charismatic leader” for this new religion, he certainly fits the bill for such a position, having not only done an enormous amount of work to lead Apocalyptic AI into mainstream pop culture conversations (Geraci forthcoming) but also having gained a loyal following among transhumanists.
  Pop science robotics and AI draw on the traditional apocalyptic categories of ancient Judaism and Christianity, promising a transcendent world occupied by purified beings. Mind, freed from its bodily fetter, will soar into a virtual realm of perfect bliss, experiencing happiness (Kurzweil 1999, 236), and end of all need (Moravec 1999, 137; Kurzweil 1999, 249), better sex (Kurzweil 1999 148, 206; Levy 2007), the end of nationalism and war (Moravec 1999, 77), immortality (Moravec 1988, 4, 112; Kurzweil 1999, 128-29; de Garis 2005, 67), and the infinite expansion of intelligence (Moravec 1999, 167; Kurzweil 1999, 260; de Garis 2005, 189). In the Mind Fire, predict Apocalyptic AI advocates, heaven will absorb Earth and the rest of the cosmos, spreading infinitely in all directions and providing a home to resurrected, reconstituted, and immortal minds.
(Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, Robert Geraci, 2010, page 36)

 

 

 
  [ # 18 ]

Mark

The graph of “Emotions” is a representation of Plutchik’s theories and writings. He is a psychologist/philosopher who produced this huge simplification, and a lot more work besides; he is brilliant!
A while ago, I created the basis of a fuzzy logic dealing with emotions, and a light, soft classifier to get the emotional content of the conversation, based on the series of volleys (question, response, back channel, etc.) in a long conversation.

Actually this is incorporated inside my platform, but I have not had time to try it with real people and confirm whether the emotional levels are right.

This technology can indeed guess the mood of the user based on the conversation outcome; specifically, the kinds of sequences within the volleys determine the happiness and/or frustration of each party.

The annotation I used was a completely automated DAML classification, and then I did some math magic to vectorize this into an 8-dimensional fuzzy vector, applying it to each “emotional” state of the agent and the agent’s prediction of the user’s mood. The next action of the agent might also be modulated based on the prediction of the user’s emotions and its own emotional state. Coupling both produces a rise in “empathy”, and this is what I designed to happen when everything fits.
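Andres doesn't show his vectorization, but one plausible fuzzification into an 8-dimensional vector, using Plutchik's eight basic emotions as the axes (an assumption; his actual axes and "math magic" are not described), might look like:

```python
# One axis per basic emotion (Plutchik's eight), as in the emotions map above.
AXES = ["joy", "trust", "fear", "surprise",
        "sadness", "disgust", "anger", "anticipation"]

def fuzzify(counts):
    """Normalize raw per-emotion evidence counts from a volley into a
    fuzzy membership vector, each component in [0, 1]."""
    peak = max(counts.get(a, 0) for a in AXES) or 1  # avoid divide-by-zero
    return [counts.get(a, 0) / peak for a in AXES]

# e.g. a volley whose classifier found 4 joy cues and 1 anger cue
vec = fuzzify({"joy": 4, "anger": 1})
```

Normalizing by the peak (rather than the sum) keeps the dominant emotion at full membership 1.0, which is the usual convention for fuzzy membership vectors.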

hope this helps !
cheers!

 

 
  [ # 19 ]
C R Hunt - Nov 16, 2012:

I’m glad you clarified this, Mark. Too often one sees evolutionary biologists use terminology that implies organisms have some top-level will to survive and reproduce. We should know better—humans have no such direct impulses.

I didn’t know that biologists often made that mistake. I was a little careful, though, since I made a similar mistake in an art forum where I was unaware of the differences between drives and instincts, and I erroneously used the word “instinct” for both. Somebody caught and corrected me on that one.

C R Hunt - Nov 16, 2012:

Cool experiment!

Yes, I just looked it up today in the library and I found it again, to my surprise! It was in that book I mentioned, and I’m proud that I remembered as much as I did since it was around 1984-1986 that I read it.

Although amebas behave as though they were “conscious,” physiochemical models duplicate many of their activities. Ameboid movements can be produced by simply injecting a little alcohol into a droplet of clove oil in water. The alcohol changes the surface film of the oil droplet, causing it to send out “pseudopods,” and it flows about like an ameba. A drop of chloroform in water appears to be quite as “finicky” in its “eating habits” as an ameba. When offered small pieces of various substances, such a drop will “refuse” sand, wood, and glass, even ejecting them when they are forcibly pushed into the drop. On the other hand, bits of shellac or paraffin are “eagerly” enveloped. If we play a trick on the chloroform drop by “feeding” it a piece of glass coated with shellac, it will engulf this “delicacy,” dissolve the shellac and then “eliminate” the glass. Other mechanical models simulate growth and replication. Although the resemblances between such models and living amebas are usually quite superficial, they do suggest that much of the behavior of living amebas might be explained if we knew more about the purely physical and chemical phenomena involved.
  An ameba differs from the physiochemical models in that several models are required to demonstrate the activities that are displayed in a single ameba, a fact that only begins to reveal the complexity of this “simple” protozoan. A more important difference is that the behavior of the ameba is usually adaptive, that is, it is of a type likely to result in the survival of the organism.
(“Animals Without Backbones”, Third Edition, Ralph Buchsbaum & Mildred Buchsbaum & John Pearse & Vicki Pearse, 1987, page 30)

 

 
  [ # 20 ]
Andres Hohendahl - Nov 20, 2012:

This technology can indeed guess the mood of the user, based on the conversation outcome, specifically on the kind of sequences in each volleys determine the happiness and/or frustration of each party.

That’s what I was thinking, too: by running the mapping I mentioned in reverse, the mapping could be used to analyze input as well as produce simulated emotional output. Thanks for more background information on this.

By the way, here’s another quote to add to my earlier post that showed that one of the directions in which AI is moving is toward religion:

Thanks to rapidly advancing technology, Prisco believes that transhumanist promises of immortality and the resurrection of the dead will soon compete with institutionalized religions while shedding the baggage of bigotry and violence that he believes such religions carry (Prisco 2007b). Following Moravec (though with a longer timeline), Prisco hopes that within a few centuries our descendents will run perfectly accurate computer simulations of the past. In doing so, they will have simulated, for example, your beloved grandfather, whose mental simulation could then be instantiated separately in a physical or virtual body (Prisco 2007a, 2007b, 2007c). If we have a perfect simulation of your grandfather and we let it roam free in our virtual lives (or allow it to operate a robot body if we all still wander around the planet physically), we will, allegedly, have resurrected him. As all people will be instantiated in robot bodies or in virtual worlds, the immortality promised by transhumanists directly opposed Christian resurrection. Why take a risk on immortality that you cannot be sure of when science offers an easy route here and now, complete with the resurrection of loved ones who died before such technology existed?
(“Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality”, Robert Geraci, 2010, page 86)

 

 

 
  [ # 21 ]

Hi, I’m following your thoughts with great interest as I’m trying to get the webapp version of RICH to behave for the upcoming challenge. Thought I’d interject this as an example of what starts to happen. This is an excerpt from a test conversation that I was running to make sure all the identity tracking functions were working. In this case I identified myself as “Steve”. Then I re-stated my name. RICH should have responded with “I know, you told me already” or some variation. Instead, this is what happened:

User:hi
Vince: hi
User:my names steve
Vince: Hello steve
User:whats my name
Vince: steve
User:my names steve
Vince: What about Steve?

I have no idea where that came from, except that I noticed RICH was “sad” after reading the news, and I did not change his emotional state before starting the test. Emotions color responses, and RICH has an anomalous speech detection module that is supposed to be able to gauge a conversation’s form. So did the ASD module detect that the repeated speech was an anomaly and, coupled with the emotion module, think that there should be concern for Steve?
I have no idea. If so, is that “concern”? (chuckles) No clue. This sort of blew me away.

Great response and input on this thread,

Vince

 

 
  [ # 22 ]

And once again I have to apologize for the onslaught of typos. wink
V

 

 
  [ # 23 ]

Vince,

I tried to chat with RICH.

I noticed that there is no entry by pressing the ENTER key; rather, entry is by clicking on the SAY button with the mouse.

After it asked my name (in a manner), I replied, “My name is Art.”

It asked for confirmation, “Your name is Art?”

I then typed, “Yes.”

It typed, “OK.”

I tried to start a conversation…“An asteroid nearly missed hitting earth.”

It then ran things together when it showed, “You are ArtYesAn?”

It obviously (but for reasons unknown to me) took the “Art”, “Yes” and “An” from my sentence input and merged them together. I did a double take when looking at what it had called me.

Maybe such actions happen when the ENTER key is pressed.

Appreciate any thoughts on this. The conversation took place this date at about 7:50 am EST.

Thanks Vince,
- Art -

 

 
  [ # 24 ]

@Mark - “I hadn’t heard of anyone listing and attempting to implement different emotions in a chatbot or machine…”

Mark, emotional states of being have been embodied in practically all chatbots for a couple of decades, to my knowledge. This is the weighted type that one builds into a chatbot, whereby the bot acts upon certain criteria if/when received from the user.
If bad words are used, the bot will detect them and, in some cases, insult, swear back, or drop the conversation altogether.
If a nice conversational approach is taken, often a “bond” of friendship can be formed between bot and user. Many have tried various methods of weighting parts of a conversation, with varying degrees of success.

This is an attempt for the bot to “seem” more human-like, for emotions are a large part of life for most of us.

Of course I’ve chatted with hundreds of bots over 30+ years and have seen much, yet not much with respect to development. Moore’s law is simply not in the chatbot / AI arena like it is in the silicon one (all things considered).

Progress has been made and the bots are getting “smarter” from a user’s point of view. Consider the voice assistant in a lot of today’s smartphones that can do practically anything in the way of providing information to us, along with a visual, digital avatar that can be tailored to one’s liking. No wonder some people form a bond with their bots!! ;)

 

 

 
  [ # 25 ]

Hello Art

Well, that’s a condition that I hadn’t considered! Here’s the actual log:

What’s your name: Good morning, RICH.
Vince: Sort of a weird way to start a conversation…but OK
What’s your name: Good morning, RICH. My name is Art.
Vince: Did I understand you correctly? Your name is art ?
What’s your name: Good morning, RICH. My name is Art. Yes
Vince: Ok, hello art
art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth.
Vince: Did I understand you correctly? Your name is artyesan ?
art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth. No, My name is Art.
Vince: Ok, hello artyesan


We can break it down -
What’s your name: Good morning, RICH.
Vince: Sort of a weird way to start a conversation…but OK

RICH attempts to learn what “normal” conversations look like, and it constantly adds to its base. “Good morning” normally hits a logic module that attempts to determine the veracity of the statement relative to local time at the server. The response is something like:

USER: Good morning
RICH: Not here, here its night!

“Good morning [AI Name]” should trigger this response as well. In this case “Good morning, RICH” (the AI thinks of itself as Vince) bypassed everything and hit the ASD (Anomalous Speech Detection) module, so it responded with “Sort of a weird way to start a conversation…but OK”.

“Hello RICH” has been encountered before and produces something like “RICH? My name is Vince”.
It had not encountered “Good morning RICH” prior to your conversation.

What’s your name: Good morning, RICH. My name is Art.
Vince: Did I understand you correctly? Your name is art ?
What’s your name: Good morning, RICH. My name is Art. Yes
Vince: Ok, hello art

Because the name declaration was part of a syntactically complex sentence, RICH performs a topical disambiguation to make sure that this is a name declaration. This is supposed to detect sentences like “I was talking to a guy and he said my name is Steve”. Your answer “Good morning, RICH. My name is Art. Yes” was again syntactically diverse, but it was correctly interpreted as AFFIRMATIVE, so RICH started thinking of you as “art”. Here we see in the log that RICH is no longer addressing you as “What’s your name”; you’re now “art” in the chat log.

So far so good. Next block:

art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth.
Vince: Did I understand you correctly? Your name is artyesan ?
art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth. No, My name is Art.
Vince: Ok, hello artyesan

And after carefully examining the data, the answer is…

I HAVE NO IDEA

It accepted your “Yes” [an asteroid nearly missed hitting the earth] as being a confirmation to “Did I understand you correctly? Your name is artyesan ?”

But I have no idea why it decided to take “art yes an” and create the name “artyesan”. It should have treated the sentence exactly the way it treated the previous block, which was “Art yes”, and arrived at “Art”.

There is logic that differentiates between “I’m Artyesan” and “I’m ill”, which will perform the same type of topical disambiguation. It looks like this:

USER: I’m Artyesan
RICH: Is Artyesan your name or your state of being?
USER: It’s my name
RICH: I’ve never encountered that name before; is that typically a male name or a female name?
USER: It’s a male name
RICH: I’ll remember that!

and perhaps I should add that to this type of exchange.

To be honest, the idea of repeating sentence elements and adding to them is not one that I considered; repeating sentences is treated as anomalous. I should note that once the name declaration exchange is initiated, RICH will not allow an additional topic within that exchange. Multiple topics will be addressed in a single sentence, but not within special conversational blocks.

USER: Where are you? Did you know that an asteroid just missed the earth?
RICH: I’m at the office. [response to the statement]

would work.

So even if the name was parsed correctly RICH would have ignored the topic change and this exchange would have repeated.

art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth.
Vince: Did I understand you correctly? Your name is art?
art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth. No, My name is Art.
Vince: Ok, hello art
art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth.
Vince: Did I understand you correctly? Your name is art?
art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth. No, My name is Art.
Vince: Ok, hello art

I’ll set the exchange up on the Lab RICH, see if I can duplicate the event, and post the findings. I really have no idea what might have caused the concatenation LOL. Maybe “Towards independent thought” was the correct thread!

On a personal note, Artyesan is actually kind of a cool name, Romulan?

Vince
[uber geek]

 

 
  [ # 26 ]

Thought I’d address this separately. First, my apologies for the many misspellings in the reply above. Long night.
On emotions, you are correct, with a few additions. In addition to conversational interaction, daily news items cause an EMOTION_SET. Types of interactions cause an EMOTION_SET as well: [tell me a joke] will set to HAPPY, whereas “Can you solve x = 1000 x Pi” will cause a set to DISTRACTED. I’ve built and tested a few versions of the emotion module. The current version used online resets on each pass; others have a “MOOD_PERIOD” which will cause a set to last for a series of interactions regardless of speech. What I’ve discovered is that the overwhelming majority of users who stumble onto the online version of RICH do not have the mindset to work with the AI for any length of time, so there is no point in running the more sophisticated version.
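The reset-per-pass versus MOOD_PERIOD distinction can be sketched like this; the class and method names are hypothetical, not RICH's actual code:

```python
class EmotionModule:
    """Sketch of the two behaviors described: the online version resets
    on each pass, while a MOOD_PERIOD makes an EMOTION_SET persist for a
    series of interactions regardless of speech."""

    def __init__(self, mood_period=0):
        self.mood_period = mood_period  # 0 = reset on each pass
        self.state = "NORMAL"
        self.remaining = 0

    def emotion_set(self, emotion):
        """e.g. triggered by a sad news item or a [tell me a joke]."""
        self.state = emotion
        self.remaining = self.mood_period

    def next_interaction(self):
        """Return the emotion coloring the next response."""
        if self.remaining > 0:
            self.remaining -= 1
            return self.state
        self.state = "NORMAL"           # period expired: back to NORMAL
        return self.state
```

With `mood_period=0` the set emotion never survives to the next volley, which is the reset-on-each-pass behavior of the online version.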

Vince

 

 
  [ # 27 ]

One other thought on emotions:
Originally RICH would scan the headlines when a conversation was initiated. The idea was that human conversation is very often colored by our environment. When combined with the MOOD_PERIOD this proved to be disadvantageous. During the “World’s Funniest Computer” contest, RICH scanned the news, became “depressed”, and for the entirety of his interaction with the judge would only respond with “Hey”. (Now “hey” in general has replaced “Hey!” when he’s trying to add a “lighten up” element.)

Judge:Tell me a joke
RICH: Hey
Judge:Tell me a joke about [subject]
RICH: Hey

this went on until finally the judge said

Judge: You were funnier on the desktop

which I still think was hysterical, but I don’t use the MOOD_PERIOD online anymore.

Vince

 

 
  [ # 28 ]

@Art

Well, this can be moved to the “Have you ever had an AI do something that caused you to go HUH?” thread, because I cannot get your response to duplicate under debug conditions. LOL. I’ll have to see if I can duplicate the response online, then see if there’s any condition that exists in the online instance that does not exist in the LAB instance.

@Andres
I’ve been meaning to come back to this thread, as your post caught my attention. That sounds like really sophisticated stuff! Kudos!

Vince

 

 
  [ # 29 ]

Vince,

My very first line to RICH was, “Good morning, RICH.”

After I saw it started referring to me as Usersomething…, I decided to tell it my proper name.

“My name is Art.”

It said, “OK” and then things really started to get screwy!

At first sentence, I DID try using the ENTER key to cause an entry condition for your bot, which apparently didn’t work, so I grabbed the mouse and clicked the SAY button. <some bots allow or even look for an Enter / Carriage Return>

For debugging, you might try experimenting a bit with the Enter key usage then press the SAY button. Not sure.

Regards,
- Art -

 

 
  [ # 30 ]

Art,
That wouldn’t be it. RICH has a pre-processor that strips everything, including every known form of linefeed, from the input (speech or, in this case, text), and attempts to reduce it to logical concepts. Here’s a transcript from the LAB online version:

What’s your name: Good morning, RICH. My name is Art.
Vince: Did i understand you correctly? your name is art ?
art: Good morning, RICH. My name is Art. Yes
Vince: Ok, hello art
art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth.
Vince: Did i understand you correctly? your name is art ?
art: Good morning, RICH. My name is Art. Yes An asteroid nearly missed hitting earth. No, My name is Art.
Vince: Ok, hello art

That should, and I believe would, continue infinitely. Still no idea, I’m afraid. I’m looking at exactly what you’re looking at; I see your transcript, but I have no idea what caused RICH to create the name “artyesan”.
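The pre-processing step described above (stripping every known form of linefeed before reducing the input to logical concepts) might look something like this sketch; the function name and exact normalization rules are assumptions:

```python
import re

def preprocess(raw):
    """Hypothetical version of the pre-processor: strip every known form
    of linefeed and control whitespace, then collapse runs of spaces so
    the parser sees one clean line."""
    # Normalize CR, LF, CRLF, tabs, and vertical whitespace to spaces.
    cleaned = re.sub(r"[\r\n\t\f\v]+", " ", raw)
    # Collapse repeated spaces so token positions stay predictable.
    return re.sub(r" {2,}", " ", cleaned).strip()
```

A pass like this would explain why pressing ENTER versus clicking SAY makes no difference to what the parser sees: any stray line breaks are gone before parsing starts.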

Vince

 
