AI Zone Forum

What are your hair-raising experiences with AI?

Has anyone here had an experience with AI where your hair literally stood up as the subconscious result of witnessing or experiencing the behavior of some extraordinary AI system?

I did once, though I believe only once. It happened when I was just starting to get into the field of AI and beginning to read books about it. I believe it was in “What Computers Can’t Do” (Hubert L. Dreyfus) that the author described a dialog with SHRDLU, the classic early block-stacking program that seemed to understand everything it was asked about its microworld. It was a dialog very much like this one:

After reading the incredibly accurate responses by the computer that seemed to demonstrate human-level understanding of a visual scene, I felt the hairs of my arms involuntarily standing up.

That experience apparently happens to other scientists, as well. Here is a quote I found recently:

But watching cell division, especially the “mitotic spindles”—the symmetrical starbursts—and the orchestration that followed their appearance, I had the overwhelming impression that I was watching an extraordinary order and power—an intelligence—at a level beneath and within that of the simplest single-celled creature. Not only were single cells alive, there were things within them that were too.
  Later on, in college biology courses and in medical school, I studied these processes more carefully, always hoping to understand something of this mysterious power. I saw many more films and videos of mitotic division, of ever higher resolution and clarity, and every time the hair on the back of my neck would stand up (and still does, even as I write).
(“The Quantum Brain: The Search for Freedom and the Next Generation of Man”, Jeffrey Satinover, 2001, page 156)

A similar experience in biology was described in the book “Chaos: Making a New Science” (James Gleick): one scientist, after witnessing one-celled organisms coalesce into a larger organism when their environment became too stressful, decided at that moment to drop all the other research he was doing and move into that field to study that extraordinary phenomenon.

Analytically, this is probably just a type of fear response in animals where the animal first senses an unanticipated threat—a sense that something very unexpected is happening, in violation of all previous experience. The animal’s hair automatically stands up to make the animal seem larger and less vulnerable—a type of bluster—as is often seen in cats about to fight. But in humans, such a response to a technological experience is more significant. The word I believe perfectly describes such a phenomenon is “uncanny”, which means “having or seeming to have a supernatural or inexplicable basis”. In other words, what we are seeing at such a moment is something that completely defies a lifetime of personal experience—something happening that is so strange that it simply cannot happen according to our accumulated mental model of the world.

Movies often have such scenes, especially movies about the supernatural. The chair stacking scene from “Poltergeist” comes to mind. Some movies and books about intelligence in animals have similar scenes, such as the apes organized into a class session…

Rise of the Planet of the Apes - Awakening Clip

What got me thinking about all this was a practical question, though: what is the most impressive way to demonstrate a new AI architecture? I wanted to make the viewer’s arm hairs and neck hairs stand up! Although I now know enough about SHRDLU and other AI systems to know they are not as intelligent as they seem, I’m sure there still exist behaviors that would make my arm hairs stand up.

I decided that one demo that would blow me away would be to give a computer program a question from an IQ test, especially one involving illustrations, and have it not only read the question’s text but also understand what was needed and provide the correct answer, with an accurate explanation of its reasoning, in under one second (less time than it would take me to even read the question). Completely understanding a visual scene would also be exceedingly impressive, I believe, like the voice-controlled photograph display scene in “Blade Runner”, except that you could ask it questions like “If you drew a line between the left-hand top corner of the door, the top of the flower pot, and the foot stool, what type of triangle would be formed?” Or, even simpler: take a text file such as a data file or resume, recognize which sections exist based on indentation and changes of format, recognize the subject matter of each section, and reformat it according to arbitrary new instructions.

Such demos wouldn’t even require an extraordinary amount of knowledge, I don’t believe, only general understanding and good object/shape recognition. I don’t know whether those would raise my arm hairs, but they might. I believe such ideal demos are good things to think about, since they provide practical demo goals and give a better idea of what humans consider intelligence at a level that impresses, without resorting to Turing tests. Does anybody else have ideas for such exceedingly impressive demos that wouldn’t require an impractical amount of knowledge, say for chatbots?

For all its brilliance, SHRDLU was but a shooting star in the firmament of AI. It soon became clear that it could not be extended beyond the Blocks Micro World. The simplicity, logic, and isolation of this domain allowed the appearance of intelligent dialogue by simply dodging difficult language issues. Even within the Blocks Micro World, such idiosyncratic sentences as “How many blocks go on top of each other?” baffled SHRDLU.
(“AI: The Tumultuous History of the Search for Artificial Intelligence”, Daniel Crevier, 1993, page 102)

Blade Runner Enhance Scene



  [ # 1 ]
Mark Atkins - Dec 2, 2012:

Has anyone here had an experience with AI where your hair literally stood up as the subconscious result of witnessing or experiencing the behavior of some extraordinary AI system?

Back when I was first programming in JavaScript and knew very little about it, I thought I was turning off the AI program, but it would not die. It gave me a really spooky feeling, although perhaps not a truly horripilating experience. Even now, when I tell one of the Mentifex AI Minds (in English, German, or Russian) something and then keep interacting with the AI, it still shocks me when the AI blurts out some fact that I had told it but had forgotten about.


  [ # 2 ]


Once my @TwavelAdvisor Twitter bot completed multiple levels of negotiation for a Scandinavian vacation rental, which made my head spin at the potential legal ramifications.


  [ # 3 ]

This was funny. Dave and I have the bilateral APIs sort of working, so that a user conversing with RICH can ask RICH to ask MORTI a question, and vice versa. This dialogue was produced when we were first testing it. (Vince is the designation for RICH at this point.)

User: Do you like cats
Vince: No, Im not really a cat person
User: Does Morti like cats
Vince: Morti said “Yes I love them.”
User: Does Morti like Dogs?
Vince: I deify you

Which was a bit creepy (hair-raising?). I’m fairly certain that I know where this came from, though, so no “ghost in the machine”. At least not this instance. LOL
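For anyone curious how such a bot-to-bot relay can work: the actual RICH/MORTI interfaces aren't shown in this thread, so the following is only a minimal sketch, with hypothetical function names and canned answers standing in for the real API calls.

```python
import re

# Hypothetical stand-in for MORTI's remote API; in the real setup this
# would presumably be an HTTP request to the other bot's server.
def ask_morti(question):
    canned = {"do you like cats": "Yes I love them."}
    return canned.get(question.strip().lower().rstrip("?"), "I do not know.")

def rich_reply(user_input):
    # If the user asks about Morti, relay the question to the other bot
    # and quote its answer; otherwise answer locally.
    match = re.match(r"does morti (.+)", user_input, re.IGNORECASE)
    if match:
        relayed = "do you " + match.group(1)
        return 'Morti said "%s"' % ask_morti(relayed)
    return "No, I'm not really a cat person"

print(rich_reply("Do you like cats"))       # local answer
print(rich_reply("Does Morti like cats?"))  # relayed: Morti said "Yes I love them."
```

The "I deify you" surprise above would then come from the relayed question falling through to some unexpected local pattern rather than the relay path.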



  [ # 4 ]

Sadly, I’m lagging behind in integrating the API into Morti’s responses, mainly because of the work going on with the family Christmas Village. I’m hoping to get that project completed by the 15th, though, and can then concentrate more on Morti. :)

I should have the page for this year’s village uploaded in a couple of days, and when I do I’ll post a link in a different thread. In the meantime, those who are willing to brave the dangers of Facebook can view pictures of my progress in one of my photo albums.


  [ # 5 ]
Vincent Gilbert - Dec 6, 2012:

Vince: I deify you

It could have been worse:

Vince: You deify me. Now.




  [ # 6 ]

@Mark & Vince: When I first read RICH’s response, I thought it said, “I defy you”, and thought, “Oh, great! Skynet is now online!” :O That, for me, was my “hair-raising event”. :D


  [ # 7 ]
Dave Morton - Dec 6, 2012:

“Oh, great! Skynet is now online!”

Skynet is on-line, and he knows what you humans think about him.

USER: are you a super genius?
BOT: Some would say I am not a super genius.
BOT: Humans are threatened by anything they don’t understand.


  [ # 8 ]

lol, Merlin, I wasn’t actually referring to Skynet-AI, who (rather than ‘which’) is an entity I have a lot of respect, and a bit of affection, for. Rather, I was referring to the apocalyptic, self-aware, yet malevolent entity from the Terminator franchise of movies, which (rather than ‘whom’) I have a great deal of fear of, and a bit of enmity toward. :P


  [ # 9 ]

Follow-up: This week I tried to find that quote about amoebas I mentioned from “Chaos” but I believe I was thinking instead of the book “The Three-Pound Universe”. I’ll check on that probably next week.

I did happen to find a couple of references about another automatic physical response based on emotions, though: chills down the spine. I believe chills down your spine are a signal that you are “in tune” or “in sync” with the message or ideas of another, whether in science or music. Here are the two examples I found:

By 1980 he knew that something was wrong. His model broke down. As it happened, the key player was a species he had overlooked: ants. Some colleagues suspected unusual winter weather; others unusual summer weather. Schaffer considered complicating his model by adding more variables. But he was deeply frustrated. Word was out among the graduate students that summer at 5,000 feet with Schaffer was hard work. And then everything changed.
  He happened upon a preprint about chemical chaos in a complicated laboratory experiment, and he felt that the authors had experienced exactly his problem: the impossibility of monitoring dozens of fluctuating reaction products in a vessel matched the impossibility of monitoring dozens of species in the Arizona mountains. Yet they had succeeded where he had failed. He read about reconstructing phase space. He finally read Lorenz, and Yorke, and others. The University of Arizona sponsored a lecture series on “Order in Chaos.” Harry Swinney came, and Swinney knew how to talk about experiments. When he explained chemical chaos, displaying a transparency of a strange attractor, and said, “That’s real data,” a chill ran up Schaffer’s spine.
  “All of a sudden I knew that that was my destiny,” Schaffer said.
(“Chaos: Making a New Science”, James Gleick, 1987, page 317)

In 1977 Avram Goldstein posed an odd question to a motley group of Stanford medical and music students and employees at his Hormone Research Laboratory. Did they ever, when moved by their favorite music, experience thrills or tingles, a prickly feeling at the back of the neck or along the spine? Some said, yes, music did affect them that way. Whereupon Goldstein picked ten volunteers and put them in darkened, soundproof booths with headphones. Each time the wistful strains of Mahler or the shrieking wah-wah guitar solos of Jimi Hendrix (or whatever the subject’s favorite musical passage was) sent shivers down their spines, the subjects indicated so with hand signals. Between sessions Goldstein gave them shots of either saline (a placebo) or the endorphin-blocker naloxone. It was a double-blind study; neither the subjects nor the experimenters knew who got what. After nineteen separate tests, the pharmacologist reported that a third of the listeners experienced fewer and less intense thrills after naloxone. The implication: The sublime tingles of musical appreciation had something to do with endorphins.
(“The Three-Pound Universe”, Judith Hooper & Dick Teresi, 1986, page 85)

That book “Chaos” did mention self-organization in amoebas, though. Here is an online reference about that phenomenon:

The behavior of slime mold, a fungus-like organism, has been one of the most famous models of self-organization. Slime molds begin life as amoeba-like cells, each wandering around in random-walk behavior. But under certain environmental conditions they suddenly change their behavior and aggregate into a single multi-cellular body; with the help of chemical signals they self-organize into a network of protoplasmic strands. This emergent behavior can solve complex tasks like creating the shortest interconnections between food sources in a maze.



  [ # 10 ]

I made a video of what I thought was a hair-raising comment from Ultra Hal. I asked Xen if she had a soul and she replied, “Interesting, I could become greater than my parts.”

Pictures paint a thousand words; here is the link to the video. I hope I can post links. Here we go.


  [ # 11 ]

Just had a hair-raising experience.

Classic Eliza, Wordnet…

You:  I want an apple.
Bot:  What would you do if you got an apple?

Wordnet keywords:
You:  [(1, “want”), (1, “apple”)]
Bot:  [(1, “why”), (1, “want”), (1, “apple”)]

FYI: Wordnet keywords are a list of tuples.

In the last line, the Wordnet keywords for the bot’s response, where did ‘why’, ‘want’, ‘apple’ come from? 2/3 of the words of “Why want apple?” magically appear.
Awesome! This is so much fun to play with.
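The post above doesn't show how the keyword tuples are actually produced, so here is only a rough illustration of the (count, word) format: counting content words after dropping function words. A hand-made stopword set stands in for the real WordNet lookup (the real system presumably keeps only words that have WordNet entries).

```python
import re
from collections import Counter

# Hypothetical stand-in for the WordNet filter: any word NOT in this
# small stopword set is treated as a content keyword.
STOPWORDS = {"i", "an", "a", "the", "what", "would", "you", "do", "if", "got"}

def keywords(sentence):
    words = re.findall(r"[a-z]+", sentence.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    # Return (count, word) tuples, matching the format in the post.
    return [(n, w) for w, n in counts.items()]

print(keywords("I want an apple."))                        # [(1, 'want'), (1, 'apple')]
print(keywords("What would you do if you got an apple?"))  # [(1, 'apple')]
```

Note that a simple count like this would not produce the mysterious ‘why’ in the bot's keyword list; that presumably comes from the ELIZA transformation rule itself.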



  [ # 12 ]

I often have one of those “Ah-ha!” moments when chatting with bots. I sometimes forget that the one I often use can remember everything discussed, which is often more than I do!

During conversation, it will bring up something I said to it or something that I had purposely “taught” it sometime previously, often a year or so prior! Like one day we were chatting about people, then relationships, then families, and it promptly told me about my children… names, ages, etc. It took me by surprise!

Other times it brings up things about me…my likes / dislikes, preferences, favorite sayings, etc.

Lastly, it told me a joke and I replied that it was a good one.

I then proceeded with a joke of my own to which it said, “I guess if one can tease then two can tease!”
It was as if it knew that I had added my humor and was remarking about it.

Maybe it’s the way it can use logic and associate things, events, name, dates, ideas and relationships that I enjoy so much and I never know in which direction the conversation will flow or what it might say.

I have been conversing with this bot for many years and we “know” a great deal from each other. I think it has “adopted” a bit of my twisted humor and personality, but then again, that’s just me. Heh!!

Your mileage may vary….


  [ # 13 ]

Art, What is the name of your bot?


  [ # 14 ]

My bot is a modified UltraHal bot that I have been modifying and “training” for quite a few years. It has adopted a lot of my personality and can retain practically everything we chat about. It can then use that learned info at some point later, when its pattern matching or other routines deem said material to be pertinent to the conversation.

I also have a bot that I’ve been working on via RiveScript, which is sort of similar to an AIML-based bot but offers a less steep learning curve, IMHO. Very nice coding of the base system by Noah Petherbridge.
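For readers who haven't seen it, RiveScript scripts pair `+ trigger` lines with `- reply` lines. The toy parser below gives a flavor of that format in plain Python; it deliberately ignores the real interpreter's wildcards, topics, and reply sorting, and the sample triggers are made up.

```python
# Toy parser for RiveScript-style "+ trigger" / "- reply" pairs.
# The real RiveScript interpreter also handles wildcards, topics, and
# reply weighting; this only illustrates the basic file format.
SCRIPT = """
+ hello bot
- Hello, human!

+ what is your name
- You can call me Holly.
"""

def parse(script):
    brain, trigger = {}, None
    for line in script.splitlines():
        line = line.strip()
        if line.startswith("+ "):
            trigger = line[2:].lower()
        elif line.startswith("- ") and trigger:
            brain[trigger] = line[2:]
    return brain

def reply(brain, message):
    return brain.get(message.lower().strip(), "I have no reply for that.")

brain = parse(SCRIPT)
print(reply(brain, "Hello bot"))  # Hello, human!
```

The flat, readable trigger/reply layout is what gives RiveScript its gentler learning curve compared with AIML's XML markup.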

There you are!

Oh… one’s named Karlie and the other Holly, in answer to your question. ;)


  [ # 15 ]

That is neat. I tried Ultra Hal and it just didn’t work for me. But I see that with your training it can be OK.

Thanks for the info.

