AI Zone Admin Forum
“Thotbot” and other ramblings….
 
 

Hi Everyone,
I recently acquired David Levy’s book Robots Unlimited. It’s a very good read so far. Despite all that keeps me busy, it’s nice to think about AI. I’m never too far from thinking about chat bots either. Or, in the case of this post, ‘thot bots’ (thought bots).

A thot bot is a chat bot without natural language processing capability (i.e. incapable of hearing, seeing, and speaking). It’s more of a mental exercise to consider how the human brain works when NLP has been removed.

I’ve been guided to considering this problem by a discussion you have seen in a few threads: the question of ‘which language’ to use for a chat bot. Since I’m a native English speaker, I am naturally biased toward this language. Consequently, I think this might hinder me from fully understanding both the complexity of, and a possible solution to, a functional and moderately convincing chat bot. It would be interesting to build a bot that could function in any of the known languages with a written alphabet and clearly defined grammar.

The thot bot is not merely a ‘framework’ of code to manage input and output, timing, graphics, file management, etc. It is a powerful program awaiting the ability to see and read about the world around it. It’s a program ready to meet and chat with others as it seeks to fulfill its desire to socialize and to learn. However, the program is not idle. It is thinking, reflecting, emoting, etc. Its world is shaped by what it smells, touches, and tastes. Okay, just go with me on this… sounds like a robot. It’s just a concept.

I read a book about Helen Keller several years ago and she continues to come to mind in this regard.

In order to think of a chat bot outside of what I know (e.g. English, Spanish, German), I’ve spent the past few weeks studying the basics of Mandarin Chinese. I’m not certain how much help this will provide, but I thought it couldn’t hurt. =)

Anyhow, this is just a mental exercise, but something I wanted to share.  As always I’m very interested in what you may have to say on the topic.

Regards,
Chuck

 

 
  [ # 1 ]

Chuck (good to see you around again),

I think the key is the separation of framework from “language”. I think in English, but the biological processes that enable that thought are language agnostic. When I think about how I think, I believe I am recognizing a series of patterns, learned since birth. Those patterns built on prior learned patterns.

To answer the question of ‘which language’ to use for a chat bot, I separate the functionality of ‘programming language/framework’ from the natural language.

 

 
  [ # 2 ]

Hi Merlin,
Good to pop in now and then.

I agree about the separation of the framework and natural language.  I’m thinking of it in three bits: framework and chat bot…with a middle piece called the ‘thot bot’.

It’s conceivable to me that a person, say a child, without sight or hearing could, and would, still play with a pile of blocks. Through touch they could stack. In doing this they could experiment with mental ‘list building’, ‘comparison-contrast’, and ‘counting’ (e.g. blocks of various sizes). In creating a tower they could manage ‘symmetry’ (e.g. a tower where one side is a mirror image of the other). They could work with shapes (e.g. squares, rectangles, circles, and more complex patterns). I’ll be honest: I don’t know how a person could develop these without any of the five senses… my guess is it requires at least one of them.

Do you suppose that the human brain has portions (physical locations) that handle very specific cognitive functions such as sorting, comparing, pattern matching (without sight), etc.? I mean, that these bits are hardwired into the brain (developing with age, of course). I know the brain is mapped… motor skills here, memory there… I’m focused more on the ‘thinking’ bits.

I envision taking the ultimate chat bot offline, selecting the sensory deprivation mode, and allowing it to solve all sorts of problems stored in its memory. In fact it might arrive at new conclusions, postulate new ideas, etc. This is thot bot.

I know that I’m delving far deeper than your standard chat bot, but I believe AI might benefit by following this path.

Regards,
Chuck

 

 

 
  [ # 3 ]

> Open Chatbot Standards for an Open Chatbot Framework
> http://www.meta-guide.com/home/open-chatbot-standards

Just today I’ve been motivated to begin collecting my recent thoughts on “Open Chatbot Standards for an Open Chatbot Framework” (above link).

I like the idea of a “language independent” framework; however, to me that does not necessarily mean “natural language”.  For example, if various chatbot language interpreters offered APIs, then in theory we could run natural language through them without having to bother with the minutia of actually programming in each various chatbot language.

Further, if everyone offered an API, then APIs could be layered and mashed-up into a customized infinity of variations.  However, in order to optimize mashing-up in this way an open (cloud) framework would be required.

On the subject of chatbots without natural language, so-called “thotbots”, it seems that would involve symbolic computation along the lines of Wolfram’s Mathematica (or MATLAB). But I could see APIs for this being mashed-up into an open chatbot framework.

Lastly, I’ve recently answered the following related question on Quora:

> Which language is best suited for communication with computer systems and artificial intelligence?
> http://www.quora.com/Which-language-is-best-suited-for-communication-with-computer-systems-and-artificial-intelligence

 

 
  [ # 4 ]
Marcus Endicott - Sep 13, 2011:

> [...] Lastly, I’ve recently answered the following related question on Quora:

> Which language is best suited for communication with computer systems and artificial intelligence?
> http://www.quora.com/Which-language-is-best-suited-for-communication-with-computer-systems-and-artificial-intelligence

The artificial AI Mind at http://www.scn.org/~mentifex/AiMind.html thinks in English, which IMHO wins the game not for being extremely logical but for being widely adopted by entire industries all around the world. With Sanskrit we would have the problem of creating a modern technological vocabulary. The JavaScript AiMind will eventually start thinking in German, but the “DeKi” (German artificial intelligence) will require the hand-coding of numerous German irregular nouns and verbs.

 

 
  [ # 5 ]

Hi Marcus,
I saw Quora’s response. Creating a bot was okay as long as it was a ‘benevolent bot’. Where’s the fun in that? =)  Regarding a language independent framework, consider this simple example:

  5,000 people sitting in a theater, each with a unique first language, all observe a large elephant walking onto the stage. At that moment, assuming everyone has some knowledge of elephants, there is in the brain some mechanism storing the concept of an elephant: visual and audio data, perhaps a smell (if you were sitting that close).

The concept of an elephant, in a thot bot’s structure, might be stored symbolically so as not to include a language dependent name.  The name could sit in that next layer where natural language is mapped to the concepts.

Hi Art,
I believe it is acceptable to use one’s natural language to name these stored concepts.  For example:

  The concept of an elephant might be named in the thot bot symbolically as: conceptElephant.

Of course, it is important to add some sort of mapping to natural language.

  If language is English then:  conceptElephant.commonName = “elephant”
  If language is Dutch then:    conceptElephant.commonName = “olifant”
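To make that layering concrete, here’s a minimal Python sketch of the idea (the class and method names are purely my own invention, not a settled design):

```python
class Concept:
    """A language-independent concept: an internal symbol plus a naming layer."""

    def __init__(self, symbol):
        self.symbol = symbol   # language-free identifier used internally by the thot bot
        self.names = {}        # natural-language layer: language code -> common name

    def set_name(self, lang, word):
        self.names[lang] = word

    def common_name(self, lang):
        # Fall back to the internal symbol if no name exists for that language.
        return self.names.get(lang, self.symbol)


conceptElephant = Concept("conceptElephant")
conceptElephant.set_name("en", "elephant")
conceptElephant.set_name("nl", "olifant")

print(conceptElephant.common_name("en"))  # elephant
print(conceptElephant.common_name("nl"))  # olifant
```

The thot bot reasons over `Concept` objects only; the natural-language names live entirely in the outer mapping layer.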

Regards,
Chuck

 

 
  [ # 6 ]

Hey Chuck, nice to see you around!

A “thotbot” like you describe would make a powerful tool for translation between natural languages.

What brain functions are responsible for how humans map their senses, experiences, and thought processes to words and grammar? Could a “thotbot” be programmed to learn how to map its database and logical processes to a natural language (or several?), given enough examples and interaction as a young child receives? What would such a program look like? (That is, the hard-coded program that allows the bot to learn natural languages in the first place.) Would building such a system provide insight into how the human mind works? Into how humans can map their thought processes symbolically in a way that animals do not?

And none of this touches on the architecture required to build such a “thotbot” in the first place. I wonder if we could even conceive of a language-independent architecture, considering how language shapes the way we see the world ourselves. After all, even Helen Keller was wired to learn language (as all humans are) and became a powerful writer/political activist in her lifetime. She was incapable of hearing and seeing, but symbolically processing the experiences she did have came as naturally to her as to any of us. She didn’t have to be taught to think in that symbolic way.

Heck, even Chuck’s example of how one would learn elephant seems to simply replace “elephant” with “conceptElephant”—one symbolic token for another. Why in the first place should we pour the categories of a pungent smell, rough dry skin, grey body, long narrow bits, wide cylindrical bits, large flat bits, big round bits, hairy bits, smooth bits, loud bits, quiet bits—all into one symbol? We just can’t help ourselves, can we?

Tho perhaps the most pressing question concerning thotbots is this: What does one gain by dropping one’s ugh‘s?? wink

 

 
  [ # 7 ]

CR,
Isn’t the word “bits” a wonderful word?  A thotbot definitely needs a symbol for bits. My concept of an elephant compared quite well with your description….however I’ve never palpated one so I cannot speak authoritatively about ‘smooth bits’....although now I admit my curiosity. =)

Helen Keller is an interesting story. What skews her life in understanding the language learning process is that she could hear and see until she was 19 months old. So, at 7, when she could spell ‘w-a-t-e-r’ for the first time, we are not sure how much the first year and a half of her life affected her ability to learn a language at that point. My guess is it was very important. Not to take away from her achievements, though.

I read that Helen learned 30 words on that very special day. As she acquired this new vocabulary via finger spelling, her mind must have associated those words with what she could ‘feel’, ‘smell’, and ‘taste’. Interestingly, without sight or sound her mind was able to develop to such an extraordinary degree that she led a very productive and meaningful life.

So, where do we start in designing a high-level view of a thotbot? I frequently walk at work at lunchtime and during short breaks. Today I reflected on this question while walking and came up with this as a starting point. It excludes details, so I hope the high-level view is clear.

I call that part of us that is self aware ME. I could use “I am” but that might be offensive to some religious folks….or I could say “I is” but that too might be offensive. =) Freud’s ‘id’ is a bit depressing.  So I’ll use ME.

As I walked, I noticed the color of the high trees in the foreground of a gorgeous sky speckled with small cumulus clouds. This reminded me of a trip my family had taken in the spring up to the mountains. Before I knew it I had walked about 1,000 feet around the plant. After the walk I found myself programming, and then waiting for several minutes while the code executed on large amounts of data. At this point I would look toward the window, enjoying the sunshine and bright colors. I heard voices but didn’t pay attention to what was being said. I thought of some groceries I had to buy after work. I also thought of one of my children. I made a note of a task I needed to perform at work the next day. These thoughts took only a few minutes, and then I was back to coding.

Therefore I postulate that the ME is connected to two things: interpreted sensory data and trains of thought.

* Interpreted sensory data - simply means that my brain has already converted sensory data into symbolic concepts such as buildings, trees, sky, people, etc.
* Trains of Thought (TOT) - means that there are literally several paths of thinking in our brain at the same time, and that the ME focuses its attention on them in a way that may seem like ‘bouncing around’.

When I sit in a comfy chair and begin reading a book, I may nod off…which is interesting because the ME continues down some TOT partially related to the last concept I had read…and then zipping off to some crazy end…and then I wake back up when the book drops from my arms, startling me.  Sleep seems to shut down ‘background’ sensory data from getting to the ME and reduces the ability of the ME to logically follow a TOT, resulting in dreaming crazy stuff.

The ME seems to shift focus between sensory data and various TOTs.  I imagine a stationary spoked bicycle wheel in which each spoke is a TOT or a sensory data input.  The ME is in the center and is inexplicably drawn to various spokes.
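Just to toy with the wheel metaphor in code, here’s a tiny Python sketch (the spoke names and salience weights are invented for illustration):

```python
import random

# Spokes of the wheel: sensory inputs and trains of thought (TOTs),
# each with a salience weight that draws the ME's attention.
spokes = {
    "vision": 0.5,
    "hearing": 0.2,
    "TOT: groceries": 0.15,
    "TOT: work task": 0.1,
    "TOT: family trip": 0.05,
}

def shift_focus(spokes):
    """Pick the ME's next focus at random, weighted by salience."""
    names = list(spokes)
    weights = [spokes[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# The ME bounces among spokes, drawn more often to the salient ones.
for _ in range(5):
    print("ME attends to:", shift_focus(spokes))
```

Of course, the real question is what sets those weights in the first place, which is exactly the “why does the ME focus” bit.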

Why does the ME focus or give attention?  That’s the next bit I’m thinking through.

My ME hurts….more later. =)

Regards,
Chuck

 

 

 
  [ # 8 ]

@Chuck: welcome back mate! Good to have you around again!

I’ve created links in your posting to David Levy’s profile and his book:
http://www.chatbots.org/book/robots_unlimited/
http://www.chatbots.org/expert/david_levy/486/

 

 
  [ # 9 ]

Hi Erwin,
Thanks for the links. I’ve managed to re-read a few chapters…especially on computers thinking.  I’ve found it to be very inspiring.

Update on Thotbot
============
I’ve been thinking quite a lot about various aspects. From my notes I can simply say that I’m writing a high-level specification of the various pieces required by a thotbot. I’m not focused at the moment on coding. Here are a few snippets of thoughts.

Time - Thotbot needs to think of time two ways.
The first is “bot time” and refers to what is happening now, plus or minus a few moments, as the current moment. All memories are classified as the past, and any ‘extrapolating’ based upon memory and current sensory input (or trains of thought) is the future.
The second is “context time”.  Thotbot needs to be able to speak of past events in the present such as describing a history lesson or telling a story.  Or thotbot may be chatting with someone who is relating a past event.

In addition to time, I’ve considered “bot location” and “context location”. The first is based upon thotbot’s understanding of where it is and the second is the location of the conversation topic (e.g. telling a story about another place).
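A minimal data sketch of those four pieces in Python (the field names are mine, not a settled design):

```python
from dataclasses import dataclass, field
import time

@dataclass
class ThotbotFrame:
    """Separates the bot's own here-and-now from the topic under discussion."""
    bot_time: float = field(default_factory=time.time)  # what is happening "now"
    bot_location: str = "unknown"                       # where the bot believes it is
    context_time: str = "present"                       # timeline of the current topic
    context_location: str = "here"                      # setting of the current topic

# Telling a story set in early China while sitting in the living room:
frame = ThotbotFrame(bot_location="living room",
                     context_time="early China (past)",
                     context_location="China")
print(frame.bot_location, "/", frame.context_location)
```

Keeping the two pairs in separate fields is what lets the bot ‘press pause’ on the story’s timeline and answer a question about the present.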

I’ve also spent time studying neural network concepts. I’ve used current models for working on decision making. I’ll share more once I’ve got something succinct to share.

Regards,
Chuck

 

 
  [ # 10 ]

A thot smile about “context time”. This seems to me very similar to giving your thotbot a “theory of mind”. That is, the realization that individuals in a story (whether the story is describing a past event, is fictional, or otherwise) have their own “bot time” (or rather, “self time”) that is changing as the events unfold. When a bot uses “context time”, it’s really mapping out what a witness to a story would experience and know as time progresses through events in the tale.

Thoughts (or thots)? Is this what you mean by “context time”?

 

 
  [ # 11 ]

Hi CR,

> A thot smile about “context time”. This seems to me very similar to giving your thotbot a “theory of mind”. That is, the realization that individuals in a story (whether the story is describing a past event, is fictional, or otherwise) have their own “bot time” (or rather, “self time”) that is changing as the events unfold. When a bot uses “context time”, it’s really mapping out what a witness to a story would experience and know as time progresses through events in the tale.
>
> Thoughts (or thots)? Is this what you mean by “context time”?

Yes that is what I mean. The other day I was watching a movie about a past event and my wife was periodically asking me questions and commenting on the film and its period of time in early China.  My daughter arrived and started a conversation with my wife about present matters.  I tried my best to bounce my thoughts from the film’s context time to current time…but it became a bit difficult. Had to press the old ‘pause’ button and focus on current events. =) The key point is I recognized the difference in time lines. Thotbot needs to do the same thing.

I’ll take a gander at ‘theory of mind’.

Regards,
Chuck

 

 
  [ # 12 ]

I’ve been thinking about bots too, especially ones that are isolated “thought machines.” In fact, I was wondering about the capabilities of a computer if you took away the need to deal with language. It could still solve many problems and deal with concepts. Although “mechanical”, automated theorem provers do deal with abstract concepts. You don’t need a language front-end to reason about concepts.

My question, however:  could it reach a higher level of abstraction than a human?

 

 
  [ # 13 ]

Hi Toby,

Toby Graves - Oct 9, 2011:

> I’ve been thinking about bots too, especially ones that are isolated “thought machines.” In fact, I was wondering about the capabilities of a computer if you took away the need to deal with language. It could still solve many problems and deal with concepts. Although “mechanical”, automated theorem provers do deal with abstract concepts. You don’t need a language front-end to reason about concepts.
>
> My question, however: could it reach a higher level of abstraction than a human?

It occurred to me that our obsession with building a prize-winning chatbot might prevent us from building a more authentically behaving bot. So, stripping away NLP allows us to consider this idea, as you have described. I can’t answer your question, but it occurs to me that a thotbot should be able to abstract to some degree.

*Update*
I was reading about Genetic Algorithms this morning and an idea came to mind.  If a program knows variable names, variable values, and the answer….it should be able to derive an equation that describes the relationship of these variables.  I think of it as pattern matching numerical values.  So I wrote this Excel app to test it out. I’m including the Excel file (zipped)...it should run in 2003 and higher.  Here’s the link.

games.chuckbolin.com/ai/EquationFinder_P3.zip

Here’s a screenshot.

Ranges D4:F4 are the answers to three different test problems.
Ranges D5:F17 are the values of the variables.
Ranges C5:C17 are the variable names (single character only).
Ranges H4:H17 are the allowable math operators (single character only).

After clicking the button the program simply randomly constructs equations and tests the answers. The derived equations are displayed in J5:Jn.

Since the equation is derived, and then values simply substituted you get some ‘false’ reads.
However, looking at all three test results will reveal accurate equations.

I want to modify the code to allow for math operations such as ‘sin(’, ‘sqrt(’, etc.  I’ve got some equations such as Lorentz Transform which would be really cool to derive. =) 

Three areas for improvement:
1. Comparisons between the calculated answer and the actual answer shouldn’t have to be exact (e.g. 3.14 should be treated as equal to 3.14159 within a tolerance).
2. Allow for more complex operations.
3. Eliminate redundant formulas.
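For anyone without Excel, roughly the same idea can be sketched in Python. This is my own reconstruction of the approach, not Chuck’s VBA: it evaluates strictly left to right (no operator precedence, to keep it simple) and already allows a tolerance as per item 1.

```python
import operator
import random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}

def random_equation(variables, n_terms=3):
    """Randomly construct an expression string such as 'a*b+c'."""
    tokens = [random.choice(variables)]
    for _ in range(n_terms - 1):
        tokens.append(random.choice(list(OPS)))
        tokens.append(random.choice(variables))
    return "".join(tokens)

def evaluate(equation, values):
    """Evaluate strictly left to right (no operator precedence)."""
    tokens = list(equation)
    result = values[tokens[0]]
    for i in range(1, len(tokens) - 1, 2):
        result = OPS[tokens[i]](result, values[tokens[i + 1]])
    return result

def find_equations(cases, variables, tries=20000, tol=1e-6):
    """cases: list of (values_dict, answer). Keep equations that fit every case."""
    found = set()
    for _ in range(tries):
        eq = random_equation(variables)
        try:
            if all(abs(evaluate(eq, vals) - ans) < tol for vals, ans in cases):
                found.add(eq)
        except ZeroDivisionError:
            pass  # discard equations that divide by zero on these inputs
    return found

# Three test cases whose hidden relationship is: answer = a * b + c
cases = [
    ({"a": 2, "b": 3, "c": 4}, 10),
    ({"a": 1, "b": 5, "c": 2}, 7),
    ({"a": 3, "b": 3, "c": 1}, 10),
]
derived = find_equations(cases, ["a", "b", "c"])
print(sorted(derived))
```

With enough random tries the multiplicative relationship turns up; ‘false’ reads that happen to fit one case get weeded out by requiring all three.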

http://games.chuckbolin.com/ai/equationfinder.jpg

Regards,
Chuck

 

 
  [ # 14 ]

There was a story a few months back on a program called Eureqa, developed at Cornell University. They have been feeding data sets from experiments into it, and it does in fact use genetic programming methods to find equations.

That’s cool what you did.

I’m interested (also) in reasoning structures, i.e. using existing truths and having a computer “crunch” them to find all logical conclusions that can be derived.  Perhaps through combinatorics or genetic methods.
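That “crunching” of existing truths to find all derivable conclusions sounds a lot like forward chaining. A toy sketch in Python (the facts and rules are invented for illustration):

```python
def forward_chain(facts, rules):
    """Apply rules (premises -> conclusion) repeatedly until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    ({"Socrates is a man"}, "Socrates is mortal"),
    ({"Socrates is mortal", "all mortals die"}, "Socrates dies"),
]
derived = forward_chain({"Socrates is a man", "all mortals die"}, rules)
print(sorted(derived))
```

The combinatorial blow-up you mention is exactly what makes this hard at scale; genetic methods would be one way to search the rule space selectively instead of exhaustively.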

 

 
  [ # 15 ]

> I was reading about Genetic Algorithms this morning and an idea came to mind.  If a program knows variable names, variable values, and the answer….it should be able to derive an equation that describes the relationship of these variables.

Neat project, Chuck. I wonder how one could systematically approach the problem rather than randomly trying combinations. Simply from the values of the variables vs the final solution, a few operation combinations could probably be dismissed as unlikely, although not definitely.

When people attempt to develop equations to agree with observations, they use peripheral information about the nature of the problem at hand to guide their efforts. Without that context, one is left poking in the dark. For small problems, this isn’t such a big deal, but as the nature of the problem grows more complex, I imagine this would blow up rather quickly.

I’m reminded of Foldit, that protein folding program that got turned into a game. Players began with a few high-ranked configurations developed by throwing computing power at the problem. But the players were able to move past these “local minima” configurations and further optimize the structure. Their results were later supported by X-ray crystallography.

 
