

Introduction
 
 

The last few days I’ve been reading up on the various discussions here on the site and today I signed up as a member. I’ve already seen many interesting topics being discussed here so I’m getting ready to dive in and start participating in those discussions.

My interest in AI comes from several other disciplines: system engineering, knowledge management, psychology and some other areas that interest me. I have experimented with AIML for about a year now to get an idea of what I want to do (and don’t want to do).

I’m working on a model to describe the ‘reality’ for an AI-mind. Later I will develop a conversational engine (parser, etc.) on top of that. However, I’m convinced that you can only create a convincing conversational system if it is based on some sort of ‘reasoning engine’ that can actually ‘steer’ the conversation. So my belief is that we need something like, or close to, strong AI for this.

Well, I think this is enough for a short introduction. I will add more information to my profile soon, and I’ll also start a discussion topic on my own ideas and research. I’ve already seen some ideas here that are pretty close to my own, and I think some board members will be eager to participate in my topic (at least I hope so).

 

 
  [ # 1 ]

Hello Hans, and welcome!!!  We are always pleased to have a new member join!  I agree that reasoning should be a centerpiece of a bot, but (and I may be in the minority here) I also think that language is very, very important.  To master language, learning (or being directly programmed with) grammatical structure is essential.  My own project is called CLUES, for Complex Language Understanding Execution System.  My first bot (or first “personality”) created to use/run on it is GRACE (General Reasoning Artificial Conversational Entity).  If you’re interested, I just posted a new thread under “My Chatbot project”.

Several others on this site have some great bot projects on the go; they will probably share their ideas and progress so far.  Again, welcome!

 

 
  [ # 2 ]

Hi Victor, thanks for the welcome.

When I mentioned the interesting topics I’ve read so far, I was actually referring in large part to your work. Most of your vision ‘syncs’ in large part with my own beliefs, and I’m looking forward to discussing things with you. The difference (as far as I can see now) between your work and my own research is that you use a quite complex parsing system to build ‘knowledge’ from your database, while I’m looking to create a simplified model for describing ‘knowledge’, so that I can use a much simpler parsing system. My research into building such a simplified model results largely from my work building real-world knowledge management systems, with a strong focus on how users access the knowledge stored in such a system. Just to name some concepts that come into play here: reference mining, cognitive data navigation and analogous processes.

 

 
  [ # 3 ]

Yes, I am putting extreme focus on the understanding functionality.  I want a bot that can comprehend complex natural language constructs, and be able to differentiate all the subtleties.  I want full understanding.  For example, I do NOT want to see:

What’s your name?
Bob and what’s yours?
Hello, Bob and what’s yours? !

which is what I see when I test a lot of bots.

I was forced to put huge amounts of focus on the understanding part of the bot because of the research I did into language; it is *FAR* more complex than I think many bot developers realize.  I look forward to hearing more on your ideas.

 

 
  [ # 4 ]

@Hans Peter: A warm welcome on my behalf. Especially welcome because you’re one of the first members from the Netherlands (in good Dutch: I bid you a hearty welcome). Enjoy all the discussions here, and especially Victor’s CLUES postings. See you around!

 

 
  [ # 5 ]

@Erwin: thanks for the welcome. If you’re ever around Zeeland (Terneuzen), be sure to drop by for a coffee. ;)

Victor Shulist - Jan 27, 2011:

Yes, I am putting extreme focus on the understanding functionality.  I want a bot that can comprehend complex natural language constructs, and be able to differentiate all the subtleties.  I want full understanding.

This is one thing where I totally agree with you: the bot needs ‘understanding’. However, I might not (completely) agree with your implementation of this ‘understanding’. When I look at human communication, only a few people have real understanding of language constructs, yet most humans communicate fine without that knowledge. I also think we are too hung up on proper grammar, syntax, etc. in AI, while research shows that humans DO use fuzzy pattern matching for language; a few letters in a sequence that approximates the ‘real’ word are enough for a human to understand which word was meant. Because of this, I think it is possible to ‘fake’ this effect by using ‘synonym lists’. For me, ‘synonyms’ are one of the most important parts of an expert system that describes ‘reality’ and/or ‘perception’ (this also goes for knowledge management systems used by humans).
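To sketch what I mean by matching on synonym lists (a purely illustrative toy; the concept names and word lists here are made up, not part of my actual model), input words could be collapsed onto canonical concepts before any deeper processing:

```python
# Toy synonym lists mapping a canonical concept to its surface forms.
SYNONYMS = {
    "car": {"car", "auto", "automobile", "vehicle"},
    "happy": {"happy", "glad", "pleased", "content"},
}

def canonicalize(word):
    """Collapse a word onto its canonical concept, if any list contains it."""
    w = word.lower()
    for concept, surface_forms in SYNONYMS.items():
        if w in surface_forms:
            return concept
    return w  # unknown words pass through unchanged

def normalize(sentence):
    """Normalize a whole utterance into a sequence of canonical concepts."""
    return [canonicalize(token) for token in sentence.split()]
```

With this, “I am glad” and “I am pleased” normalize to the same concept sequence, so one response rule covers both wordings.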

 

 
  [ # 6 ]

I agree with you on the importance of synonyms.  In my engine, no word is absolute.  What I mean is that when I write a grammar rule, I never use a literal word; I always use the information that the word must have, not the word itself.  This means that if another word has the same associated information, it is interchangeable, thus it is effectively a synonym.
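As a rough sketch of that idea (hypothetical names and features of my own invention, not the actual CLUES internals): a rule slot lists the required word features, and any word carrying those features satisfies the slot:

```python
# A toy lexicon: each word carries features instead of being matched literally.
LEXICON = {
    "dog":   {"pos": "noun", "animate": True},
    "cat":   {"pos": "noun", "animate": True},
    "runs":  {"pos": "verb", "intransitive": True},
    "walks": {"pos": "verb", "intransitive": True},
}

# One rule: <animate noun> followed by <intransitive verb>.
RULE = [{"pos": "noun", "animate": True},
        {"pos": "verb", "intransitive": True}]

def satisfies(word, required):
    """A word fills a rule slot if it carries every required feature."""
    features = LEXICON.get(word, {})
    return all(features.get(k) == v for k, v in required.items())

def rule_matches(sentence):
    """Check a sentence against the rule, slot by slot."""
    words = sentence.split()
    return len(words) == len(RULE) and all(
        satisfies(w, slot) for w, slot in zip(words, RULE))
```

Because “dog” and “cat” carry the same features, either one satisfies the rule; the rule itself never names a literal word.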

I partially agree and partially disagree with your comments on fuzzy language processing.  I think that the human brain’s ability to perform this operation lures us into the fallacy that proper language generation and processing is unimportant.  In other words, I believe that one should start with proper grammar and proper spelling in a bot, and then “add on” layers of fuzziness.  For example, in a spell-check algorithm, you must know the proper spellings of the words that a given misspelled word could map to.  So too must you know proper grammar, so that when bad grammar is detected, you can calculate the closest approximation.  Thus your bot should first work with proper language, then slowly add increasingly “fuzzy” functionalities afterwards.  One such fuzzy functionality is bad grammar: teaching it to detect poorly constructed sentences like “I ain’t seen nothing” (“ain’t”, of course, isn’t a word) or “don’t do nothing”.  The latter is a double negative that literally makes no sense (“don’t do nothing” = “do something”), yet when people hear it they know the speaker intended them to NOT do anything.
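A spell-check layer of the kind described might start as simply as this sketch (using Python’s difflib as a stand-in for “a good algorithm”; the dictionary here is a made-up fragment):

```python
import difflib

# The bot must already know the proper spellings it can map to.
DICTIONARY = ["nothing", "grammar", "sentence", "understanding"]

def correct(word):
    """Map a (possibly misspelled) word to its closest proper spelling."""
    hits = difflib.get_close_matches(word.lower(), DICTIONARY, n=1, cutoff=0.6)
    return hits[0] if hits else word  # leave unrecognized words untouched
```

Here `correct("grammer")` yields `"grammar"`; without the list of proper spellings there would be nothing to approximate toward, which is exactly the point.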

So, bad grammar *IS* grammar nonetheless!  For my bot, I have two directories: “proper-grammar”, which contains all the files that teach it proper grammar, and “slang-grammar”. 

So, some “extra” functionalities that will be added to my bot (once I get it working with all proper grammar and proper spelling) would be….

1. bad grammar - yes, there are rules that translate word sequences to meaning (“don’t do nothing”... as stupid as this is, means “don’t do anything”)

2. misspelled words (either by a list of words that are commonly misspelled, and a mapping to their proper spelling, OR a good algorithm)

3. Emphasis .. example.. “yeeeesssssssssss!” - means “yes” with the added meaning of emphasis


etc, etc
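Items 1 and 3 could start out as simply as the following sketch (an illustrative rewrite table and regex, not my actual implementation):

```python
import re

# Item 1: a tiny "slang-grammar" rewrite table mapping word sequences to meaning.
SLANG_REWRITES = {
    "don't do nothing": "don't do anything",
    "ain't seen": "haven't seen",
}

def normalize_slang(text):
    """Rewrite known slang / bad-grammar sequences into proper equivalents."""
    for slang, proper in SLANG_REWRITES.items():
        text = text.replace(slang, proper)
    return text

# Item 3: collapse letter runs of three or more ("yeeeesssss" -> "yes").
def collapse_emphasis(word):
    """Return the collapsed word plus a flag marking that emphasis was meant."""
    collapsed = re.sub(r"(.)\1{2,}", r"\1", word)
    return collapsed, collapsed != word
```

So “yeeeesssssssssss” reduces to “yes” while the returned flag preserves the added meaning of emphasis.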

But I think it is impossible, or EXTREMELY difficult, to code a bot with all this functionality from the start.

Also, I highly disagree that you can “cheat” your way around all this and write it off as unimportant; your bot wouldn’t grow much past “Eliza”-type capabilities!

 

 
  [ # 7 ]

Hey, didn’t I say no longer to use the word ‘synonym’ on this forum??? ;)
Just joking. :D

Vic knows why. Hans Peter, discover for yourself why this topic is such a loaded one here:
http://www.chatbots.org/ai_zone/viewthread/346/

 

 
  [ # 8 ]

Oh yes!  I’m aware; we’re all *painfully* aware!!  We’re at about 100 by now.  Thus my bot will learn all of those synonyms for “chatbot”!

 

 
  [ # 9 ]

Hey, Grace, are you an embodied conversational agent?

Yes Victor, I am, some call me chatbot, others virtual assistant, but you can call me Grace. Just Grace.

What are you Victor, are you human, or do you have different classifications as well?

:D

 

 
  [ # 10 ]

Erwin,

That’s amusing. I guess humans would be NCEs rather than ACEs (natural conversational entities rather than artificial ones), but I guess embodied conversational agent would fit also! 

Or maybe humans are NECAs: natural embodied conversational agents. Damn, “AECA” doesn’t make a pronounceable word.  Maybe humans are CECAs, carbon embodied conversational agents, and computers SECAs, silicon embodied conversational agents… OK, I’ll stop now!

 

 
  [ # 11 ]

Victor, thanks again for your elaborate reply.

Victor Shulist - Jan 28, 2011:

I partially agree and partially disagree with your comments on fuzzy language processing.  I think that the human brain’s ability to perform this operation lures us into the fallacy that proper language generation and processing is unimportant.

I agree that ‘proper language generation’ is important, however…

Victor Shulist - Jan 28, 2011:

In other words, I believe that one should start with proper grammar in a bot, proper spelling, and then “add on” layers of fuzziness.

... in ‘humans’ it DOES work the other way around; we first learn ‘meaning’ and only later do we learn to put this ‘meaning’ into ‘proper grammar’.

But it is very much possible that I’ll end up exactly where you are, I just have to see where my research will take me on this wink

Victor Shulist - Jan 28, 2011:

But I think it is impossible, or EXTREMELY difficult, to code a bot with all this functionality from the start.

Indeed, one of the points where I concur completely with you is that to achieve ‘strong AI’ we need a bot that will ‘learn its ways’ instead of only having algorithms programmed into it.

Victor Shulist - Jan 28, 2011:

Also, I highly disagree that you can “cheat” your way around all this, and write it off as unimportant; your bot wouldn’t grow much past “Eliza” type capabilities !

I agree with your disagreement. :) By ‘faking’ I didn’t mean ‘cheating’ (that distinction will be a nice test case for a bot ;) ); I mean building an algorithm that does not rely on deep parsing of grammar trees.

As far as I see it, I want to ‘put meaning into words’ whereas you are trying to ‘give meaning to words’. You use a grammar model to describe ‘meaning’; I’m trying to find another (hopefully better) approach, where grammar is used only to ‘talk about knowledge’ but not to ‘describe knowledge’. I’m pretty sure that humans use language for conversation about ‘knowledge’ but NOT for storing knowledge in our brains. In the brain, knowledge and language are separate things. You can have ‘knowledge of language’ (pretty handy when you want to have a conversation) but you don’t have a ‘language of knowledge’.

The problem we are facing here is that when constructing an AI we have to be able to put information (knowledge) into it in some way, and this is in many ways a ‘conversation’. Hence we tend to fall back on ‘language’, not only for the conversation but also as the model for storing this ‘knowledge’ in the AI mind. I think that is not the way to go. I think we need a storage algorithm that is much closer to how the human brain stores information, which is, in my opinion, at a much more abstract level. So my current research is largely aimed at ‘abstracting’ knowledge that is ‘described in words’ into a model that can be parsed WITHOUT using language constructs to parse it.
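As a toy illustration of what such language-free storage could look like (a sketch for this post only, not the actual model; the relations and facts are made up), knowledge could live as bare concept triples that are queried directly, with words entering only when the result is verbalized:

```python
# Knowledge as bare concept triples: (subject, relation, object).
FACTS = {
    ("bird", "can", "fly"),
    ("penguin", "is_a", "bird"),
    ("penguin", "cannot", "fly"),
}

def query(subject, relation):
    """Fetch related concepts with set lookups; no grammar is parsed."""
    return {o for s, r, o in FACTS if s == subject and r == relation}
```

Here `query("penguin", "is_a")` returns `{"bird"}` without any sentence ever being parsed; language would only come into play when turning that answer into a reply.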

At this point in my research I’m not even sure that this is possible to do, but it’s a challenge smile

 

 
  [ # 12 ]

So my current research is largely aimed at ‘abstracting knowledge’ that is ‘described in words’ into a model

Perhaps this article about the representation of abstract and concrete knowledge might help you along the way.

 

 
  [ # 13 ]

Great thread! Welcome to the forum, Hans! (Or do you prefer Hans Peter?)

Just a few comments…

I agree with Victor that in order to understand fuzzy grammar input, the bot must know from what proper rule the input is deviating. The *reason* I think this is important is that chatbots, unlike people, are generally disembodied. (How’s that for a synonym, Erwin? DCA: disembodied conversational agent!) This means that our intuitive understanding of how the world works (what an object is, how it can perform actions and have actions performed upon it) cannot be developed by the bot by interaction with the world. They cannot store information about the world through sensory means. Thus there is no other type of non-language knowledge base for the language knowledge base to build from. People can do fuzzy matching because they can refer to this non-language knowledge base of experience and infer what the proper language representation should be. Bots only have the proper language knowledge base to draw from.

Lately I’ve experienced just how much of our communication is built upon a common, non-language-based understanding of the physical world. I recently moved to Germany and Ich kann nicht Deutsch sprechen (“I cannot speak German”). (<—An important sentence to know, for those to whom it applies.) But I can still pull a *lot* from conversations based on context (who I’m with, where we are, tone of voice, and gestures).

As for the following:

“In our brain, knowledge and language are separate things. You can have ‘knowledge of language’ (pretty handy when you want to have a conversation) but you don’t have a ‘language of knowledge’.”

I strongly disagree. Our way of understanding the world is very much colored by the language we speak. A few examples off the top of my head:

1) The way numbers are understood by isolated cultures that only have names for numbers up to three. Their ability to recognize whether a pile of, say, 8 objects is more or less than a pile of 12 is actually affected because both piles are simply “many”.
2) The intelligence and development of so-called feral children.
3) The likelihood of a culture to actually blame others for an incident has been found to be directly linked to the way the sentence describing the incident is formed in their language.
4) There are groups of people who have no words for relative position. The position of all objects is described via cardinal directions (as in, my computer screen is east of me, not in front of me). This directly influences their sense of orientation and how they think about physical location.

Here’s an interesting NY Times article on the subject: http://www.nytimes.com/2010/08/29/magazine/29language-t.html

 

 
  [ # 14 ]

Lately I’ve experienced just how much of our communication is built upon a common, non-language-based, understanding of the physical world. I recently moved to Germany and Ich kann nicht Deutsch sprechen. (<—An important sentence to know, for those to whom it applies.) But I can still pull a *lot* from conversations based on context (who I’m with, where we are, tone of voice, and gestures).

Very true, Nova. My little dog can’t speak (duh), but she definitely knows how to tell me things. And vice versa: I don’t think she understands too many words, but face, place and tone she does understand. Bots of course can’t do this, which limits them greatly.

As for the following:

“In our brain, knowledge and language are separate things. You can have ‘knowledge of language’ (pretty handy when you want to have a conversation) but you don’t have a ‘language of knowledge’.”

I strongly disagree. Our way of understanding the world is very much colored by the language we speak. A few examples off the top of my head:

(I initially misread your post; my response was: “Not really agreed with you on this one. Perhaps language is just a physical representation of knowledge, and not something separate.” So basically the same as what you are saying.)

The likelihood of a culture to actually blame others for an incident has been found to be directly linked to the way the sentence describing the incident is formed in their language.

Very interesting, I didn’t know this, but actually makes sense.

There are groups of people who have no words for relative position. The position of all objects is described via cardinal directions (as in, my computer screen is east of me, not in front of me). This directly influences their sense of orientation and how they think about physical location.

OK, this might explain a few things. In my family, orientation is a problem, including putting it into words. Do you perhaps know of tests for this?

 

 
  [ # 15 ]
Hans Peter Willems - Jan 28, 2011:

... in ‘humans’ it DOES work the other way around; we first learn ‘meaning’ and only later on we learn to put this ‘meaning’ into ‘proper grammar’.

My approach is very pragmatic.  I’m actually not attempting to follow or reproduce the way the human mind works (one obvious reason being that I’m not much into neurology).  My design is very much focused on the fact that a computer is a machine, a digital machine, and not a living biological organism with the same senses as humans.

That being said, I am focusing on the strengths a computer has, and using those to tackle the processing of language.  This is very similar to the way computers perform math, chess, etc.  No attempt was made to reproduce the way humans do these tasks in a computer, and yet the computer can do them better… VERY much better.

I stay very much away from any kind of philosophical discussion in my research.  Sorry, I wish I could write more… I will later on this evening. :)

 
