

Harumi
 
 
  [ # 136 ]

New Harumi released!
Features:
Programmed in C#; the database is now structured like a neural network, so it is very fast to launch and answer. Dynamic learning, context comprehension, and the possibility to "see" her thinking. More to come later (like an English version!)

 

 
  [ # 137 ]

New version released today:
Still in French, but:
- better AI algorithm (searches for the best available answer); note that it is slower in certain cases
- possibility to record new sentences linked to the current subject
- new interactions with (or without) emoticons
- calculator function: + - * / and ( ); just add an "=" sign at the end of your calculation so Harumi knows it's an operation (see the sketch below)
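
A simplified sketch of the idea (not the real code; the names are made up for illustration). The trailing "=" marks the input as an operation, and the standard .NET DataTable.Compute call evaluates + - * / with parentheses:

    using System;
    using System.Data;

    class CalculatorDemo
    {
        // Returns true when the input ends with "=" and parses as an arithmetic expression.
        static bool TryCalculate(string input, out double result)
        {
            result = 0;
            input = input.Trim();
            if (!input.EndsWith("=")) return false;   // the "=" tells us it's an operation
            try
            {
                result = Convert.ToDouble(new DataTable().Compute(input.TrimEnd('='), null));
                return true;
            }
            catch (Exception) { return false; }       // not a valid expression after all
        }

        static void Main()
        {
            if (TryCalculate("3*(2+4)/2=", out double r))
                Console.WriteLine(r);                 // prints 9
        }
    }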

 

 
  [ # 138 ]

Better AI algorithm coming this weekend.
Then I plan to translate Harumi 3 into English.

 

 
  [ # 139 ]

English version online!
French version was also updated.
I strongly recommend that you carefully read the tutorial available on the site:
http://jeffelson.e-monsite.com/

 

 
  [ # 140 ]

The context comprehension sounds impressive.
Question though: At the user input “and flowers?” below, why doesn’t Harumi combine “flowers” with the most recent question about “color”? Do you use a neural net for this?

USER : what’s snow season?
HARUMI : snow is in winter
USER : what color?
HARUMI : white color, snow is white
USER : and flowers?
HARUMI : flowers come in spring.
USER : what should I offer when I’m in love?
HARUMI : flowers of love are roses.
USER :  (click on boring icon to change subject)
USER : where I love travelling?
HARUMI : you love travelling all over the planet, especially in france.
USER : what is the capital?
HARUMI : paris is the capital of france.
USER :  (click on boring icon to change subject)
USER : do you have jack’s phone?
HARUMI : jack’s phone is 54321
USER : and the guy of china?
HARUMI : bruce lee’s phone is 12345

 

 
  [ # 141 ]

Don Patrick: "At the user input "and flowers?" below, why doesn't Harumi combine "flowers" with the most recent question about "color"?"
Simply because no sentence matches the combination(!).

The context comprehension is indeed powerful compared to Harumi 2, which was only an evolution of a stimulus -> answer database structure.
Here every sentence is cut into words, and each word exists as a "neuron", meaning an object sensitive to user and database input and able to send a message to every object linked to it. The more words are linked to a word sending a signal, the less each of them is activated. Each word is defined by a list of links, an activation level (a float between 0 and 1), and the list of every recorded sentence containing that word.
At each question asked by the user, a Levenshtein algorithm auto-corrects the user's words against the database; each word of the user input then triggers a "1" activation when it is available in the database. Then each word with a "1" activation sends a signal to every word linked to it.
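
For those who don't know it, here is the classic Levenshtein edit-distance function in C# (a textbook version, not my exact code); the smaller the distance, the closer the user's word is to a database word:

    using System;

    static class Spelling
    {
        public static int Levenshtein(string a, string b)
        {
            var d = new int[a.Length + 1, b.Length + 1];
            for (int i = 0; i <= a.Length; i++) d[i, 0] = i;   // deletions only
            for (int j = 0; j <= b.Length; j++) d[0, j] = j;   // insertions only
            for (int i = 1; i <= a.Length; i++)
                for (int j = 1; j <= b.Length; j++)
                {
                    int cost = (a[i - 1] == b[j - 1]) ? 0 : 1;
                    d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1,     // deletion
                                                d[i, j - 1] + 1),    // insertion
                                       d[i - 1, j - 1] + cost);      // substitution
                }
            return d[a.Length, b.Length];
        }
    }

The user's word is replaced by the vocabulary word with the smallest distance, and that word's neuron is then set to activation 1.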
Then another algorithm picks up the most highly activated words (excluding the "1"s, to avoid an answer that merely echoes the question) and builds a list of interesting sentences. This algorithm does a bit more than that, but I prefer to keep it secret.
The last algorithm is… totally secret. Its job is to find the best sentence available in the list.
Now for context, it's very easy to code with this kind of structure: simply divide the value of each neuron by 10 at each new sentence entered! This way, at each new question, the "ghost" of the discussion stays "in the air". For example, I remember a discussion with Harumi of more than 10 sentences. At the beginning I spoke about the city of Avignon. Then about something else. Later, I asked Harumi for my parents' phone number. Harumi gave me the Avignon number… which impressed me, because my parents have another house and another phone in another city. Harumi "understood" the context long after the fact in this case!
By the way, when you click on the "bored" icon, you simply reset each neuron to 0 to delete the context. Think of the flash in "Men in Black" ;p.
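
To give an idea of the structure (a simplified sketch with invented names, not my real classes): each word-neuron carries its links, its activation level and its sentences, and the context "fades" or is wiped like this:

    using System.Collections.Generic;

    class WordNeuron
    {
        public double Activation;                              // float between 0 and 1
        public List<WordNeuron> Links = new List<WordNeuron>();
        public List<string> Sentences = new List<string>();    // every sentence containing the word
    }

    class Brain
    {
        public Dictionary<string, WordNeuron> Neurons = new Dictionary<string, WordNeuron>();

        // Called at each new sentence: the "ghost" of the discussion stays in the air.
        public void DecayContext()
        {
            foreach (var n in Neurons.Values) n.Activation /= 10.0;
        }

        // The "bored" icon: wipe the context completely (the Men in Black flash).
        public void ResetContext()
        {
            foreach (var n in Neurons.Values) n.Activation = 0.0;
        }
    }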

Now for the future :
- The English version needs translation updates (it's really boring work, but the English community is far more active than the French one, so I absolutely have to work on this)
- More interesting. Sorry, I mean far more interesting: I want to implement recursive propagation of the signal to increase the depth of comprehension and context sensitivity. Nevertheless, after several pre-tests, I'm afraid it would be very CPU intensive…
- Later, I'm also thinking about auto-building sentences by making the database structure a directed graph (which it currently is not), but I'm not at all sure this is possible.
- Adding time context by giving each word/neuron an activation timestamp
- Auto-deleting links when they are rarely used: I know it looks curious to create a brain which auto-deletes its memory, but the redundancy of sentences in Harumi 3 makes it possible to delete data without really losing memory (think data compression). Only the access to old memories would be harder, since fewer links would trigger them… Hey, just like the name of that old friend from your childhood, when you need to think for an hour before finding a "way" to access the data!

 

 
  [ # 142 ]

Oh, I see. It can only say the sentences that you taught it earlier. Thank you for your very interesting explanation. I never understood neural nets but this I can understand.

So the process goes something like this:
input: “what’s snow season?”
season <-connection-> winter
snow <-connection-> winter
output with the most connections between words: “snow is in winter”

That means you don’t have to script and link every possible user input yourself, as the program links the user’s words automatically. I use a more grammar-based context system that remembers sentences and their structure, but your system is less predictable, which is good for conversation.
The ghost memory of previous sentences sounds similar to a computer vision technique where older video frames slowly fade away.

Can you use Google Translate or something to automatically translate Harumi’s files?

 

 
  [ # 143 ]

Don Patrick: So the process goes something like this:
input: “what’s snow season?”
season <-connection-> winter
snow <-connection-> winter

To be more precise:
"season" is the stimulus, so the neuron "season" is excited to level 1.
Then this neuron sends a signal to every neuron linked to it.
Here, if you have in the database:
winter is a season
summer is a season
spring is a season
autumn is a season

you have 4 links, so each word at the end of a link receives +0.25 on top of its previous level. Note that small words like "is" or "a" are deleted beforehand. At first those words were included, but I rapidly saw that the database grows too large too quickly, while those words are pretty useless and even complicated for Harumi to "understand": she would need to see hundreds of "is" in recorded sentences before "getting" that those words are (almost) useless.
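
In code, the fan-out looks roughly like this (a method one could add to the Brain sketch from my earlier post; simplified, not the real algorithm):

    using System;

    static void SendSignal(WordNeuron source)
    {
        if (source.Links.Count == 0) return;
        double share = source.Activation / source.Links.Count;  // 1.0 / 4 = 0.25 in the example
        foreach (var target in source.Links)
            target.Activation = Math.Min(1.0, target.Activation + share);  // capped at 1
    }

So "season", excited to 1 and linked to winter, summer, spring and autumn, gives each of the four a +0.25.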

It's impossible for an AI to understand meaning/context without using a dynamic tree-based structure carrying an analog signal.
Binary-based chatbots are unable to understand anything. Think "if (a == true && b == true …) { writeln(good_answer); }", with good_answer actually often bad (if not always) as soon as the database reaches several dozen sentences.

About grammar structure, I never considered this option because human intelligence doesn't work this way. Here are some examples where grammar doesn't exist but meaning does:
- It's perfectly possible to explain something to a child using only a few words… and that's what parents do with a little child:
"Parent: it's a CAT!"
"2-year-old child: CAT"
The child gets that "it's a" is useless at his level and, more interestingly, the parents insist on the word "cat" because even they, who know a lot about grammar, get that "it's a" is not necessary.

- It also reminds me of when I travelled to another country to see family (Poland):
Me: ME JEFF!
Guy: SUAVEK!
Me: HELLO SUAVEK
Guy: DZIEN DOBRY JEFF ("dzień dobry" is Polish for "hello/good day")
Me: DZIEN DOBRY? HELLO?
Guy: TAK! ("tak" = "yes")
Me: DZIEN DOBRY SUAVEK?
Guy: TAK!

- Think about Yoda speaking to Luke on Dagobah!
His sentences are turned upside down (which sounds pretty funny), but everybody gets the meaning of each sentence, while a chatbot with a fixed grammar-structure comprehension would be lost, mismatching subject and complement because they are in the "wrong" place.

Grammar is useful when you want to reach the "abstraction" level. For example:
"He would like to have a cat" doesn't mean "He would like to have this cat"
The abstraction here is that "a" is virtual while "this" is real.
It means that we humans are able to understand because we have a sensory perception of reality, while the "world" for a chatbot is just made of strings of characters and numbers… until we implement visual, sound and touch inputs, meaning a body for the "brain".

 

 
  [ # 144 ]

:) I think there is something to be said for both approaches. Perhaps "grammar" isn't the right word, but yes, I work at what you call the abstraction level, so it is useful to determine precisely which cat someone is talking about. Thank you for explaining so much, though. I can't really use word association techniques in my knowledge database, but they may also be useful for quickly finding the topic of conversation. For example, when someone mentions "eat", "baker" and "egg", it would take very little time to find that all three words have a link with the word "food".
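
Something like this, I imagine (hypothetical structures, just to show the idea):

    using System.Collections.Generic;
    using System.Linq;

    static class TopicFinder
    {
        public static string FindTopic(IEnumerable<string> mentioned,
                                       Dictionary<string, List<string>> links)
        {
            // Count how often each linked word is reached from the mentioned words.
            var votes = new Dictionary<string, int>();
            foreach (var word in mentioned)
                if (links.TryGetValue(word, out var neighbours))
                    foreach (var n in neighbours)
                        votes[n] = votes.TryGetValue(n, out var v) ? v + 1 : 1;
            // The most-shared neighbour is the likely topic: here "food".
            return votes.Count == 0 ? null
                 : votes.OrderByDescending(kv => kv.Value).First().Key;
        }
    }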

I don't know if this is of use to you, but words like "is" and "a" are often called "stop words" in document summarisation AI. Summarisers ignore them too, because they are too common to be important topics. You can find word lists online, such as https://github.com/DataTeaser/textteaser/blob/master/textteaser/trainer/stopWords.txt
Good luck with Harumi!

 

 
  [ # 145 ]

Don Patrick: For the moment, grammar is not possible to implement in Harumi. For simplification reasons, and also for performance, Harumi analyses only "important" words, using a dictionary similar to your "stop words". The structure of sentences is not a problem, since every sentence recorded in her memory is stored with every word it contains, even those "is" and "a". Nevertheless, in the list of entries/stimuli (single words), only "important" words are recorded.
It would be perfectly possible for Harumi to record "pointless words" by deleting the stop-words dictionary, but it would require a lot of memory, and the grids used in Visual C# only accept 150 columns… If I forced Harumi to record every sentence linked to "is" and "a", the number of columns would quickly be exceeded!
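
The filtering itself is nothing fancy; roughly this (a simplified sketch with invented names):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class WordFilter
    {
        // A small sample; the real dictionary is of course longer.
        static readonly HashSet<string> StopWords =
            new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "is", "a", "the", "of" };

        public static string[] ImportantWords(string sentence) =>
            sentence.Split(new[] { ' ', ',', '.', '?', '!' },
                           StringSplitOptions.RemoveEmptyEntries)
                    .Where(w => !StopWords.Contains(w))
                    .ToArray();
    }

    // WordFilter.ImportantWords("winter is a season") -> { "winter", "season" }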
Moreover, with the kind of algorithms used in Harumi 3, the only thing Harumi ultimately accomplishes is finding "the best answer available", meaning that if you ask "What is the capital of Russia?" and "moscow is the capital of russia" doesn't exist in the database, she'll answer, for example, "Paris is the capital of france", unknown words being disregarded at this point.
It means that she tries to find the answer "closest" to the user's question, and not necessarily the right one.

That's why I'm interested in grammar for a later version of Harumi (let's say Harumi 4!). At that point I would like to implement a directed graph database structure instead of the current undirected one. The interest of a directed graph is that Harumi could learn grammar by gathering statistical data on every sentence recorded…
For the moment, I will go on implementing new functions and translations for Harumi 3.

 

 
  [ # 146 ]

Next version coming soon (1 week away); new features/corrections to expect:

- Better AI thinking: better ability to understand whether the user has changed the subject or is following the same discussion
- Voice recognition using Windows (new icon to activate automatic analysis of the user's request without having to use the keyboard/press Enter); see the sketch after this list
- New organization of icons, with new ones appearing when a "parent" icon is selected
- Calculations now work with . or , for floating-point numbers
- Resized picture window (larger)
- Picture changes automatically on launch and while speaking
- User can no longer close the data grids (so the datagrid crash is fixed)
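
For those wondering about the voice part: it relies on the speech recognition built into Windows. Roughly like this with the standard System.Speech API (a minimal sketch, not my exact wiring; it needs a reference to System.Speech.dll):

    using System;
    using System.Speech.Recognition;

    class VoiceInputDemo
    {
        static void Main()
        {
            using (var recognizer = new SpeechRecognitionEngine())
            {
                recognizer.LoadGrammar(new DictationGrammar());       // free-form dictation
                recognizer.SetInputToDefaultAudioDevice();
                recognizer.SpeechRecognized += (s, e) =>
                    Console.WriteLine("User said: " + e.Result.Text); // goes to Harumi instead
                recognizer.RecognizeAsync(RecognizeMode.Multiple);    // keep listening
                Console.ReadLine();                                   // run until Enter
            }
        }
    }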

 

 
  [ # 147 ]

I'm currently testing a new way to exclude pointless words without having to artificially delete them before processing the user's request (which is the case at the moment, even if word-frequency correction is currently the main way to hierarchise the database).
The algorithm consists of applying a weight to each word depending on the interactions with the user: clicking on the "yes" icon or the "no" icon either reinforces or weakens a link. Just like Pavlovian respondent conditioning!
The advantage is that the user doesn't need to create a "stop-words" dictionary listing pointless words. Moreover, "pointless" words can become important in a different context. With the new algorithm, words are never deleted; only their links are weakened or reinforced.
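
The principle in code (a simplified sketch with invented names; the real weights and step size differ):

    using System;
    using System.Collections.Generic;

    class WeightedNeuron
    {
        public Dictionary<WeightedNeuron, double> Links =
            new Dictionary<WeightedNeuron, double>();   // link -> weight
    }

    static class Conditioning
    {
        // feedback = +1 for the "yes" icon, -1 for the "no" icon.
        public static void Reinforce(WeightedNeuron from, WeightedNeuron to, int feedback)
        {
            double w = from.Links.TryGetValue(to, out var current) ? current : 0.5;
            w += 0.1 * feedback;                               // the step size is a guess here
            from.Links[to] = Math.Max(0.0, Math.Min(1.0, w));  // clamp to [0, 1]
        }
    }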

 

 
  [ # 148 ]

Hello! After several months, the new Harumi version is finally released!
Only the French version is available for now; the English one is coming soon.
Many, many updates; visit my site to learn more:
http://jeffelson.e-monsite.com/

 

 
  [ # 149 ]

US/English version now available! (plus a little fix to the French one)

 

 
  [ # 150 ]

Next version coming soon (I just need to translate a few sentences and upload it to the site). List of features:
- The "?" icon will work better: better ability to identify the current subject of the chat.
- Harumi will randomly record user sentences automatically, linking them to the current stimulus in order to guess what the user wants the next time that stimulus is triggered.
- Harumi will try to avoid repeating herself when possible, keeping "in mind" the list of sentences she has already said.


I'm now working on a way to implement recursion in the way Harumi propagates signals through her database. I think recursion is the most powerful way to increase her intelligence. Indeed, for the moment, every triggered word sends a signal to the words linked to it, but then the signal stops there. My idea is to propagate the signal further and deeper, while preventing it from going back to previously triggered word-neurons.
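
The idea in code (a simplified sketch reusing the WordNeuron structure from my earlier post; the depth limit and cutoff threshold are illustrative guesses):

    using System;
    using System.Collections.Generic;

    static void Propagate(WordNeuron source, double signal,
                          HashSet<WordNeuron> visited, int depth)
    {
        if (depth == 0 || signal < 0.01 || source.Links.Count == 0) return;
        visited.Add(source);
        double share = signal / source.Links.Count;        // same fan-out rule as before
        foreach (var target in source.Links)
        {
            if (visited.Contains(target)) continue;        // never flow back
            target.Activation = Math.Min(1.0, target.Activation + share);
            Propagate(target, share, visited, depth - 1);  // go further, deeper
        }
    }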

 
