
Harumi
 
 
  [ # 121 ]

Jean,

Thank you in advance for sharing your work. I’ve downloaded your newest files, and I continue to get an “access violation at address 004F9541 in module ‘project1.exe’. Read of address 00000000.”

This is a new install on Win 8.1 (64 bit).

When I run the program, OpenOffice opens a spreadsheet, then the above errors propagate all over my screen. I see nothing other than the spreadsheet and the error pop-ups when this happens.

I’ve reset permissions to make sure the directories in your zip file are readable and writable. I’m still having the issue.

Any ideas on what I’m doing wrong?

John

 

 
  [ # 122 ]

Try this:
Open the memoire.ods file without launching Harumi, then go to Sheet 6:

Line 265, column B: replace the 1 with 0 (to deactivate the webcam).
Line 459, column B: replace the 0 with 1 (to block dynamic analysis).
Save the memoire file and close it.

After that, launch Harumi and report the bugs you get.
If there is no bug, edit the memoire file again, but this time replace the 1 with 0 on line 459, save, and launch again.

 

 
  [ # 123 ]

Thank you. It was line 459. Setting it to 1 fixed the error. Putting it back to 0 caused the error again.

 

 
  [ # 124 ]

New Harumi version released today! Feature list:
- New function: Harumi uses smileys! Harumi is now able to update her mood in real time: you’ll notice that, depending on the general feeling of the chat (sad, happy, or neutral), she uses smileys accordingly in her answers. These smileys also change if you use the functions or icons at the bottom of the answer window.
- Better analysis of stimuli in which more than one sentence/word triggers an answer.

 

 
  [ # 125 ]

A tutorial is now available for Harumi; you’ll find it on the site!
I also released a small fix for version 03.23.

A tutorial has been requested for a long time; I hope it will help users understand more easily how to use Harumi and all the available functions. Note that I’m still working on the tutorial, because there are a lot of functions, which take quite a long time to explain.

 

 
  [ # 126 ]

The Harumi project is now stopped indefinitely because of a Borland Delphi licence problem. I’m looking for a free version of Turbo Delphi 2006, which is not sold any more. If someone has a cracked/free version, please send me a message.

 

 
  [ # 127 ]

I’ve figured out this compiler issue; Harumi will be back online soon.

More good news: I have been working for several months on a new kind of algorithm based on semantic networks.
A “Harumi 3” based on this new kind of algorithm will be available alongside the former Harumi 2 on my site. Harumi 3 will look like a lite version in appearance and features (no icons, camera, chess game, or internet functions), but the way she understands and records information will be very different. If it is a success, Harumi 2 will evolve towards this new algorithm.

Semantic networks are powerful because user requests are better “understood”. Semantic networks can make deductions out of many sentences, while a stimulus -> answer system is unable to deal with information scattered across the database.
Of course, the complexity grows exponentially with the database size instead of linearly as in the former algorithm. Nevertheless, a network is, in my opinion, the best way to go towards better AI.

 

 
  [ # 128 ]

Harumi back online!

 

 
  [ # 129 ]

New version coming soon. Harumi will answer better when many answers are linked to one stimulus.
As a new feature, you’ll notice that she is able to ask questions. These questions are constructed from what she learnt from the user in previous logs. Questions won’t be systematic; they are asked randomly, but always linked to what the user just said. So don’t expect Harumi to become Eliza…
The purpose is to give Harumi a more dynamic feel instead of her being simply a data-craving machine. A human being is more inclined to speak with someone curious!

 

 
  [ # 130 ]

New version released today!

 

 
  [ # 131 ]

It has been a long time since my last post.
Now it’s time to explain what’s going on!
I’ve switched from the Delphi to the C# programming language, and from OpenOffice to binary files and a DataGridView. A brand-new Harumi created from scratch will hopefully be online at the beginning of 2015. Her artificial intelligence is totally different, since it is able to make deductions from the database; she will be very fast to answer, thanks to C# and the integrated database; her appearance, with scrolling, bouncing icons, and antialiasing, will be fancier; and you won’t see any recursive crashes like in former versions.

 

 
  [ # 132 ]

Looking forward to the newly updated version!! How is it progressing so far?

 

 
  [ # 133 ]

Art Gladstone: I created a new C# version that I have been testing for 2 months. Once again, her algorithm is far different. There is no more linear database; the database looks like a neural structure, with hundreds of words pointing toward other words in all directions, just like a real brain. Here is how this algorithm works, with a simple example:

For example, given several sentences like these:
“snow is white”
“snow happens in winter”
“white is a color”
“blue is a color”
“red is a color”
“winter is a season”
“summer is a season”

you get the following database:
snow -> 0.000000   white, winter -> snow is white, snow happens in winter
white -> 0.000000   snow, color -> snow is white, white is a color
color -> 0.000000   white, blue -> white is a color, blue is a color
winter -> 0.000000   snow,season -> snow happens in winter, winter is a season
summer -> 0.000000   season -> summer is a season

If you ask “What’s snow color?”, the database becomes:
snow -> -1.000000   white, winter -> snow is white, snow happens in winter
white -> 0.833333 snow, color -> snow is white, white is a color
blue -> 0.333333   color -> blue is a color
red -> 0.333333   color -> red is a color
color -> -1.000000   white, blue -> white is a color, blue is a color
winter -> 0.500000   snow,season -> snow happens in winter, winter is a season
summer -> 0.000000   season -> summer is a season

List of maximum sentences (the keywords snow and color are set to -1 to avoid a “turn-around” question):
0.833333 is the maximum because 0.5 + 0.333333 = 0.833333
then
“snow is white” -> abs(snow) + abs(white) = 1 + 0.833333 = 1.833333
“snow happens in winter” -> abs(snow) + abs(winter) = 1 + 0.5 = 1.5

The maximum is 1.833333, so the answer is
“snow is white”
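The walk-through above can be sketched in code. Below is a minimal Python sketch of this spreading-activation scheme, not Harumi’s actual C# source: the class and method names are my own, the stop-word list is an assumed simplification, and, matching the candidate list in the post, only sentences linked to the first keyword compete for the answer.

```python
from collections import defaultdict

# Assumed simplification: a tiny stop-word list stands in for real parsing.
STOP = {"is", "a", "in", "what", "what's", "happens"}

class SemanticNet:
    def __init__(self):
        self.links = defaultdict(set)         # word -> neighbouring words
        self.sentences = defaultdict(set)     # word -> sentences containing it
        self.activation = defaultdict(float)  # word -> current signal

    def words(self, text):
        return [w for w in text.lower().replace("?", "").split()
                if w not in STOP]

    def learn(self, sentence):
        ws = self.words(sentence)
        for w in ws:
            self.sentences[w].add(sentence)
            self.links[w].update(x for x in ws if x != w)

    def answer(self, question):
        # Context is preserved: decay by 10 instead of resetting to 0.
        for w in self.activation:
            self.activation[w] /= 10.0
        keys = [w for w in self.words(question) if w in self.links]
        for k in keys:
            self.activation[k] = -1.0         # clamp keywords at -1 to avoid
            share = 1.0 / len(self.links[k])  # echoing the question back
            for n in self.links[k]:
                if n not in keys:
                    self.activation[n] += share
        # Candidate sentences are those linked to the first keyword; the
        # winner maximises the summed |activation| of its words.
        candidates = self.sentences[keys[0]]
        return max(candidates,
                   key=lambda s: sum(abs(self.activation[w])
                                     for w in self.words(s)))
```

Feeding it the seven example sentences and asking “What’s snow color?” reproduces the figures above: white ends at 0.833333, snow and color are clamped at -1, and “snow is white” wins with 1.833333.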

Moreover, at each question asked, the “neurone” values are divided by 10 instead of being reset to 0, in order to preserve the context, and by context I mean very complex context!
This means you get something that looks more like a real chat, without any tricks to guide the algorithm. Everything is very fluid, without any artificial intervention to detect specific words like “his” or “her”, which is actually inefficient:
USER : What’s snow color?
HARUMI : snow is white.
USER : in what season?
HARUMI : snow happens in winter.

My personal database has several hundred words, and the algorithm seems to behave quite well and fast despite its complexity. Note that the Levenshtein algorithm is implemented to handle typing errors.
This new version looks simpler at first sight because options like the chess game, visual recognition, etc. are not available. I now work 99% on language algorithms.
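For reference, here is a minimal Python version of the edit-distance computation mentioned above: the classic dynamic-programming Levenshtein, plus a hypothetical `match_keyword` helper (the helper name and `max_dist` threshold are mine, not Harumi’s code) for snapping a misspelled word onto a known node.

```python
def levenshtein(a, b):
    """Edit distance: minimum number of insertions, deletions and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete ca
                           cur[j - 1] + 1,             # insert cb
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

def match_keyword(word, vocabulary, max_dist=1):
    """Map a possibly misspelled word onto the closest known node,
    or None if nothing is within max_dist edits."""
    best = min(vocabulary, key=lambda v: levenshtein(word, v))
    return best if levenshtein(word, best) <= max_dist else None
```

With `max_dist=1`, “snoow” still matches the node “snow”, so a small typo does not break the lookup.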

Note that another C# version of Harumi exists, with an algorithm that is a mix between the neural structure and the linear structure. The fully neural version is far more powerful, but its structure is almost unreadable, while the mixed version looks a lot like the former Harumi. I still don’t know whether I’ll release both the semi-linear and the neural version of Harumi, or only one.

The only thing certain is that the Delphi version of Harumi with OpenOffice will not be developed any more (too slow, infinite pop-up crashes on many computers, linear database structure…).

 

 
  [ # 134 ]

Note that I found several ways to “easily” improve this algorithm:
1) For the moment, signal propagation stops one word away from the triggered question words. I will later test an algorithm where the propagation stops 2 or more words away, with the signal fading at each node reached, in order to avoid what I would call an… epilepsy!
2) Harumi’s answer, meaning the words inside it, could also trigger nodes with a 10% signal, just like a context memory.
3) With each new sentence recorded, a “virtual word” carrying time could be recorded, to organize memory not only by semantics but also by time data. These virtual nodes could be used as pivots to create links between words that have no semantic common point.
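Improvement 1) can be sketched as a breadth-first spread with a per-hop fade. This is illustrative Python of my own, with an assumed fade factor of 0.5; the real choice of fade and hop limit is exactly what remains to be tested.

```python
from collections import defaultdict, deque

def propagate(links, start, max_hops=2, fade=0.5):
    """Spread a signal outward from `start` through `links` (word -> set
    of neighbour words), fading at each hop so activation cannot cascade
    across the whole network (the "epilepsy" to avoid)."""
    activation = defaultdict(float)
    activation[start] = -1.0                 # clamp the triggered keyword
    frontier = deque([(start, 1.0, 0)])
    seen = {start}
    while frontier:
        word, signal, hops = frontier.popleft()
        if hops == max_hops:
            continue
        share = signal * fade / max(len(links[word]), 1)
        for n in links[word]:
            if n not in seen:
                seen.add(n)
                activation[n] += share
                frontier.append((n, share, hops + 1))
    return dict(activation)
```

On the snow example, a 2-hop spread from “snow” reaches “color” and “season” with a quarter of the signal that “white” and “winter” receive, so distant nodes contribute without dominating.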

Algorithms from the future (or my dreams):
1) Connecting Harumi to a webcam, with visual recognition connected to the semantic database.
2) Auto-constructing sentences instead of repeating pre-recorded answers. The semantic structure would guide Harumi in this task, but I would need to implement an oriented (directed) graph instead of a non-oriented graph like today.
3) Creating new links in the database by “testing”, with the user, random connections between nodes that have a temporal or weak semantic proximity. Harumi would ask the user a question like “Is winter always cold like snow?” or something like that…
4) Auto-learning: giving Harumi a small database about a subject and asking her to figure out the meaning by auto-creating a semantic/time structure and correcting it with questions to the user.
5) Giving weights to links and not only to nodes. Weights would change during discussion with the user. Some links could even be destroyed if pointless.

 

 
  [ # 135 ]

Thanks for the update and improved methodology. I’m looking forward to trying your latest version of Harumi, as I’ve been through almost all of your current (older) version.

I do like your approach of applying “weights” to various items within the context of the conversational algorithm. I have seen methods employing weighted factors show dramatic improvements over previous linear versions in other chatbots.

Of late, I am aware of some nice advances being made with Dynamically Reconfigurable Neural Networks. These are being used in conjunction with robotics research at the Japanese company Fujitsu. The robot arranges groups of learned material/data into small modules that can interact with each other and also make corrections to each other by applying various “weighted factors” to the algorithms. This basically enables the robot to improve itself by performing corrections on the fly, acting much like a human brain.

Anyhow, keep plugging and hopefully we’ll hear of Harumi’s release date.
Good Luck!

 
