
“Teaching” Grammar to my program
 
 

For those of you who have been watching me bumble my way into this chatbot field over the past 6 days, you may notice that the program now knows individual sentence elements. (I still need to go back, yank out inter-sentence punctuation, and push them back into the master sentence array in their proper places as separate elements.)

From here, I could keep hard-coding analysis into the phrases and clauses of more mature expression. However, I pause here to consider the route of teaching it instead of hard coding. Here is what I mean by that.

To teach, I would need to first develop an interface wherein the program creates new elements of memory (database - table - column - cell). Though the program will create them behind the scenes, the teacher (me) would direct that process through a Q&A exchange with the program, and by entering text and clicking buttons in the interface.

A standardized data format I am considering would define an “element” as a relational component. An element would be a table that could both belong to a master category and preside over slave categories. This would allow for dynamic initialization of elemental components in the system that can have vertical (hierarchical, if you will) relationships.

There would also need to be two other kinds of relationships. One is lateral, between elements that are slaved to the same master. But within each element itself are traits, or qualities, that must be defined. A trait or quality, however, would itself be simply another type of element, structured in its own vertical and lateral relationships. I go cross-eyed here, imagining two three-dimensional data paradigms, a qualitative one nested within a quantitative one.

Using some backpropagating AI strategies on top of a system that self-generates new relational memory categories could allow me to TEACH syntactic and semantic relations instead of trying to code them in from the start. (Realizing, of course, that a certain level of elemental recognition needs to be hard coded to begin with, but believing that enough of it already is.)
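The element idea above could be sketched roughly like this. Everything here (the `Element` class, the method names) is my own placeholder scaffolding, not anything in the actual program:

```javascript
// A minimal sketch of the "element" idea: every unit of memory is an
// element that can have one master (vertical), siblings (lateral,
// implied by sharing a master), and traits -- which are themselves
// just more elements, giving the qualitative paradigm inside the
// quantitative one.
class Element {
  constructor(name) {
    this.name = name;
    this.master = null; // vertical: the category this element belongs to
    this.slaves = [];   // vertical: categories this element presides over
    this.traits = [];   // qualitative: traits are simply more elements
  }
  addSlave(child) {
    child.master = this;
    this.slaves.push(child);
    return child;
  }
  addTrait(trait) {
    this.traits.push(trait);
    return trait;
  }
  // lateral relations fall out of the hierarchy: siblings share a master
  siblings() {
    if (!this.master) return [];
    return this.master.slaves.filter(e => e !== this);
  }
}

const sentence = new Element("sentence");
const noun = sentence.addSlave(new Element("noun"));
const verb = sentence.addSlave(new Element("verb"));
noun.addTrait(new Element("plural"));
```

In a database the same shape would be a self-referential table (each row holding a foreign key to its master row), which is what makes dynamic creation of new categories possible at runtime.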

 

 
  [ # 1 ]

http://www.projectenglishtv.com/schl/hari/index.php

I spent some noodling time and started putting together a database interface, similar to phpMyAdmin I suppose, in that it will allow me to add new tables to a database, add new records and columns to the tables, and manage the relations between the tables and their records. I will give it as much of a “Harri responsive” feel as possible, and change it as it teaches me how, to make it progressively more so. You can see the start of that if you click the “Teach Harri” button under the report area on Harri’s UI page.

I have not implemented a live DB yet because I am accessing it with PHP locally, and PHP refreshes the page, erasing Harri’s current JavaScript state every time I interact with his memory. That sucks for Harri. I have to create the AJAX tie-in, and then make Harri responsive to his memory state on the server as well as to his conversant state on the client. I’ll leave the database access open for you to play with as it comes along. Just don’t try to confuse him, or I will have to scrub his “mind” (frontal lobotomy).

I want to do something to periodically update and preserve his conversant state on the server too. He should keep that in a series of temporary db tables that he “past cycles”, or loops through, for a comparative that contributes contextual relevance to a dialogue, erasing the oldest table content and replacing it with the second oldest as new states increment the previous ones together into the past. AJAX will need to be inserting new permanent memory data and, at the same time, new temporary conversant data. It will also need to access both permanent and temporary data. I foresee the need for some programmatic choreographer for that.
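The "past cycling" could be sketched as a fixed set of slots that shift toward the past on every save. The slot count and names are my own placeholders, not Harri's actual tables:

```javascript
// A sketch of the "past cycles" idea: a fixed number of temporary slots
// hold recent conversant states; saving a new state shifts everything
// one slot toward the past and drops the oldest.
class TemporalBuffer {
  constructor(slots) {
    this.slots = new Array(slots).fill(null); // index 0 = most recent
  }
  save(state) {
    this.slots.pop();          // erase the oldest state
    this.slots.unshift(state); // previous states increment into the past
  }
  // the retained past, newest first, for contextual comparison
  recall() {
    return this.slots.filter(s => s !== null);
  }
}

const buf = new TemporalBuffer(3);
buf.save("state-1");
buf.save("state-2");
buf.save("state-3");
buf.save("state-4"); // "state-1" falls off the back
```

Server-side, each slot would be one of the temporary db tables, with the "choreographer" deciding when AJAX writes to the permanent versus the temporary set.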

Atop that data crisscross, there will be the processing of conversant information in JavaScript as it is made available by current user input and by AJAX calls into and out of the DB. I have scratched out three back-propagating pseudo-AI strategies, one semi-neural and two kinda genetic. The same three processing paradigms can run in PHP and in JavaScript, but I have not gotten far enough along to see their server-side role.

I’m also considering using some dynamically updated JSON files for client side data management to add another element of variation.

 

 
  [ # 2 ]

I got the database interface to the point of creating tables, setting length, char type, and relational keys. It runs in an iframe so that Harri doesn’t forget what we are talking about while I fool with his brain. That is like us, you know. Our brains themselves feel no pain, so the doctor can operate without anesthetic, getting feedback from us as he performs whatever frightening things he has to do.

Since the iframe is generated dynamically with JS, I can swap it out for another PHP doc that edits existing tables, or anything else. When I post form content, Harri doesn’t notice!
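The dynamic iframe trick might look something like this. The function takes the document as a parameter so it is easy to test; the element id and "editor.php" are placeholder names, not the real files:

```javascript
// A sketch of the iframe trick: form posts happen inside a dynamically
// created (or reused) iframe, so the parent page's JavaScript state
// survives the PHP round trip.
function openBrainEditor(doc, src) {
  let frame = doc.getElementById("brain-editor");
  if (!frame) {
    frame = doc.createElement("iframe");
    frame.id = "brain-editor";
    doc.body.appendChild(frame);
  }
  frame.src = src; // swap editors without reloading the parent page
  return frame;
}

// In the browser: openBrainEditor(document, "editor.php?table=nouns");
```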

I haven’t replaced the previous version of the application yet. I will do that sometime toward the beginning of next week. It is getting too cool to keep in the open though, so I will leave it up to play with for a couple of weeks as it comes along, and then pull it down as Harri starts being able to learn like a child does.

My plan is to teach through conversant dialogue, not through hard coding, though some patterns will need to be put in place. I want to do it through the database interface, having that open and the logic open at the same time, so that I can edit both and test how Harri’s responses change as I change his code. Hopefully, before long, the coding will taper off and the memory and teaching can ramp up, possibly with 10, 50, or 100 teachers all going at it at the same time.

 

 
  [ # 3 ]

Yes, my approach is to teach my logicagent (http://subbot.org/logicagent) as much as I can while interacting with it (active or online learning).

http://subbot.org/logicagent/dialogs/johnny.html presents the philosophy in some more detail, with example dialogs.

Other dialogs that demonstrate how I can teach the agent new relations at runtime are in http://subbot.org/logicagent/dialogs.

For example, http://subbot.org/logicagent/dialogs/parentof0.txt shows how to teach the agent to understand that if X is a parent of Y and Y is a parent of Z, then X is an ancestor of Z but not a parent of Z. (http://subbot.org/logicagent/dialogs/parentof.txt and parentof2.txt are similar dialogs, after the bot has been taught the basic relations.)

Of course, it’s not as simple (yet) as just using an English sentence such as in the previous description. For now, I have to teach the bot using Ruby if-then expressions or regular expressions. So to tell the bot that when asked if X is a parent of Y, it should use the same class method in the logicagent object it uses to answer “does X love Y?”, I can say:

> is (.*) (a parent of) (.*) is like does (.*) (love) (.*)

The “love” relation is not transitively closed; if X loves Y and Y loves Z, X does not necessarily love Z. “is a parent of” is similarly not transitively closed: if X is a parent of Y and Y is a parent of Z, X is not (necessarily) a parent of Z. So I can reuse the generic method that handles non-transitively-closed relations for both “loves” and “is a parent of”, as well as other verbs such as “hit”, etc.

I don’t use a database; everything is stored in a graph (http://subbot.org/logicagent/graph.txt).
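In rough JavaScript terms (a sketch of the parent/ancestor distinction, not the actual Ruby implementation), storing only the direct "parent of" edges and walking them transitively for "ancestor of" looks like this:

```javascript
// "parent of" edges are stored directly in the graph; "ancestor of" is
// answered by walking those edges transitively, while "parent of"
// itself is never transitively closed. Assumes the taught graph is
// acyclic (no one is their own ancestor).
const parentEdges = new Map(); // name -> array of direct children

function teachParent(x, y) {
  if (!parentEdges.has(x)) parentEdges.set(x, []);
  parentEdges.get(x).push(y);
}
function isParentOf(x, z) {
  return (parentEdges.get(x) || []).includes(z); // direct edge only
}
function isAncestorOf(x, z) {
  const kids = parentEdges.get(x) || [];
  return kids.includes(z) || kids.some(k => isAncestorOf(k, z));
}

teachParent("X", "Y");
teachParent("Y", "Z");
```

The same direct-edge-only lookup would serve any non-transitively-closed verb ("loves", "hit"), which is the point of reusing one generic method for all of them.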

The goal is to make it easy to teach the bot new verbs (relations), then perhaps have another agent take my place and teach it for me.

 

 
  [ # 4 ]

Hi Robert
Your links represent quite an exhaustive reference resource. I am not fluent in Ruby, but it has been recommended that I become so. Converts to Rails tout it, and there seem to be many PHP refugees in the Rails community.

Similar to you, I wish it were possible for agents to teach the application. What has taken me decades to learn could scale in weeks in a system that doesn’t forget, with “agents” involved in the teaching process.

With a database interaction component, I will be able to work on memory and relations while seeing the realtime responses: posting a prompt/reaction set, editing the logic in my code editor and the memory in the database, then re-submitting the same prompt and seeing the result of the modifications.

I will also be using graphical data sets, as well as JSON and potentially XML. It seems at least plausible that the system will crudely emulate what we see predominate in nature. You start out with very simple ingredients and some simple rules, but you have to have enough ingredients to allow for complication, and then you put in some randomness, some fluctuations, and realize a whole bunch of different representations.

This is a guiding principle I wish to follow with AI sets, using a couple of back-propagating neural and genetic AI configurations, a few different data storage components, server- and client-side logic, etc.

Another aspect I’d like to fool with is the emulation of our neuronal action potential. Our neurons have cyclical electrical impulses running at all times, between 45 and 60 Hz. When a response registers to propagate to the cortex, all subsystems involved go to 85 Hz at the same time and synchronously emit a super pulse globally. That is obviously down the road, but to me, theoretically interesting.

Thank you for your post. I hope to learn a few things from what you are doing.

Jeff…

 

 
  [ # 5 ]

Hi Robert, I need some help with code. Can you help me?
Lisa Gray

 

 
  [ # 6 ]

http://www.projectenglishtv.com/schl/hari/index.php

This most recent FTP upload of Harri is not extremely impressive, but it is going in the right direction pretty quickly. At least I am satisfied.

The next step will involve the READOUT area of the memory. At first there will be few memory nodes (tables), so I won’t need to be creating work groups, but I will probably have some kind of transfer-box system where I can move db tables into work groups for tying relations together. The edit table/column/cell functionality needs to work without replacing the broad view of the “mind”. A teacher will need to maintain an overview while at the same time drilling into individual granules of memory.

Of course, my aim is to eventually make this aspect of fact/relation retention more and more automated, but how can I see a path to that without the initial grueling grind?

Stand by for live db sometime toward the end of this week or beginning of next…

 

 
  [ # 7 ]

It is nice. I checked it out; great.
lisa

 

 
  [ # 8 ]

So, I have gotten most of the database UI completed, and the AJAX set up for reading into it fluently enough to get quality feedback as I begin to phase back into the logic code for responses. I expect to make the database live online tomorrow or Friday, with possibly another week before it does everything I can currently foresee it having to do.
But here is where things begin to get interesting.
I don’t want to hard code grammar concept patterns into Harri. I want to teach them to him. Right now, that will consist of single-word prompt responses with tedious accompanying entries into the database, together with a lot of instance-by-instance touchy-feely work trying to get him to recognize patterns on the JavaScript logic side. This will be hampered further by the need to first encompass just the ability for him to store his own data sets while I observe, coach, and help through this window into his memory morph. At some point, though, what we are working toward is for him to initiate, initialize, and organize his own memory nodes, but I will have to do it here at the start, until the logic is worked out. We will have to discover how he will accomplish self-organizing memorization as we both move along. Once the logic is right, it would be interesting to start afresh with a new instance of Harri and see how much he can really learn from scratch.
I would like Harri’s mind to work akin to ours in a couple of ways that I don’t know to have been tried. First of all, we are always mentally active. We mull things over when not otherwise cognitively engaged. By using timers and choreographing the intervals to avoid conflict, I hope to work with layers of temporary memory nodes that transfer conversant AND temporal state from the most current, backward incrementally toward the last, and again to the next one back, and so on. Imagine six db tables for the first layer, which retain pairs of conversant prompt/response sets; on every sixth set, a summary zip is saved as the forward member of the second layer, itself six tables long, all together representing the essentialization of 36 prompt/response pairs. A third layer would zip six of these and represent (6 x 36) 216 conversant pairs. At some layer, possibly this third one, the incremental loss of past essentialization will be stopped, and permanent retention will begin to amass.
But human interaction will be only one layer track, and not the most salient. At all times, Harri will continue to run timed subroutines of organizational analysis within his own knowledge base. There will be temporary state retention layers for this activity as well, to contain states that are finally lost as new ones are saved, but not before summation and sum retention can occur, again with the lowest layer being permanently preserved.
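The layered zipping could be sketched like this. The summarize() function here is a trivial placeholder (the real essentialization logic is the open problem), and the class name is my own:

```javascript
// Each layer holds six entries; when a non-final layer fills, its six
// entries are "zipped" into one summary pushed into the next layer, and
// the layer is cleared (temporary layers forget). The final layer keeps
// everything: permanent retention.
const LAYER_SIZE = 6;

function summarize(entries) {
  return { zipOf: entries.length, items: entries.slice() }; // placeholder
}

class LayeredMemory {
  constructor(layerCount) {
    this.layers = Array.from({ length: layerCount }, () => []);
  }
  save(entry, depth = 0) {
    const layer = this.layers[depth];
    layer.push(entry);
    const isLast = depth === this.layers.length - 1;
    if (!isLast && layer.length === LAYER_SIZE) {
      const zip = summarize(layer);
      layer.length = 0;          // the temporary layer forgets
      this.save(zip, depth + 1); // the essence moves one layer down
    }
  }
}

const mem = new LayeredMemory(3);
for (let i = 0; i < 36; i++) {
  mem.save({ prompt: "p" + i, response: "r" + i });
}
```

After 36 prompt/response pairs, the first two layers have emptied themselves into one layer-three summary, matching the 6 / 36 / 216 arithmetic above.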
And then there is the potential, of course, of crawling the www for info elements to extract and consume.
The above describes a programmatic emulation of our stream through a three-track temporal continuum, as well as of the activity of our neurons, engaged in the constant electrical process that creates, in our neural mass, “ACTION POTENTIAL”.

 

 
  [ # 9 ]

http://www.projectenglishtv.com/schl/hari/index.php

The database insert and edit functionality for Harri’s brain has gotten a little bit big. As mentioned, I have table and field creation set up so that initializing a table allows up to four fields to be associated. The table edit UI is completed but not tied in yet, because I am working on the value insertion right now. I have that corralled, and only need to format the SQL statements before I put the database up for people to screw around with.

I ftp’d the whole load as is, and will likely launch a DB for it tomorrow. Check it out and give feedback, please. Your ideas will contribute to a better result.

 

 
  [ # 10 ]

Harri’s database interaction UIs are now all functional! I only have to tie in a field value updater, and then that part will be done, although I will be going back with JS to make many of the text boxes auto-fill on click. I ftp’d an updated instance of Harri, so the newest version is now online. Feel free to test it out, but use Internet Explorer for now. Fool with the create, update, and delete interfaces. Your changes reflect immediately in the readout table. It’s cool.

For some reason, my AJAX autocomplete is not drilling the database columns in W3C browsers (Firefox). I am getting a readyState of 4, but the status returns 404 instead of 200. I am thinking that Firefox requires a different reference to the source URL for the response text, but I am too green to know for sure. I have to test more and find the problem.
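One way to chase this down: resolve the request URL against the current page so every browser asks for the same path (a relative-path mismatch is one plausible cause of a 404 in Firefox but success in IE — a guess, not a confirmed diagnosis), and check the status before touching responseText. "autocomplete.php" is a placeholder file name:

```javascript
// Build the request URL relative to the current page so all browsers
// resolve it identically; log the resolved URL on failure so a 404 can
// be compared against what the server actually serves.
function buildAutocompleteUrl(base, query) {
  const url = new URL("autocomplete.php", base);
  url.searchParams.set("q", query);
  return url.toString();
}

function fetchColumns(query, onDone) {
  const xhr = new XMLHttpRequest();
  const url = buildAutocompleteUrl(window.location.href, query);
  xhr.open("GET", url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) return; // still in flight
    if (xhr.status === 200) onDone(xhr.responseText);
    else console.error("autocomplete failed:", xhr.status, url);
  };
  xhr.send();
}
```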

After that, I have to think first about representing the redundant structure of English grammar NON-redundantly in an RDB, and secondly about how to integrate that with a flat-file format to re-introduce the redundancy, for speed reasons. The RDB will have to talk to a flat file, first to map it and then to populate it with data in an initialization phase on startup. Then there will have to be cross-format asynchronous calls between the JSON data and RDB data matrices in order to provide both the ordering and updating integrity of the RDB and the speed of the JSON.

I have spent about 12 hours trying to get the general table relations for grammar normalized, and each time I run through the structures, they get more refined. I am trying to keep in mind that grammar is only a representation of conceptual syntax, so in formulating a grammar parser, I am actually formulating a knowledge processor, realizing that our own syntactic ability is a prerequisite for cognition, and from there for learning.

This means to me, that the structure of his grammar base will be the engine for his learning math, physics, history, ethics and etiquette. There is no way to build on an upside down pyramid, and syntax is a basis of all thought (I think).

 

 
  [ # 11 ]

I had to take a day and create the field value updater for Harri’s brain. It functions with a double WHERE clause in the query, one part based on the old value in the target field, and the other based on a related column/value set. Since I put three values at a time on the UI, the system has to check first for only one, or for two, or for all three values, and then for one, two, or three double WHERE clauses. Yes Yes Yes - Yes Yes No - Yes No No - and on and on. There are 14 permutations, and it has taken all day.
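One way to dodge hand-writing the permutations (my suggestion, not the code described above) is to collect only the column/value pairs that are actually filled in and build the WHERE clause from whatever is there, with placeholders keeping the values out of the SQL string:

```javascript
// Build a WHERE clause from only the filled-in column/value pairs.
// Column names must come from a trusted list (they are interpolated
// into the SQL); values go through "?" placeholders for safe binding.
function buildWhere(pairs) {
  const filled = Object.entries(pairs)
    .filter(([, v]) => v !== "" && v != null);
  if (filled.length === 0) return { sql: "", params: [] };
  const sql = "WHERE " + filled.map(([col]) => col + " = ?").join(" AND ");
  return { sql, params: filled.map(([, v]) => v) };
}

const q = buildWhere({ word: "run", pos: "verb", tense: "" });
// q.sql is "WHERE word = ? AND pos = ?", q.params is ["run", "verb"]
```

Every yes/no combination then falls out of one loop instead of fourteen branches.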

I also got the initial table data viewer done. Before, you had to initialize the autocomplete in order to display any DB data at all. Now it shows everything when you click a table column, and then runs autocomplete only when you start typing in the search box. Everything else is now working nicely in IE9 and in Firefox. I haven’t ftp’d an updated instance of everything yet, but the elements encompassing the initial table viewer amounted to a couple of pages that I just uploaded through cPanel. Check it out if you want.

So the final part of this week could be spent on DB normalization, stabilizing an anchor end of English grammar, and then trying to work out some synergy between the RDB and JSON tabulation. Here again, however, I am stuck between the expedience of thrusting ahead hard-coding grammar patterns and slowing down to work on a programmatic process for true learning. Either way, I may have to go a long way with parametric presets before I have a logic center competent to learn more organically from a totally empty database on up. How would you start that, setting up only logic with no preformed data sets?

A comparative discriminator of some sort is what I have in mind. I think it has to start by simply saving a stream of states, incrementing them backward as new ones are saved, and seeking and saving its own pattern finds from within these relatively unmanaged auto-comparatives. A timer will run to create temporal states that represent differing conditions within the program’s own looping, order-seeking cycles. It needs a simulation of a past, present, and future, with the ability to make past-comparative, trend-recognizing “sense” of the present so as to contribute relevantly to the oncoming future increments.
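A bare-bones version of that discriminator might look like this. Everything here (the state shape, the naive equality test for "pattern") is placeholder logic, just to show the save-increment-compare loop:

```javascript
// Each tick compares the present state against the retained past and
// records any recurrence it finds, then increments the past backward,
// dropping whatever falls beyond the retention limit.
class Discriminator {
  constructor(keep) {
    this.past = [];   // index 0 = most recent past state
    this.keep = keep; // how many past states to retain
    this.finds = [];  // patterns noticed so far
  }
  tick(present) {
    // trivial trend check: has this exact state occurred before?
    const seenAt = this.past.indexOf(present);
    if (seenAt !== -1) {
      this.finds.push({ state: present, distance: seenAt + 1 });
    }
    this.past.unshift(present);
    if (this.past.length > this.keep) this.past.pop();
  }
}

const d = new Discriminator(4);
["a", "b", "a", "c"].forEach(s => d.tick(s));
```

In the browser or on the server, a timer (setInterval, or a cron-style loop in PHP) would drive tick(), giving the program its "always mentally active" cycle.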

Sounds like a nutcracker, but someone should push into something like THAT instead of settling for something less… If I decide on the logic first, some timed looping will run server-side, and I don’t need the JSON for that. Slow is OK for ongoing self-consideration. That’s how I do it anyway, and it works for me. I’ll just give it some test information to look through and work out the “contemplation” process with dummy data.

 

 
  [ # 12 ]

Have you run into any cross browser compatibility issues yet?

That’s often where the real fun starts.

 

 
  [ # 13 ]

Actually, yes, I have. The CSS issues are easy, but the same-origin security policies have been a problem. I’m using iframes to run multiple JavaScript layers and to maintain page state while PHP server calls are going on elsewhere, and these issues have forced me to simplify my file directory in order to use AJAX fluently, which will be doing more of the application’s POSTing as I get out of the analysis driller and into the JSON-to-SQL dialogue.

I’ve been testing everything on firefox and IE9, but not on any of the others. Where have your challenges been surfacing?

 

 
  [ # 14 ]

http://projectenglishtv.com/schl/hari/

Good progress today on Harri. I’ve uploaded an entire new version of Harri with all of the SQL UI working. There are still a few things I will fix later, but for now, I can pretty fluently interact with that part of his memory.

Today I worked on converting his SQL tables into JSON files, and have that all worked out. It was pretty simple, and I put a button at the bottom of the UI button list that says “JSON Encode”. Every time this button is clicked, the JSON files are re-written, reflecting any changes to the RDB. I put a couple of links on that page for now, so you can see the resulting JSON that is automatically generated from the RDB tables. The next thing will be to put write/edit/save capability into the JSON, and finally to have that able to selectively talk back to the SQL for updating Harri’s self-contemplating, self-organizing, temporal SQL memory nodes (which are not yet programmed).
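The encode step amounts to turning each table's rows into one JSON document, so re-running it after any RDB edit rewrites the files to match. A sketch in JavaScript (the actual encoder runs in PHP; the table and column names here are invented examples):

```javascript
// Convert one table's column list and row tuples into a JSON document.
function encodeTable(tableName, columns, rows) {
  const records = rows.map(row =>
    Object.fromEntries(columns.map((col, i) => [col, row[i]])));
  return JSON.stringify({ table: tableName, records });
}

const json = encodeTable("nouns", ["word", "plural"],
                         [["cat", "cats"], ["mouse", "mice"]]);
```

The write/edit/save step then becomes the inverse: parse the file, mutate records, and emit UPDATE statements back at the RDB.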

The same contemplational, organizational looping will also be occurring within the JSON data, and might turn out to be some kind of short-term memory and logic system. The results of the short-term cycles may in turn be saved into Harri’s SQL contemplative tables for mulling over even when sessions are not in play.

I would like the application to locally consider single conversant prompt/response sets, and also summations of multi-prompt/response sets, being able to self-organize, discriminate trends, and contribute to oncoming increments. The summations are what may be sent to server memory as one body to loop through logically, “non-locally”. Other bodies of data will potentially be of great interest, like WordNet or Wikipedia, and on and on.

Anyway, check out the JSON conversion. I like it so far.

 

 
  [ # 15 ]

I got a jump on the data conversion interface today, and changed the online instance of Harri to reflect that initial setup. Check it out and please give some feedback. The JSON encoder is still working, but you have to click one level deeper than before to get to it. I haven’t begun working on the “view” system yet, because I want to make the conversion dump selectable, or global, before moving on to the views. I also decided to automate the query through an array loop, and to automate the file naming through an iteration process within the loop. That might take me a while to work out.

Once that is done, I will have a good pattern set up for conversion into the other data types. But my AJAX looper needs refactoring as well. It needs to be an array looper too, so before I get to the display view, I’ll need to go back around to that as well. I’m trying to make the data available in a “session variable” type of manner, so that I can get at the information from other pages in the site, and I would like to do that post-AJAX so as to have to run those calls only once.

Having the data available is only one issue for me, because I am not fluent working with JSON and XML. I foresee the need to integrate, disintegrate, and reintegrate following the initial data mapping from the RDB. I’ll likely spend a week or two getting that set up, just to find out that the way I’ve configured it won’t support the logic correctly, and then have to change it up a few times as I work out the grammar and learning logic. My learning curve will be tested on that, I’m sure, but I hope to make the flat-file view/edit interfaces intuitive enough to make re-mixing the data salad pretty painless the second and third time around.

Again, all helpful suggestions and feedback are precious to me. I feel thought-starved at this bounteous feast. Help me out…

 
