
Simple Knowledge Base Interface
 
 
  [ # 16 ]

> http://wiki.dbpedia.org/lookup/

I just set up the DBpedia API (above link) in Yahoo! Pipes, which seems to work fine for looking up “Wikipedia” data, such as it is.
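For anyone who wants to hit the lookup service directly rather than through Pipes, here is a minimal Python sketch. The KeywordSearch path is taken from the DBpedia Lookup docs behind the link above, so verify it against the current page before relying on it:

# Query the DBpedia Lookup service and return its raw XML response.
import urllib.parse
import urllib.request

def dbpedia_lookup(keyword):
    url = ("http://lookup.dbpedia.org/api/search.asmx/KeywordSearch?"
           + urllib.parse.urlencode({"QueryString": keyword}))
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

print(dbpedia_lookup("Berlin")[:500])   # peek at the first 500 chars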

 

 
  [ # 17 ]

[offTopic]
Congrats on a thousand posts, Victor! cheese
[/offTopic]

I haven’t contributed to this thread, but I AM following it rather closely. I’m finding Jeff’s work to be quite fascinating, and I’m very interested in the eventual outcome. I think that kudos are in order for everyone who is participating/contributing here. Keep it up, folks! smile

 

 
  [ # 18 ]
Marcus Endicott - Oct 26, 2012:

> http://wiki.dbpedia.org/lookup/

I just set up the DBpedia API (above link) in Yahoo! Pipes, which seems to work fine for looking up “Wikipedia” data, such as it is.

Great resource, Marcus. Their XML output is really intuitive. It’s great when developers follow a straightforward structure like
<dataset>
  <datatable>
    <column1>...</column1>
    <column2>...</column2>
  </datatable>
</dataset>

I still have a bug when attempting to parse a news feed; it only shows up in the webapp version of RICH, due to a perceived ‘nesting’ of elements. LOL (Temperament, a sign of intelligence?)
You can reduce the query to simply
?QueryString=keyword
and it works fine (so far). That should make it easier, since you don’t have to identify the keyword type (location, person, etc.).
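To make that concrete, here is a rough Python sketch of pulling the results out of the response when using only the simplified ?QueryString=keyword form. The element names (Result’s Label and URI children) are assumed from the service’s usual output, so check them against a live response:

# Pull Label/URI text out of the lookup XML, ignoring any namespace.
import xml.etree.ElementTree as ET

def extract_results(xml_text):
    results = []
    for el in ET.fromstring(xml_text).iter():
        tag = el.tag.split("}")[-1]        # strip namespace prefix, if any
        if tag in ("Label", "URI") and el.text:
            results.append((tag, el.text.strip()))
    return results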


Thanks Marcus

Vince

 

 
  [ # 19 ]

Thanks for the feedback from both Vincent and Victor. As far as Harri’s brain surgery goes, I took him as far as recognizing different word types and about 15% of the way into subject-predicate analysis, but then faced the issue of dumping someone else’s grammar core into him for brain inception. That didn’t seem like the best approach, so I have commented out his parsing for the moment, and I went to work first on the SQL interface.

Now, about data access speed: I agree with Jan, just as you do. I can’t rely on SQL for RESPONSIVENESS, but I can for STABILITY. I have completed the conversion process where the SQL tables are copied out into JSON files and saved conveniently in the site’s directory tree. As you can see if you click the bottom UI button and then the GO! button, I worked out two synonymous AJAX calls (and the framework for 100 more as needed), bringing that SQL table data into the JS environment and combining elements of each file, just to get it started. What you see there is redundant data: the driller into the SQL is still alive up top, while the driller into the same data in flat-file format is also alive and active. Since the JSON files auto-generate with each click of the Encode JSON button, Harri’s JSON brain is always up to date with his SQL brain.
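In rough terms, the Encode JSON step boils down to something like this Python sketch. sqlite3 stands in here for whatever RDB Harri actually uses, and the table and path names are made up purely for illustration:

# Copy one SQL table out to a flat JSON file for the JS side to fetch.
import json
import sqlite3

def encode_table_to_json(db_path, table, out_dir):
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row          # rows behave like dicts
    rows = conn.execute("SELECT * FROM " + table).fetchall()
    conn.close()
    with open(out_dir + "/" + table + ".json", "w") as f:
        json.dump([dict(r) for r in rows], f, indent=2)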

Granted, the window into JSON is not clear as of today, but check back in another few days to see it come along. I also wish to have an XML conversion for information aspects that are better suited to that format, and potentially CSV as well. Having the core data in an RDB gives me all the integrity advantages of normalization, and I don’t have to compromise speed either, because all the info will be available in flat form too.

To the other points: I am concerned with interpretive particulars regarding “what’s” and “ups” just as much as the next guy. But to reiterate the “stopper” I mentioned above, I feel that one of the fundamentals underpinning interpretive issues is the syntactic/semantic approach at the core level. I think I can do it better, but I need to prepare the field before I go out to play, so to speak.

As I explained in other posts, I need to make Harri do two things before I start hard-coding response paradigms. He needs to run ordering logic loops continuously through his available data, and much of that will be self-contemplative; that is, re-mashing data that he has already taken ownership of, and I intend to police that grade of data well. Secondly, he must be able to track PAST CYCLE CHANGE: temporal change, conversant change, or other STATE change. He needs to retain short-term data for comparative iteration, then summarize it into longer-term loops, summarizing those in turn at some point into permanent, ordered “beliefs” or “postulations” that can always be adjusted.
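To make the tiering concrete, here is a minimal Python sketch of that short-term → long-term → belief promotion. The structure and the capacities are entirely hypothetical, just to pin the idea down, not Harri’s actual design:

# Hypothetical tiered memory: recent cycles get summarized into
# longer-term loops as the short-term buffer fills.
from collections import deque

class TieredMemory:
    def __init__(self, short_cap=50, long_cap=500):
        self.short_term = deque(maxlen=short_cap)  # recent cycles
        self.long_term = deque(maxlen=long_cap)    # summarized loops
        self.beliefs = {}                          # permanent, adjustable

    def observe(self, state):
        # Record one cycle's state; summarize when short-term fills.
        self.short_term.append(state)
        if len(self.short_term) == self.short_term.maxlen:
            self.long_term.append(self.summarize(list(self.short_term)))

    def summarize(self, cycles):
        # Placeholder: a real summary would compare temporal,
        # conversant, and other STATE change across cycles.
        return {"cycles": len(cycles), "last": cycles[-1]}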

With those three particulars hammered out (brain, logic, and cyclical comparison), I will feel ready to begin work again on conversant development, but I have a little mountain to climb first.

 

 
  [ # 20 ]
Dave Morton - Oct 26, 2012:

[offTopic]
Congrats on a thousand posts, Victor! cheese
[/offTopic]

I haven’t contributed to this thread, but I AM following it rather closely. I’m finding Jeff’s work to be quite fascinating, and I’m very interested in the eventual outcome. I think that kudos are in order for everyone who is participating/contributing here. Keep it up, folks! smile

Hey, Dave. I was encouraged by your expressions, and thank you for that. I’m getting closer to really starting on the parsing logic, and to addressing some of the questions that have been asked about Harri’s ability to deal with the unsanitized expressions that will come to him through pranksters, legitimate lexicographic categorization enigmas, and foreign language issues.

Obviously, I don’t have clear and cogent answers for all that, but I do have a strategy to start with. Let me lay it out in brief.

The user prompt will come in, obviously, as a “word sentence”. Harri will change it to a “word type” sentence (noun, pronoun, helping verb, verb, adverb, noun, etc.). In doing so, he will only be able to make an educated guess, but it will be educated through pattern validation checks and a three-tiered back-propagation strategy of local, regional, and global scope (logic-stream speaking).

Once the “word type” sentence is sitting there beside the original “word sentence”, it will be run again to get the “word element” sentence (adj modifier, subject, verb, adv modifier, direct object, etc.). A similar three-tiered back-propagation strategy with a similar battery of pattern-checking validations will be run. Maybe you can see the pattern here, and see the next checkers that will be run: adverbial and adjectival modifiers will be assessed as singular or compound; then, if compound, as phrasal or clausal; then... and so on, until all patterns at all levels of granularity have been matched.
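As a toy illustration of those two passes, in Python (the lexicon and the element rules here are crude stand-ins, nothing like Harri’s actual pattern checks):

# Pass 1: word sentence -> word type sentence.
# Pass 2: word type sentence -> word element sentence.
LEXICON = {"the": "article", "dog": "noun", "quickly": "adverb",
           "ran": "verb", "home": "noun"}

def word_type_sentence(words):
    # First pass: tag each word with a (guessed) word type.
    return [LEXICON.get(w.lower(), "unknown") for w in words]

def word_element_sentence(types):
    # Second pass: map word types onto sentence elements.
    elements, seen_verb = [], False
    for t in types:
        if t == "verb":
            seen_verb = True
            elements.append("verb")
        elif t == "noun":
            elements.append("direct object" if seen_verb else "subject")
        elif t == "adverb":
            elements.append("adv modifier")
        elif t == "article":
            elements.append("adj modifier")
        else:
            elements.append(t)
    return elements

words = "The dog quickly ran home".split()
types = word_type_sentence(words)
print(list(zip(words, types, word_element_sentence(types))))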

I am pretty confident that “What’s up?” can be accommodated, and “Whats up.” can too. Decisions will need to be made regarding closest-match tolerances, and probability may well come into play, but I’m just not there yet.
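One cheap way to do that closest-match tolerance, purely as a sketch (the known-phrase list and the 0.8 threshold are arbitrary choices, not settled decisions):

# Normalize punctuation and case, then score candidates by similarity.
import difflib

KNOWN = ["whats up", "how are you", "hello"]

def closest(prompt, threshold=0.8):
    cleaned = "".join(c for c in prompt.lower() if c.isalnum() or c == " ")
    score, match = max((difflib.SequenceMatcher(None, cleaned, k).ratio(), k)
                       for k in KNOWN)
    return match if score >= threshold else None

print(closest("What's up?"))   # -> "whats up"
print(closest("Whats up."))    # -> "whats up"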

This is just a philosophical blurb. The truth will come out in the results. Stay tuned….

 

 

 
  [ # 21 ]

Hey Vincent

I’m almost done with Harri’s brain formatter and driller. I was thinking of taking a scaled-down instance of it, adding a few text fields for people to enter their DB name, username, password, and a destination folder (so that any conversions they do will go where they want), and then putting that part of it up for people to use to convert their SQL into JSON or XML.

Down the road, I could add functionality going the other way too, so that JSON or XML would map back into SQL tables. That way, anyone could examine their data, fool with it, reform it, and convert it while seeing it all together in one place. Do you think people would find that useful? Please tell me what you think…
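The reverse direction could look something like this Python sketch; column types are naively all TEXT here, where a real tool would let the user confirm a schema first:

# Load a flat JSON file back into an SQL table (sqlite3 as a stand-in).
import json
import sqlite3

def json_to_table(json_path, db_path, table):
    with open(json_path) as f:
        rows = json.load(f)                 # expects a list of flat dicts
    cols = list(rows[0].keys())
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS " + table +
                 " (" + ", ".join(c + " TEXT" for c in cols) + ")")
    conn.executemany(
        "INSERT INTO " + table + " (" + ", ".join(cols) + ") VALUES (" +
        ", ".join("?" for _ in cols) + ")",
        [tuple(str(r[c]) for c in cols) for r in rows])
    conn.commit()
    conn.close()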

http://www.projectenglishtv.com/schl/hari/

 
