
Using ConceptNet as a source for Chatscript facts
 
 
  [ # 16 ]

I’m not sure I understand the utility of the second bot. You are still going to have a memory-heavy bot, so what is wrong with the primary bot being that one? You seem to be saying you would have lots of small bots that sometimes want to ask this question.

CS, of course, allows you to compile together a bunch of bots that are separately addressed by the incoming bot id, all using the same port. These bots can SHARE functions, facts, and topics in any number of ways. Does that give you what you want? And of course you can always fall back to storing a bunch of your data in a database and querying for it.
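Roughly, something like this (a minimal sketch in the spirit of the stock simplecontrol.top; harry and georgia are placeholder bot names):

# two bots compiled into one server; the client selects one with the bot id it sends
outputmacro: harry ()                 # bot harry
$cs_control_main = ~control

outputmacro: georgia ()               # bot georgia
$cs_control_main = ~control

# a topic both bots can use
topic: ~childhood ( childhood kid school )
t: I had a happy childhood.

# a topic restricted to one bot
topic: ~guitars bot=harry ( guitar fender gibson )
t: I collect guitars.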

 

 
  [ # 17 ]

Here is a practical reason. During load testing, I found that I can run 5 versions on the Linux box (one 2a server) and see 80% CPU and 85% memory usage. If I add the factset data, memory becomes my issue: I can only run 2 versions before memory hits 85%, while CPU usage is cut in half. And I can't add memory to the box without also adding CPU.
So I would need twice as many servers under load.
If I see a lot of traffic, that doubles the server costs.

Second use case: I have a user query that requires a deep scan. The main bot sends the request to N bots, which work in parallel and send their results back to the main bot. The main bot decides on the best response.

In any case, from an architectural point of view, the factset and any large datastore kind of belong in a database anyway.
It would be nice to do it all in CS, and the WordNet stuff that ships in line with the code is very useful.
But I think concepts like this belong in the data layer.
Local concepts should still use CS for this data.

I just have to write a simple API that mimics the logic to retrieve the data, and use a database call for it.
This is what I have been doing…
I create some really big concepts from the available database data, to check whether the data exists before making the API calls.
I am not sure if I can do anything to make the concept matching any faster.
Many of my concepts are maxed out, so I have to group them together:

concept: ~concept_75000_characters_one ( … )
concept: ~concept_75000_characters_two ( … )
concept: ~concept_75000_all ( ~concept_75000_characters_one ~concept_75000_characters_two and so on )

u: ( ~what [ be ~equals ] {a the an} _~concept_75000_all )
   # call database with _0
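That last step would be an output-side call to my lookup service, something like this (a rough sketch; the localhost /lookup URL and the .definition field are placeholders for my own API):

u: ( ~what [ be ~equals ] {a the an} _~concept_75000_all )
   $$url = ^"http://localhost:8080/lookup?term=_0"
   $$reply = ^jsonopen( GET $$url "" "" )
   if ( $$reply ) { _0 is ^jsonpath( ".definition" $$reply ) . }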

Wondering if there is a better way. 

 

 
  [ # 18 ]

So right now CS only supports calling other servers using JSON. Would it be useful if I added a function that allows you to call a CS server?

 

 
  [ # 19 ]

Well, it could be a function that allows CS to issue multiple calls to N different CS server bots at the same time, then wait for the results of all N bots or time out after y milliseconds and continue.

But this creates a lot of complexity for the benefit of a faster response, and I am not sure it is really worth it.
And I imagine this would be difficult to change within CS.

However, it would enable a developer to change the way a CS solution can be implemented.

This is how I see it.

Essentially, you would create a very slim main bot that does preprocessing, minimal main processing, and post-processing.

In the main processing, it looks for a responder in the current topic, then in pending topics, plus any special processing, like gambits in special situations. If a response is found, it sends it. Essentially the same as the current simple control script. I don’t recall if this is exactly what I am doing, but it would probably be the same up to this point.

The changes would be in the second layer.
After the most common processing, if there is no good response, the main bot would issue parallel requests to the subtopic bots whose topic concepts match, test their responses, and wait for responses from all subtopic bots or time out after y milliseconds and continue.

The main bot would receive the results, decide on the best response, and then make the needed updates: mark the rejoinder, update the current and pending topics, and any other housekeeping.

If there is still not a good response, go through the topics in parallel looking for one. The main bot would pick one response from the many and do the housekeeping.

And then the rest, including gambits and other last-ditch efforts.
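In CS terms, that first layer would look much like the stock control topic, with the fan-out wedged in before the last-ditch rules. A sketch only (the fan-out step is a placeholder, since no such built-in exists today):

topic: ~main_control system ()

u: ( %response=0 ) ^nofail( TOPIC ^rejoinder() )             # finish any pending rejoinder
u: ( %response=0 ) ^nofail( TOPIC ^respond(%topic) )         # responders in the current topic
u: ( %response=0 ) ^nofail( TOPIC ^respond(~keywordtopics) ) # topics whose keywords match
# second layer goes here: fan the input out to the subtopic bots in parallel,
# wait up to y ms, keep the best answer (this needs help from outside CS)
u: ( %response=0 ) ^nofail( TOPIC ^gambit(%topic) )          # last-ditch gambits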

The benefit would be a much faster response, because each bot works in parallel, and you can create really big, memory-heavy specialized bots. Maybe put them on different servers and auto-scale them when they hit CPU capacity.

This could be the end state. I would have to make some changes to get there, but I am sure I could make it work.

I'm not clear on the user files, and whether those could be shared from the database.

Another benefit would be that you could probably switch out subtopic bots on the fly, without having to restart the main bot. Each subtopic bot could undergo A/B testing without code changes in the main bot. But I think there is special logic to detect whether a topic changed, which changes the processing, and I am not sure how that would be reconciled.

It adds a lot of complexity, and I am not sure the response-time improvement outweighs it.

 

 
  [ # 20 ]

Way too complex for me to want to implement. It means opening multiple threads for each communication to a different CS server. It makes more sense to do that in something like Python. When CS needs to do the multiple parallel bot thing, it returns a callback request in OOB. That triggers Python to call a bunch of bots, wait for their replies, and send them back into CS. If the callback times out, then CS gets a callback and has to answer without whatever the other bots did, since they are not finished. You would presumably set a high callback time, so they would likely complete.
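On the script side, the OOB half of that might look roughly like this (a sketch only; query_bots and bot_results are made-up OOB commands the Python wrapper would have to implement, and only the callback=milliseconds convention comes from the stock client):

# nothing good found locally: ask the wrapper to fan out, and arm a timeout callback
u: FANOUT ( %response=0 ) \[ query_bots=$$candidate_topics callback=2000 \]

# the wrapper feeds the other bots' answers back in as OOB input
u: RESULTS ( \[ bot_results _* \] ) _0

# the callback fired before the wrapper answered: fall back locally
u: TIMEOUT ( \[ callback \] ) ^gambit(%topic)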

 

 
  [ # 21 ]

Yes, this would be very complex, and for what, speed? It is already very fast.
And we can use :time always to find where the slowness is.

I did not think about the OOB callback request option. This makes better sense to me!

So, to sum up, I do not have a good use case for a bot calling another bot, and I cannot think of one.

 

 
  [ # 22 ]

Hi, Mike. Could you email me directly for a specific 1:1 conversation?

 

 
  [ # 23 ]

Back to the original question: Using ConceptNet as a source for ChatScript facts.
It is entirely possible to store this natively in CS, though it would require a lot of memory for the bot.
This might be a good shared-library addition to the CS GitHub repo, as others might want it as well.
And ConceptNet is fairly static.
Also, if you are running multiple versions of CS in a server environment, you will need more memory on the machine.
I don't think it would approach the 32-bit limit of the architecture.
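Loading the edges could go through an ordinary fact table, something like this (a rough sketch; the table name, the sample rows, and the lookup rule are just illustrations):

# turn ConceptNet IsA edges into ChatScript facts at build time
table: ~conceptnet_isa ( ^subject ^object )
^createfact( ^subject isa ^object )
DATA:
dog animal
guitar instrument
paris city

# then query the facts at runtime
u: ( what be a _~noun )
   ^query( direct_sv _0 isa ? 1 )
   if ( @0 ) { A _0 is ^first(@0object) . }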
If anyone wants to collab on this, PM me directly. 
Cheers

 
