

Does ^jsonopen() block CS processing?
 
 

Imagine your rule uses ^jsonopen() to call a RESTful service that always takes 5 seconds to return an answer. Does that mean that CS is unable to process any messages from any user for those 5 seconds?
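
Concretely, the rule would look something like this (a sketch only; the URL and JSON fields are hypothetical):

    u: ( what is the weather )
        # this endpoint always takes ~5 seconds to respond
        $$url = ^"http://slow.example.com/weather"
        $$result = ^jsonopen(GET $$url "" "")
        $$temp = ^jsonpath(".main.temp" $$result)
        It is $$temp degrees.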

I thought the CS engine was multi-threaded.

 

 
  [ # 1 ]

Under the EVSERVER Linux code, you can coordinate multiple clones of the CS engine around a single port, so in THAT situation it is multithreaded, but not under the default value of FORK=1. The thread that is talking to ^jsonopen() will be blocked until completion.
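
For instance, a Linux EVSERVER build might be started with several forks sharing one port (a sketch; verify the parameters against your build):

    ./LinuxChatScript64 port=1024 fork=4

Each fork is a separate single-threaded server process, so a blocked ^jsonopen() stalls only the fork that issued it.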

 

 
  [ # 2 ]

Does that mean we should look at CS as a single-threaded, fully synchronous program that completely blocks for each incoming message? The FORK mechanism allows us to run multiple copies simultaneously that share the same incoming port and the same USERS directory or DB.

I am trying to figure out how to stand up a production server that can handle hundreds of simultaneous users. Ideally we will use Docker, which helps us get the best Linux configuration while still allowing development on our Windows boxes.


On the subject of the FORK=n command-line param:
Is fork=n only valid for LinuxChatScript64? Does it not work on Windows? What is the maximum number of forks? What criteria do we use to determine the number of forks? Does forking work better or worse with files, Mongo, or MySQL? Are there Linux versions this works better with?

Is a Docker solution viable?
Since we are using Docker, could we avoid FORK entirely and instead use NGINX to load-balance n copies of CS, each running in its own Docker container? The CS instances could all point to the same DB or mounted-filesystem USERS directory. Would that be a viable solution?
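
For illustration, I am picturing an nginx config along these lines in front of the containers (a sketch with invented container names; since CS speaks a raw socket protocol rather than HTTP, this would use the stream module):

    stream {
        upstream chatscript {
            least_conn;          # route each connection to the least busy copy
            server cs1:1024;     # one CS instance per Docker container
            server cs2:1024;
            server cs3:1024;
        }
        server {
            listen 1024;
            proxy_pass chatscript;
        }
    }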

 

 
  [ # 3 ]

CS has no dangers from concurrency bugs because the main server is single-threaded (each main server, when using FORK=xxx). FORK is only valid for Linux, not for Windows, whose server response time is much worse than Linux's. It makes no difference whether you use files, Postgres, or Mongo. One can have more than one fork per CPU, though since CS is often CPU-bound it may not make much sense to overload the CPU. Obviously with multi-CPU machines you can use more forks. The optimal number would have to be found by a load test. Maybe Andy@Kore can speak to values and running hundreds of simultaneous users. When you say hundreds of simultaneous users, what do you figure that means in terms of conversations per month? A LOT. We do several million a month on a single fork on a single machine.

Of course you can scale by having multiple server machines, spreading load onto them with a load balancer, and using a shared filesystem of some kind. Ultimately, to scale, you have no choice.

 

 
  [ # 4 ]

Thanks Bruce!

For your millions of conversations, are you using ^jsonopen() at all? I think that is the issue I am seeing. I am using CS to create an alternative interface to existing functionality. That existing functionality lives in RESTful services which talk to databases. CS blocks waiting on the ^jsonopen() round trip to the service and database and back. It seems that in order to support a lot of conversations involving that ^jsonopen() call, we will have to have multiple copies of CS running.

I agree that Linux is going to be much faster than Windows. We only use Windows for local development; we plan to run Linux for dev, QA, and prod.

Thank you so much for clarifying this for me.

 

 
  [ # 5 ]

We (JustAnswer) are not using ^jsonopen(). Kore uses it several times per volley, and they run conversations at scale for major enterprise customers.

 

 
  [ # 6 ]

And yes, multiple instances will be required, whether via FORK or via separate machines.

 

 
  [ # 7 ]

(as I’ve been name checked)

Stephen,
Yes, in the context of a single volley, a ^jsonopen() call is going to block the processing of that volley. There is nothing in CS script that can do otherwise.

But as Bruce says, in a production environment you use the fork parameter to spin up many parallel instances of the engine. The only thing you have to be careful of is to make sure that a second utterance from the same user is not run at the same time. Doing so will play havoc with the topic file!

Our bot is script-heavy and very data-driven, so there are often many ^jsonopen() calls per volley, though those all go back to our internal systems. We therefore have some control over the performance, and we have tweaked those APIs to reduce some overhead. They are not generally a significant issue for us.

But our system does allow people to define requests to external systems where the response times cannot be managed that way. In those cases we don't make the (outside) request directly from CS; instead, control is returned to our platform tier, which initiates another request back to CS when the results are in. We do the same for webhook-style processing. It is essentially a callback, with plenty of data in the OOB for coordination, and it frees the CS process for other user requests in the meantime.
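
To give a rough idea of the shape, a sketch (the patterns and OOB field names here are invented for illustration, not our real ones):

    # Volley 1: rather than calling the slow system via ^jsonopen(),
    # emit an OOB request for the platform tier and end the volley at once.
    u: ( what * my balance )
        \[ fetch=balance \] One moment while I look that up...

    # Volley 2: the platform tier makes the external call itself, then
    # sends the result back into CS as OOB input on a fresh volley.
    u: ( \[ balance _*1 \] )
        Your balance is _0.

The platform tier correlates the two volleys; between them the CS process is free to serve other users.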

The moral is that we use CS for the things it does best, and if there is a need for external access we bounce back to the higher tier. There may be several back-and-forths in a single user “volley”, but since CS is stateless we can support a large number of users without being hung up by external systems.

Hope this helps.

Andy

 

 

 
  [ # 8 ]

I had to name check you; you are the man with the experience. Thank you for your response.

 

 
  [ # 9 ]

Yes, thank you Andy.

I already pass endpoint info in the OOB and was thinking about creating an asynchronous system where CS immediately returns a “thinking” response and the long-running process later calls CS back with the answer. It is good to know I haven't missed an easier built-in solution.

Before I do that, I am first going to Dockerize my multi-bot CS server, put it behind NGINX, and see how that works. I will post the results soon. I still need to create the immediate result with a callback, because it gives each user a better experience.

Cheers

Stephen Gissendaner

 

 
  [ # 10 ]

Oh no. I was looking at using an external service for “fact-like” calls, and I did not consider this.
Building such an asynchronous interface looks complex.

 

 