
How much load can a ChatScript server take if I want to use it in production?
 
 

Hi everyone,

I ran a very simple load test using multiprocessing in Python.

I started the ChatScript server, then invoked the default bot 1000 times via multiprocessing, with a unique username each time and a simple “Hi” message. The results:

285 - responded correctly
178 - “No such Bot” was the response
The rest of the responses were empty.

I did this on a Mac. Any ideas on how to improve the performance? Am I missing something?

Thanks,
Shamik
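For reference, a load-test client along these lines might look like the sketch below. This is a hypothetical reconstruction, not the actual script used: the host, port 1024 (ChatScript's default), and worker count are assumptions. ChatScript's raw TCP protocol expects the username, bot name, and message as NUL-separated fields, with an empty bot name selecting the default bot.

```python
import socket
from multiprocessing import Pool

HOST, PORT = "localhost", 1024   # 1024 is ChatScript's default server port

def build_volley(user, bot, text):
    # ChatScript's raw TCP protocol: username, botname, message, NUL-separated
    return (user + "\0" + bot + "\0" + text + "\0").encode("utf-8")

def send_volley(user):
    # One connection per volley; the server replies and closes the socket
    with socket.create_connection((HOST, PORT), timeout=10) as s:
        s.sendall(build_volley(user, "", "Hi"))  # empty bot name = default bot
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def run_test(n_users=1000, workers=50):
    # Unique username per request, as in the test described above
    with Pool(workers) as pool:
        replies = pool.map(send_volley, ["user%d" % i for i in range(n_users)])
    empty = sum(1 for r in replies if not r.strip())
    print("empty replies: %d of %d" % (empty, len(replies)))

# run_test()  # uncomment with a ChatScript server running on localhost:1024
```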

 

 
  [ # 1 ]

Obviously any single server instance has a limit.  The fastest single-machine configuration is a Linux server with as many cores as you can get, good memory and networking, using the EVSERVER (default) build.  You get to specify how many clones to use (separate instances of a CS server all tied to the same port). For an 8-core machine, 25 clones might be reasonable.  I might expect 1K responses per second, but that’s just a guess.

 

 
  [ # 2 ]

Just a general FYI: the Mac can’t compile the ev server, because that module uses a system call not available on the Mac by design. I forget the details, but from my understanding there are several variants of the call and it is deliberately not available on the Mac. I briefly looked into a fix with the thought of contributing a patch, but the ev server library seems to be abandoned by its author, so it seemed pointless. I think writing a module for one of the popular open-source lightweight HTTP servers like nginx, lighttpd or monkey is probably the way forward: CS could be compiled directly into the server as a library/module and the server would take care of managing the concurrent clients over HTTP.  I might be convinced to take that on if there is interest and people willing to also contribute.

Bruce, is there any ongoing file IO besides logging during server operation once startup completes?  Besides :build, etc.

 

 
  [ # 3 ]

File IO (or, equivalently, network IO to a remote database server) happens twice in every user volley: once to load the user’s specific data and once to write it back out again.
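To make that cost model concrete, here is a minimal sketch of the read-modify-write pattern being described. The JSON file layout is purely illustrative (an assumption for the example); ChatScript’s real per-user topic files have their own format.

```python
import json, os, tempfile

def handle_volley(user, data_dir):
    """Illustrative per-volley IO: one read of the user's state, one write back."""
    path = os.path.join(data_dir, "topic_" + user + ".json")
    # IO #1: load this user's persistent state from disk
    if os.path.exists(path):
        with open(path) as f:
            state = json.load(f)
    else:
        state = {"volleys": 0}
    # ... the engine would compute the bot's reply here ...
    state["volleys"] += 1
    # IO #2: write the updated state back out
    with open(path, "w") as f:
        json.dump(state, f)
    return state["volleys"]

demo_dir = tempfile.mkdtemp()
print(handle_volley("shamik", demo_dir))  # first volley for this user
print(handle_volley("shamik", demo_dir))  # second volley: state was persisted
```

With many concurrent users, those two file operations per volley (or two round-trips to a remote database) become the fixed IO cost on top of the actual reply computation.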

 

 
  [ # 4 ]
Shamik Ray - Jan 27, 2017:

I did a very simple test use multiprocessing in python. […] I did this on a Mac. Any idea on how to improve the performance ? Am I missing something.

Can you run the same test on linux?  If you post your python load test client here I can try it on my beefy machine learning linux box and get you more realistic numbers.

 

 
  [ # 5 ]

RE - more detail regarding the evserver-on-Mac issue: sys/epoll.h is Linux-only.  The only recourse would be to port that call to kqueue, which does the same thing on the BSD flavors of Unix (there is also a libkqueue for Linux, so I suppose this would be a cross-*nix solution at least).
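The same epoll/kqueue split is visible from Python’s select module, which makes for a quick sketch of what a portable readiness check looks like. The helper below is hypothetical and only mirrors the idea; evserver itself is C, and a real port would call epoll/kqueue directly.

```python
import select, os

def wait_readable(fd, timeout=1.0):
    """Portable one-shot readiness check: epoll on Linux, kqueue on BSD/macOS."""
    if hasattr(select, "epoll"):            # Linux (the call evserver relies on)
        ep = select.epoll()
        ep.register(fd, select.EPOLLIN)
        events = ep.poll(timeout)
        ep.close()
        return len(events) > 0
    elif hasattr(select, "kqueue"):         # macOS / BSD equivalent
        kq = select.kqueue()
        ev = select.kevent(fd, filter=select.KQ_FILTER_READ,
                           flags=select.KQ_EV_ADD)
        events = kq.control([ev], 1, timeout)
        kq.close()
        return len(events) > 0
    return False

r, w = os.pipe()
os.write(w, b"x")                 # make the read end of the pipe readable
print(wait_readable(r))           # True: the pipe has data waiting
```

Both calls answer the same question ("which of these descriptors are ready?"), which is why a kqueue port of the epoll usage is feasible in the first place.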

 

 
  [ # 6 ]
Todd Kuebler - Jan 27, 2017:

Can you run the same test on linux?  If you post your python load test client here I can try it on my beefy machine learning linux box and get you more realistic numbers.


Here you go

File Attachments
chatscript_client.py  (File Size: 1KB - Downloads: 0)
 

 
  [ # 7 ]

Good news and bad news.

Good news:  the latest code base can actually compile evserver on the Mac!  After a bit more testing I’m going to commit a change to the makefile that adds a viable Mac evserver target.  This also runs better/faster than the normal Mac server binary, so great news all around.  I’m doing load testing with Apache JMeter to quantify the differences and get a start on answering questions like this with hard, reproducible data.  I’ll commit the JMeter project file into GitHub too so others can contribute and test.

Bad news:  I can’t download your attachment.  Can you just paste the code here inside code brackets?  I’m guessing it’s not very large.

Other bad news:  I was able to duplicate your problem (not to that extent though: only ~1% errors over 5k samples) using JMeter.  The good news above applies, i.e. the same exact load against the evserver version on macOS yields higher throughput and ZERO errors.

So it looks like your problem is real, but it can be fixed by compiling the evserver version on your Mac and using that.  Keep your eyes peeled for the new version in GitHub.

 

 
  [ # 8 ]
Todd Kuebler - Jan 27, 2017:

Just a general FYI the Mac can’t compile the ev server

Turns out the latest evserver included in ChatScript is smart enough to detect the OS and switch to using kqueue during compilation.

So the above is wrong; please disregard!

 

 
  [ # 9 ]

Great! Thanks, Todd, for taking the time. I will try the evserver build. One quick question: when I invoke MacChatScript from BINARIES, does it use the evserver build?

One last thing -

Bruce mentioned “You get to specify how many clones to use (separate instances of a CS server all tied to the same port)” -

when and where do we get to specify this?

Thanks again.

 

 
  [ # 10 ]

Use the fork= command-line parameter.
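For example (paths and values are illustrative; fork=N starts N clone processes all sharing one port, along the lines Bruce described for an 8-core box):

```shell
# Linux evserver build, 25 clones on ChatScript's default port 1024
./LinuxChatScript64 port=1024 fork=25
```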

 

 
  [ # 11 ]
Shamik Ray - Jan 30, 2017:

When I invoke MacChatscript from binaries does it use the evserver build?

I did get Xcode to build EVSERVER now, so yes, from now on BINARIES/MacChatScript will be an evserver build. What’s in GitHub now isn’t, though; I put in a pull request (https://github.com/bwilcox-1234/ChatScript/pull/56), and if that is approved it will be.

You can grab my changes right now by pulling that from git into your local repo, but I don’t know if that is how you are doing it or if you are unzipping the ChatScript zipped release.  Also, I’m not sure how git-savvy you are if you go the git route; cherry-picking changes across different git repositories took me a few years to get my head around.  :”)

What you have now is _NOT_ an EVSERVER build - the older versions couldn’t run on Macs because they didn’t support kqueue - the ev lib does now. smile

How do you know if you have an EV build?  Look for the ‘evcalled pid: xxxx’ message and the EVSERVER keyword in the startup header:

tkuebler ~/src/ChatScript BINARIES/MacChatScript
evcalled pid: 2675
ChatScript EVSERVER Version 7.12 pid: 2675 64 bit MACH compiled Jan 29 2017 21:35:41 host=local
 

 
  [ # 12 ]

Thanks again, Todd! grin
I will pull it from git.

 

 