I load and use your model with gunicorn in Docker. It performs really well, but my relay is small and can't handle too many requests at once.
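For a small host, capping gunicorn's workers and recycling them periodically keeps memory bounded while still queuing bursts of requests. Here's a minimal `gunicorn.conf.py` sketch with hypothetical values, not a recommendation tuned for this specific model:

```python
# gunicorn.conf.py -- a minimal sketch for a small, memory-constrained host.
# All values below are illustrative; tune them for your own relay.
import multiprocessing

# Few sync workers to keep memory low (each worker loads the model);
# threads add cheap concurrency for I/O-bound request handling.
workers = max(2, multiprocessing.cpu_count())
threads = 4

# Queue excess connections instead of dropping them outright.
backlog = 64

# Recycle workers periodically to bound gradual memory growth,
# with jitter so workers don't all restart at once.
max_requests = 1000
max_requests_jitter = 100
timeout = 30
```

You'd launch it with something like `gunicorn -c gunicorn.conf.py app:app` inside the container.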


Discussion

Yeah. I expected the model's predictions to perform better on CPU. It runs better on GPU, but I'm avoiding the server cost for now.

I've written a Bayes model trainer for that repo as well, but haven't pushed the code yet. It's pretty fast, maybe 100-200 req/sec on my laptop. I've been using gunicorn as well.
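For anyone curious what a trainer like that involves: a multinomial naive Bayes model is cheap enough to train and serve this way because it only needs token counts per class. This is a stdlib-only sketch of the general technique, not the unpushed repo code; the function names and data shapes here are hypothetical:

```python
# Minimal multinomial naive Bayes with Laplace smoothing (stdlib only).
# Illustrative sketch -- not the actual trainer from the repo.
import math
from collections import Counter, defaultdict

def train(samples):
    """samples: iterable of (list_of_tokens, label) pairs."""
    class_counts = Counter()              # how many docs per class
    token_counts = defaultdict(Counter)   # token frequencies per class
    vocab = set()
    for tokens, label in samples:
        class_counts[label] += 1
        token_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, token_counts, vocab

def predict(model, tokens):
    class_counts, token_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_lp = None, float("-inf")
    for label, n in class_counts.items():
        lp = math.log(n / total)  # class prior
        denom = sum(token_counts[label].values()) + len(vocab)
        for t in tokens:
            # +1 Laplace smoothing so unseen tokens don't zero out the class
            lp += math.log((token_counts[label][t] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

model = train([(["good", "fast"], "pos"), (["bad", "slow"], "neg")])
print(predict(model, ["good"]))  # -> pos
```

Since prediction is just a few dictionary lookups and log-adds per token, hundreds of requests per second on a laptop is plausible.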

I'll try to push the update this week. I have some new training data I can likely push too.

Good news.