Running Ollama directly on the host may introduce security vulnerabilities. From my research, it's best to run it through Docker, and performance should be the same.

I haven't found many good guides on this. I wrote mine because none of the ones I followed worked without exposing either app to the host network.

My guide was inspired by this video, which might help, even though his setup didn't work for me:

https://youtu.be/qY1W1iaF0yA

I will keep updating the guide as I learn how to improve my process. I might switch to Docker Compose, or write a startup script that sets everything up and hardens it for security. I might even take this as far as building a full app, so people stop potentially exposing their machines to the internet just to run local AI.
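If I do move to Docker Compose, the rough shape I'm going for looks something like the sketch below: Ollama and a front end share a private bridge network, neither container uses the host network, and the only published port is the UI, bound to localhost. Open WebUI, the ports, and the volume names here are just stand-ins, not my finished setup; swap them for whatever you actually run.

```yaml
# Rough sketch, not a finished setup. Open WebUI stands in for "the other app".
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama                  # persist downloaded models
    networks:
      - llm                                   # private bridge network; no ports published

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # reach Ollama by service name over the bridge
    ports:
      - "127.0.0.1:3000:8080"                 # UI reachable only from this machine
    networks:
      - llm
    depends_on:
      - ollama

volumes:
  ollama:

networks:
  llm:
    driver: bridge
```

The point of the `networks` section is that the two containers talk to each other over Docker's internal DNS (`http://ollama:11434`) rather than either of them running on the host network, and binding the published port to `127.0.0.1` keeps the UI off your LAN and the wider internet.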
