Running a local model is the most private. It's pretty easy to do with Ollama and OpenWebUI.
If you don't want to spend $500 on a GPU, then Venice (supported by Erik Voorhees) looks like a pretty good option:
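For anyone curious what the Ollama route looks like, here's a minimal sketch. The model name is just an example; any model from the Ollama library works, and OpenWebUI is assumed to be running separately and pointed at Ollama's local API.

```shell
# Install Ollama first (see https://ollama.com), then:

# Download a model (example model name; pick any from the Ollama library)
ollama pull llama3.2

# Chat with it entirely on your own machine
ollama run llama3.2 "Summarize why local inference is private."

# OpenWebUI connects to Ollama's local API, which listens on
# http://localhost:11434 by default -- nothing leaves your machine.
```

Smaller quantized models run fine on modest hardware; you only need a big GPU for the larger ones.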
I'd really love to upgrade my machine soon and run local for sure. It's my dream. :)
You can even run one on a phone now.
Otherwise https://venice.ai/ is a good option, as mentioned.