Running a local model is the most private option. It's pretty easy to do with Ollama and OpenWebUI.
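
If you want to poke at it programmatically, here's a minimal sketch of querying a local Ollama instance from Python. It assumes Ollama is installed and serving its default REST API on http://localhost:11434, and that you've already pulled a model ("llama3" here is just a placeholder name); OpenWebUI talks to this same API under the hood:

```python
import requests

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    # Ollama's /api/generate endpoint returns the whole completion
    # as a single JSON object when streaming is disabled.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Everything stays on your machine: no prompt or response
    # ever leaves localhost.
    print(ask_local_model("Why is running models locally more private?"))
```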

If you don't want to spend $500 on a GPU, then Venice (supported by Erik Voorhees) looks like a pretty good option:

https://venice.ai/?r=0

Discussion

I'd really love to upgrade my machine soon and run local for sure. It's my dream. :)

You can even run one on a phone now 😂

Otherwise, https://venice.ai/ is a good option, as mentioned.