Replying to U-P-G-R-A-Y-E-D-D

Running a local model is the most private option. It's pretty easy to do with Ollama and Open WebUI.
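For anyone who wants to try the Ollama route, a minimal sketch looks roughly like this (assumes Linux/macOS with Docker installed; the model name is just an example, swap in whatever fits your hardware):

```shell
# Install Ollama via its official install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model to local disk, then chat with it entirely offline
ollama pull llama3.1
ollama run llama3.1 "Hello"

# Optional: Open WebUI as a browser front end, pointing at the
# local Ollama server on the host
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```

Once the container is up, the UI is at http://localhost:3000 and nothing leaves your machine.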

If you don't want to spend $500 on a GPU, then Venice (supported by Erik Voorhees) looks like a pretty good option:

https://venice.ai/?r=0

Diyana 10mo ago

I'd really love to upgrade my machine soon and run a local model for sure. It's my dream. :)


Discussion

⚑ Dee Kay βš‘πŸ‡ΈπŸ‡ͺπŸ‡¬πŸ‡§πŸ‡¨πŸ‡ΏπŸ‡§πŸ‡·πŸ‡¦πŸ‡Ή 10mo ago

You can even run one on a phone now 😂

Otherwise https://venice.ai/ is a good option, as mentioned.
