Replying to Juraj

I have a new micro-project.

It lets you use a venice.ai lifetime Pro account with local apps that communicate over the Ollama API (open-webui, or continue.dev in your VS Code / JetBrains IDE).
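
For example, any client that speaks the Ollama API can talk to it. A minimal sketch in Python, assuming the proxy listens on Ollama's default port 11434 and exposes the model under the name below (both are assumptions, adjust to your setup):

```python
import requests

# Send an Ollama-style generate request to the proxy, which answers
# it with a Venice-hosted model instead of a local one.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama-3.1-405b",  # assumed model name exposed by the proxy
        "prompt": "Write a haiku about open-source models.",
        "stream": False,  # request one JSON response instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```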

Check out the demo video; it's quite nice.

I can also provide a pro account if you don't want to fiddle with MOR tokens.

I started doing this because llama-3.1-405b is really a great model (I think better than ChatGPT for many coding tasks), but I can't run it locally on my laptop.

With this, I have everything set up to make it work locally, with the best open-source model available today.

https://pay.cypherpunk.today/apps/26zEBNn6FGAkzvVVuDMz3SXrKJLU/crowdfund

hit with a hint of s 1y ago

So I can just start ollama on my crappy laptop and connect to lifetime venice instead of openai?


Discussion

Juraj 1y ago

Ollama runs your model locally.

This project is separate code from Ollama: it pretends to be Ollama but serves models through Venice.

So any app that would talk to a local model through Ollama can use Venice instead.
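
In other words, it's a small shim that exposes Ollama's HTTP API and forwards requests upstream. A rough sketch of the idea, not the actual project code; the Venice endpoint, auth header, and default model name are assumptions based on Venice's OpenAI-compatible API:

```python
import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumed Venice endpoint and credentials; the real project may differ.
VENICE_URL = "https://api.venice.ai/api/v1/chat/completions"
VENICE_KEY = os.environ["VENICE_API_KEY"]


@app.post("/api/generate")
def generate():
    """Accept an Ollama-style generate request and answer it via Venice."""
    body = request.get_json()
    upstream = requests.post(
        VENICE_URL,
        headers={"Authorization": f"Bearer {VENICE_KEY}"},
        json={
            "model": body.get("model", "llama-3.1-405b"),
            "messages": [{"role": "user", "content": body["prompt"]}],
            "stream": False,
        },
        timeout=300,
    )
    upstream.raise_for_status()
    text = upstream.json()["choices"][0]["message"]["content"]
    # Reply in the shape Ollama clients expect from /api/generate.
    return jsonify({"model": body.get("model"), "response": text, "done": True})


if __name__ == "__main__":
    # Listen on Ollama's default port so existing clients need no changes.
    app.run(host="127.0.0.1", port=11434)
```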
