still working on the response rendering, but I was able to get dave working with my Ollama instance over WireGuard (on a plane!)

a private, local, Nostr AI assistant.


Discussion

👀

is this its own ollama model or can you run it on smth small like gemma3:1b?

you can run it on any openai-compatible AI backend with tools support

in this case I was using a random tools-capable model I already had on my Ollama server:

hhao/qwen2.5-coder-tools:latest
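For anyone curious what "openai-compatible with tools support" means in practice, here's a rough sketch of the kind of request such a client could send to Ollama's OpenAI-compatible endpoint. The endpoint path is Ollama's documented `/v1/chat/completions` route, but the tool name and schema below are purely illustrative assumptions, not dave's actual tools:

```python
import json

# Ollama exposes an OpenAI-compatible route at /v1/chat/completions.
# Everything below is an illustrative sketch, not dave's real payload.
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

request_body = {
    "model": "hhao/qwen2.5-coder-tools:latest",
    "messages": [
        {"role": "user", "content": "What's new from the people I follow?"}
    ],
    # Any backend that accepts this OpenAI-style "tools" array should work.
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "fetch_notes",  # hypothetical tool name
                "description": "Fetch recent Nostr notes from followed pubkeys",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "limit": {
                            "type": "integer",
                            "description": "Max notes to return",
                        }
                    },
                    "required": ["limit"],
                },
            },
        }
    ],
}

# The client would POST this JSON body to OLLAMA_URL with an HTTP library.
print(json.dumps(request_body, indent=2))
```

Since the request shape is plain OpenAI chat-completions JSON, you could point the same client at any other backend that speaks the format, as long as the model behind it can actually emit tool calls.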

oh cool!

That’ll be dope! I’m sick of OpenWebUI

Cool. I was catching up on white lotus. Same same right? 🤣

🤗