nostr:npub10qdp2fc9ta6vraczxrcs8prqnv69fru2k6s2dj48gqjcylulmtjsg9arpj How do I configure Stacks if I want to use a local LLM?
Discussion
It's not officially supported right now, but you're not the only one trying to make it happen! Maybe check out some of the discussion on this thread for some leads?
If you guys would be willing to have nostr:npub1q3sle0kvfsehgsuexttt3ugjd8xdklxfwwkh559wxckmzddywnws6cd26p or someone else in your sphere implement Ollama as an AI provider option for Stacks, that would be fantastic.
That way, anyone running Ollama wouldn't need to provide an API key.
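For anyone poking at this in the meantime, the integration surface is pretty small: Ollama serves a local HTTP API on port 11434 by default, no key required. Here's a rough sketch in TypeScript of what a provider call could look like (the function name and model name are just illustrative, not actual Stacks code):

```typescript
// Rough sketch of calling Ollama's local generate endpoint -- not Stacks code.
// Assumes Ollama is running on its default port (11434) and that a model
// (here "llama3", illustrative) has already been pulled.
async function generateWithOllama(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    // stream: false asks for one complete JSON response instead of chunks
    body: JSON.stringify({ model: "llama3", prompt, stream: false }),
  });
  if (!res.ok) throw new Error(`Ollama request failed: ${res.status}`);
  const data = await res.json();
  return data.response; // full generated text when stream is false
}
```

Ollama also exposes an OpenAI-compatible endpoint at http://localhost:11434/v1, so if Stacks already has an OpenAI provider, pointing its base URL at localhost with a placeholder key might be the quickest path.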
I'm going to start asking similar questions soon, I think.