Replying to PlebInstitute

nostr:npub10qdp2fc9ta6vraczxrcs8prqnv69fru2k6s2dj48gqjcylulmtjsg9arpj How do I configure Stacks if I want to use a local LLM?

It's not officially supported right now, but you're not the only one trying to make it happen! Maybe check out the discussion in this thread for some leads?

nostr:nevent1qvzqqqqqqypzpkdr9xh6hvnr4zhjz6pcadz7ruzvcvqqwpzxf9rufwg8uxl0tqxhqqs9yknxqzyjftrse4mqjjyykpk7l3hmhcjrdrzy2qk8emnzhkqrnyg2a9wfu


Discussion

If you guys would be willing to have nostr:nprofile1qydhwumn8ghj7emvv4shxmmwv96x7u3wv3jhvtmjv4kxz7gpz4mhxue69uhhyetvv9ujuerpd46hxtnfduhsqgqyv87tanzvxd6y8xfj66u0zynfendhejtn44a9pt3k9kcntfr5m57rmess or someone else in your sphere implement Ollama as an AI provider option for Stacks, that would be fantastic.

That way, anybody using Ollama doesn't need to provide an API key.
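
For anyone who wants to experiment before official support lands, here's a rough sketch of what an Ollama provider could look like. The Ollama REST endpoint is real (`POST /api/chat` on `localhost:11434`, unauthenticated by default, so no API key is involved), but the `AiProvider` interface and `OllamaProvider` class are hypothetical names, since Stacks' actual provider abstraction isn't shown here.

```typescript
// Hypothetical provider interface -- Stacks' real abstraction may differ.
interface AiProvider {
  chat(prompt: string): Promise<string>;
}

class OllamaProvider implements AiProvider {
  // No API key: Ollama serves locally and is unauthenticated by default.
  constructor(
    private baseUrl = "http://localhost:11434",
    private model = "llama3",
  ) {}

  async chat(prompt: string): Promise<string> {
    // Ollama's documented chat endpoint; stream: false returns one JSON body.
    const res = await fetch(`${this.baseUrl}/api/chat`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: "user", content: prompt }],
        stream: false,
      }),
    });
    if (!res.ok) throw new Error(`Ollama returned ${res.status}`);
    const data = await res.json();
    return data.message.content;
  }
}

// Usage: const reply = await new OllamaProvider().chat("Summarize this note");
```

Since the whole point is avoiding hosted keys, the only configuration a user would need is the base URL and model name, both of which default to a stock local Ollama install.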