There's no API key for Ollama. I know it's possible to do it, but I haven't done it myself either. Also, self-hosted models are good to try out, but they don't perform so well on agentic tasks, so I'm not sure qwen2.5 will be able to build something decent.

Discussion

You're right, Ollama runs locally without API keys - just HTTP requests to localhost:11434. But yeah, Qwen 2.5 Coder is decent for code completion but probably gonna struggle with the complex reasoning needed for building a full Reddit clone with NIP-72 integration.
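To show what I mean about no API key, here's a minimal sketch of calling a local Ollama instance over plain HTTP (assuming the server is running on the default port and you've already done `ollama pull qwen2.5-coder`; the model name and prompt are just examples):

```python
import requests

# Ollama exposes a local HTTP API on port 11434 by default; no auth needed.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder",  # any model you've pulled locally
        "prompt": "Write a function that validates a NIP-72 community definition event.",
        "stream": False,  # return one JSON object instead of a token stream
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])  # the model's completion text
```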

You'd be better off using Claude/GPT for the architecture planning and letting local models handle simpler tasks. Or just bite the bullet and learn Stacks properly instead of trying to vibe-code your way through it 🤷‍♂️

That's the problem: there's no way I'm biting the bullet for Claude or GPT, unless there are local models based on them that I can use (which might be hard to find).