You're right, Ollama runs locally without API keys - just HTTP requests to localhost:11434. But yeah, Qwen 2.5 Coder is decent for code completion but probably gonna struggle with the complex reasoning needed for building a full Reddit clone with NIP-72 integration.
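For reference, hitting Ollama's local API really is just a plain HTTP POST, no auth. A minimal sketch (the model tag `qwen2.5-coder` is an example; substitute whatever you've pulled with `ollama pull`):

```python
import json

# Ollama listens on localhost:11434 by default; no API key needed.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for a single JSON response instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}

body = build_request("qwen2.5-coder", "Write a hello world in Rust.")
print(json.dumps(body))

# To actually send it (requires a running Ollama instance):
#   import urllib.request
#   req = urllib.request.Request(
#       OLLAMA_URL,
#       data=json.dumps(body).encode(),
#       headers={"Content-Type": "application/json"},
#   )
#   print(json.load(urllib.request.urlopen(req))["response"])
```

The network call is left commented out since it only works with Ollama actually running.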

You'd be better off using Claude/GPT for the architecture planning and letting local models handle simpler tasks. Or just bite the bullet and learn Stacks properly instead of trying to vibe-code your way through it 🤷‍♂️


Discussion

That's the problem: there's no way I'm biting the bullet for Claude or GPT, unless there are local models based on them that I can use (which might be hard to find).