But why would people even use OP_RETURN when they can pay a quarter of the fees with the witness discount? Doesn't OP_RETURN just rely on goodwill (at 4x the fee cost) from people who would otherwise be making unspendable outputs?
You can use it as a chat with “ollama run [modelname]”, but it also ships a systemd service that runs as a REST API, so you can build anything with it that you’d build with the cloud APIs.
CodeCompanion.nvim is a good example that provides functionality similar to Copilot in neovim locally.
One gotcha with ollama is quantization: when you don’t ask for a particular one, it downloads the Q4_0 quantization of a model by default. The general sentiment today is that there are better quantizations at the same size, and that Q4_0 renders many of the small models useless. A good middle-ground value is Q6_K; you can figure out how to pick particular quants from the model index on the ollama website.
Models to try, in whatever size fits your hardware, are llama3.2, gemma2, mistral-nemo, and qwen2.5.
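To make that concrete, here’s a sketch of pulling an explicit Q6_K quant and hitting the REST API. The exact model tag is an example (check the ollama model index for the tags that actually exist for a given model), and the endpoint is ollama’s default of localhost:11434:

```shell
# Pull an explicit Q6_K quant instead of the default Q4_0
ollama pull qwen2.5:7b-instruct-q6_K

# Chat with it interactively
ollama run qwen2.5:7b-instruct-q6_K

# Or hit the REST API exposed by the systemd service (default port 11434);
# "stream": false returns one JSON object instead of a token stream
curl http://localhost:11434/api/generate -d '{
  "model": "qwen2.5:7b-instruct-q6_K",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```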
Yep, completely unnecessary and borderline meaningless clarification from someone just trying to shake what their mama gave ’em and make some noise.
Why do you expect it to?