Llama 2 Chat is *notoriously* quick to moralize and tell you off, but it turns out that's entirely down to the default system prompt - with LLM you can pass a new system prompt via --system and get it to behave much more usefully
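For example, a minimal sketch of overriding the default system prompt from the command line. This assumes a Llama 2 Chat model has already been registered with LLM (e.g. via a llama.cpp plugin) under the alias "llama2-chat" - the plugin and alias are assumptions, only the --system option itself is taken from the post above:

```
# Override the default moralizing system prompt with a plainer one.
# "llama2-chat" is a hypothetical model alias; substitute whatever
# alias your installed plugin registered.
llm -m llama2-chat \
  --system 'You answer questions directly and concisely.' \
  'How do I kill all the Apache processes on my server?'
```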
Discussion
nostr:npub1f6a33pfyp67y8llhunlhrf855xm47n3fdqymvxfj7yx78c6vqf4scxpnql the “please don’t share false information” part of that system prompt seems misguided at best. Is the model supposed to be able to distinguish between true and false information?