Which model/setup do you use? For local models, I've found TheBloke's quantized models on Hugging Face the easiest to get working with llama.cpp.
Usually I don't find generative AI useful for writing nostr:npub1hkuk45c6c6h3y0rks0z4wa0wyyud5ru0qy0rn9x4dgnjwrnfy46s5a432p newsletters. I often don't start out fully understanding the ideas we cover and so I can't tell if what the AI writes is accurate or nonsense. Writing it myself forces me to really think about it, and I can just keep iterating on my draft until it feels accurate (and then it gets peer reviewed).
However, this week I'm writing the year-in-review newsletter, and it's fun to just paste my text from earlier in the year into an LLM running privately on my laptop and ask it to summarize everything into a single paragraph. Then I just edit, add links, and I'm done! Whee!
Discussion
Using GPT4All with "Mistral OpenOrca", which is currently based on Mistral 7B. I did some quick tests with a hosted version of the 8x7B Mixtral model and I was super impressed, so I'm really looking forward to running that locally when it becomes available.
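For anyone curious, the "paste in my old text and ask for a one-paragraph summary" workflow can also be scripted with GPT4All's Python bindings instead of the desktop app. This is just a sketch: the model filename below is an assumption (check the GPT4All model list for the exact Mistral OpenOrca GGUF name), and the excerpt strings are placeholders, not real newsletter text.

```python
# Sketch: summarize a year's worth of newsletter excerpts into one paragraph
# with a local model via the gpt4all Python bindings (pip install gpt4all).
# Everything runs on-device; nothing is sent to a hosted service.

def build_summary_prompt(excerpts):
    """Join the pasted excerpts and ask the model for a single-paragraph summary."""
    joined = "\n\n---\n\n".join(excerpts)
    return (
        "Summarize the following newsletter excerpts from this year "
        "into a single paragraph:\n\n" + joined
    )

if __name__ == "__main__":
    # Placeholder excerpts -- substitute your own text from earlier in the year.
    excerpts = [
        "Excerpt from an early-year issue...",
        "Excerpt from a mid-year issue...",
    ]
    try:
        from gpt4all import GPT4All

        # Model filename is an assumption; GPT4All downloads it on first use.
        model = GPT4All("mistral-7b-openorca.Q4_0.gguf")
        print(model.generate(build_summary_prompt(excerpts), max_tokens=400))
    except ImportError:
        # gpt4all not installed: just show the prompt that would be sent.
        print(build_summary_prompt(excerpts))
```

The model output still needs the same editing pass described above (fixing details, adding links) before it's usable.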