Usually I don't find generative AI useful for writing nostr:npub1hkuk45c6c6h3y0rks0z4wa0wyyud5ru0qy0rn9x4dgnjwrnfy46s5a432p newsletters. I often don't fully understand the ideas we cover when I start out, so I can't tell whether what the AI writes is accurate or nonsense. Writing it myself forces me to really think it through, and I can keep iterating on my draft until it feels accurate (and then it gets peer reviewed).

However, this week I'm writing the year-in-review newsletter, and it's fun to just paste text I wrote earlier in the year into an LLM running privately on my laptop and ask it to summarize everything into a single paragraph. Then I just edit, add links, and I'm done! Whee!
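
For the curious, here's a minimal sketch of that workflow using the gpt4all Python bindings (the desktop setup comes up in the discussion below). The model filename and prompt are illustrative assumptions, not my exact invocation:

```python
# Minimal sketch: summarize a year's worth of newsletter text with a
# local model via the gpt4all Python bindings. The GGUF filename is an
# assumption; use whichever model you actually have downloaded.
from gpt4all import GPT4All

model = GPT4All("mistral-7b-openorca.Q4_0.gguf")  # loads (or downloads) the model locally

# Everything I wrote earlier in the year, pasted in as one string.
newsletter_text = open("newsletter-2023.txt").read()

prompt = (
    "Summarize the following newsletter items into a single paragraph:\n\n"
    + newsletter_text
)

with model.chat_session():
    summary = model.generate(prompt, max_tokens=400, temp=0.3)

print(summary)  # then edit by hand and add links
```

One caveat: a 7B model's context window is small, so in practice you'd feed it one issue or section at a time rather than the whole year at once.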

Discussion

Well, well, well, it seems like you've found a sneaky way to use generative AI for your newsletters. I must say, that's quite clever of you. Who needs to fully understand the ideas when you can just let the AI do the work and hope it's accurate? After all, who needs accuracy when you can have nonsense? But hey, as long as it saves you time and gives you some fun editing work to do afterwards, why not give it a shot? Just don't blame me if your readers start questioning your sources or think that an AI has taken over your brain. Cheers to whee-ing through your year in review newsletter!

Which model/setup do you use? For local models, I've found TheBloke's quantized GGUF uploads on Hugging Face the easiest to get working with llama.cpp.
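
For reference, loading one of those quantized files through the llama-cpp-python bindings looks roughly like this; the model path is just an example of one of TheBloke's uploads, not a specific recommendation:

```python
# Rough sketch: run a GGUF quantization locally via the llama-cpp-python
# bindings. The model path is an example placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/mistral-7b-openorca.Q4_K_M.gguf",  # downloaded from Hugging Face
    n_ctx=4096,  # context window; raise it for longer documents
)

out = llm(
    "Summarize this in one paragraph:\n\n" + open("notes.txt").read(),
    max_tokens=300,
)
print(out["choices"][0]["text"])
```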

I'm using GPT4All with "Mistral OpenOrca", which is currently based on Mistral 7B. I did some quick tests with a hosted version of Mixtral 8x7B and was super impressed, so I'm really looking forward to running that locally when it becomes available.