We are social beings and want to be heard. We interpret the world and share our interpretations to see what sticks. If something sticks, maybe there is value to that interpretation? And so conversations are 99% repeated and recycled arguments, but we still want to share and hear them, as we are not a hive mind that would know a priori what sticks and what does not. To make things even worse, things stick differently over time. The Overton window slides. We have to update all these billions of brains one brain and one interpretation/meme at a time.
We would probably not need full-blown AI, just state-of-the-art LLMs, to take these feedback loops "offline". Imagine a local LLM trained on all of your follows' conversations. It could anticipate individual reactions and even argue about them. Alice will probably give you a 🤝, Bob might zap you, and Carol will not react at all, as she never chimes in on this topic.
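A minimal sketch of the idea, with everything hypothetical: the follower names, the `FollowerModel` class, and the keyword matching are toy stand-ins. A real version would replace `predict_reaction` with a local LLM fine-tuned or prompted on each follower's past conversations.

```python
# Toy stand-in for per-follower reaction prediction.
# In practice, predict_reaction would call a local LLM conditioned on that
# follower's conversation history; here it is a trivial keyword model.

from dataclasses import dataclass, field


@dataclass
class FollowerModel:
    name: str
    liked_topics: set = field(default_factory=set)   # topics they tend to 🤝
    zapped_topics: set = field(default_factory=set)  # topics they tend to zap

    def predict_reaction(self, post: str) -> str:
        words = set(post.lower().split())
        if words & self.zapped_topics:
            return "zap"
        if words & self.liked_topics:
            return "🤝"
        return "no reaction"


followers = [
    FollowerModel("Alice", liked_topics={"nostr"}),
    FollowerModel("Bob", zapped_topics={"nostr"}),
    FollowerModel("Carol"),  # never chimes in on this topic
]

draft = "thoughts on nostr adoption"
for f in followers:
    print(f"{f.name}: {f.predict_reaction(draft)}")
```

Running this prints a predicted reaction per follower before anything is published, which is exactly the "offline feedback loop" described above, just with a lookup table where the LLM would go.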
I find this thought kind of creepy. Imagine writing something thoughtful, getting your feedback from the LLM, and then never sending the post, either satisfied with, or scared off by, the "fake" feedback.
I would still want such an assistant, running locally and checking grammar and style too, because why not? The result would be more engaging posts, and if it were integrated into clients, it could raise the quality of posts on a broad scale, making a dent in nostr's adoption as a whole.