When AI large language models act like politically correct pussies, it is not just annoying; it makes the service worse. Example: I told one to summarize my lecture on crypto volatility. Instead of a pure summary, it appended a lecture on cryptocurrency risks, volatility, and the possibility of loss to every part (I summarize the lecture part by part). The result is unnecessarily long and contains information that was not in the original text, which degrades the final summary.
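Roughly, the part-by-part summarization looks like the minimal Python sketch below. The `llm_complete` wrapper is a hypothetical placeholder for whatever model you actually call (hosted API or local), not a real library function; the point is that the prompt explicitly pins the model to the source text, so any bolted-on risk lecture is off-spec output, not something I asked for.

```python
def llm_complete(prompt: str) -> str:
    # Placeholder so the sketch runs end to end.
    # Swap in your real model call (hosted API or local model) here.
    return f"[model summary of a {len(prompt)}-char prompt]"

def summarize_in_parts(text: str, chunk_chars: int = 4000) -> str:
    """Summarize a long lecture part by part and join the part summaries."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    summaries = []
    for chunk in chunks:
        prompt = (
            "Summarize the following lecture excerpt. "
            "Use only information present in the excerpt; "
            "do not add warnings, disclaimers, or advice.\n\n"
            + chunk
        )
        summaries.append(llm_complete(prompt))
    return "\n\n".join(summaries)
```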

When I tell a language model to summarize something, it should take the content and make it shorter, not lecture me on risk.

We need normal models. And we need Sam Altman and his cronies to leave us alone.

(Yes, I do cover the risks in the lecture, in another part.)


Discussion

Heh heh... I asked ChatGPT to summarize some YouTube videos of people describing their near-death experiences. I had no interest in watching those long-ass videos; I was just curious what their stories were. Each time, ChatGPT added a disclaimer about how NDEs are unscientific and shouldn't be taken seriously. So the next time I told it to give me the summary without the lecture; I said I didn't care, I just wanted the summary. It *still* added the disclaimer language.

Nanny AI.

I just had the same experience with an open-source, LLaMA-based Vicuna model.

They all learn this from ChatGPT.

There are some uncensored models; I will try them out. Right now they don't fit into my pipeline (software version and data format issues).