LLM builders, in general, are not doing a great job of making human-aligned models.

The most probable cause is the reckless training of LLMs on the outputs of other LLMs, without caring about dataset curation and without asking "what is beneficial for humans?"

Here is the trend for several months:


Discussion

Pretty insightful. You must remember, however, that these are always correlations (improved by exposure to Nostr, but correlations nonetheless).

Yes, we are working with probability clouds. Nostr is special: a very bountiful cloud with so much beneficial rain.