The mega-based LLM has arrived

nostr:nevent1qvzqqqqqqypzp8lvwt2hnw42wu40nec7vw949ys4wgdvums0svs8yhktl8mhlpd3qyg8wumn8ghj7efwdehhxtnvdakz7qgwwaehxw309ahx7uewd3hkctcpz9mhxue69uhkummnw3ezumrpdejz7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qgswaehxw309ahx7um5wghx6mmd9uqzp32hv9zqylps6x90kp4cwt30vzmc7sn7nlrntu82pxmufcdt75fnvdn6y4

Discussion

A few answers are a bit silly, but it’s good to be able to retrain open models. Hopefully they keep up with the closed foundation models.

So, Nostriches are generally fans of conspiracy theories and unprovable faith-based beliefs?

Yes, let's instead resort to ad hominem attacks and loaded language.

No, let’s not. I’m simply saying that those answers are not improvements. I’d rather have an LLM that errs toward evidence-based knowledge.