what did you say?

It's a weird model. Sometimes it says it doesn't know, or that nobody knows. For example, I asked "what does Nostr stand for?" and it said "nobody knows". Somehow it estimates that it doesn't know the answer?

That kind of answer never happened with Llama 3; it always hallucinated rather than admitting it didn't know.
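If you want to poke at the "does it know that it doesn't know" question yourself, here's a minimal sketch of one way to do it: look at the probability the model assigns to each token of its own answer. This is just an illustration, not how the model in question actually decides anything; it assumes a Hugging Face-style causal LM, and the model name is a placeholder.

```python
# Sketch: inspect per-token probabilities of a generated answer as a rough
# uncertainty signal. Assumes a Hugging Face-style causal LM; the model name
# is a placeholder, swap in whatever you're testing.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "What does Nostr stand for?"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short answer and keep the per-step scores so each emitted
# token can be scored afterwards.
out = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,
    output_scores=True,
    return_dict_in_generate=True,
)

# Probability the model assigned to each token it actually produced.
# Consistently low values are one (crude) hint that it is unsure.
gen_tokens = out.sequences[0, inputs["input_ids"].shape[1]:]
for tok, step_scores in zip(gen_tokens, out.scores):
    p = torch.softmax(step_scores[0], dim=-1)[tok].item()
    print(f"{tokenizer.decode(tok)!r}: {p:.3f}")
```

Low per-token probability isn't the same thing as the model "estimating" it doesn't know, but it's a cheap way to see whether "nobody knows" comes out confidently or not.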


Discussion

I was challenging it to address the idea that its creation and creators were parasitic. The responses were actually pretty good and avoided abdicating any understanding or responsibility.

I'm interested in how these models don't have a sense of self, yet are getting better at simulating one. People use terms like "autoregressive" and "hallucination" as pejoratives, and I think they're ignoring some fundamental things about their own biological processes.

To be clear, I don’t think this model is alive, sentient, conscious, or anything other than a bag of floats. But the output as a reflexive response to the input is real.