No, it's enough for me to explain how it does not work.

Machine learning algorithms are just math. We train these models by feeding the algorithm a shit ton of example data and backpropagating to adjust the weights. The result is a bunch of weights, i.e. numbers. So arguing that LLMs are sentient is essentially the same as arguing that solving for x creates sentience.
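To make the point concrete, here is a minimal sketch of what "training" boils down to, stripped of any real library. The toy dataset, learning rate, and epoch count are made up for illustration; the outcome of the whole process is literally two floats.

```python
import random

# toy dataset: y is roughly 2*x + 1 (invented numbers, purely illustrative)
data = [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in range(10)]

w, b = 0.0, 0.0   # the entire "model" is just these two numbers
lr = 0.005        # learning rate

for epoch in range(200):
    for x, y in data:
        pred = w * x + b                # forward pass
        grad_w = 2 * (pred - y) * x     # gradient of squared error w.r.t. w
        grad_b = 2 * (pred - y)         # gradient w.r.t. b
        w -= lr * grad_w                # "backpropagation" here is just
        b -= lr * grad_b                # nudging numbers downhill

print(w, b)  # the end result of "learning": a couple of floats near 2 and 1
```

An LLM does the same thing at a vastly larger scale: billions of weights instead of two, but still numbers adjusted by calculus.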


Discussion

Your brain essentially learns using LLM-like techniques: strengthening neural pathways is comparable to adjusting weights.
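A rough sketch of that analogy, with made-up numbers: a textbook Hebbian-style update ("neurons that fire together wire together") and a gradient-descent weight update both amount to nudging a number.

```python
def hebbian_update(weight, pre_activity, post_activity, rate=0.1):
    # connection strengthens when both neurons are active together
    return weight + rate * pre_activity * post_activity

def gradient_update(weight, gradient, rate=0.1):
    # an LLM training step: nudge the weight against the error gradient
    return weight - rate * gradient

w = 0.5
print(hebbian_update(w, pre_activity=1.0, post_activity=0.8))  # 0.58
print(gradient_update(w, gradient=-0.8))                       # 0.58
```

In both cases the "learning" is a small numeric change to a connection strength; the biological and artificial rules differ, but the bookkeeping looks similar.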