No, it's enough for me to explain how it does not work.
Machine learning algorithms are just math. We train these models by feeding the algorithm a shit ton of example data, then running backpropagation to nudge the weights until the outputs get closer to the targets. The end result is a bunch of weights, i.e. numbers. So arguing that LLMs are sentient is essentially the same as arguing that solving for x creates sentience.
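To make that concrete, here's the whole pipeline in toy form (a sketch of gradient descent on a one-weight model, obviously nothing like the scale of a real LLM, but the same kind of math):

```python
# Toy "training": learn a single weight w for the model y = w * x.
# Everything the model ever "learns" is this one number.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs

w = 0.0    # the model's only weight, starts arbitrary
lr = 0.01  # learning rate

for step in range(1000):
    # gradient of mean squared error with respect to w:
    # d/dw (w*x - y)^2 = 2 * (w*x - y) * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # backpropagation in its simplest possible form

print(w)  # converges to ~2.0, i.e. we just solved for w
```

An LLM is this with billions of weights instead of one, but the output of training is the same kind of thing: a pile of numbers that were fit to data.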