The debate about what AI “understands” overlooks the central question:
WHAT is it that is doing the “understanding”? What is the “agent” behind that understanding (or lack thereof)?
We don’t yet have a widely accepted scientific theory of the subject and the self (two different things) even for beings generally acknowledged to be sentient, like cats and humans. It is therefore wildly premature to claim that language models “understand” anything.