Exactly.
This is also a glimpse of why current AI versus Barry Smith-style ontology knowledge (which NASA [1] and early iterations of Siri used) cannot compete at current civilizational systems complexity. We have to process extreme amounts of movement data, and that data is centralized rather than held in collaborative maps. Google Maps is best because it can process the real-time flow of exhaust data coming from Google's own products (and from Maps users driving in cars). This distinction is dear to me.

The funny thing is that we are now backfilling LLMs with knowledge graphs via RAG to mitigate hallucinations; I suppose for critical systems we might continue that. But here is the catch: current AI *requires movement*. The correlation between map products and ontologies is relevant. By relying *only* on centralized platforms that build models, as systems change/fail/emerge, only those centralized models that constantly rebuild from fresh data will succeed, and current AI takes a great deal of energy, minerals, connectivity, money, centralization, etc., to make that happen.

We *could* have focused on distributed knowledge via collaborative maps, but the systems became too complicated and changed too quickly. Curse Tim Berners-Lee for caving on that, even though it is the pragmatic choice (he literally said something along the lines of "it's okay to toss the RDFS/triples/OWL style because we get similar results from LLM-style AI"). But praise Tim Berners-Lee for his early vision and later the Semantic Web. Tip o' the hat to you, sir.
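To make the "backfilling LLMs with knowledge graphs via RAG" idea concrete, here is a minimal sketch. It is not any specific product's implementation; the triples, function names, and data are all illustrative. The point is just the shape of the technique: keep facts as subject-predicate-object triples, retrieve the ones that mention the entity in question, and inject them into the prompt so the model answers from verifiable statements rather than free association.

```python
from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object), RDF-style

# Toy knowledge graph; in practice this would be a triple store.
KG: List[Triple] = [
    ("Siri", "initially_used", "ontology-based knowledge"),
    ("Google Maps", "ingests", "real-time movement data"),
    ("RAG", "mitigates", "hallucinations"),
]

def retrieve(entity: str, kg: List[Triple]) -> List[Triple]:
    """Return every triple mentioning the entity as subject or object."""
    return [t for t in kg if entity in (t[0], t[2])]

def build_prompt(question: str, facts: List[Triple]) -> str:
    """Prepend the retrieved facts as grounding context for the LLM call."""
    context = "\n".join(f"- {s} {p} {o}." for s, p, o in facts)
    return f"Answer using only these facts:\n{context}\n\nQ: {question}"

# The assembled prompt would then be sent to the LLM of your choice.
prompt = build_prompt("What does RAG mitigate?", retrieve("RAG", KG))
print(prompt)
```

Even this toy version shows the trade-off discussed above: the graph only helps if someone keeps its triples fresh, which is exactly the maintenance burden that made the centralized, movement-data approach win out.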