yes, LLMs are literally semantic maps

i was dreaming this idea up back in 1999, but it needs stuff from cryptography to implement

semantic maps are basically ... yeah, graphs
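to make that concrete, here's a minimal sketch of a semantic map as a weighted graph: concepts as nodes, relatedness as edge weights. this is my own toy illustration, not anything pulled from an actual LLM; the "embedding" vectors and the similarity threshold are made-up assumptions, just to show the shape of the idea.

```python
from itertools import combinations
from math import sqrt

# toy "embeddings" -- completely made-up vectors, only for illustration
embeddings = {
    "dog":    [0.9, 0.8, 0.1],
    "cat":    [0.85, 0.75, 0.15],
    "wolf":   [0.8, 0.9, 0.05],
    "teapot": [0.1, 0.2, 0.95],
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# build the semantic map as a weighted graph: nodes are concepts,
# edges connect pairs whose similarity clears an (arbitrary) threshold
THRESHOLD = 0.9
graph = {word: {} for word in embeddings}
for a, b in combinations(embeddings, 2):
    w = cosine(embeddings[a], embeddings[b])
    if w >= THRESHOLD:
        graph[a][b] = w
        graph[b][a] = w

for node, neighbours in graph.items():
    print(node, "->", {n: round(w, 2) for n, w in neighbours.items()})
```

running it links dog, cat, and wolf to each other and leaves teapot isolated, which is the whole point: the graph structure IS the semantics.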

i don't believe you can make intelligence within the size of even the biggest models like claude 3.7 and others, and those already require, i think, something nearing terabytes of data
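as a rough back-of-envelope on that size claim: nobody publishes claude's parameter count, so the numbers below are assumptions, not facts about any specific model. the point is only the order of magnitude, since weights alone at 16 bits per parameter put models in the hundreds of billions of parameters at hundreds of gigabytes.

```python
# back-of-envelope only: the parameter counts here are assumptions,
# not published figures for claude or any other named model
BYTES_PER_PARAM = 2  # 16-bit (fp16/bf16) weights

for params in (70e9, 175e9, 500e9):
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{params / 1e9:.0f}B params -> ~{gigabytes:,.0f} GB of weights")
```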

and as nostr:npub1cxp3l03x20mkzezzr4takm8w8zuva7xwvacmcewp97z58hjt8xls3mexlq points out, they can't even render a dodecahedron

the bit-cost of human knowledge is WAY higher than they want to let you know. sorry, not sorry, but they are paring it down for IQ <100. anyone with a brain can see this, or at least is going "huh" at it
