Have you looked at what Fractal is doing? They seem to be doing some amazing work (at least they claim so; I don't have the information or knowledge to verify it) without the huge data center or huge power usage.

Below are the company website and a Substack with some articles that make some cool claims.

https://fractal-computing.com/WebSite/why

https://substack.com/profile/197747880-jay-valentine


Discussion

yes, LLMs are literally semantic maps

i was dreaming this idea up back in 1999, but it needs stuff from cryptography to implement

semantic maps are basically ... yeah, graphs
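the "semantic map as a graph" idea can be sketched pretty simply: concepts as nodes, labeled relations as edges. a minimal illustration (all names here are made up for the example, not from Fractal or any real system):

```python
from collections import defaultdict

class SemanticMap:
    """A toy semantic map: concepts linked by labeled relations."""

    def __init__(self):
        # adjacency list: concept -> list of (relation, concept) pairs
        self.edges = defaultdict(list)

    def relate(self, subject, relation, obj):
        # record a directed, labeled edge: subject --relation--> obj
        self.edges[subject].append((relation, obj))

    def neighbors(self, concept, relation=None):
        # return concepts reachable in one hop,
        # optionally filtered to a single relation label
        return [o for (r, o) in self.edges[concept]
                if relation is None or r == relation]

m = SemanticMap()
m.relate("dog", "is_a", "animal")
m.relate("dog", "has", "tail")
m.relate("cat", "is_a", "animal")

print(m.neighbors("dog"))          # ['animal', 'tail']
print(m.neighbors("dog", "is_a"))  # ['animal']
```

this is just an adjacency list with edge labels, which is the point: once the relations are explicit, queries over meaning become graph traversals.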

i don't believe that you can make intelligence within the size of the biggest models like claude 3.7 and others; those require, i think, nearing terabytes of data

and as nostr:npub1cxp3l03x20mkzezzr4takm8w8zuva7xwvacmcewp97z58hjt8xls3mexlq points out, they can't even render a dodecahedron

the bit-cost of human knowledge is WAY higher than they want to let you know. sorry, not sorry, but they are paring it down for IQ <100. anyone with a brain can see this, or at least is going "huh" at it