BREAKING: Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all.
They just memorize patterns really well.
Here's what Apple discovered:
(hint: we're not as close to AGI as the hype suggests)
I've been saying this for the past two years:
LANGUAGE models model LANGUAGE, NOT REASONING.
nostr:nevent1qqsfvs5yxq5clrqjk5k6kvpwehww0rxy926u2mg4lxtaek695pafr4qppemhxue69uhkummn9ekx7mp0qgsw34n9r8jrkys54350nu4ah3pcd4q6ce4jp3dzvzumqsgz0pq8f6grqsqqqqqpkg5wgc