BREAKING: Apple just proved AI "reasoning" models like Claude, DeepSeek-R1, and o3-mini don't actually reason at all.
They just memorize patterns really well.
Here's what Apple discovered:
(hint: we're not as close to AGI as the hype suggests)

Apple is so behind on AI that they decided to debunk it!
But it’s a positive contribution, too.
nostr:nevent1qqsfvs5yxq5clrqjk5k6kvpwehww0rxy926u2mg4lxtaek695pafr4qpp4mhxue69uhkummn9ekx7mqz7lsjw
You did not break this news
Stop using "breaking" incorrectly
You don't even use it for shit that's new at the time you post about it
I’d learn to use punctuation before I lectured someone about semantics.
I've been saying this for the past two years:
LANGUAGE models model LANGUAGE, NOT REASONING.
This feels really obvious to me. My mental model is that LLM output is an extremely lossy summary of search results.
One clue I spotted: when I try to push into new territory, the responses turn into affirmations of how clever I am, with not much of substance added.
The hype was marketing bullshit? No waaaay