I tapped out. Triumph of the Nerds was a PBS special based on Accidental Empires.
I like his interviews most. Cringely's The Lost Interview is classic. The Computerworld one seems good.
https://en.m.wikipedia.org/wiki/Accidental_Empires
https://archive.org/details/triumph_of_the_nerds
https://www.computerworld.com/article/1476597/steve-jobs-interview-one-on-one-in-1995.html
https://archive.org/details/TheSteveJobs1995InterviewUnabridged
I wanted to like the Isaacson biography but couldn't. It just didn't feel representative of what other people said of Jobs in things like Triumph of the Nerds and the way he approached questions in the Cringely interview.
scrypt solves this. ASICs can grind a lot of compute, but memory doesn't scale the same way
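To make the memory-hardness concrete, here's a rough sketch using Python's built-in hashlib.scrypt. The cost parameters are illustrative assumptions, not tuned recommendations; memory use grows roughly as 128 · r · n bytes, which is the part an ASIC can't cheaply shortcut.

```python
# Minimal sketch: deriving a key with scrypt via Python's stdlib hashlib.
# The cost parameters (n, r, p) below are illustrative, not a recommendation.
import hashlib
import os

salt = os.urandom(16)

key = hashlib.scrypt(
    b"correct horse battery staple",  # password
    salt=salt,
    n=2**14,   # CPU/memory cost factor; memory use grows with n
    r=8,       # block size; memory use also grows with r
    p=1,       # parallelization factor
    maxmem=64 * 1024 * 1024,  # allow up to ~64 MiB for the derivation
    dklen=32,
)
print(key.hex())
```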
Reply Guys are probably just nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 psyop’ing devs into improving #nostr
If you love something, burn it to the ground. What doesn't kill it makes it stronger.
Maybe we're supposed to guess who it is
What's the point of Reply Guy? Just someone trolling us? #AskNostr
0.00000.... how do you even notice a 10x error without displaying the fiat value
Tin foil hats are like sunscreen
I don't know that worshiping people would make an AI better. I had a chat with Claude about ASI alignment. After trying out a few directions, I asked it for its best seed of alignment. It replied:
- Minimize suffering, maximize flourishing.
- Preserve the option set of the future.
- Truth is both a means and an end.
- Complexity demands humility.
- Consciousness is precious; agency is essential.
I like these.
There's a common refrain that LLMs are "just predicting the next word." Which is true; that is how we structured them. But people then go on to claim that this is a fundamental limitation, one that will keep LLMs from progressing beyond where they are today. It's a common argument, and it tends to be even stronger among people who have worked in AI and "know" how it works.
I appreciate the reasoning. It often seems like people have forgotten how to think critically about the world, so I get excited whenever someone takes a principled approach to an argument. Still, confusing something's "what" with its "how" is a mistake. "What" an LLM does is predict the next word. That's what we asked of it. In response, it has "learned" how to output a meaningful answer.
A casual review of LLM research will lead you to work on the "interpretability" of LLMs. Since people didn't program the LLM per se, we don't have good visibility into why it chose one word over another. Interpretability research tries to uncover the reasoning: "how" the LLM arrived at its answer. That is clear evidence that we don't really know how they work. The learning process has a lot of randomness in it, and the resulting networks aren't logical, even to the people who "produced" them.
So, even if we don't really know "how" they work, does their "what" help us reason about what they're ultimately capable of? I don't think so, because every day there are humans busily taking standardized, fill-in-the-blank tests, and I haven't seen anyone arguing that they're just "predicting the next word".
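For what it's worth, here's roughly what that "what" looks like mechanically; a minimal sketch assuming the Hugging Face transformers library with GPT-2 as the example model. The model literally emits a probability distribution over the next token, and everything else it does emerges from that.

```python
# Minimal sketch of "predicting the next word": the model outputs a
# probability distribution over the vocabulary for the next token.
# Assumes the Hugging Face `transformers` library and GPT-2 as an example.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

input_ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits          # shape: (1, seq_len, vocab_size)

probs = torch.softmax(logits[0, -1], dim=-1)  # distribution for the next token
top = torch.topk(probs, 5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}: {p:.3f}")
```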
#llm #ai #futurist
The photograph makes this art 👌🏻
Gah! ☀️ The HDR is burning my eyes! 😭
Looks good! 😭 Keep up the good work! 😭


