Mike Brock
Unfashionable.

If it happened today, a powerful state could probably do a lot of damage, actually. Particularly if the attack were multi-faceted, in the sense that the network could be undermined in legal, social, technical, and political ways all at once.

As bitcoin scales, it becomes harder. But I don't think we've reached that escape trajectory yet.

Those would be the sorts of external incentives I'm talking about. As opposed to intrinsic incentives.

Well, I point this out because at the bottom of a lot of AnCap arguments is this belief that once we hand people tools like bitcoin, they're going to realize the wisdom of low-time-preference living and have a spiritual awakening about the importance of self-reliance and responsibility.

I think that's about the most hilariously wrong read of the malleability of human behavioral incentives that I see around these parts.

For the most part I think people's time preference is probably heavily genetically influenced, and to the extent it's environmentally influenced, it's pretty baked in by someone's mid-20s. Decreasing time preference through learned self-restraint is probably just not a thing. External incentives are required.

I think education is actually one of the more promising applications of AI. Having something like ChatGPT as a tutor could only be a good thing. The issue is how we test that children are learning. I think we can do that. But it's going to take new paradigms.

We've never had this problem in history. Are you referring to using taxes to disincentivize behaviors in general?

For the libertarians and anarchists among you: yes, I'm suggesting we should potentially tax ad-supported social media and search. I think we should potentially force these platforms and services to bear some of the costs of the damage they are doing to society. And of course, I think this will be good for decentralized protocols like Nostr and BlueSky -- which is a good thing.

Free media and free platforms are not free. We pay for them through loss of agency and through the behavioral-manipulation algorithms that drive their ad-based monetization. If one values human agency as critically important to human flourishing, as I do, then one might suggest there's a severely uncaptured externality here worth internalizing through policy. And AI now makes this a particularly pressing issue.
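For the economists reading: the textbook way to formalize "internalizing an externality" is a Pigouvian tax. A sketch of that standard model, with my own notation applied loosely to engagement-driven ad platforms (this is an illustration of the concept, not a worked-out policy):

```latex
% q      = quantity of ad-driven engagement the platform produces
% MPC(q) = the platform's private marginal cost
% MED(q) = marginal external damage (lost agency, manipulation)
\[
MSC(q) = MPC(q) + MED(q)
\]
% The platform optimizes against MPC alone and overproduces. A
% corrective tax equal to the marginal external damage at the
% social optimum q* makes it face the full social cost:
\[
t = MED(q^{*})
\]
```

The hard part in practice, of course, is measuring MED for something as diffuse as behavioral manipulation; the model just shows where the tax is supposed to bite.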

It appears we are in a period of exponential improvement. I was pretty dismissive of the potential of large language models (LLMs) a few months ago, and echoed a lot of the criticisms from Chomsky, Marcus, Kahn, etc., suggesting that LLMs were a dead end and nothing more than a gimmick that would not and could not be the basis for AGI.

But since I made those arguments, there have been mind-blowing breakthroughs in multi-modal models, and GPT-4 has demonstrated the ability to learn how to use tools and employ them in tasks (AutoGPT does this today!). These capabilities were detailed even further in the Microsoft Research “Sparks of AGI” paper, suggesting there are emergent properties in these models that defy our understanding.
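For anyone unclear on what "using tools" means here: AutoGPT-style agents run a loop in which the model either answers directly or emits a structured tool call, which the harness executes and feeds back in. A minimal sketch of that pattern in Python -- the `call_llm` function and the toy tool registry are hypothetical stand-ins, not AutoGPT's actual code or any real API:

```python
import json

# Hypothetical stand-in for a chat-completion call; wire to a real model.
def call_llm(messages: list[dict]) -> str:
    raise NotImplementedError("replace with your model of choice")

# A tiny tool registry: tool name -> callable.
TOOLS = {
    "add": lambda a, b: a + b,
    "word_count": lambda text: len(text.split()),
}

def agent_loop(task: str, max_steps: int = 5) -> str:
    """Minimal tool-use loop: the model either answers in plain text,
    or emits a JSON tool call whose result is appended to the context."""
    messages = [
        {"role": "system", "content":
            "Answer directly, or reply with JSON like "
            '{"tool": "add", "args": [2, 3]} to use a tool.'},
        {"role": "user", "content": task},
    ]
    for _ in range(max_steps):
        reply = call_llm(messages)
        messages.append({"role": "assistant", "content": reply})
        try:
            call = json.loads(reply)            # did the model request a tool?
            result = TOOLS[call["tool"]](*call["args"])
        except (ValueError, KeyError, TypeError):
            return reply                        # plain answer: we're done
        messages.append({"role": "user", "content": f"tool result: {result}"})
    return "gave up after max_steps"
```

The striking thing is that nothing in that loop is smart; all the capability comes from the model learning, in context, when and how to emit the tool call.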

I’ve watched a lot of my confident dismissals of the peak potential of this technology get washed away by advance after advance, coming faster and faster, on a near-daily basis.

My concern has only risen. It has not abated.

The fact that Geoffrey Hinton, of all people, has gone from a CBS interview a few months ago talking about how great AI is for everyone to full AI doomer in a short period of time is probably something worth updating your priors a bit on. https://youtu.be/FAbsoxQtUwM

Your semantic nitpicking aside, I would argue that absolute certainty is not compatible with changing one's mind. Because if you're absolutely certain something is true, it's logically correct to dismiss any evidence to the contrary as fraudulent.

Accepting the possibility that some evidence could change your mind is sort of predicated on the idea that you *could* be wrong; otherwise there's no cognitive basis for evaluating such evidence in the first place.
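To put the same point in Bayesian terms (this is just the standard update rule, nothing novel): for a hypothesis H and evidence E,

```latex
\[
P(H \mid E) = \frac{P(E \mid H)\,P(H)}
                   {P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
% If you are absolutely certain, then P(H) = 1 and P(\neg H) = 0,
% so the denominator collapses to P(E | H) and P(H | E) = 1 no
% matter what E says. A prior of 1 (or 0) is immovable; being able
% to change your mind requires 0 < P(H) < 1.
```

Which is exactly the point: certainty isn't a very strong belief, it's a belief that has opted out of evidence entirely.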