Are you scared of AI? Think it's time to hit the ol' PAUSE button?
I'll be hosting a debate/discussion about the proposed AI "pause" with Robin Hanson and Jaan Tallinn of Skype/Kazaa fame this Thursday at 1pm ET, if you're interested.
😂 I guess they're unaware of all the open-source development happening in the wild. Might as well try to pause Bitcoin or 3D-printed gun development. We ain't asking permission 😂
True. But there’s no way at the moment to create something on the scale of GPT-4 without a lot of capital and centralization, right? Hanson makes the point that they could throttle it much the same way nuclear energy was throttled and prevented from reaching its potential.
There's an internal discussion leaked from Google which explicitly argues that no one has a moat. The resource requirements they thought would be necessary matter far less than a focus on quality results. And while it has appeared that OpenAI had a strong lead, the open-source community is iterating faster and in many ways has already surpassed them. The game is much more about quality and the number of people with a special focus than it is about broad, generalized, brute-forced learning. AI learns and advances more like an economy grows; central planning can't win.
Interesting. If you can point me to the Google leak, I’d like to take a look and maybe raise this point
nostr:npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev is about to do a show on it, & he has been following open source developments very closely. He's the one to talk to about it.
Doesn't seem to be true. In fact, while many of the major companies claimed you couldn't run it without a supercomputer, only a couple of weeks after Meta's LLaMA weights leaked, the open-source community had it running on basic consumer hardware. They even got it running (very slowly) on a Raspberry Pi.
In addition, it's becoming apparent that just cramming in waaayyy more params isn't a good way to "learn." Instead, curated, micro-step learnings from a huge open-source community that get recombined and grow naturally are far faster, and can produce equivalent or even better results with just a few billion params on cheap hardware than the big guys can with hundreds of billions of params and a supercomputer.
I think we are mistakenly thinking of this new tech like previous giant, centralized platforms, but it's beginning to look very much like the positive feedback of network effects may not apply here. Trying to centralize and lock down your weights may be a death sentence in the not-too-distant future.
I'm specifically starting a podcast because I don't think we have the right perspective on this, and I believe our attempts to control and centralize AI will actually produce the *perfectly opposite* outcome from the one we are hoping for, as basically all rushed, arrogant (no disrespect, but the idea that we are going to "control" this is our inner statist speaking) decisions and knee-jerk reactions tend to. I don't think the above course of action is being proposed out of knowledge, but because we are afraid. Decisions like that historically tend to produce poor results.
TL;DR - I think AI is too powerful and important to let government "control" it or "pause" what is happening. I think that will spell disaster... plus they have virtually zero chance of succeeding anyway. IMHO
To clarify the "it doesn't have the positive feedback of network effects" comment - I mean this in the sense that currently users get "trapped" in a network owned by a single entity, like the control Facebook has over social communities, and Amazon has over retail. You can't take your friends or connections with you without starting from literally zero and rebuilding your entire social graph, while hoping everyone follows you... which they won't. So Facebook gets to control what we get to see and say.
This problem is less apparent with AI. There is a benefit to using user data to train, and thus more users means more training data. But given the ease of combining and building on top of open-source weights, where all contributions are cumulative and shared... I don't think this plays out like people are thinking.
I love your attitude and perspective towards the state, institutions and government.
The institution is a psychopath.
Never trust, always always always verify, and if you can't, assume a deceptive, manipulative and/or coercive culture with malicious intent.