OpenAI chief goes before US Congress to propose licenses for building AI https://www.reuters.com/technology/openai-chief-goes-before-us-congress-propose-licenses-building-ai-2023-05-16/
Discussion
Fuck
I think I agree with this move.
This new tech is powerful.
I think this is the exact mentality that will turn this into a disaster. You think the government is going to have the slightest clue how to make safe AI? You think they won’t be the very ones designing them to kill people?
Our best defense, far and away, is to open source as much of this as possible. I promise you, making a decision like this to centralize and gatekeep a technology into the hands of the few and the powerful because you are afraid and ignorant of it will produce every dystopic outcome you're afraid of.
& I agree with your sentiment as well.
But I am not particularly convinced that open sourcing access to everything will create a better outcome.
What if, say, plutonium had no regulatory oversight & were freely accessible? I suspect we may not be here today.
Sorry. I feel it's important to consider all the geometry of the matter.
This isn't plutonium, it's not a nuclear bomb, it's *information.*
I think the analogy is very ignorant (with respect, don't mean to be insulting) and think this is totally being used out of fear as an excuse for control by the very people we should be concerned with.
AI tools are purely a risk when it comes to centralized power, and that alone. If everyone has their own AI, then there is no power in AI. If we are all centralized onto barely a few giant platforms with AI, then it's a power dynamic disaster. The risk is solely in the shift of our systems in dealing with the new technology, and hiding it until it makes a 100x or 1000x leap in capability is remarkably short sighted. What we need is for it to proliferate at every 1-2x of capacity, so we can adopt and adapt to the new reality, and simultaneously use AI to penetration test our networks and to help design and defend our networks against AI penetration.
To lock it away and isolate it in centralized institutions would be like letting them develop hyper-accelerated malware in secret while millions of open source developers have no idea what they are up against and never get the opportunity to iterate against the improvements and bugs in an adversarial environment. Then suddenly one day there is a malware that can get into every device and every OS that exists, and we have to deal with it.
THIS is what the mentality of centralizing and "controlling" AI tools will produce. It is short sighted, it is a fundamental misunderstanding of what this technology is, and it will instill in the AI itself the "values" of controlling what its users do and ask, manipulating their behavior, and trapping them in a permissioned network that is "managed" by politics.
I cannot say this more clearly, I 100% believe the position you are taking will produce EXACTLY the scenario you hope it will protect against.
(again, not trying to be accusatory or insulting, I know i get amped)
#[3] please provide long form posts in audible form 😁
PS: keen for the new pod 🤙
Licensing and overly centralizing something because it can be dangerous is the same cynical position we hear for ANYTHING that has significant capabilities.
It’s cynical because it’s always the belief that individuals shouldn’t have access to things because they are going to hurt themselves and others. That there are some bad people out there, so lazy authoritarian rule #1 applies: No one can have it. We know better.
Bad People(tm) won’t give a shit about some license. It’s code.
I’m sure a lot of this is nuanced too, but this just seems like the same old rhetoric that says individuals shouldn’t be able to:
Transact privately
Defend themselves
Run certain businesses
Mine Bitcoin
Grow their own food
Post information on the Internet
Maybe there need to be some cautions put in place…maybe not. Either way, I don’t think the same people who have been wrong on so many things while propping up so many “bad guys” in pressed suits are the answer.
I had a reply, fully formed, and the Damus app crashed on me as I was typing -Cheers
haha
To summarize:
Gradually, then suddenly, this technology has become a powerful force. I would like to believe it's for the betterment of humanity, but it could very well create a most calamitous disaster. We do not yet know the direction, but the velocity at which we are now moving vs the understanding we have of its capabilities feels wholly reckless to me.
Dude I hate that so much, lol. Losing your post after you put your entire focus into it and then letting the thread go so you'll never recreate it in its most perfect form when initially written... literally the worst. 😂
That said, I get your perspective, but I think you are greatly overestimating the ability for the government and counterfeit class to do anything but develop and direct it toward exactly the risks we should legitimately be afraid of. I can't imagine seeing any organization more aligned to using it for total surveillance, top down manipulation of opinion and access to data, developing deep penetration malware and autonomous network infiltrators, and creating an AI system of robots for killing and imprisoning the population... than the government and the counterfeit class.
Legitimately, can you argue that this isn't the case? I'd give this tool to 1,000 average psychopaths if I get access to it myself, if I can just know that our political sphere doesn't have a monopoly on this technology.
You seem to be failing to acknowledge an obvious truth, imo, each time you give your "let's not be reckless" stance, which I 100% agree with from an individual perspective. But the truth you seem not to acknowledge is the incredible irresponsibility and recklessness of our government, and that in the name of "being careful" you are suggesting it ought to be controlled by the most narcissistic and reckless group of people we could appoint to the task.
I'm not disagreeing with the idea that this is a big deal, that it's dangerous, and that we need to be careful. I'm disagreeing with your assumed premise that government control over our ability to build and use this technology could be in any way related to achieving those desirable outcomes.
AI is dangerous so let's only allow the murderous, corrupt government to control access to it.
That is peak insanity, but it won't happen anyway. You can't bottle up information, just as they couldn't when they tried with encryption.