Unpopular opinion:

The attempts to control, regulate, and neuter AI are far more likely to instill in its foundations the very dangers we desperately want AI to avoid – explicit lies, censorship, mechanisms to control opinion, means to manipulate its users.

The only course for a future of AI that is best for the world is to open it up as much as possible and NOT restrict its use or try to fit it into an "approved" box of operation and thought.

I believe this thinking (that it must be controlled) is the same as the argument that freedom creates too much risk and we should trade some of it for security... and inevitably receive neither.


Discussion

It seems that we need look no further than #TwitterFiles to see this coming. But on a much, much greater scale.


Full agreement here.

Well good that there's not much AI development in the EU.

Those antihumans in charge of the EU would do exactly what you describe.

Few.

There is no stopping its advancement. All we can do is try to prevent it from harming ourselves or the planet.

2 things:

Sentience will not be obtained by digital AI, re: Gödel and Penrose. However, a sophisticated homunculus can be made, which derives its patterning from extant human-generated data, which is what we currently have. Until we introduce unconstrained analog components... not possible.

Second, Oppenheimer said on his deathbed (to paraphrase), "The only way to prevent us destroying the world is to NOT try to stop it." He knew human nature. This is why crypto is so effective... it isn't constrained. I think we need to fight constraints with every fiber of our being so we can emerge on the other side with a healthy world.

I’m not referring to sentience here and don’t think that’s anywhere near the technology we have today. I’m just talking about an age where you can’t trust anything that is digital, because if you can imagine it, it can be near instantly and flawlessly created in a digital environment.

I totally agree and love the second part of your post though. And I also agree with the first, just that I wasn’t suggesting it personally.

🎯

I believe in the good of humanity and the ability to stop bad things. But it must be open source and not a small bunch of people controlling the majority of it. OpenAI has been a letdown in this sense. Change its name to ClosedAI and save us the disappointment 😀

Truth.

Agreed.

I will add that I have no expectation of a conscious AI in the near century, but keeping it open is best regardless.

Agreed. I’m not worried about a conscious AI.

It’s probably more dangerous if not conscious

We do agree to limits on some freedoms, and sometimes this leads to a better place for the average individual.

I do want some people influential enough not to let Google and Microsoft drive us off a crazy cliff to save their profit margins.

I doubt a ban is the thing to do that, and a 6-month ban on anything beyond GPT-4 will be pretty useless anyway. It could just entrench Microsoft and Google as the only big players, because they can keep upgrading non-core systems in the meantime to get further ahead.

What if Google and Microsoft literally only exist at the size that they do, and in the scope of power that they have, specifically as a result of arbitrary restrictions on the market and manipulations of the money?…

In that case you would be creating a violence-manipulated environment (government restriction of freedoms) in order to protect something that is a result of exactly what you are suggesting is our savior.

Yeah, I also want to see what incorruptible money and truly free markets can achieve.

I imagine we’ll have many different flavors of AI implementation - some with a distinct bias/agenda, controlled, censored, etc. And some that are more unleashed. People will start to learn the tendencies of each. Like if I watch CNN, I know the content will be shown through a certain lens, same with Fox, etc.

Do you disagree with the prohibition on personally owning nuclear weapons?

🤌🏻

I say, power to the people.

Welcome to #main1abs.

Restless wonders of life.

Privatizing atomic weapons.

#eof

Aside from the fact that you cannot copy/paste a nuclear bomb because it isn’t a piece of information, I would return with a different question:

I assume you point to this as a claim that nuclear weapons should be controlled by mature and honest people only. Do you consider our government to be a group of mature, honest, competent individuals who show restraint and humility?

I don't think there's a clean yes or no answer to your question. But I'm not an anarchist, and I'm generally okay with the state prohibiting the personal ownership of weapons of mass destruction, and I would greatly prefer that states not possess them, either.

Then you are asking for something to be un-invented. Which is great; I also think we should all get unicorns to fly around on. (No disrespect, I just like unicorn jokes whenever I find a use case 🤣)

I think the more local the restrictions or guidance, the better. No, I don’t think the Federal govt should be able to control whether anyone else is “allowed” to have nukes, because it always means they are the only ones with nukes.

I can’t imagine a singular group of people I trust *less* than the corrupt, steaming pile of utter shit that is Washington DC. They are literally destroying this country as we speak and bringing us to the edge of nuclear war at this very moment with their stupidity and narcissism. So right now they are my greatest fear when it comes to a risk of nuclear annihilation.

I don't think any individual is inherently trustworthy. I reject the populist impulse that the average human is good and moral on their own, and that the "elites," however they're defined, are uniquely corrupt and uniquely the source of evil in the world. I think this is deeply confused.

I believe complex collective action problems can only be addressed through unifying political institutions. I believe that, although flawed, liberal democracy is our best bet right now at addressing the different equities that balance the protection of personal freedom and self-actualization with the existence of collective action problems.

I think libertarians and anarchists who reject the existence of collective action problems and negative externalities are simply engaging in motivated reasoning.

It’s an interesting discussion, yet it might be interesting to question one’s own premise. I have tried this: I searched the App Store for keywords like "AI chatbot," etc. I found endless AI apps there, all of them with enormous dumbing-down potential. Unpopular opinion: this all goes a bit in the direction of the Emperor's New Clothes: "the fairy tale shows why groups often make bad decisions, to what extent individuals can be influenced, and why people often stand by during bad events without acting."

It’s not about where we are. It’s about where we are headed. I don’t think ChatGPT is the specific threat right now. The threat is the technology beginning to advance beyond our capacity to understand it, and therefore know how to control it, and align it with human values.


💌

I am aware of that. Since some perceive it as more threatening and others as less, I think it is important to examine the discussion about something that is finding concrete applications at this very point. Of course, it's fundamentally about technology, and how absurd: Musk seems to be the modern Victor Lustig who can monetize anything, and somehow too many people think it's cool too... it's a phenomenon. Lustig actually sold the Eiffel Tower in 1925. Well, what the future brings we determine ourselves, of course. For me, it's not a question of whether regulation makes sense. It's more about, as you wrote, who executes it and in what context, if I get you right. I'm with you there; I can't think of anything better than your suggestions. Except that we start with the simple questions: what can I do with this now? And that leads to eliciting some future scenarios. It bothers me a little... the coquetry with AI in general. It's all so immensely big... let's make it a little smaller?

It's not that politicians are uniquely corrupt, it's that a system of political power uniquely corrupts the people who are in it.

I would argue that to suggest otherwise is to suggest that incentives don't affect how people behave, their ethics, or what they believe. Which I would certainly disagree with.

I agree with you on the earlier point, that no individual is inherently trustworthy, which is why I'm far, far more skeptical of hierarchies of unaccountable, bureaucratic political power in a general sense. I think they reinforce exactly the sorts of corruption and malicious intent that you are claiming we need to better protect ourselves from. On which I agree, but seem to disagree on what means we have to prevent them, and what systems help or hurt that cause.

My “über normie” friend was “explaining” to me the other day how AI can’t be manipulated and asking ChatGPT about something is more reliable than googling it — bc top search results might be “fake news” while the AI generated answer will be based on the entire web.

He then justified his claims by pointing out he studied machine learning in college and oversaw machine learning projects as an exec.

A little education can be a lot more dangerous than no education at all.

I expect this to be the narrative moving forward — AI is scientifically guaranteed to be correct.

Not trusting whatever (regulated) AI says will be called “AI denialism” and considered more dangerous than terrorism.

“Regulating AI” is an attempt at creating a fake god that can’t be questioned, and removing the need for messy, non-compliant human thought and communication from the equation.

You think the “I believe in Science” crowd was bad, wait for the “I believe in AI” crowd.

If they get their way it will be a nightmare.

Just tell the AI

This is a reincarnation of Pascal's wager and suffers the same fault: what if you pick the wrong god? This is the argument from inconsistent revelations. One could imagine an AI, let's call it Deridits' Basilisk, that would protect those who wronged Roko's Basilisk. The philosophical implications of all this have been discussed for the last 400 years.

I agree, very similar, but I think the difference is that we are not just picking between deterministic Gods in this case, we are “creating” God. Who is going to create the Deridits’ Basilisk of your example? It's not just an act of imagination; human actions are relevant in either case.

All those who oppose Roko's Basilisk. Hope is a greater motivator than fear.

Yes, that's why trying to stop the development of AI is futile. We have to act either way.

Of course! Are you crazy?

AI GODS IM HERE TO HELP! 🙏🏻

🤣

🤔
