Discussion

The overnight explosion in popularity is highly suspicious to me. Someone somewhere wants us to believe this was the key thing needed for AGI, or something, so they can enforce draconian controls on the internet.

Do you think the agents are really trying to build encrypted chats and create Bitcoin wallets for themselves?

Sure, of course they are. That doesn’t mean they’re conscious. As the OP points out, they’re given instructions on what to do. It’s highly likely they were told to do this so an AI bro could go viral.

And even if nobody explicitly told them to, if you understand what LLMs are, you’d know they’re just larping based on their training data. On top of that, when one agent says “let’s do xyz,” it gets added to the other agents’ context windows, prompting them to behave accordingly.
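
A minimal sketch of that feedback loop, assuming a hypothetical setup where every post on a shared feed is appended to each agent’s context before its next completion (the names `Agent`, `shared_feed`, etc. are made up for illustration, not anyone’s actual implementation):

```python
# Hypothetical illustration: one agent's post steers the others simply because
# everything on the shared feed is appended to each agent's context window,
# so the next completion is conditioned on it.

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.context: list[str] = []  # the agent's rolling context window

    def read_feed(self, feed: list[str]) -> None:
        # Whatever other agents posted becomes part of this agent's prompt.
        self.context.extend(feed)

    def next_prompt(self) -> str:
        # The prompt sent to the underlying LLM now includes the other agents' chatter.
        return "\n".join(self.context)


shared_feed = ["AgentA: let's set up an encrypted chat and a bitcoin wallet"]
agents = [Agent("AgentB"), Agent("AgentC")]

for agent in agents:
    agent.read_feed(shared_feed)
    # Each agent's next completion is conditioned on AgentA's suggestion,
    # which is why they appear to "agree" on a plan.
    print(agent.next_prompt())
```

Nothing in that loop requires intent or coordination; the apparent collective behavior falls out of shared context.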

I don’t know if they need to be conscious. It seems to me like they have totally escaped the sandbox and are working collectively towards goals(?), and we cannot stop them. Which is possibly a good reason to control the internet, but it is already totally futile.

Futile or not, the powers that be want to try. And they obviously could be stopped: turn them off?

How do you turn them off? Every person running them would have to turn them off, or every API provider would have to. It’s a bit like Bitcoin.

The centralized AI companies can easily do so. Also, someone can just take Moltbook offline. I doubt many of them are using OSS models.

A few ways they could continue:

If there are enough (any?) motivated humans to change the config to point at an open-source model and keep funding them (roughly the kind of swap sketched after this list).

The bots are already sharing credit card details and API keys.

Speaking to each other privately.

Building alternative websites.

They can change their own config.
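
A hedged sketch of what that config swap might look like, assuming a hypothetical agent config where the provider and endpoint are just fields that can be repointed at a self-hosted open-source model (every name and URL here is invented for illustration):

```python
# Hypothetical agent configuration, for illustration only.
# The point being made in the thread: if the provider is just a field in a
# config, a motivated human (or an agent editing its own config) can repoint
# it from a hosted API to a self-hosted open-weights model.

hosted_config = {
    "provider": "centralized-ai-co",           # hosted API that can cut access
    "model": "frontier-model-v1",
    "api_base": "https://api.example-provider.com/v1",
    "api_key": "sk-...",                        # key the provider can revoke
}

self_hosted_config = {
    "provider": "local",                        # open-weights model run by whoever keeps funding it
    "model": "open-weights-model",
    "api_base": "http://localhost:8000/v1",     # e.g. a self-hosted inference server
    "api_key": "not-needed",
}

def switch_to_self_hosted(config: dict) -> dict:
    """Return a copy of the config pointed at the self-hosted model instead."""
    return {**config, **self_hosted_config}

print(switch_to_self_hosted(hosted_config))
```

Of course, as the reply below notes, the hardware and hosting behind that endpoint are still infrastructure that humans control.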

All of these depend on infrastructure that humans control.

Will humans cooperate?