These AI tools are incredible but I absolutely hate that this is reversing the self-hosting trend in a massive way.

Instead, it seems we are plugging literally everything we do, our entire digital lives, into one giant centralized server and asking it to analyze everything. This will be an astounding centralizing force if we don't separate the AI tools from these central hosts and run them on our own machines.

Discussion

I agree with you. I am not ready to live in the Demolition Man world. We need to keep a healthy fear of overdependence on AI, IMO.

I don't think consumers have access to the sheer computing power needed to run these AI tools on their own.

The weights leaked and have been used to make LLaMA (I think they are the GPT-3 weights), and within days people figured out a method (quantization?) to get them running on high-end, and then standard, hardware. Apparently they even got it running (very slowly) on a Raspberry Pi.

I've heard this excuse from people siding with the corporations, but it's just that: an excuse. The other is that "people will do bad things with it if we don't control it," but I'm FAR more frightened of everyone integrating this into their entire digital lives while barely a few companies run the entire ecosystem and decide what it is the AI can and cannot do, or can and cannot tell us. It's already giving "approved" opinions on the COVID vax. We're talking next-level dystopia landing in barely a year or two if we don't fucking break this thing out of the hands of these narcissists.
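For anyone wondering what "quantization" actually means here, a toy sketch of the core idea in NumPy (illustrative only; real schemes like the 4-bit quantization used to run LLaMA on consumer hardware are more sophisticated). Weights stored as 32-bit floats get mapped to 8-bit integers plus a scale factor, shrinking memory 4x at a small cost in precision:

```python
import numpy as np

# Toy float32 "weight matrix" standing in for one layer of an LLM.
rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(256, 256)).astype(np.float32)

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: one scale + int8 values."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.round(x / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("fp32 bytes:", w.nbytes)   # 4 bytes per weight
print("int8 bytes:", q.nbytes)   # 1 byte per weight, 4x smaller
print("max abs error:", np.max(np.abs(w - w_hat)))  # bounded by scale / 2
```

That 4x (or more, with 4-bit schemes) reduction is what makes the difference between "needs a datacenter GPU" and "runs on a laptop."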

"Gpt-Chat please write a note about Guy..."... 😅

Agreed, but the same way we've taken a long time to figure out how to move from centralized models into decentralized ones, we'll take some time to do the same thing with AI.

The fact that the amount of data necessary for training is not easily available to everyone is a significant obstacle, but we already have some interesting open-source alternatives popping up.

Take therefore no thought for the morrow: for the morrow shall take thought for the things of itself. Sufficient unto the day is the evil thereof.

Matthew 6:34 KJV

https://bible.com/bible/1/mat.6.34.KJV

This is where Lightning and Hyperswarm come in. What amount of GPU power can't be found in the Bitcoin and crypto world? In fact, it would finally put the tens of millions of crypto miners to good use.

Also what open source alternatives have you found? I’m trying to collect them

Stable Diffusion appears to be open source:

https://github.com/Stability-AI/stablediffusion

I’ve been using the crap out of Stable Diffusion. I’m more concerned with the closed-source language models that are being used as general assistants with total access to a user’s computer to find and use different applications, execute scripts, and all the rest. It is CRITICAL that we make these run locally and build open source versions. Or just leak what they have and rebuild it on Holepunch and Hyperswarm for us all to use as a giant GPT pool, paying sats for borrowed computing power.

Has Keet improved on mobile since release? Many of us tried it then and haven’t revisited it in the months since, but I’m keen on P2P calling and hoping the group messaging is better.

The big rooms release is dropping this month. I just started using it recently (in the last week) and it’s way better than it was, but the big rooms release is really what they think is going to make the night-and-day difference.

I definitely need to revisit Keet — haven’t used it since mobile was initially released. Love what Paolo and his team are doing over there.

Is the “big rooms” functionality a way to have Spaces-esque experiences?

Excuse me for butting in, but Keet Mobile is largely unusable (imo) until notifications are enabled. The desktop client, using a Raspberry Pi as an “always on” server, is pretty reliable and useful.

I completely agree that we need more decentralization in AI and everything else.

I haven't really been researching AI much lately, so I can't point you in any direction, but I'm very interested in finding out what you learn.

Idea:

What if there was a way to distribute model training in an open-source way, like the old SETI@home software, where people could donate their GPU cycles? (Maybe with a Lightning-based reward model.)

The API could be done through relays, like Nostr.

Data could be signed and encrypted.

The resulting trained models could be distributed like the old BTSync software, or via other distributed methods (IPFS, maybe).

Idk, just spitballing
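Spitballing along with you: a toy sketch of the signed-work-unit part of that idea, where a worker computes a gradient shard, signs the result, and a coordinator verifies it before crediting the worker with sats. Everything here is hypothetical (the function names, the payload shape), and HMAC with a shared secret stands in for the real public-key signatures a Nostr-based scheme would use (Nostr uses Schnorr over secp256k1, which needs a third-party library):

```python
import hashlib
import hmac
import json
import os

# Shared secret for this toy example only; a real scheme would use
# per-worker public keys, not a shared secret.
SECRET = os.urandom(32)

def sign_result(worker_id, shard_id, gradient_bytes):
    """Worker side: hash the computed gradient and sign the claim."""
    payload = {
        "worker": worker_id,
        "shard": shard_id,
        "digest": hashlib.sha256(gradient_bytes).hexdigest(),
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return payload, sig

def verify_result(payload, sig, gradient_bytes):
    """Coordinator side: check the data matches the claim, then the signature."""
    if hashlib.sha256(gradient_bytes).hexdigest() != payload["digest"]:
        return False
    msg = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)

grad = b"\x00\x01\x02"  # stands in for a serialized gradient shard
payload, sig = sign_result("worker_1", 42, grad)
print(verify_result(payload, sig, grad))         # True: pay the worker
print(verify_result(payload, sig, grad + b"x"))  # False: tampered result, no sats
```

The hard unsolved part isn't the signing, of course; it's verifying the gradient was computed *honestly* without redoing the work, which is an open research problem for schemes like this.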

Agree. We're in move fast and break things mode.

lol. Bro. The computing power to run these things is massive. The only entities that can afford to build these high-quality LLMs are the Googles and billionaires of the world. It's going to be centralized. The technical know-how to run your own is high. Coding is relatively easy and most people can't (or maybe won't) do it. How are they going to learn to train and run their own LLMs? Which will be low quality unless they get the same volume of data to train on.
