sudocarlos
03612b0ebae0ec8d30031c440ba087ff9bd162962dffba4b6e021ec4afd71216
People who follow the rules are almost always punished. To seek permission is to seek denial. Message me: signal.sudocarlos.com

Ever pull clothes out of the dryer and realize you washed something with a tissue in the pocket? 😑

I hope bitcoin is saved by a fork with a sweet logo

Are your socks dry? Are you wearing socks?

Idea:

Nostr client adds a # button above the keyboard, in line with the GIF, media, and other buttons. It suggests recently used hashtags, recent hashtags from your WoT, or hashtags matching the context of the draft message. Use a local or private LLM for the last one. Feels like it could create communities more organically.

Maybe nostr:npub1n0stur7q092gyverzc2wfc00e8egkrdnnqq3alhv7p072u89m5es5mk6h0 could add this? 🙏 nostr:npub1n0sturny6w9zn2wwexju3m6asu7zh7jnv2jt2kx6tlmfhs7thq0qnflahe
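The first two suggestion sources could be sketched roughly like this (a minimal sketch in Python; `suggest_hashtags`, the input shapes, and the ranking heuristic are all my assumptions, and the WoT weighting and LLM context step are left out):

```python
from collections import Counter
import re

def suggest_hashtags(recent_posts, draft, limit=5):
    """Hypothetical sketch: rank hashtags seen in recent posts by frequency,
    boosting tags that also appear as plain words in the draft being composed."""
    tags = Counter()
    for post in recent_posts:
        for tag in re.findall(r"#(\w+)", post):
            tags[tag.lower()] += 1
    draft_words = set(re.findall(r"\w+", draft.lower()))
    # Sort: draft-relevant tags first, then by how often the tag was used
    ranked = sorted(tags, key=lambda t: (t in draft_words, tags[t]), reverse=True)
    return ranked[:limit]
```

A client could feed this the user's own recent notes for "recently used," or notes from follows for the WoT variant.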

Replying to ODELL

The moments that make memories 🫂

Links don't even appear the same way in all clients. I bet notes with native media get zapped way more than notes with links to stuff

Of course she needs GPUs. She's trying to keep the company's data secure by running local LLMs, and those require some power 😬

Shocker, big ai is throwing their dick around

> We find that undisclosed private testing practices benefit a handful of providers who are able to test multiple variants before public release and retract scores if desired. We establish that the ability of these providers to choose the best score leads to biased Arena scores due to selective disclosure of performance results. At an extreme, we identify 27 private LLM variants tested by Meta in the lead-up to the Llama-4 release. We also establish that proprietary closed models are sampled at higher rates (number of battles) and have fewer models removed from the arena than open-weight and open-source alternatives. Both these policies lead to large data access asymmetries over time.

https://arxiv.org/pdf/2504.20879