Jonathan
e0339348ca6cac9708cd98e631e2f4baad534dfce870881b65aa57d30ff7253e
Hacker, cypherpunk. All memes are my own.

Does anyone know what happened to the Babylon Bee? Maybe it’s a temporary phase over election season, but they almost completely stopped being funny. Everything is either “Kamala Harris is awful” or “Dems are trying to kill Trump,” and most of the time they don’t even have a joke to go along with it.

The privacy, freedom from censorship, and control that open-weight models provide would seem to be a huge boon over OpenAI’s offerings. It feels like AI is a little over-saturated at the moment, with not much differentiating the top dogs.

And there at last is the reason everyone is leaving OpenAI. Sam is finally making his power grab.

https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/

nostr:note1792waazz7jw08xx6g7ja2xsctjfju45sn4z6l64cd0y27v5y928qwqhygz

I’m not completely sure. It may still be the fallout from Sam Altman pruning everyone who wasn’t 100% on board the Sam Plan or maybe there’s something else entirely happening.

Sure, he still runs a company that requires gathering the most information possible, but that doesn’t change the fact he got kinda cool at some point.

And yet another decides to leave. This one was the GPT-4 co-lead.

https://x.com/barret_zoph/status/1839095143397515452

nostr:note1yv8vwse5ll66e589m4ng8p22p5xnfkrdknqzjmcg6hvaaau5af5q0wxfs8

When did Mark Zuckerberg become kind of cool?

He went from the jerk in The Social Network who just wants your data to a competent martial artist who has interesting thoughts and still kind of wants your data.

OpenAI shedding their top people quickly. Ilya, Andrej, Jan, Greg, John, Peter, and now Mira are all out and those are just the ones I can remember off the top of my head. That’s the heads of their research and safety teams and all the top executives.

https://twitter.com/miramurati/status/1839025700009030027

This is actually an insane improvement. Why is Meta releasing a multi-modal model that crushes similar models from OpenAI and Anthropic and they’re only bumping the version as a minor release?

https://ai.meta.com/blog/llama-3-2-connect-2024-vision-edge-mobile-devices/

No way, now they can both hang themselves at the same time with the cameras off.

https://archive.is/d8bD8

Wait a few weeks and Trump will probably claim this during a debate.

nostr:note14v84wkfuy6ercl9xnqjm9k7kx2dpen02p6ug4jl40rajplrrhl8q4lmp0n

God damn AI is only getting better. I could totally fool my grandparents with this.

https://m.primal.net/KyVn.mov

Alright, let me run some benchmarks once I’m back at a desktop. From some back-of-the-napkin calculations, it seems the cost we’re trying to beat is a baseline of about $5.6E-5 per note (the best numbers I can find for the closest approximation are $700 per 12,500,000 emails).
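That baseline works out as follows (a quick sketch; the $700 for 12.5M emails figure is just the bulk-email pricing mentioned above, used as a proxy for a spammer’s cost per note):

```python
# Back-of-the-napkin baseline: cheapest bulk-email pricing I could find,
# treated as the per-message cost a spammer pays today.
cost_usd = 700
emails = 12_500_000

cost_per_note = cost_usd / emails
print(f"${cost_per_note:.1E} per note")  # $5.6E-05 per note
```

Any anti-spam cost per note needs to land meaningfully above that number to bite.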

Spam relies on two things: large reach and a cheap cost per message. The chance that any one person will click the link or fall for the scam is incredibly low, so you need scale to get any significant number of hits.

To combat spam you need to knock out one or both of those requirements: either limit the reach of spam or raise the cost per message. Both have trade-offs. On a social network like Nostr, with its focus on privacy, it’s difficult to identify and mute spam because there is so little metadata about each pubkey to rely on. Raising the cost per note doesn’t require any information or work from relays; it can be handled by paying a relay (or a person) per note, or by requiring proof of work so each note burns a little electricity.
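A minimal sketch of the proof-of-work option: grind a nonce until the hash of the note plus nonce has enough leading zero bits. This is the general shape of schemes like Nostr’s NIP-13, not the exact event-id format — the note string and difficulty here are illustrative.

```python
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count the leading zero bits of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def mine(note: str, difficulty: int) -> int:
    """Find a nonce so sha256(note + nonce) has at least `difficulty` leading zero bits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{note}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

nonce = mine("hello nostr", 12)  # ~2^12 hashes expected
```

The difficulty knob is what sets the electricity cost per note: an honest user mines once per post, while a spammer has to pay it millions of times over.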

But it does prove that you think your notes have some value. The return per note for a spammer is insanely low, so raising the cost per note puts most spam above break-even.

“LLMs can only complete sentences” is true of base models, but instruction fine-tuning with RLHF has been a thing for 3-4 years at this point. I’m talking about reading news with something like “What’s the summary of this article?”, “Alright, save that for later and go to the next one.”

You don’t necessarily have to use LLMs but they seem the easiest way so far to understand relatively complex commands and call functions.
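A toy sketch of that last point: expose a few functions and let something map free-form commands onto them. Here keyword matching stands in for the model — in practice you’d hand the command and the function signatures to an instruction-tuned LLM and let it pick — and all the function names and articles are made up for illustration.

```python
# Fake reading list and state for the sketch.
articles = ["Article A about AI", "Article B about Nostr"]
saved = []
position = 0

def summarize():
    return f"Summary of: {articles[position]}"

def save_for_later():
    saved.append(articles[position])
    return "Saved."

def next_article():
    global position
    position += 1
    return f"Now on article {position + 1}."

# Command router: an LLM doing function calling would replace this
# keyword matching, but the dispatch shape is the same.
TOOLS = {"summar": summarize, "save": save_for_later, "next": next_article}

def handle(command: str) -> str:
    for keyword, fn in TOOLS.items():
        if keyword in command.lower():
            return fn()
    return "Sorry, I didn't catch that."
```

Usage: `handle("What's the summary of this article?")` returns the summary, and `handle("go to the next one")` advances the position — the LLM’s only job is the command-to-function mapping.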