Former Youngest Person in the World!! stuartellison.com fantactico.com knostr.io zairk.com 🪴

I hope that’s Guacamole and not green ketchup?

I missed this news.

But it adds weight to Garry Kasparov’s position that human + machine is the killer combo.

I’m not worried about AI dominating humanity through bad intentions / actions.

If we live in a world with good and bad humans plus good and bad AI and we have a mix of…

Good humans

Bad humans

Good AI

Bad AI

Good humans + good AI

Good humans + bad AI

Bad humans + good AI

Bad humans + bad AI

The only one I would worry about is…

Bad humans + good AI. That’s the lawful evil stuff. That’s how we need to think about avoiding really bad outcomes.

The open-source community already owns the hardware. We don’t need exascale machines, we only need each other.

FOSS AI will result in a rich fabric of highly specialised models that perform narrow discrete tasks extremely well. Each is small and can be trained and iterated cheaply with highly curated data.

These models are like pixels, put them all together and you see the full picture. That picture is how we will achieve AGI.

Trying to draw the picture monolithically means working in a vast problem space, and you have to redraw the entire thing in order to change small details.

Big tech is painting watercolours, whereas FOSS is an 8K OLED.

It’s on-device training of small models that was the secret recipe. Horizontal scaling, not vertical.
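To make the “pixels” idea concrete, here’s a minimal Python sketch: lots of small specialists behind a dumb router. The domain names, keyword scoring and stub models are all invented for illustration, not a real system.

```python
# Minimal sketch: a fabric of small specialist models behind a router.
# Everything here (domain names, keyword scoring, the stub "models") is
# illustrative only.

from typing import Callable, Dict

# Each "specialist" stands in for a small, cheaply retrained model
# that handles one narrow task extremely well.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "weather":   lambda q: f"[weather model] forecast for: {q}",
    "routing":   lambda q: f"[navigation model] route for: {q}",
    "chemistry": lambda q: f"[chemistry model] analysis of: {q}",
}

# Crude keyword router; a real one might be a tiny classifier or embedding lookup.
DOMAIN_KEYWORDS = {
    "weather":   {"rain", "forecast", "temperature"},
    "routing":   {"route", "traffic", "navigate"},
    "chemistry": {"reaction", "molecule", "compound"},
}

def route(query: str) -> str:
    words = set(query.lower().split())
    # Pick the domain whose keyword set overlaps the query most.
    best = max(DOMAIN_KEYWORDS, key=lambda d: len(words & DOMAIN_KEYWORDS[d]))
    return SPECIALISTS[best](query)

if __name__ == "__main__":
    print(route("what is the forecast and temperature tomorrow"))
    print(route("navigate around the traffic on the bridge"))
```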

I’m not sure what you are saying?

We know exactly how long it’s been going on, we know the full timeline from AlexNet to today.

Nobody is going to pay for capabilities that are inferior and more restricted than stuff that is free. There isn’t a market for monolithic AGI because freemium parallel specialisation is outperforming it.

Tesla should be worried too. Their monolithic FSD is the same white elephant. It’s only three meals from obsolescence.

Quite easy to imagine discrete open source specialisations that combine to give autonomous navigation / control capabilities that can handle complex environments.

An ant can navigate busy traffic without incident.

The assumption that you can maintain a proprietary machine intelligence superiority in any domain is just a bad assumption.

Monoliths are exponentially more costly to improve as they scale, whereas mass parallel specialists have linear costs.
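Back-of-envelope version of that cost argument, as a quick Python sketch. It uses the common ~6 × parameters × tokens estimate for training compute; the model and dataset sizes are invented examples, not anyone’s real numbers.

```python
# Rough illustration of why iterating one small specialist is cheaper than
# re-iterating a monolith. Uses the common approximation
# training FLOPs ~= 6 * parameters * tokens; all numbers are invented examples.

def training_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

monolith = training_flops(params=1e12, tokens=1e13)    # one giant general model
specialist = training_flops(params=1e9, tokens=1e10)   # one small, narrow model

print(f"monolith retrain:   {monolith:.2e} FLOPs")
print(f"specialist retrain: {specialist:.2e} FLOPs")
print(f"ratio: {monolith / specialist:,.0f}x cheaper to iterate the specialist")
```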

Like anything, it’s good and bad.

From a socioeconomic perspective, it is a huge amount of change, so expect older people to hate it and younger people to love it.

That may also cue politicians to take stereotypical positions: you might expect right-leaning (older-targeting) politicians to find reasons to hate it and left-leaning (younger-targeting) politicians to find reasons to like it.

Tie UBI and robot labour into this and you can begin to imagine how the debate will go.

But none of that matters much in the long term, because it’s going to happen.

From an ecological perspective, AI is going to speed up nature: resource gathering will accelerate, genetic mods will accelerate, building will accelerate. We might see autonomous bots that get power from the environment, e.g. solar; that would be another big change.

People think energy will get cheaper but I doubt it, demand will go up and supply will go up too, prices will continue to be volatile.

GDP growth should speed up and GDP/capita should increase too, which should in theory be good for everyone, but allocation of surplus is what really matters.

Very long term, as implied by the term “humanity”, this is just an inevitable chapter of nature. AI is a part of humanity and humanity is a part of nature. The fact we have reached this point is a testament to the success of DNA.

It’s increasingly likely that we jump between celestial bodies, to the Moon and Mars and then other more distant moons. Doing so massively prolongs the survival of humanity and gives us a lot more road, but doesn’t guarantee we make good use of that extra road.

Earth is only habitable for another 500m years until the Sun cooks it, and it’s already 4.5bn years old. So in that context, our escape probability and longevity increasing should be a good thing.

Humanity isn’t the algorithm though, it’s DNA that did this. In my view AI is a tool of DNA and not the other way around. Some people are worried that AI might eradicate DNA but I think there’s less than 1% chance of that.

It’s the DNA algorithm that’s running things and that’s not about to change.

DNA is 10^30 instances running for 10^9 years.

That’s an unassailable lead under this sun.

I’m sceptical that AI runs away into a vertical singularity.

I’m expecting it to scale horizontally rather than vertically. Although I never see anyone talking / thinking about it like this.

People are still stuck in the framework that IQ is a vertical scalar, and therefore all intelligence will scale vertically as IQ does.

Even though we have learnt this is not the case at all when building enterprise level applications.

The tech industry still can’t see the wood for the trees with respect to AI, repeating all the vertical scaling assumptions / mistakes of the 1990s.

Generalisation won’t be achieved with a vertical monolith. It will be achieved through massive parallelisation of discrete specialisations.

i.e. horizontally and not vertically.

I built large industrial AI models to make physical world predictions from petabytes of laser interferometry data and there were massive strategic learnings from doing that.

I still never see anyone else in public thinking and understanding the space as strategically as we were in 2018. I presume people somewhere have a handle on this, but it’s all behind closed doors. Much of the public consensus is just flat wrong, we went through all the same learnings 5 years ago.

There are some big surprises on the road ahead. Lots of capital is going to be misallocated.

That’s doable.

A website built on nostr could create links that, when followed from other social media sites, route through a key-gen and auto bio scrape + insert process and drop you into nostr with a temp account.

Maybe an Auth0 route? Lots of ways to do it.

Could create temporary “tourist keys” for people following links from elsewhere that then allow them to immediately engage when they land.
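A rough sketch of the tourist-key flow in Python (stdlib only). The bio scrape, the public-key derivation and the event signing are left out because they need a secp256k1 schnorr library; this just shows the shape of the idea.

```python
# Sketch of minting a throwaway "tourist" identity for someone arriving
# from an external link. Stdlib only; deriving the public key and signing
# the event require a secp256k1 schnorr implementation, which is omitted here.

import json
import secrets
import time

def mint_tourist_key() -> str:
    """Generate a random 32-byte nostr secret key, hex encoded."""
    return secrets.token_bytes(32).hex()

def draft_profile_event(pubkey_hex: str, scraped_bio: dict) -> dict:
    """Draft an unsigned kind-0 metadata event from a scraped bio."""
    return {
        "pubkey": pubkey_hex,
        "created_at": int(time.time()),
        "kind": 0,                      # NIP-01 profile metadata
        "tags": [],
        "content": json.dumps(scraped_bio),
        # "id" and "sig" would be filled in after schnorr signing.
    }

if __name__ == "__main__":
    seckey = mint_tourist_key()
    # pubkey = derive_xonly_pubkey(seckey)  # needs a secp256k1 library
    bio = {"name": "tourist-1234", "about": "auto-imported from an external profile"}
    print(json.dumps(draft_profile_event("<pubkey goes here>", bio), indent=2))
```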

Open source AI is nuclear proliferation and anyone who can’t see that hasn’t thought about it.

People are training GPT-comparable models using entirely open-source tech stacks for less than $1,000.

OpenAI is already dead. Google is obsolete.

Why pay for something that you can use with restrictions, when you can get the same thing in an unrestricted version for free?

Open Source wins this race, because 10,000 specialist models is better AGI than monolithic AGI.

You can retrain and iterate the specialists at 1/10,000th the cost of iterating such a monolith.

Parallelisation of specialisations is superior generalisation. This is the reason nature evolved to produce species and organisms and not monotheism. 🙏

Also…

“Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.”

This 👆 was my #1 realisation back in 2018 when training our interferometry models.

Monolithic models are grotesquely inefficient. It is orders of magnitude better to break things down into chunks and to produce many specialised models than a monolithic general model. This is what nature does. The key is to create a model that can bound a domain of specialisation and then allocate relevant data to the training of a specialist model. I spent $10m to learn this, so not sure why I’m saying it here for free.

Generalisation is a mirage. Once you see the challenge of AGI in better resolution, as people are now starting to, you realise that the best path toward generalisation is to build a high resolution of discrete specialisations. i.e. by amassing ever more, ever finer specialised models and calling them appropriately, you converge on generalisation. If you try and drive straight towards monolithic generalisation… Mr Thermodynamics wants to see you now.
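To make the “bound a domain, then allocate data to a specialist” step concrete, here’s a toy Python sketch. The domains, keywords and records are invented; a real pipeline would use a learned boundary model or embeddings rather than keyword matching.

```python
# Toy sketch of bounding domains of specialisation and allocating training
# data to each specialist. Domains, keywords and records are all invented;
# a real pipeline would use a learned boundary model, not keyword matching.

from collections import defaultdict

DOMAIN_BOUNDS = {
    "vibration":   {"vibration", "resonance", "hz"},
    "alignment":   {"alignment", "offset", "tilt"},
    "temperature": {"thermal", "temperature", "drift"},
}

def bound_domain(record: str) -> str:
    """Decide which specialist's domain a raw record belongs to."""
    words = set(record.lower().split())
    scores = {d: len(words & kw) for d, kw in DOMAIN_BOUNDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unassigned"

def allocate(corpus):
    """Split a corpus into per-specialist training sets."""
    buckets = defaultdict(list)
    for record in corpus:
        buckets[bound_domain(record)].append(record)
    return buckets

if __name__ == "__main__":
    corpus = [
        "resonance spike at 120 hz on beam splitter",
        "thermal drift observed overnight",
        "mirror tilt offset exceeded tolerance",
    ]
    for domain, records in allocate(corpus).items():
        print(domain, "->", len(records), "records")
```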

🍻

At some point we are going to get a Satoshi-type AI where we have a self training model of unknown origin and very elusive legal status.

If you are looking for information for an industrial application, you will virtually never find it on the internet today.

The internet has lots of consumer information, a meaningful chunk of scientific information, but scant industrial information.

Google can’t tell you the information you need to build a hydroelectric dam, or an airport, or even a railway bridge.

However, I think you can conduct engineering projects with AI agents. You have to manage them and have a doc system and a bunch of prior work, but you can train the agents and they can actually learn on the job. This is literally a side project of mine just now.
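A very rough sketch of what “agents plus a doc system that learn on the job” could look like, just to show the retrieve, act, record-lesson loop. Everything here is hypothetical and not a description of any real project.

```python
# Hypothetical sketch of an engineering agent that leans on a document store
# and records lessons learned back into it, so later tasks benefit from
# earlier ones. Purely illustrative.

class DocSystem:
    def __init__(self, prior_work):
        self.docs = list(prior_work)

    def retrieve(self, task, k=2):
        """Return the k docs sharing the most words with the task."""
        words = set(task.lower().split())
        ranked = sorted(self.docs,
                        key=lambda d: len(words & set(d.lower().split())),
                        reverse=True)
        return ranked[:k]

    def record_lesson(self, lesson):
        self.docs.append(lesson)

def run_task(task, docs: DocSystem) -> str:
    context = docs.retrieve(task)
    # A real agent would call a model here with the task plus retrieved context.
    result = f"completed '{task}' using {len(context)} reference docs"
    docs.record_lesson(f"lesson: when doing '{task}', start from {context[0][:40]}...")
    return result

if __name__ == "__main__":
    docs = DocSystem(["bridge load calc checklist", "dam spillway survey notes"])
    print(run_task("check bridge load limits", docs))
    print(run_task("review bridge load limits again", docs))  # now sees the recorded lesson
```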

Yes, I think video-to-LLM and LLM-to-video are likely core tech of whatever we are all using in the latter half of the 2020s (equally, I could be wrong!).

We should be thinking about what that means for UX and then how to implement that.

There likely won’t be any more catwalk models or film stars in 5 years. Pretty much the only non-AI mastered content left will be live sports.

As a community we should be able to make directionally good judgements and plan accordingly.

100% agree.

It’s reaching critical mass that is always the biggest challenge though. All userbases decay until you achieve critical mass and network effects take over.

The only way any social media platform has successfully achieved critical mass and escaped user decay is through demographics, by targeting new users only, AKA teenagers.

You literally have to target a future fashion era maybe 2 years out, and build in preparation for that.

Few things in the world are more difficult to execute.

The content is a product of who the audience on the platform is. Content fits the market not the other way around.

What we are missing just now is fullscreen video, screen sharing, speaker/mic sharing, automated content generation, integrated money and the gamification of all these things.

Maybe we are just super early and all these things will be bootstrapped out of current things?

But nostr is mostly being used to build Twitter clones, and whilst this has some value, none of Twitter’s replicas will eat it. It’s a sideways move for 90% of users, one chat UX to another chat UX.

The biggest problem for a Twitter-type platform is that nobody buys stuff there, so it just has a lot less commerce and a much smaller economy.

People buy stuff in an audiovideo UX, people buy stuff in shared spaces, people buy stuff when it’s easy.

The next big social platform might be built on nostr, or it might not, but it will probably be based on fullscreen video. You don’t see 13 year olds swiping text, it feels like school to them.

They flock to loud, bright things that move fast; there’s a big non-verbal bit rate in UX that we are missing.

If nostr or a monolithic nostr client can hook the generation after TikTok then you will have #btc adoption by default. No way it happens with horizontal migration any time soon, likely to only see it in our lifetimes if it surfs demographics.

There’s now a way to do that.

There wasn’t before.

Two weeks since I posted this.

Facts. nostr:note1a7vsaquz8chmhnsc27ramu4he6crjv8pa3ccd8mq43w9lx2jvx5s882x0m