Yes, and it's a good thing.


Discussion

I’m sceptical that AI runs away into a vertical singularity.

I’m expecting it to scale horizontally rather than vertically, although I never see anyone talking or thinking about it like this.

People are still stuck in the framework that IQ is a vertical scalar, and therefore that all intelligence will scale vertically as IQ does.

Even though we have learnt this is not the case at all when building enterprise level applications.

The tech industry still can’t see the wood for the trees with respect to AI, repeating all the vertical-scaling assumptions and mistakes of the 1990s.

Generalisation won’t be achieved with a vertical monolith. It will be achieved through massive parallelisation of discrete specialisations.

i.e. horizontally and not vertically.
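A minimal sketch of the horizontal idea: instead of one vertical monolith, queries are dispatched across many discrete specialists, and capability grows by adding specialists rather than enlarging a single model. All names here (the specialist registry, the topics) are hypothetical illustrations, not anything from an actual system.

```python
# Hedged sketch: generalisation via routing over discrete specialists.
# Specialist names and handlers are invented for illustration only.

SPECIALISTS = {
    "maths":  lambda q: f"[maths model] {q}",
    "code":   lambda q: f"[code model] {q}",
    "vision": lambda q: f"[vision model] {q}",
}

def route(query: str, topic: str) -> str:
    """Dispatch a query to the matching specialist.

    Scaling here is horizontal: add entries to SPECIALISTS to cover
    more domains, rather than growing one model vertically.
    """
    handler = SPECIALISTS.get(topic)
    if handler is None:
        raise KeyError(f"no specialist for topic: {topic}")
    return handler(query)

print(route("2 + 2", "maths"))
```

The design choice mirrors the enterprise-architecture lesson referenced above: capacity comes from adding more discrete, independently replaceable units, not from a bigger single unit.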

I built large industrial AI models to make physical world predictions from petabytes of laser interferometry data and there were massive strategic learnings from doing that.

I still never see anyone else in public thinking about and understanding the space as strategically as we were in 2018. I presume people somewhere have a handle on this, but it’s all behind closed doors. Much of the public consensus is just flat wrong; we went through all the same learnings 5 years ago.

There are some big surprises on the road ahead. Lots of capital is going to be misallocated.

If you had to sum up your outlook for humanity wrt AI, is it positive or negative? Or would classifying it either way be too reductive?

Like anything, it’s good and bad.

From a socioeconomic perspective, it is a huge amount of change, so expect older people to hate it and younger people to love it.

That may also cue politicians to take stereotypical positions: you might expect right-leaning (older-targeting) politicians to find reasons to hate it and left-leaning (younger-targeting) politicians to find reasons to like it.

Tie UBI and robot labour into this and you can begin to imagine how the debate will go.

But none of that matters much in the long term, because it’s going to happen.

From an ecological perspective, AI is going to speed up nature: resource gathering will accelerate, genetic modification will accelerate, building will accelerate. We might also see autonomous bots that draw power from the environment, e.g. solar; that would be another big change.

People think energy will get cheaper, but I doubt it: demand will go up, supply will go up too, and prices will continue to be volatile.

GDP growth should speed up and GDP/capita should increase too, which should in theory be good for everyone, but allocation of surplus is what really matters.

In the very long term, as implied by the term “humanity”, this is just an inevitable chapter of nature. AI is a part of humanity, and humanity is a part of nature. The fact that we have reached this point is testament to the success of DNA.

It’s increasingly likely that we jump to other celestial bodies: the Moon and Mars, then more distant moons. Doing so massively prolongs the survival of humanity and gives us a lot more road, but it doesn’t guarantee we make good use of that extra road.

Earth is only habitable for another ~500 million years before the Sun cooks it, and it’s already 4.5 billion years old. So in that context, increasing our escape probability and longevity should be a good thing.

Humanity isn’t the algorithm though, it’s DNA that did this. In my view AI is a tool of DNA and not the other way around. Some people are worried that AI might eradicate DNA but I think there’s less than 1% chance of that.

It’s the DNA algorithm that’s running things and that’s not about to change.

DNA is 10^30 instances running for 10^9 years.

That’s an unassailable lead under this sun.
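As a back-of-envelope check on the figures above (the orders of magnitude are the commenter’s, not measured values), the combined search budget of that many instances over that much time works out as follows:

```python
# Back-of-envelope arithmetic for the DNA "lead" claim.
# Both figures are the commenter's order-of-magnitude estimates.

instances = 10**30   # rough count of DNA-bearing organisms alive at once
years = 10**9        # rough duration DNA has been running

# Total parallel search budget, in instance-years.
instance_years = instances * years

print(f"{instance_years:.0e}")  # -> 1e+39
```

That 10^39 instance-year figure is what makes the lead look unassailable: no engineered system has a comparable accumulated parallel runtime.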

Well said Stu.

Artificial intelligence is a tool, just like a fighter jet: it can destroy everything, but it needs a key that controls the switch. In the pioneering period after a new technology’s birth there is always disorder, but artificial intelligence is so powerful, and so easy to obtain, that it may be used to do evil. I believe there is always a way to prevent something like this from happening, but we need to be cautiously optimistic. 🙏

Yudkowsky and his ilk are 21st-century Luddites. That someone with no education and no discernible skills would be worried about AI eating their lunch should surprise no one.

There will be a lot of bumps in the road but AI will make all of our lives better.

IMO AI is only good if we are ruled by a pro-White fascist government, but otherwise it has the potential to destroy everything.

Unfortunately we are ruled by the most evil possible group of people instead.

I commented earlier that hardware is the only place I see anyone working out a commercial advantage at the moment, because the software will be distributed.

You’ve obviously got a lot more insight into that, which I’d be keen to hear.

And agreed, there will be a lot of misallocation of capital 😂 Google looks bad now, but MS won’t look any better. When the printer goes brrr and the VCs go wild, it could be really entertaining!