Machu Pikacchu
1e908fbc1d131c17a87f32069f53f64f45c75f91a2f6d43f8aa6410974da5562
Interested in bitcoin and physics and their intersection. https://github.com/machuPikacchuBTC/bitcoin

A lot of the other comments say he’s not worth reading, but I’ll take the other side.

It’s been a while since I read the book and watched his interviews, but IIRC he promotes the idea of “the fourth turning” and of long-term economic cycles being predictable. He uses examples from history to back his theory, and he talks about debt burdens and resets.

If you’ve already gone down the Bitcoin rabbit hole you might not get much out of it, but maybe you’ll pick something up.

The most unrealistic part is the idea that governments haven’t already woken up to AI.

A former NSA director and head of US Cyber Command [1] has been on the board of OpenAI for almost a year. The Chinese government has deemed AI critical to its goals [2] at least as far back as 2017.

1. https://apnews.com/article/openai-nsa-director-paul-nakasone-cyber-command-6ef612a3a0fcaef05480bbd1ebbd79b1

2. https://datagovhub.elliott.gwu.edu/china-ai-strategy/

Is the market expecting a shortage of dollars in the near term?

All assets (equities, bonds, bitcoin, even gold) are selling off now, so there’s a flight to cash.

What breaks next?

Don’t be afraid to learn in public.

If you don’t say or do something unintentionally stupid at least once a week, then you run the risk of being too conservative with your enlightenment.

Eventually you become numb to the cringe.

Computational irreducibility implies that even if we get super-powerful AI that can forecast far better than humans, it still won’t be able to predict arbitrarily far into the future with any sort of accuracy.

The future is uncertain for a superintelligence just as it is for humans, albeit relatively less so.

nostr:note1e48yn9j0h7kjxj65s86w4mr9qfyzd23ahve0s622ljhgfggy2zgsdpqs7y

In the era of weaponized AI, the only winning move is not to play. However, we’re stuck in a Cold War mentality and can’t trust that everyone out there will be chill.

So we’re all forced to push forward aggressively.

Decentralizing AI should be a top priority for research labs out there. Incentivize the best models to be neutral and preferably aligned with humanity.

nostr:note17fkqyfq2a0lv03ckcypclmh074433aczcqv6k4c2p33063j33fxse4uc3f

An entertaining and somewhat plausible future:

https://ai-2027.com/

All of the stable, large-scale, natural complex systems are built from small building blocks acting locally (e.g. a human body made of cells, an economy made of people, a star undergoing fusion).

There’s a temptation among (often well-intentioned) designers, engineers, policymakers, etc. to start from a desired outcome and take actions at the macro level to create it. This rarely results in a sustainable, stable system.

If we instead start at the local level and build out from there, we can create a more robust and organic system, but we run into uncertainty at the macro level due to the principle of computational irreducibility [1].

Computational irreducibility echoes Gödel’s incompleteness theorems, which say that in any consistent formal system powerful enough to express arithmetic, there are true statements that can’t be proven within the system.

So it seems we either embrace uncertainty or we succumb to short term solutions.

1. https://mathworld.wolfram.com/ComputationalIrreducibility.html
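
A minimal sketch of both points, using Wolfram’s Rule 30 cellular automaton in Python: each cell updates from purely local rules, complex structure still emerges at the macro level, and as far as anyone knows the only way to learn the state at step n is to actually run all n steps.

```python
# Toy Rule 30 cellular automaton: purely local rules (each cell looks
# only at its immediate neighbors), yet the global pattern is complex
# and can only be determined by simulating every step.
def rule30_step(cells):
    n = len(cells)
    # Rule 30: new cell = left XOR (center OR right), wrapping at edges.
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # a single live cell in the middle

for _ in range(16):
    print("".join("#" if c else "." for c in cells))
    cells = rule30_step(cells)
```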

MSTR is a form of capital control: bitcoin flows in, fiat flows out.

Be sure to factor that into your evaluation.

And then they fought you

nostr:note15yvkke6sxxkazraxe3sf3ku67hnrs33ky72wwtwrk2s6uehqyevqw0g33m

It really depends on what you mean by AI. If by AI you mean the standard LLMs everybody uses day to day, then you’re probably right, at least for a while.

But there’s more to AI than just the chat interfaces we’re used to. For example, DeepMind published research on FunSearch, a system that pairs an LLM with an automated evaluator in an evolutionary search loop, and found it capable of solving open problems in mathematics and computer science [1].

DeepMind also trained AlphaGo, which famously played a move against Lee Sedol (move 37 in game two) so creative that experts initially dismissed it as a blunder.

Don’t forget that just 3 years ago everyone was debating whether a machine could pass the Turing test, and today nobody even thinks about it anymore.

1. https://deepmind.google/discover/blog/funsearch-making-new-discoveries-in-mathematical-sciences-using-large-language-models/
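
To make the loop concrete, here’s a toy sketch of the FunSearch-style idea. In the real system an LLM proposes program variants and an automated evaluator scores them; mutate() below is a hypothetical stand-in for the LLM so the sketch actually runs.

```python
import random

TARGET = 42

def mutate(expr):
    """Stand-in for the LLM: propose a variant of a candidate expression."""
    return expr + random.choice([" + 1", " - 1", " * 2"])

def score(expr):
    """Automated evaluator: closer to TARGET is better."""
    try:
        return -abs(eval(expr) - TARGET)
    except Exception:
        return float("-inf")

population = ["1"]
for _ in range(300):
    parent = max(population, key=score)               # best candidate so far
    population.append(mutate(parent))                 # "LLM" proposes a variant
    population = sorted(population, key=score)[-10:]  # keep a small elite pool

best = max(population, key=score)
print(best, "=", eval(best))
```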

MCP is just an API for LLM agents: it lets them access resources in real time. Those resources can change over time, and with MCP you can even give the LLM the ability to make those changes itself in a structured, constrained way.
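
Here’s a minimal server sketch, assuming the official MCP Python SDK (pip install mcp); the inventory resource and tool are hypothetical examples, not part of MCP itself.

```python
# Minimal MCP server sketch using the official Python SDK.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("inventory-demo")

INVENTORY = {"widgets": 42}  # mutable state the agent can see and change

@mcp.resource("inventory://widgets")
def widget_count() -> str:
    """Readable resource: the current widget count."""
    return str(INVENTORY["widgets"])

@mcp.tool()
def adjust_widgets(delta: int) -> str:
    """Constrained write access: the agent can change state, within limits."""
    if abs(delta) > 10:
        return "refused: change too large"
    INVENTORY["widgets"] += delta
    return f"widgets now at {INVENTORY['widgets']}"

if __name__ == "__main__":
    mcp.run()  # serves the agent over stdio by default
```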

When you train an LLM, its weights are frozen at that point. The other thing is that if you take an open-weight model and fine-tune or distill it, you can end up making it worse along some dimensions even if it gets better for your specific use case. See this article on catastrophic forgetting:

https://cobusgreyling.medium.com/catastrophic-forgetting-in-llms-bf345760e6e2
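
A toy numpy sketch of the effect, nothing to do with LLMs specifically: a tiny logistic classifier learns task A, is then fine-tuned only on task B (a shifted distribution), and its task A accuracy collapses.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(center):
    # Two Gaussian blobs around `center`, labeled 0 and 1.
    x0 = rng.normal(center - 2, 1.0, (200, 2))
    x1 = rng.normal(center + 2, 1.0, (200, 2))
    return np.vstack([x0, x1]), np.array([0] * 200 + [1] * 200)

def train(w, b, X, y, steps=500, lr=0.1):
    for _ in range(steps):
        z = np.clip(X @ w + b, -30, 30)  # clip to keep exp() stable
        p = 1 / (1 + np.exp(-z))         # sigmoid prediction
        w -= lr * X.T @ (p - y) / len(y) # logistic-loss gradients
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

Xa, ya = make_task(0.0)  # task A
Xb, yb = make_task(8.0)  # task B: same shape, shifted distribution

w, b = train(np.zeros(2), 0.0, Xa, ya)
print("task A accuracy after training on A:   ", accuracy(w, b, Xa, ya))

w, b = train(w, b, Xb, yb)  # fine-tune on B only, no task A data
print("task A accuracy after fine-tuning on B:", accuracy(w, b, Xa, ya))
```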

Omg first time in years 🤦‍♂️. Well played Carl.

nostr:note1tfzqp9jaavzrlkl7a94gs8sf7lw3nplxq9c9khcw63yhx59tjhaqcet2dq