If you game it out, you reach these conclusions.
This is why big tech is shredding payroll just now. They’ve figured it out. They know what’s coming.
We are going to be clawing at each other's throats to escape the 90% of humanity made obsolete by AI.
This isn’t even super intelligent AGI.
It’s just cheat code level AI.
One law I expect to see somewhere in the West is making it illegal to fire people.
Then massive protectionism is coming.
I think the chances of any government getting out ahead of AI is nil.
They haven’t even figured out how to tax capital without it escaping every time. They’ve been trying for 2,500 years.
So I do agree, it’s gonna be people who tackle this one.
I think it’s almost impossible for small entities to get sufficient data to train powerful models at the moment. But if the models are open source then small entities will have them.
We are 2 years away from having a personalised AI on your smartphone that knows personal information about you and works to serve your interests. You can shun this if you like, but you will have a hard time competing with people who embrace it.
We will also have jailbroken versions of AI running on Linux devices. These will be easy to set to nefarious / malevolent tasks, and bad people will do that.
Including all kinds of blackmail / fraud / threats / criminal activity you can think of.
By 2030 the world will look very different.
The threat level is going vertical and soft targets are going to get hammered.
Yeah it’s wild.
The US is so big and so diverse, they have a really hard time writing sensible laws.
Their legislators resort to clearly bad practices like blindsiding each other with 60,000-page documents to read in 2 days.
They probably shouldn’t be doing so much new legislation. At what point is the job of legislating finished? Do people really think legislation is an open-ended project?
After 200 years, you should be about finished.
Maybe the constitution should have made some provision for halvings? 😂
Yes, the TikTok Act barely mentions TikTok.
It’s basically a declaration of USG sovereignty over all wires.
I think this act could even be a serious alignment risk for some future AI to interpret as a threat.
You have to think very hard about all of your actions now. The world today is not a relevant guide to the true consequences of today’s actions.
It’s not the super powers you should worry about.
You should worry about terrorists getting hold of modern AI (they already have it).
With zero epidemiology knowledge, I could create the coronavirus in my garage today, using Chat-GPT4 and a budget of $20k.
It’s very hard to handle viruses safely.
It is not hard to create a pandemic in your garage.
Whilst it is crazy to see… I suggest, the giant is still asleep.
These are the people who split the atom, walked on the moon, created the internet, mapped the human genome, and are now raising machine intelligence.
They are not done.
This is a powerful mindset.
One of the hardest things to learn is focus. Your most limited resource, the only one you can spend but never earn, is time.
Learning how to allocate your time to the things that are most valuable to you is probably the most difficult and most rewarding thing anyone can do.
Yeah, I see a lot of people confusing the capabilities of this one single product (Chat-GPT) with the capabilities of the technology on which it is built (LLMs).
Products are bound by technology, but technology is not bound by any single product.
LLMs are now obviously extremely powerful, surprisingly so, and lots of people will be doing lots of powerful things with them. It’s going to be easy to get investment for those things now.
But the errors and limitations of Chat-GPT are in no way errors and limitations of LLMs. Chat-GPT is a product; LLMs are the technology.
Just because a Volkswagen Beetle can’t take a corner at 80mph doesn’t stop anyone using the same four-wheel chassis technology to build an F1 car that can take the same corner at 180mph.
Focusing on the things that Chat-GPT cannot do, or cannot do well, is a mistake. Look at what it proves LLMs can do.
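To make the product-versus-technology point concrete, here is a minimal sketch of driving the same LLM technology directly through an open-source model, entirely outside the Chat-GPT product. The model name, prompt, and generation settings are illustrative assumptions, not a recommendation.

```python
# Sketch: using LLM technology directly via an open-weights model,
# rather than through any one product. Model choice is an assumed example.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.2"  # hypothetical pick of open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarise the trade-offs of on-device personal AI assistants."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The point is only that the capabilities live in the model, not in the chat interface wrapped around it.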
You need a reply for this to display as a thread, so the image can be shown without cropping.
🙃
Elon Musk invests in OpenAI and then rants endlessly 🤬 about it going from a non-profit to a profit-making org.
🤔
Same dude invests in Twitter and rants endlessly about it going from a profit-making to a non-profit-making org.
🤡 🌎
What if the carousel changes with screen position, i.e. it shifts as you swipe?
Ugh, the cropping in Damus is annoying.
#[1] is this how it’s supposed to work?
Do we need to correctly guess aspect?
Is this smart or dumb?
Can it be both?

I trained a neural network on laser interferometry data; it had an input surface of more than 10,000 inputs.
It also had a sample rate >100 kHz.
It had no LLM or language model, but it could detect and label pretty much any kinetic phenomenon.
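For anyone curious what that kind of setup looks like, here is a rough sketch of a classifier over a >10,000-channel input. The framework (PyTorch), layer sizes, and the number of phenomenon labels are my assumptions, not the actual architecture described above.

```python
# Rough sketch of a classifier over a wide interferometry-style input.
# Layer sizes and label count are assumptions for illustration only.
import torch
import torch.nn as nn

N_INPUTS = 10_000   # >10,000 inputs per sample, as described above
N_CLASSES = 8       # hypothetical set of kinetic-phenomenon labels

class InterferometryClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_INPUTS, 512),
            nn.ReLU(),
            nn.Linear(512, 128),
            nn.ReLU(),
            nn.Linear(128, N_CLASSES),  # one logit per phenomenon label
        )

    def forward(self, x):
        return self.net(x)

# At a >100 kHz sample rate you would stream batches rather than hold
# the full recording in memory; this dummy batch just stands in for real data.
model = InterferometryClassifier()
batch = torch.randn(64, N_INPUTS)
logits = model(batch)
print(logits.shape)  # torch.Size([64, 8])
```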