Ah nice. Ok yeah we’re pretty aligned then.
Main diff is maybe this: we can tell GPT-4 about its weaknesses, and about APIs that address them, and it then uses those APIs when needed. I think if we can scale the input context to about 1-2 million tokens, and pair it with good APIs for its weaknesses (like "what character is at index n", or a physics sim) we might be just a few years from an AI system surpassing us.
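Rough sketch of what I mean by "APIs for its weaknesses": a tool the model can call for character-level questions it's bad at. The tool name and dispatch loop are made up for illustration, not any vendor's actual tool-calling API.

```python
# Hypothetical tool to patch an LLM's weakness at character-level
# reasoning. Names and schema are assumptions, not a real vendor API.

def char_at_index(text: str, n: int) -> str:
    """Return the character at index n, or '' if out of range."""
    return text[n] if 0 <= n < len(text) else ""

# Minimal dispatcher mapping tool names (as the model would emit them)
# to local implementations.
TOOLS = {"char_at_index": char_at_index}

def dispatch(tool_name: str, **kwargs) -> str:
    return TOOLS[tool_name](**kwargs)

print(dispatch("char_at_index", text="strawberry", n=2))  # → r
```

The point being: the model doesn't have to be good at this, it just has to know when to call out.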
How many tokens do you need to represent the non-googleable context you use to do your job?
We will have nuclear proliferation well before most people lose their jobs.
Maybe as soon as this year, school shooters will arm themselves with live anthrax sprays instead of AR-15s.
People have pretty weak imaginations when it comes to what’s actually going to happen next.
Sheesh. That’s bad stuff. Luckily on the unlikely side :)
But for real, how many tokens do you think the non-googleable part of your job can be compressed into? It should just be whatever info someone with nearly infinite time, patience, access to the internet, and modest IQ would need to do your job as well as you. 1-2M tokens maybe?
I don’t think malevolent people becoming more powerful is unlikely. Information gradients between people are about to flatten dramatically.
You can already get guidance on making very dangerous viruses from GPT-4. Most postdoc virologists are capable of making WMDs; that bar has probably already been lowered to all science undergrads.
I don’t know about my job but I do take the point. It’s just numbers. My job will change dramatically.
Maybe to being the person who writes the 1-2M tokens 🤔
I trained a neural network on laser interferometry data, and it had an input surface of more than 10,000 inputs.
It also had a sample rate >100 kHz.
It had no LLM or language model, but it could detect and label pretty much any kinetic phenomenon.
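For scale, here's a toy sketch of what a network like that consumes (not the actual model, just the input arithmetic plus a random-weight 1D conv pass): 10,000 samples at 100 kHz is a 100 ms window per input.

```python
import numpy as np

# Toy sketch, NOT the actual interferometry model: shows the input
# sizing and a single random-weight 1D conv layer over one window.
SAMPLE_RATE_HZ = 100_000   # >100 kHz sample rate
WINDOW = 10_000            # >10,000-sample input surface

rng = np.random.default_rng(0)
x = rng.standard_normal(WINDOW)      # one window of sensor data
kernel = rng.standard_normal(64)     # a "learned" filter (random here)

# Valid-mode convolution, ReLU, then global max pooling to a crude
# "kinetic event present" score.
feat = np.maximum(np.convolve(x, kernel, mode="valid"), 0.0)
score = feat.max()

print(WINDOW / SAMPLE_RATE_HZ)       # window length in seconds → 0.1
```

So each forward pass is labeling a tenth of a second of motion, no language anywhere in the loop.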