Haha my reading is that you’re conflating all AI with LLMs. If you’re only talking about the LLM implementations we have now then of course I agree.

This is what I’m responding to (from your first note)

“I’ve made the point previously on nostr; that AI will rapidly advance within the domain of human knowledge but will struggle to advance at a comparable rate beyond the frontier of what humans already know. ie no runaway singularity just yet.”

And what’s a PIN?


Discussion

Should be PINN, Physics Informed Neural Network.
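Roughly, a PINN bakes known physics into training by adding the residual of the governing equation to the ordinary data-fitting loss. A hedged sketch of the composite objective (the symbols and weighting term λ here are a generic formulation, not any specific paper's):

```latex
\mathcal{L}(\theta) = \underbrace{\frac{1}{N}\sum_{i=1}^{N} \left\lVert u_\theta(x_i) - u_i \right\rVert^2}_{\text{data loss}}
\;+\; \lambda \underbrace{\frac{1}{M}\sum_{j=1}^{M} \left\lVert \mathcal{N}[u_\theta](\tilde{x}_j) \right\rVert^2}_{\text{physics (PDE residual) loss}}
```

where $u_\theta$ is the network, $\mathcal{N}$ is the differential operator of the governing PDE, and the $\tilde{x}_j$ are collocation points where the physics is enforced even without labels.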

Yeah, we have working LLMs, but to do innovation we need a different kind of AI that we don't have working well yet.

So my point is that ChatGPT doesn't run away into Skynet or the Matrix.

AI will advance through a series of punctuated equilibria.

Ah nice. Ok yeah we’re pretty aligned then.

Main diff is this maybe: We can tell GPT-4 about its weaknesses, and about APIs that address them, and it then uses those APIs when needed. I think if we can scale the input context to about 1-2 million tokens, and pair it with good APIs for its weaknesses (like "what character is at index n", or a physics sim), we might be just a few years from an AI system surpassing us.
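That tool-use pattern can be sketched in a few lines. This is a minimal illustration, not the real GPT-4 tool API: the `char_at` helper and the dispatch table are assumptions for the example, and in a real system the model itself would emit the tool call rather than us hard-coding it.

```python
def char_at(text: str, n: int) -> str:
    """Exact character indexing -- something token-based LLMs handle poorly."""
    return text[n]

# Registry of helper APIs that cover the model's known weaknesses.
TOOLS = {"char_at": char_at}

def run_tool_call(name: str, *args):
    """Dispatch a model-requested tool call to the matching helper."""
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name](*args)

# e.g. the model asks: what character is at index 7 of "punctuated"?
result = run_tool_call("char_at", "punctuated", 7)
```

The point of the indirection is that the model only has to learn *when* to delegate, not how to do the computation itself.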

How many tokens do you need to represent the non-googleable context you use to do your job?

We will have nuclear proliferation well before most people lose their jobs.

Maybe as soon as this year, school shooters will arm themselves with live anthrax sprays instead of AR-15s.

People have pretty weak imaginations when it comes to what’s actually going to happen next.

Sheesh. That’s bad stuff. Luckily on the unlikely side :)

But for real, how many tokens do you think the non-googleable part of your job can be compressed into? It should just be whatever info someone with nearly infinite time, patience, access to the internet, and modest IQ would need to do your job as well as you. 1-2M tokens maybe?

I don’t think malevolent people becoming more powerful is unlikely. Information gradients between people are about to flatten dramatically.

You can already make very dangerous viruses with guidance from GPT-4. Most postdoc virologists are capable of making a WMD; that bar has probably already been lowered to all science undergrads.

I don’t know about my job but I do take the point. It’s just numbers. My job will change dramatically.

Maybe to being the person who writes the 1-2M tokens 🤔

I trained a neural network on laser interferometry data, and it had an input layer of more than 10,000 inputs.

It also had a sample rate >100 kHz.

It had no LLM or language model, but it could detect and label pretty much any kinetic phenomenon.
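For a sense of scale, a bare-bones forward pass over one such wide sensor frame might look like this. The layer sizes, label count, and random weights are illustrative assumptions only (the original network's architecture isn't described); a trained model would load learned weights instead.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS = 10_000   # one frame of interferometer channels
N_HIDDEN = 256      # assumed hidden width, purely illustrative
N_CLASSES = 8       # assumed number of kinetic-event labels

# Randomly initialized weights stand in for a trained model.
W1 = rng.normal(0, 0.01, (N_INPUTS, N_HIDDEN))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(0, 0.01, (N_HIDDEN, N_CLASSES))
b2 = np.zeros(N_CLASSES)

def classify(frame: np.ndarray) -> int:
    """Label one sensor frame with the most likely event class."""
    h = np.maximum(frame @ W1 + b1, 0.0)  # ReLU hidden layer
    logits = h @ W2 + b2
    return int(np.argmax(logits))

# At a >100 kHz sample rate this would run per-frame over a streaming buffer.
label = classify(rng.normal(size=N_INPUTS))
```

No tokens, no language, just a classifier over raw channels, which is the contrast with LLMs being drawn above.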