Replying to ⚡️🌱🌙

I’ve made the point previously on nostr that AI will rapidly advance within the domain of human knowledge but will struggle to advance at a comparable rate beyond the frontier of what humans already know, i.e. no runaway singularity just yet.

Lots of people think I am completely wrong on this point, so I have asked an expert on this matter.

*****

⚡️🌱🌙

Is it more difficult in a non-linear way for LLMs to undertake innovation, e.g. inventing entirely new technologies for specific problems?

*****

ChatGPT-4

Yes, it is generally more challenging for LLMs to undertake innovation or invent entirely new technologies for specific problems. The primary reason is that LLMs, like other AI models, learn patterns, associations, and knowledge from their training data, which is composed of existing human-generated content. Their primary strength lies in identifying and extrapolating patterns from this data.

Inventing entirely new technologies or generating groundbreaking ideas requires a level of creativity and abstract reasoning that goes beyond recognizing patterns in existing data. Although LLMs can generate novel combinations of existing ideas, these are still constrained by the knowledge and concepts present in the training data.

Human creativity, on the other hand, often involves combining ideas from various domains, generating unique insights, and making intuitive leaps that cannot be directly traced back to specific pieces of information. This kind of creative problem-solving is currently beyond the scope of LLMs and AI models in general.

That being said, AI models can still be valuable tools in the innovation process. They can assist researchers, engineers, and inventors in generating new ideas by providing relevant information, analyzing data, and suggesting possible combinations of existing concepts. However, the responsibility for genuine innovation and creative problem-solving still primarily lies with humans.

What would you say to this conceivable critique? You and GPT-4 are wrong here; true innovation is nothing but a heroic myth. Instead, innovation is the integration of previously existing information. We all stand on the shoulders of giants; sometimes the footing is less obvious or less documented in retrospect, but it’s there nonetheless. Once we get these things to be a little more resistant to leading questions, they will be able to talk with each other, select the best ideas for experimentation based on current knowledge, and use the new knowledge they discover to innovate.


Discussion

Invent something that is patentable right now… to prove your point.

I recently did. And it didn’t take any quantum leaps of insight.

I think the key is that LLMs need to know lots of stuff, which they do, and then they need to be prompted to integrate it together, either in a self-adversarial way or by having multiple LLMs argue about stuff until they settle on something to actually test that makes sense to them all (see the sketch below). Currently GPT-3.5 can be fooled a little too easily, so I’m not sure signal will come out of the end of that conversation. But it’s getting better, and humans get fooled all the time anyway. Despite that, some things they get convinced of actually work and push the state of the art forward.
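
A minimal sketch of that debate loop, assuming a hypothetical call_llm stub (swap in any real chat-completion client):

```python
# Minimal sketch, not a working agent: call_llm is a hypothetical stub
# standing in for any chat-completion client.

def call_llm(system: str, prompt: str) -> str:
    raise NotImplementedError("plug in your LLM provider here")

def debate(problem: str, rounds: int = 4) -> str:
    """Have two roles argue, then let a judge pick something worth testing."""
    proposal = call_llm("You are an inventor.", f"Propose a solution to: {problem}")
    for _ in range(rounds):
        critique = call_llm(
            "You are a skeptical reviewer. Find the flaws.",
            f"Problem: {problem}\nProposal: {proposal}",
        )
        proposal = call_llm(
            "You are an inventor. Revise the proposal to survive the critique.",
            f"Problem: {problem}\nProposal: {proposal}\nCritique: {critique}",
        )
    verdict = call_llm(
        "You are a judge. Reply TEST or REJECT, then explain.",
        f"Problem: {problem}\nFinal proposal: {proposal}",
    )
    return proposal if verdict.startswith("TEST") else ""
```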

What?

I don’t think the actual content of what I invented is so vital to the point that I’d want to dox this account to prove that inventions are hugely dependent on prior progress, and that there isn’t a magical leap of intuition. I think reading a random sampling of patents can show that. Here’s a random start. https://image-ppubs.uspto.gov/dirsearch-public/print/downloadPdf/11587362

And if there are any inventions that you think of as being truly without logical antecedent, feel free to share.

I just think that moment of inventive inspiration is nothing more than the human brain finally connecting the requisite pieces of information in a new way and extrapolating. It wouldn’t be the most efficient way, but there’s no reason I know of that an algorithm which randomly subsets information and then considers the combinations can’t do that (see the toy sketch below). And computers can put in long hours for cheap.
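
A toy version of that subset-and-consider loop; score_combination is hypothetical (in practice it could be an LLM rating novelty and feasibility):

```python
import random

def score_combination(facts):
    # Hypothetical scorer: e.g. ask an LLM to rate the novelty and
    # feasibility of combining these facts, returning a value in [0, 1].
    raise NotImplementedError("plug in a scorer, e.g. an LLM judge")

def search_for_inventions(knowledge, trials=10_000, k=3):
    """Randomly subset a knowledge base, then 'consider' each subset."""
    best, best_score = None, 0.0
    for _ in range(trials):
        subset = tuple(random.sample(knowledge, k))  # random subsetting
        score = score_combination(subset)            # considering
        if score > best_score:
            best, best_score = subset, score
    return best, best_score
```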

I think people, and I’m not saying you necessarily, believe that humans have something special that lets them invent. Something that AI can’t learn. Which is a different claim from “hasn’t learned it yet.” I think eternal human dominance is a comforting hypothesis, but one that doesn’t have any data to back it up. If humans are so good at inventing, why can’t they invent something that is just as good or way better at invention?

We can, but LLMs ain’t it.

You seem to have conflated LLMs with all AI.

LLMs are not PINs and are not GANs.

LLMs will not easily progress from learning human knowledge to adventuring beyond the frontier of human knowledge.

That’s just not how LLMs work.

Haha, my reading is that you’re conflating all AI with LLMs. If you’re only talking about the LLM implementations we have now, then of course I agree.

This is what I’m responding to (from your first note):

“I’ve made the point previously on nostr that AI will rapidly advance within the domain of human knowledge but will struggle to advance at a comparable rate beyond the frontier of what humans already know, i.e. no runaway singularity just yet.”

And what’s a PIN?

Should be PINN: Physics-Informed Neural Network.

Yeah, we have working LLMs, but to do innovation we need a different kind of AI that we don’t have working well yet.

So my point is that ChatGPT doesn’t run away into Skynet or the Matrix.

AI will advance through a series of punctuated equilibria.

Ah nice. Ok yeah we’re pretty aligned then.

Main diff is maybe this: we can tell GPT-4 about its weaknesses, and about APIs that address them, and it then uses those APIs when needed. I think if we can scale the input context to about 1-2 million tokens and pair it with good APIs for its weaknesses (like “what character is at index n”, or a physics sim), we might be just a few years from an AI system surpassing us. A rough sketch of that pairing is below.
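
This is just an invented tool-dispatch format for illustration; real providers each define their own function-calling schema:

```python
# Tools that patch known LLM weaknesses. A token-based model is unreliable
# at exact character indexing; plain code is not.

def char_at_index(text: str, n: int) -> str:
    return text[n]

def physics_sim(mass_kg: float, accel_ms2: float) -> float:
    # Stand-in for a real simulator: here just F = m * a.
    return mass_kg * accel_ms2

TOOLS = {"char_at_index": char_at_index, "physics_sim": physics_sim}

def run_tool(name: str, **kwargs):
    """The model emits (name, kwargs); the harness executes and returns the result."""
    return TOOLS[name](**kwargs)

print(run_tool("char_at_index", text="innovation", n=3))   # -> 'o'
print(run_tool("physics_sim", mass_kg=2.0, accel_ms2=9.8)) # -> 19.6
```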

How many tokens do you need to represent the non-googleable context you use to do your job?

We will have nuclear proliferation well before most people lose their jobs.

Maybe as soon as this year, school shooters will arm themselves with live anthrax sprays instead of AR-15s.

People have pretty weak imaginations when it comes to what’s actually going to happen next.

Sheesh. That’s bad stuff. Luckily on the unlikely side :)

But for real, how many tokens do you think the non-googleable part of your job can be compressed into? It should just be whatever info someone with nearly infinite time, patience, access to the internet, and modest IQ would need to do your job as well as you. 1-2M tokens maybe?
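
For scale, here’s the back-of-envelope behind that 1-2M figure, using the common rule of thumb of roughly 0.75 English words per token (varies by tokenizer and text):

```python
WORDS_PER_TOKEN = 0.75   # rough heuristic; varies by tokenizer and text
WORDS_PER_BOOK = 90_000  # a typical full-length non-fiction book

for tokens in (1_000_000, 2_000_000):
    words = tokens * WORDS_PER_TOKEN
    print(f"{tokens:,} tokens ~= {words:,.0f} words ~= {words / WORDS_PER_BOOK:.0f} books")
```

So 1-2M tokens is on the order of eight to seventeen books’ worth of text.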

I don’t think malevolent people becoming more powerful is unlikely. Information gradients between people are about to flatten dramatically.

You can make very dangerous viruses with guidance from ChatGPT-4 already. Most postdoc virologists are capable of making WMDs; that bar has probably already been lowered to all science undergrads.

I don’t know about my job but I do take the point. It’s just numbers. My job will change dramatically.

Maybe to being the person who writes the 1-2M tokens 🤔

I trained a neural network on laser interferometry data, and it had an input surface of more than 10,000 inputs.

It also had a sample rate >100 kHz.

It had no language model component at all, but it could detect and label pretty much any kinetic phenomenon.
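
The note doesn’t describe the actual architecture, so purely as an illustration of that input scale, here is a generic 1D-CNN classifier with a comparable shape (all numbers assumed):

```python
import torch
import torch.nn as nn

N_CHANNELS = 10_000  # interferometry inputs (assumed layout)
WINDOW = 256         # samples per window; at 100 kHz this is ~2.5 ms of signal
N_CLASSES = 8        # hypothetical set of kinetic-event labels

# Generic 1D CNN: convolve over time across all channels, pool, classify.
model = nn.Sequential(
    nn.Conv1d(N_CHANNELS, 64, kernel_size=7, stride=2), nn.ReLU(),
    nn.Conv1d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(128, N_CLASSES),
)

x = torch.randn(4, N_CHANNELS, WINDOW)  # batch of 4 signal windows
print(model(x).shape)                   # -> torch.Size([4, 8])
```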