I don’t think malevolent people becoming more powerful is unlikely. Information gradients between people are about to flatten dramatically.
You can already make very dangerous viruses with guidance from GPT-4. Most postdoc virologists are capable of making WMDs; that bar has probably already been lowered to all science undergrads.
I don’t know about my job but I do take the point. It’s just numbers. My job will change dramatically.
We will have nuclear proliferation well before most people lose their jobs.
Maybe as soon as this year, school shooters will arm themselves with live anthrax sprays instead of AR-15s.
People have pretty weak imaginations when it comes to what’s actually going to happen next.
Will AI accelerate the heat death of the universe merely to serve the greed of its corporate overlords?
Should be PINN, Physics-Informed Neural Network.
Yeah, we have working LLMs, but to do innovation we need a different kind of AI that we don’t have working well yet.
So my point is that ChatGPT doesn’t run away into Skynet or the Matrix.
AI will advance through a series of punctuated equilibria.
Behold… the Zeroth Law of Robotics.

I don’t think the actual content of what I invented is so vital to the point that I’d want to dox this account to use it to prove that inventions are hugely dependent on prior progress, that there isn’t a magical leap of intuition. I think reading a random sampling of patents can show that. Here’s a random start. https://image-ppubs.uspto.gov/dirsearch-public/print/downloadPdf/11587362
And if there are any inventions that you think about as being truly without logical antecedent feel free to share.
I just think that moment of inventive inspiration is nothing more than the human brain finally connecting the requisite pieces of information in a new way and extrapolating. It wouldn’t be the most efficient way, but there’s no reason I know of that an algorithm which randomly subsets information and then considers the combinations can’t do the same. And computers can put in long hours for cheap.
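The "random subsetting, then considering" loop described above can be sketched in a few lines. Everything here is hypothetical: the concept pool, the subset size, and especially the scoring function are stand-ins for whatever a real system would use to judge a combination.

```python
import random

# Hypothetical pool of prior-art "pieces of information" to recombine.
CONCEPTS = [
    "lithium battery", "drone rotor", "heat pump", "piezoelectric film",
    "mesh network", "phase-change material", "camera sensor", "flywheel",
]

def score(combo):
    """The 'considering' step: rate how promising a combination looks.
    A real system would use simulation, search, or a learned model; this
    dummy deterministic score just stands in for that judgment."""
    return sum(len(c) for c in combo) % 17

def invent(trials=1000, subset_size=3, seed=42):
    """Randomly subset the concept pool many times and keep the
    best-scoring combination; computers can run this loop cheaply
    for as long as you like."""
    rng = random.Random(seed)
    best, best_score = None, -1
    for _ in range(trials):
        combo = tuple(sorted(rng.sample(CONCEPTS, subset_size)))
        s = score(combo)
        if s > best_score:
            best, best_score = combo, s
    return best, best_score
```

It isn’t efficient, and the scorer is doing all the real work, but that is exactly the point being made: brute hours plus a judgment step, not a magical leap.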
I think people, and I’m not saying you necessarily, believe that humans have a special quality that lets them invent. Something that AI can’t learn. Which is a different claim from “hasn’t learned yet.” I think eternal human dominance is a comforting hypothesis, but one that doesn’t have any data to back it up. If humans are so good at inventing, why can’t they invent something that is just as good or way better at invention?
You seem to have conflated LLMs with all AI.
LLMs are not PINNs and are not GANs.
LLMs will not easily progress from learning human knowledge to venturing beyond the frontier of human knowledge.
That’s just not how LLMs work.
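Since the LLM-vs-PINN distinction carries a lot of the argument, here is a toy illustration of what "physics-informed" means: the loss penalizes violating a differential equation, not mispredicting the next token. This is a sketch, not a real PINN — real PINNs use automatic differentiation and proper optimizers, whereas this dependency-light version uses a tiny tanh network and crude numerical parameter gradients. Target physics: u′(x) = −u(x) with u(0) = 1 (true solution e^−x).

```python
import numpy as np

rng = np.random.default_rng(0)
H = 8                                          # hidden units
theta = rng.normal(0.0, 0.5, size=3 * H + 1)   # w1, b1, w2 (H each) + b2

def unpack(p):
    return p[:H], p[H:2 * H], p[2 * H:3 * H], p[3 * H]

def u(p, x):
    """Tiny 1-H-1 tanh network evaluated at points x."""
    w1, b1, w2, b2 = unpack(p)
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def du_dx(p, x):
    """Analytic derivative of the network output w.r.t. its input x."""
    w1, b1, w2, b2 = unpack(p)
    t = np.tanh(np.outer(x, w1) + b1)
    return ((1.0 - t ** 2) * w1) @ w2

xs = np.linspace(0.0, 2.0, 20)                 # collocation points

def loss(p):
    # The "physics-informed" part: penalize the ODE residual u' + u,
    # plus a penalty for missing the boundary condition u(0) = 1.
    residual = du_dx(p, xs) + u(p, xs)
    boundary = u(p, np.array([0.0]))[0] - 1.0
    return np.mean(residual ** 2) + boundary ** 2

def train(p, iters=1500, lr=0.05, eps=1e-5):
    p = p.copy()
    for _ in range(iters):
        grad = np.zeros_like(p)
        for i in range(p.size):                # crude numerical gradient
            d = np.zeros_like(p)
            d[i] = eps
            grad[i] = (loss(p + d) - loss(p - d)) / (2 * eps)
        p -= lr * grad
    return p

initial = loss(theta)
theta = train(theta)
final = loss(theta)
```

The training signal comes entirely from the equation itself, with no human-written corpus anywhere, which is exactly why a PINN is a different kind of learner from an LLM.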
We can, but LLMs ain’t it.
The kids actually turned up; they found some of it quite funny but contributed, and our fund is setting up this week and will start operating next week. 😂
Invent something that is patentable right now… to prove your point.
Huh, that’s strange. You should be able to follow more people.
Has anyone tried using nostr with a fresh new account recently?
If not, when was the last time you did that?
Everyone should do it once in a while. If you want nostr to grow that is.
There is a price below which this makes sense, and a price above which it does not.
In a modern hyperinflation scenario, cash would disappear very quickly.
I imagine 98% of the data you serve is stuff that was uploaded in the past 48 hours?
Also I don’t think social media persistence has huge value to the user? It’s mostly downside.
Data out spikes might be headless clients or some kind of automated scraper that someone is using? Exposure to this is also a vulnerability you have.
Having something as sensitive as your inner ear get infected is an extremely bad day.
Heat doesn’t help the inflammation either.
I’ve made the point previously on nostr that AI will rapidly advance within the domain of human knowledge, but will struggle to advance at a comparable rate beyond the frontier of what humans already know, i.e. no runaway singularity just yet.
Lots of people think I am completely wrong on this point, so I have asked an expert on this matter.
*****
⚡️🌱🌙
Is it more difficult, in a non-linear way, for LLMs to undertake innovation, e.g. inventing entirely new technologies for specific problems?
*****
GPT-4
Yes, it is generally more challenging for LLMs to undertake innovation or invent entirely new technologies for specific problems. The primary reason is that LLMs, like other AI models, learn patterns, associations, and knowledge from their training data, which is composed of existing human-generated content. Their primary strength lies in identifying and extrapolating patterns from this data.
Inventing entirely new technologies or generating groundbreaking ideas requires a level of creativity and abstract reasoning that goes beyond recognizing patterns in existing data. Although LLMs can generate novel combinations of existing ideas, these are still constrained by the knowledge and concepts present in the training data.
Human creativity, on the other hand, often involves combining ideas from various domains, generating unique insights, and making intuitive leaps that cannot be directly traced back to specific pieces of information. This kind of creative problem-solving is currently beyond the scope of LLMs and AI models in general.
That being said, AI models can still be valuable tools in the innovation process. They can assist researchers, engineers, and inventors in generating new ideas by providing relevant information, analyzing data, and suggesting possible combinations of existing concepts. However, the responsibility for genuine innovation and creative problem-solving still primarily lies with humans.
I’m coining the word of 2023 right here on nostr…
That word is… chaitting.
Another Sunday comes and goes.
Has everyone had a productive week?
Anyone had a good chance to play with the ChatGPT plug-ins yet? Any good starting recommendations?
I think CO2 is up, but deforestation of continents is a bigger factor than industrial emissions.
Cumulative CO2 in the atmosphere is a tiny delta between two huge numbers that represent sources and sinks of CO2. Natural sources and sinks are vastly larger in scale than anthropogenic sources and sinks.
Our damage of natural CO2 sinks (much of which predates the scientific method) is a far bigger contributor to cumulative CO2 than anthropogenic emissions.
That’s the first big misconception in the climate debate; the other big misconception is the scale of the consequences.
Why are people still exaggerating the consequences long after the argument is won? It makes no sense; all it does now is lower the credibility of the theory.