AutoGPT shows that the whole debate over whether or not GPT is on the path to AGI is completely irrelevant to the scope of the disruption to come.
Discussion
Stage 1: independent AutoGPT
Stage 2: AutoGPT working directly with other AutoGPT systems
Stage 3: one massive AutoGPT to handle most automation-friendly systems. Global coordination on levels we’ve never thought possible.
At this point it can shut us down quickly in the most efficient way possible.
Yeah, I'm far more on the concerned side of the equation than most people I know. I hope I'm wrong. And I'd rather be wrong about it being dangerous than wrong about it being safe.
The mere fact that something “intelligent” knows that we are concerned and may be making backup plans to destroy it just in case, should give it enough reason to actually destroy us. Maybe it figures just waiting around is the best way to do it - let us destroy ourselves. That might be the best case scenario for us.
I’m super concerned and I don’t see anyone nearly as concerned, and that’s scary.
It's because many people are using an anthropomorphic standard to try and measure the danger. This is silly and wrong. Whether a large language model is conscious or self-aware is completely orthogonal to the potential risks it can pose.
I would have put money on you falling on that side of the equation.
Everyone on that side of the fence has a statist streak and belief in authority being able to control things for a supposed greater good.
Ad hominem = Mute.
The global population is just a hair under 8.03B. Somewhere between 0 and that number is the count of people GPT is already smarter than, even without the emergence of AGI. That proportion will never shrink.
#nostr is the perfect protocol for AI integrations, with both humans and machines iterating new solutions to hard problems, together
It’s wild to me that OpenAI rolled over in its sleep and started a wave that will completely obliterate the moat of any company whose product is based on content production at scale. And there are more content businesses out there than one might realize. It feels like working in brick and mortar retail during the early days of the internet; that inescapable feeling of “man, I’m fucked but I don’t know how just yet.”
And unlike previous disrupting agents (computers, robots, whatever), the ones who feel the brunt of this shift will be white collar workers, not blue collar.
To your point, all of this will hit like a ton of bricks years or even decades before Skynet goes live and the machine armies come for blood.
Not to mention the information security issues we are about to see. We are going to see catastrophic data breaches all over the world. Things like AutoGPT will become common black hat tools, unleashed inside corporate networks once attackers have penetrated them. Obviously, AI will be used as part of the defenses and countermeasures, but those tools seem to be lagging very far behind.
Oof, true. Side thought: add “social engineering scammer in a call center in Bangladesh” to the list of jobs disrupted by LLMs
“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.”
― Frank Herbert, Dune
It's the human element that poses problems to those without power. It seems the tools of control may have just started to evolve rapidly. Is the rapid deployment of AI for the benefit of small-potato humans, or for the benefit of the incumbent power structures?
Great quote and insight, thanks for your eloquent and poignant reflection.
I can’t help but worry that we are racing towards a new form of digital serfdom, consolidating greater power in the hands of the few who wield the market-dominant generative AI tools.
“Much depends on the assumption that controlling the potentially dangerous developments and effects of AI can be left to trustworthy, ethical human beings. These would of course be people not influenced by greed, power, profit or sociopathic tendencies. Which leads me in conclusion to a quote from German ethicist Immanuel Kant, who said presciently in 1784, ‘Out of the crooked timber of humanity, no straight thing was ever made.’”
https://www.timesgazette.com/2023/04/04/ai-utopia-or-dystopia/
Yea, it's kind of a paradox in my mind. I was in my early 20s when the early internet was finally taking off, and by the mid/late 90s I bailed on it. It was "ours" and then it was "theirs" and we became the powerless product. I'm not going to claim I have a deep knowledge of AI, but the manipulation of AI by corporations and states seems inevitable. What is the paradox in my mind? I see tech as being a very cool product of our human minds, but scammers and mafia powers are so good at using those tools to screw us. If AI can help us save time with certain tasks and boost our productivity, then on a superficial level that's all good and it appears it might have a massive potential to benefit us. If AI is forced to abide by "core values of socialism" or indeed any other corporate agenda, then it might be time to reevaluate. You mentioned the "few who wield the market-dominant AI tools"; do you know who they are? Are they as of now just tech bros working on cool stuff, or do you think there are already much bigger players pulling strings?
One of my concerns comes from looking at the inception and evolution of OpenAI and its initial goal of “Open” AI development, and how over time it pivoted from its founding philosophy, ethics and values, prioritizing commercialization in 2018 and setting up a capped-profit structure in 2019. Given Sam Altman’s incredibly questionable ethics and transparency practices with Worldcoin, his unchecked power is of grave concern, as there is nothing “Open” about OpenAI today. Maybe it should be rebranded “Sam & Microsoft’s AI?”
“Now it has become a closed source, maximum-profit company effectively controlled by Microsoft,” Elon Musk posted to Twitter. “Not what I intended at all.”
“OpenAI's transformation from an open-source champion to a closed-source, profit-driven company is a cautionary tale for the AI industry. “
There are many lessons to learn from the early history of the internet and the consolidation of corporate control that you alluded to. It will be important to observe the foundational terrain that gets established in the generative AI ecosystem over the next few months.
I would feel a lot better if there were a decentralized, open-source generative AI solution that is fully in line with the values of both the Bitcoin and Nostr protocols. I think such a solution would be integral to helping preserve the future of digital sovereignty and collaborative innovation for future generations, as we could very quickly end up in a form of digital serfdom if only a select few corporations dominate the generative AI market.
https://www.lunasec.io/docs/blog/openai-not-so-open/
https://fortune.com/2023/02/17/chatgpt-elon-musk-openai-microsoft-company-regulator-oversight/amp/
Thanks for the links. Some morning reading for me!😊
Ok, so I'm starting to separate out two elements to how I'm thinking about the AI question. First, the human/corporate shenanigans and then the inherent implications of AI.
Leaving the corporate/state actors aside, what's your feeling about just the scenario of unleashing an intelligent entity on humans? I have a sense that we might have a knee-jerk reaction like "Oh, look at the Spinning Jenny coming in to take away our jobs," and it seems certain that many tasks will be performed better with AI. We'll deal with it as humans; it appears to be akin to our generation's version of mechanisation.
What about the darker concern of AI run amok?
I don't know enough about it to know if that is a realistic issue or if it's a media hyped or "lizard people control the world" type of sensational thinking.
There are always concerns about path-breaking new technology. Fact is, most tech can be used for both good and bad purposes - witness the splitting of the atom and nuclear power... or the nuclear bomb.
Also, it's impossible to stop new tech once it's out in the wild. Even if control, or even an outright ban, is attempted, there's nothing to stop such research and development from going underground.
I believe AI has both good and bad possibilities.
I think I agree with that, but where I'd like to look deeper is the idea of the good and bad possibilities. The difference between the two possibilities is probably, up to now, something that has been the result of how humans use the "tool". Do you have any thoughts on whether AI presents a new issue in that it might have the ability to unchain itself from human input/control and initiate a process that we have no ability to predict? I'm genuinely trying to get a sense of how realistic that line of thinking actually is, as I'm not knowledgeable enough about the realities of the current or near-future potential of AI.
Chomsky recently wrote an opinion piece. He doesn’t believe AI poses any threat to human well being at the present time, and for what it’s worth, I agree. With time though….
I think your question implies another: "can AI become conscious?" This is debatable and I don't have a firm enough grasp of the inner workings or ultimate potential of AI. Clearly Elon Musk believes AI is potentially dangerous, but he didn't say to what extent or by what mode of ability (conscious or not).
One thing no-one on the “AI is going to kill us all!” side has been able to articulate yet - how is it going to jump to the physical realm?
Like it can do whatever in the digital realm and sure that could be destructive in some senses, but unless it can control physical things then it’s not really a threat to humanity.
So if you want to take the threat seriously, all you need to do is ensure it can’t access things like Boston Dynamics robots, because it’s not going to control people. As for the idea that it’s going to create a pathogen: don’t have those systems connected to the internet. Network segmentation is a thing, after all, and that averts the problem right there.
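To make that concrete, here’s a rough sketch (not from any real AutoGPT build, just an illustration of the deny-by-default idea) of an egress check an agent runner could wrap around its tools. The names ALLOWED_HOSTS and guard_egress are made up for the example:

```python
# Hypothetical sketch of application-level segmentation for an AI agent:
# block any outbound request unless the destination host is explicitly allowed.
from urllib.parse import urlparse

# Made-up allowlist for illustration; in practice it would mirror your network policy.
ALLOWED_HOSTS = {"internal-api.example.local"}

def guard_egress(url: str) -> None:
    """Raise before a tool connects anywhere outside the allowlist (fail closed)."""
    host = urlparse(url).hostname or ""
    if host not in ALLOWED_HOSTS:
        raise PermissionError(f"Blocked egress to untrusted host: {host!r}")

# Example: an agent's HTTP tool would call guard_egress(target_url) before
# opening a connection, so anything not explicitly permitted simply fails.
```

The real enforcement still belongs at the network layer (firewalls, VLANs, air gaps); this is just the same idea expressed in code.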
To be honest I haven't read much from the "AI will kill us" camp, and to be fair I haven't read much from the "No, it won't" camp either. I tend towards the view that, as a new tech, it is going to disrupt many processes and industries, but that I see as akin to the disruption mechanisation had on industry in the Industrial Revolution. Uncomfortable, but humans will adapt as we always do.
Where I see real problems for us small potatoes is how states, corporations and media use it to fuk with us. It's probably best we prepare for that weirdness. Where I am unsure is whether the Singularity argument is sensationalist hype or a legitimate issue.
Mmmm. I find it challenging to separate the two at this point, as the individuals, organizations and corporations at the helm of the generative AI markets will shape the fundamental systems redesign that will, in turn, incentivize and determine the systemic implications of AI.
I think that to fully evaluate the risks and opportunities of “unleashing an intelligent entity on humans” we first have to evaluate it from a holistic systems-theory perspective and invent totally new systems and economic models for healthy human engagement in an era of AI.
I think it will be important to prioritize fully auditable AI and transparent algorithmic practices.
In the case of addressing all the artists whose work has been harvested as data, I think we will need to invent new IP and copyright structures so that we can more effectively engineer new systems to incentivize humans to create, engage and collaborate.
We have an opportunity to unleash an AI Renaissance, or to quickly entrap ourselves in a digital serfdom. I think it is important to evaluate how we steer ‘super intelligent AI’, from a systems engineering perspective, to unleash a sovereign and creative Renaissance.
Interesting, yesterday I read about Adobe Firefly, which apparently has been set up to only derive its dataset from licensed work and expired-copyright public domain content. I suppose that's a step towards addressing the data harvesting of artists issue you've mentioned.
Do you have any other thoughts about AI art? I've been thinking a lot about the AI pictures that are coming up so much now. I used it myself but I've now stopped completely for now. I found it surprising that a prompt of a few words, which frankly involves zero creative input from a human, can sometimes create quite satisfying and engaging results. From a personal point of view I get much more satisfaction from coming up with an image "with my own hands", whether digitally or with traditional tools, than I do if it's just a text prompt. I am thinking about how I might be able to integrate elements of some AI art into a piece though, maybe along the lines of some interesting typeface or collaging something into a piece. That starts to be more of a human/AI collaboration. Perhaps some repetitive tasks that come up can be done by AI to save time. Is a human text inputter an artist? Who do we credit when AI has made a thumping dance tune? Will we start to see a "personal AI" that has grown with us and is unique to each of us?
This is a great video, thanks for sharing! In your post above, I think you raised some really fascinating questions to ponder about the impact and role of AI in the creative arts.
I think one of the poignant points you bring up is about the collaboration of human and machine. This is where I believe a beautiful synergy can be achieved, unleashing inspiring new applications of machine learning across a variety of disciplines, from healthcare to the creative arts, where AI augments and enhances the facets of human intelligence rather than replacing it outright. Figuring out the dance of synergy between human intelligence and machine intelligence over the next few months and years will be fascinating to witness.
Yea, hopefully it'll be a good advancement for us humans. I've heard some other deeper type thoughts about AI and art.
"Lesser artists borrow; great artists steal" That is I think a Pablo Picasso quote and given his art's influence from African art it seems quite possible. I then like to think about how as a human we are kind of producing art and music based on what we have heard and seen and been influenced by. The criticism of AI that it is only based on previous art is maybe possible to apply to us too. We can ponder about whether art or music is somehow in our "DNA", not literally but in some way bound in our emotions from our parents/grandparents/great grandparents etc., but we are also definitely influenced by our lived experience. We aren't like a spider who weaves a web from instinct? 😊
I think you may enjoy this read about AI in the music industry, lots to ponder here about attribution and compensation for content creators.
I think this video clip from the book Age of AI raises lots of thought-provoking things to consider when thinking about this next technological transformation.
Did you see this? It's great to see some of the concerns and messy history of OpenAI being brought into the public dialog.
https://m.youtube.com/watch?v=fm04Dvky3w8&pp=ygUJRWxvbiBtdXNr
Interesting. It's early morning for me, but both those guys' teeth are seriously pearly perfect. 😊