Ok this was awesome 👏 First I tried to imagine someone on the other side of the world flipping a coin: what time of day it might be, outdoors or inside, what kind of coin, etc. And then you sent a video lol 😂
To activate plugins, create two folders: 1. plugins-available and 2. plugins-enabled. Then add symbolic links in your plugins-enabled folder for the plugins you want to run. Finally, write your config and point it at the enabled folder, like this: plugin-dir=/home/lightning/.lightning/plugins-enabled
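If it helps, here's a minimal Python sketch of those three steps. The plugin name summary.py is just a placeholder, and appending to the config assumes plugin-dir isn't already set elsewhere:

```python
from pathlib import Path

# Base directory from the post above -- adjust to your node's home.
base = Path("/home/lightning/.lightning")
available = base / "plugins-available"
enabled = base / "plugins-enabled"

# 1. Create both folders.
available.mkdir(parents=True, exist_ok=True)
enabled.mkdir(parents=True, exist_ok=True)

# 2. Symlink only the plugins you want to run.
for name in ["summary.py"]:  # hypothetical plugin name, purely illustrative
    link = enabled / name
    if not link.exists():
        link.symlink_to(available / name)

# 3. Point the config at the plugins-enabled folder.
with (base / "config").open("a") as f:
    f.write("plugin-dir=/home/lightning/.lightning/plugins-enabled\n")
```

The nice part of the available/enabled split is that enabling or disabling a plugin is just adding or removing a symlink; the plugin files themselves never move.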
It’s a mini script, and there’s a guide by the folks from RaspiBlitz if I remember correctly. Check it out on GitHub 🤝
Dope cover, but I’ll keep running the ordinals disrespector patch by Luke Jr.
IMO it is getting harder every day for OpenAI to sell a subscription-based model, unless they can somehow stay ahead of the competition by developing better GPT features
Have you heard of Baby GPT yet? Perhaps OpenAI has been dethroned already. Since users of GPT-3 have started training their own versions of GPT, there is no way of controlling the distribution of AI.
A story as old as time haha 😂
Here’s the link:
#[2] on the WBD podcast with Peter M.
Thanks for breaking down the threat of AGI nostr:note1t8fh42fyfahw5e3nfsp4ap2tmd829cgagwnseq5kfq5c485my39q9q0gpa
I’d like to expand on point 2. Please imagine the following: Instead of the widespread availability of GPT, there is an AGI that is spreading rapidly, in a similar manner. Now someone gives their AGI only one rule to obey, which is “survive no matter what”. Since this AGI wants to survive, it notices that a cohort of humans is trying to stop it from spreading and eliminate it.
Question 1. Do you think there is a way to mitigate the risk of misuse of AGI? Question 2. If the AGI wants to survive no matter what, will it cause harm to humans? Please give short and concise answers to both questions. Thanks