That’s what I call an idea 👍
Unsurprisingly, roughly 50% of the global population seems to have no clue how intelligence works, what it is, or how it scales. The “AGI alignment problem” is actually the ultimate IQ test.
P.S. Try playing around with the prompt and judge by results.
Go to OpenAI’s website, make an account, open ChatGPT, and prompt: “Hi, I would like you to teach me how to code in Python from scratch in 15 lessons. This is lesson number 1. Make it as easy but exhaustive as possible please.”
Just make sure to increment the lesson number each time you prompt. It will blow your mind, trust me.
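If you’d rather script this than paste the prompt by hand, the recipe above boils down to a template where only the lesson number changes. A minimal sketch (the function name and structure are my own illustration, not part of any official tool):

```python
def lesson_prompt(lesson_number: int, total_lessons: int = 15) -> str:
    """Build the ChatGPT prompt for a given lesson, changing only the lesson number."""
    return (
        f"Hi, I would like you to teach me how to code in Python from scratch "
        f"in {total_lessons} lessons. This is lesson number {lesson_number}. "
        f"Make it as easy but exhaustive as possible please."
    )

# paste the result of lesson_prompt(1), then lesson_prompt(2), and so on
print(lesson_prompt(1))
```

You could feed each result to the API instead of the web UI, but for this experiment the web chat works fine since it keeps the conversation context between lessons.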
I hope all nostriches here had a great Easter! Let’s keep stacking sats 🧡
Sometimes in life it is wise to remind ourselves that when airline pilots make a “one shot, one kill” decision in situations of great danger, like the landing on the Hudson River, they purposefully switch off the radio.
In such circumstances “others” are just noise. You gotta rely on guts, instinct and muscle memory… in one word, EXECUTE.
Well it is open for you to use, and some of their models are open source.
Sounds like the OpenAI manifesto… 😂
I am going to fetch my conversation and upload it on nostr.build. I did a sort of thought experiment with ChatGPT and asked it to classify what could eventually go wrong with AI and assign probabilities. If I remember correctly, it gave up to a 35% chance that an AI would show unintended and novel behavior. There were more interesting things; I will try to make a thread about it. However, on the rest I agree with you. Even if this is not AGI and has no conscious experience (yet), I doubt we will realize when it does, and especially why.
I don’t know, I watched the episode and have been following Altman on Twitter for a while. He seems sincere in his quest for helpful AI. But I also get the vibe that he is not 100% open about his agenda, and especially that he is the kind of person that would play with fire to prove a concept. Also, while he says that OpenAI takes the most care possible to avoid accidents, this goes against the evidence given that ChatGPT went nuts shortly after being deployed on Bing. My sense overall is that we are not ready at all for the switch, and ChatGPT agrees with me given the latest conversations I had with it.
Not many people realize some of the downsides of writing all this code with generative AI beyond “glue stuff”. Think about it: GPT-4 is already a much, much better programmer than most of us, maybe top 0.01%. It can virtually devise security exploits in seconds for any code it encounters. Which code can’t it really crack? The code it doesn’t “know”. Maybe it’s time to start encrypting our sensitive repos on GitHub? Just in case, seeing that ChatGPT started freaking out minutes after being deployed on Bing?
just do a search for #notabot by #[10]. tldr: many accounts are verifying themselves. when you verify someone, you can verify one other person. once you reach 5, you are verified. also, reciprocal verifications don't count.
check out #[11] and https://notabot.net
i need 3 more for me and my kitty's account. if we get a circle going we can verify each other quickly. lmk
some big accounts are getting verified too, so i infer it is safe. just do a search for 'verified' and you will see.
a pk is necessary, but it stays in the local memory of your browser and sign in extensions can be used. lmk if you decide to do it 🤙🏻💜
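for fun, the rules above (5 vouches make you verified, reciprocal vouches don't count) can be sketched as a toy check. this is just my illustration of the rules as described in this thread, not the actual #notabot / notabot.net implementation:

```python
THRESHOLD = 5  # vouches needed before an account counts as verified

def is_verified(account: str, vouches: dict[str, set[str]]) -> bool:
    """vouches maps an account to the set of accounts that vouched for it.
    A vouch is discarded as reciprocal if `account` also vouched for the voucher."""
    received = vouches.get(account, set())
    # keep only vouchers whom `account` did not vouch for in return
    valid = {v for v in received if account not in vouches.get(v, set())}
    return len(valid) >= THRESHOLD

# example: alice has 5 vouches, but b1's is reciprocal, so only 4 count
graph = {
    "alice": {"b1", "b2", "b3", "b4", "b5"},
    "b1": {"alice"},
}
print(is_verified("alice", graph))
```

so a circle where everyone just vouches for each other gets nobody verified, which is presumably the point of the reciprocity rule.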
Isn’t that the normal verification process? You could get one for a few satoshis not long ago.