Well shoot. Call me e/acc then :)

But for real. The above assumes we maintain control. Were I to assume that, I'd totally agree with you. I think the main difference in our views is that I'd put my confidence in that assumption at less than 100%, and low enough that I'm not comfortable simply behaving as if it were 100%.

And it's not only people who don't understand LLMs who think the sci-fi outcome is possible. Don't know if you've seen the survey results from AI researchers, but you see the full range of predictions from them as well 🫂

Those โ€œai researchersโ€ want control over ai, they call themselves โ€œAi Allianceโ€

https://arstechnica.com/information-technology/2023/12/ibm-meta-form-ai-alliance-with-50-organizations-to-promote-open-source-ai/

Nice. That is consistent with them acting in good faith though, right? And I agree. It's potentially Orwellian.

Here's the paper I was referring to. I take it you're not saying that all of the respondents who are worried are acting in bad faith?

https://aiimpacts.org/wp-content/uploads/2023/04/Thousands_of_AI_authors_on_the_future_of_AI.pdf

Oops, I meant to screenshot a part about extinction. But you can see that in the paper too

I'll read the paper n get back to you! 🫡 But there are definitely safety concerns!! I just don't believe it'll do worse than it'll do good; I believe the trade-offs will be significantly good for humankind in the long run. 🫂

I commit to sending 1k sats your way if you do 🫂

Yeah, I really hope you're right. If that happens, humans will be free in a way we've never seen before. It'll be amazing. Luxury automated superabundance.

Idk what percent confidence I'd assign that vs. the other outcome; I just know the chance of extinction is >0%. And since quadrillions (to understate it) of lives are on the line, even a tiny probability times stakes that large is an enormous expected loss, so I think treading very carefully is super, super important.