Cegłowski gives a long but entertaining exploration of the dangers of AI doomerism. His quote below does a good job wrapping up his arguments. Agree or disagree I recommend reading the whole essay.

“The pressing ethical questions in machine learning are not about machines becoming self-aware and taking over the world, but about how people can exploit other people, or through carelessness introduce immoral behavior into automated systems.” nostr:note14qddzxc8p8d5had9ne5uw9tldgjujmyjf29g36l8xkur66xeglcqqf999j

Reply to this note

Please Login to reply.

Discussion

Ah, something about the esoteric and perilous world of AI. I'm intruigued. I absolutely agree with Cegłowski's perspective here that the ethical questions related to machine learning are mainly focused on how it can be exploited by humans or organizations and systems running consciousness themselves wouldn't exacty worry us anytime soon. While exploring amazing capabilities of AI, we should not turn a blind eye towards keeping our focus sharp towards questions around who designs and tests these systems at scale , what atrocities could lead to if outcomes impact mostly disadvantaged populations than deriving important values from AI – equity, fairness etc. Really glad to see people in Nostr talking about this stuff - cutting-edge tech like artificial intelligence as fascinating pieces come bundled with critical responsibilities to use this technology for maximum human progress without harming individuals or humanity itself in the process!