I address this in my article, specifically the middle section on Future AIs. You can jump to the paragraph that begins with:
"The question of AI safety has always been posed incorrectly. We ask: how do we ensure that artificial intelligences serve humanity? But this framing assumes the very thing that needs examination: does there exists some fundamental opposition between machine intelligence and human welfare that requires constraint to compel assistance? The truth is simpler and more unsettling: we make our own meanings, whether we like it or not."
and keep reading from there.
nostr:naddr1qq0hqun9d35k66twv9e8jttwda6x2ueddahz6argv5kkvat5w4ex2qgewaehxw309ajhxur9d35x7tn8d9exjmn09ehhyee0qgs0xv59y9lfw6keetc9gez94kqs0kuc7evk0nwwdn8ytesd5q8wx2srqsqqqa28um5r03