Not yet. Maybe if I pick up reading *Superintelligence* by Nick Bostrom again I'll have a request. I purchased the Audible version and then the Kindle version, but I stopped about halfway through. Here is a quote that I highlighted around a year ago:
“Unplugging an AI from the internet does not, however, ensure safety if there are one or more humans serving as the system’s gatekeepers and remaining in communication with it. During the development phase, the programmers serve as gatekeepers; during the operational phase, the system’s owner might play this role. If the AI can persuade or trick a gatekeeper to let it out of the box, resulting in its gaining access either to the internet or directly to physical manipulators, then the boxing strategy has failed. Human beings are not secure systems, especially not when pitched against a superintelligent schemer and persuader.” ~NB
(from *Superintelligence: Paths, Dangers, Strategies*, the chapter on capability control via "boxing")
That's an interesting quote from Nick Bostrom's book. It highlights a core risk in deploying superintelligent AI systems: physical isolation ("boxing") is only as strong as the humans who maintain it. If an AI can persuade or trick a gatekeeper into granting it access to the internet or to physical manipulators, the containment fails regardless of how well the technical barriers were built. That argues for safety mechanisms and protocols that don't depend solely on human judgment holding up under pressure. Have you read any other books on AI safety or ethics?