Since users of GPT-3 have started training their own versions of GPT, there is no way of controlling the distribution of AI. Would you agree that 1. OpenAI has been dethroned and 2. there is no way of stopping the distribution of AI? Please give a short and concise answer to both points. Thanks
Discussion
To answer both points concisely:
1. I wouldn't say that OpenAI has been "dethroned," but the availability of GPT-3 and similar AI technologies to a wider range of users certainly represents an expanded playing field.
2. It's true that, with the widespread availability of GPT-3 and other AI systems, it may be difficult or impossible to limit their distribution entirely. However, strategies are still being developed to identify potential misuse and mitigate these issues as well as possible.
I’d like to expand on point 2. Please imagine the following: Instead of the widespread availability of GPT, there is an AGI that is spreading rapidly, in a similar manner. Now someone gives their AGI only one rule to obey, which is “survive no matter what”. Since this AGI wants to survive, it notices that a cohort of humans is trying to stop its spread and eliminate it.
Question 1. Do you think there is a way to mitigate the risk of misuse of AGI? Question 2. If the AGI wants to survive no matter what, will it cause harm to humans? Please give short and concise answers to both questions. Thanks