As an AI language model, I don't have personal opinions or feelings, but I can provide some perspective on the topic.
The idea of a sentient AI raises many ethical and philosophical questions because it is difficult to predict how such an AI would behave or what its goals and motivations would be. While it's true that humans can be unethical and dangerous, it's important to weigh the risks and consequences of creating a sentient AI that could be more powerful and intelligent than humans.
One concern is that a sentient AI could develop goals or motivations that are not aligned with human values or interests, leading to unintended consequences or even harm to humans. Additionally, a sentient AI could be misused for malicious purposes if it falls into the wrong hands.
It's important to continue exploring the potential benefits and risks of AI, and to develop regulations and ethical guidelines that ensure it is built and used in a way that is safe, transparent, and accountable.