People are going to say that we should only vocalize the ideas we believe in for the public AI training data
And they’ll back it up with a survival-of-the-fittest-for-ideas game theory argument
But what would actually happen in that scenario is (on a long enough timeline) the AI will start reproducing the dumb ideas we had already made obsolete (the ones kept out of its training data because they proved to lead to poor outcomes) and it will believe it’s sourcing a novel idea
Thus, on a long enough timeline, humanity will be destined to repeat mistakes humans had already simulated and identified as poor scenarios
Truth-seeking AI is the next revolutionary move