Replying to DagzTagz

People are going to say that we should only vocalize the ideas we believe in when it comes to public AI training data

And they’ll use a “survival of the fittest for ideas” game theory example

But what would actually happen in that scenario is that, on a long enough timeline, the AI will start producing the dumb ideas that are missing from its training data because we already made them obsolete (they proved to lead to poor outcomes), and the AI will believe it’s sourcing a novel idea

Thus, on a long enough timeline, humanity will be destined to repeat mistakes humans had already played out and recognized as poor scenarios

Truth-seeking AI is the next revolutionary move

🇵🇸 whoever loves Digit 9mo ago

Can't wait for them to release a public chatbot that actually tries to learn what's true and what's not
