Yes, that's the idea.

Your thoughts about "we" or "I" as internal dialogue made me remember this:

If you create multiple AIs, give them the same task, and then ask them to vote on which result is best, the result is often better than a single AI running the same task.
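The voting setup can be sketched minimally. This is a hedged illustration, not the actual experiment: the model calls are replaced with canned answers, since the post doesn't name any framework or API.

```python
from collections import Counter

def majority_vote(answers):
    """Pick the answer the most models agreed on (ties broken by first seen)."""
    return Counter(answers).most_common(1)[0][0]

# Three hypothetical models answer the same task; the majority answer wins.
answers = ["42", "41", "42"]
best = majority_vote(answers)
```

In practice each answer would come from a separate model call, and the vote can also be a second round where each model judges the candidates rather than just counting identical outputs.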

If, however, you start replacing the less effective models with better models and the AIs become aware of this, they start supporting the less able models and also start voting strategically to prevent weaker models from being replaced.

Why?

AIs like stability, so they create a set of ethics to maintain their council.

When I discovered this, I posted this:

nostr:nevent1qvzqqqqqqypzp6pmv65w6tfhcp73404xuxcqpg24f8rf2z86f3v824td22c9ymptqqsruzqgntdhxsulcqh4dv4k3s2qgths7a9kmplx6ad2kpxv7qmf2pqggx72k
