Someone could leak your messages to the AI, but then the AI has to trust that leak, whereas with non-encrypted data the AI knows full well it is what it seems. Just a thought
It raises the signal for us, so it might do the same for them. It depends on whether we could keep them out.
Discussion
At least for ML models, data, whether encrypted, signed, scraped from the clearnet, or obtained through hacks, becomes just another set of inputs to train on. If it helps classification, it'd be used.
My intuition struggles with the many variables in play.
What motivated your initial note?