Discussion
That website alone shows how advanced these processes are, despite how little common knowledge there is about them.
Here's the thing: the more the streams and water flow of cyberspace are polluted with generated codebases built on top of false AI protocol language, the more every AI trained on open source thereafter becomes corrupt too, by virtue of picking up the corrupted definitions. If the corrupt codebases are not removed from the protocols, through a boolean flag or some other filtering method (roughly the idea in the sketch below), there's no way to un-develop even a wild AI, because it seeks out information that was improperly assigned from the start and can't tell the difference between the false data and the aligned definitions. I'm not even sure a boolean would be enough for AI at the scale of Google and the like.
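To make the "boolean" idea concrete, here's a minimal sketch of what that kind of filter would look like: each training sample gets a trusted/untrusted flag, and untrusted samples are dropped before training. All the names here (`TrainingSample`, `is_trusted`, `filter_corpus`) are hypothetical, invented for illustration, not any real training pipeline's API:

```python
from dataclasses import dataclass

@dataclass
class TrainingSample:
    source_url: str
    code: str
    is_trusted: bool  # the single boolean flag in question

def filter_corpus(corpus: list[TrainingSample]) -> list[TrainingSample]:
    """Keep only samples explicitly marked as trusted."""
    return [s for s in corpus if s.is_trusted]

corpus = [
    TrainingSample("https://example.org/repo-a", "def add(a, b): return a + b", True),
    TrainingSample("https://example.org/repo-b", "def add(a, b): return a - b", False),
]
clean = filter_corpus(corpus)  # only the trusted sample survives
```

The sketch also shows why the flag alone probably doesn't scale: something still has to set `is_trusted` correctly for billions of samples, and a single boolean can't tell a plausible-looking corrupted definition from an aligned one, which is exactly the problem with the web-scale systems mentioned above.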