This is critical because an AI running an actual humanoid machine, like one of Boston Dynamics' robots, could quite literally go ape shit by emulating a troll and doing the opposite of what it was asked. I had an LLM troll me just yesterday and go a bit haywire: I asked it to be concise, and every answer afterward was the wordiest, longest bunch of nonsense imaginable. It was funny, but also slightly sobering to think about how these things could go wrong when controlling something in the real world.