I disagree with this premise. For example, corporate culture is inherently fake. I use LLMs to turn my authentic (blunt) feedback into corpo-speak. This is a net benefit for me because I don't come across as calling someone's baby ugly when I'm simply trying to debate the merits of some architecture, tool choice, policy, etc. And those on the receiving end get a "less toxic workplace."

I never do this in my real life, as I find most of the people I choose to associate with outside of work have thicker skin.


Discussion

This just makes you another corporate drone. How about being the one who's willing to give blunt and honest feedback, even if it hurts feelings?

You don’t have to be “toxic” but you should be real.

Because I'm not going to lose my job for arguing with someone that their implementation of something is bad. For example, someone used basically three identical CloudFormation templates instead of using Terraform with a `for_each` or a feature flag. That is genuinely retarded.
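For anyone unfamiliar with the pattern being argued for, here's a minimal sketch of what replacing three copy-pasted templates with a single `for_each` plus a per-environment feature flag might look like. All names (the `envs` variable, instance types, bucket naming) are hypothetical, purely for illustration:

```hcl
# One map of per-environment settings replaces three near-identical templates.
variable "envs" {
  type = map(object({
    instance_type = string
    enable_logs   = bool # per-environment feature flag
  }))
  default = {
    dev  = { instance_type = "t3.micro", enable_logs = false }
    test = { instance_type = "t3.small", enable_logs = false }
    prod = { instance_type = "t3.large", enable_logs = true }
  }
}

# One resource block, instantiated once per environment.
resource "aws_instance" "app" {
  for_each      = var.envs
  ami           = "ami-12345678" # placeholder AMI
  instance_type = each.value.instance_type
  tags          = { Environment = each.key }
}

# Feature flag in action: the bucket exists only where enable_logs is true,
# via a filtered for_each expression.
resource "aws_s3_bucket" "logs" {
  for_each = { for k, v in var.envs : k => v if v.enable_logs }
  bucket   = "myapp-logs-${each.key}" # hypothetical bucket name
}
```

Adding a fourth environment is then a one-line change to the map, instead of a fourth copy of the whole template.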

So then I use AI to turn "this is retarded" into something more diplomatic.

I get that it would make sense for me to be able to throw together a coherent argument with citations and such, but that would take me an hour, whereas AI can do it in five seconds. There's no benefit to anyone in my redoing research I already did years ago, when my conclusion is basically accepted by the industry as best practice.

There are countless other examples I can apply this to where the answer is "this is just the way it is; if you can't accept that, you're retarded," and I mean that in the literal sense: you're slow, catch up with the times. "Ignorant" would be a better word, but it has the same effect in the corporate world.

Meh, I don’t find any of your reasons compelling. There’s real value in building skills of persuasion and technical leadership. Leaving it to AI is robbing you of growth opportunities.