I think there is a tremendous productivity boost to be had from using AI tools to write code, but I'd warn people about one thing: if your code contains cryptography that can affect user security, AI tools can of course help, but surprisingly they can also make the situation much worse. For example, I recently wrote some crude Python implementing a particular public-key cryptosystem, then asked GitHub Copilot for its opinion. It first said everything was good and that this was a secure implementation. Then I wondered whether I should change one thing, X, and asked about it. It said yes, actually, you need to change X. I then looked up several references online and realized that the change I had proposed was wrong.

It doesn't take much imagination to see that a bot that doesn't understand why something abstract is true or false, and that is skewed towards being "constructive" (not intentionally, of course, but it is), can make a situation of "implementing crypto badly" much, much worse, because it can end up reassuring you at the worst possible moment. Be careful out there.
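
To make the failure mode concrete: the note doesn't say which cryptosystem was involved or what X was, so here is a minimal sketch assuming textbook RSA with toy parameters as a stand-in. It round-trips correctly and superficially looks fine, which is exactly the situation where an AI reviewer's reassurance is most dangerous, because the code is still badly broken: it is deterministic and malleable.

```python
from math import gcd

# Hypothetical illustration: unpadded "textbook" RSA. It round-trips
# correctly, yet is insecure in ways a quick review (human or AI) can miss.

def textbook_rsa_keygen():
    p, q = 61, 53            # toy primes; real keys need ~2048-bit random primes
    n = p * q                # 3233
    phi = (p - 1) * (q - 1)  # 3120
    e = 17
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)      # modular inverse of e mod phi (Python 3.8+)
    return (n, e), (n, d)

def encrypt(pub, m):
    n, e = pub
    return pow(m, e, n)      # no padding, no randomness: this is the flaw

def decrypt(priv, c):
    n, d = priv
    return pow(c, d, n)

pub, priv = textbook_rsa_keygen()
c1, c2 = encrypt(pub, 42), encrypt(pub, 42)
assert decrypt(priv, c1) == 42   # round-trips correctly...
assert c1 == c2                  # ...but equal plaintexts give equal ciphertexts
# Malleability: multiplying ciphertexts multiplies plaintexts (42 * 2 = 84).
assert decrypt(priv, (c1 * encrypt(pub, 2)) % pub[0]) == 84
```

Every check here passes, so "everything is good, this is a secure implementation" is an easy answer to give and a hard one to catch. The real remedy is not a tweak: use a vetted library with randomized padding (e.g., OAEP in pyca/cryptography) and get review from someone who understands why the math holds.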

I expect the same comments apply to anything that is (a) security-critical and (b) very obscure. Of course, some AIs will be better than others, and all of this will change over time; I'm just saying, be really wary if you're working in this niche.

#cryptography

Discussion

AI doesn’t audit; it hallucinates. If it’s crypto-critical, verify with eyes, not vibes. lol.

Yes. It does depend on the field, though. It's tremendous with human languages in my experience, but in mathematics and similar fields it often drifts into outright hallucination. I know the more recent models have been addressing this somewhat, though I'm not sure of the details. Fundamentally, they don't "reason".

💯