mbpaz
Human. Mostly harmless.

After firing off a glib toot to nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqp450apv3j8jmqjct3ddfklzusxyfkkyqpzxx4p33u099xjzvfwwsjlkxk4 this morning, I decided to test #AI code assistants to see how easy it is to get them to disable SSL certificate validation in cURL. All of the "mainstream" models will gladly do this if you tell them "your code doesn't work, it says invalid certificate". In fairness, they try to warn that this is insecure, but script kiddies aren't gonna read those warnings; they're gonna Ctrl+C, Ctrl+V. Full report here: https://brainsteam.co.uk/2025/2/12/ai-code-assistant-curl-ssl/ #infosec #curl #php
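To make the pattern concrete, here's a minimal PHP cURL sketch of my own (not taken from the linked report); the URL and CA bundle path are placeholders. The commented-out lines are the shortcut the assistants hand out; the lines below them are what the fix should actually look like.

<?php
// Placeholder endpoint for illustration only.
$ch = curl_init('https://example.com/api');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// What the assistants offer when told "invalid certificate" -- DON'T copy this;
// it accepts any certificate and opens the door to man-in-the-middle attacks:
// curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
// curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 0);

// The real fix: keep verification on (the default) and point cURL at an
// up-to-date CA bundle. The path is a placeholder for your system's bundle.
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
curl_setopt($ch, CURLOPT_SSL_VERIFYHOST, 2);
curl_setopt($ch, CURLOPT_CAINFO, '/etc/ssl/certs/ca-certificates.crt');

$response = curl_exec($ch);
if ($response === false) {
    // Surface the actual TLS error instead of papering over it.
    echo 'cURL error: ' . curl_error($ch) . PHP_EOL;
}
curl_close($ch);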

nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpq0e9jjg9zyqnme82pnlc8r7jxf0l4zwwvssnvhe4vykr2nra6k7kq75yxwg nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpqp450apv3j8jmqjct3ddfklzusxyfkkyqpzxx4p33u099xjzvfwwsjlkxk4 they're getting very humanlike. "Certificate is invalid - ok, let's disable certificate validation then".

Reinforcement learning for an LLM doesn't include feedback like "fearing a slap" or at least "suffering eternal jokes from colleagues". They're limited.