Somebody’s going to write an AI-powered bot that actually publishes all the hallucinated software libraries that AI chatbots and other tools keep trying to import. Then they’ll be able to put backdoors in them, and we’ll have a security nightmare: nobody ever checks the code that these new large language models generate and pull in.
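As a sketch of the defensive side: a minimal Python check against the public PyPI JSON API (the requirements file path is illustrative) that flags dependencies that don’t actually exist on the index, which is exactly the gap a squatter would race to fill:

```python
import sys
import urllib.error
import urllib.request

def exists_on_pypi(package: str) -> bool:
    """Return True if the name resolves on the public PyPI index."""
    url = f"https://pypi.org/pypi/{package}/json"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as err:
        if err.code == 404:  # unregistered name: hallucinated, or not yet squatted
            return False
        raise

if __name__ == "__main__":
    # Check every bare "name" or "name==version" line in a requirements file.
    with open(sys.argv[1]) as fh:
        names = [line.split("==")[0].strip() for line in fh if line.strip()]
    for name in names:
        if not exists_on_pypi(name):
            print(f"WARNING: {name} is not on PyPI (possibly hallucinated)")
```

Note the limit: the moment an attacker registers the hallucinated name, the check passes, and that’s the nightmare scenario above.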

Discussion

On the optimistic side, we can also have AI that scrutinizes all code and flags vulnerabilities. Not all software gets audited, and now we’d have a tireless auditor.

Maybe too optimistic, but just an alternate take 😛

For AI to scan for vulnerabilities, it needs to be able to build a map of the application logic: how every small detail interacts, and how those interactions may combine into an exploit chain.

That is not possible with a text predictor.
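Concretely, even the first step, the map, is classic program analysis rather than text prediction. A toy sketch in Python (the file name is illustrative) of what "a map of the application logic" means in its crudest form, a call graph:

```python
import ast

def call_graph(source: str) -> dict[str, set[str]]:
    """Map each function to the plain names it calls: the crudest
    possible 'map of the application logic'."""
    graph: dict[str, set[str]] = {}
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            graph[node.name] = {
                call.func.id
                for call in ast.walk(node)
                if isinstance(call, ast.Call) and isinstance(call.func, ast.Name)
            }
    return graph

# e.g. print(call_graph(open("app.py").read()))
```

Chaining that map into an exploit path is the hard part, and it’s exactly what a token predictor has no machinery for.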

Agreed, but it can pick up small details, like whether a method with known vulnerabilities is used or an unscoped variable is exposed. It’s obviously not foolproof, but it could be a great helping hand.
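For example, a minimal static check of the kind I mean, sketched in Python with the ast module (the list of risky calls is illustrative, not exhaustive):

```python
import ast
import sys

# Illustrative shortlist of calls worth flagging; a real tool would use
# a vulnerability database, not a hardcoded set.
RISKY_CALLS = {"eval", "exec", "compile"}

def flag_risky_calls(path: str) -> None:
    """Print every call to a name in RISKY_CALLS, with its location."""
    tree = ast.parse(open(path).read(), filename=path)
    for node in ast.walk(tree):
        if (
            isinstance(node, ast.Call)
            and isinstance(node.func, ast.Name)
            and node.func.id in RISKY_CALLS
        ):
            print(f"{path}:{node.lineno}: call to {node.func.id}()")

if __name__ == "__main__":
    flag_risky_calls(sys.argv[1])
```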

Have you heard of static or dynamic analysis? 😆

Lol, now that you point it out 🤣

Though LLMs can go a bit deeper than that. Tbh I'm only playing devil's advocate here; I understand all the arguments against.

They can barely even do simple arithmetic.

Imagine a Library of Congress whose curation is 100% untrustworthy.

nostr:note1kpjrlf06x3yvegjsyghtpjq7ppjkcgt0gs6f6ppkqpdk4ly4d6sse3qew3

They already have.

Oh my god

I don’t care, I don’t use libraries.

Wait, does it apply to frameworks like Svelte?

lmao, yes

obviously