Currency of Distrust
Christian | Husband | Father | Professional hacker | Lover of freedom tech

It’s genuinely shocking how many people will defend the Federal Reserve without knowing a single fucking thing about how it works.

I had a crazy stressful week and I’m exhausted. I ended up doing some very intense exercise to try to manage it, but now I’m quite sore.

Good news is, I have a new fiat mine to work and it’s already a lot better. Things are a total disaster, but the people are cool, so I’ll take it 😅

I mean, it’s probably been 10 years since I watched it, so I wouldn’t be surprised if I didn’t enjoy it now lol

My BitAxe hit a best difficulty of 53M, so I guess you could say it's getting pretty serious 😏
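
In case anyone’s curious what that number actually means, here’s a toy sketch (simplified, ignoring all the stratum/pool details) of how a share’s difficulty gets scored: it’s just the difficulty-1 target divided by the numeric value of the header hash.

```python
# Toy sketch of how a miner scores a share's difficulty (simplified).
import hashlib

DIFF1_TARGET = 0xFFFF * 2**208  # Bitcoin's difficulty-1 target

def share_difficulty(header: bytes) -> float:
    # Bitcoin double-SHA256s the 80-byte block header
    digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
    # The hash is compared against the target as a little-endian integer
    return DIFF1_TARGET / int.from_bytes(digest, "little")

# Track the best (highest-difficulty) share seen, like a BitAxe does
best = 0.0
for nonce in range(100_000):
    header = b"\x00" * 76 + nonce.to_bytes(4, "little")  # dummy 80-byte header
    best = max(best, share_difficulty(header))

# A random hash is usually a tiny fraction of difficulty 1; it takes
# roughly 2^32 hashes to find a difficulty-1 share, which is why a
# BitAxe grinding away for weeks eventually stumbles on a 53M one.
print(f"best difficulty over 100k tries: {best:.8f}")
```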

I’ve worked in cybersecurity for 10 years, so I’m definitely familiar with LoL. I guess my point is that most hacking groups are financially motivated and pay very close attention to time/effort/money spent trying to breach a target vs payoff.

There’s just not a lot of reason for them to compromise an AI model that isn’t guaranteed to do what they expect when they could deploy malware via traditional means that are completely deterministic. This includes APTs, because most of them are trying to make money.

Nation-state groups where money isn’t the motivation are different, and maybe you’re right that they’d be the ones to carry this sort of thing out. But I’d still argue that, given the huge success they have with far simpler means, it’s likely not worth the lift.
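
To put toy numbers on that cost/benefit point (every figure here is completely made up, just to show the shape of the argument):

```python
# Back-of-envelope sketch of the attacker economics above.
# All numbers are invented purely for illustration.
def expected_value(payoff: float, success_rate: float, cost: float) -> float:
    return payoff * success_rate - cost

# Traditional malware delivery: cheap, well-understood, near-deterministic
traditional = expected_value(payoff=500_000, success_rate=0.9, cost=50_000)

# Compromising an AI model: expensive, and not guaranteed to behave as intended
model_attack = expected_value(payoff=500_000, success_rate=0.2, cost=400_000)

print(f"traditional malware EV: ${traditional:,.0f}")  # $400,000
print(f"model compromise EV:  ${model_attack:,.0f}")   # -$300,000
```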

And to be clear, I’m definitely NOT advocating to blindly trust any of these models or software 😅

Alright one more AI note and then I swear I’ll shut up about it (for a bit) 😅

I really enjoy listening to Primeagen, and this article he goes over hits home for me. When I depend too much on AI for technical tasks, especially coding, I have this same problem. At one point, after using Copilot for a couple of weeks, I completely forgot basic Python syntax.

https://youtu.be/cQNyYx2fZXw?si=C0ZBCMcs9CZdzdQr

This would be very difficult to pull off and, frankly, unneeded if the goal is just to deliver malware. There are many vastly simpler ways to do so.

The information you get from an LLM is far more likely to be tainted before we ever get to this point.

Now, it’s possible they just inject malware into the code that’s part of running the model, but I don’t see it being something the model itself does.
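
For what it’s worth, that last path is very real: a lot of model files are just Python pickles under the hood, and unpickling untrusted data can run arbitrary code. Toy demo (a harmless echo, not actual malware):

```python
# Minimal sketch of why loading an untrusted pickled "model" is dangerous:
# pickle lets an object dictate code to run during deserialization.
import os
import pickle

class NotAModel:
    def __reduce__(self):
        # On unpickling, pickle will call os.system("echo pwned")
        return (os.system, ("echo pwned",))

blob = pickle.dumps(NotAModel())

# The victim "loading the model" executes the attacker's command:
pickle.loads(blob)  # prints "pwned"
```

This is exactly why formats like safetensors exist.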