It’s genuinely shocking how many people will defend the Federal Reserve without knowing a single fucking thing about how it works.
I had a crazy stressful week and I’m exhausted. I ended up doing some very intense exercise to try to manage it, but now I’m quite sore.
Good news is, I have a new fiat mine to work and it’s already a lot better. Things are a total disaster, but the people are cool, so I’ll take it 😅
Guess I’m a brute 🤷
The book “The Case for Christ” helped me with this. You’ll never know with absolute certainty, but there’s far more evidence than most realize.
I mean, it’s probably been 10 years since I watched it, so I wouldn’t be surprised if I didn’t enjoy it now lol
I liked Tokyo Drift, but ya, you’re pretty much right
My BitAxe hit a best difficulty of 53M, so I guess you could say it's getting pretty serious 😏
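For anyone wondering what “best difficulty” actually measures: it’s how far below Bitcoin’s easiest-possible target your luckiest share’s hash landed. A rough sketch of the math (the header bytes are just a placeholder, not a real share):

```python
import hashlib

# Hypothetical 80-byte block header (all zeros as a stand-in for a real share).
header = bytes(80)

# Bitcoin hashes headers with double SHA-256 and compares the digest
# as a little-endian 256-bit integer against the target.
digest = hashlib.sha256(hashlib.sha256(header).digest()).digest()
hash_value = int.from_bytes(digest, "little")

# The difficulty-1 target: the easiest target the protocol allows.
DIFF1_TARGET = 0xFFFF * 2**208

# A share's difficulty is the ratio of the diff-1 target to its hash;
# "best difficulty" is just the max of this over every share you've found.
print(f"share difficulty: {DIFF1_TARGET / hash_value:.2f}")
```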
🎈 This is how balloons take off.
It can lift you to a maximum height of about 5 km. The envelope with the basket and all the equipment weighs 400-500 kg, plus passengers (rough lift math below).
https://video.nostr.build/e716c0a125fafba3e1217c15b46f1e88835cbea137c737aa2ea09f37556d805d.mp4
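Those numbers sanity-check nicely with the ideal gas law. A back-of-the-napkin sketch (envelope volume and temperatures are assumptions, not from the video):

```python
# Gross lift = envelope volume x (ambient air density - heated air density).
R_AIR = 287.05            # specific gas constant for dry air, J/(kg*K)
PRESSURE = 101_325        # sea-level pressure, Pa
ENVELOPE_VOLUME = 3_000   # m^3, a typical passenger balloon (assumption)
T_AMBIENT = 288.0         # K (15 C)
T_INSIDE = 373.0          # K (100 C, a typical envelope temperature)

def air_density(pressure: float, temperature: float) -> float:
    """Ideal gas law: rho = P / (R * T)."""
    return pressure / (R_AIR * temperature)

lift_kg = ENVELOPE_VOLUME * (
    air_density(PRESSURE, T_AMBIENT) - air_density(PRESSURE, T_INSIDE)
)
print(f"gross lift ~ {lift_kg:.0f} kg")  # ~840 kg: envelope + basket (400-500 kg) plus passengers
# The ~5 km ceiling follows from the same math: ambient density falls with
# altitude, so the density difference (and the lift) shrinks as you climb.
```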
Spent a lot of years helping friends fly their balloons. Miss those days
I wholeheartedly agree
If you ask the AI bros, it solves everything, which will completely strip humans of purpose.
Their subtle trick is that they can strip you of your purpose just by claiming it will happen. If you think whatever you’re working on will be solved in the future anyway, there’s already no point in trying.
I agree and disagree.
On one hand, you’re right that it’s easier to deliver malware in other ways.
On the other hand, malware is often meant to grant remote access for some other agent to control (a human or, increasingly, an AI).
APTs are refining their “living off the land” [1] methods so that they can continue their attack even in the event of, say, a network disruption. Deploying a malicious AI model is the pinnacle of living off the land, because hardly anybody knows how to interpret the weights of these models (especially traditional security researchers… for now), and the models are capable of autonomous action.
Now, that might mean they deliver the model some other way, but I would think the easiest way to infect the broadest population is to poison the common LLM supply chains.
1. https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/living-off-the-land-attack/
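To make the supply-chain angle concrete, here’s a minimal sketch of one defense: refuse to load a model artifact unless it matches a digest you pinned out-of-band (the path and digest below are placeholders). This catches tampering in the distribution chain, though it does nothing about weights that were trained to be malicious in the first place:

```python
import hashlib
from pathlib import Path

MODEL_PATH = Path("model.safetensors")  # hypothetical artifact
EXPECTED_SHA256 = "0" * 64              # pin the real digest from a trusted channel

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of(MODEL_PATH) != EXPECTED_SHA256:
    raise RuntimeError("model artifact doesn't match pinned digest; refusing to load")
```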
I’ve worked in cybersecurity for 10 years, so I’m definitely familiar with LotL. I guess my point is that most hacking groups are financially motivated and pay very close attention to the time/effort/money spent trying to breach a target versus the payoff.
There’s just not much reason for them to compromise an AI model that isn’t guaranteed to do what they expect when they can deploy completely deterministic malware via traditional means. That includes APTs, because most of them are trying to make money.
Nation-state groups, where money isn’t the motivation, are a different story, and maybe you’re right that they’d be the ones to carry this sort of thing out. But I’d still argue that, given the huge success they have with far simpler means, it’s likely not worth the lift.
And to be clear, I’m definitely NOT advocating to blindly trust any of these models or software 😅
Alright one more AI note and then I swear I’ll shut up about it (for a bit) 😅
I really enjoy listening to Primeagen, and this article he goes over hits home for me. When I depend too much on AI for technical tasks, especially coding, I have this same problem. At one point, after using Copilot for a couple of weeks, I completely forgot basic Python syntax.
Ah bummer but I understand
nostr:nprofile1qqsr7acdvhf6we9fch94qwhpy0nza36e3tgrtkpku25ppuu80f69kfqpz9mhxue69uhkummnw3ezuamfdejj7qghwaehxw309aex2mrp0yhxummnw3ezucnpdejz7qg4waehxw309aex2mrp0yhxgctdw4eju6t09ug4n6q3 silently made the very best ₿ price app and never mentioned it again.
Ohh what is it called?
Send it to me. I’ll make sure it gets to them 😁
This would be very difficult to pull off and, frankly, unneeded if the goal is just to deliver malware. There are many vastly simpler ways to do so.
The information you get from an LLM is far more likely to be tainted before we ever get to this point.
Now, it’s possible they just inject malware into the code that’s part of running the model, but I don’t see it being something the model itself does.
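And that code path is very real: the pickle format underneath many .pt/.ckpt checkpoints will happily execute code at load time. A harmless toy demo (it only echoes; assumes a Unix-ish shell):

```python
import os
import pickle

# pickle reconstructs objects by calling whatever __reduce__ returns,
# so "loading the checkpoint" can run an arbitrary callable.
class Payload:
    def __reduce__(self):
        return (os.system, ("echo pwned: this ran during model load",))

poisoned = pickle.dumps(Payload())
pickle.loads(poisoned)  # executes the payload; no actual model needed
```

This is a big part of why safetensors exists: it stores raw tensors only, with nothing executable in the file.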
Exactly. I think the future of LLM monetization will be sponsored spots in the training data, with specialized weights to get your particular interests ranked highest in responses.

