semisol
52b4a076bcbbbdc3a1aefa3735816cf74993b1b8db202b01c883c58be7fad8bd
👨‍💻 software developer 🔒 secure element firmware dev 📨 nostr.land relay. All opinions are my own.

DM me on TG (if you already did), or on Nostr with NIP-04

Also looking for a laptop with long battery life nostr:note17agcl7zs4llahet7sf8eqy9vg6y3qu0l5720tg9g6tpsgtatzklsctt53c

I do not understand how that is related to Cashu being custodial while pretending not to be.

Powered by custodians. nostr:note1d2euydk98hyh40wns0lhngq56ur8n855n63rgl64urhhsynve3rs28m53u

Replying to Rand

SS?

SS is a different class: it has some attack vectors of its own and lacks some others

It's basically like giving someone a lot of words, plus facts about those words, in a way they cannot possibly remember them all

Then asking them at random to recall at least something

They will be correct for the common ones, and for the rarer stuff they start making up sensible-looking things

An LLM is basically a bunch of layers that transform data into other data

Some of those probably encode some sort of “knowledge” at some step

The problem is that it will work on whatever you throw at it, even if it is a nonexistent thing, and return the most sensible “knowledge” according to the training data
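
Rough toy sketch of that failure mode (not a real LLM, just an analogy in Python I made up; the FACTS table and answer() function are hypothetical): a lookup that always returns the closest stored “fact” and has no way to say “I don't know”.

```python
# Toy analogy: every step just maps input to the most plausible output,
# with no step that can refuse because the thing was never seen in training.
from difflib import SequenceMatcher

FACTS = {
    "paris": "Paris is the capital of France.",
    "python": "Python is a programming language.",
    "bitcoin": "Bitcoin is a proof-of-work cryptocurrency.",
}

def answer(query: str) -> str:
    # Pick whichever stored key looks most similar to the query,
    # regardless of whether the query refers to a real thing.
    best = max(FACTS, key=lambda k: SequenceMatcher(None, query.lower(), k).ratio())
    return FACTS[best]

print(answer("paris"))     # common, well-covered: comes back correct
print(answer("pariston"))  # nonexistent thing: still gets a confident answer
```

The second call is the point: a made-up word still produces a sensible-looking answer, because the whole pipeline only knows how to produce the most plausible output, never “this doesn't exist”.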

Well designed HWWs are resistant to attacks.

Many are not. Including the ones everyone keeps shilling.