semisol
👨‍💻 software developer 🔒 secure element firmware dev 📨 nostr.land relay all opinions are my own.

Have both. They solve their own use cases.

When you want to search the world’s largest library, that’s the tradeoff you are making.

But once you find it, or get a book from a friend for example, you put it in your IndexedDB database and congrats.
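A minimal sketch of that local-cache step in a browser client, using the standard IndexedDB API (the database and store names here are made up for illustration):

```typescript
// Cache a Nostr event locally, keyed by its id. "nostr-cache" and
// "events" are illustrative names, not from any particular client.
function cacheEvent(event: { id: string }): Promise<void> {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open("nostr-cache", 1);
    open.onupgradeneeded = () =>
      open.result.createObjectStore("events", { keyPath: "id" });
    open.onsuccess = () => {
      const tx = open.result.transaction("events", "readwrite");
      tx.objectStore("events").put(event);
      tx.oncomplete = () => resolve();
      tx.onerror = () => reject(tx.error);
    };
    open.onerror = () => reject(open.error);
  });
}
```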

I personally envision the hierarchy as follows:

- Indexers, that have almost everything. They are the Google of Nostr. People push their events here and others find them.

- Large relays, which serve large communities. Think Nostr.land, Damus, etc. These are hubs for retrieving content in bulk.

- Community relays. These can be self-hosted or hosted in the cloud. People push what they care about from here to large relays, and pull it from large relays back to here.

- Local cache. This is the user’s own space and that is it.

The ideal relays would be:

- indexer: custom software

- large relays: strfry at the medium end; NFDB and possibly other options at the large end

- community relays: could be a mix of strfry, NFDB, realy, nostrdb-based

- local relays: nostrdb, IndexedDB-based

Replying to jb55

I'm not sure if nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 intended this when designing the protocol, but it affords use cases that do not depend on a remote api. and that is really useful. the protocol dictates a uniform query interface that doesn't necessarily depend on location. so this enables local-first apps with optional remote replicas.

not *needing* an api is huge. the query language is the universal api. this means a local app would work exactly the same as one with data stored on a remote relay.
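A rough sketch of what that means in practice: the same NIP-01 filter can be served by a local store or a remote relay behind one interface, so app code never cares where the data lives. All names here are hypothetical.

```typescript
type Filter = { kinds?: number[]; authors?: string[]; limit?: number };
type NostrEvent = {
  id: string; kind: number; pubkey: string;
  content: string; created_at: number;
};

// The "universal API": one query interface, two backends.
interface EventStore {
  query(filter: Filter): Promise<NostrEvent[]>;
}

// Local-first backend (stand-in for IndexedDB/nostrdb).
class LocalStore implements EventStore {
  constructor(private events: NostrEvent[]) {}
  async query(f: Filter): Promise<NostrEvent[]> {
    return this.events
      .filter(e =>
        (!f.kinds || f.kinds.includes(e.kind)) &&
        (!f.authors || f.authors.includes(e.pubkey)))
      .slice(0, f.limit ?? this.events.length);
  }
}

// Remote backend: the same filter goes out as a REQ over WebSocket.
class RemoteRelay implements EventStore {
  constructor(private url: string) {}
  query(f: Filter): Promise<NostrEvent[]> {
    return new Promise(resolve => {
      const ws = new WebSocket(this.url);
      const out: NostrEvent[] = [];
      ws.onopen = () => ws.send(JSON.stringify(["REQ", "sub1", f]));
      ws.onmessage = m => {
        const [type, , payload] = JSON.parse(m.data);
        if (type === "EVENT") out.push(payload);
        if (type === "EOSE") { ws.close(); resolve(out); }
      };
    });
  }
}
```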

you *do* have blocking write confirmations, I added it to the protocol in the form of command results (OK). you just don't have transactions, but that can be implemented at a local layer before replicating.
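That looks roughly like this: send the EVENT, then block until the relay answers with the matching ["OK", id, accepted, message] command result. A minimal sketch:

```typescript
// Blocking write: resolve once the relay acknowledges this event id
// via a command result (["OK", <id>, <accepted>, <message>]).
function publish(ws: WebSocket, event: { id: string }): Promise<void> {
  return new Promise((resolve, reject) => {
    const onMessage = (m: MessageEvent) => {
      const [type, id, accepted, message] = JSON.parse(m.data);
      if (type === "OK" && id === event.id) {
        ws.removeEventListener("message", onMessage);
        accepted ? resolve() : reject(new Error(message));
      }
    };
    ws.addEventListener("message", onMessage);
    ws.send(JSON.stringify(["EVENT", event]));
  });
}
```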

I imagine building a distributed transaction model would be complicated, dbs like tigerbeetle and foundationdb are very complicated. the only other project I can think of tackling this in an interesting way is https://simon.peytonjones.org/verse-calculus/ via a deterministic logic language for building data models in a metaverse context, but that is also complicated.

nostr avoids this transaction complexity with an append-only, graph-style way of coding apps... if you ignore replaceable events, which I very much try to do at all costs, except where I can't at the moment (profiles, contact lists)

There’s also the question of whether you need transactions at all.

There are transactions for consistency, and there are transactions to ensure correctness of the system state (like indexes).

Many Nostr use cases actually do not need strict consistency. There is only some level of correctness required.

CRDTs and conflict resolution fix this. A notes app, for example, can be represented as a set of diffs on top of each other, and two updates to the same note can be merged.
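A minimal sketch of that idea, with a hypothetical Edit shape: edits are immutable events, a note is the deterministic fold of its edits, and merging two devices’ histories is just a set union, so concurrent updates never conflict.

```typescript
type Edit = { id: string; created_at: number; line: number; text: string };

// Merge two edit histories: union by id, then order deterministically.
// Every replica that sees the same set of edits computes the same note.
function merge(a: Edit[], b: Edit[]): Edit[] {
  const byId = new Map<string, Edit>();
  for (const e of [...a, ...b]) byId.set(e.id, e);
  return [...byId.values()].sort(
    (x, y) => x.created_at - y.created_at || x.id.localeCompare(y.id)
  );
}

// Materialize the note: later edits to the same line win.
function materialize(edits: Edit[]): string {
  const lines: string[] = [];
  for (const e of edits) lines[e.line] = e.text;
  return lines.join("\n");
}
```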

This is also why Dynamo and eventually consistent databases exist. You could also have slightly smarter relays that can do slightly smarter queries if you want.

Damus currently allows trusted relays to edit note content. So that could be used to replace media links with optimized versions.

Nostr.land optimizes your relay data connection only.

To optimize media I’d need to be able to modify the note content.

Currently Damus allows this, but it won’t work with Notedeck. I guess nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s could add something that allows NostrScript modules for this.

It is separate. The biggest data eater on Nostr is media, not relays.

An app built with nostr.land and some other performant relays would not have any difference from Primal’s data usage if it optimizes media.

nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s if you ever want to migrate TigerBeetle between clusters because it gets too big or you change replication (sketch after the steps below):

create an account used for closing, and closing only

when you want to migrate, create a linked set of events

- transfer all balance from source to closing account (use balancing_debit)

- pending transfer, close source account

after that, on the new cluster, create the account and credit it with the previous cluster’s balance (read back the transfer you sent to see how much it was at that instant)

your client should always try the old cluster first; if that fails with a closed-account error, try the new cluster, and retry if you get an account-nonexistent error
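A hedged sketch of the linked pair of transfers on the old cluster, loosely following the tigerbeetle-node client. Field and flag names should be checked against your client version; cluster id, addresses, ledger, and code values are placeholders.

```typescript
import { createClient, TransferFlags, id } from "tigerbeetle-node";

const oldCluster = createClient({
  cluster_id: 0n,              // placeholder
  replica_addresses: ["3001"], // placeholder
});

// Fields every transfer needs; zeroed for this sketch.
const base = {
  pending_id: 0n, user_data_128: 0n, user_data_64: 0n,
  user_data_32: 0, timeout: 0, ledger: 1, code: 1, timestamp: 0n,
};

async function drain(source: bigint, closingAccount: bigint) {
  const errors = await oldCluster.createTransfers([
    {
      ...base, id: id(),
      debit_account_id: source, credit_account_id: closingAccount,
      // "maximum" amount: balancing_debit caps it at the source's balance
      amount: (1n << 128n) - 1n,
      flags: TransferFlags.linked | TransferFlags.balancing_debit,
    },
    {
      ...base, id: id(),
      debit_account_id: source, credit_account_id: closingAccount,
      amount: 0n,
      // pending + closing_debit seals the source account; linked with
      // the balance transfer so both happen atomically
      flags: TransferFlags.pending | TransferFlags.closing_debit,
    },
  ]);
  if (errors.length) throw new Error(`migration failed: ${errors[0].result}`);
}
```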

At its core it’s actually just an extension of the FIDO specification, but now with “resident” credentials.

Security keys have no memory. What actually happens is the website sends you back a list of possible credentials, each an encrypted version of the private key. The security key decrypts it and signs with it.

With resident credentials, the security key keeps track of which sites etc. the key was registered on, and when you go to example.com it can tell you “would you like to log in with x account”
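In WebAuthn terms, the difference is just whether the site supplies `allowCredentials`. A sketch, where `storedCredentialId` stands in for a credential ID the website sent back (with non-resident keys, that ID *is* the wrapped private key):

```typescript
declare const storedCredentialId: Uint8Array; // returned by the website

const challenge = crypto.getRandomValues(new Uint8Array(32));

// Non-resident: the site lists the credential IDs it knows about,
// and the security key unwraps one and signs with it.
await navigator.credentials.get({
  publicKey: {
    challenge,
    allowCredentials: [{ type: "public-key", id: storedCredentialId }],
  },
});

// Resident/discoverable (passkeys): no list needed; the authenticator
// checks its own storage for this site and offers the stored accounts.
await navigator.credentials.get({
  publicKey: { challenge, allowCredentials: [] },
});
```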

That, and “emulated” security keys, which use the TEE/TPM/SE in your phone or desktop.

They push them because it’s so easy to use for users, and reduces account compromise risk for them.

The best way to explain it is it’s npub based login but per-website. And it works with a security key, but also many OSes have integrated passkey stuff.

NFDB 2.1

identity of the 💰 🪪 💾 variety

a new tier (lower, not higher)

payments 📆

higher limits

UI rework

gm Nostr

👨‍💻 building the next version of Nostr.land

There is nothing in Pubky that can’t be implemented on Nostr. Many already are.

They use the Mainline DHT to signal where a user’s content resides. Nostr uses kind 10002 relay lists, spread across thousands of relays.
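For reference, a kind 10002 relay list (NIP-65) is just an event whose `r` tags name the user’s read/write relays; the values below are illustrative:

```typescript
// A NIP-65 relay list event: clients read the "r" tags to learn
// where this user's content lives.
const relayList = {
  kind: 10002,
  pubkey: "<hex pubkey>",
  created_at: 1700000000,
  content: "",
  tags: [
    ["r", "wss://relay.nostr.land"],        // read + write
    ["r", "wss://relay.damus.io", "read"],  // read only
    ["r", "wss://my.community.example", "write"],
  ],
};
```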

On Pubky, you need a semi-trusted homeserver that adds complexity and can be easily censored. Nostr events can be transported via relays, or any other method like BLE meshes.

Otherwise, nothing changes.

what LN wallet do you use for zaps? with NWC, ofc

You can also build sequential embeddings this way:

The summary of the last segment was as follows:

The current segment is:

Please return a summary for the current segment, using the previous segment for context, and also return the current context.

Since you are dealing with things that could be non-self-descriptive and probably are not what embeddings are trained for, consider feeding your text to an LLM first to summarize and turn into more explaining content.

Then feed that to the embedding model
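A sketch of that loop, with `summarizeWithLLM` and `embed` as hypothetical stand-ins for whatever LLM and embedding APIs you’re using:

```typescript
declare function summarizeWithLLM(prompt: string): Promise<string>;
declare function embed(text: string): Promise<number[]>;

async function embedSegments(segments: string[]): Promise<number[][]> {
  const vectors: number[][] = [];
  let previousSummary = "(none)";
  for (const segment of segments) {
    // Each summary carries the previous segment's context forward.
    const prompt =
      `The summary of the last segment was as follows:\n${previousSummary}\n\n` +
      `The current segment is:\n${segment}\n\n` +
      `Please return a summary for the current segment, ` +
      `using the previous segment for context.`;
    const summary = await summarizeWithLLM(prompt);
    // Embed the explanatory summary rather than the raw segment.
    vectors.push(await embed(summary));
    previousSummary = summary;
  }
  return vectors;
}
```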

64KB of memory should be enough

Replying to Luxas

still true to this day

Looking for some small Nostr hashtags/follows that aren’t just Bitcoin/Nostr

If I can’t send sats to someone now, it won’t change the fact that they can’t receive them.

Instead of trying to improve LN reliability, we are trying to hide the problem in ways that will harm recipient adoption.

“The payment didn’t work” will become “the payment is stuck”, “I paid a 20% fee for a payment”, “I can’t get my own sats out”.

This is an abhorrent system multiple times worse than EMV

If people give me $1, and publicly show a zap saying they did, then I should not have to claim it through a process where I might end up with $0.

If the sender sees a lightning failure, it was going to fail anyway with nutzaps. But instead of pushing the problem down in the stack by making it harder to receive money, the problem could be addressed immediately.

The answer is you throw out the hub, and *pair the device like you would with the hub* by putting it into setup mode

Then you allow devices to join on your coordinator and it just works, no hacky workarounds required

It’s like Bluetooth but for smart home devices.

If you have Philips Hue devices, they support it out of the box and are ridiculously easy to use with Home Assistant. A bunch of others do too.

I’d recommend you use the Home Assistant OS and also set up Zigbee2MQTT.

As an alternative to your smart switches, you can use the Hue wall switch modules, which you can wire to any wall-mountable switch/button and link to anything.