We're trimming everything to be local-first, and getting LE Bluetooth going, but nostr:npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj suggested also going full Kafka on the other end of the spectrum and I'm like...

Have both. They solve their own use cases.
When you want to search the world's largest library, that's the tradeoff you are making.
But once you find it, or get a book from a friend for example, you put it in your indexedDB database and congrats.
I personally envision the hierarchy as follows:
- Indexers, that have almost everything. They are the Google of Nostr. People push their events here and others find them.
- Large relays, which serve large communities. Think Nostr.land, Damus, etc. These are hubs for retrieving content in bulk.
- Community relays. These can be self-hosted or hosted in the cloud. People push from here to large relays and from large relays to here, what they care about.
- Local cache. This is the user's own space and that is it.
The ideal relays would be:
- indexer: custom software
- large relays: strfry at the medium end, NFDB and possibly other options at the large end
- community relays: could be a mix of strfry, NFDB, realy, nostrdb-based
- local relays: nostrdb, indexedDB-based
I'm not sure if nostr:npub180cvv07tjdrrgpa0j7j7tmnyl2yr6yr7l8j4s3evf6u64th6gkwsyjh6w6 intended this when designing the protocol, but it affords use cases that do not depend on a remote api. and that is really useful. the protocol dictates a uniform query interface that doesn't necessarily depend on location. so this enables local-first apps with optional remote replicas.
not *needing* an api is huge. the query language is the universal api. this means a local app would work exactly the same as one with data stored on a remote relay.
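That "the query language is the universal API" claim can be made concrete: the same NIP-01 filter object you would send in a `["REQ", ...]` to a remote relay also works, unchanged, against a local cache. Here is a minimal sketch implementing a subset of the filter attributes (`ids`, `authors`, `kinds`, `since`, `until`); event fields and names follow NIP-01, the cache contents are made up.

```python
def matches(flt: dict, event: dict) -> bool:
    """Check a NIP-01 filter against a single event (subset of the spec)."""
    if "ids" in flt and event["id"] not in flt["ids"]:
        return False
    if "authors" in flt and event["pubkey"] not in flt["authors"]:
        return False
    if "kinds" in flt and event["kind"] not in flt["kinds"]:
        return False
    if "since" in flt and event["created_at"] < flt["since"]:
        return False
    if "until" in flt and event["created_at"] > flt["until"]:
        return False
    return True

def query(store: list, flt: dict) -> list:
    """The same filter you would put in a ["REQ", ...] runs on a local cache."""
    return [e for e in store if matches(flt, e)]

local_cache = [
    {"id": "a1", "pubkey": "alice", "kind": 1, "created_at": 100, "content": "hi"},
    {"id": "b2", "pubkey": "bob",   "kind": 1, "created_at": 200, "content": "yo"},
    {"id": "c3", "pubkey": "alice", "kind": 0, "created_at": 300, "content": "{}"},
]

flt = {"authors": ["alice"], "kinds": [1]}
print(query(local_cache, flt))
```

Swapping `local_cache` for a websocket to a relay changes the transport, not the query.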
you *do* have blocking write confirmations, I added it to the protocol in the form of command results (OK). you just don't have transactions, but that can be implemented at a local layer before replicating.
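The OK command result can serve as a blocking write confirmation. A toy sketch, with the relay as an in-process stub (the `["OK", id, accepted, reason]` shape is the protocol's; the size limit and stub are invented for illustration):

```python
def relay_handle(msg: list) -> list:
    """Stub relay: answers ["EVENT", ev] with ["OK", id, accepted, reason]."""
    _, ev = msg
    if len(ev.get("content", "")) > 64_000:  # made-up policy for the demo
        return ["OK", ev["id"], False, "invalid: event too large"]
    return ["OK", ev["id"], True, ""]

def publish_blocking(ev: dict) -> None:
    """Send an event and block until the relay confirms or rejects it."""
    ok = relay_handle(["EVENT", ev])
    assert ok[0] == "OK" and ok[1] == ev["id"]
    if not ok[2]:
        raise RuntimeError(f"relay rejected {ev['id']}: {ok[3]}")

publish_blocking({"id": "a1", "content": "hello"})  # returns once confirmed
```

A real client would do the same over a websocket, matching the OK's event id to the pending write.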
I imagine building a distributed transaction model would be complicated, dbs like tigerbeetle and foundationdb are very complicated. the only other project I can think of tackling this in an interesting way is https://simon.peytonjones.org/verse-calculus/ via a deterministic logic language for building data models in a metaverse context, but that is also complicated.
nostr avoids this transaction complexity by an append-only graph style way of coding apps... if you ignore replaceable events, which I very much try to do at all costs, except where I can't at the moment (profiles, contact lists)
There's also the question of whether you need transactions at all.
There are transactions for consistency, and there are transactions to ensure correctness of the system state (like indexes).
Many Nostr use cases actually do not need strict consistency; only some level of correctness is required.
CRDTs and conflict resolution fix this. A notes app, for example, can be represented as a set of diffs on top of each other, and two updates to the same note can be merged.
This is also why Dynamo and eventually consistent databases exist. You could also have slightly smarter relays that can do slightly smarter queries if you want.
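A minimal sketch of the notes-as-diffs idea: because events are immutable and content-addressed, the set of edits is a grow-only-set CRDT, so merging two replicas is just set union, and the note is materialized by replaying edits in timestamp order. The tuple layout and sample data are invented for illustration.

```python
def materialize(edits: set) -> dict:
    """Replay line edits in (created_at, id) order; later writes win per line."""
    note = {}
    for created_at, eid, line, text in sorted(edits):
        note[line] = text
    return note

def merge(a: set, b: set) -> set:
    # Union of immutable, content-addressed events is a G-Set CRDT:
    # commutative, associative, idempotent -- replicas converge.
    return a | b

# Two devices edited the same note while offline:
device1 = {(1, "e1", 0, "shopping"), (2, "e2", 1, "milk")}
device2 = {(1, "e1", 0, "shopping"), (3, "e3", 1, "milk, eggs")}

merged = merge(device1, device2)
assert materialize(merged) == {0: "shopping", 1: "milk, eggs"}
```

Note that merge order doesn't matter, which is exactly the eventual-consistency property Dynamo-style systems give you.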
This is something I can support in the next Nostr.land version
Damus on Android = notedeck
Damus currently allows trusted relays to edit note content. So that could be used to replace media links with optimized versions.
Nostr.land optimizes your relay data connection only.
To optimize media I'd need to be able to modify the note content.
Currently Damus allows this, but it won't work with Notedeck. I guess nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s could do something that allows nostrscript modules for this.
It is separate. The biggest data eater on Nostr is media, not relays.
An app built with nostr.land and some other performant relays would not have any difference from Primal's data usage if it optimizes media.
Their caching server is not the part saving data, it's their media optimization proxy.
They can get rid of it and instead use high performance relays like nostr.land and it would still barely use data :)
nostr:npub1xtscya34g58tk0z605fvr788k263gsu6cy9x0mhnm87echrgufzsevkk5s if you ever want to migrate tigerbeetle between clusters (because it gets too big or you change replication):
create an account for the purpose of closing and closing only
when you want to migrate, create a linked set of events
- transfer all balance from source to closing account (use balancing_debit)
- close the source account with a pending transfer
after that, on the new cluster, create the account and credit it the previous cluster's balance (read back the transfer you sent to get how much it was at that instant)
your client should always try the old cluster first; if it fails with a closed error, try the new cluster, and retry if you get an account-nonexistent error
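The client-side fallback described above can be sketched like this. The exception names and the `create_transfer` stub are hypothetical stand-ins for whatever closed/not-found result codes your TigerBeetle client surfaces; only the control flow is the point.

```python
class AccountClosed(Exception): pass      # stand-in for the "account closed" result
class AccountNotFound(Exception): pass    # stand-in for "account does not exist"

def transfer_with_fallback(old, new, transfer, retries=3):
    """Try the old cluster first; on a closed-account error fall over to the
    new cluster, retrying while the account has not yet been created there."""
    try:
        return old.create_transfer(transfer)
    except AccountClosed:
        pass  # this account was migrated away
    for _ in range(retries):
        try:
            return new.create_transfer(transfer)
        except AccountNotFound:
            continue  # migration in flight; the account will appear shortly
    raise RuntimeError("account not yet migrated to the new cluster")

# Demo with stub clusters:
class StubCluster:
    def __init__(self, error=None):
        self.error = error
    def create_transfer(self, transfer):
        if self.error:
            raise self.error()
        return "ok"

old, new = StubCluster(error=AccountClosed), StubCluster()
assert transfer_with_fallback(old, new, {"amount": 10}) == "ok"
```

The retry on account-nonexistent covers the window between the close on the old cluster and the create-and-credit on the new one.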
At its core it's actually just an extension of the FIDO specification, where credentials can now be "resident".
Security keys have no memory. What actually happens is the website sends you back a list of possible security keys, and the encrypted version of the private key. The security key decrypts it and signs with it.
With resident credentials, the security key keeps track of which sites etc. the key was registered on, and when you go to example.com it can tell you "would you like to log in with x account"
That and "emulated" security keys, which use the TEE/TPM/SE in your phone or desktop
They push them because it's so easy to use for users, and reduces account compromise risk for them.
The best way to explain it is it's npub-based login but per-website. And it works with a security key, but also many OSes have integrated passkey stuff.
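The "security keys have no memory" trick can be modeled in a few lines. Real authenticators wrap or derive per-credential keys per the FIDO spec; this toy uses HMAC-based derivation from a device secret (everything here, `DEVICE_SECRET` included, is an invented simplification) just to show why the device needs zero per-site storage: the credential id the website hands back *is* the state.

```python
import hashlib
import hmac
import os

DEVICE_SECRET = os.urandom(32)  # burned into the authenticator at manufacture

def make_credential(rp_id: str) -> bytes:
    """Registration: hand the site an opaque credential id; store nothing."""
    nonce = os.urandom(16)
    tag = hmac.new(DEVICE_SECRET, nonce + rp_id.encode(), hashlib.sha256).digest()
    return nonce + tag

def derive_key(rp_id: str, cred_id: bytes) -> bytes:
    """Login: the site sends cred_id back; re-derive the same signing key."""
    nonce, tag = cred_id[:16], cred_id[16:]
    expected = hmac.new(DEVICE_SECRET, nonce + rp_id.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("credential not issued by this device, or wrong site")
    return hmac.new(DEVICE_SECRET, b"sign" + nonce, hashlib.sha256).digest()

cred = make_credential("example.com")
assert derive_key("example.com", cred) == derive_key("example.com", cred)
```

Resident credentials invert this: the key *does* remember `(rp_id, account)` pairs, which is what enables the "log in as x?" prompt.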
NFDB 2.1
identity of the 🪪 variety
a new tier (lower, not higher)
payments
higher limits
UI rework
gm Nostr
👨‍💻 building the next version of Nostr.land
There is nothing in Pubky that can't be implemented on Nostr. Many already are.
They use the Mainline DHT to signal where a user's content resides. Nostr uses kind-10002 relay lists spread to thousands of relays.
On Pubky, you need a semi-trusted homeserver that has complexity and can be easily censored. Nostr events can be transported via relays, or any other method like BLE meshes.
Otherwise, nothing changes.
what LN wallet do you use for zaps? with NWC, ofc
You can also build sequential embeddings this way:
The summary of the last segment was as follows:
The current segment is:
Please return a summary for the current segment, using the previous segment for context, and also return the current context.
Since you are dealing with things that could be non-self-descriptive and probably are not what embeddings are trained for, consider feeding your text to an LLM first to summarize it and turn it into more self-explanatory content.
Then feed that to the embedding model
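The sequential-embedding loop above might look like this. `summarize` and `embed` are stubs standing in for an LLM call (using the prompt template from the post) and an embedding model; the real calls depend on whatever provider you use.

```python
def summarize(prev_summary: str, segment: str) -> str:
    """Stand-in for an LLM call; real code would send this prompt to a model."""
    prompt = (
        f"The summary of the last segment was as follows:\n{prev_summary}\n"
        f"The current segment is:\n{segment}\n"
        "Please return a summary for the current segment, "
        "using the previous segment for context."
    )
    return segment[:40]  # stub answer; a real model would respond to `prompt`

def embed(text: str) -> list:
    """Stand-in for an embedding model call."""
    return [float(len(text))]

def sequential_embeddings(segments: list) -> list:
    """Embed each segment's summary, carrying context forward segment by segment."""
    vectors, summary = [], ""
    for seg in segments:
        summary = summarize(summary, seg)  # previous summary provides context
        vectors.append(embed(summary))
    return vectors

vecs = sequential_embeddings(["first chunk of a log", "second chunk"])
```

Each vector then indexes a summary that already carries the running context, instead of a raw segment in isolation.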
64KB of memory should be enough
excuse me a CAR?!
German public transit isn't extremely shit, and even in places where it is, people use it.
A great article by Hartmut Gieselmann on #opensource.
It is important to understand that open source means unrestricted use by everyone. Anyone who restricts the license by field of use, or makes it illegal for certain groups of people, should drop the name open source entirely:
https://www.heise.de/meinung/Kommentar-Open-Source-auch-fuer-die-Boesen-10438733.html
NVK would like to have a word about the "commies" that "stole" their HWW code that was licensed as OSS
then made it better
First time?
The API should return a "bulk-allowed" field before any bulk queries are made.
Looking for some small Nostr hashtags/follows that aren't just Bitcoin/Nostr
If I can't send sats now to someone, it won't change the fact that they can't receive it.
Instead of trying to improve LN reliability, we are trying to hide the problem in ways that will harm recipient adoption.
"The payment didn't work" will become "the payment is stuck", "I paid a 20% fee for a payment", "I can't get my own sats out"
This is an abhorrent system multiple times worse than EMV
If people give me $1, and publicly show a zap saying they did, then I should not have to claim it in a process where I might end up with $0.
If the sender sees a lightning failure, it was going to fail anyway with nutzaps. But instead of pushing the problem down in the stack by making it harder to receive money, the problem could be addressed immediately.
This is starting to sound oddly familiar
No one does their job properly either
I wish I had 10% of this 🤣
Why not more upload?
The answer is you throw out the hub, and *pair devices just as you would with the hub* by putting them into setup mode
Then you allow devices to join on your coordinator and it just works, no hacky workarounds required
Since I am a nerd I have my Home Assistant pi shoved in a server rack.
The Zigbee coordinator is in another place with another Pi, that has a USB dongle connected and it connects to an MQTT server on the HA pi
It's like Bluetooth but for smart home devices.
If you have Philips Hue devices they support it out of the box and are ridiculously easy to use with Home Assistant. A bunch of others do too
I'd recommend you use the Home Assistant OS and also set up Zigbee2MQTT.
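For the split setup described above (coordinator on one Pi, broker on the HA Pi), a minimal Zigbee2MQTT `configuration.yaml` might look like this; the hostname and serial port are placeholders for your own:

```yaml
# configuration.yaml on the coordinator Pi (hostname/port are placeholders)
mqtt:
  base_topic: zigbee2mqtt
  server: mqtt://homeassistant.local:1883   # the MQTT broker on the HA pi
serial:
  port: /dev/ttyUSB0                        # the Zigbee USB dongle
permit_join: false                          # flip to true only while pairing
```

Home Assistant then discovers the devices over MQTT, so the coordinator and HA don't need to share a machine.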
As an alternative to your smart switches, you can use the Hue wall switch modules, which you can wire to any wall-mountable switch/button and link to anything.


