Emergency services should stop using Twitter to share information. Potentially lifesaving information shouldn't be monetized or held behind a login.
Quoting this for myself so that I remember to go add my tool once it's in a state fit for public consumption.
(For anyone curious, it's a little self-contained executable that lets you browse and, eventually, post to a single relay. Basically me making a web-based, SSR Nostr client that works just like I want.)
nostr:note1h5a5uswq2crnmezme6zqzqa9dxdj86ye70ty4tntr0gc9832r8lskuk6jp
[1]: Why include metadata? Well, the alternative would be for Nostr clients to constantly send your web history to relays as you browse, even though the majority of sites won't have any comments on them. Metadata lets you opt in. And it could let sites control (w/ optional client support) which comments/relays appear by default. Would probably be good to have it specify a canonical URL, too, so all discussions end up w/ the same tag.
Idea: a #nostr event kind for leaving a comment on a web page.
Web pages could embed some metadata[1] to say "discuss this on Nostr" and plugins could query known/approved relays for comments.
Could include a "selection" metadata field to reply to a particular part of the content.
#nostrdev
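To make the idea concrete, here's a TypeScript sketch of the filter a plugin might send to relays. The event kind, the `#r` tag, and the embedded meta tags in the comment are all illustrative assumptions, not a standardized NIP:

```typescript
// Hypothetical sketch — nothing here is standardized. A page might embed:
//   <link rel="canonical" href="https://example.com/post/42">
//   <meta name="nostr:relays" content="wss://relay.example.com">
// and a plugin would then query those relays for comment events.

// Build a Nostr filter for comments on a page, keyed by its canonical URL.
// Kind 1111 is a placeholder for a "web page comment" kind.
function commentFilter(canonicalUrl: string) {
  return {
    kinds: [1111],         // hypothetical "web page comment" kind
    "#r": [canonicalUrl],  // every comment tags the canonical URL
  };
}

const filter = commentFilter("https://example.com/post/42");
console.log(JSON.stringify(filter));
```

Keying on the canonical URL (rather than whatever URL the reader happens to be on) is what makes all discussions of the same page end up under the same tag.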
Not exactly sure how to handle it but … rate limiting that doesn't break the spec?
Ex: IIRC, nos.lol would "rate limit" my queries by
1) sending a NOTICE to "slow down". (Notably absent: any subscription ID.)
2) NOT sending an EOSE, CLOSE, or any events for the query that triggered the limit.
… which can make a naive client hang forever, waiting for messages that are never coming.
Trump found guilty on 34/34 charges of falsifying business records re: paying hush money to Stormy Daniels.
https://www.npr.org/2024/05/30/nx-s1-4977352/trump-trial-verdict
Now, when is sentencing? ⏳
LOL. Old enough to remember when avoiding centralized bank transaction fees was one of the big benefits touted by bitcoiners.
[insert Pepperidge Farm Remembers meme here]
nostr:note10uv7uwynw24f55wy3yf2d70nrgce0g6v99davq5gc54khc2fgstqvfjhuw
Mutation is just inherently more complicated than storing immutable events. If someone is storing a new immutable event every 100ms, they're easy to spot as a potential spammer/abuser.
If they're replacing an event every 100ms and a server naively just deletes old versions, now you don't have a record of how much they're (ab)using your resources.
Plus, a big benefit of signed events is holding people (or at least, pubkeys) accountable for what they've posted. If they can just edit it out of history, that is lost.
For that reason, my FeoBlog system disallowed mutability. But that was super frustrating when I would inevitably discover a typo in what I'd just posted. So, while it's more complicated, I'm glad that Nostr has a solution there.
But in my Nostr relay implementation, I keep older versions of mutable events so that you can see previous versions that a user posted. I'm keeping the receipts.
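The "keep the receipts" approach can be sketched like this. This is an illustrative TypeScript model, not yastr's actual storage code; names and the in-memory Map are assumptions:

```typescript
// Sketch: instead of deleting older versions of a replaceable event,
// append every version and serve only the newest one by default.

interface Event {
  pubkey: string;
  kind: number;
  created_at: number; // unix seconds
  content: string;
}

class VersionedStore {
  private versions = new Map<string, Event[]>();

  private key(e: Event): string {
    return `${e.pubkey}:${e.kind}`;
  }

  // Append instead of replace: the full history stays on record.
  store(e: Event): void {
    const list = this.versions.get(this.key(e)) ?? [];
    list.push(e);
    this.versions.set(this.key(e), list);
  }

  // Normal queries see only the latest version...
  latest(pubkey: string, kind: number): Event | undefined {
    const list = this.versions.get(`${pubkey}:${kind}`) ?? [];
    return list.reduce<Event | undefined>(
      (best, e) => (!best || e.created_at > best.created_at ? e : best),
      undefined,
    );
  }

  // ...but history survives, for auditing and abuse detection.
  history(pubkey: string, kind: number): Event[] {
    return this.versions.get(`${pubkey}:${kind}`) ?? [];
  }
}
```

Because nothing is ever deleted, the "replace an event every 100ms" abuser from above leaves exactly as long a trail as someone posting immutable events at the same rate.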
Alright, #nostr / #nostrdev folks, I've got my implementation of multipart NIP-95 files up here:
https://github.com/NfNitLoop/yastr/blob/main/src/server/nip95.rs
(Yes, it's very messy. I just saw all the TODOs which I've already done.)
And I've got a little tool to upload multi-part files here: https://github.com/nfnitloop/nostr-cli/ (The `nt upload` command.)
Since clients might not know how to put together multipart files yet, my relay (wss://www.nfnitloop.com/nostr/) also makes them available via HTTP. For example:
There is room for space-saving in the storage implementation. If this takes off I'd want to store the BLOBs in binary instead of Base64. And I'd probably want to dedupe whole files so that we don't end up having to store multiple copies of them if multiple people upload them. BUT, that can come later without changing the public interface.
Before *that*, I want to implement HTTP range requests to show that the blockSize metadata allows a client or server to efficiently fetch bytes from a known offset. That way you could, for example, scrub to a certain position in a video without having to download all the bytes up to that point.
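The offset arithmetic that blockSize enables is just integer division. Here's a small TypeScript sketch (the function name and return shape are made up for illustration):

```typescript
// Sketch: given a byte range and a fixed blockSize (from the multipart
// file's metadata), compute which parts to fetch and where the requested
// range starts inside the first part. This is what lets a server (or
// client) answer an HTTP Range request without reading earlier bytes.

function partsForRange(blockSize: number, start: number, end: number) {
  const firstPart = Math.floor(start / blockSize); // index of first needed part
  const lastPart = Math.floor(end / blockSize);    // index of last needed part
  const offsetInFirst = start % blockSize;         // bytes to skip in first part
  return { firstPart, lastPart, offsetInFirst };
}

// Ex: scrubbing to byte 1,500,000 of a video stored in 64 KiB parts
// only needs parts 22 through 24, not parts 0 through 21.
console.log(partsForRange(65536, 1_500_000, 1_600_000));
```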
And now Iāve added HTTP Range support.
ā¤ļø to the axum-range crate which made this much simpler. (Though I did have to delve deeper into Rust async than I have before. But I was happy to learn more there.)
Hereās a sample video which I saw making the rounds on Mastodon earlier this week:
https://nfnitloop.com/nostr/files/fb8e82bed22cf9c8ee39ef2898970beee6fec7ca9068e1506a061f22f5ec1ae7/
Note that only the first part of the video is fetched. You can jump past that and the server will start loading from the exact point you jump to.
The algorithm the server uses to answer these range requests could also be implemented client-side. (Say, as the Blob interface in the browser.) I'm just not implementing a client. (Yet.)
That's one of the big reasons I started writing my own relay:
https://github.com/nfnitloop/yastr?tab=readme-ov-file#yastr
If you make a distributed system that's open to everyone all the time, you'll get inundated with spam/trolling/bots/abuse. IMO a better solution is to let it be open to read, but restricted to write. People can quote and reply to content, but it doesn't mean I have to see it if I don't want to.
It's not yet super optimized. I'm making sure the rules around accepting such messages and reassembling them work well first.
Optimization of the implementation itself can come in a later iteration, without changing the public interface.
Plus, any implementation details there will probably be specific to the relay implementation. Happy to optimize mine to be an example to follow, but I don't imagine it'll just be a copy/paste to add it to other relays.
The ideas I have for optimization are basically: deconstruct the kind 1064 JSON messages so you can store the blob as binary instead of base64. (This is done transparently on the server. It still has to reconstruct and serve text JSON events when they're requested, since the Nostr protocol requires it.)
You can do that at the event level and save some space. And it's the simpler implementation. But if you expect people might re-post the same content, an implementation that can reassemble the full file and store that in a content-addressable store would be even better. But that gets more complicated with multipart files. (Can't decode the whole blob, verify it, and store it until all messages are present.)
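Both ideas can be shown in a few lines of TypeScript using Node's stdlib. This is a rough sketch, not yastr's actual code, and the function names are made up:

```typescript
import { createHash } from "node:crypto";

// Idea 1: decode the kind-1064 base64 payload once and store binary.
// Base64 inflates data by ~33%, so the decoded bytes are ~25% smaller.
function decodedSize(base64Content: string): number {
  return Buffer.from(base64Content, "base64").length;
}

// Idea 2: content-addressable storage for fully reassembled files.
// Identical uploads by different users hash to the same key, so the
// bytes only need to be stored once.
function contentAddress(fileBytes: Buffer): string {
  return createHash("sha256").update(fileBytes).digest("hex");
}
```

The multipart complication mentioned above is visible here: `contentAddress` needs the *whole* file's bytes, so dedupe can only happen once every part of a multipart upload has arrived.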
I've got an experimental implementation of multipart NIP-95 files working. Hope to clean it up and have it on my server for folks to check out this week.
I'm writing my own Nostr client. (Not entirely from scratch. There seem to be pretty nice libraries out there in TS and Rust!)
But I found 2 annoying characteristics of the relay protocol:
The NOTICE message isn't tied to any particular request or subscription. But some relays use it as a way to give error messages.
Ex: I found one relay would answer my search subscription with a notice: "error: slow down". Then it never resolves the original request. (Despite, IIRC, the NIP saying that you should always close a subscription.)
So now I need to add special cases to handle that relay/message. And probably a timeout for subscriptions. Which you can't do in the general case. But I guess if you don't get an event or an EOSE within some time, you can assume the subscription failed to start?
But how do I know when I've "slowed down" enough? Why wouldn't the relay just throttle its responses to me if it wants me to go slower?
OTOH, I guess you have to do this kind of defensive programming for any protocol. But it would be nice if there were a standard way to handle cases like this.
#nostrdev
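The timeout fallback described above can be sketched in a few lines of TypeScript. The `Subscription` interface here is a made-up minimal shape, not a real library's API:

```typescript
// Defensive fallback: race the subscription's first EVENT/EOSE against a
// timer, so a relay that silently drops the query (e.g. after a
// rate-limit NOTICE) can't hang the client forever.

interface Subscription {
  // Resolves on the first EVENT or EOSE for this subscription (made up).
  firstMessage(): Promise<unknown>;
}

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`no EVENT/EOSE within ${ms}ms`)), ms),
    ),
  ]);
}
```

A client would call something like `withTimeout(sub.firstMessage(), 10_000)` and treat the rejection as "subscription failed to start", closing it and optionally retrying later.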
Where's the #Nostr #queer community? I need less #btc and more #LGBTQ. 🏳️‍🌈🏳️‍⚧️
Yes, but, being new, I'm not sure whether bookmarks are (as I assume) purely client-side or pushed to the server.
I end up just liking things, because then I can be sure to be able to fetch those likes, whichever client I'm on.
When you interpret an ambiguous sentence in a way that is funnier, that's a #doubLOLentendre
On my way to feed the cat, he ran under my feet and I stepped on his paw.
When I put his food bowl down, full of food, he ran away from me.
Of course. People can speak whatever languages they want on their own account.