Blake
b2dd40097e4d04b1a56fb3b65fc1d1aaf2929ad30fd842c74d68b9908744495b
#Bitcoin #Nostr #Freedom wss://relay.nostrgraph.net

šŸš€Introducing https://nokyctranslate.com !šŸŒ The future of AI-powered translations for Nostr is here! šŸ¤– Pay with #Bitcoin/#Lightning, no KYC needed. āœ… Get your API Key, add funds (0.1 sat/char), and start translating in your Nostr app. Works with any #LibreTranslate supported app, like Damus! šŸ“± #NoKYCTranslate #NostrTranslations

Are you anonymous, or can we know who runs it?

I’ve updated the code and it’s now hosted at https://nostr-delete.vercel.app.

Latest code can load your extension relays, and kind 10002 relays for you.

The old URL will redirect.

https://nostr-delete.vercel.app

Hosted. I’m less up to date with PaaS providers, but I did sign up to Vercel last week.

I did get the static site output working as well. GitHub updated.

CC @fiatjaf

https://github.com/blakejakopovic/nostr_delete

Thanks. I just got this working.

Do you know if there is a way to compile all the JavaScript output into a single file?

It seems new Svelte projects use the same project-creation commands. I was fairly sure I selected Svelte (without Kit) during creation, because I’d heard Kit isn’t amazing.

Basically, the GitHub project has the latest code, and the other delete.html isn’t updated, as I haven’t worked out how to compile it statically without a web server yet. The second image is the GitHub version, with kind 10002 and extension relay loading options.

It looks like I can maybe use this adapter to output static site files? My previous attempts didn’t find a way to do it.

https://kit.svelte.dev/docs/adapter-static
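From a skim of those docs, the config might look something like this — a sketch only, assuming `@sveltejs/adapter-static` is installed, and the `fallback` option is what skips server-side rendering for a pure client-side app:

```javascript
// svelte.config.js — sketch, assuming @sveltejs/adapter-static is installed
import adapter from '@sveltejs/adapter-static';

export default {
  kit: {
    adapter: adapter({
      pages: 'build',         // output directory for pages
      assets: 'build',        // output directory for assets
      fallback: 'index.html'  // SPA mode: serve one HTML file, no server needed
    })
  }
};
```

I believe you also need `export const prerender = true;` in the root `+layout.js` if you want fully prerendered pages rather than the SPA fallback.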

The static URL is below.

The issue I had when I wrote it in plain JavaScript was that I needed a better UI template engine. So I ported the code to Svelte to finish the relay import from the extension and the kind 10002 query across relays.

I then found out that exporting a single self-contained file from Svelte is a real pain (they love server-side rendering, apparently). Maybe someone can assist?

The GitHub repo version can be run locally and has those two features included. I don’t have a hosted version of the Svelte at present.

The logic at present is to rely on a bootstrap list, your browser extension’s relay data, and kind 10002 plus any custom relays you wish to add, then broadcast the deletion to them. It doesn’t connect to relays first and ask which ones have the event; it just sends the deletion proactively to all loaded/selected relays.

If someone wants it to blast deletions out to 100s of relays blindly, they can edit the bootstrap relay list in the code and it should work fine.
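The proactive-broadcast approach above can be sketched roughly like this — the function names are my own, and in practice the `id` and `sig` fields come from a NIP-07 extension signer:

```javascript
// Sketch: build one NIP-09 (kind 5) deletion event and send it to every
// loaded/selected relay, without first asking which relays hold the event.

function buildDeletionEvent(pubkey, eventIds, reason = '') {
  return {
    kind: 5, // NIP-09 deletion request
    pubkey,
    created_at: Math.floor(Date.now() / 1000),
    tags: eventIds.map((id) => ['e', id]), // one e-tag per event to delete
    content: reason
    // id and sig are filled in by the signer (e.g. a NIP-07 extension)
  };
}

function broadcast(signedEvent, relayUrls) {
  for (const url of relayUrls) {
    const ws = new WebSocket(url);
    ws.onopen = () => {
      ws.send(JSON.stringify(['EVENT', signedEvent]));
      ws.close();
    };
  }
}
```

To hit hundreds of relays, you would just pass a bigger `relayUrls` array — the fire-and-forget sockets mean no per-relay confirmation, matching the best-effort behaviour described above.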

One limitation with the browser extensions is they can be very noisy. I made buttons for what could (mostly) have been loaded automatically, because at least with Nostore, it asks to read the pubkey, then asks to read relays, and then asks to sign the event. Lots of approval popups.

Everything can definitely be tweaked or improved. Things like private relay lists will need support.

https://cdn.nostrgraph.net/public/delete.html

I haven’t had time to look in depth (not sure it’s released yet?), and I don’t have a full understanding of his roadmap — but it’s all about making Nostr easier.

My only issue is I need to build for scalability, and JavaScript isn’t right for my needs at present. I certainly need JavaScript libs and tooling... but for different goals.

Yep. That’s an image from my dashboards, which are backed by my aggregator and relay.

The key issue is Nostr has around 15MM non-spam events. Indexes are getting big. Tables should likely start to be partitioned on kind/time, with partial indexes for recent data and perhaps limited indexing for historic data. My DB is near 90GB already.

However, I’ve started to build support for roll-up statistics, which can be collected in near-real-time and basically give hourly/daily visibility per pubkey.

Basically, it’s too slow unless you either pre-warm the cache or create roll-up metrics, which is what I’ve got working now. I’ve seen a 400X query improvement in places (early data, as tables are not filled with lots of different pubkeys’ data yet), but it’s significant anyway.
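The roll-up idea is roughly this — instead of scanning the raw events table per query, maintain counters keyed by (pubkey, kind, hour). The shape below is my own illustration, not the actual schema:

```javascript
// Sketch of hourly roll-up metrics per pubkey: each incoming event bumps a
// counter for its (pubkey, kind, hour) bucket, so per-pubkey hourly/daily
// stats become a key lookup instead of a table scan.

function hourBucket(unixSeconds) {
  return unixSeconds - (unixSeconds % 3600); // truncate to the hour
}

function rollUp(events) {
  const stats = new Map(); // key: `${pubkey}:${kind}:${hour}` -> count
  for (const ev of events) {
    const key = `${ev.pubkey}:${ev.kind}:${hourBucket(ev.created_at)}`;
    stats.set(key, (stats.get(key) ?? 0) + 1);
  }
  return stats;
}
```

Daily visibility falls out the same way by summing 24 hourly buckets, which is why the speed-up grows as the raw tables do.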

If you have some time, and like Damus (or other Nostr apps), why not give it an App Store Review.

Here was mine: ā­ļøā­ļøā­ļøā­ļøā­ļø

ā€œDamus is an evolutionary step above legacy social media apps. Your content is your content. You can leave at any time. You can join in any groups or conversations without fear it isn’t the popular opinion or being shadow banned. It’s early, but day by day the helpful community is bringing a new richness to the world.ā€

https://apps.apple.com/au/app/damus/id1628663131

I’ve started to design a universal Nostr query and publishing engine/library. Only brainstormed notes and an ASCII diagram so far; however, it’s a complex problem and we need something.

One aspect I envision will be like SQL query engines/planners, which can have cost-optimisation functions and estimations.

Things like kind 10002, nip05, relay latency/health, relay hints, blacklists, paid query/relays, rate limiting, etc.
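A planner-style cost function over those inputs could look something like this — the weights and field names are purely illustrative assumptions, not a spec:

```javascript
// Sketch of a SQL-planner-style cost function for choosing which relays to
// query or publish to. Lower cost = more preferred.

function relayCost(relay) {
  let cost = relay.latencyMs ?? 1000;      // prefer low-latency/healthy relays
  if (relay.requiresPayment) cost += 500;  // paid queries cost more
  if (relay.rateLimited) cost += 250;      // back off rate-limited relays
  if (relay.blacklisted) cost = Infinity;  // never select blacklisted relays
  if (relay.inKind10002) cost -= 200;      // prefer the user's relay list
  return cost;
}

function pickRelays(relays, n) {
  return [...relays]
    .filter((r) => relayCost(r) < Infinity)
    .sort((a, b) => relayCost(a) - relayCost(b))
    .slice(0, n);
}
```

The real engine would estimate these inputs from observed latency/health data, much like a SQL planner estimates row counts.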

The gossip approach is great and it works; however, it’s more of a ‘read my group’ approach. I don’t think it has any documented publishing rules/logic, like how best to publish a DM or reaction, etc. I’m focused on going from your network to the entire network, ideally.

I also want it to have common queries abstracted, like get notifications, get DMs, get event details/stats, etc. We likely only have 40-50 queries used commonly in apps today.

I could fail... but I’ll share as I put parts or ideas together. Diagramming is the next major step. However, I can already see how queues, channels, and async Rust can build a really awesome library that abstracts away as much of the complexity as possible. Maybe WebAssembly support.

I don’t have more than lots of observation; however, I don’t think blastr reliably hits more than around 30-50 relays total. Often less.

The 300-relay number gets thrown around a lot; however, I suspect relays being offline/shut down, rate limiting, now requiring payment, IP blocking, or whatever else (without stateful retry publishing on failure) means that number doesn’t match the common outcome.

I think relay payment support is generally developing alongside, but we have limitations in client UI, subscription payments, being Lightning-only, etc. I don’t think we even have solid open-source relay payment gateways that track a pubkey’s days until expiry or whatever.

In general, or for something you’re working on?

Because I aggregate relays, I can usually detect the newest events for a kind. I can also do better aggregation of things like reactions and reply counts. Spam detection is easier because I see things quicker and at a larger scale. Lots of benefits, but certainly more headaches, as I’m at least 10X most relays’ inbound processing today. Lots of duplicates, though... but it all adds up.

It’s all still best effort and never 100% accurate or current. You’ll never have full visibility.

I did consider paid API services as an option. I have them today for free, with rate limiting.

https://api.nostrgraph.net/beta/identities/b2dd40097e4d04b1a56fb3b65fc1d1aaf2929ad30fd842c74d68b9908744495b.json?pretty=true

https://api.nostrgraph.net/beta/events/2f6d28b773bf5f5c0b8f29e30ab5cca2ccd17cc5d771cb3530b067e0b372b5d3.json?pretty=true
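A tiny client for those beta endpoints could be as simple as this — the URL shapes are taken from the links above; the function names and the assumption that the beta paths stay stable are mine:

```javascript
// Minimal helpers for the nostrgraph beta API (free, rate-limited).

const API_BASE = 'https://api.nostrgraph.net/beta';

function identityUrl(pubkey) {
  return `${API_BASE}/identities/${pubkey}.json?pretty=true`;
}

function eventUrl(eventId) {
  return `${API_BASE}/events/${eventId}.json?pretty=true`;
}

// Usage (needs network access):
// const identity = await fetch(identityUrl(myPubkeyHex)).then((r) => r.json());
```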

I very rarely see events older than a month in my timeline having been reacted to, commented on, etc.

Most relays could store a rolling 1-3 months of data and be very useful. Keep the metadata kinds 0/3/10002 hot, plus a month of data.
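That rolling-retention policy fits in a few lines — the 90-day window and the function name here are illustrative, not from any relay implementation:

```javascript
// Sketch of rolling retention: keep metadata kinds (0/3/10002) indefinitely,
// and everything else only within a recency window.

const HOT_KINDS = new Set([0, 3, 10002]); // profiles, contacts, relay lists
const RETENTION_SECONDS = 90 * 24 * 3600; // ~3 months, tune per relay

function shouldRetain(event, nowUnixSeconds) {
  if (HOT_KINDS.has(event.kind)) return true; // metadata always stays hot
  return nowUnixSeconds - event.created_at < RETENTION_SECONDS;
}
```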

The concern however is a similar problem to broken http links on legacy internet.

I think the archive relay is certainly the direction we’re headed. We don’t need to keep 2-year-old data indexed as well as recent data. Or we can purge older revisions or replaced events (if stored).

An alternative to the archive relay is paid relays that persist a pubkey’s/member’s data for longer periods or forever. I think this will be common too.

All relays storing all data... that will die within the next couple of Nostr 10X jumps.