Introducing https://nokyctranslate.com! The future of AI-powered translations for Nostr is here! Pay with #Bitcoin/#Lightning, no KYC needed. Get your API key, add funds (0.1 sat/char), and start translating in your Nostr app. Works with any #LibreTranslate-supported app, like Damus! #NoKYCTranslate #NostrTranslations
Are you anonymous, or can we know who runs it?
I've updated the code and it's now hosted at https://nostr-delete.vercel.app.
The latest code can load your extension relays and kind 10002 relays for you.
The old URL will redirect.
https://nostr-delete.vercel.app
Hosted. I'm less up to date with PaaS providers, but I did sign up to Vercel last week.
I did get the static site output working as well. GitHub updated.
CC @fiatjaf
To generate static files (SSG) with SvelteKit instead of SSR you can use the static adapter. https://github.com/sveltejs/kit/tree/master/packages/adapter-static
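For anyone following along, a minimal svelte.config.js using the static adapter might look like the sketch below. The pages/assets paths are just the adapter's defaults, and depending on your SvelteKit version you may also need `export const prerender = true;` in the root layout so every route gets prerendered.

```js
// svelte.config.js — sketch, assuming @sveltejs/adapter-static is installed
import adapter from '@sveltejs/adapter-static';

export default {
  kit: {
    adapter: adapter({
      pages: 'build',   // output directory for prerendered pages
      assets: 'build',  // output directory for static assets
      fallback: null,   // no SPA fallback: fully static output
    }),
  },
};
```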
Thanks. I just got this working.
Do you know if there is a way to compile all the JavaScript output into a single file?
Svelte doesn't have anything to do with SSR. Maybe you used sveltekit or something weirder I don't understand. nostr:npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn may know.
You say you're using kind 10002, but I only see a static relay list, was that supposed to be updated?


It seems new Svelte projects use the same new-project commands. I was fairly sure I selected Svelte (without Kit) during creation, because I'd heard it isn't amazing.
Basically, the GitHub project is the latest code, and the other delete.html isn't updated, as I haven't worked out how to compile it statically without a web server yet. The second image is the GitHub version, with kind 10002 and extension relay loading options.
It looks like I can maybe use this adapter to output static site files? My previous attempts failed to find a way to do it.
The static URL is below.
The issue I had when I wrote it in plain JavaScript was that I needed a better UI template engine. I then ported the code to Svelte to finish the relay import from the extension and the kind 10002 query across relays.
I then found out that Svelte is a real pain for exporting a single self-contained file (they love server-side rendering, apparently). Maybe someone can assist?
The GitHub repo version can be run locally and has those two features included. I don't have a hosted version of the Svelte app at present.
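The kind 10002 relay import works off NIP-65 relay-list events. A minimal sketch of the parsing side (the helper name is my own; the tag shape is per NIP-65, where a missing read/write marker means both):

```javascript
// Sketch: pull relay URLs out of a kind 10002 (NIP-65) relay-list event.
function relaysFromKind10002(event) {
  if (event.kind !== 10002) return [];
  return event.tags
    .filter((t) => t[0] === 'r' && typeof t[1] === 'string')
    .map(([, url, marker]) => ({
      url,
      read: marker !== 'write',  // no marker = both read and write
      write: marker !== 'read',
    }));
}
```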
The logic at present is to rely on a bootstrap list, your browser extension relay data, and kind 10002 plus any custom relays you wish to add, and broadcast the deletion to them. It doesn't connect to relays first and ask which relays have that event; it just sends the deletion proactively to all loaded/selected relays.
If someone wanted to blast deletions out to hundreds of relays blindly, they can edit the bootstrap relay list in the code and it should work fine.
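The proactive broadcast described above can be sketched like this: build a NIP-09 kind 5 deletion event and send it to every loaded/selected relay. The function names are illustrative, and signing is left to a NIP-07 extension.

```javascript
// Build an unsigned NIP-09 deletion event referencing the target event IDs.
function buildDeletion(pubkey, eventIds, reason = '') {
  return {
    kind: 5,
    pubkey,
    created_at: Math.floor(Date.now() / 1000),
    tags: eventIds.map((id) => ['e', id]), // one "e" tag per event to delete
    content: reason,
  };
}

// Fire-and-forget publish to every relay; no read-back, no retry.
function broadcast(relayUrls, signedEvent) {
  for (const url of relayUrls) {
    const ws = new WebSocket(url);
    ws.onopen = () => {
      ws.send(JSON.stringify(['EVENT', signedEvent])); // standard relay publish frame
      ws.close();
    };
  }
}
```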
One limitation with the browser extensions is they can be very noisy, too. I made buttons for what could have been loaded (mostly) automatically because, at least with Nostore, it asks to read the pubkey, then asks to read relays, and then asks to sign the event. Lots of approval popups.
Everything can definitely be tweaked or improved. Things like private relay lists will need support.
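The popup-per-call behaviour comes from the NIP-07 flow: each extension call can trigger its own approval prompt. A sketch of the loading sequence (`ext` stands in for `window.nostr`; `loadFromExtension` is my own name):

```javascript
// Each NIP-07 call below may pop its own approval dialog in the extension.
async function loadFromExtension(ext) {
  const pubkey = await ext.getPublicKey(); // popup 1: read pubkey
  const relays = await ext.getRelays();    // popup 2: read relays, shape { url: { read, write } }
  return { pubkey, relays: Object.keys(relays) };
  // popup 3 would come later, when ext.signEvent(event) is called
}
```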
The hourly metrics are even cooler.
I've been working to productise this for a while now. I have enough to put something together and make it public. It's still a chunk of work.
I thought of tiered access: things like past 7 days for basic, through to past 12 months hourly metrics for pro.
Businesses that migrate to or use Nostr are going to want stuff like this too. They are my ideal clients as I scale it out and Nostr grows.

I haven't had time to look in depth (not sure it's released yet?), and I don't have a full understanding of his roadmap, but it's all about making Nostr easier.
My only issue is I need to build for scalability, and JavaScript isn't right for my needs at present. I certainly need JavaScript libs and tooling, but for different goals.
Yep. That's an image from my dashboards, which are backed by my aggregator and relay.
The key issue is Nostr has around 15MM non-spam events. Indexes are getting big. Tables should likely start to be partitioned on kind/time, with partial indexes for recent data and perhaps limited indexing for historic data. My DB is near 90GB already.
However, I've started to build support for what are effectively roll-up statistics, which can be collected in near-real-time and basically give hourly/daily visibility per pubkey.
Basically, it's too slow unless you either pre-warm the cache or create roll-up metrics, which is what I've got working now. I've seen a 400X query improvement in places. That's early data, as the tables aren't filled with lots of different pubkeys' data yet, but it's significant anyway.
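The roll-up idea, in toy form: bucket raw events per pubkey per hour so dashboards read a handful of small pre-aggregated rows instead of scanning the raw event table. This is only a sketch of the technique, not my actual pipeline.

```javascript
// Aggregate events into per-pubkey hourly counts.
// Key: "<pubkey>:<hourStartUnixSeconds>" -> event count for that hour.
function rollupHourly(events) {
  const buckets = new Map();
  for (const ev of events) {
    const hourStart = Math.floor(ev.created_at / 3600) * 3600;
    const key = `${ev.pubkey}:${hourStart}`;
    buckets.set(key, (buckets.get(key) || 0) + 1);
  }
  return buckets;
}
```

A real version would persist these buckets incrementally as events stream in, which is what turns the expensive "scan everything for this pubkey" query into a cheap indexed lookup.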
Oi! Stop drumming up competition!
No one imagine anything until I'm ready.
If you have some time and like Damus (or other Nostr apps), why not give it an App Store review?
Here was mine: ⭐⭐⭐⭐⭐
"Damus is an evolutionary step above legacy social media apps. Your content is your content. You can leave at any time. You can join any group or conversation without fear of holding an unpopular opinion or being shadow banned. It's early, but day by day the helpful community is bringing a new richness to the world."
As a workaround, try this: https://metadata.nostr.com. It only supports the newer format, however.
One challenge is we have an old standard for relay lists and a new one. Some apps are using the old one or accidentally delete the other's data.
If you can, try to stick to a single app for a little while and it should sort itself out.
I've started to design a universal Nostr query and publishing engine/library. Only brainstormed notes and an ASCII diagram so far; however, it's a complex problem and we need something.
One aspect I envision will be like SQL query engines/planners, which have cost-optimisation functions and estimations.
Things like kind 10002, nip05, relay latency/health, relay hints, blacklists, paid query/relays, rate limiting, etc.
The gossip approach is great and it works; however, it's more of a "read my group" approach. I don't think it has any documented publishing rules/logic, like how best to publish a DM or reaction, etc. I'm focused on going from your network to the entire network, ideally.
I also want it to have common queries abstracted, like get notifications, get DMs, get event details/stats, etc. We likely only have 40-50 queries used commonly in apps today.
I could fail, but I'll share as I put parts or ideas together. Diagramming is the next major step. However, I can already see how queues, channels, and async Rust can build a really awesome library that abstracts away as much of the complexity as possible. Maybe WebAssembly support.
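To make the planner analogy concrete, here is a hypothetical cost function in that spirit: score candidate relays on latency, health, and payment requirements, then pick the cheapest N to query. Every weight here is made up purely for illustration.

```javascript
// Toy relay "query planner": lower cost = more attractive relay.
function rankRelays(relays, n) {
  const cost = (r) =>
    r.latencyMs +              // measured round-trip latency
    (r.healthy ? 0 : 1000) +   // heavy penalty for flaky/offline relays
    (r.paid ? 200 : 0);        // mild penalty for relays requiring payment
  return [...relays]
    .sort((a, b) => cost(a) - cost(b))
    .slice(0, n)
    .map((r) => r.url);
}
```

A real engine would estimate these inputs continuously (latency probes, kind 10002 hints, NIP-05 lookups, blacklists) rather than take them as static fields.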
Ha. Thanks. I posted a problem, not a solution.
My only known issue with notifications is that we have no way to track the cursor of what has been read. Each app always shows the same events as notifications.
I think we need a way to address that.
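One hypothetical shape for a fix: persist a last-read timestamp per user (in app storage, or perhaps a replaceable event clients agree on) and filter notifications against it. Nothing below is an existing Nostr standard; it's just a sketch of the cursor idea.

```javascript
// Only surface notifications newer than the stored cursor.
function unreadNotifications(events, lastReadAt) {
  return events.filter((ev) => ev.created_at > lastReadAt);
}

// Advance the cursor to the newest event seen; never move it backwards.
function advanceCursor(events, lastReadAt) {
  return events.reduce((max, ev) => Math.max(max, ev.created_at), lastReadAt);
}
```

The hard part isn't the filtering, it's where the cursor lives so every app the user opens sees the same one.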
I don't have more than lots of observation; however, I don't think blastr reliably hits more than around 30-50 relays total. Often less.
The 300 number gets thrown around a lot; however, I suspect that relays being offline/shut down, rate limiting, now requiring payment, IP blocking, or whatever else (without stateful retry publishing on failure) means the common outcome doesn't match it.
I think relay payment support is generally developing alongside, but we have limitations in client UI, subscription payments, being Lightning-only, etc. I don't think we even have solid open-source relay payment gateways that track a pubkey's days until expiry or whatever.
In general, or for something you're working on?
Because I aggregate relays I can usually detect the newest events for a kind, and I can also do better aggregation of things like reactions and reply counts. Spam detection is easier because I see things quicker and at a larger scale. Lots of benefits, but certainly more headaches, as I'm handling at least 10X most relays' inbound processing today. Lots of duplicates, though, but it all adds up.
It's all still best effort and never 100% accurate or current. You'll never have full visibility.
I did consider maybe paid API services as an option. I have them today for free with rate limiting.
I very rarely see events older than a month in my timeline having been reacted to, commented on, etc.
Most relays could store a rolling 1-3 months of data and be very useful. Keep the metadata kinds 0/3/10002 hot, and then a month of data.
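That retention policy is simple to express. A sketch, with the kind list and 30-day window taken from the suggestion above (both are tunable, not a standard):

```javascript
// Metadata kinds worth keeping indefinitely: profile (0), contacts (3),
// relay list (10002).
const HOT_KINDS = new Set([0, 3, 10002]);

// Decide whether a stored event survives a pruning pass.
function shouldKeep(event, nowSec, retentionDays = 30) {
  if (HOT_KINDS.has(event.kind)) return true; // metadata stays hot forever
  return nowSec - event.created_at <= retentionDays * 86400;
}
```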
The concern, however, is a problem similar to broken HTTP links on the legacy internet.
I think the archive relay is certainly the direction we're headed. We don't need to keep 2-year-old data indexed as well as recent data. Or we can purge older revisions or replaced events (if stored).
An alternative to the archive relay is paid relays that persist a pubkey's/member's data for longer periods or forever. I think this will be common too.
All relays storing all data... that will die within the next couple of Nostr 10X jumps.

