Yes, I'm fine with that. I just don't know if you realized that it's hard for an organization to use it, because they'll tend to have long lists with more than one dimension, which they don't want publicized.
this is a client problem not a relay problem
quite simply if that allow list is only published to the relay then it's no problem anymore
it's a client problem
i appreciate you doing the thinking on this, i'm just making sure you keep your thinking bounded by which problem belongs in which domain
i totally agree about the issue of lists being implemented as CRDTs but while it is not difficult for *me* to implement a relay that uses them to do things, the cryptographic signature verification load on clients is very high, this is why giant lists have remained the norm
they have to pipeline the processing of these events and cache them so they don't ask for them again, and they have to address the matter of making their caches more robust, and possibly they even need a way to transport them between clients
so it's a lot of stuff to build, i know easily just by thinking about it what is required to make it viable but we need front end devs
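A minimal sketch of that pipeline-and-cache idea in Go, assuming a hypothetical Event type and a stubbed verifySig; a real client would substitute its nostr library's Schnorr verification:

```go
package main

import (
	"fmt"
	"sync"
)

// Event stands in for a client's nostr event type; the field names are
// illustrative, not from any particular library.
type Event struct {
	ID  string
	Sig string
}

// verifySig is a stub for real Schnorr signature verification, the
// expensive step this cache amortizes.
func verifySig(ev *Event) bool { return len(ev.Sig) > 0 }

// VerifiedCache remembers which event IDs have already been checked, so
// a client never verifies (or re-requests) the same list event twice.
type VerifiedCache struct {
	mu   sync.RWMutex
	seen map[string]bool // event ID -> verification result
}

func NewVerifiedCache() *VerifiedCache {
	return &VerifiedCache{seen: make(map[string]bool)}
}

// Check verifies an event's signature at most once per event ID.
func (c *VerifiedCache) Check(ev *Event) bool {
	c.mu.RLock()
	ok, hit := c.seen[ev.ID]
	c.mu.RUnlock()
	if hit {
		return ok
	}
	ok = verifySig(ev)
	c.mu.Lock()
	c.seen[ev.ID] = ok
	c.mu.Unlock()
	return ok
}

func main() {
	c := NewVerifiedCache()
	ev := &Event{ID: "abc123", Sig: "sig"}
	fmt.Println(c.Check(ev)) // verifies once
	fmt.Println(c.Check(ev)) // cache hit, no second verification
}
```

The point is that the expensive signature check runs at most once per event ID, which is what makes giant lists tolerable on the client side.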
i might say that in funding priorities for gitcitadel we may need to consider that the sooner we can get a typescript ninja on the payroll the better
Why do I need a cryptographic signature on a list of pub hex IDs that I only use within one server landscape? If anything, I need to encrypt and obfuscate that list, in case it leaks, and put it behind an API that just tells the requesting client if the current npub is on the list, or not, so that there's only one list.
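A minimal sketch of that one-question API in Go; the /member endpoint and the in-memory set are illustrative, not an existing realy interface:

```go
package main

import (
	"encoding/json"
	"net/http"
)

// allowed holds the subscriber pub hex IDs, loaded from storage at
// startup; it is never serialized out in any response.
var allowed = map[string]bool{
	// "deadbeef...": true,
}

// handleMember answers only yes or no for a single pubkey, so a client
// can check the current npub without ever seeing the list itself.
func handleMember(w http.ResponseWriter, r *http.Request) {
	pub := r.URL.Query().Get("pubkey")
	json.NewEncoder(w).Encode(map[string]bool{"member": allowed[pub]})
}

func main() {
	http.HandleFunc("/member", handleMember)
	http.ListenAndServe(":8080", nil)
}
```

Only the boolean ever crosses the wire, so there is exactly one copy of the list and it stays on the server.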
Customer lists are high-prio security, IMO. Especially if we can link pub IDs to Lightning wallets over payments, and IP addresses to pub IDs. I'm thinking subscriber lists should be something more like Fort Knox, less like a public relay.
nostr:npub10npj3gydmv40m70ehemmal6vsdyfl7tewgvz043g54p0x23y0s8qzztl5h have you given this any thought?
I don't need all 122 of you, one will suffice.
lol, 122
yes, nip-86 is some kind of an attempt to make a spec for relay management, and he did a lot of work making that, and implemented it
but this problem is different, like you say, the issue is authentication, and the data itself needs to be atomic and yuge
i've just given it some thought and quite simply, as regards realy, you can just deploy it without an owner designated, and create a private control system for access control
but it's a big job, not gonna lie, i made realy's policy system based on what i could cheaply acquire for a UI without actually building the UI
it works, to stop spam, that's what it was for
making relay access control is a separate matter, in the private arena, and i'm just gonna say that um... what is the priority on this action-item exactly, do you have serious customers lining up to deploy systems like this or is this just a hypothetical?
We'd be the customer. We'd make Realy a public-facing mirror to theforest.nostr1.com and theforest.nfdb.com (or whatever domain Semi bought). And could make another mirror for the git server relay.
We already have strfry, you see. We specifically wanted something very different. We're going to have fallback plans. Taking us down should require hitting multiple different relays on multiple different machines in multiple different countries, manned by multiple different admins, as our potential adversary is the nation state. We're paranoid as heck.
Our current plan is to just hardwire it to the uploader and then use theforest AUTH to sign into the uploading web page, but we could then AUTH to Realy, directly, and then our subscribers could access services over HTTP and whatever else nifty you build.
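For reference, the NIP-42 AUTH handshake boils down to the client signing a kind 22242 event that echoes the relay's challenge; a sketch in Go, with ID computation and signing omitted, and the relay URL just illustrative:

```go
package main

import (
	"fmt"
	"time"
)

// Event is a pared-down nostr event; ID and Sig would be computed and
// attached by the client's signing code, omitted here.
type Event struct {
	Kind      int        `json:"kind"`
	CreatedAt int64      `json:"created_at"`
	Tags      [][]string `json:"tags"`
	Content   string     `json:"content"`
}

// buildAuth constructs the NIP-42 response to a relay's AUTH challenge:
// a kind 22242 event tagging the relay URL and the challenge string.
func buildAuth(relayURL, challenge string) Event {
	return Event{
		Kind:      22242,
		CreatedAt: time.Now().Unix(),
		Tags: [][]string{
			{"relay", relayURL},
			{"challenge", challenge},
		},
	}
}

func main() {
	ev := buildAuth("wss://theforest.nostr1.com", "challenge-from-relay")
	fmt.Printf("%+v\n", ev)
}
```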
We don't need it, yet, tho. It'll be at least 6 months, before we release anything using it. We can all just ruminate on it, a bit. Our ideas tend to get better, over time.
well, i'll be ready, such a policy scheme needs some refactoring of architecture to be easy to add, but if we can muster a nice UI for it i can do a nice back end and protocol
It's an algorithmic and architectural conundrum. How do you keep customer data online and easily available to your servers, without it being stealable or leakable, and while handling a heavy load of requests?
yeah it's a deliciously juicy problem for me to ponder on
you need to just remember that we can avoid storing actual valuable data of our clients, for a start, no kyc, at least, not on the gear that is doing the service
then you have threat models
the network side, the cryptography is pretty solid, but we have issues that grow in complexity as the number of administrators increases, roughly with the square of the count: 1 admin, easy, 5 admins, that is 25x the risk level
the physical level, not sure how to address that one, nostr:npub12262qa4uhw7u8gdwlgmntqtv7aye8vdcmvszkqwgs0zchel6mz7s6cgrkj is probably more geared towards this vector of attack in his work with secure elements... my inclination would be to say we colocate and put strong squealers on our hardware
AI scraping means that a lot of data can be inferred from other data. Things happening near the same time stamp, from the same region, for example. And even anons behind a VPN or Tor want some privacy.
The admin issue is what makes it tough. If someone can get to it and they control the encryption key, then they can leak it.
So, you need to anonymize or obfuscate it and/or isolate it programmatically and/or physically, as well as encrypt it.
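One way to get the obfuscation layer: store only keyed hashes of the IDs, with the key held apart from the data. A sketch assuming HMAC-SHA256 is acceptable for this; the salt handling here is illustrative:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// pseudonymize maps a pub hex ID to a keyed hash, so a leaked copy of
// the stored list is meaningless without the salt, which lives apart
// from the data (e.g. on another machine or in a secure element).
func pseudonymize(pubkey string, salt []byte) string {
	mac := hmac.New(sha256.New, salt)
	mac.Write([]byte(pubkey))
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	salt := []byte("per-deployment secret, stored separately")
	// The server stores and compares only these digests, never raw IDs.
	fmt.Println(pseudonymize("deadbeef", salt))
}
```

A leaked table of digests is useless without the salt, and the server can still answer membership checks by hashing the incoming pubkey the same way.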
i know you are a bit excited about this but after you think it all through for a while, digest it, you will have some good insights
i think that nip-86 was not really quite the right direction to go with this issue, it really needs a bigger plan than just addressing the superficial requirements, because of the threats that could be involved
for sure, limiting risk can be easily achieved by only having one owner who has superseding rights and technical skill to detect and remedy problems
the more people you give administrative rights, the less powerful administrative rights have to be
Ideally, even a person who can leak the decrypted data wouldn't be able to make sense of that data.
Like, if you used multisig. 🤔
Or batch discovery. Hmm. The server holds the key, maybe, and three people hold the server key.
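The "three people hold the server key" idea in its simplest form is an XOR secret split, sketched below; for a threshold like 2-of-3 you would want Shamir's scheme instead:

```go
package main

import (
	"bytes"
	"crypto/rand"
	"fmt"
)

// split3 divides a key into three XOR shares: any two shares together
// reveal nothing about the key; all three are needed to reconstruct it.
func split3(key []byte) (a, b, c []byte) {
	a = make([]byte, len(key))
	b = make([]byte, len(key))
	c = make([]byte, len(key))
	rand.Read(a) // a and b are uniform random masks
	rand.Read(b)
	for i := range key {
		c[i] = key[i] ^ a[i] ^ b[i]
	}
	return
}

// combine XORs the shares back together to recover the key.
func combine(a, b, c []byte) []byte {
	key := make([]byte, len(a))
	for i := range key {
		key[i] = a[i] ^ b[i] ^ c[i]
	}
	return key
}

func main() {
	key := []byte("the server data-encryption key")
	a, b, c := split3(key)
	fmt.Println(bytes.Equal(combine(a, b, c), key)) // true
}
```

Any two shares are statistically independent of the key, so a single leaking admin reveals nothing.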
this is a painful rabbithole you are diving into there
we have to define our threat model, and you must not think outside of it, for the reasons you are experiencing
we have to trust our server's physical security, for example, or otherwise we have to have physical hardening on our servers, which is a great increase in cost
it can be mitigated by making encryption schemes that defeat physical breaches, but there are limits to how strong you can make this security, especially at scale, where cryptography gets astronomically expensive, the math is absurdly expensive compared to simple ordinary computations, the overflow handling and so forth involved tends to run in the dozens if not hundreds of cycles per operation
Yeah, that's why we want different machines. Then you move the threat more internal, which is easier to manage with permissions.
LOL you know how much I love this logic stuff.

That's probably why Primal built their server, but I don't want a server in between the user and their relays. I want smarter relays that can talk amongst themselves, on the server side.
That way, someone who prefers their own relay can switch ours out for their own, and everything in the client would still work. I love being able to use my own relay, and I want that for everyone.
well, it's just a matter of time and priority and funding
i spend half my day building a matchmaking system for a gaming social network that is intended to be a go-to funnel for attracting gamers to use it to feed data to gaming and anime studios
that's a lot of energy i can't put into this
