The issue in this case is that the target problem isn't directly flood spam. It's largely repetitive or generic generated events that aren't from a direct human source - things like metadata events or piggyback events that don't provide value to most users.
A significant issue I realised recently with POW pubkeys: when you use HD identities derived from a seed key - even if your initial seed or first derived key has high POW, any children derived using incremental counters won't. Their POW will be effectively random.. most likely 0.
This means POW for HD identities is likely incompatible with many of the simpler key management and rotation approaches.
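For context, here's a minimal sketch of what POW means for a pubkey in this sense, assuming NIP-13 style difficulty counted as leading zero bits of the 32-byte key; the keys below are placeholders, not real derived keys:

```rust
// Count leading zero bits of a 32-byte pubkey (NIP-13 style difficulty).
fn leading_zero_bits(bytes: &[u8; 32]) -> u32 {
    let mut count = 0;
    for b in bytes {
        if *b == 0 {
            count += 8;
        } else {
            count += b.leading_zeros();
            break;
        }
    }
    count
}

fn main() {
    // Placeholder for a mined seed/parent key with ~10 leading zero bits.
    let mut mined_parent = [0xffu8; 32];
    mined_parent[0] = 0x00;
    mined_parent[1] = 0x3f;

    // Placeholder for a child key derived with an incremental counter:
    // derivation re-randomises the pubkey, so its difficulty is ~0 on average.
    let derived_child = [0xa7u8; 32];

    println!("parent difficulty: {}", leading_zero_bits(&mined_parent)); // 10
    println!("child difficulty:  {}", leading_zero_bits(&derived_child)); // 0
}
```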
I think it's something that needs to be addressed from both sides - publisher and client filtering. Perhaps a kind 0 flag? I'm unsure about the best bottom-up approaches.
I think "bot" is perhaps too specific a term. Is something that mirrors my Twitter posts to Nostr a bot as well (extremely hypothetical.. I don't dead bird)? Not really.. but it is automated or assisted. Does that matter? Not really - it's mimicking a human. Does a bot reply to DMs? Etc.
Verification events read like bot output, but ideally happen only once per account/service combo. They're an example of a 'service-like' event/message - in contrast to a dedicated bot/machine account.
My interest in identifying machine posts is to remove them from certain summarisation data - like hashtag, word, or URL frequencies. I want to represent human contributions instead of the most frequent repetitive bot terms.
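As a rough illustration, here's a minimal sketch of a hashtag-frequency pass that skips pubkeys already classified as machine/service accounts; the Event struct and the classification set are assumptions for illustration, not an existing API:

```rust
use std::collections::{HashMap, HashSet};

struct Event {
    pubkey: String,
    tags: Vec<Vec<String>>, // e.g. ["t", "bitcoin"] hashtag tags
}

// Count hashtag frequencies, excluding events from machine/service pubkeys.
fn hashtag_frequencies(events: &[Event], machine_pubkeys: &HashSet<String>) -> HashMap<String, u64> {
    let mut counts = HashMap::new();
    for event in events {
        if machine_pubkeys.contains(&event.pubkey) {
            continue; // exclude machine/service posts from the summary
        }
        for tag in &event.tags {
            if tag.first().map(String::as_str) == Some("t") {
                if let Some(topic) = tag.get(1) {
                    *counts.entry(topic.to_lowercase()).or_insert(0) += 1;
                }
            }
        }
    }
    counts
}

fn main() {
    let machine: HashSet<String> = HashSet::from(["npub_mirror_bot".to_string()]);
    let events = vec![
        Event { pubkey: "npub_alice".into(), tags: vec![vec!["t".into(), "nostr".into()]] },
        Event { pubkey: "npub_mirror_bot".into(), tags: vec![vec!["t".into(), "nostr".into()]] },
    ];
    println!("{:?}", hashtag_frequencies(&events, &machine)); // {"nostr": 1}
}
```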
I think we need a better name too. Bots, automated, machine… maybe hybrid accounts exist too… naming is hard.
Maybe "service" accounts? Maybe there are even a few standard types that could be chosen from?
Awesome. Some really interesting client side ideas in the demo.
Great example of a value-add Nostr service. Building more value over time helps you stay competitive and potentially offer additional premium services that can help cover costs and build a sustainable, growing business. nostr:note1k4u80y5jjm07f4jqkuxafjz6yd7l8l42y63v8u0vp4ffdc24a68qwlz7ln
And I’ve even considered how Proof of Work report events could help. Game theory says they can’t because the market dynamics don’t match incentives.
For example, when a bad event costs 1 unit, the report has to cost at least 1 unit too. But many reports are made, like votes, to try to surface moderation sooner - or simply because reporters lack visibility that it has already been reported (bifurcated network visibility).
That means for each 1 bad event, you have a 1+N cost to report it. Spam wins, as the cost is equal at best, and more likely it's more expensive to report than to create.
And even if you think that can work, who is covering the cost of the reporters? I was processing 15 million spam events per day until recently. Near-zero cost to create, and a non-zero, greater cost to counteract = clear loser.
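To make the asymmetry concrete, a toy cost model - only the 15 million/day figure comes from above, the rest of the numbers are illustrative assumptions:

```rust
// Toy model of the report-cost asymmetry; all values are illustrative.
fn main() {
    let bad_events_per_day: f64 = 15_000_000.0; // spam volume mentioned above
    let creation_cost: f64 = 1.0;               // 1 unit of PoW per bad event
    let report_cost: f64 = 1.0;                 // a report must cost at least as much
    let reports_per_bad_event: f64 = 3.0;       // N duplicate reports due to bifurcated visibility

    let attacker_cost = bad_events_per_day * creation_cost;
    let defender_cost = bad_events_per_day * reports_per_bad_event * report_cost;

    println!("attacker pays {attacker_cost} units/day");
    println!("defenders pay {defender_cost} units/day"); // always >= attacker, so spam wins
}
```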
Ha. I’ve kind of wanted to learn Scala… but it’s just the time investment for basically the same outcomes I can get today without it.
Clojure is great once you get Lisp. There seems to be a barrier to human cognition where the syntax hurts adoption. The code-as-data and pure functional programming aspects are incredible.
Spam typically takes two forms: annoyance, and scams to extract money from idiots or the unfortunate. I'm less concerned about those in general.
I am far more concerned about targeted and malicious coordinated attacks. General spam isn't dangerous - however, fake hostage phone calls with your daughter's voice are. And governments and/or malicious actors can 100%, no question, abuse decentralised moderation. And even people who don't like you - why? Because computers can report faster than humans. Moore's law makes sure of that.
I haven't read your latest proposal in full, so I'm commenting in general around approaches and trade-offs.
It sounds like you're instead after centralised moderation or curation. Both are asking someone else to make decisions about what you do and don't see. Again, I don't see that requiring any protocol-level changes… you can do it all in a layer-2 moderation layer above. The important part is that it's opt-in and transparent.
My reply earlier to another of your replies states how this is easily possible.
For 1, based on what you've previously expressed - that the intent is to manage illegal content - technically all content on your relay needs to be audited/moderated. Moderation doesn't mean taking action - just that a layer of decision making sits above the content. If instead you're seeking a content flagging mechanism to help you highlight content as higher priority for moderation review, that is a different problem and use case. You don't need Nostr to do that - you can just use an HTTP request to your server. In that case you are seeking centralised reporting (specific to your relay). Other relays don't share your relay's local laws or concerns.
Individual relays can do whatever they want. You could have a whitelist for who is able to post to channels, or even an approval queue before the relay will show an event in queries. Ideally such behaviour is defined in a NIP and the relay advertises that NIP as supported.
You can build a moderated, Reddit-style subreddit environment for a relay or even a group of relays. You can shadow ban identities and events. You can hide posts or comments. You can limit publishing events to certain channels or groups only - or whitelist. You can even delete events from the relay - even without a delete event. Or you can use a delete event signed by a whitelisted pubkey that's allowed to delete other pubkeys' content from your relay - basically an invalid event from other relays' perspective, so they can reject it.
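As a minimal sketch of what that relay-side policy could look like, assuming a hypothetical relay that calls a single accept_event hook before storing or serving an event (the names and the kind 42 channel rule are illustrative, not from any particular relay implementation):

```rust
use std::collections::HashSet;

struct Event {
    pubkey: String,
    kind: u32,
}

struct Policy {
    whitelisted_publishers: HashSet<String>, // only these pubkeys may post to channels
    shadow_banned: HashSet<String>,          // stored, but never returned in queries
}

enum Decision {
    Accept,
    AcceptHidden, // shadow ban: keep the event, exclude it from results
    Reject,
}

fn accept_event(policy: &Policy, event: &Event) -> Decision {
    if policy.shadow_banned.contains(&event.pubkey) {
        return Decision::AcceptHidden;
    }
    // Example rule: channel messages (kind 42, NIP-28) only from whitelisted pubkeys.
    if event.kind == 42 && !policy.whitelisted_publishers.contains(&event.pubkey) {
        return Decision::Reject;
    }
    Decision::Accept
}

fn main() {
    let policy = Policy {
        whitelisted_publishers: HashSet::from(["npub_alice".to_string()]),
        shadow_banned: HashSet::new(),
    };
    let event = Event { pubkey: "npub_bob".into(), kind: 42 };
    match accept_event(&policy, &event) {
        Decision::Reject => println!("rejected: not whitelisted for this channel"),
        Decision::AcceptHidden => println!("stored but hidden"),
        Decision::Accept => println!("accepted"),
    }
}
```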
That's all centralised moderation. By all means, build a modern Reddit or Twitter - where people become power-hungry, over-moderate, censor what they dislike, and build an in-group culture that can't think for itself unless it's the same-think.
For 2, the answer is simple: join at least one relay in common. This isn't a problem specific to your use case - it's literally a Nostr architectural gap. Two disconnected networks are silos and cannot see each other, at least without some replication or rebroadcasting. Or, to communicate with someone directly, you look up their kind 10002 event and connect to their publish=true relays - either on demand, or by adding one or more of them so you share common relays. And to message them, you publish to their read=true relays. And if you're seeking to solve identity or content discovery - that's also an open development area and has nothing to do with moderation as a requirement.
The best part: if I don't like their #Bitcoin posts, I add a client app filter and hide them. Simple.
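For reference, here's a minimal sketch of the kind 10002 lookup mentioned above, assuming serde_json and the NIP-65 "r" tag layout (publish=true/read=true roughly map to the write/read markers); the sample event is a placeholder:

```rust
use serde_json::Value;

// Extract write/read relays from a kind 10002 (NIP-65) relay list event.
// You fetch someone's notes from their "write" relays and publish replies
// or messages to their "read" relays.
fn relays_from_kind_10002(event_json: &str) -> (Vec<String>, Vec<String>) {
    let event: Value = serde_json::from_str(event_json).unwrap_or(Value::Null);
    let mut write = Vec::new();
    let mut read = Vec::new();
    if let Some(tags) = event["tags"].as_array() {
        for tag in tags {
            let Some(t) = tag.as_array() else { continue };
            if t.first().and_then(Value::as_str) != Some("r") {
                continue;
            }
            let Some(url) = t.get(1).and_then(Value::as_str) else { continue };
            match t.get(2).and_then(Value::as_str) {
                Some("write") => write.push(url.to_string()),
                Some("read") => read.push(url.to_string()),
                _ => {
                    // No marker means the relay is used for both reading and writing.
                    write.push(url.to_string());
                    read.push(url.to_string());
                }
            }
        }
    }
    (write, read)
}

fn main() {
    // Placeholder kind 10002 event (unsigned, trimmed to the fields we use).
    let json = r#"{"kind":10002,"tags":[["r","wss://both.example.com"],["r","wss://write.example.com","write"]]}"#;
    let (write, read) = relays_from_kind_10002(json);
    println!("write relays: {write:?}");
    println!("read relays:  {read:?}");
}
```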
I suggested that as a joke in the past.
I don’t think the UX works - do you have to perform a CAPTCHA per relay you publish to for every three events? Relays would otherwise have to trust some shared CAPTCHA service session or result.
Plus CAPTCHA doesn't work well, and is slowly dying. It became an arms race of computer vs computer.
If you're interested in Nostr Paid Services, I've written my thoughts on approaches here - including:
1. On-demand and optionally authenticated
2. Pre-paid with authentication (and top up/extend mechanism)
3. Membership entitled and authenticated (with join mechanism and possible service feature caps or excess usage fees)
https://github.com/nostr-protocol/nips/issues/340
I'd like to collaborate to battle test these approaches, and see if we can define workflows that work for the paid services people are building, and want to build, on Nostr.
Feedback welcome and encouraged.
And don’t be discouraged by the GitHub issue being proof of work related.. it’s just a use case where we’re discussing possible payment flows.
Nostr paid services can apply to: translation APIs, paid event broadcast, premium relay membership (additional member features), media uploads, interactive bots, paid Nostr content, paying for store goods, whatever.
“Decentralised Moderation” is censorship, as you never know who is making decisions on your behalf.
In contrast, if a relay you join has moderation terms defined, and a list of public moderators (or even private if you don’t care) - that is completely fine. That’s opt-in transparent moderation.
If I join a moderated Nostr group/channel (when we have better moderation tools in future) - not a problem. You dislike the moderation, start your own Nostr group/channel.
And if I follow someone or follow a list or feed of content, that’s curation. Again, 100% fine for that data source and curator to pick what to include.
Sounds fine. Thankfully we don't need Nostr to be trust-based.
I don't trust relays. The beauty is that they are actually in competition. If they don't let you easily look up profiles, replies and threads, it feels like they aren't working at the UX level.
My main concern was instead the expectation that we should trust decentralised reports and/or moderation events - which is different.
As a decentralised network, Nostr has censorship resistance properties - however, it has a single major fault. Because all content is hash-addressable (an event id or pubkey), suppressing the exact same content or identity only requires targeting that single hash and broadcasting bogus reports to all relays.
In effect it's cheaper moderation than today, where the same content is not linked across platforms with hard references. And we already know US officials email Twitter to suppress tweets without hesitation - it's not just theoretical.
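To illustrate how cheap that targeting is, here's a sketch of an unsigned NIP-56 report (kind 1984) skeleton - the ids are placeholders, and the point is just that the e tag pins the exact content hash, so the same report can be blasted to every relay:

```rust
use serde_json::json;

fn main() {
    // Placeholder hashes; on Nostr these uniquely identify the content everywhere.
    let reported_event_id = "<64-hex-char event id>";
    let reported_author = "<64-hex-char pubkey>";

    // Unsigned, incomplete NIP-56 report skeleton (no id, sig, created_at, pubkey).
    let report = json!({
        "kind": 1984,
        "tags": [
            ["e", reported_event_id, "spam"],
            ["p", reported_author]
        ],
        "content": ""
    });

    // An attacker signs variants of this with throwaway keys and broadcasts
    // them to every relay they can reach.
    println!("{report}");
}
```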
Market incentives exist to build both general and targeted bot identity farms that slowly get followed by interacting with you, and that post targeted content to your Nth-degree network too. At some point your immediate network becomes 5-15% bots. Your 2nd-degree network becomes not 5-10 people, but 50-100+ imposter bots. They can now control what you see via trust thresholds - and it's hard to detect or even notice it's happening to begin with.
Now, let's say I run a Nostr web store selling digital goods. Imagine if my shop competitors could buy or lease a bot identity farm targeting my current or future customers. They could abuse reports, create fake product reviews, and so forth.. my competition now steals my business away. It could be really subtle too.. barely noticeable at first.
The state has unlimited money. I can already generate 22,000 identities and events per second on my laptop. Twitter, Facebook and LinkedIn use KYC and still have a major bot problem - ironically KYC acted like a poor man's CAPTCHA and kept virtual identities from exploding. It still doesn't work, however: go to India or any country with a 1B+ population and piggyback off their SIM card 'mobile number as KYC', and you can still create mass virtual identities. Being an Indian SIM doesn't mean your profile has to be Indian, or that anyone else would even know... It's a failure of mobile KYC as proof of identity - but it also shows how trusting even centralised identity is extremely problematic.
You also can't ban VPNs from publishing events, as people need them for protection. That means you can't rate-limit IP addresses. How do you propose, at this scale and volume, to make any decisions from the data - when it can all be spoofed, weaponised for targeted censorship, and so forth?
Let me know if these projects can help you with real-world outcomes. I'm mostly slowly building for myself and the ecosystem - but the reality is we don't have that many Nostr Rust devs.. so often it feels like there's a low chance of other people using this stuff atm.
I'd love to build on top of these projects and end up with more fully functional projects people can use and run themselves. Like even a nice way for paid relays to avoid relying on third parties they don't want to.
There is a full draft NIP written up as well.
https://github.com/blakejakopovic/nostr_pow_service/blob/master/NIP-XX.md
There is relay AUTH (NIP-42) as well, so basically my PoW service example only lets you request PoW generation for events with your own pubkey. That means someone else can't use up all your credits by pretending to create events from you, and it means you can deduct credits, or maintain a fee balance that's drawn down, for that pubkey.
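Here's a minimal sketch of that rule, assuming a hypothetical service-side session populated after NIP-42 AUTH; the names are illustrative and not taken from the linked draft:

```rust
use std::collections::HashMap;

struct Session {
    authenticated_pubkey: Option<String>, // set after a valid NIP-42 AUTH
}

struct CreditLedger {
    credits: HashMap<String, u64>,
}

// Only accept PoW-mining requests for events authored by the authenticated
// pubkey, then deduct that pubkey's credits.
fn handle_pow_request(
    session: &Session,
    ledger: &mut CreditLedger,
    event_pubkey: &str,
    fee: u64,
) -> Result<(), &'static str> {
    let auth = session
        .authenticated_pubkey
        .as_deref()
        .ok_or("NIP-42 AUTH required")?;
    if auth != event_pubkey {
        // Prevents someone spending another pubkey's credits by submitting
        // events that claim to be from them.
        return Err("event pubkey does not match authenticated pubkey");
    }
    let balance = ledger.credits.entry(auth.to_string()).or_insert(0);
    if *balance < fee {
        return Err("insufficient credits");
    }
    *balance -= fee;
    Ok(())
}

fn main() {
    let session = Session { authenticated_pubkey: Some("npub_alice".to_string()) };
    let mut ledger = CreditLedger { credits: HashMap::from([("npub_alice".to_string(), 10)]) };

    // Alice mines PoW for her own event: ok. Mining for Bob's event: rejected.
    println!("{:?}", handle_pow_request(&session, &mut ledger, "npub_alice", 3)); // Ok(())
    println!("{:?}", handle_pow_request(&session, &mut ledger, "npub_bob", 3));   // Err(...)
}
```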
These projects are all related and can work together. I don't want to tell people how to charge or do accounting - even if they wanted to accept fiat. I'm focused on Bitcoin and Lightning, however, and on bringing the tools we have up to scratch to empower providers.
nostr:note1u0pxa5egj4258f84hp75jl65d9ldh3a63l87du7gnk7cs00nkxzqtpltmv