It worked fine for Tweetbot. My preference is definitely filtering client side, as the relay query results are processed.
JB55 has put a lot of effort into making events fetch and render fast. Adding filtering will incur some additional computation, and more filters will mean more of it, potentially non-linearly. We just need to build it and test - no point optimising before seeing how a minimal approach goes.
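As a rough illustration of what "filtering as the relay query results are processed" could look like, here's a minimal sketch. The rule list, event shape, and function names are all hypothetical, not Damus code - the point is just that filtered events are dropped before they reach the render path.

```python
import re

# Hypothetical client-side filter pass, applied as relay query results
# stream in. The patterns and event shape are illustrative only.
MUTE_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (r"free\s+sats", r"airdrop")]

def passes_filters(event: dict) -> bool:
    """Return True if the event should be shown in the timeline."""
    content = event.get("content", "")
    return not any(p.search(content) for p in MUTE_PATTERNS)

def on_relay_event(event: dict, timeline: list) -> None:
    # Drop filtered events before they ever reach the render path,
    # so rendering cost is only paid for events that will be shown.
    if passes_filters(event):
        timeline.append(event)
```

Keeping the check ahead of rendering is what keeps the minimal approach cheap: each event is scanned once, and rejected events cost nothing downstream.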
At some point I envision a smarter relay-query NIP being drafted that can support filters or rules. It's perhaps a bit early, while what we have largely works OK.
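To make the idea concrete: a NIP-01 `REQ` already carries a JSON filter, so a smarter-query NIP could extend that object. The `content_regex` key below is purely hypothetical - it is not part of any current NIP - shown alongside real NIP-01 fields:

```python
import json

# A NIP-01 REQ message with standard filter fields (kinds, limit),
# plus a hypothetical "content_regex" extension. The extension key is
# NOT part of any current NIP - it's a sketch of what a smarter
# relay-query NIP could allow relays to evaluate server side.
req = ["REQ", "sub-1", {
    "kinds": [1],     # text notes (NIP-01)
    "limit": 100,
    "content_regex": "^(?!.*airdrop).*$",  # hypothetical server-side rule
}]
message = json.dumps(req)
```

Relays that don't recognise the extra key could simply ignore it, which is how such an extension could stay backwards compatible.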
Forgot about these regex filters! Still a WIP, or in a PR yet? Can't wait to try them out - awesome work.
I forget who, but I was told someone had started to work on client filtering for Damus. I shared my concept and haven't followed up since - if we don't see much action, I may circle back and progress the mock-up toward a functional PR.
It’s mostly just SwiftUI and data models so far - it’s missing the query hook to filter data. I suspect it may need a couple of performance smarts, as pre-rendering is a performance-sensitive area. Filtering based on NIP-05, for example, requires the kind-0 metadata event for that pubkey - so you ideally don’t want to show the note, fetch kind 0, match on the filtered NIP-05, and then hide it.
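The kind-0 dependency above can be handled by gating the render decision on a cached metadata lookup, so a note never flashes and then disappears. A minimal sketch, with hypothetical names and a hypothetical blocklist:

```python
# Hypothetical pre-render gate: a note is only eligible to render once
# the kind-0 (profile metadata) event for its author is cached, so
# NIP-05 domain filters apply *before* display rather than after.
BLOCKED_NIP05_DOMAINS = {"spam.example"}   # illustrative blocklist

kind0_cache: dict = {}   # pubkey -> parsed kind-0 content

def on_kind0(pubkey: str, metadata: dict) -> None:
    kind0_cache[pubkey] = metadata

def render_decision(note: dict) -> str:
    """Return 'show', 'hide', or 'defer' (kind 0 not fetched yet)."""
    meta = kind0_cache.get(note["pubkey"])
    if meta is None:
        return "defer"   # queue a kind-0 fetch instead of flashing the note
    nip05 = meta.get("nip05", "")
    domain = nip05.split("@")[-1] if "@" in nip05 else nip05
    return "hide" if domain in BLOCKED_NIP05_DOMAINS else "show"
```

The 'defer' state is the performance smart: the fetch-then-hide flicker disappears, at the cost of holding unverified notes back briefly.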
I can’t say for sure, but I imagine Twitter purposely made reporting accounts a painful 5-8 click process. I suspect that’s due to the noise it produced - even in a centralised, KYC environment, they didn’t know how to make the report data useful.
It obviously wasn’t part of their strategy to manage bots - as it almost seemed like an afterthought.
The filtering/blocking approach used by Amethyst today has a few main faults - the primary one being the lack of visibility of app-wide filtering. People find out by accident, if at all - which is obviously a sign of a broken approach. I don’t quite see it as censorship, but more as a poorly functioning content ‘value’ scoring system.
The inability to opt in and customise your filters is really the missing feature here. I’m actually surprised just how badly the WoT approach - trusting spam reports from nth-degree follows to shadow-ban content - has performed. It shows that closely connected individuals (group clusters) regularly have different preferences, and a desire to self-curate what they see instead of having it done on their behalf (i.e. via WoT signals). People also report or mute for many reasons - from bad content to “I just don’t care about this type of content”. Breaking news: humans value different things.
Enabling users to self-curate more easily, and seeing how far that gets us, is the best approach. We could even add filtering on NIP-05 domains, or perhaps relays to avoid connecting to. Even content warnings, profanity masking, nudity detection, etc. - all done locally on-device without major performance impact.
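Profanity masking is a good example of a filter that's cheap enough to run locally per note. A minimal sketch - the word list and masking behaviour are illustrative, and a real client would make this opt-in and user-editable:

```python
import re

# Hypothetical on-device profanity mask. One compiled pattern covers
# the whole word list, so the per-note cost is a single regex scan.
PROFANITY = {"shitcoin", "scam"}   # illustrative, user-editable list
_pattern = re.compile(
    r"\b(" + "|".join(map(re.escape, sorted(PROFANITY))) + r")\b",
    re.IGNORECASE,
)

def mask_profanity(text: str) -> str:
    # Replace each hit with asterisks of the same length,
    # preserving the surrounding text and layout.
    return _pattern.sub(lambda m: "*" * len(m.group(0)), text)
```

Compiling the list into one alternation up front is what keeps this from becoming a performance problem as the word list grows.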
Damus POC I mocked up. Damus has active development in this area.
nostr:note18z7rqv87qt0nu3v0rajf4ghnrqcnnc3ujgeqy3zswgs2nwh3nq3qmwhnlh
Looks interesting.
:) By “imagine”, I mean an irreversible change that is spurred on by something typically outside the current system, that the system cannot control or outlast/survive.
An example of this is simply Bitcoin. Bitcoin will eat the runaway inflationary system.
If you asked most people globally today, they’d say they believe in democracy as the alternative to fascism/socialism/communism, and that their vote, if they choose to use it, matters (even if only in a small way). Certainly far fewer people believe that in certain countries, yet democracy itself is typically not blamed - corruption or a dictator is. “It’s not a fair democracy” is the typical defence. Democracy is the gold standard for a free society (not saying I agree).
Yet, no democracy is fair, and freedoms are gradually abolished by a process of aggregated edge-case law creation and politicians with ulterior motives.
Democracy is broken and doesn’t meet its claimed coordination or social benefit properties - but it was also an improvement over other mass governance systems. It’s just starting to fail very slowly.
What is democracy’s better alternative - and what will spur the change needed for adoption? Smaller privately run estates are all I’m aware of being proposed.
I’m starting to imagine how AI will break democracy’s back and cripple it for good. Sure, politicians lying and never delivering isn’t great for trust - but ML models can lie beyond a common, mass-broadcast message (or set of them): they can “target individuals and tell them what they personally need to hear to be swayed”. Just like suggesting you purchase a specific vacuum from a company.
AI encourages people to think (critically) less: shortest, fastest answer. It may not be AI itself that’s dangerous - but how it indirectly changes people’s behaviour.
And if you don’t think people can be misled en masse - just look at Covid. Regardless of whether you think Covid-19 was as deadly as told, or the vaccines as safe as told - it fooled many into forgetting to think for themselves and make a self-judgement call. People are easily led by fear - especially en masse.
It’s funny because this actually breaks democracy entirely (it’s already broken). Imagine for $1,000 I could influence the votes of 10,000,000 people in my favour by telling them what they need to hear to vote my way. I imagine we will need a replacement for democracy this century - exactly what? I don’t know.
It’s interesting that some state, perhaps the EU, hasn’t tried to draft laws against this. It’s obviously really bad, especially long term.
If I interact with an AI in the near future and its secret agenda is to mislead and/or non-transparently incentivise options where the AI (or its owner) benefits at my expense, we have a grim world ahead as its capabilities and scope of interactions increase.
Hint: This has been happening for two decades already with targeted advertising.
I’m less concerned with AI itself, and more concerned with the humans, corporations, and governments that wield it for their gain.
It’s worth pointing out that ML has been used maliciously for a while now - defined as using it against another human for personal gain, at the expense of that person.
The other reason Twitter and Reddit went paywall is machine-learning training feeds. ML is looking to decrease the lag between training and human interaction.
It’s said much of the digital information available has already been consumed by today’s advanced ML engines - reaching the next cliff to climb: optimising outcomes with the same data, meaning smaller incremental improvements after the recent rapid ones.
While your data may not be as valuable for advertising, it’s the future feed into these ML learning flows. In a sense, ML businesses are pricing the rest of the market out of things like API access. It’s important to understand this is coming and consider the consequences - like the crazily low API-limit-to-price ratio.
If non-corporate and non-government controlled AI systems have a chance, people need to exodus these closed platforms for their data to even the playing field.
It’s great seeing the advertising business models slowly topple…
Twitter, Reddit… unsustainable businesses without an expensive walled garden approach.
Funny part is, I don’t even think it’s caused by Nostr… it’s caused by market saturation and an inability to create new value for continued growth. They’re slowly dying by capitalism’s “the weak starve” property.
Less advertising doesn’t directly mean less privacy invasive data collection and spying.. but certainly takes away some of its capture value.
Anyone and everyone. Filtering and customised views are a developing area, so until then the content will be more raw. Best to follow people and slowly add more for the best signal.
One message at a time. Seems like you’ve come to the right place reading your bio 🙂
/inscriber --model S(hit)FT --budget self-bankrupt-quick start
Best part of the article is they refuse to drop the @ from @fiatjaf in all references. https://www.forbes.com/sites/digital-assets/2023/05/30/bitcoin-social-network-nostr-creator-fiatjaf-/

