david
e5272de914bd301755c439b88e6959a43c9d2664831f093c51e9c799a16a102f
neurologist and freedom tech maxi Co-founder @ NosFabrica 🍇 Grapevine, 🧠⚡️Brainstorm

Exactly! 🕺🏻

And a namespace is itself a list of … something. But what exactly are the items on this list?

Does NIP-32 describe how to format a namespace?

I know it replaced some earlier NIPs that generated some controversy. I should take another look at NIP-32. I recall there were some things I didn’t follow, but if it’s being utilized then the examples will help.

Making something that is simple on the surface, yet complex enough under the hood to do what needs to be done, is challenging when it comes to web of trust, bc many simple solutions just don’t work.

But your points are well taken. Ways I could simplify:

- I describe channels and topics as two different things. Maybe get rid of channels and just stick with topics.

- I describe curation of pubkeys AND notes. Maybe just curate pubkeys, forget about the notes.

- forget about organizing topics into a hierarchy. Just have topics, that’s it.

- provide the ability to upvote but eliminate the ability to downvote.

In each of the above cases, the extra features could potentially be added later if users ask for them.

But consider this:

Flip the channel. Watch your feed. What could be simpler? 😃

The difference between Curated Lists and NIP-51 lists, if I understand NIP-51 correctly: each NIP-51 list has a list owner, who controls what items go on the list.

With Curated Lists, each list has an author, but there is no list “owner,” bc the author has no more control over the items on the list than anyone else. Anybody can submit items to any list. And then the submitted items get accepted or rejected by the web of trust that is centered on the reference user.

And you can select anyone as the reference user. A different reference user will (potentially) mean a different set of items on the list.

(In the absence of controversy, you’ll probably get the same set of items regardless of reference user. But when there’s controversy, then you’ll see changes in the list depending on reference user. I refer to this phenomenon as Loose Consensus.)
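To make Loose Consensus concrete, here’s a minimal toy sketch (not the DCoSL implementation; the function names, the ±1 voting, and the simple trust-weighted sum are all illustrative assumptions). The same submitted items get curated differently depending on which reference user anchors the web of trust:

```python
# Toy sketch of "Loose Consensus": the same submissions are accepted or
# rejected differently depending on the reference user's web of trust.
# All names and the trust model here are illustrative assumptions.

def curate(items, votes, trust):
    """Accept an item when its trust-weighted vote total is positive.

    items: list of item ids
    votes: {item_id: {voter: +1 or -1}}
    trust: {voter: weight}, from the reference user's point of view
    """
    accepted = []
    for item in items:
        score = sum(trust.get(voter, 0.0) * vote
                    for voter, vote in votes.get(item, {}).items())
        if score > 0:
            accepted.append(item)
    return accepted

items = ["note-A", "note-B"]
votes = {
    "note-A": {"bob": +1, "carol": +1},    # uncontroversial
    "note-B": {"bob": +1, "mallory": -1},  # controversial
}
alice_trust = {"bob": 0.9, "carol": 0.8, "mallory": 0.0}  # Alice distrusts mallory
joe_trust = {"bob": 0.1, "carol": 0.1, "mallory": 1.0}    # Scammy Joe trusts mallory

print(curate(items, votes, alice_trust))  # ['note-A', 'note-B']
print(curate(items, votes, joe_trust))    # ['note-A']
```

The uncontroversial note survives under either reference user; the controversial one flips with the reference user, which is exactly the Loose Consensus behavior described above.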

Probably both. My tentative plan is for the series of bounties to culminate in an app that I call Nostr Channels. My description, as of last night:

The app is something that I call Nostr Channels, and it would make use under the hood of the protocol for decentralized curation of data that I have been working on for quite some time now. I envision it as a fork of one of the existing nostr clients. The idea is that you open up your client, select a "channel," and see a feed that is enriched for the topic(s) that are selected to correspond to that channel (minus any topics that you want blocked from that channel, e.g. NSFW notes). Your web of trust would curate the list of topics, the organization of topics into a hierarchy, and the association of any given topic with pubkeys and individual notes. You would have the option (if you so desire) to create new topics, to edit their hierarchical organization, to select which users you "trust" to curate specific topics, and to associate pubkeys and note IDs with specific topics. But even before you've had a chance to do any of those things, channel selection would be fully functional. In the beginning the number of available topics and channels would be few, but it would get progressively better and more refined as more and more users participate.

Not yet, but the idea of a data vending machine, or something like it, is going to be of vital importance, I think.

I’m in the process of putting together a series of bounties to refactor DCoSL with better design and UX to give it better visibility. Trying to decide if rebuilding it as a web app (instead of a desktop app) is the way to go. You could go to the site, pick a list, pick a reference user, and observe which list items are accepted vs rejected by the reference user’s web of trust. The key observation would be that list curation by Alice’s WoT != list curation by Scammy Joe’s WoT.

So then I think: maybe add an API, so your client could request the crowdsourced list of High Quality Writers (or notes) on Topic X (query contains the event ID of the list and the pubkey of the reference user) and get back the crowdsourced list of pubkeys (or note event IDs). And maybe charge some sats to use the API.

Ultimately, the idea of a data vending machine may be better, if I understand the idea correctly, bc the sats are going to a user (the one ultimately responsible for the curating) rather than to a website.

I agree with you to beware building tools that reinforce large accounts. It is one of the symptoms of legacy, ad-based tech platforms that prioritize influencers over all else. Building tools that simply reward follower counts can be a hard habit to break.

On that topic: in DCoSL, the grapevine is based on replacing average scores with *weighted* averages, where weight refers (mostly) to a contextual influence score. But we must be careful to calculate influence scores in a way that does NOT scale linearly with follower count or any other simple measure of visibility. Minus the pathologies of the centralized ad-based system, there’s no reason to give the social media “influencer” who spends all his time building up a million followers more weight than the Einstein who only has 8 followers bc, well, he has better things to do than be an “influencer.”
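A rough sketch of the idea, with the caveat that this is not the grapevine algorithm itself: the weighting function, the log dampener, and the specific numbers are assumptions chosen only to show that influence need not scale linearly with follower count.

```python
# Illustrative only: a weighted average where the weight is a contextual
# influence score. Follower count enters only through a log dampener, so a
# million followers cannot linearly outvote eight.
import math

def weighted_average(scores_and_weights):
    """Replace a plain average with one weighted by influence."""
    total = sum(w for _, w in scores_and_weights)
    return sum(s * w for s, w in scores_and_weights) / total if total else None

def influence(explicit_trust, followers):
    """Toy contextual influence: driven by explicit trust attestations,
    with visibility (followers) dampened logarithmically."""
    return explicit_trust * math.log10(10 + followers)

# "Influencer" with a million followers but weak contextual trust:
infl = influence(0.1, 1_000_000)   # ~0.6
# "Einstein" with 8 followers but strong contextual trust:
einstein = influence(0.9, 8)       # ~1.13, outweighs the influencer
```

Under this toy weighting, the eight-follower Einstein carries more weight than the million-follower influencer, which is the property the paragraph above is after.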

We witnessed centralized takeover of the narrative during c19.

You’re correct that we need to beware echo chambers. To be honest, it’s a question I think about a lot as I try to envision tools to give us control over the content that we digest. The last thing I want is for everyone to run around building better echo chambers.

But what does that mean? What’s the difference between silencing a troll on the advice of your web of trust vs building an echo chamber?

Complex questions.

If you want a simple theory that doesn’t solve a complex problem, here’s one: “censorship is evil,” without careful consideration of what does and does not constitute “censorship.”

If Alice decides not to consume Bob’s content, some might automatically say she has “censored” Bob. And we all know that “censorship is evil,” so … she’s done a bad thing, or something, according to our simple (but clearly misleading) theory.

What do you mean when you use the word “censorship”?

The question at hand should be whether the tools that we’re discussing are centralized or decentralized.

When you have control over the content that shows up on your feed, is that a good thing or a bad thing? Do you have a moral obligation to consume spam?

Some people might bear witness to the evils of central banking and incorrectly maintain that bitcoin is equally evil bc bitcoin is money, and money is the root of all evil. But they fail to appreciate that bitcoin != fiat.

Likewise, Alice’s decision not to consume content from Bob != government censorship.

Regarding your point that scammers will just spin up more accounts:

In a world with potentially unlimited swarms of scambots, perhaps we’ll need a system where we simply ignore all unvetted users. But of course we don’t want the isolated user who’s a real person to be left out in the cold. So build multiple methods to break into the system:

- pay some fee; you get to decide how much is enough to get onto your feed

OR

- social vetting: one or more trusted users attest: “this account is a real person; I know bc we communicated in meat space.” This feeds into a score, and you decide what threshold score is enough to break into your feed.

It’s a good point about scalability.

And I agree with calculating trust scores. Trust ultimately isn’t binary: Alice may trust Bob to maintain a list, but she may trust Charlie more than Bob.

But we have to keep in mind the point nostr:npub1t0nyg64g5vwprva52wlcmt7fkdr07v5dr7s35raq9g0xgc0k4xcsedjgqv makes and that I wrote about[1], which is that follow != trust.

Also, there are an infinite number of types of trust. Alice may trust Bob to maintain a bots list but not to maintain some other list. Trust is contextual.

And trust scores (as well as other types of scores) need a “confidence” component. Alice may think Charlie is 5X smarter than Bob in some given context, but her assessment may be based on scant data (low confidence) or it may be based on lots of data (high confidence). This is how Curated Lists currently works [2].

And to address the problem that bots cost (basically) zero: the default trust score for unvetted accounts should be an adjustable parameter. If sybil attacks are a problem, set the default trust score to zero. If they’re not, adjust the default score accordingly. Curated Lists currently has this as an adjustable parameter in the control panel [3].

[1] https://github.com/wds4/DCoSL/blob/main/dips/coreProtocol/02.md

[2] https://github.com/wds4/pretty-good/blob/main/appDescriptions/curatedLists/exampleListCurationGrapevine.md

[3] https://github.com/wds4/pretty-good/blob/main/appDescriptions/curatedLists/screenshots.md
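A toy model (not the Curated Lists code; the blending rule and all names are assumptions) of two of the points above: a trust score carries a confidence component, and unvetted accounts fall back to an adjustable default.

```python
# Toy model: low-confidence assessments pull the trust score toward an
# adjustable default; unvetted accounts get the default outright.
# The averaging rule and names are illustrative assumptions.

DEFAULT_TRUST = 0.0  # set to 0 under sybil attack; raise it in calmer times

def combined_trust(assessments, default=DEFAULT_TRUST):
    """assessments: list of (score, confidence) pairs, confidence in (0, 1]."""
    if not assessments:
        return default  # unvetted account
    n = len(assessments)
    weight = sum(c for _, c in assessments)
    blended = sum(s * c for s, c in assessments)
    # the leftover (1 - confidence) mass goes to the default
    return (blended + (n - weight) * default) / n

# Alice rates Charlie highly, but on scant data (low confidence):
print(combined_trust([(1.0, 0.2)]))  # 0.2 — barely above the default
# The same opinion backed by lots of data (high confidence):
print(combined_trust([(1.0, 0.9)]))  # 0.9
```

The same stated opinion moves the score much further when it is backed by high confidence, and raising or lowering `DEFAULT_TRUST` plays the role of the adjustable control-panel parameter.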

This is the kind of question that spurs me to build DCoSL. Not only what defines a bot, but: *who* defines a bot? Who creates and defines new categories? We need some degree of consensus on these questions - but we don’t want any single entity to have the power to dictate that consensus to the rest of us!

We need what I call “loose consensus”:

https://github.com/wds4/DCoSL/blob/main/glossary/looseConsensus.md

Longer term, the DCoSL solution will give you the power to create all sorts of lists: lists for bots, spammers, trolls, NSFW, shitcoiners, statists, etc. And Alice will be able to attest, explicitly, that she trusts (or does not trust) Bob to curate lists in general, or to curate this particular list, or that particular list. Which means you won’t need to scrape the follows list. And it also means that if the bots mutate to be less obviously bot-like, the community will be able to counter with new lists better suited to the changing threat.
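One hypothetical shape for such an explicit, contextual attestation (field names are illustrative, not a NIP or the DCoSL schema): a single record saying whether Alice trusts Bob for lists in general or for one particular list.

```python
# Hypothetical data shape for a contextual trust attestation.
# Field names and the 0..1 score convention are assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrustAttestation:
    author: str             # pubkey of the attester (e.g. Alice)
    subject: str            # pubkey being attested (e.g. Bob)
    context: Optional[str]  # a list event id, or None for "lists in general"
    trusted: bool           # explicit trust or distrust
    score: float = 1.0      # how much, in 0..1
    confidence: float = 0.5 # how sure the attester is

# Alice trusts Bob to curate the bots list, but not lists in general:
a1 = TrustAttestation("alice-pk", "bob-pk", "bots-list-id", True, 0.9, 0.8)
a2 = TrustAttestation("alice-pk", "bob-pk", None, False, 0.0, 0.6)
```

Because the `context` field scopes each attestation to one list (or to the general case), trust stays contextual and no follows-list scraping is needed.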