G. T. Karber
Hollywood mystery writer. Creator of Murdle.com.
Replying to HoloKat

Wikipedia on Nostr? What might that look like?

Right off the bat we’d have to think about spam. To combat spam, we’d probably need to introduce proof of work, and it can’t be too easy to complete.
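For context, Nostr already has a PoW convention (NIP-13): difficulty is defined as the number of leading zero bits in an event’s id. A minimal sketch of how a client or relay might check it (function names are mine, not from any spec):

```python
def leading_zero_bits(hex_id: str) -> int:
    """Count leading zero bits in a hex-encoded event id (NIP-13 style)."""
    bits = 0
    for ch in hex_id:
        nibble = int(ch, 16)
        if nibble == 0:
            bits += 4  # a zero nibble contributes four zero bits
        else:
            # bit_length tells us where the first set bit sits in this nibble
            bits += 4 - nibble.bit_length()
            break
    return bits

def meets_difficulty(event_id: str, target_bits: int) -> bool:
    """Reject events that didn't grind enough work into their id."""
    return leading_zero_bits(event_id) >= target_bits
```

Publishers would then mine a nonce tag until their event id clears whatever threshold the wiki relays demand.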

To allow for a wide variety of perspectives we’d probably want to show the end user various different “views” of the same page, based on user-specified criteria. Much like algo choice, the user would need to have control over how many opinions and which they’d like to consider while viewing that page.

For example, a page about the CCP might be heavily edited by the CCP, and it is entirely possible that the page would be manipulated via PoW, likes, zaps, whatever… So what is the end user to do? They would have to be able to use the social graph to determine how many and which opinions they value most. But even that comes with a bunch of challenges and possible manipulation.

I imagine Atlas, sorry Pablo haha, would resemble a version-controlled UI where you can quickly broaden or narrow the scope of opinions you wish to consider, which then updates the document version based on how many people in your criteria edited it or found it valuable.
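That “broaden or narrow the scope” dial could be as simple as a hop count over the follow graph: count only edits from people within N hops of you, and surface the version they converged on. A toy sketch (the data shapes here are my assumptions, not any real relay API):

```python
from collections import deque

def hops_from(me, follows):
    """BFS over a follow graph (pubkey -> set of followed pubkeys),
    returning each reachable pubkey's hop distance from `me`."""
    dist = {me: 0}
    queue = deque([me])
    while queue:
        cur = queue.popleft()
        for nxt in follows.get(cur, ()):
            if nxt not in dist:
                dist[nxt] = dist[cur] + 1
                queue.append(nxt)
    return dist

def pick_version(edits, me, follows, max_hops):
    """edits: list of (version_id, editor_pubkey) pairs.
    Return the version most edited by people within `max_hops` of me."""
    dist = hops_from(me, follows)
    counts = {}
    for version, editor in edits:
        if dist.get(editor, float("inf")) <= max_hops:
            counts[version] = counts.get(version, 0) + 1
    return max(counts, key=counts.get) if counts else None
```

Widening `max_hops` is exactly the “broaden the scope” slider: the same edit log can resolve to a different document version at each setting.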

Challenge 1: Most people will not interact with the millions upon millions of pages, so there may be weak signal from your personal or even extended social graph. How do we keep the pages balanced in perspective? Do we provide geofenced views? “I don’t care what China has to say about itself.” But what if they just use US IPs?

Challenge 2: Even if you were able to source enough social graph signal, that signal can also be manipulated by someone who is keen to manipulate via likes, zaps, or PoW.

Finding signal with NIP5

One thing we could consider is designing the UI in such a way that it specifies which entity a user is related to via NIP5.

For example, it is not inconceivable that lists could be curated that include all known government and university websites. Then, anyone with a NIP5 from any of those domains would be classified as a “government official” or “associated with X university”. Combined with the information about who made the last change you’d have a clearer picture of the potential biases involved.
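Mechanically, that classification is just the domain part of the NIP5 identifier matched against curated lists. A toy sketch (the domain lists are placeholders, and real verification would still require fetching the domain’s /.well-known/nostr.json to confirm the pubkey):

```python
# Hypothetical curated lists -- in practice these would be published
# and maintained as signed Nostr lists, not hardcoded.
GOV_DOMAINS = {"example.gov", "gov.uk"}
UNIVERSITY_DOMAINS = {"mit.edu", "ox.ac.uk"}

def classify_nip5(nip5: str) -> str:
    """Bucket a NIP5 identifier (name@domain) by its domain."""
    if "@" not in nip5:
        return "unverified"
    domain = nip5.rsplit("@", 1)[1].lower()
    if domain in GOV_DOMAINS:
        return "government official"
    if domain in UNIVERSITY_DOMAINS:
        return "associated with " + domain
    return "pleb"
```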

Of course, this doesn’t mean only users from govts or universities provide signal (govt domain is mostly to monitor potential abuse), so we’d need a UX that helps jump between various sources of edits including the plebs. The UI has to be obvious enough and easy enough to navigate that you can see the page edits from these various sources without clicking too much.

The question that remains: how do you know which version to present to the user if pages are constantly edited by hundreds or thousands of people in near real time?

Perhaps the answer is: You don’t.

What if we used DVMs as a means of querying the information to find what we consider to be most fair and bias-free?

For this, we could provide some pre-built prompts to show results that exclude govt domain edits, or exclude university domain edits, or exclude bulk geo-fenced populations. This would probably require a smarter relay though… have to think that one through. But the idea is that you could see which country is making the edits and offer a DVM that excludes the country which may be biased in its edits. The advantage of this approach is that you can eliminate some of the manual IP manipulation by highly interested actors.
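Those pre-built prompts could compile down to composable exclusion filters that the DVM (or smarter relay) applies before returning results. A rough sketch, assuming edit events have already been annotated with a NIP5 domain and a country guess (field names are mine):

```python
# Hypothetical edit record: {"nip5_domain": ..., "country": ...}

def exclude_gov(edit):
    """Pre-built prompt: drop edits from government domains."""
    return not edit["nip5_domain"].endswith(".gov")

def exclude_country(country):
    """Pre-built prompt: drop edits attributed to one country."""
    return lambda edit: edit["country"] != country

def run_query(edits, filters):
    """What a DVM answering a pre-built prompt might do:
    keep only edits that pass every selected filter."""
    return [e for e in edits if all(f(e) for f in filters)]
```

Each checkbox in the UI would just append one more predicate before the query runs.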

For the final UI, I imagine a topic page that provides the following:

- Topic name
- Clearly visible which types of entities have edited it and how many times
- A query window like any of the AI chat models.

- Perhaps a “most popular” edited-versions view in an easy-to-hover navigation (see below for what I mean by “most popular”)

- Pre-created prompts to help filter without much thinking (exclude govt. edits, show university edits, exclude countries etc…)

- A highly selectable set of filters that are easy to check and apply - which then triggers a very fast DVM query.

- Name and bio of editor returned by the query

- Comments on the most commented version (perhaps also filtered by NIP5 criteria, also specifiable by the end user)

We know we can’t rank anything by zaps as that is easily gamed. We also can’t fully rely on likes because it’s trivial to create keys in an automated fashion. A state actor can easily influence all of these metrics. (This is where filtering by NIP5 may actually help).

These are just some starting thoughts. I think the end result would give a lot of querying options to the end user and allow them to decide for themselves which versions of the “truth” they believe in. We can show them many versions, and perhaps find some ways to curate the top versions - whatever that means or whatever that looks like in the end, but ultimately the user can decide which sources to trust or ignore.

Aren’t these problems already addressed robustly — nay, solved? — by Wikipedia?

What do people talk about on here?