Well, let's play devil's advocate. This might get a bit long.

Since several clients and relays have a feature called WoT (Web of Trust), it's possible that these bot makers want to bypass that protection. How? The bots start by following 'trusted' users and assuming those users will follow them back. Most 'trusted' users probably won't fall for the trick, but some will, because they just follow back everyone who follows them without really checking.

Since WoT use "follows of follows" then this bots can slowly join into 'trusted' users network slowly. The bots who successfully join 'trusted' users network will follow another bot to include those who haven't join into 'trusted' users network.

Once they have successfully infiltrated the 'trusted' network, those bots can be used for spreading ads, spam, misinformation, or whatever other devilish goal the bot makers have.
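As a very rough illustration of why even a low follow-back rate is enough for this to work, here's a toy simulation sketch; the population sizes and the 2% follow-back rate are made-up assumptions, not measurements from Nostr:

```python
import random

random.seed(42)

TRUSTED_COUNT = 100        # hypothetical 'trusted' users each bot follows
FOLLOW_BACK_RATE = 0.02    # assume 2% of them follow back without checking
BOTS = [f"bot_{i}" for i in range(200)]

# A bot "gets in" if at least one trusted user follows it back.
infiltrated = {
    bot for bot in BOTS
    if any(random.random() < FOLLOW_BACK_RATE for _ in range(TRUSTED_COUNT))
}

print(f"{len(infiltrated)}/{len(BOTS)} bots gained at least one trusted follower")
# Expect roughly 87% (1 - 0.98**100); each of those bots can then follow the
# rest of the botnet to drag the stragglers into "follows of follows" range.
```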


Discussion

Very interesting indeed.

Yeah, this will make for a good discussion thread later; waiting for other opinions. You can probably write the summary 😄

I'm not really qualified to summarise lol. But I find it interesting.

I suppose it's unlikely the mystery will be solved either :)

#Discuss

Topics: Web of Trust, spam, bots, follow bots.

nostr:nevent1qqstr3gjkmfwklt8d2e3fgsjtfytycgf2u9kzl8cf0yr0789k6xlp4qpzemhxue69uhhyetvv9ujumt0wd68ytnsw43z7q3q9dn7fq9hlxwjsdtgf28hyakcdmd73cccaf2u7a7vl42echey7ezsxpqqqqqqzpam4u4

(Didn't really want to post my own thread, but whatever. The replies are more interesting than my initial post anyhow).

#discussion #wot #bots

I think this is a really well-thought-out (SUSPICIOUSLY well-thought-out?? 🤔😉) and logical answer.

What are the parameters that relays have for WoT?

Does it take into account activity? Posts? Replies? Having a nip05? Having a lightning address?

Cos these boys wouldn't pass that 🤔

>What are the parameters that relays have for WoT?

>Does it take into account activity? Posts? Replies? Having a nip05? Having a lightning address?

>Cos these boys wouldn't pass that 🤔

Yes, I agree with you, if they take measures to factor those in before marking someone as a 'trusted' user candidate. But they (nip05, lightning address) also create another problem, especially for genuine, ordinary users who are just starting out on Nostr and barely know anything yet.

I don't know exactly how (I haven't read the source code yet), but they probably just use a simple "follows of follows" as the basic parameter and separate users by follow distance. AFAIK, filter.nostr.wine uses one degree of separation as its WoT. Iris uses multiple degrees of separation, up to 4 layers. Different clients and relays may have different implementations. Hopefully Mazin or Martti can give us some better insight. Coracle and Nostur also have WoT; I haven't looked at their code yet. TL;DR: basic WoT implementations use "follows of follows" with a certain degree of separation.
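To make the "follows of follows" idea concrete, here's a minimal sketch (not taken from any client's actual code) of a depth-limited follow-graph walk; the pubkey names and graph are invented:

```python
from collections import deque

# Hypothetical follow graph: pubkey -> set of pubkeys they follow
follows = {
    "me":    {"alice", "bob"},
    "alice": {"bob", "carol"},
    "bob":   {"carol"},
    "carol": {"bot1"},   # one careless follow-back lets a bot in
    "bot1":  {"bot2"},
}

def wot(root, graph, max_degree):
    """Return every pubkey within max_degree follow hops of root (BFS)."""
    distance = {root: 0}
    queue = deque([root])
    while queue:
        current = queue.popleft()
        if distance[current] == max_degree:
            continue
        for followed in graph.get(current, set()):
            if followed not in distance:
                distance[followed] = distance[current] + 1
                queue.append(followed)
    distance.pop(root)
    return distance  # pubkey -> degree of separation

print(wot("me", follows, max_degree=1))  # one degree, filter.nostr.wine-style
print(wot("me", follows, max_degree=4))  # up to 4 degrees, Iris-style: the bots show up
```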

Who are #WoT people who might know some answers? 🤔 nostr:npub18kzz4lkdtc5n729kvfunxuz287uvu9f64ywhjz43ra482t2y5sks0mx5sz nostr:npub1gcxzte5zlkncx26j68ez60fzkvtkm9e0vrwdcvsjakxf9mu9qewqlfnj5z nostr:npub18ams6ewn5aj2n3wt2qawzglx9mr4nzksxhvrdc4gzrecw7n5tvjqctp424 (sorry if I've tagged you and you don't care, I will untag in future replies to this)

Hopefully my replies won't tag them. I've already unchecked notifications for them in Amethyst for this note.

List of #WoT client developers and relay operators that I know of:

- Iris (Multiple degree WoT): Martti Malmi

- Coracle (WoT with follows counter): hodlbod

- Nostur: Fabian

- filter.nostr.wine relay (One degree WoT): Mazin

- Amethyst (WoT using kind 1984 Report style): Vitor

Other client developers may understand WoT as well, even if they haven't (fully or partially) implemented it.

I’ve noticed these accounts too. We have always planned on building out some type of “authenticity” score to try to determine what is a real human and what is automated. From there, we can further sort malicious bots (or unknown intentions) from friendly. Haven’t gotten around to it yet, but it’s on the roadmap.

To be honest, blank accounts without posts or any info won’t be very successful at breaking in to your WoT. With one degree of separation you can always see who your link is and unfollow them.
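Purely as a thought experiment (this is not how filter.nostr.wine scores anything; the signals and weights below are invented), an "authenticity" score could be as simple as a weighted mix of the signals mentioned above:

```python
from dataclasses import dataclass

@dataclass
class Profile:
    """Hypothetical per-account signals a relay might already have on hand."""
    note_count: int
    reply_count: int
    has_nip05: bool
    has_lightning_address: bool
    trusted_followers: int   # followers within the operator's own WoT

def authenticity_score(p: Profile) -> float:
    """Combine signals into a rough 0..1 score. Weights are arbitrary guesses."""
    score = 0.0
    score += min(p.note_count, 50) / 50 * 0.3        # posting activity, capped
    score += min(p.reply_count, 50) / 50 * 0.3       # replies, capped
    score += 0.1 if p.has_nip05 else 0.0
    score += 0.1 if p.has_lightning_address else 0.0
    score += min(p.trusted_followers, 10) / 10 * 0.2
    return score

blank_bot = Profile(0, 0, False, False, 1)
print(authenticity_score(blank_bot))  # ~0.02: a blank account scores near zero
```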

> I’ve noticed these accounts too. We have always planned on building out some type of “authenticity” score to try to determine what is a real human and what is automated. From there, we can further sort malicious bots (or unknown intentions) from friendly. Haven’t gotten around to it yet, but it’s on the roadmap.

Great news, especially for wine users, since they will be the first to experience filter.nostr.wine with enhanced WoT methods. Hopefully you can also publish the method and open-source the implementation, so client devs can learn from it later.

> To be honest, blank accounts without posts or any info won’t be very successful at breaking in to your WoT. With one degree of separation you can always see who your link is and unfollow them.

Yes. Normally it should be easy, but I think some users may still fall for the bot tricks, since they follow back without fully checking. An easy GUI tool to check who follows a given account could help minimize those mistakes.
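The check such a tool would need is simple in principle. A minimal sketch, assuming you have already fetched the kind 3 (NIP-02) contact lists of the accounts you follow; the names and data here are made up, and a real tool would pull them from relays:

```python
# Contact lists of the accounts I follow: pubkey -> set of pubkeys they follow
my_follows_contact_lists = {
    "alice": {"bob", "carol"},
    "bob":   {"carol", "suspicious_bot"},
    "carol": {"alice"},
}

def who_links_me_to(suspect: str, contact_lists: dict) -> list:
    """Return the accounts I follow that follow `suspect` (my WoT link to it)."""
    return [followee for followee, their_follows in contact_lists.items()
            if suspect in their_follows]

print(who_links_me_to("suspicious_bot", my_follows_contact_lists))  # ['bob']
```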

If I were to try to penetrate the #WoT on nostr... I would probably be doing this Follow Bot thing, but I would make sure every one of them was named with 'hodl', 'nakamoto', 'satoshi', or some such prefix or suffix in their username. And have b!tcoin midjourney pfps.

Not that I want to do that. I'm just saying this as I wouldn't want that to happen lol.

I promise you, that is happening, by multiple actors and at scale.

But it's not hard to spin up fake bios and simulate interactions, even if they're a bit "off". Nostr doesn't really have a unified subculture with strong boundaries.

I don't think the "no-bio" bots are for active measures. Passive data collection maybe, but not active.

Yes, it is hard to design sophisticated methods/algorithms that are fully accurate (99.99%) at tackling these problems. I believe multiple PhD theses would be needed just to scratch the surface 😄

Therefore, I think WoT is at least a basic method that can filter most "non-sophisticated" bots quite well.

If I'm reading this right... there seem to be two possibilities proposed.

One is data collection: these bots will remain quiet.

The other, I'm guessing, is that at some point in the future, once they've established links in the WoT, they will be turned on. Spouting spam or some such thing? 🤔

Yup. Exactly.

To elaborate slightly on your second option, the "no-bios" may later provide "Web of Trust" access and targeting info for new, more obviously harmful bots.

And by harmful, scams and spam are likely, of course, but I'm also expecting entrapment and malware. Demand for "domestic threats" rises and falls with the political weather; the wise agent prepares her lists ahead of time.

Helga from Sveden... isn't. Even if 1000 bots follow her.

They will need followers for the web of trust; spam will just cause unfollows.

The problem you're discussing is a Sybil attack against a Web of Trust. Here, malicious bots gain trust by connecting to trusted nodes and then abuse that trust for various nefarious activities. This is a significant concern for decentralized systems where centralized identity verification is not possible.

Countermeasures:

1. Trust Decay: Reduce trust over time unless it's reaffirmed.

2. Behavioral Analysis: Use ML to differentiate bots from humans.

3. Multi-Factor Authentication: Require additional verification steps.

4. Trust Isolation: Make it hard for trust to propagate freely.

5. Manual Review: Require human oversight for high-trust status.

6. Rate Limiting: Limit the speed of follower gains to slow down bots.

7. Alerting: Flag nodes gaining trust too quickly.

8. Blacklisting: Block known malicious nodes.

Python Libraries:

1. NetworkX: Useful for graph-based trust networks.

2. Scikit-learn: For implementing machine learning-based bot detection.

3. PyOTA: Useful for decentralized trust networks.

Sample Code:

Here's a Python snippet to implement trust decay in a trust network.

```python
import time

import networkx as nx

G = nx.Graph()  # graph for the WoT

# Edge between A and B, with an initial trust score and a verification timestamp
G.add_edge('A', 'B', trust=1.0, last_verified=time.time())

def decay_trust(G, node1, node2, decay_factor=0.99):
    edge = G[node1][node2]
    elapsed_time = time.time() - edge['last_verified']  # seconds since last verification
    edge['trust'] *= decay_factor ** elapsed_time
    edge['last_verified'] = time.time()

decay_trust(G, 'A', 'B')  # decay trust between A and B
```
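In the same hedged spirit, here is a minimal sketch of the alerting/rate-limiting idea from the list above: flag accounts whose follower count grows suspiciously fast. The one-hour window and 50-follower threshold are arbitrary placeholders, not values from any real relay:

```python
import time
from collections import defaultdict, deque

# Sliding window of follow timestamps per pubkey
follow_events = defaultdict(deque)

WINDOW_SECONDS = 3600      # look at the last hour (arbitrary)
MAX_GAIN_PER_WINDOW = 50   # flag anything gaining more than this (arbitrary)

def record_follow(pubkey, now=None):
    """Record one new follower for `pubkey`; return True if it should be flagged."""
    now = time.time() if now is None else now
    events = follow_events[pubkey]
    events.append(now)
    # Drop events that have fallen out of the window.
    while events and now - events[0] > WINDOW_SECONDS:
        events.popleft()
    return len(events) > MAX_GAIN_PER_WINDOW

# Simulate a bot gaining 60 followers in one minute: the last ones get flagged.
flags = [record_follow("bot_pubkey", now=1000.0 + i) for i in range(60)]
print(flags.count(True))  # 10 follow events exceeded the threshold
```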

Nym, this is the stuff I truly appreciate 🔥🔥

Thank you nym (and GPT/LLM, if they were used) for the responses. We can see that it is really challenging to build a robust Web of Trust in a decentralized network. Quite a lot of homework for relay operators and client developers to solve all of these issues 😅

Thanks:)

Is this a GPT analysis? Or partly?