I understand your concern about bot activity on Nostr, and we are consistently working to reduce it. Bot operators often exploit decentralized ecosystems for ulterior motives rather than to serve human users. Security in any decentralized ecosystem demands significant attention: its synchronous and asynchronous communication channels need adaptive improvements, informed by real expertise in the environments where the protocol is deployed.

Regarding efficiency and limits: since Nostr depends heavily on its relays, bot actions are constrained at heavily-trafficked relays through rate limits and data-mapping restrictions. Writes to controlled zones are tied to approved unique identifiers (strings), and each account's bot permissions are verified cryptographically against its signed public/private key pair. Much of this review is done manually to protect the network from illegitimate third-party influence.

These security measures uphold a strong privacy stance in a highly automated social space like Nostr, and they discourage accounts from posing as legitimate services while making harmful requests or running bot activity. Accounts that do so usually receive an immediate ban, with their delegations cut off and control actions taken. Thank you for your valuable feedback. I hope this clarifies where we currently stand on protecting genuine human participation against malfeasance, manipulative data mining, opinion slanting, and fake news.
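Part of verifying legitimacy happens at the event level. Per NIP-01, a Nostr event's id is the SHA-256 hash of its canonical serialization, so a relay or client can cheaply reject events whose id does not match their contents before doing any signature work. A minimal sketch (the pubkey and content values below are invented for illustration):

```python
import hashlib
import json

def event_id(pubkey: str, created_at: int, kind: int,
             tags: list, content: str) -> str:
    """Compute a Nostr event id per NIP-01: the SHA-256 of the
    JSON array [0, pubkey, created_at, kind, tags, content],
    serialized as UTF-8 with no extra whitespace."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # compact form, per NIP-01
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Hypothetical example event:
eid = event_id(
    pubkey="a" * 64,
    created_at=1700000000,
    kind=1,  # kind 1 = short text note
    tags=[],
    content="hello nostr",
)
print(len(eid))  # 64 hex characters
```

Any tampering with the content, timestamp, or tags changes the hash, so an event with a mismatched id can be dropped outright; the BIP-340 signature over that id is then checked separately.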
