Making Algorithms Kind
We've grown accustomed to social media constantly feeding us interesting posts and ads, but we rarely consider that today's algorithms are steering the world toward anxiety.
Parents of children addicted to smartphones feel this anxiety most acutely. The digital world in which children are immersed from a young age is often shaped by "predictive AI" systems, which determine the information they receive. This frequently leads to social anxiety, boredom, parent-child conflict, and social isolation.
Predictive AI recommends content based on viewing behavior, using "engagement" as the main metric to amplify content that keeps users glued to their phones. While this may cater to certain preferences, it's also particularly effective at triggering intense anxiety responses. Audiences worldwide, regardless of age, find their emotional states easily influenced.
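In outline, the mechanism is simple. Here is a minimal sketch of an engagement-first ranker; the field names and weights are illustrative placeholders, not any platform's actual formula:

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    clicks: int
    watch_seconds: float
    shares: int

def engagement_score(post: Post) -> float:
    # Every interaction counts as "engagement", whether the content
    # informs the viewer or inflames them.
    return post.clicks + 0.1 * post.watch_seconds + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Whatever keeps people glued to the screen floats to the top.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in this objective distinguishes fascination from anxiety; both register as watch time.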
How can we address the social problems caused by this type of AI?
Tech giants are now exploring how to diversify the parameters in predictive AI systems, augmenting the current metrics with positive attributes: affinity, compassion, curiosity, nuance, personal story, reasoning, and respect. This approach aims to diminish the impact of extreme content.
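As a rough sketch of what such augmentation could look like, suppose each post carries classifier-produced scores in [0, 1] for the seven attributes; the weights and blending rule below are hypothetical, chosen only to show how engagement stops being the sole signal:

```python
# Hypothetical weights over the seven positive attributes (sum to 1).
POSITIVE_WEIGHTS = {
    "affinity": 0.2, "compassion": 0.2, "curiosity": 0.15,
    "nuance": 0.15, "personal_story": 0.1, "reasoning": 0.1,
    "respect": 0.1,
}

def blended_score(engagement: float, attributes: dict[str, float]) -> float:
    # "quality" falls in [0, 1]; content that scores poorly on the
    # positive attributes is dampened rather than amplified.
    quality = sum(w * attributes.get(name, 0.0)
                  for name, w in POSITIVE_WEIGHTS.items())
    return engagement * (0.5 + quality)
```

Under a rule like this, a post can no longer win the feed on outrage alone; it also has to score on respect, nuance, and the rest.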
Another strategy is to employ "bridging-based ranking," which uses AI to find common ground amid opposing views. Bridging connects two previously opposed groups by identifying perspectives that both can endorse, potentially narrowing the divide between them.
For example, X.com and the U.S. version of YouTube have introduced "community notes" features. Contributors can attach notes to content they find incomplete, inaccurate, or in need of additional context. Notes endorsed by both sides of a divide are presented as balanced context, rather than being selected solely by view counts.
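The production Community Notes ranking is open source and relies on matrix factorization over the full matrix of contributor ratings; the toy rule below is only meant to illustrate its core idea, scoring each note by its lowest approval rate across viewpoint clusters so that cross-camp agreement, not raw volume, is rewarded. The cluster labels and votes are invented:

```python
from statistics import mean

def bridging_score(ratings: dict[str, list[int]]) -> float:
    """ratings maps each viewpoint cluster to its helpful (1) /
    not-helpful (0) votes on a single note."""
    per_cluster = [mean(votes) for votes in ratings.values() if votes]
    # The minimum rewards notes that *every* camp finds helpful.
    return min(per_cluster) if per_cluster else 0.0

notes = {
    "note_a": {"camp_1": [1, 1, 1, 1, 0], "camp_2": [1, 1, 1, 0, 1]},
    "note_b": {"camp_1": [1, 1, 1, 1, 1], "camp_2": [0, 0, 1, 0, 0]},
}
ranked = sorted(notes, key=lambda n: bridging_score(notes[n]), reverse=True)
print(ranked)  # ['note_a', 'note_b']: broad agreement beats one camp's applause
```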
Over time, online discourse has become increasingly polarized because predictive AI systems amplify extreme emotions. Those who agree add fuel to the fire, while those who disagree push back even harder, deepening societal divisions. Meanwhile, the internet platforms that determine content ranking absolve themselves of responsibility, claiming it's "user-generated content" and obscuring the harm caused by their AI systems.
To change the status quo, we need to encourage more people to voluntarily participate and make algorithms kinder. Only then can we find common ground among divided groups and restore true freedom of expression.
I had a great call with Audrey Tang and Glen Weyl; they recently published https://www.plurality.net/ on the collaboration of technology and democracy.

Great call indeed & thanks for introducing me to nostr!
How Can Shrimps Turn Whales Around?
Since May 20th, I have been on a whirlwind EU tour, beginning with a speech at VivaTech, Europe's largest tech and startup event, in Paris, France. At the forefront of innovation, everyone is still talking about AI. But this time there is a noteworthy phenomenon: many Europeans are not entirely optimistic about AI, and some are even quite worried.
There are several fundamental reasons for this. Europe's long history and deep culture have traditionally been its strengths, but now, under the enormous wave of AI progress, Europe is forced to confront this new trend.
The primary challenge lies in the centralization of data. Cutting-edge AI technologies are currently concentrated within a few tech giants. When numerous European creations are used to train these proprietary AI models, it essentially means handing over the power to interpret Europe's accumulated culture and knowledge.
This concern is also present in the U.S., as illustrated by a recent news story.
Frank McCourt, the former owner of the Los Angeles Dodgers, is seeking to acquire TikTok's U.S. operations and has announced plans to "migrate TikTok to an open-source protocol." The move aims to restore users' autonomy over their digital identities and content, give creators a voice in TikTok's governance, and collaborate with other platforms to ensure interoperable content dissemination.
Why is this necessary? Because TikTok's AI learns your preferences to attract you, but it can also be used to manipulate you. These algorithms determine the priority of information each person receives, influencing society in opaque ways.
Many now understand that big platforms analyze viewing behavior to deliver precisely targeted content and ads. Although this can be distracting, once the habit is formed it becomes nearly impossible to break free.
So, what can be done?
One solution is to extend the relevant provisions of the EU's digital acts, requiring large platforms to achieve "interoperability." For instance, content posted on these platforms would have to be viewable, and open to interaction, by users on other platforms. This would prevent a few giants from monopolizing content, personal data, and user behavior records. For creators, it would also increase their options, freeing them from servitude to any single platform.
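Open federation protocols show this is technically feasible today. As a small illustration, here is a post expressed as a W3C ActivityStreams 2.0 object, the open format underlying ActivityPub (used by Mastodon and other federated platforms); the actor URL is hypothetical, and any server speaking the protocol could render and reply to such a post:

```python
import json

# A public post in the W3C ActivityStreams 2.0 vocabulary.
post = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://example.social/users/alice",  # hypothetical actor
    "to": ["https://www.w3.org/ns/activitystreams#Public"],
    "content": "Posted once, readable from any federated platform.",
}
print(json.dumps(post, indent=2))
```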
Another solution involves financial power, as exemplified by the former Dodgers' owner. By introducing open-source protocols and governance models similar to cooperative organizations, power can be "returned to the people." Even without corporate backing, leaders can first collaborate with other, smaller community platforms, gradually returning revenue sharing and control to users. Open-source AI models can then be used to recommend content that promotes mutual understanding, allowing good currency to drive out bad.
Although this may sound idealistic, it can guide a new vision. Governance of foreign platforms is a global challenge, and as the new wave of AI arrives, we should approach it from a different angle, harnessing the collective intelligence of many to foster positive change.