Abstract

As the last few years have seen an increase in both online hostility and polarization, we need to move beyond the fact-checking reflex or the praise for better moderation on social networking sites (SNS) and investigate their impact on social structures and social cohesion. In particular, the role of recommender systems deployed by Very Large Online Platforms (VLOPs) such as Facebook or Twitter has been overlooked. This paper draws on the literature in cognitive science, digital media, and opinion dynamics to propose a faithful replica of the entanglement between recommender systems, opinion dynamics, and users' cognitive biases on SNSs such as Twitter, calibrated on a large-scale longitudinal database of tweets from political activists. This model makes it possible to compare the consequences of various recommendation algorithms on the social fabric and to quantify their interaction with major cognitive biases. In particular, we demonstrate that recommender systems that seek solely to maximize users' engagement necessarily lead to a polarization of the opinion landscape, to a concentration of social power in the hands of the most toxic users, and to an overexposure of users to negative content (up to 300% for some of them), a phenomenon called algorithmic negativity bias. Toxic users are more than twice as numerous in the top 1% of the most influential users as in the overall population. Overall, our results highlight the systemic risks generated by certain implementations of recommender systems and the urgent need to comprehensively identify those implementations that are harmful to individuals and society. This is a necessary step in setting up future regulations for systemic SNSs, such as the European Digital Services Act.

Keywords: Opinion Dynamics, Social Networking Sites, Recommender Systems, Cognitive Bias, Polarization, Complex Systems

Take home messages

Recommender systems that seek solely to maximize users' engagement pose systemic risks for our societies:

  • they necessarily lead to a polarization of the opinion landscape;
  • they lead to a concentration of social power in the hands of the most toxic users;
  • they lead to an average +26% algorithmic negativity bias (cf. Toxic Data), i.e. an overexposure of users to negative content (up to 300% for some of them).

Advances in opinion dynamics modeling.
Calibrating opinion dynamics models on real-world data (Twitter messages) shows that:

  • Influence between social media users, measured as the probability of retweeting a message, decays roughly exponentially as the difference of opinion increases, with some refinements revealing political strategies. This suggests that users' tolerance of differing opinions is not a step function, as in Hegselmann-Krause (H-K) models for example, but an exponentially decreasing one (see the sketch after this list).
  • Assuming a homogeneous functional form, the opinion fusion function that best fits the empirical data, among the set of arithmetic and trigonometric functions considered, is the linear one, already widely used in the opinion dynamics literature (Deffuant et al. 2000; Jager & Amblard 2005).
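The minimal sketch below, in Python, illustrates the two findings above: an exponentially decaying influence kernel (contrasted with the step kernel of bounded-confidence / H-K models) combined with a linear, Deffuant-style opinion fusion rule. It is not the paper's calibrated model; the decay rate LAM, confidence bound EPS, and convergence rate MU are arbitrary placeholder values, not parameters estimated from the Twitter data.

```python
import numpy as np

# Placeholder parameters (assumptions, not calibrated values from the paper)
LAM = 2.0   # decay rate of the exponential influence kernel
EPS = 0.5   # tolerance threshold of the step (bounded-confidence) kernel
MU = 0.5    # convergence rate of the linear fusion rule

def influence_exponential(delta):
    """Probability of influence (e.g. retweeting) decays exponentially
    with the absolute difference of opinion |delta|."""
    return np.exp(-LAM * np.abs(delta))

def influence_step(delta):
    """Bounded-confidence (H-K-like) kernel, for comparison: influence is
    all-or-nothing depending on whether |delta| is below the threshold."""
    return float(np.abs(delta) <= EPS)

def linear_fusion(x_i, x_j):
    """Linear opinion fusion (Deffuant et al. 2000): the receiver moves a
    fraction MU of the way towards the emitter's opinion."""
    return x_i + MU * (x_j - x_i)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, size=1000)        # opinions on a 1-D axis
    for _ in range(10_000):
        i, j = rng.integers(0, x.size, size=2)   # receiver i, emitter j
        delta = x[j] - x[i]
        if rng.random() < influence_exponential(delta):  # probabilistic influence
            x[i] = linear_fusion(x[i], x[j])
    print(f"opinion spread after interactions: std = {x.std():.3f}")
```

Swapping influence_exponential for influence_step in the interaction loop reproduces the classical bounded-confidence behavior, which makes the contrast with the empirically supported exponential kernel easy to explore.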

Interactive visualization and resources