A first-of-its-kind study has analysed data from multiple social media platforms spanning more than three decades, revealing that toxic interactions persist irrespective of the platform.

By Mr Shamim Quadir (Senior Communications Officer)

Published today in Nature, a new study has identified recurring ‘toxic’ patterns in human conversation on social media that are common to users irrespective of the platform used, the topic of discussion, and the decade in which the conversation took place.

In particular, the study suggests that prolonged conversations on social media are more prone to toxicity and polarisation, as divergent viewpoints in debate escalate online disagreement.

Contrary to the prevailing assumption, the study suggests that toxic interactions do not deter users from engaging; they continue to participate actively in conversations. It also suggests that toxicity does not necessarily escalate as discussions evolve.

The study was led by the Center for Data Science and Complexity for Society at the Department of Computer Science, Sapienza University of Rome, in collaboration with City, University of London and the Institute of Complex Systems, CNR, Rome.

Growing concern surrounds the impact of social media platforms on public discourse and their influence on social dynamics, especially in the context of toxicity.

The study employed a comparative approach across eight social media platforms to explore critical factors related to the persistence of toxic interactions in digital communities. The platforms included the more contemporary Facebook, Reddit, Gab, and YouTube, and the older USENET, a worldwide distributed discussion system established in 1980, over a decade before the world wide web became available to the general public. The dataset comprised more than 500 million user comments spanning a period of 34 years.

The analysis adopted the definition of ‘toxicity’ used by state-of-the-art classifier software, which considers a toxic comment to be “a rude, disrespectful or unreasonable comment likely to make someone leave a discussion”.
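The article does not name the classifier, but the quoted definition matches that used by toxicity-scoring services such as Google's Perspective API. As a minimal, illustrative sketch only, and assuming a Perspective-style REST endpoint with a placeholder API key, a single comment could be scored along these lines:

```python
"""Illustrative sketch: scoring one comment for toxicity with a
Perspective-style classifier. The tool and threshold are assumptions;
the article does not specify the study's implementation."""
import requests

# Hypothetical placeholder -- substitute a real credential.
API_KEY = "YOUR_API_KEY"
ENDPOINT = (
    "https://commentanalyzer.googleapis.com/v1alpha1/"
    f"comments:analyze?key={API_KEY}"
)


def toxicity_score(text: str) -> float:
    """Return a toxicity score in [0, 1] for a single comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(ENDPOINT, json=payload, timeout=10)
    response.raise_for_status()
    data = response.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


if __name__ == "__main__":
    comment = "You clearly have no idea what you are talking about."
    score = toxicity_score(comment)
    # A comment might be labelled 'toxic' above some chosen cut-off;
    # the 0.6 used here is illustrative, not taken from the paper.
    print(f"toxicity={score:.2f}, toxic={score > 0.6}")
```

In this framing, each of the 500 million comments would receive a score, and comments above a chosen cut-off would be counted as toxic; the specific threshold shown above is illustrative rather than drawn from the study.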

The core finding of the study indicates a complex interplay between harmful content and participation in online debates. It suggests that users are resilient to negativity in digital environments, and it should inform policymakers' understanding of those environments and their consequent decision-making.

Despite the evolution of social media platforms and changing social norms over three decades, the findings reveal a marked consistency in user interaction dynamics, pointing to a constant human component.

Professor Andrea Baronchelli, Professor of Complexity Science at City, University of London, Token Economy theme lead at The Alan Turing Institute, and co-author of the study, said:

“Analysing multiple platforms is key to isolating genuinely human behavioural patterns from simple reactions to the idiosyncratic online environments. The attention is too often focused on the specific platform, forgetting human nature. Our study is an important step to change this attitude and move the spotlight back on who we are and how we act.”

Professor Walter Quattrociocchi, Lead at the Center for Data Science and Complexity for Society at the Department of Computer Science, Sapienza University of Rome, said:

“This research represents a significant advancement in understanding online social dynamics and how they are influenced by algorithms, moving beyond the focus on single platforms. The results underscore the broad implications of algorithmic influence on social interactions.

“The study highlights the crucial importance of data science in analysing and interpreting online human behaviour, confirming that toxic behaviour is a deeply ingrained aspect of digital interactions.”

Watch Professor Andrea Baronchelli discuss his research on the physics of socio-technical systems

