Do global legal frameworks hold social media platforms accountable for hosting content that incites violence?

Thesis / Dissertation

2025


University of Cape Town

Abstract
The digital era has witnessed an unprecedented expansion in social media platforms' use, influence, and societal impact.1 Sixty percent of the global population uses social media, with the daily exchange of messages reaching into the billions.2 As of 2023, Facebook boasts 2.98 billion monthly active users,3 YouTube exceeds 2.68 billion users,4 and X (formerly Twitter) has around 450 million users.5 These platforms give users a largely unrestricted capacity to express views and communicate, often with minimal and inconsistent oversight, while also facilitating the concealment of user identities.6 While this technological advancement has opened new avenues for global connectivity and communication, it has also given rise to an alarming increase in the spread of hate speech.7 Over the last twenty years, these online platforms have evolved into environments where hateful narratives and stereotypes flourish unchecked, aimed primarily at marginalized groups, and have contributed to communal violence, ethnic cleansing, and even genocide.8 Major platforms such as Facebook, X, and YouTube have been criticized both for failing to remove harmful content promptly and effectively and for mistakenly removing content that does not breach their policies.9 This research endeavours to comprehensively investigate the accountability of social media platforms in addressing and mitigating the impact of hate speech that fuels acts of violence within the public sphere. Legal, ethical, and technological perspectives will be considered to examine the responsibilities borne by social media platforms in moderating user-generated content. A detailed analysis of existing legal frameworks, both national and international, governing hate speech and its consequences will be conducted to evaluate whether social media platforms are held accountable for content that incites violence.
A comparative analysis of diverse social media platforms will be integral to this research, considering variations in their policies, enforcement mechanisms, and responsiveness to instances of hate speech that incites violence. Case studies will be examined to illustrate specific incidents, shedding light on the challenges social media platforms face and the repercussions of inadequately addressing hate speech. This research aims to determine the legal responsibilities and accountability of social media platforms for hosting content that incites violence, and to examine whether current measures are sufficient to address this critical issue.