
Electronic data

  • sec24winter-final60

    Accepted author manuscript, 2.13 MB, PDF document

    Available under license: CC BY: Creative Commons Attribution 4.0 International License


Guardians of the Galaxy: Content Moderation in the InterPlanetary File System

Research output: Contribution to conference - Without ISBN/ISSN, Conference paper, peer-review

Forthcoming
  • Saidu Sokoto
  • Leonhard Balduf
  • Dennis Trautwein
  • Yiluo Wei
  • Gareth Tyson
  • Ignacio Castro
  • Onur Ascigil
  • George Pavlou
  • Maciej Korczynski
  • Bjorn Scheuermann
  • Michał Król
Publication date: 2/06/2024
Original language: English
Event: Usenix Security Symposium - Philadelphia, United States
Duration: 14/08/2024 - 16/08/2024
https://www.usenix.org/conference/usenixsecurity24

Conference

Conference: Usenix Security Symposium
Abbreviated title: Usenix Sec
Country/Territory: United States
City: Philadelphia
Period: 14/08/24 - 16/08/24
Internet address: https://www.usenix.org/conference/usenixsecurity24

Abstract

The InterPlanetary File System (IPFS) is one of the largest platforms in the growing “Decentralized Web”. Its increasing popularity has attracted large volumes of users and content. Unfortunately, some of this content could be considered “problematic”. Content moderation is always hard; with a completely decentralized infrastructure and administration, it is even harder in IPFS. In this paper, we examine this challenge. We identify, characterize, and measure the presence of problematic content in IPFS (e.g. content subject to takedown notices). Our analysis covers 368,762 files. We analyze the complete content moderation process, including how these files are flagged, who hosts them, and who retrieves them, and we measure the efficacy of the process. We analyze content submitted to a denylist, showing that notable volumes of problematic content are served and that the lack of a centralized approach facilitates its spread. While we observe fast reactions to takedown requests, we also test the resilience of multiple gateways and show that existing means of filtering problematic content can be circumvented. We end by proposing improvements to content moderation that yield a 227% increase in the detection of phishing content and reduce the average time to filter such content by 43%.
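The denylist-based filtering the abstract describes can be illustrated with a minimal sketch. It assumes a scheme (common in practice, though not necessarily the exact one studied in the paper) where denylist entries store a hash of the content identifier (CID) rather than the CID itself, and a gateway checks each requested CID against that list before serving it. The CIDs and the entry format below are purely illustrative.

```python
import hashlib

def denylist_entry(cid: str) -> str:
    # Hash the CID so the denylist never publishes the identifier
    # in the clear (an anonymized-entry format, assumed here for
    # illustration).
    return hashlib.sha256(cid.encode("utf-8")).hexdigest()

def is_blocked(cid: str, denylist: set) -> bool:
    # A gateway would run this check before serving the content
    # behind the requested CID.
    return denylist_entry(cid) in denylist

# Hypothetical denylist with one flagged CID.
denylist = {denylist_entry("bafy-example-bad-cid")}

print(is_blocked("bafy-example-bad-cid", denylist))   # flagged: refuse to serve
print(is_blocked("bafy-example-good-cid", denylist))  # not flagged: serve
```

Because each gateway applies its own copy of such a list independently, a client can circumvent filtering by simply retrying the same CID at a gateway that has not (yet) ingested the entry, which is consistent with the circumvention finding in the abstract.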