Behind the Screen
Copyright 2019 by Sarah T. Roberts.
All rights reserved.
This book may not be reproduced, in whole or in part, including illustrations, in any form (beyond that copying permitted by Sections 107 and 108 of the U.S. Copyright Law and except by reviewers for the public press), without written permission from the publishers.
Yale University Press books may be purchased in quantity for educational, business, or promotional use. For information, please e-mail the publisher's U.S. or U.K. office.
Set in Minion type by IDS Infotech Ltd.
Printed in the United States of America.
ISBN 978-0-300-23588-3 (hardcover : alk. paper)
Library of Congress Control Number: 2018963972
A catalogue record for this book is available from the British Library.
This paper meets the requirements of ANSI/NISO Z39.48-1992 (Permanence of Paper).
10 9 8 7 6 5 4 3 2 1
For my grandparents
For my parents
For Patricia
Human-Computer Interaction ... I mean, what other kind is there?
DR. CHRISTINE PAWLEY, 2009
INTRODUCTION
Behind the Internet
This book represents the culmination of eight years of research into the work of commercial content moderation of the internet, the workers who do it, and the reasons their work is both essential and, seemingly paradoxically, invisible. Commercial content moderators are professional people paid to screen content uploaded to the internet's social media sites on behalf of the firms that solicit user participation. Their job is to evaluate and adjudicate online content generated by users and decide if it can stay up or must be deleted. They act quickly, often screening thousands of images, videos, or text postings a day. And, unlike the virtual community moderators of an earlier internet and of some prominent sites today, they typically have no special or visible status to speak of within the internet platform they moderate. Instead, a key to their activity is often to remain as discreet and undetectable as possible.
Content moderation of online social and information spaces is not new; people have been creating and enforcing rules of engagement in online social spaces since the inception of those spaces and throughout the past four decades. What is new, however, is the industrial-scale organized content moderation activities of professionals who are paid for their evaluative gatekeeping services, and who undertake the work they do on behalf of large-scale commercial entities: social media firms, news outlets, companies that have an online presence they would like to have managed, apps and dating tools, and so on. It is a phenomenon that has grown up at scale alongside the proliferation of social media, digital information seeking, and online connected social and other activity as a part of everyday life.
As a result of the incredible global scale, reach, and impact of mainstream social media platforms, these companies demand a workforce dispersed around the world, responding to their need for monitoring and brand protection around the clock, every single day. Once the scope of this activity became clear to me, along with the realization that talking about it would require coining a descriptive name for this work and the people who do it, I settled on the term commercial content moderation to reflect the new reality. I also use some other terms interchangeably to stand in for commercial content moderators, such as moderators or mods, screeners, or other, more generic terms; unless otherwise specifically noted, I am always talking about the professional people who do this work as a job and for a source of income. There are many terms to describe this type of work, and employers, as well as the moderators themselves, may use any one of them or others even more abstract.
Of course, commercial content moderators are not literally invisible; indeed, if anyone should seek them out, they will be there: in plush Silicon Valley tech headquarters, in sparse cube farms in warehouses or skyscrapers, in rural America or hyperurban Manila, working from home on a laptop in the Pacific Northwest while caring for kids, in places around the world. But the work they do, the conditions under which they do it, and for whose benefit are all largely imperceptible to the users of the platforms that pay for and rely upon this labor. In fact, this invisibility is by design.
The goal of this book is therefore to counter that invisibility and to put these workers and the work they do front of mind: to raise awareness about the fraught and difficult nature of such front-line online screening work, but also to give the rest of us the information we need to engage with more detail, nuance, and complexity in conversations about the impact of social media in our interpersonal, civic, and political lives. We cannot do the latter effectively if we do not know, as they say, how the sausage gets made.
The process of identifying and researching the phenomenon of commercial content moderation has connected me with numerous people in a variety of stages of life, from different socioeconomic classes, cultural backgrounds, and life experiences. It has necessitated travel to parts of the world previously unfamiliar to me, and has led me to study the work of scholars of the history, politics, and people of places like the Philippines, while also considering the daily realities of people located in rural Iowa. It has made connections for me between Silicon Valley and India, between Canada and Mexico, and among workers who may not have even recognized themselves as peers. It has also necessitated the formulation of theoretical framings and understandings that I use as a navigation tool. I hope to make those connections across time and space for them, and for all of us.
In the United States, using the internet as a means of communication and social connection can be traced to some of its earliest moments, such as when researchers at UCLA in 1969 attempted to transmit a message from one computer node to another connected through ARPANET, the internet's precursor. People used command-line programs, such as talk, on the Unix operating system to communicate in real time long before anyone had ever heard of texting. They sent messages to one another in a new form called email that, at one time, made up the majority of data transmissions crossing the internet's networks. Others shared news, debated politics, discussed favorite music, and circulated pornography in Usenet newsgroups. All were virtual communities of sorts, connecting computer users to one another years before the birth of Facebook's founders. Each site of communication developed its own protocols, its own widely accepted practices, its own particular flavor, social norms, and culture.
Because access to internet-connected computers was not commonplace in the first decades of its existence (there was not even a preponderance of personal computers in everyone's home at this time), access to this nascent internet was largely the province of people affiliated with universities or research and development institutes, mostly in the United States but also in Great Britain and northern Europe. Despite the seemingly homogeneous backgrounds of these early users, people found plenty about which to disagree. Political, religious, and social debates, lengthy arguments, insults, trolling, and flame wars were all common in the early days, even as we continue to struggle with these issues online today.
To contend with these challenges, as well as to develop and enforce a sense of community identity in many early internet social spaces, the users themselves often created rules, participation guidelines, behavioral norms, and other forms of self-governance and control, and anointed themselves, or other users, with superuser status that would allow them to enforce these norms from both a social and a technological standpoint. In short, these spaces moderated users, their behavior, and the material on them. Citing research by Alexander R. Galloway and Fred Turner, I described the early social internet in an encyclopedia entry as follows: