April 25, 2024

After the Facebook emotional contagion experiment: A proposal for a positive path forward

Now that some of the furor over the Facebook emotional contagion experiment has passed, it is time for us to decide what should happen next. The public backlash has the potential to drive a wedge between the tech industry and the social science research community. This would be a loss for everyone: tech companies, academia, and the public. In the age of big data, the interaction between social scientists and tech companies could yield a richer understanding of human behavior and new ideas about how to solve some of society’s most important problems. Given these opportunities, we must develop a framework within which this research can continue, but continue in a responsible way.

I think that tech companies — in collaboration with other stakeholders — should develop Human-Subjects Research Oversight Committees (HSROCs) that would review, improve, and ultimately accept or reject plans for human-subjects research, much like the Institutional Review Boards (IRBs) that exist at U.S. universities. Based on my own experience conducting online experiments, serving on the Princeton University IRB, and working at a tech company — I’m currently an employee of Microsoft Research while on leave from Princeton — I’ve developed five principles that I think should govern tech companies’ HSROCs.

Before describing these principles, I think it worthwhile to quickly review some of the history of human-subjects research and the current practices in the tech industry; any consideration of what to do going forward should be informed by the past and present. First, we must acknowledge that scientists have done awful things to other people, all in the name of research. This history is real; it should not be ignored; and it creates a legitimate demand for ethical oversight of human-subjects research. In response to past ethical failures, researchers have developed basic principles that can guide research involving human subjects: the Nuremberg Code (1947), the Declaration of Helsinki (1964), and most recently the Belmont Report (1979). Building on the Belmont Report, the U.S. government created the Federal Policy for the Protection of Human Subjects — known as the “Common Rule” — which governs research funded by the federal government and therefore governs IRBs at U.S. universities.

Tech companies, however, because their research is not federally funded, are not governed by the Common Rule and do not have IRBs. Currently, the practices governing the design, implementation, and publication of human-subjects research differ across tech companies, and these practices are not transparent. This state of affairs is not surprising because the conduct of social science research within tech companies is relatively new. Further, many of the researchers inside tech companies were trained in computer science and engineering — not in the social sciences —  so they have little prior experience with the ethical dilemmas of human-subjects research.

Given that background, I propose five principles for HSROCs: they should be 1) restricted in scope, 2) focused on balancing risks and benefits, 3) transparent, 4) dynamic, and 5) diverse.

  • Restricted in scope

The single most important decision about HSROCs is defining their scope. I believe that they should cover all “human-subjects research.” Because this definition is so critical, let me be very explicit. I propose that HSROCs adopt the definition of “human-subjects research” offered in a recent National Research Council report:

Human-subjects research is systematic investigation designed to develop or contribute to generalizable knowledge by obtaining data about a living individual directly through interaction or intervention or by obtaining identifiable private information about an individual.

Critically, this definition also clarifies what is not human-subjects research, and two points are worth highlighting. First, human-subjects research must involve human subjects (“obtaining data about a living individual”).  Therefore, research on topics like quantum computers or fault-tolerant algorithms would not be under the purview of HSROCs.  As best I can tell, much of the published research currently conducted at Microsoft, Google, Twitter, LinkedIn, and Facebook does not involve human subjects, although the proportion varies from company to company.

Second, under this definition, human-subjects research must be “designed to develop or contribute to generalizable knowledge,” which most likely would exclude experiments conducted to address business questions specific to a single company. For example, Google’s experiment to see which of 41 shades of blue would increase click-through rates is business research, not scientific research seeking “generalizable knowledge.” In other words, this definition of human-subjects research makes a distinction between scientific research, which seeks generalizable knowledge, and what I would call business research, which seeks actionable information for a single company.

Because scope specification is so important and because drawing the line in the way I propose excludes almost all activities happening inside of tech companies right now, I’d like to address several possible objections. First, critics might argue that this restricted scope does not address the real dangers posed by technology companies and their power over us. And, I would agree. The concentration of data and power inside of companies with little ethical oversight is problematic. However, it is worth pointing out that this issue is larger than tech companies: political campaigns, credit card companies, and casinos all use massive amounts of data and controlled experimentation in order to change our behavior. As a society we should address this issue, but developing oversight systems for business research is beyond the scope of this modest proposal.

Second, some might object that this proposal would unfairly place stronger ethical controls on scientific research than on business research. It is true that HSROCs would create a dual system inside of companies, but I think it is important for people involved in scientific research to play by a higher set of rules. Society grants scientists great privileges because they are deemed to be working for the creation of knowledge in service of the common good, and we should not be in an ethical race to the bottom. In fact, strong ethical procedures governing scientific research inside of companies might eventually serve as a model for procedures governing business research.

A third critique is that the line between scientific research and business research is blurry and that companies could circumvent HSROCs by merely relabeling their research. This is certainly possible, but one of the responsibilities of HSROCs would be to clarify this boundary, much as university IRBs have developed an extensive set of guidelines clarifying what research is exempt from IRB review.

Acknowledging these limitations, I would like to describe two benefits of making HSROCs restricted in scope.  First, it is consistent with the existing ethical norms governing human-subjects research.  In fact, the first section of the Belmont Report — the report which served as a basis for the rules governing university IRBs — explicitly sets boundaries between “practice” and “research” (although in the context of medical research).  A second benefit of a restricted scope is that it increases the chance of voluntary adoption by tech companies.

  • Focused on balancing risks and benefits

Like IRBs, HSROCs should help researchers balance the risks and benefits of human-subjects research. One important but subtle point about balance is that it does not require the elimination of all risk. However, research involving more than “minimal risk” — risk beyond what participants would experience in their everyday lives — requires stronger justification of the benefits to participants and to society more generally. When presented with a research plan that does not strike an appropriate balance between risks and benefits, an HSROC could stop the project completely or could require modifications that would yield a better balance. For example, an HSROC could require that a specific project exclude participants under 18 years old. In addition to changes that decrease risk, an HSROC could also require changes that increase benefit. For example, an HSROC could require that the resulting research be published open access so that everyone in society can benefit from it, not just people at institutions that provide access to costly academic journals. I think that a strong emphasis on open access publication is one way that industrial HSROCs could provide strong leadership for university IRBs.

  • Transparent

Facebook has research oversight procedures that were put into place some time after the emotional contagion study was conducted in 2012. Here’s what the Wall Street Journal reported:

Facebook said that since the study on emotions, it has implemented stricter guidelines on Data Science team research. Since at least the beginning of this year, research beyond routine product testing is reviewed by a panel drawn from a group of 50 internal experts in fields such as privacy and data security. Facebook declined to name them.

Company research intended to be published in academic journals receives additional review from in-house experts on academic research. Some of those experts are also on the Data Science team, Facebook said, declining to name the members of that panel.

Secret procedures followed by unnamed people will not have legitimacy with the public or the academic community, no matter how excellent those procedures might be. The secret nature of the process at Facebook contrasts with university IRBs, all of which have extensive websites that are accessible to everyone, both inside the university and outside (here’s the website of the Princeton IRB, for example). At a minimum, HSROCs should make their meeting schedules, guiding documents, and the names of committee members publicly available through similar websites. In addition to creating legitimacy, transparency would also be a contribution to the wider scientific community because the procedures of HSROCs could help university IRBs. In fact, my guess is that many university IRBs would love to learn about the Facebook review system in order to improve their own systems governing online research.

  • Dynamic

HSROCs should be designed in a way that enables them to change at the pace of technology, something that university IRBs do not do. The Common Rule, which governs university IRBs, was formally enacted in 1991, and it took 20 years before the Department of Health and Human Services began the process of updating it. That update process, which began in 2011, is still not complete. Changing federal regulations is complicated, so I appreciate the need for these processes to be cautious and consultative. However, HSROCs must be able to adapt more quickly. For example, user expectations about data collection, retention, and secondary use will probably change substantially in the next few years, and this evolution of user expectations will create the need for changes in the kinds of consent and debriefing that HSROCs require.

  • Diverse

When you take away all the technical language and bureaucracy, a university IRB is just a bunch of people sitting in a room trying to make a decision. Therefore, it is critical that diverse experiences and views are represented in the room. As per the Common Rule guidelines on membership, the Princeton IRB has a mix of faculty, staff, and members of the wider community (e.g., a physician, a member of the clergy).  My experience has been that these members of the community provide a great benefit to our discussions. Any review board within a tech company should include people with different backgrounds, training, and experience. It should also include people who are not employees of the company.

 

Turning these five principles — restricted in scope, focused on balancing risks and benefits, transparent, dynamic, and diverse — into a concrete set of rules and standards for an HSROC would involve a lot of careful thought, and I imagine that, at least initially, HSROCs would vary from company to company.   However, as I hope this post has demonstrated, it is possible to bring ethical oversight to scientific research involving tech companies (or companies in any industry) without requiring radical changes to the business practices of these companies.

The promise of social research in the digital age comes with a dark side. Our increased ability to collect, store, and analyze data, which can enable us to learn so much, can also increase the harm that we can cause. If no oversight systems are put into place, I predict that we will see ethical lapses as serious as the Milgram obedience experiment or the Stanford prison experiment, but rather than involving dozens of people, these Internet-scale lapses could harm millions of people.

There are a lot of hard decisions that need to be made going forward.  What forms of consent and debriefing should be required for online experiments?  Does the dramatically different scale of online experimentation require rethinking of existing concepts such as “minimal risk”?  What research is permissible with the terabytes of non-experimental data generated by users every day?  All of these decisions will be made, either explicitly or implicitly, and they will be made by people just like you and me.  The difficulty of these decisions means that we should develop institutions that will enable us, as a community, to make these decisions wisely.

Comments

  1. Ian Graham says

    So all Facebook would have had to do would be to define the research and analysis as “business research”, not publish it, but use it in their filtering of information, and everything would be hunky-dory? The fundamental ethical issues are with the practice of the research, not its publication. As in your quote of Facebook’s “improved” procedures, the HSROCs are at least in part about restricting publication rather than avoiding unethical research. And who polices the HSROCs? In Europe I see support for a regulatory agency, but US commentators seem to imagine these bodies as self-policing.