
Design Ethics for Gender-Based Violence and Safety Technologies

Authored (and organized) by Kate Sim and Ben Zevenbergen.

Digital technologies are increasingly proposed as innovative solutions to the problems and threats faced by vulnerable groups such as children, women, and LGBTQ people. However, there is a structural lack of consideration for gender and power relations in the design of Internet technologies, as previously discussed by scholars in media and communication studies (Barocas & Nissenbaum, 2009; boyd, 2001; Thakor, 2015) and technology studies (Balsamo, 2011; MacKenzie and Wajcman, 1999). The intersection between gender-based violence and technology deserves greater attention. To this end, scholars from the Center for Information Technology Policy at Princeton and the Oxford Internet Institute organized a workshop at Princeton in the Spring of 2017 to explore the design ethics of gender-based violence and safety technologies.

The workshop welcomed a wide range of participants: advocates working on intimate partner violence and sex work; engineers, designers, and developers; and academics working on IT ethics. The objectives of the day were threefold: (1) to better understand the lack of gender considerations in technology design, (2) to formulate critical questions for functional requirement discussions between advocates and developers of gender-based violence applications, and (3) to establish a set of criteria by which new applications can be assessed from a gender perspective.

Below, we share three conceptual takeaways from the workshop, followed by instructive primers for developers interested in creating technologies for those affected by gender-based violence.

 

Survivors, sex workers, and young people are intentional technology users

Increasing public awareness of the prevalence of gender-based violence, both online and offline, often frames survivors of gender-based violence, activists, and young people as vulnerable and helpless. Contrary to this representation, those affected by gender-based violence are intentional technology users, choosing to adopt or abandon tools as they see fit. For example, sexual assault victims strategically disclose their stories on specific social media platforms to mobilize collective action. Sex workers adopt locative technologies to make safety plans. Young people use secure search tools to find information about sexual health resources near them. To fully understand how and why some technologies appear to do more for these communities than others, developers need to pay greater attention to the depth of these communities’ lived experience with technology.

 

Context matters

Technologies designed with good intentions do not inherently achieve their stated objectives. Functions that we take for granted to be neutral, such as a ‘Find my iPhone’ feature, can have unintended consequences. In contexts of gender-based violence, both abusers and survivors appropriate these tools. For example, survivors and sex workers can use such a feature to share their whereabouts with friends in times of need. Abusers, on the other hand, can use the locative functions to stalk their victims. It is crucial to consider the context within which a technology is used, as well as the user’s relationship to their environment, their needs, and their interests, so that technologies can begin to support those affected by gender-based violence.

 

Vulnerable communities perceive unique affordances

Drawing from ecological psychology, technology scholars have described this tension between design and use as affordance: a user’s perception of what can and cannot be done with a device informs their use of it. Designers may create a technology with a specific use in mind, but users will appropriate, resist, and improvise their use of its features as they see fit. For example, the use of hashtags like #SurvivorPrivilege shows how rape victims create in-groups on Twitter to engage in supportive discussions, without intending for them to go viral.

 

ACTION ITEMS

  1. Predict unintended outcomes

Relatedly, thinking of devices as having affordances allows us to anticipate how technologies lead to unintended outcomes. Facebook’s ‘authentic name’ policy may have been instituted to promote safety for victims of relationship violence. In the social and political contexts in which it operates, however, the policy disproportionately affects the safety of human rights activists, drag queens, sex workers, and others, including survivors of partner violence.

 

  2. Question the default

Technology developers are in a position to design the default settings of their technology. Since such settings are typically left unchanged by users, developers must take into account their effect on the target end users. For example, the default notification setting for text messages displays the full message content on the home screen. A smartphone user may experience texting as a private activity, but the default setting lets other people who are physically co-present see the messages. Opting out of this default requires some technical knowledge from the user. In abusive relationships, an abuser can therefore easily read the victim’s text messages through this default setting. So, in designing smartphone applications for survivors, developers should question the default privacy setting, as the sketch below illustrates.
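To make this concrete, here is a minimal Kotlin sketch, our own illustration rather than anything presented at the workshop, of how an Android messaging app could default to hiding message content on the lock screen and showing a redacted public version instead. The channel ID, strings, and icon are placeholders, and the right default depends on the app’s threat model.

```kotlin
import android.app.Notification
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// Hypothetical channel ID for incoming messages.
const val MESSAGE_CHANNEL_ID = "incoming_messages"

// Create the channel with a privacy-preserving default: content is hidden on the
// lock screen unless the user explicitly changes the setting.
fun createMessageChannel(context: Context) {
    val channel = NotificationChannel(
        MESSAGE_CHANNEL_ID, "Messages", NotificationManager.IMPORTANCE_HIGH
    ).apply {
        lockscreenVisibility = Notification.VISIBILITY_PRIVATE
    }
    context.getSystemService(NotificationManager::class.java)
        .createNotificationChannel(channel)
}

// Post a notification whose public version reveals only that a message arrived,
// not who sent it or what it says. (Android 13+ also requires the
// POST_NOTIFICATIONS runtime permission before this call takes effect.)
fun notifyNewMessage(context: Context, sender: String, body: String) {
    val publicVersion = NotificationCompat.Builder(context, MESSAGE_CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_dialog_email) // placeholder icon
        .setContentTitle("New message")                    // no sender, no content
        .build()

    val notification = NotificationCompat.Builder(context, MESSAGE_CHANNEL_ID)
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle(sender)
        .setContentText(body)
        .setVisibility(NotificationCompat.VISIBILITY_PRIVATE)
        .setPublicVersion(publicVersion)
        .build()

    NotificationManagerCompat.from(context).notify(1, notification)
}
```

Whether content should be hidden by default is itself a design decision; the point is that the developer, not the user, effectively makes that decision for everyone who never opens the settings screen.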

 

  3. Inclusivity is not generalizability

Generalizability is often equated with inclusivity. An alarm button that claims to serve general safety purposes may take a one-size-fits-all approach by automatically connecting the user to law enforcement. In cases of sexual assault, survivors who are people of color, sex workers, or LGBTQ are likely to avoid such features precisely because of their connection to law enforcement. This means that those who are most vulnerable are inadvertently excluded from the feature. Alternatively, an alarm feature that centers on these communities may direct the user to local resources. Thus, a feature that is generalizable may overlook the target groups it aims to support; a more targeted feature may have less reach, but meet its objective. Just as communities’ needs are context-based, inclusivity, too, is contextual. Developers should realize that the broader mission of inclusivity can in fact be fulfilled by addressing a specific need, though this may reduce the scope of end users.

 

  4. Consider co-designing

How, then, can we develop targeted technologies? Workshop participants suggested co-design (similar to user-participatory design) as a process through which marginalized communities can take a leading role in developing new technologies. Instead of treating communities as passive recipients of technological tools, co-design positions both target communities and technologists as active agents who share skills and knowledge to develop innovative technological interventions.

 

  5. Involve funders and donors

Breakout group discussions pointed out how developers’ organizational and funding structures play a key role in shaping the kind of technologies they create. Suggested strategies included (1) educating donors about the specific social issue being addressed, (2) carefully considering whether funding sources meet developers’ objectives, and (3) ensuring diversity in the development team.

 

  6. Do no harm with your research

In conducting user research, academics and technologists aim to better understand marginalized groups’ technology use because these groups are typically at the forefront of adopting and appropriating digital tools. While it is important to expand our understanding of vulnerable communities’ everyday experience with technology, research on this topic can be used by authorities to further marginalize and target these communities. Consider, for example, how some tech startups have aligned with law enforcement in ways that negatively affect sex workers. To ensure that research done about communities actually contributes to supporting those communities, academics and developers must be vigilant and cautious about conducting ethical research that protects its subjects.

 

  7. Should this app exist?

The most important question to address at the beginning of a technology design process is: should there even be an app for this? The idea that technologies can solve social problems as long as technologists just “nerd harder” continues to guide the development and funding of new technologies. But many social problems are not data problems that can be solved with efficient design and padded with enhanced privacy features. One necessary early intervention is simply to ask whether technology truly has a place in the particular context and, if so, whether it addresses a specific need.

Our workshop began with big questions about the intersections of gender-based violence and technology, and concluded with a simple but piercing question: who designs what for whom? Implicated here are the complex workings of gender, sexuality, and power embedded in the lifetime of newly emerging devices, from design to use. Apps and platforms can certainly have their place in confronting social problems, but the flow of data and the information revealed must be carefully tailored to the target context. If you want to be involved with these future projects, please contact the organizers.

The workshop was funded by Princeton’s Center for Information Technology Policy, Princeton’s University Center for Human Values, the Ford Foundation, the Mozilla Foundation, and Princeton’s Council on Science and Technology.

 

 

LinkedIn reveals your personal email to your connections

[Huge thanks to Dillon Reisman, Arvind Narayanan, and Joanna Huey for providing great feedback on early drafts.]

LinkedIn makes the primary email address associated with an account visible to all direct connections, as well as to people who have your email address in their contact lists. By default, the primary email address is the one that was used to sign up for LinkedIn. While the primary address may be changed to another email in your account settings, there is no way to prevent your contacts from visiting your profile and viewing whatever email you have chosen as primary. In addition, LinkedIn’s current data archive export feature allows users to download their connections’ email addresses in bulk. It seems that the archive export includes all emails associated with an account, not just the one designated as primary.

It appears that many of these addresses are personal, rather than professional. This post uses the contextual integrity (CI) privacy framework to consider whether the access given by LinkedIn violates the privacy norms of using a professional online social network.

On Encryption, Archiving, and Accountability

“As Elites Switch to Texting, Watchdogs Fear Loss of Accountability,” says a headline in today’s New York Times. The story describes a rising concern among rule enforcers and compliance officers:

Secure messaging apps like WhatsApp, Signal and Confide are making inroads among lawmakers, corporate executives and other prominent communicators. Spooked by surveillance and wary of being exposed by hackers, they are switching from phone calls and emails to apps that allow them to send encrypted and self-destructing texts. These apps have obvious benefits, but their use is causing problems in heavily regulated industries, where careful record-keeping is standard procedure.

Among those “industries” is the government, where laws often require that officials’ work-related communications be retained, archived, and available to the public under the Freedom of Information Act. The move to secure messaging apps frustrates these goals.

The switch to more secure messaging is happening, and for good reason: old-school messages are increasingly vulnerable to compromise. The DNC and the Clinton campaign are among the many organizations that have paid a price for underestimating these risks.

The tradeoffs here are real. But this is not just a case of choosing between insecure-and-compliant or secure-and-noncompliant. The new secure apps have three properties that differ from old-school email: they encrypt messages end-to-end from the sender to the receiver; they sometimes delete messages quickly after they are transmitted and read; and they are set up and controlled by the end user rather than the employer.

If the concern is lack of archiving, then the last property–user control of the account, rather than employer control–is the main problem. And of course that has been a persistent problem even with email. Public officials using their personal email accounts for public business is typically not allowed (and when it happens by accident, messages are supposed to be forwarded to official accounts so they will be archived), but unreported use of personal accounts has been all too common.

Much of the reporting on this issue (but not the Times article) makes the mistake of conflating the personal-account problem with the fact that these apps use encryption. There is nothing about end-to-end encryption of data in transit that is inconsistent with archiving. The app could record messages and then upload them to an archive–with this upload also protected by end-to-end encryption as a best practice.

The second property of these apps–deleting messages shortly after use–has more complicated security implications. Again, the message becoming unavailable to the user shortly after use need not conflict with archiving. The message could be uploaded securely to an archive before deleting it from the endpoint device.
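As a rough sketch of what this could look like, and not a description of any existing app, the following Kotlin code encrypts a message under a fresh AES-GCM key, wraps that key with the archive’s RSA public key so that only the archive operator can read the record, and deletes the local copy only after a hypothetical upload function reports success. Key distribution, authentication of the archive, and the transport layer are all assumed away here.

```kotlin
import java.security.PublicKey
import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

// One archived record: the wrapped per-message AES key, the GCM nonce, and the
// message ciphertext. Only the archive's RSA private key can unwrap the AES key.
class ArchivedMessage(val wrappedKey: ByteArray, val nonce: ByteArray, val ciphertext: ByteArray)

fun encryptForArchive(message: ByteArray, archivePublicKey: PublicKey): ArchivedMessage {
    // Fresh symmetric key for this message only.
    val aesKey = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()

    // Encrypt the message with AES-GCM for confidentiality and integrity.
    val nonce = ByteArray(12).also { SecureRandom().nextBytes(it) }
    val aes = Cipher.getInstance("AES/GCM/NoPadding")
    aes.init(Cipher.ENCRYPT_MODE, aesKey, GCMParameterSpec(128, nonce))
    val ciphertext = aes.doFinal(message)

    // Wrap the AES key so that only the archive can recover it.
    val rsa = Cipher.getInstance("RSA/ECB/OAEPWithSHA-256AndMGF1Padding")
    rsa.init(Cipher.WRAP_MODE, archivePublicKey)
    val wrappedKey = rsa.wrap(aesKey)

    return ArchivedMessage(wrappedKey, nonce, ciphertext)
}

// Hypothetical endpoint flow: archive first, delete the local copy only on success.
fun archiveThenDelete(
    message: ByteArray,
    archivePublicKey: PublicKey,
    upload: (ArchivedMessage) -> Boolean, // assumed network call, e.g. over TLS
    deleteLocalCopy: () -> Unit
) {
    val record = encryptForArchive(message, archivePublicKey)
    if (upload(record)) {
        deleteLocalCopy()
    }
}
```

Because archiving happens at the endpoint, after the message has already been decrypted for the user, it does not weaken the end-to-end encryption of messages in transit; the new risk is concentrated in the archive itself, as discussed below.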

You might ask why the user should lose access to a message when that message is still stored in an archive. But this makes some sense as a security precaution. Most compromises of communications happen through the user’s access, for example because an attacker can get the user’s login credentials by phishing. Taking away the user’s access, while retaining access in a more carefully guarded archive, is a reasonable security precaution for sensitive messages.

But of course the archive still poses a security risk. Although an archive ought to be more carefully protected than a user account would be, the archive is also a big, high-value target for attackers. The decision to create an archive should not be taken lightly, but it may be justified if the need for accountability is strong enough and the communications are not overly sensitive.

The upshot of all of this is that the most modern approaches to secure communication are not entirely incompatible with the kind of accountability needed for government and some other users. Accountable versions of these types of services could be created. These would be less secure than the current versions, but more secure than old-school communications. The barriers to creating them are institutional, not technical.