August 19, 2017

When the cookie meets the blockchain

Cryptocurrencies are portrayed as a more anonymous and less traceable method of payment than credit cards. So if you shop online and pay with Bitcoin or another cryptocurrency, how much privacy do you have? In a new paper, we show just how little.

Websites, including shopping sites, typically embed dozens of third-party trackers. These third parties track sensitive details of payment flows, such as the items you add to your shopping cart and their prices, regardless of how you choose to pay. Crucially, we find that many shopping sites leak enough information about your purchase to trackers that they can link it uniquely to the payment transaction on the blockchain. From there, there are well-known ways to further link that transaction to the rest of your Bitcoin wallet addresses. You can protect yourself by using browser extensions such as Adblock Plus and uBlock Origin, and by using Bitcoin anonymity techniques like CoinJoin. These measures help, but we find that linkages are still possible.
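
To make the linkage step concrete, here is a minimal Python sketch of the general idea, not the paper's actual methodology: a tracker that learns a purchase's price and rough checkout time can search the blockchain for transactions whose output value and timing match. The transaction list, exchange rate, tolerance, and time window below are hypothetical placeholders; a real analysis would also have to account for exchange-rate fluctuation and transaction fees, which widen the candidate set.

from dataclasses import dataclass

@dataclass
class Tx:
    txid: str          # transaction hash
    btc_out: float     # payment output value in BTC
    timestamp: int     # Unix time the transaction appeared

def candidate_transactions(transactions, usd_price, usd_per_btc,
                           purchase_time, window_s=3600, tol=1e-4):
    """Return on-chain transactions consistent with a leaked purchase.

    Converts the leaked USD price to BTC at the prevailing exchange rate
    and keeps transactions whose output value matches within `tol` BTC
    and whose timestamp falls within `window_s` seconds of checkout.
    If only one transaction survives, the purchase is linked uniquely.
    """
    expected_btc = usd_price / usd_per_btc
    return [
        tx for tx in transactions
        if abs(tx.btc_out - expected_btc) <= tol
        and abs(tx.timestamp - purchase_time) <= window_s
    ]

# Hypothetical example: a $49.99 purchase with BTC trading at $4,100.
txs = [
    Tx("a1", 0.012193, 1503151200),   # matches the leaked purchase
    Tx("b2", 0.500000, 1503151300),   # does not match
]
print(candidate_transactions(txs, 49.99, 4100.0, purchase_time=1503151200))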

 

Figure: an illustration of the full scope of our attack. Consider three websites that happen to have the same embedded tracker. Alice makes purchases and pays with Bitcoin on the first two sites, and logs in on the third. Merchant A leaks a QR code of the transaction’s Bitcoin address to the tracker, merchant B leaks a purchase amount, and merchant C leaks Alice’s PII. Such leaks are commonplace today, and usually intentional. The tracker links these three purchases based on Alice’s browser cookie. Further, the tracker obtains enough information to uniquely (or near-uniquely) identify coins on the Bitcoin blockchain that correspond to the two purchases. However, Alice took the precaution of putting her bitcoins through CoinJoin before making purchases. Thus, neither transaction individually could have been traced back to Alice’s wallet, but only one wallet participated in both CoinJoins, and it is hence revealed to be Alice’s.

 

Using the privacy measurement tool OpenWPM, we analyzed 130 e-commerce sites that accept Bitcoin payments, and found that 53 of these sites leak transaction details to trackers. Many, but not all, of these leaks are by design, to enable advertising and analytics. Further, 49 sites leak personal identifiers to trackers: names, emails, usernames, and so on. This combination means that trackers can link real-world identities to Bitcoin addresses. To be clear, all of this leaked data is sitting in the logs of dozens of tracking companies, and the linkages can be done retroactively using past purchase data.
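
As a rough illustration of what detecting such a leak can look like, the following Python sketch, which is not OpenWPM's actual interface, scans the URLs of third-party requests recorded during a crawl for known identifiers such as the shopper's email address. The log format and values are hypothetical, and real leaks are often encoded or hashed, which a full analysis would also need to handle.

from urllib.parse import urlparse, unquote

def find_leaks(requests, first_party, identifiers):
    """Flag third-party requests whose URL contains a known identifier.

    `requests` is a list of (site, request_url) pairs from a crawl, and
    `identifiers` maps a label (e.g. 'email') to the value that might leak.
    """
    leaks = []
    for site, url in requests:
        host = urlparse(url).hostname or ""
        if first_party in host:
            continue                      # same-party request, not a leak
        decoded = unquote(url)
        for label, value in identifiers.items():
            if value in decoded:
                leaks.append((site, host, label))
    return leaks

# Hypothetical request log for a single checkout page.
log = [
    ("shop.example", "https://shop.example/cart?item=42"),
    ("shop.example",
     "https://tracker.example/pixel?u=alice%40example.com&total=49.99"),
]
print(find_leaks(log, "shop.example", {"email": "alice@example.com"}))
# [('shop.example', 'tracker.example', 'email')]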

On a subset of these sites, we made real purchases using bitcoins that we first “mixed” using the CoinJoin anonymity technique.[1] We found that a tracker that observed two of our purchases — a common occurrence — would be able to identify our Bitcoin wallet 80% of the time. In our paper, we present the full details of our attack as well as a thorough analysis of its effectiveness.
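
The intuition behind this attack on CoinJoin can be sketched in a few lines of Python (hypothetical data; the paper's analysis is considerably more involved). Tracing each observed payment backward through its CoinJoin yields a set of candidate wallet clusters, and because the tracker's cookie ties the payments to the same person, the true wallet must lie in the intersection of those sets.

def intersect_candidates(*candidate_sets):
    """Intersect the candidate wallet clusters behind each linked payment.

    Each argument is the set of clusters that could have funded one payment
    after tracing back through a CoinJoin. Payments linked by a tracking
    cookie belong to the same person, so the true wallet is in every set;
    a singleton intersection deanonymizes the user.
    """
    return set.intersection(*candidate_sets)

# Hypothetical CoinJoins with five participants each; wallet "W3" took
# part in both, so it is the only cluster the two sets share.
purchase_1 = {"W1", "W2", "W3", "W4", "W5"}
purchase_2 = {"W3", "W6", "W7", "W8", "W9"}
print(intersect_candidates(purchase_1, purchase_2))   # {'W3'}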

Our findings are a reminder that systems without provable privacy properties may have unexpected information leaks and lurking privacy breaches. When multiple such systems interact, the leaks can be even more subtle. Anonymity in cryptocurrencies seems especially tricky, because it inherits the worst of both data anonymization (sensitive data must be publicly and permanently stored on the blockchain) and anonymous communication (privacy depends on subtle interactions arising from the behavior of users and applications).

[1] In this experiment we used 1–2 rounds of mixing. We provide evidence in the paper that while a higher mixing depth decreases the effectiveness of the attack, it doesn’t defeat it. There’s room for a more careful study of the tradeoffs here.

Getting serious about research ethics in computer science

Digital technology mediates our public and private lives. That makes computer science a powerful discipline, but it also means that ethical considerations are essential in the development of these technologies. Not all new developments are welcomed by users: consider, for example, a patent application by Facebook describing technology to identify users’ emotions through the cameras on their devices. A critical approach to developing digital technologies, guided by philosophical and ethical principles, will allow interventions that improve society in meaningful ways.

The Center for Information Technology Policy recently organized a conference to discuss research ethics in different computer science communities, such as machine learning, security, and Internet measurement.  This blog post is the first in a series that summarizes and builds on the panel discussions at the conference.

Prof. Arvind Narayanan points out that computer science sub-communities have traditionally developed their own community standards about what is considered to be ethical. See for example responsible vulnerability disclosure standards in information security, or the Menlo Report for the Internet measurement discipline. This allows norms and standards to be tailored to the needs of sub-disciplines. However, the increasing responsibilities of researchers and sub-communities, arising from the increasing power and reach of computer science, are sometimes met with confusion. There is a tendency to see ethical considerations as a “policy issue” to be dealt with by others.

Prof. Melissa Lane of the University Center for Human Values points out that while ethics is rooted in understanding community standards and norms, these do not exhaust it, as researchers in computer science and other fields are sometimes tempted to think. Rather, the academic study of ethics provides the tools to critically reflect on these norms and to challenge existing and new practices. A meaningful research ethics for computer science therefore does not just translate existing norms into functional requirements, but explores how values are enabled, operationalized, or stifled through technology. A careful analysis of a particular context may even uncover values that were previously taken for granted or not even recognized as norms. Think, for example, of “disattendability”: the idea of going about your business without anyone tracking you or paying attention to you. We usually take this for granted in the physical world, but on the Internet, ad trackers, among others, actively violate this norm on an ongoing basis. By understanding the effects of design choices and methodologies, ethics guides technology designers to choose the most appropriate approach among the available alternatives.

Ethics is known for its somewhat conflicting theories, such as consequentialism (“the ends justify the means”) and deontology (“Act in such a way that you treat humanity […] never merely as a means to an end, but always at the same time as an end”). Prof. Susan Brison cautions against an approach that simply takes an ethical theory and applies it to a technology. She raised the question of whether computer science and data science research may require new types of ethics, or evolved versions of existing theories. Digital data is changing the underlying properties of information, challenging our traditional ways of thinking in important ways. For example, micro-targeting bespoke political messages to individuals circumvents the ability to let ‘good speech’ drown out ‘bad speech’, a foundational idea behind freedom of speech.

In my research, I’ve found that ethical guidelines can be incomplete, inaccessible, or conflicting, and existing legal statutes from previous technological eras may not be directly applicable to current technology. This has resulted in computer science communities being somewhat confused about their ethical and legal responsibilities. The upcoming posts in this series will explore some of the ethical standards in machine learning, security, algorithmic transparency, and Internet measurement. We welcome any feedback to move this discussion forward at a crucial time for the ethics of computer science.

See the introduction to the conference here.

Design Ethics for Gender-Based Violence and Safety Technologies

Authored by Kate Sim and Ben Zevenbergen, who also organized the workshop described below.

Digital technologies are increasingly proposed as innovative solutions to the problems and threats faced by vulnerable groups such as children, women, and LGBTQ people. However, there is a structural lack of consideration for gender and power relations in the design of Internet technologies, as previously discussed by scholars in media and communication studies (Barocas & Nissenbaum, 2009; boyd, 2001; Thakor, 2015) and technology studies (Balsamo, 2011; MacKenzie & Wajcman, 1999). Yet the intersection between gender-based violence and technology deserves greater attention. To this end, scholars from the Center for Information Technology Policy at Princeton and the Oxford Internet Institute organized a workshop at Princeton in the spring of 2017 to explore the design ethics of gender-based violence and safety technologies.

The workshop brought together a wide range of participants: advocates working on intimate partner violence and sex work, as well as engineers, designers, developers, and academics working on IT ethics. The objectives of the day were threefold: (1) to better understand the lack of gender considerations in technology design; (2) to formulate critical questions for functional-requirement discussions between advocates and developers of gender-based violence applications; and (3) to establish a set of criteria by which new applications can be assessed from a gender perspective.

Below, we share three conceptual takeaways from the workshop, followed by instructive primers for developers interested in creating technologies for those affected by gender-based violence.

 

Survivors, sex workers, and young people are intentional technology users

Increasing public awareness of the prevalence of gender-based violence, both online and offline, often frames survivors of gender-based violence, activists, and young people as vulnerable and helpless. Contrary to this representation, those affected by gender-based violence are intentional technology users, choosing to adopt or abandon tools as they see fit. For example, sexual assault victims strategically disclose their stories on specific social media platforms to mobilize collective action. Sex workers adopt locative technologies to make safety plans. Young people use secure search tools to find information about sexual health resources near them. To understand how and why some technologies serve these communities better than others, developers need to pay greater attention to the depth of users’ lived experience with technology.

 

Context matters

Technologies designed with good intentions do not inherently achieve their stated objectives. Functions we take to be neutral, such as a ‘Find my iPhone’ feature, can have unintended consequences. In contexts of gender-based violence, both abusers and survivors appropriate these technological tools. For example, survivors and sex workers can use such a feature to share their whereabouts with friends in times of need. Abusers, on the other hand, can use the same locative functions to stalk their victims. It is crucial to consider the context in which a technology is used, as well as the user’s relationship to their environment, their needs, and their interests, so that technologies can begin to support those affected by gender-based violence.

 

Vulnerable communities perceive unique affordances

Drawing from ecological psychology, technology scholars have described this tension between design and use in terms of affordances: a user’s perception of what can and cannot be done on a device informs their use. Designers may create a technology with a specific use in mind, but users will appropriate, resist, and improvise their use of its features as they see fit. For example, hashtags like #SurvivorPrivilege show how rape victims create in-groups on Twitter to engage in supportive discussions, without intending for them to go viral.

 

ACTION ITEMS

  1. Predict unintended outcomes

Relatedly, thinking of devices as having affordances helps us anticipate how technologies can lead to unintended outcomes. Facebook’s ‘authentic name’ policy may have been instituted to promote safety for victims of relationship violence. The social and political contexts in which this policy operates, however, disproportionately affect the safety of human rights activists, drag queens, sex workers, and others, including survivors of partner violence.

 

  2. Question the default

Technology developers are in a position to design the default settings of their technology. Since such settings are typically left unchanged by users, developers must take into account their effect on the target end users. For example, the default notification setting for text messages displays the full message content on the home screen. A smartphone user may experience texting as a private activity, but the default setting lets other people who are physically co-present read the messages. Opting out of this default requires some technical knowledge from the user. In abusive relationships, the abuser can therefore easily read the victim’s text messages through this default setting. So, in designing smartphone applications for survivors, developers should question the default privacy settings.

 

  3. Inclusivity is not generalizability

Generalizability is often equated with inclusivity. An alarm button intended for general safety purposes may take a one-size-fits-all approach by automatically connecting the user to law enforcement. In cases of sexual assault, survivors who are people of color, sex workers, or LGBTQ are especially likely to avoid such features precisely because of their connection to law enforcement. This means that those who are most vulnerable are inadvertently excluded from the feature. Alternatively, an alarm feature that centers on these communities may direct the user to local resources. Thus, a feature that is generalizable may overlook the target groups it aims to support; a more targeted feature may have less reach, but meet its objective. Just as communities’ needs are context-based, inclusivity, too, is contextualized. Developers should realize that the broader mission of inclusivity can in fact be advanced by addressing a specific need, even if this narrows the set of end users.

 

  4. Consider co-designing

How, then, can we develop targeted technologies? Workshop participants suggested co-design (similar to user-participatory design) as a process through which marginalized communities can take a leading role in developing new technologies. Instead of treating communities as passive recipients of technological tools, co-design positions both target communities and technologists as active agents who share skills and knowledge to develop innovative technological interventions.

 

  5. Involve funders and donors

Breakout group discussions pointed out how developers’ organizational and funding structures play a key role in shaping the kind of technologies they create. Suggested strategies included (1) educating donors about the specific social issue being addressed, (2) carefully considering whether funding sources meet developers’ objectives, and (3) ensuring diversity in the development team.

 

  6. Do no harm with your research

In conducting user research, academics and technologists aim to better understand marginalized groups’ technology use, because these groups are typically at the forefront of adopting and appropriating digital tools. While it is important to expand our understanding of vulnerable communities’ everyday experience with technology, research on this topic can be used by authorities to further marginalize and target these communities. Consider, for example, how some tech startups align with law enforcement in ways that negatively affect sex workers. To ensure that research about communities actually contributes to supporting those communities, academics and developers must be vigilant and cautious about conducting ethical research that protects its subjects.

 

  7. Should this app exist?

The most important question to address at the beginning of a technology design process is: should there even be an app for this? The idea that technologies can solve social problems as long as technologists just “nerd harder” continues to guide the development and funding of new technologies. Many social problems are not data problems that can be solved by an efficient design and padded with enhanced privacy features. One necessary early intervention is simply to ask whether technology truly has a place in the particular context and, if so, whether it addresses a specific need.

Our workshop began with big questions about the intersections of gender-based violence and technology, and concluded with a simple but piercing question: Who designs what for whom? Implicated here are the complex workings of gender, sexuality, and power embedded in the lifetime of newly emerging devices, from design to use. Apps and platforms can certainly have their place in confronting social problems, but the flow of data and the information revealed must be carefully tailored to the target context. If you want to be involved with these future projects, please contact Kate Sim or Ben Zevenbergen.

The workshop was funded by Princeton’s Center for Information Technology Policy, Princeton’s University Center for Human Values, the Ford Foundation, the Mozilla Foundation, and Princeton’s Council on Science and Technology.