December 16, 2017

Getting serious about research ethics: Security and Internet Measurement

[This blog post is a continuation of our series about research ethics in computer science that we started last week]

Research projects in the information security and Internet measurement sub-disciplines typically interact with third-party systems or devices to collect large amounts of data. Scholars in these fields are interested in collecting data about technical phenomena. Because of the widespread use of the Internet, however, their experiments can interfere with people's use of their devices and reveal all sorts of private information, such as browsing behaviour. As awareness of this unintended impact on Internet users grew, these communities have spent considerable time debating their ethical standards at conferences, in dedicated workshops, and in journal publications. Their efforts have culminated in guidelines on topics such as vulnerability disclosure and privacy, with the aim of protecting unsuspecting Internet users and the humans implicated in technical research.


Prof. Nick Feamster, Prof. Prateek Mittal, moderator Prof. Elana Zeide, and I discussed some important considerations for research ethics in a panel dedicated to these sub-disciplines at the recent CITP conference on research ethics in computer science communities. We started by explaining that gathering empirical data is crucial to infer the state of values such as privacy and trust in communication systems. However, because methodological choices in computer science often have ethical impacts, researchers need to be empowered to reflect meaningfully on their experimental setup.


Prof. Feamster discussed several cases where he had sought advice from ethical oversight bodies but was left with unsatisfying guidance. For example, when his team conducted Internet censorship measurements (pdf), they were aware that they were initiating requests and creating data flows from devices owned by unsuspecting Internet users. These new information flows were created in realms where adversaries were also operating, for example in the form of government censors, which may pose a risk to the owners of the devices implicated in the experimentation and data collection. The ethics board, however, concluded that such measurements did not meet the strict definition of “human subjects research”, which thereby excluded the need for formal review. Prof. Feamster suggests that computer scientists reassess how the technologies they build and the data flows they initiate can be misused by adversaries, and take that into account in ethical review procedures.


Ethical tensions and dilemmas in technical Internet research can themselves be interesting research problems for scholars, argued Prof. Mittal. For example, to reason about privacy and trust in the anonymous Tor network, researchers need to understand to what extent adversaries can exploit vulnerabilities and thus observe the Internet traffic of individual users. The obvious, relatively easy, and ethically dubious measurement would be to attack existing Tor nodes and attempt to collect real-time traffic of identifiable users. Prof. Mittal instead gave an insight into his own critical engagement with alternative design choices, which led his team to create a new node within Princeton’s university network that they subsequently attacked. This more lab-based approach eliminated the risks for unsuspecting Internet users while allowing the same inferences to be drawn.


I concluded the panel by suggesting that ethics review boards at universities, academic conferences, and scholarly journals engage actively with computer scientists, so that valuable data can be collected whilst respecting human values. Currently, a panel of non-experts in either computer science or research ethics is given a single moment to judge the full methodology of a research proposal or the resulting paper. When a thumbs-down is issued, researchers have little or no opportunity to remedy their ethical shortcomings. I argued that a better approach would be an iterative process with in-person meetings and more in-depth consideration of design alternatives, as demonstrated in a recent paper about Advertising as a Platform for Internet measurements (pdf). This is the approach advocated in the Networked Systems Ethics Guidelines. Cross-disciplinary conversation, rather than one-time decisions, allows for a mutual understanding between the gatekeepers of ethical standards and the designers of useful computer science research.


See the video of the panel here.

How to buy physical goods using Bitcoin with improved security and privacy

Bitcoin has found success as a decentralized digital currency, but it is only one step toward decentralized digital commerce. Indeed, creating decentralized marketplaces and mechanisms is a nascent and active area of research. In a new paper, we present escrow protocols for cryptocurrencies that bring us closer to decentralized commerce.

In any online sale of physical goods, there is a circular dependency: the buyer only wants to pay once he receives his goods, but the seller only wants to ship them once she’s received payment. This is a problem regardless of whether one pays with bitcoins or with dollars, and the usual solution is to utilize a trusted third party. Credit card companies play this role, as do platforms such as Amazon and eBay. Crucially, the third party must be able to mediate in case of a dispute and determine whether the seller gets paid or the buyer receives a refund.

A key requirement for successful decentralized marketplaces is to weaken the role of such intermediaries, both because they are natural points of centralization and because unregulated intermediaries have tended to prove untrustworthy. In the infamous Silk Road marketplace, buyers would send payment to Silk Road, which would hold it in escrow. Note that escrow is necessary because it is not possible to reverse cryptocurrency transactions, unlike credit card payments. If all went well, Silk Road would forward the money to the seller; otherwise, it would mediate the dispute. Time and time again, the operators of these marketplaces have absconded with the funds in escrow, underscoring that this isn’t a secure model.

Lately, various services have offered a more secure version of escrow payment based on 2-of-3 multisignature transactions: the buyer, the seller, and a trusted third party each hold one key. The buyer pays into a multisignature address that requires any two of these three keys to sign in order for the money to be spent. If the buyer and seller are in agreement, they can jointly issue payment. If there’s a dispute, the third party mediates: the third party and the winner of the dispute then use their respective keys to issue a payout transaction to the winner.
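To make the mechanics concrete, here is a minimal, self-contained Python sketch of the 2-of-3 logic. It only simulates signatures (with HMACs) rather than calling a real Bitcoin library, so the key names and helper functions are illustrative; an actual deployment would use Bitcoin's native multisignature scripts (a 2-of-3 OP_CHECKMULTISIG redeem script).

```python
# Toy illustration of the 2-of-3 escrow logic described above.
# A real deployment would use a Bitcoin multisignature script, e.g. a
# redeem script of the form:
#   2 <pubkey_buyer> <pubkey_seller> <pubkey_mediator> 3 OP_CHECKMULTISIG
# Here signatures are simulated with HMACs to keep the sketch runnable.
import hashlib
import hmac

def sign(key: bytes, tx: bytes) -> bytes:
    """Stand-in for an ECDSA signature over a payout transaction."""
    return hmac.new(key, tx, hashlib.sha256).digest()

def verify(key: bytes, tx: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, tx), sig)

def can_spend(keys, tx, sigs) -> bool:
    """Escrow rule: at least 2 of the 3 keys must have signed tx."""
    matched = sum(any(verify(k, tx, s) for s in sigs) for k in keys)
    return matched >= 2

buyer, seller, mediator = b"buyer-key", b"seller-key", b"mediator-key"
keys = [buyer, seller, mediator]

# Happy path: buyer and seller agree; the mediator is never involved.
pay_seller = b"pay escrowed coins to seller"
assert can_spend(keys, pay_seller,
                 [sign(buyer, pay_seller), sign(seller, pay_seller)])

# Dispute: the mediator sides with the buyer; buyer + mediator suffice.
refund_buyer = b"refund escrowed coins to buyer"
assert can_spend(keys, refund_buyer,
                 [sign(buyer, refund_buyer), sign(mediator, refund_buyer)])

# The mediator alone can never move the funds.
assert not can_spend(keys, pay_seller, [sign(mediator, pay_seller)])
```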

This escrow protocol has two nice features. First, if there’s no dispute, the buyer and seller can settle without involving the third party. Second, the third party cannot run away with the money, as it holds only one key while two are necessary to spend the escrowed funds.

Until now, the escrow conversation has generally stopped here. But in our paper we ask several further important questions. To start, there are privacy concerns. Unless the escrow protocol is carefully designed, anyone observing the blockchain might be able to spot escrow transactions. They might even be able to tell which transactions were disputed, and connect those to specific buyers and sellers.

In a previous paper, we showed that using multisignatures to split control over a wallet leads to major privacy leaks, and we advocated using threshold signatures instead of multisignatures. It turns out that using multisignatures for escrow has similar negative privacy implications. While using 2-of-3 threshold signatures instead of multisignatures would solve the privacy problem, it would introduce other undesirable features in the context of escrow as we explain in the paper.

Moreover, the naive escrow protocol above has a gaping security flaw: even though the third party cannot steal the money, it can refuse to mediate any disputes and thus keep the money locked up.

In addition to these privacy and security requirements, we study group escrow. In such a system, the transacting parties may choose multiple third parties from among a set of escrow service providers and have them mediate disputes by majority vote. Again, we analyze both the privacy and the security of the resulting schemes, as well as the details of group formation and communication.
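As a rough illustration of the group-escrow idea, the sketch below tallies dispute votes from the chosen escrow providers and awards the payout to whichever party wins a strict majority. The provider names and the simple tallying rule are assumptions made for illustration, not the exact construction from the paper.

```python
# Rough sketch of majority-vote dispute resolution in group escrow.
# Provider names and the strict-majority rule are illustrative assumptions.
from collections import Counter

def resolve_dispute(votes):
    """votes maps each chosen escrow provider to 'buyer' or 'seller'."""
    winner, count = Counter(votes.values()).most_common(1)[0]
    if count > len(votes) / 2:   # require a strict majority of providers
        return winner
    return None                  # no majority: funds remain locked

votes = {"escrow-A": "seller", "escrow-B": "seller", "escrow-C": "buyer"}
print(resolve_dispute(votes))    # -> seller
```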

Our goal in this paper is not to provide a definitive set of requirements for escrow services. We spoke with many Bitcoin escrow companies in the course of our research — it’s a surprisingly active space — and realized that there is no single set of properties that works for every use-case. For example, we’ve looked at privacy as a desirable property so far, but buyers may instead want to be able to examine the blockchain and identify how often a given seller was involved in disputes. In our paper, we present a toolbox of escrow protocols as well as a framework for evaluating them, so that anyone can choose the protocol that best fits their needs and be fully aware of the security and privacy implications of that choice.

We’ll present the paper at the Financial Cryptography conference in two weeks.

New Workshop on Technology and Consumer Protection

[Joe Calandrino is a veteran of Freedom to Tinker and CITP. As longtime readers will remember, he did his Ph.D. here, advised by Ed Felten. He recently joined the FTC as research director of OTech, the Office of Technology Research and Investigation. Today we have an exciting announcement. — Arvind Narayanan.]

Arvind Narayanan and I are thrilled to announce a new Workshop on Technology and Consumer Protection (ConPro ’17) to be co-hosted with the IEEE Symposium on Security and Privacy (Oakland) in May 2017:

Advances in technology come with countless benefits for society, but these advances sometimes introduce new risks as well. Various characteristics of technology, including its increasing complexity, may present novel challenges in understanding its impact and addressing its risks. Regulatory agencies have broad jurisdiction to protect consumers against certain harmful practices (typically called “deceptive and unfair” practices in the United States), but sophisticated technical analysis may be necessary to assess practices, risks, and more. Moreover, consumer protection covers an incredibly broad range of issues, from substantiation of claims that a smartphone app provides advertised health benefits to the adequacy of practices for securing sensitive customer data.

The Workshop on Technology and Consumer Protection (ConPro ’17) will explore computer science topics with an impact on consumers. This workshop has a strong security and privacy slant, with an overall focus on ways in which computer science can prevent, detect, or address the potential for technology to deceive or unfairly harm consumers. Attendees will skew towards academic and industry researchers but will include researchers from government agencies with a consumer protection mission, including the Federal Trade Commission—the U.S. government’s primary consumer protection body. Research advances presented at the workshop may help improve the lives of consumers, and discussions at the event may help researchers understand how their work can best promote consumer welfare given laws and norms surrounding consumer protection.

We have an outstanding program committee representing an incredibly wide range of computer science disciplines—from security, privacy, and e-crime to usability and algorithmic fairness—and touching on fields across the social sciences. The workshop will be an opportunity for these different disciplinary perspectives to contribute to a shared goal. Our call for papers discusses relevant topics, and we encourage anyone conducting research in these areas to submit their work by the January 10 deadline.

Computer science research—and computer security research in particular—excels at advancing innovative technical strategies to mitigate potential negative effects of digital technologies on society, but measures beyond strictly technical fixes also exist to protect consumers. How can our research goals, methods, and tools best complement laws, regulations, and enforcement? We hope this workshop will provide an excellent opportunity for computer scientists to consider these questions and find even better ways for our field to serve society.