June 29, 2017

Breaking, fixing, and extending zero-knowledge contingent payments

The problem of fair exchange arises often in business transactions — especially when those transactions are conducted anonymously over the internet. Alice would like to buy a widget from Bob, but there’s a circular problem: Alice refuses to pay Bob until she receives the widget whereas Bob refuses to send Alice the widget until he receives payment.

In a previous post, we described the fair-exchange problem and solutions for buying physical goods using Bitcoin. Today is the first of two posts in which we focus on purchasing digital goods and services. In a new paper, written together with researchers from the City University of New York and the IMDEA Software Institute in Madrid, we show that Zero-Knowledge Contingent Payments (ZKCP), a well-known protocol for the fair exchange of digital goods over the blockchain, is insecure in its current form. We show how to fix ZKCP, and also extend it to a new class of problems.
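The core idea ZKCP builds on is a hash-locked payment: the seller encrypts the good, publishes the hash of the key, and proves in zero knowledge that the preimage of that hash decrypts the ciphertext to the promised good; the buyer then locks a payment that can only be claimed by revealing the key on-chain, which simultaneously lets the buyer decrypt. A minimal sketch of the hash-lock part follows; the toy XOR cipher and all names here are our own illustration, not the paper's construction:

```python
import hashlib
import os

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy stream cipher: XOR the data with a keystream derived from the key.
    # (Illustration only; a real ZKCP instance uses a proper cipher.)
    keystream = b""
    counter = 0
    while len(keystream) < len(data):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, keystream))

# Seller: encrypt the digital good and publish (ciphertext, H(key)).
good = b"the digital good Alice wants"
key = os.urandom(32)
ciphertext = xor_encrypt(good, key)
key_hash = hashlib.sha256(key).hexdigest()
# In the real protocol, the seller also supplies a zero-knowledge proof
# that decrypting the ciphertext with the preimage of key_hash yields
# the promised good -- that proof is where the paper finds the flaw.

def redeem(payment_locked_to: str, revealed_key: bytes) -> bool:
    # Buyer locks payment to key_hash; claiming it requires the preimage.
    return hashlib.sha256(revealed_key).hexdigest() == payment_locked_to

# Seller reveals the key to claim payment; buyer can now decrypt.
assert redeem(key_hash, key)
assert xor_encrypt(ciphertext, key) == good
```

Revealing the key and receiving payment happen in one atomic blockchain transaction, which is what breaks the circular-trust problem described above.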

What does it mean to ask for an “explainable” algorithm?

One of the standard critiques of using algorithms for decision-making about people, and especially for consequential decisions about access to housing, credit, education, and so on, is that the algorithms don’t provide an “explanation” for their results or the results aren’t “interpretable.” This is a serious issue, but discussions of it are often frustrating. The reason, I think, is that different people mean different things when they ask for an explanation of an algorithm’s results.

Before unpacking the different flavors of explainability, let’s stop for a moment to consider that the alternative to algorithmic decision-making is human decision-making. And let’s consider the drawbacks of relying on the human brain, a mechanism that is notoriously complex, difficult to understand, and prone to bias. Surely an algorithm is more knowable than a brain. After all, with an algorithm it is possible to examine exactly which inputs factored into a decision, and every detailed step of how these inputs were used to get to a final result. Brains are inherently less transparent, and no less biased. So why might we still complain that the algorithm does not provide an explanation?

We should also dispense with cases where the algorithm is simply inaccurate: a well-informed analyst can understand the algorithm but will see it as producing answers that are wrong. That is a problem, but it is not a problem of explainability.

So what are people asking for when they say they want an explanation? I can think of at least four types of explainability problems.

The first type of explainability problem is a claim of confidentiality. Somebody knows relevant information about how a decision was made, but they choose to withhold it because they claim it is a trade secret, or that disclosing it would undermine security somehow, or that they simply prefer not to reveal it. This is not a problem with the algorithm, it’s an institutional/legal problem.

The second type of explainability problem is complexity. Here everything about the algorithm is known, but somebody feels that the algorithm is so complex that they cannot understand it. It will always be possible to answer what-if questions, such as how the algorithm’s result would have been different had the person been one year older, or had an extra $1000 of annual income, or had one fewer prior misdemeanor conviction, or whatever. So complexity can only be a barrier to big-picture understanding, not to understanding which factors might have changed a particular person’s outcome.
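The what-if point above can be made concrete with a small sketch. The scoring function below is entirely made up, standing in for an arbitrarily complex model; the factor names and weights are hypothetical. The point is only that any algorithm, however opaque its overall logic, still answers single-factor counterfactual questions:

```python
def risk_score(age: int, income: int, priors: int) -> float:
    # Hypothetical stand-in for an arbitrarily complex decision model.
    return 0.4 * priors - 0.00001 * income - 0.01 * (age - 18)

# Score one person, then re-score with exactly one input changed.
base = risk_score(age=30, income=40000, priors=1)
one_year_older = risk_score(age=31, income=40000, priors=1)
extra_income = risk_score(age=30, income=41000, priors=1)
one_fewer_prior = risk_score(age=30, income=40000, priors=0)

# Each difference answers a what-if question about a single factor,
# even if the model's big-picture behavior resists summary.
print(base - one_year_older)    # effect of being one year older
print(base - extra_income)      # effect of $1000 more annual income
print(base - one_fewer_prior)   # effect of one fewer prior conviction
```

The same probing works on a model with millions of parameters: complexity blocks a compact global description, not these local comparisons.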

The third type of explainability problem is unreasonableness. Here the workings of the algorithm are clear, and are justified by statistical evidence, but the result doesn’t seem to make sense. For example, imagine that an algorithm for making credit decisions considers the color of a person’s socks, and this is supported by unimpeachable scientific studies showing that sock color correlates with defaulting on credit, even when controlling for other factors. So the decision to factor in sock color may be justified on a rational basis, but many would find it unreasonable, even if it is not discriminatory in any way. Perhaps this is not a complaint about the algorithm but a complaint about the world: the algorithm is using a fact about the world, but nobody understands why the world is that way. What is difficult to explain in this case is not the algorithm, but the world that it is predicting.

The fourth type of explainability problem is injustice. Here the workings of the algorithm are understood but we think they are unfair, unjust, or morally wrong. In this case, when we say we have not received an explanation, what we really mean is that we have not received an adequate justification for the algorithm’s design. The problem is not that nobody has explained how the algorithm works or how it arrived at the result it did. Instead, the problem is that it seems impossible to explain how the algorithm is consistent with law or ethics.

It seems useful, when discussing the explanation problem for algorithms, to distinguish these four cases, and any others that people might come up with, so that we can zero in on what the problem is. In the long run, all of these types of complaints are addressable, so that perhaps explainability is not a unique problem for algorithms but rather a set of commonsense principles that any system, algorithmic or not, must attend to.

Job Opening: Associate Director at Princeton CITP

Princeton’s Center for Information Technology Policy (CITP), which, among other things, hosts Freedom to Tinker, is looking for a new Associate Director. Please come and work with us!

CITP is an interdisciplinary nexus of expertise in technology, engineering, public policy, and the social sciences. In keeping with the strong University tradition of service, the Center’s research, teaching, and events address digital technologies as they interact with society.

The Associate Director is the primary public face of CITP and plays a vital role in the management and direction of the Center, as the principal administrator for our core research, education and outreach both on campus and beyond. The position involves a combination of academic and administrative tasks.

This individual:

- develops, plans, and executes the Center’s lecture series, workshops, and policy briefings;
- recruits visiting researchers and policy experts, coordinating the selection and appointment process;
- contributes to the Center’s research initiatives;
- promotes and supports the Center’s undergraduate certificate offerings and other student programs;
- develops the Center budget in collaboration with the Director and is responsible for the careful and appropriate management of Center funds;
- edits the Center’s research blog;
- performs day-to-day Center operations;
- cultivates high-profile research collaborations and joint public events with other institutions;
- manages public communications through the website and print materials;
- coordinates grant writing and development initiatives as appropriate; and
- supervises two administrative staff members.

This is a three-year term position with the possibility of renewal.

All applicants must apply on Princeton’s jobs website, requisition number 2017-7403:

https://main-princeton.icims.com/jobs/7403/associate-director%2c-center-for-information-technology-policy/job?mobile=false&width=1200&height=500&bga=true&needsRedirect=false&jan1offset=-300&jun1offset=-240