May 22, 2018

How to constructively review a research paper

Any piece of research can be evaluated on three axes:

  • Correctness/validity — are the claims justified by evidence?
  • Impact/significance — how will the findings affect the research field (and the world)?
  • Novelty/originality — how big a leap are the ideas, especially the methods, compared to what was already known?

There are additional considerations such as the clarity of the presentation and appropriate citations of prior work, but in this post I’ll focus on the three primary criteria above. How should reviewers weigh these three components relative to each other? There’s no single right answer, but I’ll lay out some suggestions.

First, note that the three criteria differ greatly in terms of reviewers’ ability to judge them:

  • Correctness can be evaluated at review time, at least in principle.
  • Impact can at best be predicted at review time. In retrospect (say, 10 years after publication), informed peers will probably agree with each other about a paper’s impact.
  • Novelty, in contrast to the other two criteria, seems to be a fundamentally subjective notion.

We can all agree that incorrect papers should not be accepted. Peer review would lose its meaning without that requirement. In practice, there are complications ranging from the difficulty of verifying mathematical proofs to the statistical nature of research claims; the latter has led to replication crises in many fields. But as a principle, it’s clear that reviewers shouldn’t compromise on correctness.

Should reviewers even care about impact or novelty?

It’s less obvious why peer review should uphold standards of (predicted) impact or (perceived) novelty. If papers weren’t filtered for impact, readers would presumably be burdened with figuring out for themselves which papers deserve their attention. So peer reviewers perform a service to readers by rejecting low-impact papers, but this type of gatekeeping does collateral damage: many world-changing discoveries were initially rejected as insignificant.

The argument for novelty of ideas and methods as a review criterion is different: we want to encourage papers that make contributions beyond their immediate findings, that is, papers that introduce methods that will allow other researchers to make new discoveries in the future.

In practice, novelty is often a euphemism for cleverness, which is a perversion of the intent. Readers aren’t served by needlessly clever papers. Who cares about cleverness? People who are evaluating researchers: hiring and promotion committees. Thus, publishing in a venue that emphasizes novelty becomes a badge of merit for researchers to highlight in their CVs. In turn, forums that publish such papers are seen as prestigious.

Because of this self-serving aspect, today’s peer review over-emphasizes novelty. Sure, we need occasional breakthroughs, but mostly science progresses in a careful, methodical way, and papers that do this important work are undervalued. In many fields of study, publishing is at risk of devolving into a contest where academics impress each other with their cleverness.

There is at least one prominent journal, PLoS One, whose peer reviewers are tasked with checking only correctness, with impact and novelty being left to be sorted out post-publication. But for most journals and peer-reviewed conferences, the limited number of publication slots means that there will inevitably be gatekeeping based on impact and/or novelty.

Suggestions for reviewers

Given this reality, here are four suggestions for reviewers. This list is far from comprehensive, and narrowly focused on the question of weighing the three criteria.

  1. Be explicit about how you rate the paper on correctness, impact, and novelty (and any other factors such as clarity of the writing). Ideally, review forms should insist on separate ratings for the criteria. This makes your review much more actionable for the authors: should they address flaws in the work, try harder to convince the world of its importance, or abandon it entirely?
  2. Learn to recognize your own biases in assessing impact and novelty, and accept that these assessments might be wrong or subjective. Be open to a discussion with other reviewers that might change your mind.
  3. Not every paper needs to maximize all three criteria. Consider accepting papers with important results even if they aren’t highly novel, and conversely, papers that are judged to be innovative even if the potential impact isn’t immediately clear. But don’t reward cleverness for the sake of cleverness; that’s not what novelty is supposed to be about.
  4. Above all, be supportive of authors. If you rated a paper low on impact or novelty, do your best to explain why.

Conclusion

Over the last 150 years, peer review has evolved to be more and more of a competition. There are some advantages to this model, but it makes it easy for reviewers to lose touch with the purpose of peer review and basic norms of civility. Once in a while, we need to ask ourselves critical questions about what we’re doing and how best to do it. I hope this post was useful for such a reflection.

 

Thanks to Ed Felten and Marshini Chetty for feedback on a draft.

 

When the business model *is* the privacy violation

Sometimes, when we worry about data privacy, we’re worried that data might fall into the wrong hands or be misused for unintended purposes. If I’m considering participating in a medical study, I’d want to know if insurance companies will obtain the data and use it against me. In these scenarios, we should look for ways to preserve the intended benefit while preventing unintended uses. In other words, achieving utility and privacy is not a zero-sum game. [1]

In other situations, the intended use is the privacy violation. The most prominent example is the tracking of our online and offline habits for targeted advertising. This business model is exactly what people object to, for a litany of reasons: targeting is creepy, manipulative, discriminatory, and reinforces harmful stereotypes. The data collection that enables targeted advertising involves an opaque surveillance infrastructure to which it’s impossible to give meaningfully informed consent, and the resulting databases give a few companies too much power over individuals and over democracy. [2]

In response to privacy laws, companies have tried to find technical measures that obfuscate the data but allow them to carry on with the surveillance business as usual. But that’s just privacy theater. Technical steps that don’t affect the business model are of limited effectiveness, because the business model is fundamentally at odds with privacy; this is in fact a zero-sum game. [3]

For example, there’s an industry move to replace email addresses and other personal identifiers with hashed versions. But a hashed identifier is nevertheless a persistent, unique identifier that allows linking a person across databases, devices, and contexts, as well as targeting and manipulation on the basis of the associated data. Thus, hashing completely fails to address the underlying privacy concerns.
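
To see why, here’s a minimal Python sketch (the email address and the attribute fields are made up for illustration): two organizations that hash the same email independently end up with exactly the same identifier, so their records can be joined and used for targeting just as easily as before.

```python
import hashlib

def hashed_id(email: str) -> str:
    # Normalize, then hash, as industry "pseudonymization" schemes commonly do.
    return hashlib.sha256(email.strip().lower().encode("utf-8")).hexdigest()

# Two organizations hash the same (made-up) email address independently...
dataset_a = {"id": hashed_id("alice@example.com"), "interest": "running shoes"}
dataset_b = {"id": hashed_id("  Alice@Example.com"), "visited": "fitness site"}

# ...and obtain the same persistent identifier, so their records can be
# joined, and the person profiled and targeted, exactly as with raw emails.
assert dataset_a["id"] == dataset_b["id"]
```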

Policy makers and privacy advocates must recognize when privacy is a zero-sum game and when it isn’t. Policy makers like non-zero-sum games because such games let them satisfy different stakeholders simultaneously. But they must acknowledge that sometimes this isn’t possible. In such cases, laws and regulations should avoid loopholes that companies might exploit by building narrow technical measures and claiming to be in compliance. [4]

Privacy advocates should recognize that framing a concern about data use practices as a privacy problem is a double-edged sword. Privacy can be a convenient label for a set of related concerns, but it gives industry a way to deflect attention from deeper ethical questions by interpreting privacy narrowly as confidentiality.

Thanks to Ed Felten and Nick Feamster for feedback on a draft.


[1] There is a vast computer science privacy literature predicated on the idea that we can have our cake and eat it too. For example, differential privacy seeks to enable analysis of data in the aggregate without revealing individual information. While there are disagreements on the specifics, such as whether de-identification results in a win-win outcome, there is no question that the overall direction of privacy-preserving data analysis is an important one.
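
To make the aggregate-versus-individual idea concrete, here is a toy sketch of the Laplace mechanism for a count query; the epsilon value and the data are made up, and real deployments involve far more care.

```python
import numpy as np

def dp_count(records, epsilon):
    # Laplace mechanism for a counting query: a count has sensitivity 1
    # (adding or removing one person changes it by at most 1), so noise with
    # scale 1/epsilon gives an epsilon-differentially-private release.
    return len(records) + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical medical-study data: participants reporting a given condition.
participants_with_condition = ["p03", "p17", "p42", "p58"]
print(dp_count(participants_with_condition, epsilon=0.5))
```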

[2] In Mark Zuckerberg’s congressional testimony, he framed Facebook’s privacy woes as being about improper third-party access to the data. This is arguably a non-zero-sum game, and one that Facebook is equipped to address without the need for legislation. However, the much bigger privacy problem is Facebook’s own data collection and business model, which is inherently at odds with privacy and is unlikely to be solved without legislation.

[3] There are research proposals for targeted advertising, such as Adnostic, that would improve privacy by drastically changing the business model, largely cutting out the tracking companies. Unsurprisingly, there has been no interest in these approaches from the traditional ad tech industry, but some browser vendors have experimented with similar ideas.

[4] As an example of avoiding the hashing loophole, the 2012 FTC privacy report is well written: it says that for data to be considered de-identified, “the company must achieve a reasonable level of justified confidence that the data cannot reasonably be used to infer information about, or otherwise be linked to, a particular consumer, computer, or other device.” It goes on to say that “reasonably” includes reasonable assumptions about the use of external data sources that might be available.

What’s new with BlockSci, Princeton’s blockchain analysis tool

Six months ago we released the initial version of BlockSci, a fast and expressive tool to analyze public blockchains. In the accompanying paper we explained how we used it to answer scientific questions about security, privacy, miner behavior, and economics using blockchain data. BlockSci has a number of other applications including forensics and as an educational tool.

Since then we’ve heard from a number of researchers and developers who’ve found it useful, and there’s already a published paper on ransomware that has made use of it. We’re grateful for the pull requests and bug reports on GitHub from the community. We’ve also used it to deep-dive into some of the strange corners of blockchain data. We’ve made enhancements including a 5x speed improvement over the initial version (which was already several hundred times faster than previous tools).

Today we’re happy to announce BlockSci 0.4.5, which has a large number of feature enhancements and bug fixes. As just one example, Bitcoin’s SegWit update introduces the concept of addresses that have different representations but are equivalent; tools such as blockchain.info are confused by this and return incorrect (or at least unexpected) values for the balance held by such addresses. BlockSci handles these nuances correctly. We think BlockSci is now ready for serious use, although it is still beta software. Here are a number of ideas on how you can use it in your projects or contribute to its development.
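
To give a sense of what analyses look like, here’s a minimal sketch using BlockSci’s Python interface. The data path is a placeholder for wherever the BlockSci parser wrote its output on your machine, and the method names follow the examples in our paper; check the documentation for the exact API in the current release.

```python
import blocksci

# Point this at the directory produced by the BlockSci parser (placeholder path).
chain = blocksci.Blockchain("/home/ubuntu/bitcoin-data")

# Total transaction fees paid in each block of 2017, computed directly over
# BlockSci's in-memory representation of the chain.
fees_per_block = [sum(tx.fee for tx in block) for block in chain.range('2017')]
print(max(fees_per_block))
```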

We plan to release talks and tutorials on BlockSci, and improve its documentation. I’ll give a brief talk about it at the MIT Bitcoin Expo this Saturday; then Harry Kalodner and Malte Möser will join me for a BlockSci tutorial/workshop at MIT on Monday, March 19, organized by the Digital Currency Initiative and Fidelity Labs. Videos of both events will be available.

We now have two priorities for the development of BlockSci. The first is to make it possible to implement almost all analyses in Python with the speed of C++. To enable this, we are building a function composition interface that automatically translates Python queries to C++. The second is to better support graph queries and to improve clustering of the transaction graph. We’ve teamed up with our colleagues in the theoretical computer science group to adapt sophisticated graph clustering algorithms to blockchain data. If this effort succeeds, it will be a foundational part of how we understand blockchains, just as PageRank is a fundamental part of how we understand the structure of the web. Stay tuned!
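
To illustrate what the function composition interface aims to solve, here’s a hedged sketch: today a scan like the one below touches every transaction from the Python interpreter, whereas a composed query could be translated into a single C++ traversal. The chained spelling in the closing comment is purely an illustrative assumption, not a shipped API, and the data path is again a placeholder.

```python
import blocksci

chain = blocksci.Blockchain("/home/ubuntu/bitcoin-data")  # placeholder path

# Today: an explicit Python loop. Every transaction and output is visited from
# the interpreter, so a full scan pays Python overhead per element.
largest_output_2017 = max(
    out.value
    for block in chain.range('2017')
    for tx in block
    for out in tx.outputs
)
print(largest_output_2017)

# Goal: express the same query as a composition of selectors, e.g. something
# along the lines of
#     chain.range('2017').txes.outputs.value.max()
# which BlockSci would translate into one C++ traversal. (Illustrative only.)
```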