May 26, 2019

CITP to Launch Tech Policy Clinic; Hiring Clinic Lead

We’re excited to announce the CITP technology policy clinic, a first-of-its-kind interdisciplinary project to engage students and scholars directly in the policy process. The clinic will be supported by a generous alumni gift.

The technology policy clinic will adapt the law school clinic model to involve scholars at all levels in real-world policy activities related to technology—preparing written comments and briefs, working with startup companies, and collaborating with public-interest law groups. As an outgrowth of this work, CITP could provide federal, state and local policy makers with briefings on emerging technologies and could also create simple non-partisan guides to action for citizens and small businesses.

We’re looking to hire an experienced policy professional to serve as Clinic Lead. For more information, go to https://citp.princeton.edu/clinic-lead/

CITP was founded as Princeton’s initiative to support research and education on technology policy issues. Over the years, CITP’s voice has grown stronger as it uniquely paired world-class computer scientists and engineers with leading policy experts at the Woodrow Wilson School of Public Policy. The center has now established a recognized national voice in areas including AI policy, privacy and security, technology for governance and civil liberties, broadband policy, big data, cryptocurrencies, and the internet of things. As the national debate over technology and its impact on democracy has come to the forefront in recent times, the demand for technology policy experts has surged. CITP recognizes a need to take on a larger role in tackling some of these technology policy problems by providing on-the-ground training to Princeton’s extraordinary students. We’re eager to hire a Clinic Lead and get started!

How to constructively review a research paper

Any piece of research can be evaluated on three axes:

  • Correctness/validity — are the claims justified by evidence?
  • Impact/significance — how will the findings affect the research field (and the world)?
  • Novelty/originality — how big a leap are the ideas, especially the methods, compared to what was already known?

There are additional considerations such as the clarity of the presentation and appropriate citations of prior work, but in this post I’ll focus on the three primary criteria above. How should reviewers weigh these three components relative to each other? There’s no single right answer, but I’ll lay out some suggestions.

First, note that the three criteria differ greatly in terms of reviewers’ ability to judge them:

  • Correctness can be evaluated at review time, at least in principle.
  • Impact can at best be predicted at review time. In retrospect (say, 10 years after publication), informed peers will probably agree with each other about a paper’s impact.
  • Novelty, in contrast to the other two criteria, seems to be a fundamentally subjective notion.

We can all agree that incorrect papers should not be accepted. Peer review would lose its meaning without that requirement. In practice, there are complications ranging from the difficulty of verifying mathematical proofs to the statistical nature of research claims; the latter has led to replication crises in many fields. But as a principle, it’s clear that reviewers shouldn’t compromise on correctness.

Should reviewers even care about impact or novelty?

It’s less obvious why peer review should uphold standards of (predicted) impact or (perceived) novelty. If papers weren’t filtered for impact, readers would presumably be burdened with figuring out which papers deserve their attention. So peer reviewers perform a service to readers by rejecting low-impact papers, but this type of gatekeeping does collateral damage: many world-changing discoveries were initially rejected as insignificant.

The argument for novelty of ideas and methods as a review criterion is different: we want to encourage papers that make contributions beyond their immediate findings, that is, papers that introduce methods that will allow other researchers to make new discoveries in the future.

In practice, novelty is often a euphemism for cleverness, which is a perversion of the intent. Readers aren’t served by needlessly clever papers. Who cares about cleverness? People who are evaluating researchers: hiring and promotion committees. Thus, publishing in a venue that emphasizes novelty becomes a badge of merit for researchers to highlight in their CVs. In turn, forums that publish such papers are seen as prestigious.

Because of this self-serving aspect, today’s peer review over-emphasizes novelty. Sure, we need occasional breakthroughs, but mostly science progresses in a careful, methodical way, and papers that do this important work are undervalued. In many fields of study, publishing is at risk of devolving into a contest where academics impress each other with their cleverness.

There is at least one prominent journal, PLoS ONE, whose peer reviewers are tasked with checking only correctness, leaving impact and novelty to be sorted out post-publication. But for most journals and peer-reviewed conferences, the limited number of publication slots means that there will inevitably be gatekeeping based on impact and/or novelty.

Suggestions for reviewers

Given this reality, here are four suggestions for reviewers. This list is far from comprehensive, and narrowly focused on the question of weighing the three criteria.

  1. Be explicit about how you rate the paper on correctness, impact, and novelty (and any other factors such as clarity of the writing). Ideally, review forms should insist on separate ratings for the criteria. This makes your review much more actionable for the authors: should they address flaws in the work, try harder to convince the world of its importance, or abandon it entirely?
  2. Learn to recognize your own biases in assessing impact and novelty, and accept that these assessments might be wrong or subjective. Be open to a discussion with other reviewers that might change your mind.
  3. Not every paper needs to maximize all three criteria. Consider accepting papers with important results even if they aren’t highly novel, and conversely, papers that are judged to be innovative even if the potential impact isn’t immediately clear. But don’t reward cleverness for the sake of cleverness; that’s not what novelty is supposed to be about.
  4. Above all, be supportive of authors. If you rated a paper low on impact or novelty, do your best to explain why.

Conclusion

Over the last 150 years, peer review has evolved to be more and more of a competition. There are some advantages to this model, but it makes it easy for reviewers to lose touch with the purpose of peer review and basic norms of civility. Once in a while, we need to ask ourselves critical questions about what we’re doing and how best to do it. I hope this post was useful for such a reflection.


Thanks to Ed Felten and Marshini Chetty for feedback on a draft.


Roundup: My First Semester as a Post-Doc at Princeton

As Princeton thaws from under last week’s snow hurricane, I’m taking a moment to reflect on my first four months in the place I now call home.

This roundup post shares highlights from my first semester as a post-doc in Psychology, CITP, and Sociology.

Here in Princeton, I’m surviving winter in the best way I know how 🙂

So far, I have had an amazing experience:

  • The Paluck Lab (Psychology) and the Center for IT Policy, my main anchor points at Princeton, have been welcoming and supportive. When colleagues from both departments showed up at my IgNobel Prize viewing party in my first month, I knew I had found a good home 🙂
  • The Paluck Lab has become a wonderful research family, and they even did the LEGO duck challenge together with me!
    • Weekly lab meetings with the Paluck Lab have been a master-class in thinking about the relationship between research design and theory in the social sciences. I am so grateful to observe and participate in these conversations, since so much about research is unspoken, tacit knowledge.
    • With the help of my new colleagues, I’ve started to learn how to write papers for general science journals. I’ve also learned more about publishing in the field of psychology.
  • At CITP, I’ve learned much about thinking simultaneously as a regulator and computer scientist.
  • I’ve loved the conversations at the Kahneman-Treisman Center for Behavioral Policy, where I am now an affiliated postdoc.
  • I’m looking forward to meeting more of my colleagues in Sociology this spring, now that I’ll be physically based in Princeton more consistently.

Travel and Speaking

View of the French Alps from Lausanne

I’m so glad that I can scale down my travel this spring, phew!

A flock of birds takes flight in Antigua, Guatemala
Writing and Research

Princeton Life

Rockefeller College Cloisters, Princeton. On evenings when I have dinner here, I walk through these cloisters on my way home.