October 26, 2021

CITP Call for Fellows 2022-23

The Center for Information Technology Policy (CITP) is an interdisciplinary center at Princeton University. The Center is a nexus of expertise in technology, engineering, public policy, and the social sciences on campus. In keeping with the strong University tradition of service, the Center’s research, teaching, and events address digital technologies as they interact with society.

CITP is seeking applications for the CITP Fellows Program for 2022-23. There are three tracks:

• Postdoctoral track: for people who recently received a Ph.D.
• Visiting Professional track: for academics and professionals (e.g., lawyers, journalists, technologists, former government officials, etc.)
• Microsoft Visiting Professor track: for academics

In this application cycle, we especially welcome applicants with interests in: Artificial Intelligence (AI), Data Science, Blockchain, Cryptocurrencies, and Cryptography.

The Center for Information Technology Policy Fellows Program offers scholars and practitioners from diverse backgrounds the opportunity to join the Center’s community. The goals of this fully-funded, in-residence program are to support people doing important research and policy engagement related to the Center’s mission and to enrich the Center’s intellectual life. Fellows typically will conduct research with members of the Center’s community and engage in the Center’s public programs. The fellows’ program provides freedom to pursue projects of interest and a stimulating intellectual environment.

Application review will begin in the middle of December 2021.

For more information and to apply, please see our Fellows Program webpage.

National AI Research Infrastructure Needs to Support Independent Evaluation of Performance Claims

By Sayash Kapoor, Mihir Kshirsagar, and Arvind Narayanan

Our response to the National AI Research Resource RFI highlights the importance of supporting a research infrastructure designed to independently test the validity of claims about AI performance. In particular, we draw attention to the widespread phenomenon of the industry peddling what we call “AI snake oil”: promoting an AI solution that cannot work as promised. Relatedly, we highlight how AI-based scientific research is often plagued by overly optimistic claims about its results and suffers from reproducibility failures. We also offer suggestions on how the NAIRR can promote responsible data stewardship models. We recommend that the Task Force’s implementation roadmap include establishing a public infrastructure that can critically evaluate AI performance claims, as that is vital to ensuring that AI research serves our shared democratic values. Note: the AI Task Force has extended the deadline for submitting public responses to October 1, 2021.

We need a personal digital advocate

I recently looked up a specialized medical network. For weeks following the search, I was bombarded with ads for the network and other related services: the Internet clearly thought I was in the market for a new doctor. The funny thing is that I was looking this up for someone else, so all of this information, pushed on me across browsers and across devices, was not really relevant. I wish I could muster such relentlessness and consistency for things that really matter to me!

This is but one example of the huge imbalance between the power of the algorithms that track our economic interactions online and the power of individual consumers to have a say in the information being collected about them. We are offered products that might not be in our best interest, based on our past search histories. Even worse, as individuals we have no way to know whether economic opportunities are being advertised equitably. This is true for small things, such as price discrimination on shoes, and for important things, such as job searches. Does your internet reality reflect a lower-wage world because the internet knows you to be female?

Current rules and regulations that attempt to protect our interests online are woefully lacking, which is understandable: they were never designed for the digital world. The problem is not just the difficulty of documenting (and proving) bad behavior such as discrimination or dark patterns, but also the task of allocating responsibility: untangling the stack of intertwined ad technologies and entities responsible for that behavior. The most viable proposals tend to revolve around holding companies such as Facebook or Google accountable through regulation. This approach is important but is limited by several factors: it does not work well across corporate and national boundaries, it exposes companies to a significant conflict of interest when implementing such regulations, and it does nothing to address the growing imbalance between the user and the data centers behind the phone screen.

What we would like to propose is a radically different approach to righting the balance of power between algorithms and individual users: the Personal Digital Advocate. The broad point of this advocate would be to give consumers (both as individuals and as a group) an algorithm that is answerable only to them and has computing power and access to information equal to what companies currently possess. Here is a sample of the benefits such an advocate could provide:

  1. You can’t possibly know when a company is upselling you because your purchase history indicates that you are not going to check. The advocate could detect this by comparing your quote against the prices offered to other people for the same product over the past several months (a minimal sketch of this comparison appears after this list).
  2. The advocate could detect gender- and race-based discrimination in job searches by noticing that you are not being shown the full range of jobs for which you qualify.
  3. Instead of incomplete, messy, often wrong data about you being bought and sold on the internet behind your back and outside of your control (the default mode today), you could use your digital advocate to freely offer certain information about yourself to companies in a way that benefits both you and the company. For instance, suppose you are shopping for a minivan. You have looked at all kinds of brands, but you know that you will only buy a Toyota or a Honda. This is information you might not mind sharing if there were a way to do so. It could mean that Kia dealerships stop wasting their money and your time by advertising to you, and the ads you do get might actually become more relevant.
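
To make the price-comparison idea in item 1 concrete, here is a minimal illustrative sketch. It assumes the advocate holds hypothetical data it does not have today: a log of prices other users were recently quoted for the same product. The function name, data, and threshold are our own assumptions, not part of any existing system.

```python
# Hypothetical sketch: flag a personalized price that looks out of line
# with what other users were recently quoted for the same product.
# All names, data, and thresholds here are illustrative assumptions.

from statistics import median

def looks_like_upsell(quoted_price: float,
                      recent_prices: list[float],
                      tolerance: float = 0.10) -> bool:
    """Return True if the quoted price exceeds the recent median
    price for this product by more than `tolerance` (10% by default)."""
    if not recent_prices:
        return False  # no baseline to compare against
    baseline = median(recent_prices)
    return quoted_price > baseline * (1 + tolerance)

# Example: other shoppers were quoted around $40 in recent months,
# but this user is being quoted $49.
print(looks_like_upsell(49.0, [39.0, 40.0, 41.0, 38.5, 42.0]))  # True
```

Of course, the hard part is not the comparison itself but getting the advocate legitimate access to cross-user price data in the first place, which is exactly the kind of access question the technical proposal below is meant to address.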

For a digital advocate to become viable, two major policy changes will need to take place – one in the legal and one in the technical domain:

Legal: In the absence of a legal framework, it will always be more profitable for a digital advocate to sell the consumer out (for example, it is easy to see how it could start steering people toward certain products for a commission). Fortunately, legal frameworks to prevent this already exist in other arenas. One good example is the lawyer/client relationship. It might otherwise be very profitable for a law firm to betray a client and use the client’s information against them (e.g., by leaking willingness to pay in a real estate deal and then collecting a commission), but any lawyer who does that will immediately be disbarred, or worse. There needs to be a “bar” of sorts for the digital advocate.

Technical: A framework will need to be developed that allows the advocate to access all the information it needs, when it needs it. “Digital rights” laws such as the GDPR and CCPA will need to incorporate a digital access mandate – allowing the end user to nominate a bot to uphold her rights (such as the right to refuse cookies without having to go through a dark pattern, or the ability to download one’s data in a timely manner).
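
To illustrate what such a digital access mandate might look like in practice, here is a minimal sketch of a nominated bot exercising a data-download right on a user's behalf. No standardized API of this kind exists today; the endpoint, payload fields, and token scheme are purely hypothetical assumptions.

```python
# Hypothetical sketch of a "digital access mandate" in action: the user's
# nominated advocate requests a data export on her behalf. The endpoint,
# payload fields, and token scheme are illustrative assumptions, not an
# existing standard or service API.

import json
import urllib.request

def request_data_export(service_url: str, user_token: str) -> dict:
    """Ask a service to export the user's data, acting as her nominated bot."""
    payload = json.dumps({
        "request_type": "data_export",      # a GDPR/CCPA-style access right
        "acting_as": "nominated_advocate",  # the mandate: a bot acts for the user
    }).encode("utf-8")
    req = urllib.request.Request(
        service_url,
        data=payload,
        headers={
            "Authorization": f"Bearer {user_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The design point is less the code than the mandate: services would be required to accept such machine-issued requests so that the user's rights can be exercised continuously by software, rather than occasionally and manually by the user.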

Regulations always tend to fall behind advances in technology; this has been true across industries and throughout history. For instance, there was a notable lag between the time when medications started being mass-produced and the emergence of the FDA. Our ancestors who lived in that gap probably consumed “medications” ranging from harmlessly ineffective to outright dangerous. The algorithms that govern our online lives (which merge more and more with our regular lives) change more quickly than those of any other industry and, moreover, can adapt automatically to regulations. Regulations, which have trouble keeping up with progress in general, will especially struggle against such an adaptive opponent. Thus, the only sustainable way to protect ourselves online is to create an algorithm that protects us and can develop at the same rate as the ones that can wittingly or unwittingly harm us.

Mark Braverman is a professor of computer science at Princeton University, and is part of the theory group. His research focuses on algorithms and computational complexity theory, as well as building connections to other disciplines, including information theory, mathematical analysis, and mechanism design.

A longer version of this post is available in the form of an essay here.