May 19, 2022

 A Multi-pronged Strategy for Securing Internet Routing

By Henry Birge-Lee, Nick Feamster, Mihir Kshirsagar, Prateek Mittal, Jennifer Rexford

The Federal Communications Commission (FCC) is conducting an inquiry into how it can help protect against security vulnerabilities in the internet routing infrastructure. A number of large communication companies have weighed in on the approach the FCC should take. 

CITP’s Tech Policy Clinic convened a group of experts in information security, networking, and internet policy to submit an initial comment offering a public interest perspective to the FCC. This post summarizes our recommendations on why the government should take a multi-pronged strategy to promote security that involves incentives and mandates. Reply comments from the public are due May 11.

The core challenge in securing the internet routing infrastructure is that the original design of the network did not prioritize security against adversarial attacks. Instead, the original design focused on how to route traffic through decentralized networks with the goal of delivering information packets efficiently, while not dropping traffic. 

At the heart of this routing system is the Border Gateway Protocol (BGP), which allows independently-administered networks (Autonomous Systems or ASes) to announce reachability to IP address blocks (called prefixes) to neighboring networks. But BGP has no built-in mechanism to distinguish legitimate routes from bogus routes. Bogus routing information can redirect internet traffic to a strategic adversary, who can launch a variety of attacks, or the bogus routing can lead to accidental outages or performance issues. Network operators and researchers have been actively developing measures to counteract this problem.

At a high level, the current suite of BGP security measures depends on building systems to validate routes. But for these technologies to work, most participants have to adopt them or the security improvements will not be realized. In other words, routing security has many of the hallmarks of a “chicken and egg” problem. As a result, there is no silver bullet to address routing security.
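To make “validating routes” concrete, here is a minimal sketch of RPKI-style route origin validation, the most widely deployed of these measures: a router checks whether the AS announcing a prefix is actually authorized to originate it, according to a list of Route Origin Authorizations (ROAs). The prefixes and AS numbers below are hypothetical example values, and real validators implement the full RFC 6811 semantics.

```python
import ipaddress

# Hypothetical ROAs: each binds an IP prefix (up to a maximum prefix
# length) to the AS number authorized to originate routes for it.
ROAS = [
    (ipaddress.ip_network("192.0.2.0/24"), 24, 64500),
    (ipaddress.ip_network("198.51.100.0/22"), 24, 64501),
]

def validate_route(prefix: str, origin_asn: int) -> str:
    """Classify a BGP announcement as valid / invalid / not-found."""
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, asn in ROAS:
        if announced.subnet_of(roa_prefix):
            covered = True  # some ROA covers this prefix
            if asn == origin_asn and announced.prefixlen <= max_len:
                return "valid"
    # Covered by a ROA but no authorization matched: likely bogus.
    # Not covered at all: the prefix holder has issued no ROA.
    return "invalid" if covered else "not-found"
```

The “chicken and egg” problem shows up directly here: a route with no covering ROA can only be classified as “not-found,” so the security benefit materializes only once most prefix holders issue ROAs and most networks drop “invalid” routes.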

Instead, we argue, the government needs a cross-layer strategy that embraces pushing different elements of the infrastructure to adopt security measures that protect legitimate traffic flows using a carrot-and-stick approach. Our comment identifies specific actions Internet Service Providers, Content Delivery Networks and Cloud Providers, Internet Exchange Points, Certificate Authorities, Equipment Manufacturers, and DNS Providers should take to improve security. We also recommend that the government fund and support academic research centers that collect real-time data from a variety of sources to measure internet traffic and how it is routed.

We anticipate several hurdles to our recommended cross-layer approach: 

First, to mandate the cross-layer security measures, the FCC has to have regulatory authority over the relevant players. And, to the extent a participant does not fall under the FCC’s authority, the FCC should develop a whole-of-government approach to secure the routing infrastructure.

Second, large portions of the internet routing infrastructure lie outside the jurisdiction of the United States. As such, there are international coordination issues that the FCC will have to navigate to achieve the security properties needed. That said, if there is a sufficient critical mass of providers who participate in the security measures, that could create a tipping point for a larger global adoption.

Third, the package of incentives and mandates that the FCC develops has to account for the risk that recalcitrant small and medium-sized firms might undermine the comprehensive approach that is necessary to truly secure the infrastructure.

Fourth, while it is important to develop authenticated routes for traffic to counteract adversaries, there is an under-appreciated risk from a flipped threat model – the risk that an adversary takes control of an authenticated node and uses that privileged position to disrupt routing. There are no easy fixes to this threat – but an awareness of this risk can allow for developing systems to detect such actions, especially in international contexts.  

Holding Purveyors of “Dark Patterns” for Online Travel Bookings Accountable

Last week, my former colleagues at the New York Attorney General’s Office (NYAG) scored a $2.6 million settlement with Fareportal, a large online travel agency that used deceptive practices, known as “dark patterns,” to manipulate consumers into booking online travel.

The investigation exposes how Fareportal, which operates under several brands, including CheapOair and OneTravel, used a series of deceptive design tricks to pressure consumers to buy tickets for flights, hotels, and other travel purchases. In this post, I share the details of the investigation’s findings and use them to highlight why we need further regulatory intervention to prevent similar conduct from becoming entrenched in other online services.

The NYAG investigation picks up on the work of researchers at Princeton’s CITP that exposed the widespread use of dark patterns on shopping websites. Using the framework we developed in a subsequent paper for defining dark patterns, the investigation reveals how the travel agency weaponized common cognitive biases to take advantage of consumers. The company was charged under the Attorney General’s broad authority to prohibit deceptive acts and practices. In addition to paying $2.6 million, the New York City-based company agreed to reform its practices.

Specifically, the investigation documents how Fareportal exploited the scarcity bias by displaying, next to the top two flight search results, a false and misleading message about the number of tickets left for those flights at the advertised price. It manipulated consumers by adding 1 to the number of tickets the consumer had searched for, so that there always appeared to be only X+1 tickets left at that price. So, if you searched for one round trip ticket from Philadelphia to Chicago, the site would say “Only 2 tickets left” at that price, while a consumer searching for two such tickets would see a message stating “Only 3 tickets left” at the advertised price.

In 2019, Fareportal added a design feature that exploited the bandwagon effect by displaying how many other people were looking at the same deal. The site used a computer-generated random number between 28 and 45 to show the number of other people “looking” at the flight. It paired this with a false countdown timer that displayed an arbitrary number that was unrelated to the availability of tickets. 
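Based on the investigation’s description, these flight-side counters reduce to a few lines of logic. This is a hypothetical reconstruction of the behavior the NYAG documented, not Fareportal’s actual code; the function names and message wording are illustrative.

```python
import random

def tickets_left_message(tickets_searched: int) -> str:
    # "Scarcity": always claim exactly one more ticket than the consumer
    # searched for, regardless of actual inventory.
    return f"Only {tickets_searched + 1} tickets left at this price!"

def viewers_message() -> str:
    # "Bandwagon": a random number between 28 and 45, unrelated to how
    # many people are actually viewing the flight.
    return f"{random.randint(28, 45)} people are looking at this flight."
```

The tell is that neither function takes any input describing real inventory or real traffic: the displayed urgency is a pure function of the consumer’s own query and a random number generator.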

Similarly, Fareportal exported its misleading tactics to the making of hotel bookings on its mobile apps. The apps misrepresented the percentage of rooms shown that were “reserved” by using a computer-generated number keyed to when the customer was trying to book a room. So, for example, if the check-in date was 16-30 days away, the message would indicate that between 41-70% of the hotel rooms were booked, but if it was less than 7 days away, it showed that 81-99% of the rooms were reserved. But, of course, those percentages were pure fiction. The apps used a similar tactic for displaying the number of people “viewing” hotels in the area. This time, they generated the number based on the nightly rate for the fifth hotel returned in the search by using the difference between the numerical value of the dollar figure and the numerical value of the cents figure. (If the rate was $255.63, consumers were told 192 people were viewing the hotel listings in the area.)
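The hotel-side tactics follow the same template. The sketch below reconstructs the two formulas the investigation describes: a “rooms reserved” percentage keyed only to how far away check-in is (only the two date windows quoted in the findings are implemented here; the other windows used similar fabricated ranges not specified in this post), and a “people viewing” count derived from the dollars-minus-cents of the fifth search result’s nightly rate. All names are hypothetical.

```python
import random

def rooms_reserved_pct(days_until_checkin: int) -> int:
    # Percentage of rooms "reserved", keyed to the check-in date rather
    # than to any real occupancy data (ranges per the NYAG findings).
    if days_until_checkin < 7:
        return random.randint(81, 99)
    if 16 <= days_until_checkin <= 30:
        return random.randint(41, 70)
    raise ValueError("range for this date window not documented in the post")

def hotel_viewers(fifth_hotel_rate: float) -> int:
    # Dollars minus cents of the fifth hotel's nightly rate,
    # e.g. $255.63 -> 255 - 63 = 192 "people viewing".
    dollars = int(fifth_hotel_rate)
    cents = round((fifth_hotel_rate - dollars) * 100)
    return dollars - cents
```

Again, note what the inputs are: a calendar date and an unrelated price. No measure of actual demand enters either calculation.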

Fareportal used these false scarcity indicators across its websites and mobile platforms for pitching products such as travel protection and seat upgrades, misrepresenting how many other consumers had purchased the product in question.

In addition, the NYAG charged Fareportal with using a pressure tactic of making consumers accept or decline the purchase of a travel protection policy to “protect the cost of [their] trip” before completing a purchase. This practice is described in the academic literature as a covert pattern that uses “confirmshaming” and “forced action” to influence choices.

Finally, the NYAG took issue with how Fareportal manipulated price comparisons to suggest it was offering tickets at a discounted price, when in fact, most of the advertised tickets were never offered for sale at the higher comparison price. The NYAG rejected Fareportal’s attempt to use a small pop-up to cure the false impression conveyed by the visual slash-through image that conveyed the discount. Similarly, the NYAG called out how Fareportal hid its service fees by disguising them as being part of the “Base Price” of the ticket rather than the separate line item for “Taxes and Fees.” These tactics are described in the academic literature as using “misdirection” and “information hiding” to influence consumers. 


The findings from this investigation illustrate why dark patterns are not simply aggressive marketing practices, as some commentators contend, but require regulatory intervention. Such shady practices are difficult for consumers to spot and avoid, and, as we argued, they risk becoming entrenched across travel sites, which have an incentive to adopt similar practices. As a result, Fareportal, unfortunately, will be neither the first nor the last online service to deploy such tactics. But this creates an opportunity for researchers, consumer advocates, and design whistleblowers to step forward and spotlight such practices to protect consumers and help create a more trustworthy internet.

Calling for Investing in Equitable AI Research in Nation’s Strategic Plan

By Solon Barocas, Sayash Kapoor, Mihir Kshirsagar, and Arvind Narayanan

In response to the Request for Information on the Update of the National Artificial Intelligence Research and Development Strategic Plan (“Strategic Plan”), we submitted comments suggesting how the Strategic Plan should focus government funding priorities on societal issues such as equity, especially in communities that have been traditionally underserved.

The Strategic Plan highlights the importance of investing in research about developing trust in AI systems, which includes requirements for robustness, fairness, explainability, and security. We argue that the Strategic Plan should go further by explicitly committing to investments in research that examines how AI systems affect the equitable distribution of resources. Without such a commitment, there is a risk that investments in AI research will marginalize communities that are already disadvantaged, or that, even where a community suffers no direct harm, research support will focus on classes of problems that benefit already advantaged communities rather than on problems facing disadvantaged ones.

We make five recommendations for the Strategic Plan:  

First, we recommend that the Strategic Plan outline a mechanism for a broader impact review when funding AI research. The challenge is that the existing mechanism for ethics review of research projects, Institutional Review Boards (“IRBs”), does not adequately identify downstream harms stemming from AI applications. For example, on privacy issues, an IRB ethics review would focus on the data collection and management process. This is also reflected in the Strategic Plan’s focus on two notions of privacy: (i) ensuring the privacy of data collected for creating models via strict access controls, and (ii) ensuring the privacy of the data and information used to create models via differential privacy when the models are shared publicly.

But both of these approaches are focused on the privacy of the people whose data has been collected to facilitate the research process, not the people to whom research findings might be applied. 

Take, for example, the potential impact of face recognition for detecting ethnic minorities. Even if the researchers who developed such techniques had obtained IRB approval for their research plan, secured the informed consent of participants, applied strict access controls to the data, and ensured that the model was differentially private, the resulting model could still be used without restriction for surveillance of entire populations, precisely because institutional mechanisms for ethics review such as IRBs do not consider downstream harms during their appraisal of research projects.

We recommend that the Strategic Plan include as a research priority supporting the development of alternative institutional mechanisms to detect and mitigate the potentially negative downstream effects of AI systems. 

Second, we recommend that the Strategic Plan include provisions for funding research that would help us understand the impact of AI systems on communities, and how AI systems are used in practice. Such research can also provide a framework for informing decisions on which research questions and AI applications are too harmful to pursue and fund. 

We recognize that it may be challenging to determine what kind of impact AI research might have as it affects a broad range of potential applications. In fact, many AI research findings will have dual use: some applications of these findings may promise exciting benefits, while others would seem likely to cause harm. While it is worthwhile to weigh these costs and benefits, decisions about where to invest resources should also depend on distributional considerations: who are the people likely to suffer these costs and who are those who will enjoy the benefits? 

While there have been recent efforts to incorporate ethics review into the publishing processes of the AI research community, adding similar considerations to the Strategic Plan would help to highlight these concerns much earlier in the research process. Evaluating research proposals according to these broader impacts would help to ensure that ethical and societal considerations are incorporated from the beginning of a research project, instead of remaining an afterthought.

Third, our comments highlight the reproducibility crisis in fields adopting machine learning methods and the need for the government to support the creation of computational reproducibility infrastructure and a reproducibility clearinghouse that sets up benchmark datasets for measuring progress in scientific research that uses AI and ML. We suggest that the Strategic Plan borrow from the NIH’s practices to make government funding conditional on disclosing research materials, such as the code and data, that would be necessary to replicate a study.

Fourth, we focus attention on the industry phenomenon of using a veneer of AI to lend credibility to pseudoscience, which we call “AI snake oil.” We see evaluating validity as a core component of ethical and responsible AI research and development. The Strategic Plan could support such efforts by prioritizing funding for setting standards for, and making tools available to, independent researchers to validate claims of effectiveness of AI applications.


Fifth, we document the need to address the phenomenon of “runaway datasets” — the practice of broadly releasing datasets used for AI applications without mechanisms of oversight or accountability for how that information can be used. Such datasets raise serious privacy concerns and they may be used to support research that is counter to the intent of the people who have contributed to them. The Strategic Plan can play a pivotal role in mitigating these harms by establishing and supporting appropriate data stewardship models, which could include supporting the development of centralized data clearinghouses to regulate access to datasets.