July 2, 2022

Most top websites are not following best practices in their password policies

By Kevin Lee, Sten Sjöberg, and Arvind Narayanan

Compromised passwords are consistently the leading cause of data breaches, yet passwords remain the most common means of authentication on the web. In response, the information security research community has established best practices for helping users create stronger passwords. These include:

  • Block weak passwords that have appeared in breaches or can be easily guessed (a sketch of one way to do this follows this list).
  • Use a strength meter to give users helpful real-time feedback.
  • Don’t force users to include specific character classes in their passwords.
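
To make the first recommendation concrete, here is a minimal sketch of how a signup handler might reject breached passwords using the public Pwned Passwords range API (the API is real; the helper function and surrounding code are our own illustration, not any particular website’s implementation):

    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        """Return how many times a password appears in known breaches.

        Uses the Pwned Passwords k-anonymity range API: only the first
        five hex characters of the SHA-1 hash ever leave this machine.
        """
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        url = "https://api.pwnedpasswords.com/range/" + prefix
        with urllib.request.urlopen(url) as resp:
            for line in resp.read().decode("utf-8").splitlines():
                candidate, _, count = line.partition(":")
                if candidate == suffix:
                    return int(count)
        return 0

    # Reject any candidate password seen in breach data;
    # "123456" alone appears millions of times.
    if breach_count("123456") > 0:
        print("Password rejected: found in breach data")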

While these recommendations are backed by rigorous research, no one has thoroughly investigated whether websites are heeding the advice.

In a new study, we empirically evaluated compliance with these best practices. We reverse-engineered the password policies at 120 of the top English-language websites, including Google, Facebook, and Amazon. We found that only 15 of them follow best practices. The remaining 105 either leave users at risk of password compromise or frustrate them by preventing them from using sufficiently strong passwords (or both). The following table summarizes our findings:

We compare our key findings with best practices from prior research.

We found that more than half of the websites allowed the most common passwords, like “123456”, to be used. Attackers can guess these passwords with minimal effort, which opens the door to account hijacking.

Amazon allowed us to change the password on our account to “11111111”, a common and easily guessed password.

Few websites had adopted strength meters, and among those that had, we found meters being misused to encourage complex passwords over strong, hard-to-guess ones (e.g., preferring the predictable “Password123” over “bdmt7gg82nkc”, a password we randomly generated with our password manager). This not only defeats the purpose of a strength meter but can also add to user frustration.

Facebook using its password strength meter as a nudge towards incorporating specific character types in passwords.
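
The difference between complexity and guessability is easy to demonstrate with a research-grade meter. The sketch below uses the open-source zxcvbn estimator (a real library; exact scores and guess counts vary by version), which rates passwords on a 0–4 scale by estimated guessability rather than by character-class checkboxes:

    # pip install zxcvbn  (a Python port of the zxcvbn strength estimator)
    from zxcvbn import zxcvbn

    for pw in ["Password123", "bdmt7gg82nkc"]:
        result = zxcvbn(pw)
        print(pw, "score:", result["score"], "guesses:", result["guesses"])

    # A character-class meter favors "Password123" (three classes present),
    # but a guessability-based meter rates it near the bottom: it is a
    # top-listed password plus a common suffix, while the random string
    # scores at or near the maximum.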

Finally, we found that almost half of the websites required users to include specific character classes in their passwords, despite decades of research against such composition rules and outcry from users themselves.

Intuit requires passwords to include uppercase letters, lowercase letters, numbers, and symbols.
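
For reference, a composition requirement of this kind typically reduces to a check like the following sketch (our own illustration, not Intuit’s actual code):

    import re

    def meets_composition_rules(pw: str) -> bool:
        # The kind of check research advises against: it accepts the
        # predictable "Password1!" yet rejects a long random string
        # that happens to lack an uppercase letter or symbol.
        required = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^A-Za-z0-9]"]
        return all(re.search(pattern, pw) for pattern in required)

    print(meets_composition_rules("Password1!"))    # True, despite being predictable
    print(meets_composition_rules("bdmt7gg82nkc"))  # False, despite being strong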

Our study reveals a huge gap between research and practice when it comes to password policies. Passwords have been heavily researched, yet few websites have implemented policies that reflect the lessons learned. At the same time, research has not paid attention to practice. In our paper, we discuss ways for both sides to come together to address this disconnect. One idea for future research: directly engage with system administrators to understand how they think about password security. Perhaps password policy is meant to be security theater, giving users a sense of safety without actually improving security. Or maybe websites have shifted their attention to other authentication technologies, like SMS-based multi-factor authentication (which also suffers from severe weaknesses, as we found in previous research on SIM swaps and number recycling). Perhaps websites face security audits from firms like Deloitte that recommend outdated practices. Or maybe they face other practical constraints that the information security community doesn’t know about.

Our peer-reviewed paper is available at passwordpolicies.cs.princeton.edu.

A Multi-pronged Strategy for Securing Internet Routing

By Henry Birge-Lee, Nick Feamster, Mihir Kshirsagar, Prateek Mittal, and Jennifer Rexford

The Federal Communications Commission (FCC) is conducting an inquiry into how it can help protect against security vulnerabilities in the internet routing infrastructure. A number of large communication companies have weighed in on the approach the FCC should take. 

CITP’s Tech Policy Clinic convened a group of experts in information security, networking, and internet policy to submit an initial comment offering a public interest perspective to the FCC. This post summarizes our recommendations on why the government should pursue a multi-pronged strategy to promote security, involving both incentives and mandates. Reply comments from the public are due May 11.

The core challenge in securing the internet routing infrastructure is that the network’s original design did not prioritize security against adversarial attacks. Instead, it focused on routing traffic through decentralized networks with the goal of delivering packets efficiently without dropping traffic.

At the heart of this routing system is the Border Gateway Protocol (BGP), which allows independently administered networks (Autonomous Systems, or ASes) to announce reachability to blocks of IP addresses (called prefixes) to neighboring networks. But BGP has no built-in mechanism to distinguish legitimate routes from bogus ones. Bogus routing information can redirect internet traffic to a strategic adversary, who can then launch a variety of attacks; it can also cause accidental outages or performance problems. Network operators and researchers have been actively developing measures to counteract this problem.

At a high level, the current suite of BGP security measures depends on building systems to validate routes. But for these technologies to work, most participants have to adopt them; otherwise, the security improvements will not be realized. In other words, routing security has many of the hallmarks of a “chicken and egg” problem. As a result, there is no silver bullet.
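
To make the validation idea concrete, here is a minimal sketch of route origin validation (ROV), the core check behind RPKI, one of the most widely deployed of these measures. The ROA entries below are hypothetical values built from reserved documentation prefixes and ASNs; real deployments rely on validator software rather than hand-rolled checks:

    from ipaddress import ip_network

    # Route Origin Authorizations: (covering prefix, max length, authorized origin ASN).
    # Hypothetical entries for illustration; real ROAs come from the RPKI repositories.
    ROAS = [
        (ip_network("192.0.2.0/24"), 24, 64500),
        (ip_network("198.51.100.0/22"), 24, 64501),
    ]

    def rov_state(prefix: str, origin_asn: int) -> str:
        """Classify a BGP announcement per RFC 6811 route origin validation:
        'valid' if a covering ROA authorizes this origin and prefix length,
        'invalid' if ROAs cover the prefix but none authorizes the route,
        'not-found' if no ROA covers the prefix at all."""
        net = ip_network(prefix)
        covered = False
        for roa_net, max_len, asn in ROAS:
            if net.subnet_of(roa_net):
                covered = True
                if asn == origin_asn and net.prefixlen <= max_len:
                    return "valid"
        return "invalid" if covered else "not-found"

    print(rov_state("192.0.2.0/24", 64500))    # valid: matching ROA
    print(rov_state("192.0.2.0/24", 64999))    # invalid: covered prefix, wrong origin
    print(rov_state("203.0.113.0/24", 64500))  # not-found: no covering ROA

The adoption problem is visible here: an invalid announcement is only dropped by networks that actually run the check, which is why partial deployment yields only partial protection.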

Instead, we argue, the government needs a cross-layer strategy that pushes different elements of the infrastructure to adopt security measures protecting legitimate traffic flows, using a carrot-and-stick approach. Our comment identifies specific actions that Internet Service Providers, Content Delivery Networks and Cloud Providers, Internet Exchange Points, Certificate Authorities, Equipment Manufacturers, and DNS Providers should take to improve security. We also recommend that the government fund and support academic research centers that collect real-time data from a variety of sources measuring traffic and how it is routed across the internet.

We anticipate several hurdles to our recommended cross-layer approach: 

First, to mandate cross-layer security measures, the FCC has to have regulatory authority over the relevant players. To the extent a participant does not fall under the FCC’s authority, the FCC should develop a whole-of-government approach to securing the routing infrastructure.

Second, large portions of the internet routing infrastructure lie outside the jurisdiction of the United States, so the FCC will have to navigate international coordination issues to achieve the needed security properties. That said, a sufficient critical mass of providers participating in the security measures could create a tipping point for broader global adoption.

Third, the package of incentives and mandates that the FCC develops has to account for the risk that recalcitrant small and medium-sized firms might undermine the comprehensive approach that is necessary to truly secure the infrastructure.

Fourth, while it is important to develop authenticated routes for traffic to counteract adversaries, there is an under-appreciated risk from a flipped threat model: the risk that an adversary takes control of an authenticated node and uses that privileged position to disrupt routing. There are no easy fixes to this threat, but awareness of the risk can inform the development of systems to detect such actions, especially in international contexts.

CITP Case Study on Regulating Facial Recognition Technology in Canada

Canada, like many jurisdictions in the United States, is grappling with the growing use of facial recognition technology in the private and public sectors. This technology is being deployed at a rapid pace in airports, retail stores, and social media platforms, and by law enforcement, with little oversight from the government.

To help address this challenge, I organized a tech policy case study on the regulation of facial recognition technology with two Canadian members of Parliament, the Honorable Greg Fergus and Matthew Green. Both sit on the House of Commons’ Standing Committee on Access to Information, Privacy and Ethics (ETHI), and I served as a legislative aide to them through the Parliamentary Internship Programme before joining CITP. Our goal for the session was to put policymakers in conversation with subject matter experts.

The core problem is that there is a lack of accountability in the use of facial recognition technology, which exacerbates historical forms of discrimination and puts marginalized communities at risk of a wide range of harms. For instance, a recent story describes the fate of three Black men who were wrongfully arrested after being misidentified by facial recognition software. As the Canadian Civil Liberties Association argues, the police’s use of facial recognition technology, notably that provided by the New York-based company Clearview AI, “points to a larger crisis in police accountability when acquiring and using emerging surveillance tools.”

A number of academics and researchers – such as the DAIR Institute’s Timnit Gebru and the Algorithmic Justice League’s Joy Buolamwini, who documented the misclassification of darker-skinned women in a recent paper – are bringing attention to the discriminatory algorithms associated with facial recognition, which have put racialized people, women, and members of the LGBTIQ community at greater risk of false identification.

Meanwhile, Canadian officials are beginning to tackle the real-world consequences of the use of facial recognition. A year ago, the Office of the Privacy Commissioner found that Clearview AI had scraped billions of images of people from the internet in what “represented mass surveillance and was a clear violation of the privacy rights of Canadians.”

Following that investigation, Clearview AI stopped providing services to the Canadian market, including the Royal Canadian Mounted Police. In light of these findings and the absence of dedicated legislation, the ETHI Committee began studying the uses of facial recognition technology in May 2021, and has recently resumed this work, focusing on its use by various levels of government in Canada, law enforcement agencies, and private corporations.

The CITP case study session on March 24 began with a presentation by Angelina Wang, a graduate affiliate of CITP, who gave a technical overview explaining the different functions of, and harms associated with, this technology. Following Wang’s presentation, I gave a regulatory overview of how U.S. lawmakers have addressed facial recognition, noting the different legislative strategies deployed for law enforcement, private-sector, and public-sector uses. We then had a substantive, free-flowing discussion with CITP researchers and the policymakers about the challenges and opportunities of different regulatory strategies.

Following CITP’s case study session, Wang and Dr. Elizabeth Anne Watkins, a CITP Fellow, were invited to testify before the ETHI Committee at an April 4 hearing. Wang discussed the different tasks facial recognition technology can and cannot perform, how the models are created, why they are susceptible to adversarial attacks, and the ethical implications of creating this technology. Dr. Watkins provided an overview of the privacy, security, and safety concerns related to private industry’s use of facial verification on workers, as informed by her research. The committee is expected to report its findings by the end of May 2022.

We continue to do research on how Canada might regulate facial recognition technology and will publish those analyses in the coming months.