May 9, 2021

Phone number recycling creates serious security and privacy risks to millions of people

By Kevin Lee and Arvind Narayanan

35 million phone numbers are disconnected every year in the U.S., according to the Federal Communications Commission. Most of these numbers are not out of service forever; after a waiting period, carriers reassign them to new subscribers. Over the years, these new subscribers have sometimes reported receiving calls and messages meant for previous owners, as well as discovering that their number is already tied to existing accounts online.

In this example from our study, the phone number (redacted in the screenshot) had a linked Facebook account but was still available to Verizon subscribers through the online number-change interface.

While these new-owner mix-ups may make for interesting dinner party stories, number recycling presents security and privacy risks as well. If a recycled number remains in a previous owner’s recovery settings for an online account, an adversary can obtain that number and break into the account. The adversary can also use the phone number to look up the previous owner’s other personally identifiable information (PII) online, and then impersonate that person using the number and the PII. These attacks have been the subject of anecdotes and speculation, but never thoroughly investigated.
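To make the hijacking scenario concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the number pool, the recovery index, and the lookup are stand-ins for steps an adversary would perform through a carrier’s number-change interface and websites’ password-recovery prompts.

```python
# A minimal, hypothetical sketch of the account-hijacking scenario. The
# number pool and recovery index below are invented; a real adversary would
# work through a carrier's number-change interface and websites'
# password-recovery prompts.

# Hypothetical: numbers a carrier currently offers to new subscribers.
AVAILABLE_NUMBERS = ["5551230001", "5551230002", "5551230003"]

# Hypothetical: numbers still listed as account-recovery contacts.
RECOVERY_INDEX = {"5551230002": "previous-owner@example.com"}

def find_vulnerable_numbers(available, recovery_index):
    """Return available numbers still linked to someone's account recovery."""
    return [n for n in available if n in recovery_index]

for number in find_vulnerable_numbers(AVAILABLE_NUMBERS, RECOVERY_INDEX):
    # An adversary could acquire this number, request an SMS password
    # reset, and receive the code intended for the previous owner.
    print(f"{number} is recycled and still linked to {RECOVERY_INDEX[number]}")
```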

In a new study, we empirically evaluated number recycling risks in the United States. We sampled 259 phone numbers available to new subscribers at two major carriers, and found that 215 of them were recycled and vulnerable to either account hijacking or PII indexing, the two scenarios described above. We estimated the inventory of available recycled numbers at one carrier to be about one million, with a largely fresh set of numbers becoming available every month. We also found design weaknesses in carriers’ online interfaces and number recycling policies that could facilitate number recycling attacks. Finally, we obtained 200 numbers from the two carriers and monitored incoming communication. In just one week, 19 of the 200 numbers in the honeypot were still receiving sensitive communication meant for previous owners, such as authentication passcodes and calls from pharmacies.

The adversary can focus on likely recycled numbers while ignoring possibly unused numbers.
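As a rough illustration of the honeypot monitoring step described above, a monitor might flag inbound messages that look sensitive. The keyword patterns below are invented for illustration; the study’s actual classification criteria are described in the paper.

```python
# Hypothetical sketch of flagging sensitive inbound SMS on honeypot numbers.
# The keyword patterns are invented for illustration; the study's actual
# classification criteria are described in the paper.

import re

SENSITIVE_PATTERNS = [
    r"(verification|security|one-time) code",
    r"\bpasscode\b",
    r"\b(pharmacy|prescription)\b",
]

def is_sensitive(message: str) -> bool:
    return any(re.search(p, message, re.IGNORECASE) for p in SENSITIVE_PATTERNS)

print(is_sensitive("Your verification code is 482913"))  # True
print(is_sensitive("Lunch tomorrow?"))                   # False
```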

Phone number recycling is a standard industry practice regulated by the FCC. There are only so many valid 10-digit phone numbers, and they are allocated to carriers in blocks for assignment to individual subscribers. Eventually, there will be no more blocks to allocate; when that happens, growth will essentially be capped. To prolong the usefulness of 10-digit dialing (think of all the systems that would need replacing if we suddenly switched to 11 digits!), the FCC not only imposes strict requirements on carriers requesting new blocks, but also instructs them to reassign numbers from disconnected subscribers to new subscribers after a set timeframe (45 to 90 days). Number recycling is one of the reasons we have been able to push this doomsday scenario back from 2005 to beyond 2050. It is also the reason vulnerable numbers, and number recycling threats, are so prevalent.
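The scarcity is easy to see with a back-of-the-envelope calculation. Assuming the standard NANP format NXX-NXX-XXXX, where N is any digit from 2 to 9 and X any digit from 0 to 9, the pool tops out at 6.4 billion numbers, and far fewer are actually assignable once reserved patterns are excluded:

```python
# Back-of-the-envelope upper bound on the 10-digit number pool under the
# North American Numbering Plan (NANP). Format: NXX-NXX-XXXX, where
# N = 2-9 and X = 0-9. Reserved patterns (e.g., N11 service codes) are
# ignored here, so the truly assignable pool is smaller.

area_codes = 8 * 10 * 10   # NXX: 800 possible area codes
exchanges  = 8 * 10 * 10   # NXX: 800 possible exchanges per area code
lines      = 10 ** 4       # XXXX: 10,000 line numbers per exchange

print(f"{area_codes * exchanges * lines:,}")  # 6,400,000,000
```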

In our paper, we recommend steps carriers, websites, and subscribers can take to reduce risk. For subscribers looking to change numbers, our primary recommendation is to park the old number and keep it as an inexpensive secondary line; doing so mitigates some of the threats from number recycling. Last October, we responsibly disclosed our findings to the carriers we studied and to CTIA, the U.S. trade association representing the wireless telecommunications industry. In December, both carriers responded by updating their number change support pages to clarify their number recycling policies and remind subscribers to update their online accounts after a number change. Although this is a step in the right direction, more work can be done by all stakeholders to illuminate and mitigate the issues.

Our paper draft is located at recyclednumbers.cs.princeton.edu.

New Research on Privacy and Security Risks of Remote Learning Software

This post and the paper are jointly authored by Shaanan Cohney, Ross Teixeira, Anne Kohlbrenner, Arvind Narayanan, Mihir Kshirsagar, Yan Shvartzshnaider, and Madelyn Sanfilippo. The work emerged from a case study at CITP’s tech policy clinic.

As universities rely on remote educational technology to facilitate the rapid shift to online learning, they expose themselves to new security risks and privacy violations. Our latest research paper, “Virtual Classrooms and Real Harms,” advances recommendations for universities and policymakers to protect the interests of students and educators.

The paper develops a threat model that describes the actors, incentives, and risks in online education. Our model is informed by a survey of 105 educators and 10 administrators, who identified their expectations and concerns. We use the model to conduct a privacy and security analysis of 23 popular platforms, combining sociological analyses of privacy policies and 129 state laws (available here) with a technical assessment of platform software.

Our threat model diagrams typical remote learning data flows. An “appropriate” flow is informed by established educational norms. The flow marked end-to-end encryption represents data that is not ordinarily accessible to the platform.

In the physical classroom, educational norms and rules prevent surreptitious recording of the classroom and automated extraction of data. But when classroom interactions shift to a digital platform, not only does data collection become much easier, but the social cues that discourage privacy harms also weaken, and participants are exposed to new security risks. Popular platforms, like Canvas, Piazza, and Slack, take advantage of this changed environment to act in ways that would be objectionable in the physical classroom, such as selling data about interactions to advertisers or other third parties. As a result, remote learning software severely tests the established informational norms of the educational context.
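As a rough sketch of how such a norms-based analysis can be made mechanical, consider encoding each data flow as a (sender, recipient, data type, transmission principle) tuple and checking it against a rule. The flows and the rule below are invented for illustration and are not taken from the paper’s dataset.

```python
# Minimal sketch of a norms-based check: encode data flows as tuples of
# (sender, recipient, data type, transmission principle) and flag flows
# that violate a classroom norm. The rule and flows are illustrative only.

from typing import NamedTuple

class Flow(NamedTuple):
    sender: str
    recipient: str
    data_type: str
    principle: str  # transmission principle, e.g. "required for grading"

# Hypothetical norm: classroom data is never sold or sent to advertisers.
def violates_classroom_norms(flow: Flow) -> bool:
    return flow.principle == "sold" or flow.recipient == "advertiser"

flows = [
    Flow("student", "educator", "assignment", "required for grading"),
    Flow("platform", "advertiser", "interaction data", "sold"),
]

for f in flows:
    verdict = "inappropriate" if violates_classroom_norms(f) else "appropriate"
    print(f"{f.sender} -> {f.recipient} ({f.data_type}): {verdict}")
```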

We analyze the privacy policies of 23 major platforms to find where those policies conflict with educational norms. For example, 41% of the policies permitted a platform to share data with advertisers, which conflicts with at least 21 state laws, while 23% allowed a platform to share location data. However, privacy policies are not the only documents that shape platform practices. Universities negotiate Data Protection Addenda (DPAs) for their institutional licenses, which supplement or even supplant the default privacy policy. We reviewed 50 DPAs from 45 universities and found that the addenda led platforms to significantly shift their data practices, including adopting stricter limits on data retention and use.

We also discuss the limitations of current federal and state regulation to address the risks we identified. In particular, the current laws lack specific guidance for platforms and educational institutions to protect privacy and security and have limited penalties for noncompliance. More broadly, the existing legal framework is geared toward regulating specific information types and a small subset of actors, rather than specifying transmission principles for appropriate use that would be more durable as the technology evolves.

What can be done to better protect students and educators? We offer the following five recommendations:

  1. Educators should understand that there are significant differences between free (or individually licensed) versions of software and institutional versions. Universities should inform educators about those differences and encourage them to use institutionally supported software.
  2. Universities should use their ability to negotiate DPAs and institute policies to make platforms modify their default practices that are in tension with institutional values.
  3. Crucially, universities should not spend all their resources on a complex vetting process before licensing software. That path leads to significant usability problems for end users without addressing the security and privacy concerns. Instead, universities should recognize that significant user issues tend to surface only after educators and students have used the platforms, and should create processes to collect those issues and get developers to fix the problems rapidly.
  4. Universities should establish clear principles for how software should respect the norms of the educational context and require developers to offer products that let them customize the software for that setting.
  5. Federal and state regulations can be improved by making platforms more accountable for compliance with legal requirements, and giving institutions a mandate to require baseline security practices, much like financial institutions have to protect consumer information under the Federal Trade Commission’s Safeguards Rule.

The shift to virtual learning already requires many sacrifices from educators and students. As we integrate these new learning platforms into our educational systems, we should ensure that they reflect established educational norms and do not force users to sacrifice usability, security, or privacy.

We thank the members of Remote Academia and the university administrators who participated in the study. Remote Academia is a global, Slack-based community that gives faculty and other education professionals a space to share resources and techniques for remote learning. It was created by Anne, Ross, and Shaanan.

Vulnerability reporting is dysfunctional

By Kevin Lee, Ben Kaiser, Jonathan Mayer, and Arvind Narayanan

In January, we released a study showing the ease of SIM swaps at five U.S. prepaid carriers.  These attacks—in which an adversary tricks telecoms into moving the victim’s phone number to a new SIM card under the attacker’s control—divert calls and SMS text messages away from the victim. This allows attackers to receive private information such as SMS-based authentication codes, which are often used in multi-factor login and password recovery procedures. 

We also uncovered 17 websites that use SMS-based multi-factor authentication (MFA) and SMS-based password recovery simultaneously, leaving accounts open to takeover via a SIM swap alone: an attacker who controls the victim’s phone number can reset the account password and then answer the SMS security challenge at login. We responsibly disclosed the vulnerabilities to those websites in early January, urging them to disallow this configuration. Throughout the process, we encountered two broader issues: (1) a lack of security reporting mechanisms, and (2) a general misunderstanding of authentication policies. As a result, 9 of these 17 websites, listed below, remain vulnerable by default.
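The underlying flaw is a policy interaction rather than a software bug: when SMS can both reset the password and satisfy the login challenge, possession of the phone number alone defeats both checks. Here is a minimal sketch of a check for this configuration, using an illustrative policy schema rather than any website’s actual settings:

```python
# Sketch of a policy-level check: an account can be taken over via SIM swap
# alone when SMS both recovers the password and serves as the second factor.
# The policy schema below is illustrative, not any website's real settings.

from dataclasses import dataclass

@dataclass
class AuthPolicy:
    site: str
    recovery_methods: frozenset  # ways to reset a forgotten password
    second_factors: frozenset    # ways to pass the login challenge

def vulnerable_to_sim_swap(p: AuthPolicy) -> bool:
    # With the victim's number, an attacker receives both the password-reset
    # code and the MFA code, so no other credential is ever required.
    return "sms" in p.recovery_methods and "sms" in p.second_factors

policies = [
    AuthPolicy("example-a.com", frozenset({"sms", "email"}), frozenset({"sms"})),
    AuthPolicy("example-b.com", frozenset({"email"}), frozenset({"totp"})),
]

for p in policies:
    if vulnerable_to_sim_swap(p):
        print(f"{p.site}: a SIM swap alone suffices for account takeover")
```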

Disclosure Process. On each website, we first looked for email addresses dedicated to vulnerability reporting; if none existed, we looked for the companies on bug bounty platforms such as HackerOne. If we were unable to reach a company through a dedicated security email or through bug bounty programs, as a last resort, we reached out through customer support channels. Sixty days after our reports, we re-tested the configurations at the companies, except for those that reported that they had fixed the vulnerabilities.

Outcomes. Three companies—Adobe, Snapchat, and eBay—acknowledged and promptly fixed the vulnerabilities we reported. In one additional case, the vulnerability was fixed, but only after we had exhausted the three contact options and reached out to company personnel via a direct message on Twitter. In three cases—Blizzard, Microsoft, and TaxAct—our vulnerability report did not produce the intended effect (Microsoft and TaxAct did not understand the issue; Blizzard sent only a generic acknowledgment email), but in our 60-day re-test we found that the vulnerabilities had been fixed, without the companies notifying us. We therefore do not know whether the fixes were made in response to our research.

Among the responses we received, there were several failure modes, which were not mutually exclusive. 

  • In five cases, personnel did not understand our vulnerability report, despite our attempts to make it as clear as possible (see Appendix B of our paper). Three of them—Microsoft, PayPal, and Yahoo—demonstrated knowledge of SIM swap attacks, but did not realize that their SMS authentication policies were leaving accounts vulnerable. PayPal, for instance, closed our report as out-of-scope, claiming that “the vulnerability is not in Paypal, as you mentioned this is an issue with the carriers and they need to fix it on their side.” While phone number hijackings are the result of poor customer authentication procedures at the carriers, account hijackings resulting from SMS passcode interception are the result of poor authentication policies at websites. The remaining two websites—TaxAct and Gaijin Entertainment—misinterpreted our disclosure as a feature request and feedback, respectively.
  • Three of the four reports we submitted to third-party bug bounty programs were disregarded due to the absence of a bug (our findings are not software errors, but rather logically inconsistent customer authentication policies). Reports are screened by employees of the program, who are independent of the website, and passed on to the website’s security team if determined to be in scope. These third-party platforms appear to be overly strict in their triage criteria, preventing qualified researchers from communicating with the companies. This issue is not unique to our study: a few weeks ago, security researchers also reported difficulties submitting vulnerability reports to PayPal, which uses HackerOne as its sole security reporting mechanism. HackerOne restricts users from submitting future reports after too many of their reports are closed, which could discourage researchers from reporting legitimate vulnerabilities.
  • In five cases, we received no response. 
  • All four attempts to report security vulnerabilities through customer support channels were fruitless: either we received no response or personnel did not understand the issue.   

We have listed all 17 responses in the table below. Unfortunately, nine of these websites use SMS-based MFA and SMS-based password recovery by default and remain so as of this writing. Among them are payment services PayPal and Venmo. The vulnerable websites cumulatively have billions of users. 

Recommendations

We recommend that companies make the following changes to their vulnerability response:

  1. Companies need to recognize that policy-related vulnerabilities are very real, and should use threat modeling to detect them. There appears to be a general lack of awareness of vulnerabilities arising from weak authentication policies.
  2. Companies should provide direct contact methods for security reporting procedures. A bug bounty program is not a substitute for a robust security reporting mechanism, yet some companies are using it as such. Furthermore, customer support channels—whose personnel are unlikely to be trained to respond to security vulnerability disclosures—add a level of indirection and can lead to vulnerability reports being forwarded to inappropriate teams.  

Our paper, along with our dataset, is located at issms2fasecure.com.

Thanks to Malte Möser for providing comments on a draft.