May 7, 2021

Phone number recycling creates serious security and privacy risks to millions of people

By Kevin Lee and Arvind Narayanan

35 million phone numbers are disconnected every year in the U.S., according to the Federal Communications Commission. Most of these numbers are not disconnected forever; after a while, carriers reassign them to new subscribers. Over the years, these new subscribers have sometimes reported receiving calls and messages meant for previous owners, as well as discovering that their number is already tied to existing accounts online.

In this example from our study, the phone number (redacted in the screenshot) had a linked Facebook account but was available to Verizon subscribers through the online number-change interface.

While these mix-ups may make for interesting dinner-party stories, number recycling presents security and privacy risks as well. If a recycled number remains in a previous owner’s recovery settings for an online account, an adversary who obtains that number can break into the account. The adversary can also use that phone number to look up your other personally identifiable information (PII) online, and then impersonate you with that number and PII. These attacks have been discussed anecdotally and speculatively, but never thoroughly investigated.
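To make the recovery weakness concrete, here is a toy simulation of SMS-based password recovery (a minimal sketch; all class and account names are hypothetical, not any real service’s API). The service verifies possession of the phone number, not the identity of the person holding it, so whoever currently holds a recycled number inherits the previous owner’s recovery power.

```python
# Toy model of SMS-based account recovery under number recycling.
# All names are hypothetical; the point is that the service checks
# possession of the number, not the identity of its holder.

class ToyService:
    def __init__(self):
        self.recovery_number = {}   # account -> recovery phone number on file
        self.current_holder = {}    # phone number -> current subscriber

    def sms_password_reset(self, account, requester):
        """Return True if the requester receives the one-time code,
        i.e., if they currently hold the account's recovery number."""
        number = self.recovery_number[account]
        return self.current_holder.get(number) == requester

svc = ToyService()
svc.recovery_number["victim@example.com"] = "609-555-0100"
svc.current_holder["609-555-0100"] = "victim"      # before disconnection
svc.current_holder["609-555-0100"] = "adversary"   # number recycled; settings never updated

print(svc.sms_password_reset("victim@example.com", "adversary"))  # True: account hijacked
```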

In a new study, we empirically evaluated number recycling risks in the United States. We sampled 259 phone numbers available to new subscribers at two major carriers, and found that 215 of them were recycled and vulnerable to either account hijacking or PII indexing, the two scenarios described above. We estimated the inventory of available recycled numbers at one carrier to be about one million, with a largely fresh set of numbers becoming available every month. We also found design weaknesses in carriers’ online interfaces and number recycling policies that could facilitate these attacks. Finally, we obtained 200 numbers from the two carriers and monitored incoming communication. In just one week, 19 of the 200 numbers in our honeypot were still receiving sensitive communication meant for previous owners, such as authentication passcodes and calls from pharmacies.
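A quick back-of-the-envelope extrapolation from these figures illustrates the scale (this assumes, as a simplification, that our sample rate carries over to the full inventory; see the paper for the actual methodology):

```python
# Extrapolating the sampled vulnerability rate to the estimated inventory.
# Assumes the sample is representative of the full inventory (a simplification).

sampled    = 259          # available numbers sampled at two major carriers
vulnerable = 215          # recycled and vulnerable to hijacking or PII indexing
inventory  = 1_000_000    # estimated available recycled numbers at one carrier

rate = vulnerable / sampled
print(f"vulnerable fraction of sample: {rate:.0%}")                    # ~83%
print(f"implied vulnerable numbers at one carrier: {rate * inventory:,.0f}")
```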

The adversary can focus on likely recycled numbers while ignoring possibly unused numbers.

Phone number recycling is a standard industry practice regulated by the FCC. There are only so many valid 10-digit phone numbers, which are allocated to carriers in blocks to individually assign to their subscribers. Eventually, there will be no more blocks to allocate to carriers; when that happens, expansion will essentially be capped. To prolong the usefulness of 10-digit dialing (think of all the systems that would need replacing if we suddenly switched to 11 digits!), the FCC not only has strict requirements for carriers requesting new blocks, but also instructs them to reassign numbers from disconnected subscribers to new subscribers after a certain timeframe (45 to 90 days). Number recycling is one of the reasons we have been able to push this doomsday scenario back from 2005 to beyond 2050. It is also the reason vulnerable numbers, and number recycling threats, are so prevalent.
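For a sense of why exhaustion looms, here is a rough capacity calculation. Under the North American Numbering Plan, numbers take the form NXX-NXX-XXXX, where N is any digit 2-9 and X is any digit 0-9; this sketch ignores further reserved patterns (N11 service codes and the like), so it overstates the truly assignable supply:

```python
# Upper bound on valid 10-digit numbers under the North American Numbering
# Plan (NXX-NXX-XXXX, N in 2-9, X in 0-9). Reserved patterns such as N11
# service codes are ignored, so the real assignable count is smaller.

N = 8                    # digits 2 through 9
X = 10                   # digits 0 through 9

area_codes = N * X * X   # 800 possible area codes
exchanges  = N * X * X   # 800 possible central-office codes per area code
lines      = X ** 4      # 10,000 line numbers per exchange

print(f"{area_codes * exchanges * lines:,}")   # 6,400,000,000
```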

In our paper, we recommend steps carriers, websites, and subscribers can take to reduce risk. For subscribers looking to change numbers, our primary recommendation is to park the number to use as an inexpensive secondary line. By doing so, subscribers can mitigate some of the threats from number recycling. Last October, we responsibly disclosed our findings to the carriers we studied and to CTIA—the U.S. trade association representing the wireless telecommunications industry. In December, both carriers responded by updating their number change support pages to clarify their number recycling policies and remind subscribers to update their online accounts after a number change. Although this is a step in the right direction, more work can be done by all stakeholders to illuminate and mitigate the issues.

Our paper draft is located at recyclednumbers.cs.princeton.edu.

Internet Voting is Still Inherently Insecure

Legislation for voting by internet is pending in Colorado, and other states have been on the verge of permitting ballots to be returned by internet.

But voting by internet is too insecure, too hackable, to use in U.S. elections.  Every scientific study comes to the same conclusion—the Defense Department’s study group in 2004, the National Academy of Sciences in 2018, and others.  Although the internet has evolved, the fundamental insecurities are the same: insecure client computers (your PC or phone), insecure servers (that collect the votes), and Americans’ lack of universal digital credentials.

Vendors of internet voting systems claim it’s different now:  they claim “online voting” is not “internet voting”; they say smartphones are not PCs, cloud-computing systems are more secure than privately hosted servers, dedicated apps are not web sites, and because blockchain.  So let’s examine the science.  Of course “online voting” is internet voting: your smartphones and laptops connect to servers and cloud servers through the public packet-switched network; even the phone network these days is part of the internet.  And if the voter sends a ballot electronically to an election office that prints and counts it, that’s certainly not a “paper ballot” in the sense that a voter can check what’s printed on it.

Smartphones are client computers on that same internet.  Smartphone operating systems (Apple’s iOS and Google’s Android) have improved their security in recent years, but serious new exploitable vulnerabilities are continually discovered: about 25 per year in iOS (2018-2020) and 103 per year in Android.  And there are an unknown number of undiscovered vulnerabilities that attackers may be exploiting.  If you prepare a ballot on your smartphone voting for candidate Smith, you cannot be sure whether a hacker has caused your voting app to transmit instead a vote for Jones.

Major cloud-computing providers such as AWS and Azure do a good job of securing their systems for the companies that they “host” (banks, retailers, voting apps).  But a bank or voting-app maker must write their own software to run in that cloud.  It’s difficult to get that software right, and bugs can lead to exploitable vulnerabilities that a hacker could use to change votes as they arrive.  AWS is not some sort of magical pixie dust that one sprinkles on software to make it unhackable.  Blockchain doesn’t help either: the vote can be hacked before it even gets into the blockchain.

We have no system of unforgeable digital credentials that we can give to every voter to authenticate their voting transaction.   In practice, internet-voting products marketed in 2020 (from Voatz and Democracy Live) contracted out digital authentication to privacy-invasive third-party companies who asked voters to hold up their driver’s license next to their face and take a picture, or captured “browser fingerprints” tracking personal information about the voter’s Web usage—revealing this and much other private information about the voter and the voter’s votes to these unaccountable third-party companies.   Traffic in stolen credentials would seriously compromise elections.

We still do online banking and shopping.  But banks have control over to whom they issue credit cards; they can suspend a credit card at any instant if they suspect fraud; they can decide what percentage of fraud to tolerate, balancing it against convenience.  And most important, every individual transaction is traceable and auditable.  With voting, none of those conditions hold.  You have the right to a secret ballot, with an assurance that the system doesn’t know whom you voted for.

The groups pressing hardest for internet voting are national organizations representing voters with disabilities.  They want voters with visual impairments or motor disabilities to be able to vote independently and conveniently from home.  Indeed, although every polling place (by federal law since 2002) has an “accessible” voting-machine to accommodate voters with disabilities, many of those machines are so ill-designed that they are accessible in name only.  We need better technology for such voters, and it’s worth investing in it.  There really are better accessible voting machines on the market for use in polling places and early vote centers, and more research would help too.  But we must not let wishful thinking lead us into hackable internet voting.   Wishing that internet voting could be made secure is not a justification for implementing it.  And in fact, surveys of voters with disabilities show that the vast majority want to vote on paper.

The clear consensus of computer scientists and cybersecurity experts is that paperless voting systems cannot be made sufficiently secure for use in public elections.  Paper ballots are our only practical choice—countable by machine, recountable by hand in case the machines were hacked or misconfigured, and auditable by hand to detect whether a recount is warranted.

Juan Gilbert’s Transparent BMD

Princeton’s Center for Information Technology Policy recently hosted a talk by Professor Juan Gilbert of the University of Florida, in which he demonstrated his interesting new invention and presented results from user studies.

What’s the problem with ballot-marking devices?

It’s well known that a voting system must use paper ballots to be trustworthy (at least with any known or foreseeable technology). But how should voters mark their ballots? Hand-marked paper ballots (HMPB) allow voters to fill in ovals with a pen, to be counted by an optical scanner. Ballot-marking devices (BMDs) allow voters to use a touchscreen (or other assistive device) and then print out a ballot card listing the voter’s choices.

The biggest problem with BMDs is that most voters don’t check the ballot card carefully, so if a hacked BMD misrepresented votes on the paper, most voters wouldn’t notice–and even if a few voters did notice, the BMD would have successfully stolen the votes of many other voters.

One scientific study (not in a real election) showed that some process interventions–such as reminding voters to check their ballots–might improve the rate at which voters check their ballots. I am skeptical that those kinds of interventions will be consistently applied in thousands of polling places, or that voters will stay vigilant year after year. And even if the rate of checking can be improved from 6.6% to 50%, there’s still no clear remedy that can protect the outcome of the election as a whole.

The transparent BMD

Instead of relying on reminders, Professor Gilbert’s solution forces the voter to look directly at the printout, immediately after voting each contest. In this video, at 0:36, see how the voter is asked to touch the screen directly in front of the spot on the paper where the vote was just printed.

Voter’s finger confirming a printed-on-paper vote by touching the screen directly in front of where the vote was printed.

He explains more, including his user studies, in the CITP seminar he presented at Princeton. When the BMD deliberately printed one vote wrong on the paper ballot (out of 12 contests on the ballot), 36% of voters noticed and said something about it–and another 41% noticed but didn’t say anything until asked. This is a significantly higher rate of detection than with conventional BMDs. Hypothetically, if those 41% could somehow be prompted to speak up, there would be a 77% rate at which voters detect and correct fraudulent vote-flipping.

Somehow, this physically embodied intervention seems more consistently effective than one that requires sustained cooperation from election administrators, poll workers, and voters–all of whom are only human.

Would this make BMDs safe to use?

Recall what the problem is: if the BMD cheats on a fraction X of the votes in a certain contest, and only a fraction Y of the voters check their ballot carefully, and only a fraction Z of those will actually speak up, then only X*Y*Z of the voters will speak up. In a very close election, X might be 1/100, Y has been measured as 1/15, and Z might be 1/2, so XYZ=1/3000. Professor Gilbert has demonstrated that (with the right technology) Y can be improved to 77% (call it 3/4), but Z is still about 1/2. Suppose further tinkering could improve Z to 3/4; then XYZ would be about 1/178. That is, if the hacked BMD attempted to steal 1% of the votes, then 9/16 of those voters would notice and speak up (asking the pollworkers for a do-over), so the net rate of theft would be only 7/16 of 1%, or about half a percent.
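A few lines of Python make that arithmetic explicit (the inputs are the illustrative figures from the paragraph above, not new measurements):

```python
# X = fraction of votes the hacked BMD flips, Y = fraction of voters who
# notice the misprint, Z = fraction of noticers who actually speak up.

def outcome(x, y, z):
    speak_up  = x * y * z          # fraction of all voters who report a flip
    net_theft = x * (1 - y * z)    # flipped votes that go uncorrected
    return speak_up, net_theft

# Conventional BMD: Y measured at about 1/15 (6.6%), Z guessed at 1/2.
speak, _ = outcome(1/100, 1/15, 1/2)
print(f"1 in {1/speak:,.0f} voters speaks up")                      # 1 in 3,000

# Transparent BMD with improved prompts: Y ~ 3/4, Z ~ 3/4.
speak, theft = outcome(1/100, 3/4, 3/4)
print(f"1 in {1/speak:,.0f} voters speaks up")                      # 1 in 178
print(f"net theft rate: {theft:.4%}")                               # 0.4375% of all votes
print(f"in a 3-million-voter state: {3_000_000 * speak:,.0f} do-overs")  # ~16,875
```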

And in that hypothetical scenario, one voter out of every 178 would have asked for a do-over, saying “what printed on the paper isn’t what I selected on the touchscreen.” That’s (perhaps) two or three in every medium-size polling place–or, in a statewide election with 3 million voters, that’s more than 16,000 voters speaking up. If that happened, and if the margin of victory is less than half-a-percent, then what should the Secretary of State do?

The answer is still not clear. You can read this to see the difficulty.

So, the Transparent BMD is a really interesting research advance and a really good design idea, and Professor Gilbert’s user studies are professionally done. But further research is needed to figure out how such machines could safely be used in real elections.

And there’s still no excuse for using conventional BMDs, with their abysmal rate at which voters check their ballot papers, as the default mode for all voters in a public election.


Further caveats. These considerations bear on the practical security of “transparent BMDs” in real elections, and are worth further study.

  1. If a voter speaks up and says “the machine changed my vote”, will the local pollworkers respond appropriately? Suppose there have been many elections in a row where the voting machines haven’t been hacked (which we certainly hope is the case!); then whatever training the pollworkers are supposed to have may have been omitted or forgotten.
  2. When analyzing whether a new physical design is more secure, one must be careful to assume that the hacker can install software that can behave any way that the hardware is capable of. Just to take one example, suppose the hacked BMD software is designed to behave like a conventional BMD: first accept all the voter’s choices, then print (without forcing the voter to touch the screen where the gaze is directed to the just-printed candidate). This gives the opportunity to deliberately misprint in a way that we know voters don’t detect very well. But would voters know that the BMD is not supposed to behave this way? I pose this just as an example of how to think about the “threat model” of voting machines.
  3. Some voters who noticed the machine cheating but didn’t speak up in the study claimed afterward that in a real polling place they would have spoken up–really? In real life, there are many occurrences of voters seeing something they feel is wrong at the polling place, but waiting until they get home before calling someone to talk about it. Many people feel a bit intimidated in situations like this. So it’s difficult to translate what people say they will do into what they will really do.
  4. Professor Gilbert suggests (in his talk) that he’ll change the prompt from “Please review your selection below. Touch your selection to continue.” to something like “Please review your selection below. If it is correct, touch it. If it is wrong, please notify a pollworker.” This does seem like it would improve the rate at which voters would report errors. It will be interesting to see.