
A review of the FVAP UOCAVA workshop

The US Federal Voting Assistance Program (FVAP) is the Department of Defense agency charged with assisting military and overseas voters with all aspects of voting, including registering to vote, obtaining ballots, and returning ballots. FVAP’s interpretation of Federal law (*) says that it must perform a demonstration of electronic return of marked ballots by overseas military voters (**) in the first Federal election that occurs one year after the adoption of guidelines by the US Election Assistance Commission. Since the EAC hasn’t adopted such guidelines yet (and isn’t expected to for at least another year or two), the clock hasn’t started ticking, so a 2012 demonstration is impossible and a 2014 demonstration looks highly unlikely. Hence this isn’t a matter of imminent urgency; however, such systems are complex, and FVAP is trying to get the ball rolling on what such a system would look like.

As has been discussed previously on this blog, nearly all computer security experts are very concerned about the prospect of marked ballot return over the internet (which we will henceforth refer to as “internet voting”). Issues include vulnerability of client computers, issues with auditability, concerns about usability and coercion, etc. On the flip side, many states and localities are marching full steam ahead on their own internet voting systems, generally ignoring the concerns of computer scientists, and focusing on the perceived greater convenience and hoped-for increased turnout. Many of these systems include email return of marked ballots, which computer scientists generally consider to be even riskier than web-based voting.

FVAP has been caught between the legal mandates and the technical experts. In an effort to break this logjam, they’ve organized a series of open fora – first in August 2010 just before USENIX Security in Washington DC, then in March 2011 just before the Election Verification Network workshop in Chicago IL, and last weekend just before USENIX Security in San Francisco CA. All three brought together representatives from FVAP, voting system vendors, election officials, computer scientists, and voting activists to discuss the issues. Several of the Freedom To Tinker bloggers have been present at all three meetings, and have been frustrated that the first two ended at an impasse – computer scientists saying “it doesn’t work” and FVAP (and others) saying “we need a solution anyway”.

Fortunately, the third meeting concluded in a far more constructive way. While all agree there are significant impediments, a consensus was reached that the best solution is a multi-stage competition, in much the same fashion as the National Institute of Standards and Technology (NIST) did for the Advanced Encryption Standard (AES) and is now performing for the Secure Hash Algorithm 3 (SHA-3).

Every phase of the competition will be completely open, which all expect to be at least somewhat controversial, since some organizations (such as vendors) will want to protect their intellectual property. All submissions will be shared with the public, and competing teams will be encouraged to critique each other’s submissions. In earlier phases the critiques will focus on the paper requirements and designs; in later phases they may include finding vulnerabilities in architectures and implementations. Submitters may claim patent and/or copyright on their submissions, but they must grant the public (including competitors) the right to use the submissions for analysis – including compiling, testing, and modifying software – for testing purposes. (However, submitters may preclude such use for production or resale purposes.) Thus, trade secrets will be precluded from the competitive process.

The competition will have three phases, each of which may include one or more iterations.

  • In the first phase (which, as computer scientists, we naturally named “round 0”), submissions will focus on requirements for internet voting systems. Submitters will define characteristics that must be met in the following phases. Submissions may also include use cases for which the requirements are applicable – for example, requirements that could apply in environments where all voters have smart cards, such as the US military. As described above, submissions will be open to the public, and anyone (especially other submitters) will be encouraged to critique them to identify the best aspects of each. At the conclusion of this round, FVAP will (possibly with the assistance of government experts) consolidate the requirements into a single set that will govern the following phase.
  • In the second phase (“round 1”), submissions will provide high level designs and detailed hardware and software architectures, along with procedures necessary for secure operation. The submissions for this round need to be detailed enough that a reasonably skilled person could implement a realization of the system, although many details such as user interfaces and database layouts will be undefined. As with the first phase, submissions will be open for critique. In this phase critiques will focus on identifying areas where designs do not meet the requirements defined in the first phase. The result may be modification of architectures to incorporate ideas from several teams. At the conclusion of this phase, FVAP will (again with assistance from government experts) narrow down the set of acceptable architectures. Or perhaps not – if no architecture is good enough to satisfy the requirements, FVAP may conclude that the experiment should not be run (and cancel the third phase).
  • In the third phase (“round 2”) submitters will create implementations of one or more of the architectures (perhaps even adopting architectures from other teams, if licensing terms permit). During the critique period, teams will seek to find security vulnerabilities in other implementations, and fix problems identified in their own implementation. Usability testing should be part of this phase, as systems too complex for voters to use effectively (even if secure) need to be identified and improved. At the conclusion of this phase, FVAP will identify one or more implementations that are adequate for meeting their demonstration project requirements. Or perhaps not – if no implementation is good enough, FVAP may conclude that the experiment should not be run.

What happens if there is no acceptable solution at the conclusion of the second or third phase? That’s possible – and if it happens, it may be cause for FVAP to ask Congress to modify its charter and eliminate the requirement for electronic return of marked ballots. If the best minds in the country conclude that internet voting is the equivalent of a perpetual motion machine, no amount of laws and regulations will make it work.

How long will all this take? We estimate the entire process will take three or four years, allowing time for FVAP to publish a solicitation, organizations to create submissions, the public critique periods, FVAP’s consolidation and decision making, and transitions to the next phase.

In the meantime, there’s little doubt that some states will continue to move forward with their existing insecure solutions. We believe – and expect that most other computer scientists will agree – that this is a case where science should be allowed to take its course before moving into implementation. We hope that FVAP will speak out publicly against such ill-advised experiments.

For now, we look forward to working with FVAP in realizing the first ever national internet voting competition.

(*) While there is some disagreement on interpretation of the law, since I’m not a lawyer and hence not competent to determine the accuracy of that interpretation, this blog entry presumes that the FVAP interpretation is correct.

(**) The term “military and overseas voters” means both military voters stationed away from their legal home (e.g., at a base in another state or overseas) and civilians living overseas (whether on a temporary basis such as contractors or on a permanent basis). Thus this includes people working for organizations like Peace Corps and embassies as well as expatriates. However, the FVAP mandate for internet voting only applies to overseas military voters, and not domestic military voters or overseas civilians.

Edited Aug 13 @ 12:17pmET: Changed first footnote to explain that I’m not a lawyer and hence not interpreting the law.

Edited Aug 15 @ 1:08pmET: Corrected name of EVN workshop.

Yet again, why banking online .NE. voting online

One of the most common questions I get is “if I can bank online, why can’t I vote online?” A recently released (but undated) document, “Supplement to Authentication in an Internet Banking Environment”, from the Federal Financial Institutions Examination Council addresses some of the risks of online banking. Krebs on Security has a nice writeup of the issues, noting that the guidelines call for ‘layered security programs’ to deal with these riskier transactions, such as:

  1. methods for detecting transaction anomalies;
  2. dual transaction authorization through different access devices;
  3. the use of out-of-band verification for transactions;
  4. the use of ‘positive pay’ and debit blocks to appropriately limit the transactional use of an account;
  5. ‘enhanced controls over account activities,’ such as transaction value thresholds, payment recipients, the number of transactions allowed per day and allowable payment days and times; and
  6. ‘enhanced customer education to increase awareness of the fraud risk and effective techniques customers can use to mitigate the risk.’

[I’ve replaced bullets with numbers in Krebs’ posting in the above list to make it easier to reference below.]

So what does this have to do with voting? Well, look at the items in turn and consider how you’d apply each to a voting system:

  1. One could hypothesize doing this – if 90% of the people in a precinct vote R or D, that’s not a good sign – but by then it’s too late to do much. Suggesting that there be personalized anomaly detectors (e.g., “you usually vote R but it looks like you’re voting D today, are you sure?”) would not be well received by most voters! (A sketch of what precinct-level detection might look like appears after this list.)
  2. This is the focus of a lot of work – but it increases the effort for the voter.
  3. Same as #2, but we have to be careful not to make it too hard for the voter! See SpeakUp: Remote Unsupervised Voting for an example of how this might be done.
  4. I don’t see how this would apply to voting, although in places like Estonia, where you’re allowed to vote more than once (but only the last vote counts), one could imagine limiting the number of votes that can be cast with one ID. Limiting the number of votes from a single IP address is a natural application – but since many ISPs use the same (or a few) IP addresses for all of their customers thanks to NAT, this would disenfranchise those customers.
  5. “You don’t usually vote in primaries, so we’re not going to let you vote in this one either.” Yeah, right!
  6. This is about the only one that could help – and try doing it on the budget of an election office!
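
To make point 1 concrete, here’s a minimal sketch (in Python, with made-up precinct names and an arbitrary threshold) of what after-the-fact precinct anomaly detection might look like. As noted in the list, by the time such a detector can run, the votes have already been cast.

```python
# Toy precinct-level anomaly detector. The precinct names, vote shares,
# and the 15-point threshold are all hypothetical.

def anomalous_precincts(history, current, threshold=0.15):
    """Flag precincts whose share of votes for one party moved more than
    `threshold` away from that precinct's historical baseline."""
    flagged = []
    for precinct, past_share in history.items():
        share = current.get(precinct)
        if share is not None and abs(share - past_share) > threshold:
            flagged.append(precinct)
    return flagged

history = {"P1": 0.52, "P2": 0.47, "P3": 0.48}  # past share voting R
current = {"P1": 0.55, "P2": 0.44, "P3": 0.90}  # this election
print(anomalous_precincts(history, current))    # ['P3'] - suspicious, but too late
```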

Unsaid, but clearly implied by the financial industry’s list, is that the goal is to reduce fraud to a manageable level. I’ve heard that 1% to 2% of online banking transactions are fraudulent, and at that level it’s clearly not putting banks out of business (judging by profit numbers). However, whether we can accept as high a level of fraud in voting as in banking is another question.

None of this is to criticize the financial industry’s efforts to improve security! Rather, it’s to point out that try as we might, just because we can bank online doesn’t mean we should vote online.

Don't love the cyber bomb, but don't ignore it either

Cybersecurity is overblown – or not

A recent report by Jerry Brito and Tate Watkins of George Mason University titled “Loving The Cyber Bomb? The Dangers Of Threat Inflation In Cybersecurity Policy” has gotten a bit of press. This is an important topic worthy of debate, but I believe their conclusions are incorrect. In this posting, I’ll summarize their report and explain why I think they’re wrong.

Brito & Watkins (henceforth B&W) argue that the cyber threat is exaggerated, and that it is being driven by private industry anxious to feed at the public trough, in a manner similar to the creation of the military-industrial complex in the second half of the 20th century as an outgrowth of the Cold War.

The paper starts by describing how deliberate misinformation in the run-up to the Iraq war shows how public opinion can be manipulated by policy makers and private industry trying to sell a threat. My opinion of the Iraq war is not relevant to this discussion, but I believe they’re using it to create a strawman which they then knock down.

Next, B&W use the CSIS Commission Report on Cybersecurity for the 44th Presidency and Richard Clarke’s “Cyber War” to argue that the threat of cyber conflict has been overblown. With regard to the former, they criticize the confusion of probes (port scans) with real attacks, and argue that probes are not evidence of an attack or breach but more akin to doorknob rattling. While that’s certainly true (and an analogy that’s been made for years), if your doorknob is rattled thousands of times a day it’s a strong indication that you’re living in a bad neighborhood! They then note that there’s little unclassified proof of real threats, and hence that the call for regulation by CSIS (and others) is inappropriate. Unfortunately, quantitative proof is hard to come by, but there are enough incidents that there can be little doubt as to the severity of the threat. Requiring quantitative data before we move to protection would be akin to demanding an open and accurate assessment of the number of foreign spies and the damage they do before we fund the CIA! Instead, we rely on experts in spycraft to assess the threat and help define appropriate defenses. In the same way, we should rely on cybersecurity experts to provide an assessment of the risks and appropriate actions.

I certainly agree with both CSIS and B&W that overclassification of the threats works to our detriment – if the public is unable to see the threat, it becomes hard to justify spending to defend against it. I’ve personally seen this in the commercial software industry, where the inability to provide hard data about cyber threats to senior management results in the threat being discounted, with consequent risk to businesses. But again, the problems with overclassification do not mean the problem doesn’t exist.

Regarding Clarke’s book, there’s been plenty of criticism of both its technical inaccuracies and its somewhat hysterical tone. Those notwithstanding, Clarke generally has a good understanding of the types of threats and the risks. B&W’s claim that the only verifiable attacks are DDoS is simply untrue – there have been verified attacks against infrastructure like water systems, although some of the claimed attacks were other types of failures that could have been cyber-related, but weren’t. As an example, while Clarke claims that the northeast power blackout of 2003 was cyber-related, there’s adequate evidence that it was not – but there’s also adequate evidence that such an accidental failure could be caused by a deliberate attack. Similarly, the 2010 stock market “flash crash” was not caused by a cyber attack, but it demonstrates the fragility of modern highly computerized systems, and shows that a cyber attack could cause similar symptoms. That which can happen by accident can also happen intentionally, if an adversary desires.

As for B&W’s analogy to the military-industrial complex that President Eisenhower so famously feared, and the increasing influence of “cyberpork”, I must reluctantly agree. Large defense contractors have, in recent years, flocked to cyber as it has become trendy and large budgets have become attractive, frequently more concerned with revenue than with solving problems. However, the problems existed (and were being discussed by researchers and practitioners) long before the influx of government contractors. The fact that they’re trying to make money off the problem doesn’t mean the problem doesn’t exist.

The final section of the paper, covering regulatory issues, has some good points, but it is so poisoned by the assumptions in the earlier sections of the paper that it’s hard to take seriously.

To summarize, we should distinguish between the existence of the problem (which is real and growing) versus the desire of some government contractors to cash in – the fact that the latter is occurring does not deny the reality of the former.

Oak Ridge, spear phishing, and i-voting

Oak Ridge National Labs (one of the US national energy labs, along with Sandia, Livermore, Los Alamos, etc.) had a bunch of people fall for a spear phishing attack (see articles in Computerworld and many other descriptions). For those not familiar with the term, spear phishing is sending targeted emails to specific recipients, designed to get them to take an action (e.g., click on a link) that will install some form of software (e.g., to allow stealing information from their computers). This is distinct from spam, where the goal is primarily to get you to purchase pharmaceuticals, or maybe install software, but which in any case is widespread and not targeted at particular victims. Spear phishing is the same technique used in the Google Aurora (and related) cases last year, the RSA case earlier this year, the Epsilon case a few weeks ago, and doubtless many others that we haven’t heard about. Targets of spear phishing might be particular people within an organization (e.g., executives, or people on a particular project).

In this posting, I’m going to connect this attack to Internet voting (i-voting), by which I mean casting a ballot from the comfort of your home using your personal computer (i.e., not a dedicated machine in a precinct or government office). My contention is that in addition to all the other risks of i-voting, one of the problems is that people will click links targeted at them by political parties, and will try to cast their vote on fake web sites. The scenario is that operatives of the Orange party send messages to voters who belong to the Purple party claiming to be from the Purple party’s candidate for president and giving a link to a look-alike web site for i-voting, encouraging voters to cast their votes early. The goal of the Orange party is to either prevent Purple voters from voting at all, or to convince them that their vote has been cast and then use their credentials (i.e., username and password) to have software cast their vote for Orange candidates, without the voter ever knowing.
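
The mechanics of that scenario rest on a simple trick: the email displays one URL while the underlying link points somewhere else. Here’s a minimal sketch of the mismatch check a mail client or security tool might apply; the “official” domain and the attacker’s look-alike domain are both invented for illustration.

```python
# Detect a link whose visible text names the official i-voting host but
# whose real destination is a different host. "ivoting.example.gov" is a
# hypothetical official domain, not a real one.
from urllib.parse import urlparse

OFFICIAL_HOST = "ivoting.example.gov"

def is_suspicious(displayed_text, actual_href):
    real_host = urlparse(actual_href).hostname or ""
    return OFFICIAL_HOST in displayed_text and real_host != OFFICIAL_HOST

# The email shows the official address, but the link goes to a look-alike:
print(is_suspicious("https://ivoting.example.gov/vote",
                    "https://ivoting-example-gov.attacker.example/vote"))  # True
```

Of course, the voters most at risk are exactly the ones who will never run such a check – which is the point of the scenario above.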

The percentage of users who fall prey to targeted attacks has been a subject of some controversy. While the percentage of users who click on spam emails has fallen significantly over the years as more people are aware of them (and as spam filtering has improved and mail programs have improved to no longer fetch images by default), spear phishing attacks have been assumed to be more effective. The result from Oak Ridge is one of the most significant pieces of hard data in that regard.

According to an article in The Register, of the 530 Oak Ridge employees who received the spear phishing email, 57 fell for the attack by clicking on a link (which silently installed software on their computers by exploiting a security vulnerability in Internet Explorer that was patched earlier this week – but presumably the patch hadn’t yet been installed on their computers). That’s a click rate of about 11%. Oak Ridge employees are likely to be well-educated scientists (but not necessarily computer scientists) – and hence not representative of the population as a whole. The fact that this was a spear phishing attack means that it was probably targeted at people with access to sensitive information, whether administrative staff, senior scientists, or executives (but probably not the person running the cafeteria, for example). Whether the level of education and access to sensitive information makes them more or less likely to click on links is something for social scientists to assess – I’m going to take the 11% as a data point and assume a range of 5% to 20% of victims will click on a link in a spear phishing attack (i.e., that the Oak Ridge number is not off by more than a factor of two).

So as a working hypothesis based on this actual result, I propose that a spear phishing attack designed to draw voters to a fake web site to cast their votes will succeed with 5% to 20% of the targeted voters. With UOCAVA voters (military and overseas voters) representing around 5% of the electorate, impacting 0.25% to 1% of all votes is not an unreasonable target. Now if we presume that the race is close and half of those voters would have voted for the “preferred” candidate anyway, a spear phishing attack captures an additional 0.125% to 0.5% of the vote.

If i-voting were to become more widespread – for example, available to any absentee voter – then these numbers double, because absentee voters are typically 10% of all voters. And if i-voting becomes available to all voters, then we can expect that 5% to 20% of ALL votes could be captured this way. At that point, we might as well give up on elections and go to coin tossing.
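
The arithmetic is easy to check. Here’s a short script that reproduces the ranges above, using the 5–20% click rate derived from the Oak Ridge data and the population shares assumed in the text:

```python
# Back-of-the-envelope vote-impact calculation. The click rates come from
# the Oak Ridge data point; the population shares (5% UOCAVA, 10% absentee)
# are the assumptions stated in the text.
click_rates = (0.05, 0.20)

for label, share in [("UOCAVA", 0.05), ("absentee", 0.10), ("all voters", 1.00)]:
    lo, hi = (share * rate for rate in click_rates)
    # Half of the phished voters would have voted that way anyway, so the
    # net swing an attacker gains is half of the phished fraction.
    print(f"{label}: phished {lo:.2%}-{hi:.2%} of all votes, "
          f"net swing {lo / 2:.3%}-{hi / 2:.3%}")

# UOCAVA: phished 0.25%-1.00% of all votes, net swing 0.125%-0.500%
```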

Considering the vast sums spent on advertising to influence voters, even for the very limited UOCAVA population, spear phishing seems like a very worthwhile investment for a candidate in a close race.

Do photo IDs help prevent vote fraud?

In many states, an ID is required to vote. The ostensible purpose is to prevent people from casting a ballot for someone else – dead or alive. Historically, it was also used to prevent poor and minority voters, who are less likely to have government IDs, from voting.

No one would (publicly) admit to the second goal today, so the first is always the declared purpose. But does it work?

In my experience as a pollworker in Virginia, the answer is clearly “no”. There are two basic problems – the rules for acceptable IDs are so broad (so as to avoid disenfranchisement) as to be useless, and pollworkers are given no training as to how to verify an ID.

Let’s start with what Virginia law says. The Code of Virginia 24.2-643 reads in part:

An officer of election shall ask the voter for his full name and current residence address and repeat, in a voice audible to party and candidate representatives present, the full name and address stated by the voter. The officer shall ask the voter to present any one of the following forms of identification: his Commonwealth of Virginia voter registration card, his social security card, his valid Virginia driver’s license, or any other identification card issued by a government agency of the Commonwealth, one of its political subdivisions, or the United States; or any valid employee identification card containing a photograph of the voter and issued by an employer of the voter in the ordinary course of the employer’s business. If the voter’s name is found on the pollbook, if he presents one of the forms of identification listed above, if he is qualified to vote in the election, and if no objection is made, […]

Let’s go through these one at a time.

  • A voter registration card has no photo or signature, and little other identifying information, so there’s no way to validate it. Since voters don’t sign the pollbook in Virginia (as they do in some other states), there would be no signature to compare against even if the card had one. And since the voter card is just a piece of paper with no watermark, it’s easy to fabricate on a laser printer.
  • A Social Security Card (aside from the privacy issues of sharing the voter’s SSN with the pollworker) is usually marked “not for identification”. And it has no photo or address.
  • A Virginia driver’s license has enough information for identification (i.e., a photo and signature, as well as the voter’s address).
  • Other Virginia, locality, or Federal ID. Sounds good, but I have no clue what all the different possible IDs that fall into this category look like, so I have no idea as a pollworker how to tell whether they’re legitimate or not. (On the positive side, a passport is allowed by this clause – but it doesn’t have an address.)
  • Employee ID card. This is the real kicker. There are probably ten thousand employers in my county. Many of them don’t even follow a single standard for employee IDs (my own employer had several versions until earlier this year, when anyone with an old ID was “upgraded”). I don’t know the name of every employer, much less how to distinguish a valid ID from an invalid one. If the voter’s name and photo are on the card, along with some company name or logo, that’s probably good enough. Any address on the card is going to be of the employer, not the voter.

So if I want to commit fraud (a felony) and vote for someone else (living or dead), how hard is it? Simple: create a laminated ID with the company name “Bob’s Plumbing Supply” and the name of the voter to be impersonated, memorize the victim’s address, and that’s all it takes.
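
To see just how permissive the statute is, here’s a toy model of the acceptance rule a pollworker can actually apply. This is my own simplification for illustration, not the legal text; note that the employee-ID branch passes on nothing more than a name and a photo.

```python
# Toy model of the ID check under Va. Code 24.2-643 as a pollworker can
# realistically apply it. A simplification for illustration only.

def id_acceptable(id_type, has_voter_name=False, has_photo=False):
    if id_type in ("voter_registration_card", "social_security_card"):
        return True   # accepted by statute, but nothing on them to verify
    if id_type in ("va_drivers_license", "other_government_id"):
        return True   # pollworkers rarely know what a valid one looks like
    if id_type == "employee_id":
        # The kicker: any card with a name, a photo, and a company logo
        # passes, and the pollworker can't tell a real employer from a fake.
        return has_voter_name and has_photo
    return False

# The laminated "Bob's Plumbing Supply" card from the paragraph above:
print(id_acceptable("employee_id", has_voter_name=True, has_photo=True))  # True
```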

Virginia law also allows the voter who doesn’t have an ID with him/her to sign an affidavit that they are who they say they are. Falsifying the affidavit is a felony, but it really doesn’t matter if you’re already committing a felony by voting for someone else.

Now let’s say the laws were tightened to require a driver’s license, military ID, or a passport, and no others (and to eliminate the affidavit option). Then at least it would be possible to train pollworkers on what an ID looks like. But there are still two problems. First, the law says the voter must present the ID, but it never says what the pollworker must do with it. And second, pollworkers never receive any training in how to verify an ID – a bouncer at a bar gets more training in IDs than a pollworker safeguarding democracy. In Virginia, when renewing a driver’s license the person has the choice of continuing to use the previous picture or waiting in line a couple of hours at a DMV site to get a new one. Not surprisingly, most voters have old pictures. Mine is ten years old, and dates from when I had a full head of hair and a beard, both of which have long since disappeared. Will a pollworker be able to match the picture to the voter? Probably not – but since no one ever tries, that doesn’t matter. And passports are good for ten years, so the odds are that picture will be quite old too. I’m really bad at matching faces, so when I’m working as a pollworker I don’t even try.

There are some positive things about requiring an ID. Most voters present their driver’s license, frequently without even being asked. If the name is complex, or the voter has a heavy accent, or the room is particularly noisy, or the pollworker is hard of hearing (or not paying close attention), having the written name is a help. But that’s about it.

So what can we learn from this? Photo ID laws for voting, especially those that allow for company ID cards, are almost useless for preventing voting fraud. It’s the threat of felony prosecution, combined with the fact that the vast majority of voters are honest, that prevents vote fraud… not the requirement for a photo ID.