
It’s Time for India to Face its E-Voting Problem

The unjustified arrest of Indian e-voting researcher Hari Prasad, while an ordeal for Prasad and his family, and an embarrassment to the Indian authorities, has at least helped to focus attention on India’s risky electronic voting machines (EVMs).

Sadly, the Election Commission of India, which oversees the country’s elections, is still sticking to its position that the machines are “perfect” and “fully tamperproof”, despite evidence to the contrary, including convincing peer-reviewed research by Prasad and colleagues, not to mention the common-sense fact that no affordable electronic device can ever hope to be perfect or tamperproof. The Election Commission can no longer plausibly deny that EVM vulnerabilities exist. The time has come for India to have an honest, public conversation about how it votes.

The starting point for this discussion must be to recognize the vulnerabilities of EVMs. Like paper ballots, the ballots stored in an EVM are subject to tampering during and after the election, unless they are monitored carefully. But EVMs, unlike paper ballots, are also subject to tampering before the election, perhaps months or years in advance. Indeed, for many EVMs these pre-election vulnerabilities are the most serious problem.

So which voting system should India use? That’s a question for the nation to decide based on its own circumstances, but it appears there is no simple answer. The EVMs have problems, and old-fashioned paper ballots have their own problems. Despite noisy claims to the contrary from various sides, showing that one is imperfect does not prove that the other must be used. Most importantly, the debate must recognize that there are more than two approaches — for example, most U.S. jurisdictions are now moving to systems that combine paper and electronics, such as precinct-count optical scan systems in which the voter marks a paper ballot that is immediately read by an electronic scanner. Whether a similar system would work well for India remains an open question, but there are many options, including new approaches that haven’t been invented yet, and India will need to do some serious analysis to figure out what is best.
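
To make the hybrid idea concrete, here is a minimal sketch (the class and method names are invented for illustration, not taken from any real scanner): the machine keeps an electronic tally for fast reporting, but every marked paper ballot is retained, so a hand count of a sample can later be checked against the electronic totals.

```python
from collections import Counter
import random

class PrecinctScanner:
    """Toy model of precinct-count optical scan: an electronic tally
    for speed, plus retained paper ballots as an independent record."""

    def __init__(self):
        self.electronic_tally = Counter()   # what gets reported on election night
        self.paper_ballots = []             # the physical ballots, kept in the ballot box

    def scan(self, marked_choice):
        # The scanner reads the voter's mark and updates the running tally...
        self.electronic_tally[marked_choice] += 1
        # ...while the paper ballot itself is kept for any later recount or audit.
        self.paper_ballots.append(marked_choice)

    def hand_count_sample(self, sample_size):
        # A crude spot check: hand-count a random sample of the paper
        # ballots and compare its makeup to the electronic tally.
        return Counter(random.sample(self.paper_ballots, sample_size))
```

A real audit would choose the sample size and comparison rule statistically; the only point here is that the retained paper makes an independent check possible at all.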

To find the best voting system for India, the Election Commission will need all of the help it can get from India’s academic and technical communities. It will especially need help from people like Hari Prasad. Getting Prasad out of jail and back to work in his lab would not only serve justice — which should be reason enough to free him — but would also serve the voters of India, who deserve a better voting system than they have.

Electronic Voting Researcher Arrested Over Anonymous Source

Updates: 8/28 Alex Halderman: Indian E-Voting Researcher Freed After Seven Days in Police Custody
8/26 Alex Halderman: Indian E-Voting Researcher Remains in Police Custody
8/24 Ed Felten: It’s Time for India to Face its E-Voting Problem
8/22 Rop Gonggrijp: Hari is in jail 🙁

About four months ago, Ed Felten blogged about a research paper in which Hari Prasad, Rop Gonggrijp, and I detailed serious security flaws in India’s electronic voting machines. Indian election authorities have repeatedly claimed that the machines are “tamperproof,” but we demonstrated important vulnerabilities by studying a machine provided by an anonymous source.

The story took a disturbing turn a little over 24 hours ago, when my coauthor Hari Prasad was arrested by Indian authorities demanding to know the identity of that source.

At 5:30 Saturday morning, about ten police officers arrived at Hari’s home in Hyderabad. They questioned him about where he got the machine we studied, and at around 8 a.m. they placed him under arrest and proceeded to drive him to Mumbai, a 14-hour journey.

The police did not state a specific charge at the time of the arrest, but it appears to be a politically motivated attempt to uncover our anonymous source. The arresting officers told Hari that they were under “pressure [from] the top,” and that he would be left alone if he would reveal the source’s identity.

Hari was allowed to use his cell phone for a time, and I spoke with him as he was being driven by the police to Mumbai.

The Backstory

India uses paperless electronic voting machines nationwide, and the Election Commission of India, the country’s highest election authority, has often stated that the machines are “perfect” and “fully tamper-proof.” Despite widespread reports of election irregularities and suspicions of electronic fraud, the Election Commission has never permitted security researchers to complete an independent evaluation nor allowed the public to learn crucial technical details of the machines’ inner workings. Hari and others in India repeatedly offered to collaborate with the Election Commission to better understand the security of the machines, but they were not permitted to complete a serious review.

Then, in February of this year, an anonymous source approached Hari and offered a machine for him to study. This source requested anonymity, and we have honored this request. We have every reason to believe that the source had lawful access to the machine and made it available for scientific study as a matter of conscience, out of concern over potential security problems.

Later in February, Rop Gonggrijp and I joined Hari in Hyderabad and conducted a detailed security review of the machine. We discovered that, far from being tamperproof, it suffers from a number of weaknesses. There are many ways that dishonest election insiders or other criminals with physical access could tamper with the machines to change election results. We illustrated two ways that this could happen by constructing working demonstration attacks and detailed these findings in a research paper, Security Analysis of India’s Electronic Voting Machines. The paper recently completed peer review and will appear at the ACM Computer and Communications Security conference in October.

Our work has produced a hot debate in India. Many commentators have called for the machines to be scrapped, and 16 political parties representing almost half of the Indian parliament have expressed serious concerns about the use of electronic voting.

Earlier this month at EVT/WOTE, the leading international workshop for electronic voting research, two representatives from the Election Commission of India joined in a panel discussion with Narasimha Rao, a prominent Indian electronic voting critic, and me. (I will blog more about the panel in coming days.) After listening to the two sides argue over the security of India’s voting machines, 28 leading experts in attendance signed a letter to the Election Commission stating that “India’s [electronic voting machines] do not today provide security, verifiability, or transparency adequate for confidence in election results.”

Nevertheless, the Election Commission continues to deny that there is a security problem. Just a few days ago, Chief Election Commissioner S.Y. Quraishi told reporters that the machines “are practically totally tamper proof.”

Effects of the Arrest

This brings us to today’s arrest. Hari is spending Saturday night in a jail cell, and he told me he expects to be interrogated by the authorities in the morning. Hari has retained a lawyer, who will be flying to Mumbai in the next few hours and who hopes to be able to obtain bail within days. Hari seemed composed when I spoke to him, but he expressed great concern for his wife and children, as well as for the effect his arrest might have on other researchers who might consider studying electronic voting in India.

If any good has come from this, it’s that there has been an outpouring of support for Hari. He has received positive messages from people all over India.

Unfortunately, the entire issue distracts from the primary problem: India’s electronic voting machines have fundamental security flaws, and do not provide the transparency necessary for voters to have confidence in elections. To fix these problems, the Election Commission will need help from India’s technical community. Arresting and interrogating a key member of that community is enormously counterproductive.


Professor J. Alex Halderman is a computer scientist at the University of Michigan.

The Future of DRE Voting Machines

Last week at the EVT/WOTE workshop, Ari Feldman and I unveiled a new research project that we feel represents the future of DRE voting machines. DRE (direct-recording electronic) voting machines are ones where voters cast their ballots by pressing buttons or using a touch screen, and the primary record of the votes is stored in a computer memory. Numerous scientific studies have demonstrated that such machines can be reprogrammed to steal votes, so when we got our hands on a DRE called the Sequoia AVC Edge, we decided to do something different: we reprogrammed it to run Pac-Man.

As more states move away from using insecure DREs, there’s a risk that thousands of these machines will clog our landfills. Fortunately, our results show that they can be productively repurposed. We believe that in the not-so-distant future, recycled DREs will provide countless hours of entertainment in the basements of the nation’s nerds.

To see how we did it, visit our Pac-Man on the AVC Edge voting machine site.

India’s Electronic Voting Machines Have Security Problems

A team led by Hari Prasad, Alex Halderman, and Rop Gonggrijp today released a technical paper detailing serious security problems with the electronic voting machines (EVMs) used in India.

The independent Election Commission of India, which is generally well respected, has dealt poorly with previous questions about EVM security. The chair of the Election Commission has called the machines “infallible” and “perfect” and has rejected any suggestion that security improvements are even possible. I hope the new study will cause the EC to take a more realistic approach to EVM security.

The researchers got their hands on a real Indian EVM, which they were able to examine and analyze. They were unable to extract the software running in the machine (because that would have required rendering the machine unusable for elections, which they had agreed not to do), so their analysis focused on the hardware. They were able to identify several attacks that manipulated the hardware, either by replacing components or by clamping something onto a chip on the motherboard to modify votes. They implemented demonstration attacks, actually building proof-of-concept substitute hardware and vote-manipulation devices.
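
To see why attacks of this shape are so hard to detect from the outside, consider the following deliberately simplified sketch. It is not the team’s attack code and does not reflect the EVM’s real data layout; it only illustrates that a dishonest component can shift votes between candidates while leaving the total unchanged, so turnout figures still look consistent.

```python
def shift_votes(tally, favored, victim, fraction=0.1):
    """Toy illustration of result manipulation: move a fraction of the
    victim's votes to the favored candidate. The total is unchanged, so
    anyone who can only see the reported totals has nothing to compare
    them against."""
    moved = int(tally[victim] * fraction)
    tally[victim] -= moved
    tally[favored] += moved
    return tally

honest = {"candidate_A": 412, "candidate_B": 387, "candidate_C": 96}
rigged = shift_votes(dict(honest), favored="candidate_A", victim="candidate_B")

assert sum(honest.values()) == sum(rigged.values())   # turnout is preserved
# rigged == {'candidate_A': 450, 'candidate_B': 349, 'candidate_C': 96}
```

Without an independent record of each vote, there is nothing for candidates, observers, or election officials to check these numbers against.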

Perhaps the most interesting aspect of India’s EVMs is how simple they are. Simplicity is a virtue in security as in engineering generally, and researchers (including me) who have studied US voting machines have advocated simplifying their design. India’s EVMs show that while simplicity is good, it’s not enough. Unless there is some way to audit or verify the votes, even a simple system is subject to manipulation.

If you’re interested in the details, please read the team’s paper.

The ball is now in the Election Commission’s court. Let’s hope that they take steps to address the EVM problems, to give the citizens of the world’s largest democracy the transparent and accurate elections they deserve.

Software in dangerous places

Software increasingly manages the world around us, in subtle ways that are often hard to see. Software helps fly our airplanes (in some cases, particularly military fighter aircraft, software is the only thing keeping them in the air). Software manages our cars (fuel/air mixture, among other things). Software manages our electrical grid. And, closer to home for me, software runs our voting machines and manages our elections.

Sunday’s NY Times Magazine has an extended piece about faulty radiation delivery for cancer treatment. The article details two particular fault modes: procedural screwups and software bugs.

The procedural screwups (e.g., treating a patient with stomach cancer with a radiation plan intended for somebody else’s breast cancer) are heartbreaking because they’re something that could be completely eliminated through fairly simple mechanisms. How about putting barcodes on patient armbands that are read by the radiation machine? “Oops, you’re patient #103 and this radiation plan is loaded for patient #319.”
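
A sketch of the kind of interlock that would catch this class of mistake (the function name and error type here are hypothetical, not drawn from any actual device): the machine simply refuses to deliver a plan unless the patient ID scanned from the armband matches the ID recorded in the loaded treatment plan.

```python
class PatientMismatchError(Exception):
    """Raised when the armband and the loaded plan disagree about the patient."""

def verify_patient(scanned_armband_id: str, loaded_plan_patient_id: str) -> None:
    # Refuse to proceed unless the armband barcode and the treatment
    # plan agree on who is actually on the table.
    if scanned_armband_id != loaded_plan_patient_id:
        raise PatientMismatchError(
            f"Armband says patient {scanned_armband_id}, but the loaded "
            f"plan is for patient {loaded_plan_patient_id}."
        )

# verify_patient("103", "319") raises before any dose is delivered.
```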

The software bugs are another matter entirely. Supposedly, medical device manufacturers and software correctness people have all been thoroughly indoctrinated in the history of Therac-25, a radiation machine from the mid-1980s whose poor software engineering (and user interface design) directly led to several deaths. This article seems to indicate that those lessons were never properly absorbed.

What’s perhaps even more disturbing is that nobody seems to have been deeply bothered when the radiation planning software crashed on them! Did it save their work? Maybe you should double-check? Ultimately, the radiation machine just does what it’s told, and the software that plans out the precise dosing pattern is responsible for getting it right. Well, if that software is unreliable (which the article clearly indicates), you shouldn’t use it again until it’s fixed!

What I’d like to know more about, and what the article didn’t discuss at all, is what engineering processes, third-party review processes, and certification processes were used. If there’s anything we’ve learned about voting systems, it’s that the federal and state certification processes were not up to the task of identifying security vulnerabilities, and that the vendors had demonstrably never intended their software to resist the sorts of attacks that you would expect on an election system. Instead, we’re told that we can rely on poll workers following procedures correctly. Which, of course, is exactly what the article indicates is standard practice for these medical devices. We’re relying on the device operators to do the right thing, even when the software is crashing on them, and that’s clearly inappropriate.

Writing “correct” software, and further ensuring that it’s usable, is a daunting problem. In the voting case, we can at least come up with procedures based on auditing paper ballots, or using various cryptographic techniques, that allow us to detect and correct flaws in the software (getting such procedures adopted is a daunting problem in its own right, but that’s a story for another day). In the aviation case, which I admit to not knowing much about, I do know they put in sanity-checking software that will detect when the more detailed algorithms are asking for something insane and will override it. For medical devices like radiation machines, we clearly need a similar combination of mechanisms, both to ensure that operators don’t make avoidable mistakes, and to ensure that the software they’re using is engineered properly.
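
As a minimal sketch of what such a sanity-checking layer might look like for a radiation machine (the limits and names are invented for illustration; real dose constraints are far more involved): an independent, deliberately simple module compares each requested dose against hard limits and refuses to deliver anything outside them, no matter what the planning software computed.

```python
from dataclasses import dataclass

@dataclass
class DoseLimits:
    max_dose_per_fraction_gy: float = 5.0   # hypothetical hard per-treatment ceiling
    max_total_dose_gy: float = 80.0         # hypothetical cumulative ceiling

def dose_request_is_sane(requested_gy: float, delivered_so_far_gy: float,
                         limits: DoseLimits) -> bool:
    """Independent envelope check: reject any request outside the hard limits,
    regardless of what the (possibly buggy) planning software asked for."""
    if requested_gy <= 0:
        return False
    if requested_gy > limits.max_dose_per_fraction_gy:
        return False
    if delivered_so_far_gy + requested_gy > limits.max_total_dose_gy:
        return False
    return True
```

Keeping this check small and separate from the planning program is exactly the Therac-25 lesson: the safety interlock should not depend on the correctness of the much larger and more complicated system it is guarding.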