November 24, 2024

India's Electronic Voting Machines Have Security Problems

A team led by Hari Prasad, Alex Halderman, and Rop Gonggrijp today released a technical paper detailing serious security problems with the electronic voting machines (EVMs) used in India.

The independent Election Commission of India, which is generally well respected, has dealt poorly with previous questions about EVM security. The chair of the Election Commission has called the machines “infallible” and “perfect” and has rejected any suggestion that security improvements are even possible. I hope the new study will cause the EC to take a more realistic approach to EVM security.

The researchers got their hands on a real Indian EVM, which they were able to examine and analyze. They were unable to extract the software running on the machine (because that would have required rendering the machine unusable for elections, which they had agreed not to do), so their analysis focused on the hardware. They identified several attacks that manipulate the hardware, either by replacing components or by clamping something onto a chip on the motherboard to modify votes. They implemented demonstration attacks, actually building proof-of-concept substitute hardware and vote-manipulation devices.

Perhaps the most interesting aspect of India’s EVMs is how simple they are. Simplicity is a virtue in security as in engineering generally, and researchers (including me) who have studied US voting machines have advocated simplifying their design. India’s EVMs show that while simplicity is good, it’s not enough. Unless there is some way to audit or verify the votes, even a simple system is subject to manipulation.

If you’re interested in the details, please read the team’s paper.

The ball is now in the Election Commission’s court. Let’s hope that they take steps to address the EVM problems, to give the citizens of the world’s largest democracy the transparent and accurate elections they deserve.

Google Publishes Data on Government Data and Takedown Requests

Citizens have long wondered how often their governments ask online service providers for data about users, and how often governments ask providers to take down content. Today Google took a significant step on this issue, unveiling a site reporting numbers on a country-by-country basis.

It’s important to understand what is and isn’t included in the data on the Google site. First, according to Google, the data excludes child porn, which Google tries to block proactively, worldwide.

Second, the site reports requests made by government, not by private individuals. (Court orders arising from private lawsuits are included, because the court issuing the order is an arm of government.) Because private requests are excluded, the number of removal requests is lower than you might expect — presumably removal requests from governments are much less common than those from private parties such as copyright owners.

Third, Google is reporting the number of requests received, and not the number of users affected. A single request might affect many users; or several requests might focus on a single user. So we can’t use this data to estimate the number of citizens affected in any particular country.

Another caveat is that Google reports the country whose government submitted the request to Google, but this may not always be the government that originated the request. Under Mutual Legal Assistance Treaties, signatory countries agree, under some circumstances, to pass on law enforcement data requests on behalf of other signatories. This might account for some of the United States data requests; for example, other countries might have asked the U.S. government to make data requests to Google on their behalf. We would expect there to be some such proxy requests, but we can’t tell from the reported data how many there were. (It’s not clear whether Google could always distinguish these proxy requests from direct requests.)

With these caveats in mind, let’s look at the numbers. Notably, Brazil tops both the data-requests list and the takedown-requests list. The likely cause is the popularity of Orkut, Google’s social network product, in Brazil. India, where Orkut is also somewhat popular, appears relatively high on the list as well. Social networks often breed disputes about impersonation and defamation, which could lead a government to order release of information about who is using a particular account.

The U.S. ranks second on the data-requests list but is lower on the takedown-requests list. This is consistent with the current U.S. trend toward broader data gathering by law enforcement, along with the relatively strong protection of free speech in the U.S.

Finally, China is a big question mark. According to Google, the Chinese government claims that the relevant data is a state secret, so Google cannot release it. The Chinese government stands conspicuously alone in this respect, choosing to deny its citizens even this basic information about their government’s activities.

There’s a lot more information I’d like to see about government requests. How many citizens are affected? How many requests does Google comply with? What kinds of data do governments seek about Google users? And so on.

Despite its limitations, Google’s site is a valuable step toward transparency about governments’ attempts to observe and control their citizens’ online activities. I hope other companies will follow suit, and that Google will keep pushing on this issue.

Pseudonyms: The Natural State of Online Identity

I’ve been writing recently about the problems that arise when you try to use cryptography to verify who is at the other end of a network connection. The cryptographic math works, but that doesn’t mean you get the identity part right.

You might think, from this discussion, that crypto by itself does nothing — that cryptographic security can only be bootstrapped from some kind of real-world identity verification. That’s the way it works for website certificates, where a certificate authority has to check your bona fides before it will issue you a certificate.

But this intuition turns out to be wrong. There is one thing that crypto can do perfectly, without any real-world support: providing pseudonyms. Indeed, crypto is so good at supporting pseudonyms that we can practically say that pseudonyms are the natural state of identity online.

To explain why this is true, I need to offer a gentle introduction to a basic crypto operation: digital signatures. Suppose John Doe (“JD”) wants to use digital signatures. First, JD needs to create a private cryptographic key, which he does by generating some random numbers and combining them according to a special geeky recipe. The result is a unique private key that only JD knows. Next, JD uses a certain procedure to determine the public key that corresponds to his private key. He announces the public key to everyone. The math guarantees that (1) JD’s public key is unique and corresponds to JD’s private key, and (2) a person who knows JD’s public key can’t figure out JD’s private key.
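To make this concrete, here is a minimal sketch in Python. It uses the third-party cryptography package and Ed25519 signatures as one arbitrary, illustrative choice of scheme; nothing in the argument depends on that choice.

```python
# Key generation: a fresh key pair built from randomness alone, with no
# registration at any authority. The "cryptography" package and Ed25519
# are one concrete choice among many signature schemes.
from cryptography.hazmat.primitives.asymmetric import ed25519

private_key = ed25519.Ed25519PrivateKey.generate()  # known only to JD
public_key = private_key.public_key()               # announced to everyone
```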

Now JD can make digital signatures. If JD wants to “sign” a certain message M, he combines M with JD’s private key in a special way, and the result is JD’s “signature on M”. Now anybody can verify the signature, using JD’s public key. Only JD can make the signature, because only JD knows JD’s private key; but anybody can verify the signature.
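Continuing the sketch, signing and verifying are each a single call (still assuming the cryptography package from above):

```python
from cryptography.exceptions import InvalidSignature

message = b"a certain message M"

# Only the holder of the private key can produce this signature.
signature = private_key.sign(message)

# But anyone holding the public key can check it.
try:
    public_key.verify(signature, message)
    print("signature is valid")
except InvalidSignature:
    print("signature is forged, or the message was altered")
```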

At no point in this process does JD tell anybody who he is — I called him “John Doe” for a reason. Indeed, JD’s public key is a perfect pseudonym: it conveys nothing about JD’s actual identity, yet it has a distinct “owner” whose presence can be verified. (“You’re really the person who created this public key? Then you should be able to make a signature on the message ‘squeamish ossifrage’ for me….”)
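That parenthetical challenge can be acted out directly in the sketch: a verifier picks a fresh challenge string, and whoever claims to own the pseudonym signs it.

```python
import os

# The verifier issues a fresh, unpredictable challenge...
challenge = b"squeamish ossifrage " + os.urandom(16)

# ...the claimant signs it with the private key behind the pseudonym...
proof = private_key.sign(challenge)

# ...and the verifier checks the proof against the public key alone.
# No real-world identity enters the picture at any point.
public_key.verify(proof, challenge)  # raises InvalidSignature on failure
```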

Using this method, anybody can make up a fresh pseudonym whenever they want. If you can generate random numbers and do some math (or have your computer do those things for you), then you can make a fresh pseudonym. You can make as many as you want, without needing to coordinate with anybody. This is all easy to do.

These methods, pseudonyms and signatures, are used even in cases where we want to verify somebody’s real-world identity. When you connect to (say) https://mail.google.com, Google’s web server gives you its public key — a pseudonym — along with a digital certificate that attests that that public key — that pseudonym — belongs to Google Inc. Binding public keys — pseudonyms — to real-world identities is tedious and messy, but of course this is often necessary in practice.
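If you’re curious, you can watch this binding in action by pulling a real certificate with Python’s standard ssl module. (A rough sketch: it needs network access, and the exact fields returned will vary.)

```python
import socket
import ssl

# Fetch the certificate that mail.google.com presents. It binds the
# server's public key (its pseudonym) to a real-world name, as attested
# by a certificate authority.
ctx = ssl.create_default_context()
with socket.create_connection(("mail.google.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="mail.google.com") as tls:
        cert = tls.getpeercert()
        print("subject:", cert["subject"])  # who the key is said to belong to
        print("issuer: ", cert["issuer"])   # the CA doing the attesting
```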

Online, identities are hard to manage. Pseudonyms are easy.

Side-Channel Leaks in Web Applications

Popular online applications may leak your private data to a network eavesdropper, even if you’re using secure web connections, according to a new paper by Shuo Chen, Rui Wang, XiaoFeng Wang, and Kehuan Zhang. (Chen is at Microsoft Research; the others are at Indiana.) It’s a sobering result — yet another illustration of how much information can be leaked by ordinary web technologies. It’s also really clever.

Here’s the background: Secure web connections encrypt traffic so that only your browser and the web server you’re visiting can see the contents of your communication. A network eavesdropper can’t understand the requests your browser sends or the replies from the server. But it has long been known that the eavesdropper can see the size of the request and reply messages, and that these sizes sometimes leak information about which page you’re viewing, if the request size (i.e., the size of the URL) or the reply size (i.e., the size of the HTML page) is distinctive.

The new paper shows that this inference-from-size problem gets much, much worse when pages are using the now-standard AJAX programming methods, in which a web “page” is really a computer program that makes frequent requests to the server for information. With more requests to the server, there are many more opportunities for an eavesdropper to make inferences about what you’re doing — to the point that common applications leak a great deal of private information.

Consider a search engine that autocompletes search queries: when you start to type a query, the search engine gives you a list of suggested queries that start with whatever characters you have typed so far. When you type the first letter of your search query, the search engine page will send that character to the server, and the server will send back a list of suggested completions. Unfortunately, the size of that suggested completion list will depend on which character you typed, so an eavesdropper can use the size of the encrypted response to deduce which letter you typed. When you type the second letter of your query, another request will go to the server, and another encrypted reply will come back, which will again have a distinctive size, allowing the eavesdropper (who already knows the first character you typed) to deduce the second character; and so on. In the end the eavesdropper will know exactly which search query you typed. This attack worked against the Google, Yahoo, and Microsoft Bing search engines.
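A toy model shows how little the eavesdropper needs. The sizes below are invented for illustration; the point is only that a fixed character-to-size mapping can be probed ahead of time and then inverted.

```python
import string

# Hypothetical: each first character yields a suggestion list whose
# encrypted reply has a distinctive size (values are made up here).
reply_size = {c: 900 + 7 * i for i, c in enumerate(string.ascii_lowercase)}

# The eavesdropper probes the service in advance to build the same
# table, then inverts it: observed size -> typed character.
size_to_char = {size: c for c, size in reply_size.items()}

observed = reply_size["p"]     # all the wire reveals is this size
print(size_to_char[observed])  # prints 'p'; repeat for each keystroke
```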

Many web apps that handle sensitive information seem to be susceptible to similar attacks. The researchers studied a major online tax preparation site (which they don’t name) and found that it leaks a fairly accurate estimate of your Adjusted Gross Income (AGI). This happens because the exact set of questions you have to answer, and the exact data tables used in tax preparation, vary based on your AGI. To give one example, there is a particular interaction, relating to a possible student loan interest calculation, that happens only if your AGI is between $115,000 and $145,000 — so the presence or absence of the distinctively sized message exchange for that calculation tells an eavesdropper whether your AGI falls in that range. By assembling a set of clues like this, an eavesdropper can get a good fix on your AGI, plus information about your family status, and so on.
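In sketch form, the eavesdropper’s inference is just interval intersection. The first range below is the one from the paper’s example; the second is invented to show how clues combine.

```python
# Each AGI-dependent exchange the eavesdropper observes implies a range.
observed_ranges = [
    (115_000, 145_000),  # the student loan interest exchange was seen
    (100_000, 130_000),  # another AGI-dependent exchange (hypothetical)
]

# Intersect the ranges to narrow down the victim's AGI.
lo, hi = 0, float("inf")
for a, b in observed_ranges:
    lo, hi = max(lo, a), min(hi, b)
print(f"AGI is between ${lo:,} and ${hi:,}")  # between $115,000 and $130,000
```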

For similar reasons, a major online health site leaks information about which medications you are taking, and a major investment site leaks information about your investments.

The paper goes on to consider possible mitigations. The most obvious mitigation is to add padding to messages so that their sizes are not so distinctive — for example, every message might be padded to make its size a multiple of 256 bytes. This turns out to be less effective than you might expect — significant information can still leak even if messages are generously padded — and the padded messages are slower and more expensive to transmit.
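To see why padding falls short, note that it only coarsens sizes into buckets; messages that land in different buckets remain distinguishable. A tiny sketch:

```python
def padded_size(n: int, block: int = 256) -> int:
    """Size of an n-byte message after padding up to a multiple of block."""
    return n + (-n % block)

# Two replies only 30 bytes apart can still land in different buckets.
print(padded_size(1000), padded_size(1030))  # 1024 1280: still distinguishable
```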

We don’t know which sites the researchers studied, but it seems like a safe bet that most, if not all, of the sites in these product categories have similar problems. It’s important to keep these attacks in perspective — bear in mind that they can only be carried out by someone who can eavesdrop on the network between you and the site you’re visiting.

It’s becoming increasingly clear that securing web-based applications is very difficult, and that the basic tools for developing web apps don’t do much to help. The industry, and researchers, will be struggling with web app security issues for years to come.

Web Certification Fail: Bad Assumptions Lead to Bad Technology

It should be abundantly clear, from two recent posts here, that the current model for certifying the identity of web sites is deeply flawed. When you connect to a web site, and your browser displays an https URL and a happy lock or key icon indicating a secure connection, the odds that you’re connecting to an impostor site, despite your browser’s best efforts, are uncomfortably high.

How did this happen? The last two posts unpacked some of the detailed problems with the current system. Today I want to explore the root cause: today’s system is based on wildly unrealistic assumptions about organizations and trust.

The theory behind the system is simple. Browser vendors will identify a set of Certificate Authorities (CAs) who are trusted to certify identities. Browsers will automatically accept any identity certificate issued by any of the trusted CAs.

The first step in making this system work is identifying some CA who is trusted by everybody in the world.

If that last sentence didn’t strike you as odd, go back and read it again. That’s right, the system assumes that there is some party who is trusted by everyone in the world — a spectacularly naive assumption.

Network engineers like to joke about the “evil bit”, a hypothetical label put on each network packet, indicating whether the packet is evil. (See RFC 3514, Steve Bellovin’s classic parody standards document codifying the evil bit. I’ve always loved that the official Internet standards series accepts parody standards.) Well, the “trusted bit” for certificate authorities is pretty much the same as the evil bit, only applied to organizations rather than network packets. Yet somehow we ended up with a design that relies on this “trusted bit”.

The reason, in part, is unclear thinking about institutional trust, abetted by the unclear language we often use in discussing trust online. For example, we tend to conflate two meanings of the word “trusted”. The first meaning, the everyday one, implies a judgment that a party is unlikely to misbehave. The second meaning, more common in military security settings, is a factual statement that someone is vulnerable to misbehavior by another party. In an ideal world, we would make sure that someone was trusted in the first sense before they became trusted in the second sense; that is, we would make sure that a party was unlikely to misbehave before we made ourselves vulnerable to its misbehavior. This isn’t easy to do — and we will forget to do it entirely if we confuse the two meanings of “trusted”.

The second linguistic problem is the use of the passive-voice construction “A is trusted to do X” rather than the active form “B trusts A to do X.” The passive form is problematic because it doesn’t say who is doing the trusting. Consider these two statements: (A) “CNNIC is a trusted certificate authority.” (B) “Everyone trusts CNNIC to be a certificate authority.” The first statement might sound plausible, but the second is obviously false.

If you try to explain to yourself why the existing web certification system is sound, while avoiding the two errors above (confusing two senses of “trusted”, and failing to say who is doing the trusting), you’ll see pretty quickly that the argument for the current system is tenuous at best. You’ll see, too, that we can’t fix the system by using different cryptography — what we need are new institutional arrangements.