My Supplemental E-Voting Testimony

Today I submitted supplemental written testimony, adding to my previous testimony from last week’s e-voting hearing before the House Administration Committee, Subcommittee on Elections. Today’s supplemental testimony is short, so I’ll just include it here. (The formatted version is available too.)

Thank you for the opportunity to submit this supplemental written testimony.

Some people have suggested that it might be possible to use an electronic verification system instead of the voter-verified paper ballot required by H.R. 811. For example, the verification system might be an electronic recording device developed separately from the voting machine. Congressman Ehlers mentioned this possibility during the hearing.

The idea behind such proposals is to use redundancy as a safeguard against fraud or malfunction, in the hope that a failure in one system will be redeemed by the correct behavior of the other.

Redundancy works best when the redundant systems fail independently. If System A fails whenever System B fails, then using A and B redundantly provides no benefit at all. On the other hand, if A always works perfectly when B fails, then redundancy can eliminate error entirely. Neither of these extreme cases will hold in practice. Instead we expect to see some correlation between failures of A and failures of B. Our goal is to minimize this correlation.
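
To see the effect numerically, here is a quick simulation sketch. It is purely illustrative: the 1% marginal failure rate and the copy-with-probability model of correlation are assumptions chosen to make the arithmetic visible, not measurements of any real voting system.

```python
import random

def simulate(p_fail, correlation, trials=100_000):
    """Estimate how often BOTH redundant records fail.

    p_fail      -- marginal probability that each system fails
    correlation -- 0.0 means the systems fail independently;
                   1.0 means System B fails exactly when System A does
    """
    both_fail = 0
    for _ in range(trials):
        a_fails = random.random() < p_fail
        # With probability `correlation`, B simply mirrors A's outcome;
        # otherwise B fails independently at the same marginal rate.
        if random.random() < correlation:
            b_fails = a_fails
        else:
            b_fails = random.random() < p_fail
        if a_fails and b_fails:
            both_fail += 1
    return both_fail / trials

for corr in (0.0, 0.5, 1.0):
    rate = simulate(p_fail=0.01, correlation=corr)
    print(f"correlation={corr:.1f}: both records lost in {rate:.4%} of trials")
```

With independent failures the chance of losing both records is about 0.01%, a hundredfold improvement over either system alone; at full correlation the redundancy buys nothing. Minimizing the correlation is the whole game.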

One way to avoid correlated failures is to make the two systems as different as possible. Common sense says that similar systems will tend to fail in similar ways and at similar times – exactly the kind of correlated failures that we want to avoid. Experience bears this out, which is why we generally want redundant systems to be as diverse as possible.

The desire for diversity is a strong argument for keeping a paper record alongside the electronic record of a voter’s ballot. Paper-plus-electronic redundancy offers much better diversity than electronic-plus-electronic redundancy would. Indeed, if we analyze the failure modes of electronic and paper systems, we see that they tend to fail in very different ways. To give just one example, in a well-designed paper ballot system the main risk of tampering is after the election, whereas in a well-designed electronic ballot system the main risk of tampering is before the election. A well-designed electronic-plus-paper system can in principle be more resistant to tampering than any system that uses either electronics or paper alone, because the paper component can resist pre-election tampering and the electronic component can resist post-election tampering.

[Footnote: In a well-designed paper system, the main tampering risk is that somebody will access the ballot box after the election and replace the real paper ballots with fraudulent ones. In a well-designed electronic system, the main tampering risk is that somebody will modify the system's software before the election. Unfortunately, most if not all of today's electronic voting systems are not “well-designed” in this sense – they are at significant risk of post-election tampering because they fail to use (or they use improperly) the advanced cryptographic methods that could greatly reduce the risk of post-election tampering.]

Another reason to be suspicious of electronic-plus-electronic redundancy is that claims of redundancy are often made for systems that are not at all independent. For example, most vendors of today’s paperless DRE voting machines claim to keep redundant electronic records of each ballot. In fact, what most of them do is keep two copies, in identical or similar memory chips, located in the same computer and controlled by a single software program. This is clearly inadequate, because the two copies lack diversity and will tend to fail at the same time.

Even assuming that other electronic-plus-electronic redundant systems can be suitably reliable and secure, we would need to trust that the certification process could tell the difference between adequate redundancy and the kind of pseudo-redundancy discussed in the previous paragraph. The certification process has historically had trouble making such judgments. Though there is evidence that the process is improving – and H.R. 811 would improve it further – much improvement is still necessary.

Requiring a paper ballot, on the other hand, is a bright-line rule that is easier to enforce. A bright-line rule will also inspire voter confidence, because compliance will be obvious to every voter.

FreeConference Suit: Neutrality Fight or Regulatory Squabble?

Last week FreeConference, a company that offers “free” teleconferencing services, sued AT&T for blocking access by AT&T/Cingular customers to FreeConference’s services. FreeConference’s complaint says the blocking is anticompetitive and violates the Communications Act.

FreeConference’s service sets up conference calls that connect a group of callers. Users are given an ordinary long-distance phone number to call. When they call the assigned number, they are connected to their conference call. Users pay nothing beyond the cost of the ordinary long-distance call they’re making.

Last week, AT&T/Cingular started blocking access to FreeConference’s long-distance numbers from AT&T/Cingular mobile phones. Instead of getting connected to their conference calls, AT&T/Cingular users are getting an error message. AT&T/Cingular has reportedly admitted doing this.

At first glance, this looks like an unfair practice, with AT&T trying to shut down a cheaper competitor that is undercutting AT&T’s lucrative conference-call business. This is the kind of thing net neutrality advocates worry about – though strictly speaking this is happening on the phone network, not the Internet.

The full story is a bit more complicated, and it starts with FreeConference’s mysterious ability to provide conference calls for free. These days many companies provide free services, but they all have some way of generating revenue. FreeConference appears to generate revenue by exploiting the structure of telecom regulation.

When you make a long-distance call, you pay your long-distance provider for the call. The long-distance provider is required to pay connection fees to the local phone companies (or mobile companies) at both ends of the call, to offset the cost of connecting the call to the endpoints. This regulatory framework is a legacy of the AT&T breakup and was justified by the desire to have a competitive long-distance market coexist with local phone carriers that were near-monopolies.

FreeConference gets revenue from these connection fees. It has apparently cut a deal with a local phone carrier under which the carrier accepts calls for FreeConference, and FreeConference gets a cut of the carrier’s connection fees from those calls. If the connection fees are large enough – and apparently they are – this can be a win-win deal for FreeConference and the local carrier.

But of course somebody has to pay the fees. When an AT&T/Cingular customer calls FreeConference, AT&T/Cingular has to pay. They can pass on these fees to their customers, but this hardly seems fair. If I were an AT&T/Cingular customer, I wouldn’t be happy about paying more to subsidize the conference calls of other users.

To add another layer of complexity, it turns out that connection fees vary widely from place to place, ranging roughly from one cent to seven cents per minute. FreeConference, predictably, has allied itself with a local carrier that gets a high connection fee. By routing its calls to this local carrier, FreeConference is able to extract more revenue from AT&T/Cingular.
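
To put rough numbers on this, here is a back-of-the-envelope sketch. The one-to-seven-cent fee range comes from the paragraph above; the 50/50 revenue split and the ten-person, hour-long call are my own illustrative guesses, since the actual terms of FreeConference’s deal are not public.

```python
# Back-of-the-envelope economics of the FreeConference model.
# The fee range is from the post; the split and call size are assumptions.

LOW_FEE = 0.01    # dollars per minute, low-end connection fee
HIGH_FEE = 0.07   # dollars per minute, high-end connection fee
SPLIT = 0.50      # assumed share of the fee passed to FreeConference

callers, minutes = 10, 60           # one ten-person, hour-long call
caller_minutes = callers * minutes  # each caller's leg earns the fee

for fee in (LOW_FEE, HIGH_FEE):
    carrier_take = fee * caller_minutes
    fc_take = carrier_take * SPLIT
    print(f"at ${fee:.2f}/min: carrier collects ${carrier_take:.2f}, "
          f"FreeConference's cut is ${fc_take:.2f}")
```

Under these assumptions a single hour-long call is worth $3 to FreeConference at the low-end fee and $21 at the high-end fee, which makes the choice of a high-fee carrier easy to understand.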

For me, this story illustrates everything that is frustrating about telecom. We start with intricately structured regulation, leading companies to adopt business models shaped by regulation rather than the needs of customers. The result is bewildering to consumers, who end up not knowing which services will work, or having to pay higher prices for mysterious reasons. This leads to a techno-legal battle between companies that would, in an ideal world, be spending their time and effort developing better, cheaper products. And ultimately we end up in court, or creating more regulation.

We know a better end state is possible. But how do we get there from here?

[Clarification (2:20 PM): Added the "To add another layer ..." paragraph. Thanks to Nathan Williams for pointing out my initial failure to mention the variation in connection fees.]

Judge Strikes Down COPA

Last week a Federal judge struck down COPA, a law requiring adult websites to use age verification technology. The ruling by Senior Judge Lowell A. Reed Jr. held COPA unconstitutional because it is more restrictive of speech (but no more effective) than the alternative of allowing private parties to use filtering software.

This is the end of a long legal process that started with the passage of COPA in 1998. The ACLU, along with various authors and publishers, immediately filed suit challenging COPA, and Judge Reed struck down the law. The case was appealed up to the Supreme Court, which generally supported Judge Reed’s ruling but remanded the case back to him for further proceedings because enough time had passed that the technological facts might have changed. Judge Reed held another trial last fall, at which I testified. Now he has ruled, again, that COPA is unconstitutional.

The policy issue behind COPA is how to keep kids from seeing harmful-to-minors (HTM) material. Some speech is legally obscene, which means it is so icky that it does not qualify for First Amendment free speech protection. HTM material is not obscene – adults have a legally protected right to read it – but is icky enough that kids don’t have a right to see it. In other words, there is a First Amendment right to transmit HTM material to adults but not to kids.

Congress has tried more than once to pass laws keeping kids away from HTM material online. The first attempt, the Communications Decency Act (CDA), was struck down by the Supreme Court in 1997. When Congress responded by passing COPA in 1998, it used the Court’s CDA ruling as a roadmap in writing the new law, in the hope that doing so would make COPA consistent with free speech.

Unlike the previous CDA ruling, Judge Reed’s new COPA ruling doesn’t seem to give Congress a roadmap for creating a new statute that would pass constitutional muster. COPA required sites publishing HTM material to use age screening technology to try to keep kids out. The judge compared COPA’s approach to an alternative in which individual computer owners had the option of using content filtering software. He found that COPA’s approach was more restrictive of protected speech and less effective in keeping kids away from HTM material. That was enough to make COPA, as a content-based restriction on speech, unconstitutional.

Two things make the judge’s ruling relatively roadmap-free. First, it is based heavily on factual findings that Congress cannot change – things like the relative effectiveness of filtering and the amount of HTM material that originates overseas beyond the effective reach of U.S. law. (Filtering operates on all material, while COPA’s requirements could have been ignored by many overseas sites.) Second, the alternative it offers requires only voluntary private action, not legislation.

Congress has already passed laws requiring schools and libraries to use content filters, as a condition of getting Federal funding and with certain safeguards that are supposed to protect adult access. The courts have upheld such laws. It’s not clear what more Congress can do. Judge Reed’s filtering alternative is less restrictive because it is voluntary, so that computers that aren’t used by kids, or on which parents have other ways of protecting kids against HTM material, can get unfiltered access. An adult who wants to get HTM material will be able to get it.

Doubtless Congress will make noise about this issue in the upcoming election year. Protecting kids from the nasty Internet is too attractive politically to pass up. Expect hearings to be held and bills to be introduced; but the odds that we’ll get a new law that makes much difference seem pretty low.

Testifying at E-Voting Hearing

I’m testifying about the Holt e-voting bill this morning, at a hearing of the U.S. House of Representatives, Committee on House Administration, Subcommittee on Elections. I haven’t found a webcast URL, but you can read my written testimony.

OLPC: Too Much Innovation?

The One Laptop Per Child (OLPC) project is rightly getting lots of attention in the tech world. The idea – putting serious computing and communication technologies into the hands of kids all over the world – could be transformative, if it works.

Recently our security reading group at Princeton studied BitFrost, the security architecture for OLPC. After the discussion I couldn’t help thinking that BitFrost seemed too innovative.

“Too innovative?” you ask. What’s wrong with innovation? Let me explain. Though tech pundits often praise “innovation” in the abstract, the fact is that most would-be innovations fail. In engineering, most new ideas either don’t work or aren’t really an improvement over the status quo. Sometimes the same “new” idea pops up over and over, reinvented each time by someone who doesn’t know about the idea’s past failures.

In the long run, failures are weeded out and the few successes catch on, so the world gets better. But in the short run most innovations fail, which makes the urge to innovate dangerous.

Fred Brooks, in his groundbreaking The Mythical Man-Month, referred to the second-system effect:

An architect’s first work is apt to be spare and clean. He knows he doesn’t know what he’s doing, so he does it carefully and with great restraint.

As he designs the first work, frill after frill and embellishment after embellishment occur to him. These get stored away to be used “next time.” Sooner or later the first system is finished, and the architect, with firm confidence and a demonstrated mastery of that class of systems, is ready to build a second system.

This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable.

The general tendency is to over-design the second system, using all the ideas and frills that were cautiously sidetracked on the first one. The result, as Ovid says, is a “big pile.”

The danger, in the second system, is the desire to reinvent everything, to replace the flawed but serviceable approaches of the past. The third-system designer, having learned his (or her – things have changed since Brooks wrote) lesson, saves experimentation for the lab and innovates in a product only where innovation is necessary.

But here’s the OLPC security specification (lines 115-118):

What makes the OLPC XO laptops radically different is that they represent the first time that all these security measures have been carefully put together on a system slated to be introduced to tens or hundreds of millions of users.

OLPC needs to be innovative in some areas, but I don’t think security is one of them. Sure, it would be nice to have a better security model, but until we know that model is workable in practice, it seems risky to try it out on millions of kids.

Viacom, YouTube, and Privacy

Yesterday’s top tech policy story was the copyright lawsuit filed by Viacom, the parent company of Comedy Central, MTV, and Paramount Pictures, against YouTube and its owner Google. Viacom’s complaint accuses YouTube of direct, contributory, and vicarious copyright infringement, and of inducing infringement. The complaint tries to paint YouTube as a descendant of Napster and Grokster.

Viacom argues generally that YouTube should have done more to help it detect and stop infringement. Interestingly, Viacom points to the privacy features of YouTube as part of the problem, in paragraph 43 of the complaint:

In addition, YouTube is deliberately interfering with copyright owners’ ability to find infringing videos even after they are added to YouTube’s library. YouTube offers a feature that allows users to designate “friends” who are the only persons allowed to see videos they upload, preventing copyright owners from finding infringing videos with this limitation…. Thus, Plaintiffs cannot necessarily find all infringing videos to protect their rights through searching, even though that is the only avenue YouTube makes available to copyright owners. Moreover, YouTube still makes the hidden infringing videos available for viewing through YouTube features like the embed, share, and friends functions. For example, many users are sharing full-length copies of copyrighted works and stating plainly in the description “Add me as a friend to watch.”

Users have many good reasons to want to limit access to noninfringing uploaded videos, for example to make home movies available to family members but not to the general public. It would be a shame, and YouTube would be much less useful, if there were no way to limit access. Equivalently, if any copyright owner could override the limits, there would be no privacy anymore – remember that we’re all copyright owners.

Is Viacom really arguing that YouTube shouldn’t let people limit access to uploaded material? Viacom doesn’t say this directly, though it is one plausible reading of their argument. Another reading is that they think YouTube should have an extra obligation to police and/or filter material that isn’t viewable by the public.

Either way, it’s troubling to see YouTube’s privacy features used to attack the site’s legality, when we know those features have plenty of uses other than hiding infringement. Will future entrepreneurs shy away from providing private communication, out of fear that it will be used to brand them as infringers? If the courts aren’t careful, that will be one effect of Viacom’s suit.

Protect E-Voting — Support H.R. 811

After a long fight, we have reached the point where a major e-voting reform bill has a chance to become U.S. law. I’m referring to H.R. 811, sponsored by my Congressman, Rush Holt, and co-sponsored by many others. After reading the bill carefully, and discussing with students and colleagues the arguments of its supporters and critics, I am convinced that it is a very good bill that deserves our support.

The bill’s main provisions would require e-voting technologies to produce a paper ballot that is (a) voter-verified, (b) privacy-preserving, and (c) durable. Paper ballots would be hand-recounted, and compared to the electronic count, at randomly selected precincts after every election.

The most important decision in writing such a bill is which technologies should be categorically banned. The bill would allow (properly designed) optical scan systems, touch-screen systems with a suitable paper trail, and all-paper systems. Paperless touchscreens and lever machines would be banned.

Some activists have argued that the bill doesn’t go far enough. A few say that all use of computers in voting should be banned. I think that’s a mistake, because it sacrifices the security benefits computers can provide, if they’re used well.

Others argue that touch-screen voting machines should be banned even if they have good paper trails. I think that goes too far. Touchscreens can be a useful part of a good voting system, if they’re used in the right context and with a good paper trail. We shouldn’t let the worst of today’s insecure paperless touchscreens – machines that should never have been certified in the first place, and anyway would be banned by the Holt Bill for lacking a suitable paper ballot – sour us on the better uses of touchscreens that are possible.

One of the best parts of the bill is its random audit requirement, which selects 3% of precincts (or more in close races) at which the paper ballots will be hand counted and compared to the electronic records. This serves two useful purposes: detecting error or fraud that might have affected the election result, and providing a routine quality-control check on the vote-counting process. This part of the bill reflects a balance between the states’ freedom to run their own elections and the national interest in sound election management.
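
Some quick arithmetic shows what a 3% audit buys. If a jurisdiction has N precincts, the audit samples n of them uniformly at random, and fraud has corrupted k of them, the chance the audit hits at least one corrupted precinct is 1 - C(N-k, n)/C(N, n). Here is a sketch; the jurisdiction size and fraud scales are hypothetical.

```python
from math import comb

def detection_probability(total, corrupted, audited):
    """Chance a uniform random audit samples at least one corrupted precinct."""
    return 1 - comb(total - corrupted, audited) / comb(total, audited)

N = 1000                  # precincts in a hypothetical jurisdiction
n = max(1, N * 3 // 100)  # the bill's 3% baseline audit -> 30 precincts

for k in (5, 20, 50):     # precincts an attacker would need to corrupt
    p = detection_probability(N, k, n)
    print(f"corrupting {k:2d} of {N} precincts: "
          f"a {n}-precinct audit catches it with probability {p:.0%}")
```

Under these assumptions, fraud broad enough to swing a comfortable margin is quite likely to be caught, while fraud confined to a handful of precincts often is not; that asymmetry is exactly why the bill escalates the audit rate in close races.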

On the whole this is a good, strong bill. I support it, and I urge you to support it too.

How I Became a Policy Wonk

It’s All-Request Friday, when I blog on topics suggested by readers. David Molnar writes,

I’d be interested to hear your thoughts on how your work has come to have significant interface with public policy questions. Was this a conscious decision, did it “just happen,” or somewhere in between? Is this the kind of work you thought you’d be doing when you first set out to do research? What would you do differently, if you could do it again, and what in retrospect were the really good decisions you made?

I’ll address most of this today, leaving the last sentence for another day.

When I started out in research, I had no idea public policy would become a focus of my work. The switch wasn’t so much a conscious decision as a gradual realization that events and curiosity had led me into a new area. This kind of thing happens all the time in research: we stumble around until we reach an interesting result and then, with the benefit of hindsight, we construct a just-so story explaining why that result was natural and inevitable. If the result is really good, then the just-so story is right, in a sense – it justifies the result and it explains how we would have gotten there if only we hadn’t been so clueless at the start.

My just-so story has me figuring out three things. (1) Policy is deep and interesting. (2) Policy affects me directly. (3) Policy and computer security are deeply connected.

Working on the Microsoft case first taught me that policy is deep and interesting. The case raised obvious public policy issues that required deep legal, economic, and technical thinking, and deep connections between the three, to figure out. As a primary technical advisor to the Department of Justice, I got to talk to top-notch lawyers and economists about these issues. What were the real-world consequences of Microsoft doing X? What would be the consequences if they were no longer allowed to do Y? Theories weren’t enough because concrete decisions had to be made (not by me, of course, but I saw more of the decision-making process than most people did). These debates opened a window for me, and I saw in a new way the complex flow from computer science in the lab to computer products in the market. I saw, too, how public policy modulates this flow.

The DMCA taught me that policy affects me directly. The first time I saw a draft of the DMCA, before it was even law, I knew it would mean trouble for researchers, and I joined a coalition of researchers who tried to get a research exemption inserted. The DMCA statute we got was not as bad as some of the drafts, but it was still problematic. As fate would have it, my own research triggered the first legal battle to protect research from DMCA overreaching. That was another formative experience.

The third realization, that policy and computer security are joined at the hip, can’t be tied to any one experience but dawned on me slowly. I used to tell people at cocktail parties, after I had said I work on computer security and they had asked what in the world that meant, that computer security is “the study of who can do what to whom online.” This would trigger either an interesting conversation or an abrupt change of topic. What I didn’t know until somebody pointed it out was that Lenin had postulated “who can do what to whom” (and the shorthand “who-whom”) as the key question to ask in politics. And Lenin, though a terrible role model, did know a thing or two about political power struggles.

More to the point, it seems that almost every computer security problem I work on has a policy angle, and almost every policy problem I work on has a computer security angle. Policy and security try, by different means, to control what people can do, to protect people from harmful acts and actors, and to ensure freedom of action where it is desired. Working on security makes my policy work better, and vice versa. Many of the computer scientists who are most involved in policy debates come from the security community. This is not an accident but reflects the deep connections between the two fields.

(Have another topic to suggest for All-Request Friday? Suggest it in the comments here.)

How Computers Can Make Voting More Secure

By now there is overwhelming evidence that today’s paperless computer-based voting technologies have such serious security and reliability problems that we should not be using them. Computers can’t do the job by themselves; but what role should they play in voting?

It’s tempting to eliminate computers entirely, returning to old-fashioned paper voting, but I think this is a mistake. Paper has an important role, as I’ll describe below, but paper systems are subject to well-known problems such as ballot-box stuffing and chain voting, as well as other user-interface and logistical challenges.

Security does require some role for paper. Each vote must be recorded in a manner that is directly verified by the voter. And the system must be software-independent, meaning that an undetected error or change in its software cannot cause an undetectable change in the election outcome. Today’s paperless e-voting systems satisfy neither requirement, and the only practical way to meet the requirements is to use paper.

The proper role for computers, then, is to backstop the paper system, to improve it. What we want is not a computerized voting system, but a computer-augmented one.

This mindset changes how we think about the role of computers. Instead of trying to make computers do everything, we will look instead for weaknesses and gaps in the paper system, and ask how computers can plug them.

There are two main ways computers can help. The first is in helping voters cast their votes. Computers can check for errors in ballots, for example by detecting an invalid ballot while the voter is still in a position to fix it. Computers can present the ballot in audio format for the blind or illiterate, or in multiple languages. (Of course, badly designed computer interfaces can do harm, so we have to be careful.) There must be a voter-verified paper record at the end of the vote-casting process, but computers, used correctly, can help voters create and validate that record, by acting as ballot-marking devices or as scanners to help voters spot mismarked ballots.
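
As a concrete illustration of the error-checking role, here is a toy sketch of the kind of validity check a ballot-marking device or precinct scanner might run while the voter can still fix a mistake. The contests and rules are invented for the example.

```python
# Toy validity check of the kind a ballot-marking device or precinct
# scanner could run. Contest definitions here are purely hypothetical.

CONTESTS = {
    "Governor": {"max_choices": 1,
                 "candidates": ["Adams", "Brown", "Chen"]},
    "School Board": {"max_choices": 2,
                     "candidates": ["Diaz", "Evans", "Ford", "Gupta"]},
}

def check_ballot(marks):
    """Return warnings the voter should see before the ballot is cast."""
    warnings = []
    for contest, rules in CONTESTS.items():
        chosen = marks.get(contest, [])
        stray = [c for c in chosen if c not in rules["candidates"]]
        if stray:
            warnings.append(f"{contest}: unreadable mark(s): {stray}")
        if len(chosen) > rules["max_choices"]:
            warnings.append(f"{contest}: overvote ({len(chosen)} marks, "
                            f"max {rules['max_choices']})")
        elif not chosen:
            warnings.append(f"{contest}: no selection (undervote)")
    return warnings

ballot = {"Governor": ["Adams", "Chen"], "School Board": ["Diaz"]}
for w in check_ballot(ballot):
    print("WARNING:", w)
```

A paper-only system catches none of this until the ballot is already in the box; a scanner in the polling place catches it while the voter is still standing there.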

The second way computers can help is by improving security. Usually the e-voting security debate is about how to keep computers from making security too much worse than it was before. Given the design of today’s e-voting systems, this is appropriate – just bringing these systems up to the level of security and reliability in (say) the Xbox and Wii game consoles would be nice. Even in a computer-augmented system, we’ll need to do a better job of vetting the computers’ design – if a job is worth doing with a computer, it’s worth doing correctly.

But once we adopt the mindset of augmenting a paper-based system, security looks less like a problem and more like an opportunity. We can look for the security weaknesses of paper-based systems, and ask how computers can help to address them. For example, paper-based systems are subject to ballot-box stuffing – how can computers reduce this risk?

Surprisingly, the designs of current e-voting technologies, even the ones with paper trails, don’t do all they can to compensate for the weaknesses of paper. For example, the current systems I’ve seen keep electronic records that are subject to straightforward post-election tampering. Researchers have studied approaches to this problem, but as far as I know none are used in practice.
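
To give the flavor of what researchers have proposed, here is a minimal sketch of one such approach: chaining the electronic records together with a cryptographic hash, so that altering any record after the fact breaks every later link. This illustrates the general idea only; it is not the design of any deployed system.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash marking the start of the chain

def append_record(chain, record):
    """Append a record, binding it to everything recorded before it."""
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps({"prev": prev, "record": record}, sort_keys=True)
    chain.append({"prev": prev, "record": record,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain):
    """Recompute every link; any after-the-fact edit breaks the chain."""
    prev = GENESIS
    for entry in chain:
        body = json.dumps({"prev": prev, "record": entry["record"]},
                          sort_keys=True)
        if (entry["prev"] != prev or
                entry["hash"] != hashlib.sha256(body.encode()).hexdigest()):
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"ballot": 1, "choice": "A"})
append_record(log, {"ballot": 2, "choice": "B"})
print(verify(log))                 # True
log[0]["record"]["choice"] = "B"   # simulate post-election tampering
print(verify(log))                 # False
```

By itself the chain only detects tampering if its final hash was published or witnessed before the attacker got access; committing to that hash at poll closing is the kind of step the research proposals add and, as far as I know, today’s products omit.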

In future posts, we’ll discuss design ideas for computer-augmented voting.

Fact check: The New Yorker versus Wikipedia

In July—when The New Yorker ran a long and relatively positive piece about Wikipedia—I argued that the old-media method of laboriously checking each fact was superior to the wiki model, where assertions have to be judged based on their plausibility. I claimed that personal experience as a journalist gave me special insight into such matters, and concluded: “the expensive, arguably old fashioned approach of The New Yorker and other magazines still delivers a level of quality I haven’t found, and do not expect to find, in the world of community-created content.”

Apparently, I was wrong. It turns out that EssJay, one of the Wikipedia users described in The New Yorker article, is not the “tenured professor of religion at a private university” that he claimed he was, and that The New Yorker reported him to be. He’s actually a 24-year-old, sans doctorate, named Ryan Jordan.

Jimmy Wales, who is as close to being in charge of Wikipedia as anybody is, has had an intricate progression of thought on the matter, ably chronicled by Seth Finkelstein. His ultimate reaction (or at any rate, his current public stance as of this writing) is on his personal page in Wikipedia:

I only learned this morning that EssJay used his false credentials in content disputes… I understood this to be primarily the matter of a pseudonymous identity (something very mild and completely understandable given the personal dangers possible on the Internet) and not a matter of violation of people’s trust.

As Seth points out, this is an odd reaction since it seems simultaneously to forgive EssJay for lying to The New Yorker (“something very mild”) and to hold him much more strongly to account for lying to other Wikipedia users. One could argue that lying to The New Yorker—and by extension to its hundreds of thousands of subscribers—was in the aggregate much worse than lying to the Wikipedians. One could also argue that Mr. Jordan’s appeal to institutional authority, which was as successful as it was dishonest, raises profound questions about the Wikipedia model.

But I won’t make either of those arguments. Instead, I’ll return to the issue that has me putting my foot in my mouth: How can a reader decide what to trust? I predicted you could trust The New Yorker, and as it turns out, you couldn’t.

Philip Tetlock, a long-time student of the human penchant for making predictions, has found (in a book whose text I can’t link to, but which I encourage you to read) that people whose predictions are falsified typically react by making excuses. They typically claim that they are off the hook because the conditions underlying their predictions were not what they seemed at the time. This defense is available to me: The New Yorker fell short of its own standards, and took EssJay at his word without verifying his identity or even learning his name. He had, as all con men do, a plausible-sounding story, related in this case to a putative fear of professional retribution that in hindsight sits rather uneasily with his claim that he had tenure. If the magazine hadn’t broken its own rules, this wouldn’t have gotten into print.

But that response would be too facile, as Tetlock rightly observes of the general case. Granted that perfect fact checking makes for a trustworthy story; how do you know when the fact checking is perfect and when it is not? You don’t. More generally, predictions are only as good as someone’s ability to figure out whether or not the conditions are right to trigger the predicted outcome.

So what about this case: On the one hand, incidents like this are rare and tend to lead the fact checkers to redouble their meticulousness. On the other, the fact claims in a story that are hardest to check are often, for the same reason, the likeliest to be false. Should you trust the sometimes-imperfect fact checking that actually goes on?

My answer is yes. In the wake of this episode The New Yorker looks very bad (and Wikipedia only moderately so) because people regard an error in The New Yorker to be exceptional in a way the exact same error in Wikipedia is not. This expectations gap tells me that The New Yorker, warts and all, still gives people something they cannot find at Wikipedia: a greater, though conspicuously not total, degree of confidence in what they read.