November 26, 2008

On the future of voting technologies: simplicity vs. sophistication

Yesterday, I testified before a hearing of Colorado’s Election Reform Commission. I made a small plug, at the end of my testimony, for a future generation of electronic voting machines that would use crypto machinery for end-to-end / software independent verification. Normally, the politicos tend to ignore this and focus on the immediately actionable stuff (e.g., current-generation DREs are unacceptably insecure; optical-scan is the best thing presently on the market). Not this time. I got a bunch of questions asking me to explain how a crypto voting system can be verifiable, how you can prove that the machine is behaving properly, and so forth. Pretty amazing. What I realized, however, is that it’s really hard to explain crypto machinery to non-CS people. I did my best, but it was clear from conversations afterward that a few minutes of Q&A did little to give them any confidence that crypto voting machinery really works.

Another of the speakers, Neal McBurnett, was talking about doing variable sampling-rate audits (as a function of how close the tally is). Afterward, he lamented to me, privately, how hard it is to explain basic concepts like what it means for something to be “statistically significant.”
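To make the sampling idea concrete, here’s a toy model (my own illustration, not the procedure discussed at the hearing) of why the audit rate has to climb as the margin shrinks:

```python
# Toy model: assume equal-sized precincts, that a corrupted precinct can
# shift at most 40% of its votes, and that we want at least a 95% chance of
# catching one corrupted precinct if enough were corrupted to flip the race.

from math import ceil, comb


def precincts_needed_to_flip(num_precincts, margin_fraction, max_shift=0.4):
    # Moving one vote between the top two candidates changes the margin by
    # two, so erasing a margin of m requires shifting m/2 of all votes.
    b = ceil(num_precincts * (margin_fraction / 2) / max_shift)
    return min(max(b, 1), num_precincts)


def audit_sample_size(num_precincts, margin_fraction, risk=0.05):
    b = precincts_needed_to_flip(num_precincts, margin_fraction)
    for n in range(1, num_precincts + 1):
        # Probability that a uniform sample of n precincts misses all b bad ones.
        p_miss = comb(num_precincts - b, n) / comb(num_precincts, n)
        if p_miss <= risk:
            return n
    return num_precincts


if __name__ == "__main__":
    for margin in (0.10, 0.02, 0.005):
        n = audit_sample_size(400, margin)
        print(f"margin {margin:.1%}: audit about {n} of 400 precincts")
```

The punch line is in the output: a comfortable margin needs only a modest sample, while a razor-thin one forces you to audit most of the precincts.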

There’s a clear common theme here. How do we explain to the public the basic scientific theories that underlie the problems that voting systems face? My written testimony (reused from an earlier hearing in Texas) includes links to papers, and some people will follow up. Others won’t. My big question is whether we have a research challenge to invent progressively simpler systems that still have the right security properties, or whether we have an education challenge to explain that a certain amount of complexity is worthwhile for the good properties that can be achieved. (Uglier question: is it a desirable goal to weaken the security properties in return for greater simplicity? What security properties would you sacrifice?)

Certainly, with our own VoteBox system, which uses a variation on Benaloh’s voter-initiated ballot challenge mechanism, one of the big open questions is whether real voters, who just want to cast their votes and don’t care about the security mechanisms, will be tripped up by the extra question at the end that’s fundamental to the mechanism. We’re going to need to run human-subject tests against these aspects of the machine design, and if they fail in practice, it’s going to be a trip back to the drawing board.

[Sidebar: I’m co-teaching a class on elections with Bob Stein (a political scientist) and Mike Byrne (a psychologist). The students are a mix of Rice undergrads, most of whom aren’t computer scientists. I experimentally built a lecture that began by teaching just enough number theory to explain how ElGamal cryptography works and how it allows for homomorphic vote tallying. Then I described how VoteBox uses this mechanism, and wrapped up with an explanation of how to do Benaloh-style challenges. I left out a lot of details, like how you generate large prime numbers, or how you construct NIZK proofs, but I seemed to have the class along with me for the lecture. If I can sell the idea of end-to-end cryptographic mechanisms to undergraduate non-science students, then there may yet be some hope.]
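For readers who want to see the shape of that lecture in code, here’s a toy sketch of exponential ElGamal with homomorphic tallying and a Benaloh-style challenge. The parameters are deliberately tiny and completely insecure, and it omits the NIZK proofs, threshold decryption, and everything else a real system needs; it is an illustration of the ideas, not VoteBox’s actual implementation.

```python
# Toy exponential ElGamal with homomorphic tallying and a Benaloh-style
# challenge check. Insecure, illustration-only parameters.

import secrets

P = 1019   # toy safe prime: P = 2*Q + 1
Q = 509    # prime order of the subgroup we work in
G = 4      # generator of that order-Q subgroup

def keygen():
    x = secrets.randbelow(Q - 1) + 1           # secret key
    return x, pow(G, x, P)                     # (x, public key h = g^x)

def encrypt(vote, h):
    """Encrypt a 0/1 vote in the exponent: (g^r, g^vote * h^r)."""
    r = secrets.randbelow(Q - 1) + 1
    return (pow(G, r, P), (pow(G, vote, P) * pow(h, r, P)) % P), r

def tally(ciphertexts):
    """Multiplying ciphertexts componentwise encrypts the sum of the votes."""
    a, b = 1, 1
    for ca, cb in ciphertexts:
        a, b = (a * ca) % P, (b * cb) % P
    return a, b

def decrypt_sum(ct, x, max_votes):
    a, b = ct
    gm = (b * pow(pow(a, x, P), P - 2, P)) % P   # recover g^(sum of votes)
    for t in range(max_votes + 1):               # small tally: brute-force the log
        if pow(G, t, P) == gm:
            return t
    raise ValueError("tally out of range")

def challenge(ct, r, claimed_vote, h):
    """Benaloh check: the machine reveals (vote, r); anyone can re-encrypt
    deterministically and verify it matches the committed ciphertext."""
    a, b = ct
    return a == pow(G, r, P) and b == (pow(G, claimed_vote, P) * pow(h, r, P)) % P

if __name__ == "__main__":
    x, h = keygen()
    ciphertexts = []
    for vote in [1, 0, 1, 1, 0, 1]:
        ct, r = encrypt(vote, h)
        assert challenge(ct, r, vote, h)   # audit the machine on this ballot...
        ct, _ = encrypt(vote, h)           # ...then re-vote, since audited
        ciphertexts.append(ct)             #    ballots must not be cast
    print("homomorphic tally:", decrypt_sum(tally(ciphertexts), x, 6))  # -> 4
```

The key property the challenge demonstrates is that the machine must commit to a ciphertext before it knows whether it will be audited, so systematic cheating gets caught with high probability.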

Discerning Voter Intent in the Minnesota Recount

Minnesota election officials are hand-counting millions of ballots, as they perform a full recount in the ultra-close Senate race between Norm Coleman and Al Franken. Minnesota Public Radio offers a fascinating gallery of ballots that generated disputes about voter intent.

A good example is this one:

A scanning machine would see the Coleman and Franken bubbles both filled, and call this ballot an overvote. But this might be a Franken vote, if the voter filled in both slots by mistake, then wrote “No” next to Coleman’s name.

Other cases are more difficult, like this one:

Do we call this an overvote, because two bubbles are filled? Or do we give the vote to Coleman, because his bubble was filled in more completely?

Then there’s this ballot, which is destined to be famous if the recount descends into litigation:

[Insert your own joke here.]

This one raises yet another issue:

Here the problem is the fingerprint on the ballot. Election laws prohibit voters from putting distinguishing marks on their ballots, and marked ballots are declared invalid, for good reason: uniquely marked ballots can be identified later, allowing a criminal to pay the voter for voting “correctly” or punish him for voting “incorrectly”. Is the fingerprint here an identifying mark? And if so, how can you reject this ballot and accept the distinctive “Lizard People” ballot?

Many e-voting experts advocate optical-scan voting. The ballots above illustrate one argument against opscan: filling in the ballot is a free-form activity that can create ambiguous or identifiable ballots. This creates a problem in super-close elections, because ambiguous ballots may make it impossible to agree on who should have won the election.

Wearing my pure-scientist hat (which I still own, though it sometimes gets dusty), this is unsurprising: an election is a measurement process, and all measurement processes have built-in errors that can make the result uncertain. This is easily dealt with, by saying something like this: Candidate A won by 73 votes, plus or minus a 95% confidence interval of 281 votes. Or perhaps this: Candidate A won with 57% probability. Problem solved!
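For the curious, here’s a back-of-the-envelope sketch of where statements like that come from. The model and every number in it are invented for illustration: each ballot is misread independently with some small probability, and a misread nudges the margin by two votes in a random direction.

```python
# Invented numbers, crude model: each of N ballots is misread independently
# with probability p, and a misread shifts the margin by two votes in a
# random direction.

from math import sqrt
from statistics import NormalDist

def margin_estimate(total_ballots, reported_margin, p_misread, z=1.96):
    sigma = 2 * sqrt(total_ballots * p_misread)   # std. dev. of the margin error
    interval = (reported_margin - z * sigma, reported_margin + z * sigma)
    p_leader_really_won = NormalDist().cdf(reported_margin / sigma)
    return interval, p_leader_really_won

(lo, hi), p_win = margin_estimate(3_000_000, 200, 1e-3)
print(f"reported margin 200 votes; 95% interval [{lo:.0f}, {hi:.0f}]; "
      f"leader ahead with probability {p_win:.0%}")
```

With these made-up inputs the interval straddles zero even though the leader is probably ahead, which is exactly the awkward situation a close recount creates.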

In the real world, of course, we need to declare exactly one candidate to be the winner, and a lot can be at stake in the decision. If the evidence is truly ambiguous, somebody is going to end up feeling cheated, and the most we can hope for is a sense that the rules were properly followed in determining the outcome.

Still, we need to keep this in perspective. By all reports, the number of ambiguous ballots in Minnesota is minuscule compared to the total number cast. Let’s hope that, even if some individual ballots don’t speak clearly, the ballots taken collectively leave no doubt as to the winner.

The future of photography

Several interesting things are happening in the wild world of digital photography as it’s colliding with digital video. Most notably, the new Canon 5D Mark II (roughly $2700) can record 1080p video and the new Nikon D90 (roughly $1000) can record 720p video. At the higher end, Red just announced cameras, shipping next year, that can record video (as fast as 120 frames per second in some cases) at resolutions far beyond HD (for $12K, you can record video at a staggering 6000×4000 pixels). You can configure a Red camera as a still camera or as a video camera.

Recently, well-known photographer Vincent Laforet (perhaps best known for his aerial photographs, such as “Me and My Human”) got his hands on a pre-production Canon 5D Mark II and filmed a “mock commercial” called “Reverie”, which shows off what the camera can do, particularly its see-in-the-dark low-light abilities. If you read Laforet’s blog, you’ll see that he’s quite excited, not just about the technical aspects of the camera, but about what this means to him as a professional photographer. Suddenly, he can leverage all of the expensive lenses that he already owns and capture professional-quality video “for free.” This has all kinds of ramifications for what it means to cover an event.

For example, at professional sporting events, video rights are entirely separate from the “normal” still photography rights given to the press. Every pro photographer is now just as capable of capturing full-resolution video as the TV crew covering the event. Will still photographers be contractually banned from using the video features of their cameras? Laforet investigated while he was shooting the Beijing Olympics:

Given that all of these rumours were going around quite a bit in Beijing [prior to the announcement of the Nikon D90 or Canon 5D Mark II] – I sat down with two very influential people who will each be involved at the next two Olympic Games. Given that NBC paid more than $900 million to acquire the U.S. Broadcasting rights to this past summer games, how would they feel about a still photographer showing up with a camera that can shoot HD video?

I got the following answer from the person who will be involved with Vancouver which I’ll paraphrase: Still photographers will be allowed in the venues with whatever camera they chose, and shoot whatever they want – shooting video in it of itself, is not a problem. HOWEVER – if the video is EVER published – the lawsuits will inevitably be filed, and credentials revoked etc.

This to me seems like the reasonable thing to do – and the correct approach. But the person I spoke with who will be involved in the London 2012 Olympic Games had a different view, again I paraphrase: “Those cameras will have to be banned. Period. They will never be allowed into any Olympic venue” because the broadcasters would have a COW if they did. And while I think this is not the best approach – I think it might unfortunately be the most realistic. Do you really think that the TV producers and rights-owners will “trust” photographers not to broadcast anything they’ve paid so much for. Unlikely.

Let’s do a thought experiment. Red’s forthcoming “Scarlet FF35 Mysterium Monstro” will happily capture 6000×4000 pixels at 30 frames per second. If you multiply that out, assuming 8 bits per pixel (after modest compression), you’re left with the somewhat staggering data rate of 720MB/s (i.e., 2.6TB/hour). Assuming you’re recording that to the latest 1.5TB hard drives, that means you’re swapping media every 30 minutes (or you’re tethered to a RAID box of some sort). Sure, your camera now weighs more and you’re carrying around a bunch of hard drives (still lost in the noise relative to the weight that a sports photographer hauls around in those long telephoto lenses), but you manage to completely eliminate the “oops, I missed the shot” issue that dogs any photographer. Instead, the “shoot” button evolves into more of a bookmarking function. “Yeah, I think something interesting happened around here.” It’s easy to see photo editors getting excited by this. Assuming you’ve got access to multiple photographers operating from different angles, you can now capture multiple views of the same event at the same time. With all of that data, synchronized and registered, you could even do 3D reconstructions (made famous/infamous by the “bullet time” videos used in the Matrix films or the Gap’s Khaki Swing commercial). Does the local newspaper have the rights to do that to an NFL game or not?
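For the record, here’s that arithmetic spelled out. The 8 bits per pixel after compression and the 1.5TB drive are the assumptions from above; everything else follows from them, and works out to a media swap roughly every half hour.

```python
# The arithmetic behind the thought experiment, spelled out.

def recording_budget(width, height, fps, bytes_per_pixel=1.0, drive_bytes=1.5e12):
    rate = width * height * fps * bytes_per_pixel     # bytes per second
    return {
        "MB per second": rate / 1e6,
        "TB per hour": rate * 3600 / 1e12,
        "minutes per 1.5TB drive": drive_bytes / rate / 60,
    }

# 6000x4000 at 30 fps, 8 bits (1 byte) per pixel after modest compression
for label, value in recording_budget(6000, 4000, 30).items():
    print(f"{label}: {value:,.1f}")   # 720.0 MB/s, 2.6 TB/hour, ~35 minutes
```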

Of course, this sort of technology is going to trickle down to gear that mere mortals can afford. Rather than capturing every frame, maybe you now only keep a buffer of the last ten seconds or so, and when you press the “shoot” button, you get to capture the immediate past as well as the present. Assuming you’ve got a sensor that lets you change the exposure on the fly, you can now also imagine a camera capturing a rapid succession of images at different exposures. That means no more worrying about whether you over- or under-exposed your image. In fact, the camera could just glue all the images together into a high-dynamic-range (HDR) image, which sometimes yields fantastic results.
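The pre-capture idea is mostly a buffering problem. Here’s a sketch of the data structure with the actual frame capture stubbed out; no real camera API is being invoked here.

```python
# Rolling pre-capture buffer: keep the last N seconds of frames and snapshot
# the whole buffer when the shutter fires. Frame capture is stubbed out;
# this is a data-structure sketch, not a real camera API.

from collections import deque

class PreCaptureBuffer:
    def __init__(self, seconds=10, fps=30):
        self.frames = deque(maxlen=seconds * fps)   # old frames fall off the back

    def on_new_frame(self, frame):
        self.frames.append(frame)                   # called for every sensor frame

    def shutter_pressed(self):
        """Return everything buffered: the recent past plus the present."""
        return list(self.frames)

# Simulate ~33 seconds of frames, then "press the shutter".
buf = PreCaptureBuffer(seconds=10, fps=30)
for i in range(1000):
    buf.on_new_frame(f"frame-{i}")
shot = buf.shutter_pressed()
print(len(shot), "frames kept, starting at", shot[0])   # 300 frames, frame-700
```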

One would expect, in the cutthroat world of consumer electronics, that competition would bring features like this to market as fast as possible, although that’s far from a given. If you install third-party firmware on a Canon point-and-shoot, you get all kinds of functionality that the hardware can support but which Canon has chosen not to implement. Maybe Canon would rather you spend more money for more features, even if the cheaper hardware is perfectly capable. Maybe they just want to make common features easy to use and not overly clutter the UI. (Not that any camera vendors are doing particularly well on ease of use, but that’s a topic for another day.)

Freedom to Tinker readers will recognize some common themes here. Do I have the right to hack my own gear? How will new technology impact old business models? In the end, when industries collide, who wins? My fear is that the creative freelance photographer, like Laforet, is likely to get pushed out by the big corporate sponsor. Why allow individual freelancers to shoot a sports event when you can just spread professional video cameras all over the place and let newspapers buy stills from those video feeds? Laforet discussed these issues at length; his view is that “traditional” professional photography, as a career, is on its way out and the future is going to be very, very different. There will still be demand for the kind of creativity and skills that a good photographer can bring to the game, but the new rules of the game have yet to be written.

Total Election Awareness

Ed recently made a number of predictions about election day (“Election 2008: What Might Go Wrong”). In terms of long lines and voting machine problems, his predictions were pretty spot on.

On election day, I was one of a number of volunteers for the Election Protection Coalition at one of 25 call centers around the nation. Kim Zetter describes the OurVoteLive project, in which 100 non-profit organizations and ten thousand volunteers answered 86,000 calls through a 750-line call-center operation (“U.S. Elections — It Takes a Village”):

The Election Protection Coalition, a network of more than 100 legal, voting rights and civil liberties groups was the force behind the 1-866-OUR-VOTE hotline, which provided legal experts to answer nearly 87,000 calls that came in over 750 phone lines on Election Day and dispatched experts to address problems in the field as they arose.

Pam Smith of the Verified Voting Foundation made sure each call center had a voting technologist responsible for responding to voting machine reports and advising mobile legal volunteers how to respond on the ground. It was simply a massive operation. Matt Zimmerman and Tim Jones of the Electronic Frontier Foundation and their team get serious props as developers and designers of their Total Election Awareness (TEA) software behind OurVoteLive.

As Kim describes in the Wired article, the call data is all available in CSV, maps, tables, etc.: http://www.ourvotelive.org/. I just completed a preliminary qualitative analysis of the 1800 or so voting equipment incident reports: “A Preliminary Analysis of OVL Voting Equipment Reports”. Quite a bit of data in there with which to inform future efforts.
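If you want to poke at the data yourself, a tabulation as simple as the following is a reasonable first pass. The column names are placeholders; check them against the actual export.

```python
# Tabulate OurVoteLive incident reports from a CSV export. The file name and
# the "category" column are placeholders; the real export's schema may differ.

import csv
from collections import Counter

def count_by(path, column):
    with open(path, newline="", encoding="utf-8") as f:
        return Counter(row[column] for row in csv.DictReader(f))

for category, n in count_by("ovl_incidents.csv", "category").most_common(10):
    print(f"{n:5d}  {category}")
```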

How Fragile Is the Internet?

With Barack Obama’s election, we’re likely to see a revival of the network neutrality debate. Thus far the popular debate over the issue has produced more heat than light. On one side have been people who scoff at the very idea of network neutrality, arguing either that network neutrality is a myth or that we’d be better off without it. On the other are people who believe the open Internet is hanging on by its fingernails. These advocates believe that unless Congress passes new regulations quickly, major network providers will transform the Internet into a closed network where only their preferred content and applications are available.

One assumption that seems to be shared by both sides in the debate is that the Internet’s end-to-end architecture is fragile. At times, advocates on both sides of the debate seem to think that AT&T, Verizon, and Comcast have big levers in their network closets labeled “network neutrality” that they will set to “off” if Congress doesn’t stop them. In a new study for the Cato Institute, I argue that this assumption is unrealistic. The Internet has the open architecture it has for good technical reasons. The end-to-end principle is deeply embedded in the Internet’s architecture, and there’s no straightforward way to change it without breaking existing Internet applications.

One reason is technical. Advocates of regulation point to a technology called deep packet inspection as a major threat to the Internet’s open architecture. DPI allows network owners to look “inside” Internet packets, reconstructing the web page, email, or other information as it comes across the wire. This is an impressive technology, but it’s also important to remember its limitations. DPI is inherently reactive and brittle. It requires human engineers to precisely describe each type of traffic that is to be blocked. That means that as the Internet grows ever more complex, more and more effort would be required to keep DPI’s filters up to date. It also means that configuration problems will lead to the accidental blocking of unrelated traffic.
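To see why, consider what a signature-based classifier looks like at its core. The rules below are simplified stand-ins rather than any vendor’s actual DPI engine, but the structural problem is visible: every protocol to be blocked needs an explicit, hand-written rule, and traffic the rules don’t anticipate is either missed entirely or blocked by mistake.

```python
# Toy signature-based traffic classifier. Real DPI boxes are far more
# sophisticated, but the basic shape is the same: a list of hand-written
# patterns, applied to each packet.

SIGNATURES = {
    b"\x13BitTorrent protocol": "bittorrent handshake",
    b"GET ":                    "http request",
}

def classify(payload: bytes) -> str:
    for pattern, label in SIGNATURES.items():
        if payload.startswith(pattern):
            return label
    return "unknown"   # new, encrypted, or tunneled traffic lands here

print(classify(b"\x13BitTorrent protocol" + b"\x00" * 8))   # matched and blockable
print(classify(b"\x8a\x91... obfuscated handshake ..."))    # slips past the filter
```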

The more fundamental reason is economic. The Internet works as well as it does precisely because it is decentralized. No organization on Earth has the manpower that would have been required to directly manage all of the content and applications on the Internet. Networks like AOL and CompuServe that were managed that way got bogged down in bureaucracy while they were still a small fraction of the Internet’s current size. It is not plausible that bureaucracies at Comcast, AT&T, or Verizon could manage their TCP/IP networks the way AOL ran its network a decade ago.

Of course what advocates of regulation fear is precisely that these companies will try to manage their networks this way, fail, and screw the Internet up in the process. But I think this underestimates the magnitude of the disaster that would befall any network provider that tried to convert their Internet service into a proprietary network. People pay for Internet access because they find it useful. A proprietary Internet would be dramatically less useful than an open one because network providers would inevitably block an enormous number of useful applications and websites. A network provider that deliberately broke a significant fraction of the content or applications on its network would find many fewer customers willing to pay for it. Customers that could switch to a competitor would. Some others would simply cancel their home Internet service and rely instead on Internet access at work, school, libraries, etc. And many customers that had previously taken higher-speed Internet service would downgrade to basic service. In short, even in an environment of limited competition, reducing the value of one’s product is rarely a good business strategy.

This isn’t to say that ISPs will never violate network neutrality. A few have done so already. The most significant was Comcast’s interference with the BitTorrent protocol last year. I think there’s plenty to criticize about what Comcast did. But there’s a big difference between interfering with one networking protocol and the kind of comprehensive filtering that network neutrality advocates fear. And it’s worth noting that even Comcast’s modest interference with network neutrality provoked a ferocious response from customers, the press, and the political process. The Comcast/BitTorrent story certainly isn’t going to make other ISPs think that more aggressive violations of network neutrality would be a good business strategy.

So it seems to me that new regulations are unnecessary to protect network neutrality. They are likely to be counterproductive as well. As Ed has argued, defining network neutrality precisely is surprisingly difficult, and enacting a ban without a clear definition is a recipe for problems. In addition, there’s a real danger of what economists call regulatory capture—that industry incumbents will find ways to turn regulatory authority to their advantage. As I document in my study, this is what happened with 20th-century regulation of the railroad, airline, and telephone industries. Congress should proceed carefully, lest regulations designed to protect consumers from telecom industry incumbents wind up protecting incumbents from competition instead.