November 28, 2004

Another E-Voting Glitch: Miscalibrated Touchscreens

Voters casting early ballots in New Mexico report that the state’s touchscreen voting machines sometimes record a vote for the wrong candidate, according to a Jim Ludwick story in the Albuquerque Journal. (Link via DocBug)

[Kim Griffith] went to Valle Del Norte Community Center in Albuquerque, planning to vote for John Kerry. “I pushed his name, but a green check mark appeared before President Bush’s name,” she said.

Griffith erased the vote by touching the check mark at Bush’s name. That’s how a voter can alter a touch-screen ballot.

She again tried to vote for Kerry, but the screen again said she had voted for Bush. The third time, the screen agreed that her vote should go to Kerry.

She faced the same problem repeatedly as she filled out the rest of the ballot. On one item, “I had to vote five or six times,” she said.

Michael Cadigan, president of the Albuquerque City Council, had a similar experience when he voted at City Hall.

“I cast my vote for president. I voted for Kerry and a check mark for Bush appeared,” he said.

He reported the problem immediately and was shown how to alter the ballot.

Cadigan said he doesn’t think he made a mistake the first time. “I was extremely careful to accurately touch the button for my choice for president,” but the check mark appeared by the wrong name, he said.

In Sandoval County, three Rio Rancho residents said they had a similar problem, with opposite results. They said a touch-screen machine switched their presidential votes from Bush to Kerry.

County officials blame the voters, saying that they must have inadvertently touched the screen elsewhere.

My guess is that the touchscreens are miscalibrated. Touchscreens use one mechanism to paint images onto the screen, and a separate mechanism to measure where the screen has been touched. Usually the touch sensor has to be calibrated to make sure that the coordinate system used by the touch sensor matches up with the coordinate system used by the screen-painting mechanism. If the sensor isn’t properly calibrated, touches made on one part of the image will be registered elsewhere. For example, touches might be registered an inch or two below the place they really occur.

(Some PDAs, such as Palm systems, calibrate their touchscreens when they boot, by presenting the user with a series of crosshairs and asking the user to touch the center of each one. If you’re a Palm user, you have probably seen this.)
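The idea of mismatched coordinate systems can be made concrete. In a typical design, raw touch-sensor readings are mapped to screen pixels by a linear transform whose coefficients come from the calibration step (the crosshairs a Palm device shows). The sketch below is illustrative only; the numbers and function names are made up, not taken from any real voting machine:

```python
# Illustrative sketch of touchscreen calibration: the sensor reports raw
# coordinates that must be mapped onto screen pixels via a linear transform
# fitted from calibration touches (one axis shown for simplicity).

def fit_axis(raw_points, screen_points):
    """Fit screen = scale * raw + offset for one axis from two samples."""
    (r1, s1), (r2, s2) = zip(raw_points, screen_points)
    scale = (s2 - s1) / (r2 - r1)
    offset = s1 - scale * r1
    return scale, offset

# Calibration: the user touches crosshairs drawn at known screen positions.
scale, offset = fit_axis(raw_points=(100, 900), screen_points=(50, 430))

def to_screen(raw):
    return scale * raw + offset

# A correctly calibrated sensor maps a touch back to where it was drawn:
print(to_screen(500))            # 240.0 -- lands where the voter pressed

# With stale or wrong calibration data (here, an offset error), every touch
# registers some distance from where the voter actually pressed -- which is
# exactly the "check mark by the wrong name" behavior described above:
bad_offset = offset - 60         # registers 60 pixels low
print(scale * 500 + bad_offset)  # 180.0
```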

Touchscreens are especially prone to calibration problems when they have gone unused for a long time, as will tend to happen with voting machines.

My guess is that few poll workers know how to recognize this problem, and fewer still know how to fix it if it happens. One solution is to educate poll workers better. Another solution is to avoid using technologies that are prone to geeky errors like touchscreen miscalibration.

This is yet another reminder to proofread your vote before it is cast.

UPDATE (3:15 PM): Joe Hall points to an argument by Doug Jones that problems of this sort represent another type of touchscreen calibration problem. If the voter rests a palm or a thumb on the edge of the touchscreen surface, this can (temporarily) mess up the screen’s calibration. That seems like another plausible explanation of the New Mexico voters’ complaints. Either way, touchscreens may misread the voter’s intention. Again: don’t forget to double-check that the technology (no matter what it is) seems to be registering your vote correctly.

LAMP and Regulatory Arbitrage

Today, MIT’s LAMP system goes back on line, with a new design. LAMP (“Library Access to Music Project”) streams music to the MIT campus via the campus cable TV system. Any student can connect to LAMP’s website and choose a sequence of songs. The chosen songs are then scheduled for playing on one of sixteen campus TV channels.

According to MIT, transmission of music via LAMP is legal because it is covered by music licenses that MIT has purchased in connection with the campus radio station. In other words, LAMP is just like another set of sixteen campus radio stations that happen to be controllable by MIT students across the Web. I don’t know whether this legal argument is correct, but it sounds plausible and MIT appears to stand behind it.

You may recall that LAMP launched last year but was shut down a few days later when copyright owners argued that LoudEye, which had sold MIT digital files to use in that incarnation of LAMP, did not have the legal right to sell those files for such uses.

Now LAMP is back, with the original design’s efficient digital back end replaced by a new setup in which an array of low-end CD jukeboxes is controlled by special computers. This allows LAMP to get its music from ordinary CDs, as many radio stations do.

From an engineering standpoint, the new design of LAMP is overly complex, fragile, and inefficient. That’s not surprising, because lawyers must have had a big impact on the design.

LAMP is a great example of regulatory arbitrage – the adoption of otherwise-inefficient behavior in order to shift from one legal or regulatory regime to another. There’s one set of copyright rules for radio stations and another set for webcasters. LAMP transmits music over the cable-TV system, rather than the more efficient Internet system, in order to stay on the radio-station side of the line. There’s one set of rules for direct access to digital music on CDs and another set of rules for copies stored on hard disks. LAMP uses CDs in jukeboxes, rather than more efficient hard-disk storage, in order to stay on the CD side of that legal line.

We’re going to see more and more of this kind of regulatory arbitrage by engineers. Copyright law is getting more complicated and is accumulating more technology-specific rules, so there are more and more legal lines across which designers will want to step. At the same time, technology is becoming more powerful and more flexible, giving designers an ever wider menu of design options. The logical outcome is a twisting of technology design to satisfy predetermined legal categories rather than engineering efficiency.

Tit for Tat

Recent news stories, picked up all over blogland, reported that Tit-for-Tat has been dethroned as the best strategy in iterated prisoners’ dilemma games. In a computer tournament, a team from Southampton University won with a new strategy, beating the Tit-for-Tat strategy for the first time.

Here’s the background. Prisoners’ Dilemma is a game with two players. Each player chooses a move, which is either Cooperate or Defect. Then the players reveal their moves to each other. If both sides Cooperate, they each get three points. If both Defect, they each get one point. If one player Cooperates and the other Defects, then the defector gets five points and the cooperator gets none. The game is interesting because no matter what one’s opponent does, one is better off choosing to Defect; but the most mutually beneficial result occurs when both players Cooperate.
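The payoff structure just described can be written down directly; a minimal sketch in Python:

```python
# Prisoners' Dilemma payoffs as described: C/C -> 3 each; D/D -> 1 each;
# one Defect against one Cooperate -> 5 for the defector, 0 for the cooperator.
PAYOFF = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def score(move_a, move_b):
    """Return (points for A, points for B) for one round."""
    return PAYOFF[(move_a, move_b)]

# Defect dominates: it scores 5 vs. 3 against a cooperator and 1 vs. 0
# against a defector -- yet mutual Cooperation beats mutual Defection.
print(score("D", "C"))  # (5, 0)
print(score("C", "C"))  # (3, 3)
```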

Things get more interesting when you iterate the game, so that the same pair of players plays many times in a row. A player can then base its strategy on what the opponent has done recently, which changes the opponent’s incentives in subtle ways. This game is an interesting abstract model of adversarial social relationships, so people are interested in understanding its strategy tradeoffs.

For at least twenty years, the best-looking strategy has been Tit-for-Tat, in which one starts out by Cooperating and then copies whatever action the opponent used last. This strategy offers an appealing combination of initial friendliness with measured retaliation for an opponent’s Defections. In tournaments among computer players, Tit-for-Tat won consistently.

But this year, the Southampton team unveiled a new strategy that won the latest tournament. Many commentators responded by declaring that Tit-for-Tat had been dethroned. But I think that conclusion is wrong, for reasons I’ll explain.

But first, let me explain the new Southampton strategy. (This is based on press accounts, but I’m confident that it’s at least pretty close to correct.) They entered many players in the tournament. Their players divide into two groups, which I’ll call Stars and Stooges. The Stars try to win the tournament, and the Stooges sacrifice themselves so the Stars can win. When facing a new opponent, one of these players starts out by making a distinctive sequence of moves. Southampton’s players watch for this distinctive sequence, which allows them to tell whether their opponents are other Southampton players. When two Southampton players are playing each other, they collude to maximize their scores (or at least the score of the Star(s), if any, among them). When a Star plays an outsider, it tries to score points normally; but when a Stooge plays an outsider, it always Defects, to minimize the opponent’s score. Thus the Stooges sacrifice themselves so that the Stars can win. And indeed, the final results show a few Stars at the top of the standings (above Tit-for-Tat players) and a group of Stooges near the bottom.
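To make the recognition trick concrete, here is a sketch of how a handshake-based team scheme could work. Like the description above, it is based only on press accounts; the particular signature sequence and all function names are hypothetical, not taken from the actual Southampton entry:

```python
# Sketch of a Star/Stooge recognition scheme: team players open with a
# distinctive "handshake" sequence of moves, and anyone who echoes it is
# presumed to be a teammate. The sequence below is made up for illustration.
HANDSHAKE = ["C", "D", "D", "C", "D"]

def teammate_detected(opponent_history):
    """After the handshake rounds, check whether the opponent echoed it."""
    return opponent_history[:len(HANDSHAKE)] == HANDSHAKE

def star_move(round_number, opponent_history):
    if round_number < len(HANDSHAKE):
        return HANDSHAKE[round_number]       # announce ourselves
    if teammate_detected(opponent_history):
        return "D"                           # a Stooge will feed us 5 points/round
    return opponent_history[-1]              # vs. outsiders, play normally (TFT-like)

def stooge_move(round_number, opponent_history):
    if round_number < len(HANDSHAKE):
        return HANDSHAKE[round_number]
    if teammate_detected(opponent_history):
        return "C"                           # sacrifice: let the Star defect on us
    return "D"                               # vs. outsiders, always Defect
```

When a Star meets a Stooge, the handshake rounds identify them to each other; thereafter the Star defects while the Stooge cooperates, transferring five points per round to the Star at the Stooge’s expense.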

If we look more closely, the Southampton strategy doesn’t look so good. Apparently, Tit-for-Tat still scores higher than the average Southampton player – the sacrifice (in points) made by the Stooges is not fully recouped by the Stars. So Tit-for-Tat will still be the best strategy, both for a lone player, and for a team of players, assuming the goal is to maximize the sum of the team members’ scores. (Note that a team of Tit-for-Tat players doesn’t need to use the Southampton trick for recognizing fellow team members, since Tit-for-Tat players who play each other will always cooperate, which is the team-optimal thing to do.)

So it seems that all the Southampton folks discovered is a clever way to exploit the rules of this particular tournament, with its winner-take-all structure. That’s clever, but I don’t think it has much theoretical significance.

UPDATE (Friday 22 October): The comments on this post are particularly good.

Preemptive Blame-Shifting by the E-Voting Industry

The November 2nd election hasn’t even happened yet, and already the e-voting industry is making excuses for the election-day failures of their technology. That’s right – they’re rebutting future reports of future failures. Here’s a sample:

Problem

Voting machines will not turn on or operate.

Explanation

Voting machines are not connected to an active power source. Machines may have been connected to a power strip that has been turned off or plugged into an outlet controlled by a wall switch. Power surges or outages caused by electrical storms or other natural occurrences are not unheard of. If the power source to the machine has been lost, voting machines will generally operate on battery power for brief periods. Once battery power is lost, however, the machines will cease to function (although votes cast on such machines will not be lost). Electronic voting machines may require the election official or precinct worker to enter a password in order to operate. Lost or forgotten passwords may produce lengthy delays as this information is retrieved from other sources.

In the past, of course, voting machines have failed to operate for other reasons, as in the 2003 California gubernatorial recall election, when Diebold machines, which turned out to be uncertified, failed to boot properly at many polling places in San Diego and Alameda counties. (Verifiedvoting.org offers a litany of these and other observed e-voting failures.)

The quote above comes from a document released by the Election Technology Council, a trade group of e-voting vendors. (The original, tellingly released only in the not-entirely-secure Word format, is here.)

The tone of the ETC document is clear – our technology is great, but voters and poll workers aren’t smart enough to use it correctly. Never mind that the technology is deeply flawed (see, e.g., my discussion of Diebold’s insecure protocols, not to mention all of the independent studies of the technology). Never mind that the vendors are the ones who design the training regimes whose inadequacy they blame. Never mind that it is their responsibility to make their products usable.

[Link credit: Slashdot]

Privacy, Recording, and Deliberately Bad Crypto

One reason for the growing concern about privacy these days is the ever-decreasing cost of storing information. The cost of storing a fixed amount of data seems to be dropping at the Moore’s Law rate, that is, by a factor of two every 18 months, or equivalently a factor of about 100 every decade. When storage costs less, people will store more information. Indeed, if storage gets cheap enough, people will store even information that has no evident use, as long as there is even a tiny probability that it will turn out to be valuable later. In other words, they’ll store everything they can get their hands on. The result is that more information about our lives will be accessible to strangers.
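The factor-of-100-per-decade figure follows directly from halving every 18 months; a quick check:

```python
# Storage cost halves every 18 months, so over t months the cost scales by
# (1/2) ** (t / 18); equivalently, capacity per dollar grows by 2 ** (t / 18).
# Over a decade (120 months):
decade_factor = 2 ** (120 / 18)
print(round(decade_factor))   # 102, i.e. roughly a factor of 100
```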

(Some people argue that the growth in available information is on balance a good thing. I want to put that argument aside here, and ask you to accept only that technology is making more information about us available to strangers, and that an erosion of our legitimate privacy interests is among the consequences of that trend.)

By default, information that is stored can be accessed cheaply. But it turns out that there are technologies we can use to make stored information (artificially) expensive to access. For example, we can encrypt the information using a weak encryption method that can be broken by expending some predetermined amount of computation. To access the information, one would then have to buy or rent sufficient computer time to break the encryption method. The cost of access could be set to whatever value we like.

(For techies, here’s how it works. (There are fancier methods. This one is the simplest to explain.) You encrypt the data, using a strong cipher, under a randomly chosen key K. You provide a hint about the value of K (e.g. upper and lower bounds on the value of K), and then you discard K. Reconstructing the data now requires doing an exhaustive search to find K. The size of the search required depends on how precise the hint is.)
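Here is a toy, self-contained sketch of that scheme. It uses a hash-based stream cipher from the standard library and a deliberately tiny key space so the search finishes in seconds; in a real deployment the hinted key range, and hence the access cost, would be set far larger, and a vetted cipher would be used:

```python
import hashlib
import os

def keystream_encrypt(data, key):
    """Toy stream cipher: XOR data with SHA-256(key || counter) blocks.
    Illustrative only; decryption is the same operation (XOR is symmetric)."""
    out = bytearray()
    for i in range(0, len(data), 32):
        block = hashlib.sha256(
            key.to_bytes(8, "big") + i.to_bytes(8, "big")).digest()
        out.extend(b ^ k for b, k in zip(data[i:i + 32], block))
    return bytes(out)

def seal(data, key_bits=16):
    """Encrypt under a random key, discard the key, keep only a range hint.
    Recovering the data then costs up to 2**key_bits trial decryptions."""
    key = int.from_bytes(os.urandom(8), "big") % (1 << key_bits)
    ciphertext = keystream_encrypt(data, key)
    check = hashlib.sha256(data).digest()[:8]  # lets a searcher recognize success
    hint = (0, 1 << key_bits)                  # the "hint about the value of K"
    return ciphertext, hint, check             # note: key is NOT returned

def unseal(ciphertext, hint, check):
    """Exhaustive search over the hinted range: the deliberate access cost."""
    lo, hi = hint
    for key in range(lo, hi):
        candidate = keystream_encrypt(ciphertext, key)
        if hashlib.sha256(candidate).digest()[:8] == check:
            return candidate
    raise ValueError("key not in hinted range")

sealed = seal(b"snapshot of a public square", key_bits=16)
print(unseal(*sealed))   # recovered, after up to 2**16 trial decryptions
```

Widening the hint (say, from 16 bits to 40) multiplies the access cost without changing the scheme, which is how the cost "could be set to whatever value we like."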

This method has many applications. For example, suppose the police want to take snapshots of public places at fixed intervals, and we want them to be able to see any serious crimes that happen in front of their cameras, but we don’t want them to be able to browse the pictures arbitrarily. (Again, I’m putting aside the question of whether it’s wise for us to impose this requirement.) We could require them to store the pictures in such a way that retrieving any one picture carried some moderate cost. Then they would be able to access photos of a few crimes being committed, but they couldn’t afford to look at everything.

One drawback of this approach is that it is subject to Moore’s Law. The price of accessing a data item is paid not in dollars but in computing cycles, a resource whose dollar cost is cut in half every 18 months. So what is expensive to access now will be relatively cheap in, say, ten years. For some applications, that’s just fine, but for others it may be a problem.

Sometimes this drop in access cost may be just what you want. If you want to make a digital time capsule that cannot be opened now but will be easy to open 100 years from now, this method is perfect.