

New EVoting-Experts Group Blog

evoting-experts.com is a new group blog devoted to e-voting issues. Members include leading experts on the technology, including David Dill, Ed Felten, Joe Hall, Avi Rubin, Adam Stubblefield, and Dan Wallach (with more to come, we hope).

The site’s goal is to provide one-stop shopping for e-voting news and analysis, to the public and the press, on election day and thereafter.

Check it out, and please help us spread the word about the site.


The Big-Head Principle

Over the next few days, Americans will be asking themselves which candidate has what it takes to be president, or at least which one has what it takes to win the election. To answer this question, we must first determine exactly what it does take. Based on personal observation, I think I may know.

Bill Clinton is the only U.S. president I have seen up close. He walked about ten feet from me in the Princeton graduation procession a few years ago. And I couldn’t help noticing that he had a really big head. When I say this, I don’t mean he was very smart, and I don’t mean he had an inflated opinion of himself – though both of those things may well be true. I mean, quite literally, that his head was considerably larger than average for a man of his size. So much so that his head size is the one and only thing I remember about my near-encounter with him. Perhaps having a large head helps one to succeed in politics.

If you think about it, we are often drawn to big-headed creatures. Mickey Mouse. Frankenstein’s monster. Barney the dinosaur. Bart Simpson. Mister Potato Head. Spongebob Squarepants. What is it about big-heads that makes us want to watch them?

Perhaps the explanation is that babies have disproportionately large heads, and we are genetically programmed to like babies. Or perhaps large heads can better show sympathetic emotion.

In any case, head size is clearly an important factor in politics, a factor we can use to divine a hidden law of American politics – the candidate with the bigger head usually wins. Call it the Big-Head Principle.

Which candidate has the bigger head in this election? Video coverage shows the candidates shaking hands after the debates. Looking at the two men side by side, in the same shot, it’s clear that John Kerry has the bigger head.

Being nonpartisan, we will not endorse a candidate; but we can make a prediction. According to the Big-Head Principle, John Kerry will be the next president of the United States.


CallerID and Bad Authentication

A new web service allows anybody to make phone calls with forged CallerID (for a fee), according to a Kevin Poulsen story at SecurityFocus. (Another such service had been open briefly a few months ago.) This isn’t surprising, given the known insecurity of the CallerID system, which trusts the system where a call originates to provide accurate information about the calling number.

This is more than just a prankster’s delight, since some technologies are designed to use CallerID as if it were a secure identifier of the calling number. Poulsen reports, for instance, that T-Mobile uses CallerID to authenticate its customers’ access to their voicemail. If I can call the T-Mobile voicemail system, while sending CallerID information indicating that the call is coming from your phone, then I can access your voicemail box.

Needless to say, it’s a bad idea to use an insecure identifier to authenticate accesses to any service. Still, this mistake is often made.
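
Sketched in code, the broken pattern (and the obvious repair) looks something like the following. This is a minimal Python sketch with invented numbers and data structures; it is not T-Mobile’s actual system.

    # Hypothetical voicemail front end that trusts CallerID.
    MAILBOXES = {"+1-609-555-0123": ["message 1", "message 2"]}
    PINS = {"+1-609-555-0123": "4931"}

    def answer_call(reported_caller_id):
        # BUG: reported_caller_id is supplied by the system where the
        # call originates, so a forged value passes this "authentication".
        if reported_caller_id in MAILBOXES:
            return MAILBOXES[reported_caller_id]
        return "no mailbox"

    def answer_call_safely(reported_caller_id, pin):
        # Better: treat CallerID as a routing hint only, and authenticate
        # with a secret that only the mailbox owner knows.
        if PINS.get(reported_caller_id) == pin:
            return MAILBOXES[reported_caller_id]
        return "access denied"

However carefully the rest of the system is built, a forged CallerID walks right into the first version.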

A common example of the same mistake is to use IP addresses (the numeric addresses that designate “places” on the Internet) to authenticate users of an Internet service. For example, if Princeton University subscribes to some online database, the database service may allow access from any of the IP addresses belonging to Princeton. This is a bad idea, since IP addresses can sometimes be spoofed, and various legitimate services can make an access seem to come from one address when it’s really coming from another.

If I were to run a web proxy within the Princeton network, then anybody accessing the web through my proxy might (depending on the circumstances) appear to be using a Princeton IP address. My web proxy might therefore allow anybody on the web to access the proprietary database. Some users might deliberately use my proxy to gain unauthorized access, and some users might be using the proxy for other, legitimate reasons and be surprised to have open access to the database. In either case, the access would be enabled by the database company’s decision to rely on IP addresses to control access.
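
Here is the same pattern in miniature, as a hypothetical Python sketch of IP-based access control (the address block is made up for illustration):

    import ipaddress

    # Hypothetical subscriber allowlist: one campus address block.
    CAMPUS = ipaddress.ip_network("128.112.0.0/16")

    def may_access(connecting_ip):
        # The database sees only the address the connection arrives from;
        # it learns nothing about who is actually behind that address.
        return ipaddress.ip_address(connecting_ip) in CAMPUS

    print(may_access("203.0.113.9"))   # False: outsider connecting directly
    print(may_access("128.112.42.7"))  # True: but this might be a campus
                                       #   web proxy relaying that outsider

The check can’t distinguish a student’s browser from a proxy relaying requests for the whole world, which is exactly the problem.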

In practice, people who design web proxies and similar services often find themselves jumping through hoops to try to prevent this kind of problem, even though it’s not their fault. One isn’t supposed to rely on IP addresses for authentication, but many people do. The result is that developers of new services may find themselves either (a) inadvertently enabling unauthorized access to other services, or (b) spending extra time and effort to shore up the insecure systems of others. Some of my colleagues who developed CoDeeN, a cool distributed web proxy system, found themselves wrestling with this problem and ultimately chose to add complexity to their design to protect some IP-address-based authentication systems. (They wrote an interesting paper about all of the “bad traffic” that showed up when they set up CoDeeN.)

It will be interesting to see how the CallerID story develops. My guess is that people will stop relying on the accuracy of CallerID, as spoofing becomes more widespread.


Pro-Competition Ruling in Lexmark Case

Yesterday the Sixth Circuit Court of Appeals ruled in Lexmark v. Static Control. The Court said, in effect, that Lexmark could not leverage copyright and DMCA claims to keep a competitor from making toner cartridges that work with Lexmark printers. This reversed a lower court decision.

[Backstory: Lexmark-brand toner cartridges contain a short computer program (about 50 bytes). Software in a Lexmark printer checks whether a newly inserted toner cartridge contains that program, and refuses to work with cartridges that don't. Static Control makes a chip containing the same short program, so that third-party cartridges containing the Static Control chip can work in Lexmark printers. Lexmark sued, claiming copyright infringement (for copying the program) and DMCA violations (for circumventing the program-verification step). The trial court issued a preliminary injunction against Static Control, which the Sixth Circuit has now overturned.]

The ruling is very good news on both the copyright and DMCA fronts. The fundamental issue on both fronts was whether a company could use copyright or the DMCA, in conjunction with a technical lockout mechanism, to prevent a competitor from making products that work (or interoperate) with its products.

The interesting copyright issue is whether a copyright owner can leverage copyright to limit interoperability. Consider this hypothetical: Alice writes a computer program, which I’ll call A. Alice writes a copyrighted poem, and she programs A so that it will accept input only from programs that first send a copy of the poem. Alice gives permission for Bob’s program B to send the poem, but she refuses permission to everybody else. When Charlie makes a program that sends the poem, Alice sues him for infringing the poem’s copyright. Charlie proves that there is no way for his program to interoperate with A, except by sending the poem. Should Charlie be liable for copyright infringement?

This hypothetical doesn’t exactly match the facts of the present case, as far as I can tell, but it’s pretty close. The Court ruled that Static Control was allowed to copy Lexmark’s short computer program (which is analogous to the poem), to the extent that that copying was required in order to interoperate. So Lexmark could not leverage its copyright to prevent interoperability.

On the DMCA side, Lexmark had argued (and the lower court had agreed) that the printer mechanism that checked for the presence of the small toner-cartridge program was, under the DMCA, a technical protection mechanism that controlled access to Lexmark’s software, and that Static Control had circumvented that mechanism in violation of the DMCA. The key word here is “access”. The lower court said that the mechanism controlled “access” because it controlled the user’s ability to make use of the software, and “to make use of” is one definition of the word “access”. The Court of Appeals disagreed, saying that this was not the kind of “access” that Congress meant to protect in passing the DMCA. What Congress meant by “access”, the Court said, is the ability to read the program itself, not the ability to interact with or use it. Since Lexmark’s technical mechanism did not control the ability to read the program, it was not an access control in the sense meant by the DMCA, and hence Static Control had not violated the DMCA.

This is consistent with another court’s ruling in an earlier case, Chamberlain v. Skylink, involving garage door openers.

To sum up, this ruling is a big victory for interoperability. It also strikes an important blow against one overreaching reading of the DMCA, by limiting the scope of the access control provision. The DMCA is still deeply problematic in other ways, but we can hope that this ruling has narrowed its scope a bit.


Another E-Voting Glitch: Miscalibrated Touchscreens

Voters casting early ballots in New Mexico report that the state’s touchscreen voting machines sometimes record a vote for the wrong candidate, according to a Jim Ludwick story in the Albuquerque Journal. (Link via DocBug)

[Kim Griffith] went to Valle Del Norte Community Center in Albuquerque, planning to vote for John Kerry. “I pushed his name, but a green check mark appeared before President Bush’s name,” she said.

Griffith erased the vote by touching the check mark at Bush’s name. That’s how a voter can alter a touch-screen ballot.

She again tried to vote for Kerry, but the screen again said she had voted for Bush. The third time, the screen agreed that her vote should go to Kerry.

She faced the same problem repeatedly as she filled out the rest of the ballot. On one item, “I had to vote five or six times,” she said.

Michael Cadigan, president of the Albuquerque City Council, had a similar experience when he voted at City Hall.

“I cast my vote for president. I voted for Kerry and a check mark for Bush appeared,” he said.

He reported the problem immediately and was shown how to alter the ballot.

Cadigan said he doesn’t think he made a mistake the first time. “I was extremely careful to accurately touch the button for my choice for president,” but the check mark appeared by the wrong name, he said.

In Sandoval County, three Rio Rancho residents said they had a similar problem, with opposite results. They said a touch-screen machine switched their presidential votes from Bush to Kerry.

County officials blame the voters, saying that they must have inadvertently touched the screen elsewhere.

My guess is that the touchscreens are miscalibrated. Touchscreens use one mechanism to paint images onto the screen, and a separate mechanism to measure where the screen has been touched. Usually the touch sensor has to be calibrated to make sure that the coordinate system used by the touch sensor matches up with the coordinate system used by the screen-painting mechanism. If the sensor isn’t properly calibrated, touches made on one part of the image will be registered elsewhere. For example, touches might be registered an inch or two below the place they really occur.

(Some PDAs, such as Palm systems, calibrate their touchscreens when they boot, by presenting the user with a series of crosshairs and asking the user to touch the center of each one. If you’re a Palm user, you have probably seen this.)
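
For the curious, what such a crosshair routine computes is essentially a linear map from raw sensor coordinates to screen coordinates. Here is a one-axis Python sketch with invented numbers:

    def fit_axis(raw_a, screen_a, raw_b, screen_b):
        # Solve screen = scale * raw + offset from two crosshair touches.
        scale = (screen_b - screen_a) / (raw_b - raw_a)
        offset = screen_a - scale * raw_a
        return scale, offset

    # Crosshairs drawn at screen x = 100 and x = 500; the sensor reported
    # raw x = 120 and raw x = 540 when the user touched them.
    scale, offset = fit_axis(120, 100, 540, 500)

    def to_screen_x(raw_x):
        return scale * raw_x + offset

    print(to_screen_x(330))  # ~300.0, where the touch really belongs

With no calibration (treating raw 330 as screen 330), every touch would register about thirty pixels away from where it was made; an offset like that is exactly what could turn a touch on one candidate’s name into a check mark next to another’s.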

Touchscreens are especially prone to calibration problems when they have gone unused for a long time, as will tend to happen with voting machines.

My guess is that few poll workers know how to recognize this problem, and fewer still know how to fix it if it happens. One solution is to educate poll workers better. Another solution is to avoid using technologies that are prone to geeky errors like touchscreen miscalibration.

This is yet another reminder to proofread your vote before it is cast.

UPDATE (3:15 PM): Joe Hall points to an argument by Doug Jones that problems of this sort represent another type of touchscreen calibration problem. If the voter rests a palm or a thumb on the edge of the touchscreen surface, this can (temporarily) mess up the screen’s calibration. That seems like another plausible explanation of the New Mexico voters’ complaints. Either way, touchscreens may misread the voter’s intention. Again: don’t forget to double-check that the technology (no matter what it is) seems to be registering your vote correctly.


LAMP and Regulatory Arbitrage

Today, MIT’s LAMP system goes back on line, with a new design. LAMP (“Library Access to Music Project”) streams music to the MIT campus via the campus cable TV system. Any student can connect to LAMP’s website and choose a sequence of songs. The chosen songs are then scheduled for playing on one of sixteen campus TV channels.

According to MIT, transmission of music via LAMP is legal because it is covered by music licenses that MIT has purchased in connection with the campus radio station. In other words, LAMP is just like another set of sixteen campus radio stations that happen to be controllable by MIT students across the Web. I don’t know whether this legal argument is correct, but it sounds plausible and MIT appears to stand behind it.

You may recall that LAMP launched last year but was shut down a few days later when copyright owners argued that LoudEye, which had sold MIT digital files to use in that incarnation of LAMP, did not have the legal right to sell those files for such uses.

Now LAMP is back, with the original design’s efficient digital back end replaced by a new setup in which an array of low-end CD jukeboxes is controlled by special computers. This allows LAMP to get its music from ordinary CDs, as many radio stations do.

From an engineering standpoint, the new design of LAMP is overly complex, fragile, and inefficient. That’s not surprising, because lawyers must have had a big impact on the design.

LAMP is a great example of regulatory arbitrage – the adoption of otherwise-inefficient behavior in order to shift from one legal or regulatory regime to another. There’s one set of copyright rules for radio stations and another set for webcasters. LAMP transmits music over the cable-TV system, rather than the more efficient Internet system, in order to stay on the radio-station side of the line. There’s one set of rules for direct access to digital music on CDs and another set of rules for copies stored on hard disks. LAMP uses CDs in jukeboxes, rather than more efficient hard-disk storage, in order to stay on the CD side of that legal line.

We’re going to see more and more of this kind of regulatory arbitrage by engineers. Copyright law is getting more complicated and is accumulating more technology-specific rules, so there are more and more legal lines across which designers will want to step. At the same time, technology is becoming more powerful and more flexible, giving designers an ever wider menu of design options. The logical outcome is a twisting of technology design to satisfy predetermined legal categories rather than engineering efficiency.


Tit for Tat

Recent news stories, picked up all over blogland, reported that Tit-for-Tat has been dethroned as the best strategy in iterated prisoners’ dilemma games. In a computer tournament, a team from Southampton University won with a new strategy, beating the Tit-for-Tat strategy for the first time.

Here’s the background. Prisoners’ Dilemma is a game with two players. Each player chooses a move, which is either Cooperate or Defect. Then the players reveal their moves to each other. If both sides Cooperate, they each get three points. If both Defect, they each get one point. If one player Cooperates and the other Defects, then the defector gets five points and the cooperator gets none. The game is interesting because no matter what one’s opponent does, one is better off choosing to Defect; but the most mutually beneficial result occurs when both players Cooperate.

Things get more interesting when you iterate the game, so that the same pair of players plays many times in a row. A player can then base its strategy on what the opponent has done recently, which changes the opponent’s incentives in subtle ways. This game is an interesting abstract model of adversarial social relationships, so people are interested in understanding its strategy tradeoffs.

For at least twenty years, the most successful strategy has been Tit-for-Tat, in which one starts out by Cooperating and then copies whatever action the opponent used last. This strategy offers an appealing combination of initial friendliness with measured retaliation for an opponent’s Defections. In tournaments among computer players, Tit-for-Tat won consistently.
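
If you want to experiment, the game and the strategy fit in a few lines of Python. This is a bare-bones sketch of the kind of harness such tournaments use; the round count is arbitrary:

    # Payoff table from the rules above: (player's points, opponent's points).
    PAYOFF = {
        ("C", "C"): (3, 3),
        ("C", "D"): (0, 5),
        ("D", "C"): (5, 0),
        ("D", "D"): (1, 1),
    }

    def tit_for_tat(my_moves, opp_moves):
        # Cooperate first, then copy the opponent's last move.
        return "C" if not opp_moves else opp_moves[-1]

    def always_defect(my_moves, opp_moves):
        return "D"

    def play(strat_a, strat_b, rounds=200):
        hist_a, hist_b = [], []
        score_a = score_b = 0
        for _ in range(rounds):
            a = strat_a(hist_a, hist_b)
            b = strat_b(hist_b, hist_a)
            pa, pb = PAYOFF[(a, b)]
            score_a, score_b = score_a + pa, score_b + pb
            hist_a.append(a)
            hist_b.append(b)
        return score_a, score_b

    print(play(tit_for_tat, always_defect))  # (199, 204): loses only round one
    print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation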

But this year, the Southampton team unveiled a new strategy that won the latest tournament. Many commentators responded by declaring that Tit-for-Tat had been dethroned. But I think that conclusion is wrong, for reasons I’ll explain.

But first, let me explain the new Southampton strategy. (This is based on press accounts, but I’m confident that it’s at least pretty close to correct.) They entered many players in the tournament. Their players divide into two groups, which I’ll call Stars and Stooges. The Stars try to win the tournament, and the Stooges sacrifice themselves so the Stars can win. When facing a new opponent, one of these players starts out by making a distinctive sequence of moves. Southampton’s players watch for this distinctive sequence, which allows them to tell whether their opponents are other Southampton players. When two Southampton players are playing each other, they collude to maximize their scores (or at least the score of the Star(s), if any, among them). When a Star plays an outsider, it tries to score points normally; but when a Stooge plays an outsider, it always Defects, to minimize the opponent’s score. Thus the Stooges sacrifice themselves so that the Stars can win. And indeed, the final results show a few Stars at the top of the standings (above Tit-for-Tat players) and a group of Stooges near the bottom.
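
In code, the team trick might look something like the sketch below, which plugs into the play() harness above. The handshake sequence and the details are my reconstruction from press accounts, not the Southampton team’s actual entry (for simplicity, it also ignores what happens when two Stars meet):

    HANDSHAKE = ["C", "D", "D", "C", "D"]  # hypothetical recognition prefix

    def southampton(role):
        # role is "star" or "stooge"; returns a strategy usable with play().
        def strategy(my_moves, opp_moves):
            n = len(my_moves)
            if n < len(HANDSHAKE):
                return HANDSHAKE[n]  # perform the recognition sequence first
            if opp_moves[:len(HANDSHAKE)] == HANDSHAKE:
                # Teammate recognized: the Stooge cooperates while the Star
                # defects, feeding the Star five points every round.
                return "D" if role == "star" else "C"
            if role == "stooge":
                return "D"  # against outsiders, hold their score down
            # A Star plays outsiders normally (Tit-for-Tat here).
            return "C" if not opp_moves else opp_moves[-1]
        return strategy

    star, stooge = southampton("star"), southampton("stooge")
    print(play(star, stooge))  # (984, 9): the Star feasts on its teammate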

If we look more closely, the Southampton strategy doesn’t look so good. Apparently, Tit-for-Tat still scores higher than the average Southampton player – the sacrifice (in points) made by the Stooges is not fully recouped by the Stars. So Tit-for-Tat will still be the best strategy, both for a lone player, and for a team of players, assuming the goal is to maximize the sum of the team members’ scores. (Note that a team of Tit-for-Tat players doesn’t need to use the Southampton trick for recognizing fellow team members, since Tit-for-Tat players who play each other will always cooperate, which is the team-optimal thing to do.)

So it seems that all the Southampton folks discovered is a clever way to exploit the rules of this particular tournament, with its winner-take-all structure. That’s clever, but I don’t think it has much theoretical significance.

UPDATE (Friday 22 October): The comments on this post are particularly good.


Preemptive Blame-Shifting by the E-Voting Industry

The November 2nd election hasn’t even happened yet, and already the e-voting industry is making excuses for the election-day failures of their technology. That’s right – they’re rebutting future reports of future failures. Here’s a sample:

Problem

Voting machines will not turn on or operate.

Explanation

Voting machines are not connected to an active power source. Machines may have been connected to a power strip that has been turned off or plugged into an outlet controlled by a wall switch. Power surges or outages caused by electrical storms or other natural occurrences are not unheard of. If the power source to the machine has been lost, voting machines will generally operate on battery power for brief periods. Once battery power is lost, however, the machines will cease to function (although votes cast on such machines will not be lost). Electronic voting machines may require the election official or precinct worker to enter a password in order to operate. Lost or forgotten passwords may produce lengthy delays as this information is retrieved from other sources.

In the past, of course, voting machines have failed to operate for other reasons, as in the 2003 California gubernatorial recall election, when Diebold machines, which turned out to be uncertified, failed to boot properly at many polling places in San Diego and Alameda counties. (Verified-voting.org offers a litany of these and other observed e-voting failures.)

The quote above comes from a document released by the Election Technology Council, a trade group of e-voting vendors. (The original, tellingly released only in the not-entirely-secure Word format, is here.)

The tone of the ETC document is clear – our technology is great, but voters and poll workers aren’t smart enough to use it correctly. Never mind that the technology is deeply flawed (see, e.g., my discussion of Diebold’s insecure protocols, not to mention all of the independent studies of the technology). Never mind that the vendors are the ones who design the training regimes whose inadequacy they blame. Never mind that it is their responsibility to make their products usable.

[Link credit: Slashdot]


Privacy, Recording, and Deliberately Bad Crypto

One reason for the growing concern about privacy these days is the ever-decreasing cost of storing information. The cost of storing a fixed amount of data seems to be dropping at the Moore’s Law rate, that is, by a factor of two every 18 months, or equivalently a factor of about 100 every decade. When storage costs less, people will store more information. Indeed, if storage gets cheap enough, people will store even information that has no evident use, as long as there is even a tiny probability that it will turn out to be valuable later. In other words, they’ll store everything they can get their hands on. The result is that more information about our lives will be accessible to strangers.

(Some people argue that the growth in available information is on balance a good thing. I want to put that argument aside here, and ask you to accept only that technology is making more information about us available to strangers, and that an erosion of our legitimate privacy interests is among the consequences of that trend.)

By default, information that is stored can be accessed cheaply. But it turns out that there are technologies we can use to make stored information (artificially) expensive to access. For example, we can encrypt the information using a weak encryption method that can be broken by expending some predetermined amount of computation. To access the information, one would then have to buy or rent sufficient computer time to break the encryption method. The cost of access could be set to whatever value we like.

(For techies, here’s how it works; there are fancier methods, but this one is the simplest to explain. You encrypt the data, using a strong cipher, under a randomly chosen key K. You provide a hint about the value of K (e.g., upper and lower bounds on its value), and then you discard K. Reconstructing the data now requires an exhaustive search to find K. The size of the search depends on how precise the hint is.)
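
Here is a stdlib-only Python sketch of that simple method. The SHA-256-based XOR keystream stands in for a real cipher, and the parameters are purely illustrative:

    import hashlib
    import secrets

    def keystream(key, n):
        # Deterministic stream derived from the integer key K; a
        # stand-in for a real cipher keyed by K.
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(
                key.to_bytes(16, "big") + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def seal(data, work_bits):
        # Encrypt under a random key K, publish only a range hint for K,
        # and discard K itself.
        key = secrets.randbits(64)
        lo = max(0, key - secrets.randbits(work_bits))  # lower-bound hint
        hi = lo + 2 ** work_bits                        # upper-bound hint
        check = hashlib.sha256(data).digest()[:8]  # to recognize success
        ct = bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))
        return ct, lo, hi, check                   # note: K is not kept

    def unseal(ct, lo, hi, check):
        # Recovery requires exhaustive search over the hinted range, so
        # the access cost (in CPU cycles) is set by work_bits.
        for k in range(lo, hi):
            pt = bytes(a ^ b for a, b in zip(ct, keystream(k, len(ct))))
            if hashlib.sha256(pt).digest()[:8] == check:
                return pt
        raise ValueError("key not found in hinted range")

    ct, lo, hi, check = seal(b"meet me at dawn", work_bits=20)  # ~1M trials
    print(unseal(ct, lo, hi, check))               # b'meet me at dawn'

Widening or narrowing the hint (work_bits) dials the access price up or down; the dollar value of that price erodes over time, as discussed below.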

This method has many applications. For example, suppose the police want to take snapshots of public places at fixed intervals, and we want them to be able to see any serious crimes that happen in front of their cameras, but we don’t want them to be able to browse the pictures arbitrarily. (Again, I’m putting aside the question of whether it’s wise for us to impose this requirement.) We could require them to store the pictures in such a way that retrieving any one picture carried some moderate cost. Then they would be able to access photos of a few crimes being committed, but they couldn’t afford to look at everything.

One drawback of this approach is that it is subject to Moore’s Law. The price of accessing a data item is paid not in dollars but in computing cycles, a resource whose dollar cost is cut in half every 18 months. So what is expensive to access now will be relatively cheap in, say, ten years. For some applications, that’s just fine, but for others it may be a problem.

Sometimes this drop in access cost may be just what you want. If you want to make a digital time capsule that cannot be opened now but will be easy to open 100 years from now, this method is perfect.


DoJ To Divert Resources to P2P Enforcement

Last week the Department of Justice issued a report on intellectual property enforcement. Public discussion has been slow to develop, since the report seems to be encoded in some variant of the PDF format that stops many people from reading it. (I could read it fine on one of my computers, but ran into an error message saying the file was encrypted on the rest of my machines. Does anybody have a non-crippled version?)

The report makes a strong case for the harmfulness of intellectual property crimes, and then proceeds to suggest some steps to strengthen enforcement. I couldn’t help noticing, though, that the enforcement effort is not aimed at the most harmful crimes cited in the report.

The report leads with the story of a criminal who sold counterfeit medicines, which caused a patient to die because he was not taking the medicines he (and his doctors) thought he was. This is a serious crime. But what makes it serious is the criminal’s lying about the chemical composition of the medicines, not his lying about their brand name. This kind of counterfeiting is best treated as an attack on public safety rather than a violation of trademark law.

(This is not to say that counterfeiting of non-safety-critical products should be ignored, only that counterfeiting of safety-critical products can be much more serious.)

Similarly, the report argues that for-profit piracy, mostly of physical media, should be treated seriously. It claims that such piracy funds organized crime, and it hints (without citing evidence) that physical piracy might fund terrorism too. All of which argues for a crackdown on for-profit distribution of copied media.

But when it comes to action items, the report’s target seems to shift away from counterfeiting and for-profit piracy, and toward P2P file sharing. Why else, for example, would the report bother to endorse the Induce Act, which does not apply to counterfeiters or for-profit infringers but only to the makers of products, such as P2P software, that merely allow not-for-profit infringement?

It’s hard to believe, in today’s world, that putting P2P users in jail is the best use of our scarce national law-enforcement resources. Copyright owners can already bring down terrifying monetary judgments on P2P infringers. If we’re going to spend DoJ resources on attacking IP crime, let’s go after counterfeiters (especially of safety-critical products) and large-scale for-profit infringers. As Adam Shostack notes, to shift resources to enforcing less critical IP crimes, at a time when possible-terrorist wiretaps go unheard and violent fugitive cases go uninvestigated, is to lose track of our priorities.