
Sequoia's Explanation, and Why It's Not the Whole Story

I wrote yesterday about discrepancies in the results reported by Sequoia AVC Advantage voting machines in New Jersey.

Sequoia issued a memo giving their explanation for what might have happened. Here’s the relevant part:

During a primary election, the “option switches” on the operator panel must be used to activate the voting machine. The operator panel has a total of 12 buttons numbered 1 through 12. Each party participating in the primary election is assigned one of the option switch buttons. The poll worker presses a party option switch button based on the voter authorization slip given to the voter after signing the poll book, and then the poll worker presses the green “Activate” button. This action causes that party’s contests to be activated on the ballot face inside the voting booth.

Let’s assume the Democrat party is assigned option switch 6 while the Republican Party is assigned option switch 12. If a Democrat voter arrives, the poll worker presses the “6” button followed by the green “Activate” button. The Democrat contests are activated and the voter votes the ballot. For a Republican voter, the poll worker presses the “12” button followed by the green “Activate” button, which then activates the Republican contests and the voter votes the ballot. This is the correct and proper method of machine activation when using option switches.

However, we have found that when a poll worker selects the lower of the two assigned selection codes, followed by pressing an unused selection code and then pressing the green “Activate” button, the higher numbered party on the operator panel has its contests activated instead while the selection code button for the original party stays active on the operator panel.

Using the above example with the Democrat Party as option switch 6 and the Republican Party as option switch 12, the poll worker presses button 6 for Democrat. The red light next to button number 6 lights up and the operator panel display will show DEM. The poll worker then presses any unused option switch. The red light stays lit next to option switch 6 and the display still says DEM. Now the poll worker presses the green “Activate” button. The red light stays lit next to button number 6, but the operator panel display now says REP and the ballot in the voting booth will activate the Republican party contests.

In each and every case where a machine displays the party turnout issue at the close of the polls, this is the situation that would have caused it, and it can be duplicated on any machine. In addition, for this situation to have occurred, the voter that was in the voting booth at the time of the poll worker’s action would have voted the opposite party ballot instead of telling the poll worker that the incorrect ballot was activated and the machine would not allow them to vote the party they intended. If they had informed the poll worker, they could have made the party selection change and the voter would have then voted the correct ballot style.

Several points are in order.

First, it’s obvious from this description, and from the fact that this happened on so many machines across the state, that even if Sequoia’s explanation is entirely correct, there was some kind of engineering error on Sequoia’s part that caused the machines to misbehave. Sequoia has tried to paint the anomalies as poll worker error, but that’s not plausible in light of Sequoia’s own explanation.

Consider the scenario described above: there is a moment when the red light next to the DEM button is lit, the operator panel displays DEM, then the poll worker presses the Activate button – and the Republican ballot is activated. No competent engineer would design a system to work that way.

No competent engineer would design this system to ever display REP in the operator panel while simultaneously lighting only the DEM light.

No competent engineer would design this system to ever activate the Republican ballot when the poll worker had pressed the DEM button but had not pressed the REP button.

Sequoia’s own explanation makes clear that they made an engineering error that caused the voting machine to behave incorrectly.
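To make the failure mode concrete, here is a minimal sketch, in Python and entirely hypothetical – Sequoia’s firmware is not public, and this is not their code – of how a panel that updates its indicator light, its display, and its armed ballot as three separate pieces of state can drift into exactly the inconsistency Sequoia describes:

    # Purely hypothetical sketch -- not Sequoia's code -- of how three
    # separately updated pieces of panel state can disagree.
    ASSIGNED = {6: "DEM", 12: "REP"}   # example assignment from the memo

    class OperatorPanel:
        def __init__(self):
            self.lit_button = None     # which red light is on
            self.display = ""          # text shown on the operator panel
            self.pending = None        # party that Activate will actually arm

        def press_option(self, button):
            party = ASSIGNED.get(button)
            if party is not None:
                # A valid party button updates all three pieces of state together.
                self.lit_button = button
                self.display = party
                self.pending = party
            else:
                # BUG: an unused button silently re-derives the pending party
                # from the highest-numbered assigned switch, but leaves the
                # light and display untouched, so the panel still looks right.
                self.pending = ASSIGNED[max(ASSIGNED)]

        def activate(self):
            # The ballot is armed from `pending`; only now does the display
            # catch up -- matching the memo: light stays on 6, display says REP.
            self.display = self.pending
            return self.pending

    panel = OperatorPanel()
    panel.press_option(6)      # light 6 on, display DEM
    panel.press_option(3)      # unused switch: light still 6, display still DEM
    armed = panel.activate()   # arms REP; display flips to REP, light still on 6
    print(panel.lit_button, panel.display, armed)   # -> 6 REP REP

The point is not that Sequoia’s code looks like this, but that the memo’s symptoms – the light stuck on 6, the display flipping to REP only at activation – are the classic signature of state updated in one place and not another.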

Second, this doesn’t look like fraud, only error. A malicious attacker who had access to a machine would have had much more powerful, and much less detectable, options at his disposal.

Third, Sequoia seems to avoid saying that what they describe is the only possible cause of such errors. Note the careful wording, “In each and every case where a machine displays [an error], this is the situation that would have caused it …” (emphasis added). They don’t say this “did” cause the errors; they say it “would have”. The sentence is either clumsy or artfully worded.

Fourth, Sequoia’s explanation involves a voter seeing the wrong party’s ballot being activated, and not complaining about it. Assuming (as press accounts say) that the problem happened about sixty times in New Jersey, one would expect that many voters noticed and complained. And one would expect that in at least one of those cases, a poll worker would have noticed that the operator panel was displaying REP and DEM at the same time. Yet there don’t seem to be reports of such behavior.

Fifth, Sequoia doesn’t characterize fully the cases where this problem might occur, so election officials don’t know, for example, which past elections might have been affected.

The bottom line is clear. An investigation is needed – an independent investigation, done by someone not chosen by Sequoia, not paid by Sequoia, and not reporting to Sequoia.

Evidence of New Jersey Election Discrepancies

Press reports on the recent New Jersey voting discrepancies have been a bit vague about the exact nature of the evidence that showed up on election day. What has the county clerks, and many citizens, so concerned? Today I want to show you some of the evidence.

The evidence is a “summary tape” printed by a Sequoia AVC Advantage voting machine in Hillside, New Jersey when the polls closed at the end of the presidential primary election. The tape is timestamped 8:02 PM, February 5, 2008.

The summary tape is printed by poll workers as part of the ordinary procedure for closing the polls. It is signed by several poll workers and sent to the county clerk along with other records of the election.

Let me show you closeups of two sections of the tape. (Here’s the full tape, in TIF format.)

The first closeup shows the vote totals on this machine for each candidate. On the Democratic side, the tally is Obama 182, Clinton 179. On the Republican side it’s Giuliani 1, Romney 13, McCain 40, Paul 3, Huckabee 4.

The second closeup shows the “Option Switch Totals” section, which records the number of times each party’s ballot was activated: 362 Democratic and 60 Republican.

This doesn’t add up. The machine says the Republican ballot was activated 60 times; but it shows a total of 61 votes cast for Republican candidates. It says the Democratic ballot was activated 362 times; but it shows a total of 361 votes for Democratic candidates. (New Jersey has a closed primary, so voters can cast ballots only in their own registered party.)
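The check that exposes the problem is simple enough to automate, and county clerks could run it over every tape. A minimal sketch in Python, using the totals from this tape (the variable names are mine, not anything printed by the machine):

    # Consistency check: in a closed primary, a party's candidate totals
    # should match that party's option-switch (ballot activation) count.
    candidate_votes = {
        "DEM": {"Obama": 182, "Clinton": 179},
        "REP": {"Giuliani": 1, "Romney": 13, "McCain": 40, "Paul": 3, "Huckabee": 4},
    }
    option_switch_totals = {"DEM": 362, "REP": 60}

    for party, votes in candidate_votes.items():
        cast = sum(votes.values())
        activated = option_switch_totals[party]
        status = "OK" if cast == activated else "MISMATCH"
        print(f"{party}: {cast} votes cast vs. {activated} activations -- {status}")

    # Output:
    # DEM: 361 votes cast vs. 362 activations -- MISMATCH
    # REP: 61 votes cast vs. 60 activations -- MISMATCH

The Republican side is the smoking gun: a machine cannot legitimately record more votes for a party’s candidates than activations of that party’s ballot.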

What’s alarming here is not the size of the discrepancy but its nature. This is a single voting machine, disagreeing with itself about how many Republicans voted on it. Imagine your pocket calculator couldn’t make up its mind whether 1+13+40+3+4 was 60 or 61. You’d be pretty alarmed, and you wouldn’t trust your calculator until you were very sure it was fixed. Or you’d get a new calculator.

This wasn’t an isolated instance, either. In Union County alone, at least eight other AVC Advantage machines exhibited similar problems, as did dozens more machines in other counties.

Sequoia, the vendor, is trying to prevent any independent investigation of what happened.

Tomorrow: Sequoia’s story about how this happened, and why it’s inadequate.

UPDATE (March 20): We now have copies of nine anomalous tapes, including the one shown above. They’re on our New Jersey voting documents page.

Interesting Email from Sequoia

A copy of an email I received has been passed around on various mailing lists. Several people, including reporters, have asked me to confirm its authenticity. Since everyone seems to have read it already, I might as well publish it here. Yes, it is genuine.

====

Sender: Smith, Ed [address redacted]@sequoiavote.com
To: [addresses redacted]
Subject: Sequoia Advantage voting machines from New Jersey
Date: Fri, Mar 14, 2008 at 6:16 PM

Dear Professors Felten and Appel:

As you have likely read in the news media, certain New Jersey election officials have stated that they plan to send to you one or more Sequoia Advantage voting machines for analysis. I want to make you aware that if the County does so, it violates their established Sequoia licensing Agreement for use of the voting system. Sequoia has also retained counsel to stop any infringement of our intellectual properties, including any non-compliant analysis. We will also take appropriate steps to protect against any publication of Sequoia software, its behavior, reports regarding same or any other infringement of our intellectual property.

Very truly yours,
Edwin Smith
VP, Compliance/Quality/Certification
Sequoia Voting Systems

[contact information and boilerplate redacted]

Privacy: Beating the Commitment Problem

I wrote yesterday about a market failure relating to privacy, in which a startup company can’t convincingly commit to honoring its customers’ privacy later, after the company is successful. If companies can’t commit to honoring privacy, then customers won’t be willing to pay for privacy promises – and the market will undersupply privacy.

Today I want to consider how to attack this problem. What can be done to enable stronger privacy commitments?

I was skeptical of legal commitments because, even though a company might make a contractual promise to honor some privacy rules, customers won’t have the time or training to verify that the promise is enforceable and free of loopholes.

One way to attack this problem is to use standardized contracts. A trusted public organization might design a privacy contract that companies could sign. Then if a customer knew that a company had signed the standard contract, and if the customer trusted the organization that wrote the contract, the customer could be confident that the contract was strong.

But even if the contract is legally bulletproof, the company might still violate it. This risk is especially acute with a cash-strapped startup, and even more so if the startup might be located offshore. Many startups will have shallow pockets and little presence in the user’s locality, so they won’t be deterred much by potential breach-of-contract lawsuits. If the startup succeeds, it will eventually have enough at stake that it will have to keep the promises that its early self made. But if it fails or is on the ropes, it will be strongly tempted to try cheating.

How can we keep a startup from cheating? One approach is to raise the stakes by asking the startup to escrow money against the possibility of a violation – this requirement could be built into the contract.

Another approach is to have the actual data held by a third party with deeper pockets – the startup would provide the code that implements its service, but the code would run on equipment managed by the third party. Outsourcing of technical infrastructure is increasingly common already, so the only difference from existing practice would be to build a stronger wall between the data stored on the server and the company providing the code that implements the service.
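One way to picture that wall: the third party holds the raw records, and the startup’s code reaches them only through a narrow interface that returns derived results, never the records themselves. A toy sketch in Python, with every name invented for illustration:

    # Toy illustration of the "wall": the custodian (third party) stores the
    # raw customer records; the startup supplies a per-record function and
    # gets back only the combined result, never the rows themselves.
    from typing import Any, Callable, Iterable

    class Custodian:
        def __init__(self, records: list[dict]):
            self._records = records    # raw data stays inside this class

        def run(self, per_record: Callable[[dict], Any],
                combine: Callable[[Iterable[Any]], Any]) -> Any:
            # Run the startup's code over each record; return only the aggregate.
            return combine(per_record(r) for r in self._records)

    # --- the startup's side of the wall ---
    custodian = Custodian([
        {"user": "alice", "spend": 120.0},
        {"user": "bob",   "spend": 80.0},
    ])

    total_spend = custodian.run(lambda r: r["spend"], sum)
    print(total_spend)   # 200.0 -- an aggregate, not anyone's record

Note that even this toy leaks: the function the startup supplies could simply return the whole record. Closing that hole means vetting or sandboxing the startup’s code, not just narrowing the interface.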

From a technical standpoint, this wall might be very difficult to build, depending on what exactly the service is supposed to do. For some services the wall might turn out to be impossible to build – there are some gnarly technical issues here.

There’s no easy way out of the privacy commitment problem. But we can probably do more to attack it than we do today. Many people seem to have given up on privacy online, which is a real shame.

Privacy and the Commitment Problem

One of the challenges in understanding privacy is how to square what people say about privacy with what they actually do. People say they care deeply about privacy and resent unexpected commercial use of information about them; but they happily give that same information to companies likely to use and sell it. If people value their privacy so highly, why do they sell it for next to nothing?

To put it another way, people say they want more privacy than the market is producing. Why is this? One explanation is that actions speak louder than words, people don’t really want privacy very much (despite what they say), and the market is producing an efficient level of privacy. But there’s another possibility: perhaps a market failure is causing underproduction of privacy.

Why might this be? A recent Slate essay by Reihan Salam gives a clue. Salam talks about the quandary faced by companies like the financial-management site Wesabe. A new company building up its business wants to reassure customers that their information will be treated with the utmost care. But later, when the company is big, it will want to monetize the same customer information. Salam argues that these forces are in tension and few if any companies will be able to stick with their early promises to not be evil.

What customers want, of course, is not good intentions but a solid commitment from a company that it will stay privacy-friendly as it grows. The problem is that there’s no good way for a company to make such a commitment. In principle, a company could make an ironclad legal commitment, written into a contract with customers. But in practice customers will have a hard time deciphering such a contract and figuring out how much it actually protects them. Is the contract enforceable? Are there loopholes? The average customer won’t have a clue. He’ll do what he usually does with a long website contract: glance briefly at it, then shrug and click “Accept”.

An alternative to contracts is signaling. A company will say, repeatedly, that its intentions are pure. It will appoint the right people to its advisory board and send its executives to say the right things at the right conferences. It will take conspicuous, almost extravagant steps to be privacy-friendly. This is all fine as far as it goes, but these signals are a poor substitute for a real commitment. They aren’t too difficult to fake. And even if the signals are backed by the best of intentions, everything could change in an instant if the company is acquired – a new management team might not share the original team’s commitment to privacy. Indeed, if management’s passion for privacy is holding down revenue, such an acquisition will be especially likely.

There’s an obvious market failure here. If we postulate that at least some customers want to use web services that come with strong privacy commitments (and are willing to pay the appropriate premium for them), it’s hard to see how the market can provide what they want. Companies can signal a commitment to privacy, but those signals will be unreliable so customers won’t be willing to pay much for them – which will leave the companies with little incentive to actually protect privacy. The market will underproduce privacy.

How big a problem is this? It depends on how many customers would be willing to pay a premium for privacy – a premium big enough to replace the revenue from monetizing customer information. How many customers would be willing to pay this much? I don’t know. But I do know that people might care a lot about privacy, even if they’re not paying for privacy today.