April 24, 2014


Holiday Stories

It’s time for our holiday hiatus. See you back here in the new year.

As a small holiday gift, we’re pleased to offer updated versions of some classic Christmas stories.

How the Grinch Pwned Christmas: The Grinch, determined to stop Christmas, hacks into Amazon’s servers and cancels all deliveries to Who-ville. The Whos celebrate anyway, gathering in a virtual circle and exchanging user-generated content. When the Grinch sees this, his heart grows two sizes and he priority-ships replacement gifts to Who-ville.

Rudolph the Net-Nosed Reindeer: Rudolph is shunned by his reindeer peers for having a goofy WiFi-enabled nose. But he becomes a hero one foggy Christmas Eve by using the nose to access Google Maps, helping Santa navigate to the homes of good children.

Gift of the eMagi: Poor husband and wife find perfect gifts for each other and bid aggressively for them on eBay. Unbeknownst to them, they’re bidding against each other for the same gift. Determined to express their love by paying whatever it takes to get the gift, they bid themselves into bankruptcy.

NSA Claus is Coming to Town: He sees you when you’re sleeping. He knows when you’re awake. He knows if you’ve been bad or good, so be good or go to Gitmo.

The Little DRM-er Boy: A boy wants to share his recorded drum solo with Baby Jesus, but the file is tethered to a faraway computer. With the aid of three downloads from the East, he rips an MP3 and emails it to Mary and Joseph just in time for Christmas Night.

It’s a Wonderful Second Life: George Bailey believes that Second Life would have been better if he had never signed on at all. He jumps off a bridge … and floats slowly to the ground. Clarence Linden, George’s guardian avatar, restores the server backup from before George signed on, and watches with George while griefers run wild. George sees the error of his ways, and Clarence restores his account.

A Vista Carol: Ebenezer “Steve” Ballmer runs a coding shop in Merry Old Redmond. He forces programmer Bob Cratchit to work overtime on Christmas to meet the Vista ship date. At night, Ballmer is visited by three Ghost images: Windows Past, Windows Present, and Windows Future. [Fill in your own jokes here.] The next morning, Ballmer sends Bob home for Christmas, in exchange for a promise to keep his Blackberry on during dinner.

[Thanks to Alex Halderman and my family for help writing the stories.]


Sharecropping 2.0? Not Likely

Nick Carr has an interesting post arguing that sites like MySpace and Facebook are essentially high-tech sharecropping, exploiting the labor of the many to enrich the few. He’s wrong, I think, but in an instructive way.

Here’s the core of his argument:

What’s being concentrated, in other words, is not content but the economic value of content. MySpace, Facebook, and many other businesses have realized that they can give away the tools of production but maintain ownership over the resulting products. One of the fundamental economic characteristics of Web 2.0 is the distribution of production into the hands of the many and the concentration of the economic rewards into the hands of the few. It’s a sharecropping system, but the sharecroppers are generally happy because their interest lies in self-expression or socializing, not in making money, and, besides, the economic value of each of their individual contributions is trivial. It’s only by aggregating those contributions on a massive scale – on a web scale – that the business becomes lucrative. To put it a different way, the sharecroppers operate happily in an attention economy while their overseers operate happily in a cash economy. In this view, the attention economy does not operate separately from the cash economy; it’s simply a means of creating cheap inputs for the cash economy.

As Mike at Techdirt observes, it’s a mistake to think of the attention economy and the cash economy as separate. Attention can be converted into cash – that’s what advertising does – and vice versa. Often it’s hard to distinguish attention-seekers from cash-seekers: is that guy eating bugs on Survivor doing it for attention or money?

It’s a mistake, too, to think that MySpace provides nothing of real value to its users. I think of MySpace as a low-end Web hosting service. Most sites, including this blog, pay a hosting company to manage servers, store content, serve out pages, and so on. If all you want is to put up a few pages, full-on hosting service is overkill. What you want instead is a simple system optimized for ease of use, and that’s basically what MySpace provides. Because it provides less than a real hosting service, MySpace can offer a more attractive price point – zero – which has the additional advantage of lowering transaction costs.

The most interesting assumption Carr makes is that MySpace is capturing most of the value created by its users’ contributions. Isn’t it possible that MySpace’s profit is small, compared to the value that its users get from using the site?

Underlying all of this, perhaps, is a common but irrational discomfort with transactions where no cash changes hands. It’s the same discomfort we see in some weak critiques of open-source, which look at a free-market transaction involving copyright licenses and somehow see a telltale tinge of socialism, just because no cash changes hands in the transaction. MySpace makes a deal with its users. Based on the users’ behavior, they seem to like the deal.


Soft Coercion and the Secret Ballot

Today I want to continue our discussion of the secret ballot. (Previous posts: 1, 2.) One purpose of the secret ballot is to prevent coercion: if ballots are strongly secret, then the voter cannot produce evidence of how he voted, allowing him to lie safely to the would-be coercer about how he voted.

Talk about coercion usually centers on lead-pipe scenarios, where somebody issues a direct threat to a voter. Nice kneecaps you have there … be a shame if something unfortunate should happen to them.

But coercion needn’t be so direct. Consider this scenario: Big Johnny is a powerful man in town. Disturbing rumors swirl around him, but nothing has ever been proven. Big Johnny is pals with the mayor, and it’s no secret that Big Johnny wants the mayor reelected. The word goes around town that Big Johnny can tell how you vote, though nobody is quite sure how he does it. When you get to the polling place, Big Johnny’s cousin is one of the poll workers. You’re no fan of the mayor, but you don’t know much about his opponent. How do you vote?

What’s interesting about this scenario is that it doesn’t require Big Johnny to do anything. No lawbreaking is necessary, and the scheme works even if Big Johnny can’t actually tell how you vote, as long as the rumor that he can is at all plausible. You’re free to vote for the other guy, but Big Johnny’s influence will tend to push your vote toward the mayor. It’s soft coercion.

This sort of scheme would work today. E-voting systems are far from transparent. Do you know what is recorded in the machine’s memory cartridge? Big Johnny’s pals can get the cartridge. Is your vote time-stamped? Big Johnny’s cousin knows when you voted. Are the votes recorded in the order they were cast? Big Johnny’s cousin knows that you were the 37th voter today.

Paper ballots aren’t immune to such problems, either. Are you sure the blank paper ballot they gave you wasn’t marked? Remember: scanners can see things you can’t. And high-res scanners might be able to recognize tiny imperfections in that sheet of paper, or distinctive ink-splatters in its printing. Sure, the ballots are counted by hand, right there in the precinct, but what happens to them afterward?

There’s no perfect defense against this problem, but a good start is to insist on transparency in the election technology, and to research useful technologies and procedures. It’s a hard problem, and we have a long way to go.


Voting, Secrecy, and Phonecams

Yesterday I wrote about the recent erosion of the secret ballot. One cause is the change in voting technology, especially voting by mail. But even if we don’t change our voting technology at all, changes in other technologies are still eroding the secret ballot.

Phonecams are a good example. You probably carry into the voting booth a silent camera, built into a mobile phone, that can transmit photos around the world within seconds. Many phones can shoot movies, making it even easier to document your vote. Here is an example shot in 2004.

Could such a video be faked? Probably. But if your employer or union boss threatens your job unless you deliver a video of yourself voting “correctly”, will you bet your job that your fake video won’t be detected? I doubt it.

This kind of video recording subverts the purpose of the voting booth. The booth is designed to ensure the secret ballot by protecting voters from being observed while voting. Now a voter can exploit the privacy of the voting booth to create evidence of his vote. It’s not an exact reversal – at least the phonecam attack requires the voter’s participation – but it’s close.

One oft-suggested approach to fighting this problem is to have a way to revise your vote later, or to vote more than once with only one of the votes being real. This approach sounds promising at first, but it seems to cause other problems.

For example, imagine that you can get as many absentee ballots as you want, but only one of them counts and the others will be ignored. Now if somebody sees you complete and mail in a ballot, they can’t tell whether they saw your real vote. But if this is going to work, there must be no way to tell, just by looking at a ballot, whether it is real. The Board of Elections can’t send you an official letter saying which ballot is the real one – if they did, you could show that letter to a third party. (They could send you multiple letters, but that wouldn’t help – how could you tell which letter was the real one?) They can notify you orally, in person, but that makes it harder to get a ballot and lets the clerk at the Board of Elections quietly disenfranchise you by lying about which ballot is real.

(I’m not saying this problem is impossible to solve, only that (a) it’s harder than you might expect, and (b) I don’t know a solution.)

Approaches where you can cancel or revise your vote later have similar problems. There can’t be a “this is my final answer” button, because you could record yourself pushing it. But if there is no way to rule out later revisions to your vote, then you have to worry about somebody else coming along later and changing your vote.

Perhaps the hardest problem in voting system design is how to reconcile the secret ballot with accuracy. Methods that protect secrecy tend to undermine accuracy, and vice versa. Clever design is needed to get enough secrecy and enough accuracy at the same time. Technology seems to be making this tradeoff even nastier.


Erosion of the Secret Ballot

Voting technology has changed greatly in recent years, leading to problems with accuracy and auditability. These are important, but another trend has gotten less attention: the gradual erosion of the secret ballot.

It’s useful to distinguish two separate conceptions of the secret ballot. Let’s define weak secrecy to mean that the voter has the option of keeping his ballot secret, and strong secrecy to mean that the voter is forced to keep his ballot secret. To put it another way, weak secrecy means the ballot is secret if the voter cooperates in maintaining its secrecy; strong secrecy means the ballot is secret even if the voter wants to reveal it.

The difference is important. No system can stop a voter from telling somebody how he voted. But strong secrecy prevents the voter from proving how he voted, whereas weak secrecy does not rule out such a proof. Strong secrecy therefore deters vote buying and coercion, by stopping a vote buyer from confirming that he is getting what he wants – a voter can take the payment, or pretend to knuckle under to the coercion, while still voting however he likes. With weak secrecy, the buyer or coercer can demand proof.

In theory, our electoral system is supposed to provide strong secrecy, as a corrective to an unfortunate history of vote buying and coercion. But in practice, our system provides only weak secrecy.

The main culprit is voting by mail. A mail-in absentee ballot is only weakly secret: the voter can mark and mail the ballot in front of a third party, or simply give the blank ballot to the third party to be filled out. Any voter who wants to reveal his vote can request an absentee ballot. (Some states allow absentee voting only for specific reasons, but in practice people who are willing to sell their votes will also be willing to lie about their justification for absentee voting.)

Strong secrecy seems to require the voter to cast his ballot in a private booth, which can only be guaranteed at an officially run polling place.

The trend toward voting by mail is just one of the forces eroding the secret ballot. Some e-voting technologies fail to provide even weak secrecy, for example by recording ballots in the order they were cast, thereby allowing officials or pollwatchers who record the order of voters’ appearance (as happens in many places) to connect each recorded vote to a voter.
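The linkage attack described above is purely mechanical once both lists exist. Here is a minimal sketch, with invented names and votes for illustration, showing how a sign-in sheet plus an order-of-casting record destroys ballot secrecy:

```python
# Hypothetical data: poll workers record voters in order of appearance,
# and the machine stores ballots in the order they were cast.
signin_order = ["Alice", "Bob", "Carol", "Dave"]
recorded_votes = ["Mayor", "Challenger", "Mayor", "Challenger"]

# Pairing the two lists position-by-position links every voter to a vote.
linked = dict(zip(signin_order, recorded_votes))
print(linked["Bob"])  # Challenger
```

No cryptanalysis or insider software access is needed; the two ordered lists alone are enough, which is why standards reviewers flag order-preserving vote storage as a secrecy failure.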

Worse yet, even if a complex voting technology does protect secrecy, this may do little good if voters aren’t confident that the system really protects them. If everybody “knows” that the party boss can tell who votes the wrong way, the value of secrecy will be lost no matter what the technology does. For this reason, the trend toward complex black-box technologies may neutralize the benefits of secrecy.

If secrecy is being eroded, we can respond by trying to restore it, or we can decide instead to give up on secrecy or fall back to weak secrecy. Merely pretending to enforce strong secrecy looks like a recipe for bad policy.

(Thanks to Alex Halderman and Harlan Yu for helpful conversations on this topic.)


Paper Trail Standard Advances

On Tuesday, the Technical Guidelines Development Committee (TGDC), the group drafting the next-generation Federal voting-machine standards, voted unanimously to have the standards require that new voting machines be software-independent, which in practice requires them to have some kind of paper trail.

(Officially, TGDC is drafting “guidelines”, but the states generally require compliance with the guidelines, so they are de facto standards. For brevity, I’ll call them standards.)

The first attempt to pass such a requirement failed on Monday, on a 6-6 vote; but a modified version passed unanimously on Tuesday. The most interesting modification was an exception for existing machines: new machines will have to be software-independent but already existing machines won’t. There’s no scientific or security rationale for treating new and old machines differently, so this is clearly a political compromise designed to lower the cost of compliance by sacrificing some security.

If you believe, as almost all computer scientists do, that paper trails are necessary today for security, you’ll be happy to see the requirement for new machines, but disappointed that existing paperless voting machines will be allowed to persist.

Whether you see the glass as half full or half empty depends on whether you see the quest for paper trails as mainly legal or mainly political, that is, whether you look to courts or legislatures for progress.

In court, the exception for existing machines will be strong, assuming it’s written clearly into the standard. It will be hard to get rid of the old machines by filing lawsuits, or at least the new standards won’t be useful in court. If anything, the new standards may be seen as ratifying the decision to stick with old, insecure machines.

In legislatures, on the other hand, the standard will be an official ratification of the fact that paper trails are preferable. The latest, greatest technology will use paper trails, and paperless designs will look old-fashioned. The exception for old machines will look like a money-saving compromise, and few legislators will want to be seen as risking democracy to save money.

As for me, I see legislatures more than courts, and politics more than lawyering, as driving the trend toward paper trails. Thirty-five states either have a paper trail statewide or require one to be adopted by 2008. The glass is already 70% full, and the new standards will help fill it the rest of the way.


Spam is Back

A quiet trend broke into the open today, when the New York Times ran a story by Brad Stone on the recent increase in email spam. The story claims that the volume of spam has doubled in recent months, which seems about right. Many spam filters have been overloaded, sending system administrators scrambling to buy more filtering capacity.

Six months ago, the conventional wisdom was that we had gotten the upper hand on spammers by using more advanced filters that relied on textual analysis, and by identifying and blocking the sources of spam. One smart venture capitalist I know declared spam to be a solved problem.

But now the spammers have adopted new tactics: sending spam from botnets (armies of compromised desktop computers), sending images rather than text, adding randomly varying noise to the messages to make them harder to analyze, and providing fewer URLs in messages. The effect of these changes is to neutralize the latest, greatest antispam tools; and so the spammers are pulling back ahead, for now.

In the long view, not much has changed. The arms race will continue, with each side deploying new tricks in response to the other side’s moves, unless one side is forced out by economics, which looks unlikely.

To win, the good guys must make the cost of sending a spam message exceed the expected payoff from that message. A spammer’s per-message cost and payoff are both very small, and probably getting smaller. The per-message payoff is probably decreasing as spammers are forced to new payoff strategies (e.g., switching from selling bogus “medical” products to penny-stock manipulation). But their cost to send a message is also dropping as they start to use other people’s computers (without paying) and those computers get more and more capable. Right now the cost is dropping faster, so spam is increasing.

From the good guys’ perspective, the cost of spam filtering is increasing. Organizations are buying new spam-filtering services and deploying more computers to run them. The switch to image-based spam will force filters to use image analysis, which chews up a lot more computing power than the current textual analysis. And the increased volume of spam will make things even worse. Just as the good guys are trying to raise the spammers’ costs, the spammers’ tactics are raising the good guys’ costs.

Spam is a growing problem in other communication media too. Blog comment spam is rampant – this blog gets about eight hundred spam comments a day. At the moment our technology is managing them nicely (thanks to akismet), but that could change. If the blog spammers get as clever as the email spammers, we’ll be in big trouble.


For Once, BCS Controversy Not the Computers' Fault

It’s that time of year again. You know, the time when sports pundits bad-mouth the Bowl Championship Series (BCS) for picking the wrong teams to play in college football’s championship game. The system is supposed to pick the two best teams. This year it picked Ohio State, clearly the best team, and Florida, a controversial choice given that Michigan arguably had better results.

Something like this happens every year. What makes this year different is that for once it’s not being blamed on computers.

BCS uses a numerical formula combining rankings from various sources, including human polls and computerized rankings. In past years, the polls and computers differed slightly. The problem generally was that the computers missed the important nuances that human voters see. Computers didn’t know that games at the beginning of the year count much less, or that last year’s ranking is supposed to influence this year’s, or that games count more if they’re nationally televised, or that there’s a special bonus for Notre Dame or a retiring coach. And so the computers and humans sometimes disagreed.

Human pundits sided unsurprisingly with the humans. The computer pundits all sided with the computers, but without an effective talk radio presence they were shouted down.

This year the computers cleverly ducked responsibility by rating Florida and Michigan exactly even, thereby forcing humans to take the heat for picking one or the other. The humans picked Florida. Problem was, the humans had previously rated Michigan above Florida but somehow flipped the two at the end, on the basis of not much new evidence (Florida performing as expected against a good opponent). The bottom line was simple: an Ohio State-Florida game would be cooler than an Ohio State-Michigan one – yet another factor the computers didn’t know about.

Since this year’s controversy is the humans’ fault, will the computers be given more weight next year? Don’t count on it.


NIST Recommends Not Certifying Paperless Voting Machines

In an important development in e-voting policy, NIST has issued a report recommending that the next-generation federal voting-machine standards be written to prevent (re-)certification of today’s paperless e-voting systems. (NIST is the National Institute of Standards and Technology, a government agency, previously called the National Bureau of Standards, that is a leading source of independent technology expertise in the U.S. government.) The report is a recommendation to another government body, the Technical Guidelines Development Committee (TGDC), which is drafting the 2007 federal voting-machine standards. The new report is notable for its direct tone and unequivocal recommendation against unverifiable paperless voting systems, and for being a recommendation of NIST itself and not just of the report’s individual authors.

[UPDATE (Dec. 2): NIST has now modified the document's text, for example by removing the "NIST recommends..." language in some places and adding a preface saying it is only a discussion draft.]

The key concept in the report is software independence.

A voting system is software-independent if a previously undetected change or error in its software cannot cause an undetectable change or error in an election outcome. In other words, it can be positively determined whether the voting system’s (typically, electronic) CVRs [cast-vote records] are accurate as cast by the voter or in error.

This gets to the heart of the problem with paperless voting: we can’t be sure the software in the machines on election day will work as expected. It’s difficult to tell for sure which software is present, and even if we do know which software is there we cannot be sure it will behave correctly. Today’s paperless e-voting systems (known as DREs) are not software-independent.

NIST does not know how to write testable requirements to make DREs secure, and NIST’s recommendation to the STS [a subcommittee of the TGDC] is that the DRE in practical terms cannot be made secure. Consequently, NIST and the STS recommend that [the 2007 federal voting standard] should require voting systems to be [software independent].

In other words, NIST recommends that the 2007 standard should be written to exclude DREs.

Though the software-independence requirement and condemnation of DREs as unsecureable will rightly get most of the attention, the report makes three other good recommendations. First, attention should be paid to improving the usability and accessibility of voting systems that use paper. Second, the 2007 standard should include high-level discussion of new approaches to software independence, such as fancy cryptographic methods. Third, more research is needed to develop new kinds of voting technologies, with special attention paid to improving usability.

Years from now, when we look back on the recent DRE fad with what-were-we-thinking hindsight, we’ll see this NIST report as a turning point.