November 24, 2008

Three Flavors of Net Neutrality

When the Wall Street Journal claimed on Monday that Google was secretly backtracking on its net neutrality position, commentators were properly skeptical. Tim Lee (among others) argued that the Journal misunderstood what net neutrality means, and others pointed out gaps in the Journal’s reasoning — not to mention that the underlying claim about Google’s actions was based on nonpublic documents.

Part of the difficulty in this debate is that “net neutrality” can mean different things to different people. At least three flavors of “net neutrality” are identifiable among the Journal’s critics.

Net Neutrality as End-to-End Design: The first perspective sees neutrality as an engineering principle, akin to the end-to-end principle, saying that the network’s job is to carry the traffic it is paid to carry, and that decisions about protocols and priorities should be made by endpoint systems. As Tim Lee puts it, “Network neutrality is a technical principle about the configuration of Internet routers.”

Net Neutrality as Nonexclusionary Business Practices: The second perspective sees neutrality as an economic principle, saying that network providers should not offer deals to one content provider unless they offer the same deal to all providers. Larry Lessig takes this position in his initial response to the Journal: “The zero discriminatory surcharge rules [which Lessig supports] are just that — rules against discriminatory surcharges — charging Google something different from what a network charges iFilm. The regulation I call for is a ‘MFN’ requirement — that everyone has the right to the rates of the most favored nation.”

Net Neutrality as Content Nondiscrimination: The third perspective sees neutrality as a free speech principle, saying that network providers should not discriminate among messages based on their content. We see less of this in the response to the Journal piece, though there are whiffs of it.

There are surely more perspectives, but these are the three I see most often. Feel free to offer alternatives in the comments.

To be clear, none of this is meant to suggest that critics of the Journal piece are wrong. If Tim says that Google’s plans don’t violate Definition A of net neutrality, and Larry says that those same plans don’t violate Definition B of net neutrality, Tim and Larry may both be right. Indeed, based on what little is known about Google’s plans, they may well be net-neutral under any reasonable definition. Or not, depending on how we fill in the details missing from the public reporting.

Which brings me to my biggest disappointment with the Journal story. The Journal said it had documents describing Google’s plans. Instead of writing an actually informative story, saying “Google is planning to do X”, the Journal instead wrote a gotcha story, saying “Google is planning to do some unspecified but embarrassing thing”. The Journal can do first-class reporting, when it wants to. That’s what it should have done here.

Election Transparency Project Finds Ballot-Counting Bug

Yesterday, Kim Zetter at Wired News reported an amazing e-voting story about lost ballots and the public advocates who found them.

Here’s a summary: Humboldt County, California has an innovative program to put on the Internet scanned images of all the optical-scan ballots cast in the county. In the online archive, citizens found 197 ballots that were not included in the official results of the November election. Investigation revealed that the ballots disappeared from the official count due to a programming error in central tabulation software supplied by Premier (formerly known as Diebold), the county’s e-voting vendor.

The details of the programming error are jaw-dropping. Here is Zetter’s deadpan description:

Premier explained that due to a programming problem, the first “deck” or batch of ballots that are counted by the GEMS software sometimes gets randomly deleted if any subsequent deck is intentionally deleted. The GEMS system names the first deck of ballots “deck 0”, with subsequent batches called “deck 1,” “deck 2,” etc. For some reason “deck 0” is sometimes erased from the system if any other deck is erased. Since it’s common for officials to intentionally erase a deck in the normal counting process if they’ve made an error and want to rescan a deck, the chance that a GEMS system containing this flaw will delete a batch of ballots is pretty high.

The system never provides any indication to election officials when it’s deleting a batch of ballots in this manner. The problem occurs with version 1.18.19 of the GEMS software, though it’s possible that other versions have the problem as well. [County election director Carolyn] Crnich said an official in the California secretary of state’s office told her the problem was still prevalent in version 1.18.22 of Premier’s software and wasn’t fixed until version 1.18.24.

Neither Premier nor the secretary of state’s office, which certifies voting systems for use in the state, has returned calls for comment about this.

After examining Humboldt’s database, Premier determined that the “deck 0” in Humboldt was deleted at some point in between processing decks 131 and 135, but so far Crnich has been unable to determine what caused the deletion. She said she did at one point abort deck 132, instead of deleting it, when she made a mistake with it, but that occurred before election day, and the “deck 0” batch of ballots was still in the system on November 23rd, after she’d aborted deck 132. She couldn’t recall deleting any other deck after election night or after the 23rd that might have caused “deck 0” to disappear in the manner Premier described.

The deletion of “deck 0” wasn’t the only problem with the GEMS system. As I mentioned previously, the audit log not only didn’t show that “deck 0” had been deleted, it never showed that the deck existed in the first place.

The system creates a “deck 0” for each ballot type that is scanned. This means, the system should have three “deck 0” entries in the log — one for vote-by-mail ballots, one for provisional ballots, and one for regular ballots cast at the precinct. Crnich found that the log did show a “deck 0” for provisional ballots and precinct-cast ballots but none for vote-by-mail ballots, even though the machine had printed a receipt at the time that an election worker had scanned the ballots into the machine. In fact, the regular audit log provides no record of any files that were deleted, including deck 132, which she intentionally deleted. She said she had to go back to a backup of the log, created before the election, to find any indication that “deck 0” had ever been created.

I don’t know which is more alarming: that the vendor failed to treat as an emergency a programming error that silently deletes ballots, or that the tabulator’s “audit log” looks more like an after-the-fact reconstruction of what must have happened rather than a log of what actually did happen.
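To make the failure mode concrete, here is a minimal sketch (in Python) of how a bug of this general shape could arise, and why a genuine append-only audit log would have flagged it. This is illustrative guesswork under my own assumptions about the data structures involved; nobody outside Premier knows what the GEMS code actually does.

    # Hypothetical sketch only, not Premier's GEMS code: deck number 0 is falsy
    # in many languages, so a careless truthiness check can silently discard it
    # during cleanup after some other deck is deleted.

    def delete_deck(decks, audit_log, deck_id):
        """decks maps deck number -> list of scanned ballot records."""
        decks.pop(deck_id, None)
        audit_log.append(f"deleted deck {deck_id}")
        # Buggy cleanup: meant to drop only empty placeholder entries, but
        # `if k` is False when k == 0, so deck 0 vanishes as well, and the
        # log never records it.
        return {k: v for k, v in decks.items() if k and v}

    decks = {0: ["ballot A", "ballot B"], 1: ["ballot C"], 2: ["ballot D"]}
    audit_log = []
    decks = delete_deck(decks, audit_log, 2)   # official intentionally deletes deck 2
    print(sorted(decks))    # [1] -- deck 0 is gone, with no error message
    print(audit_log)        # ['deleted deck 2'] -- the "audit log" never noticed

A real audit trail would be append-only, recording every creation and every deletion at the moment it happens; the article suggests the GEMS log does neither reliably.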

The good news here is that Humboldt County’s opening of election records to the public paid off, when members of the public found important facts in the records that officials and the vendor had missed. If other jurisdictions opened their records, how many more errors would we find and fix?

Watching Google's Gatekeepers

Google’s legal team has extraordinary power to decide which videos can be seen by audiences around the world, according to Jeffrey Rosen’s piece, Google’s Gatekeepers, in yesterday’s New York Times Magazine. Google, of course, owns YouTube, which gives it the technical ability to block particular videos — though so many videos are submitted that it’s impractical to review them all in advance.

Some takedown requests are easy — content that is offensive and illegal (almost) everywhere will come down immediately once a complaint is received and processed. But Rosen focuses on more difficult cases, where a government asks YouTube to take down a video that expresses dissent or is otherwise inconvenient for that government. Sometimes these videos violate local laws, but more often their legal status is murky and in any case the laws in question may be contrary to widely accepted free speech principles.

Rosen worries that too much power to decide what can be seen is being concentrated in the hands of one company. He acknowledges that Google has behaved reasonably so far, but he worries about what might happen in the future.

I understand his point, but it’s hard to see an alternative that would be better in practice. If Google, as the owner of YouTube, is not going to have this power, then the power will have to be given to somebody else. Any nominations? I don’t have any.

What we’re left with, then, is Google making the decisions. But this doesn’t mean all of us are out in the cold, without influence. As consumers of Google’s services, we have a certain amount of leverage. And this is not just hypothetical — Google’s “don’t be evil” reputation contributes greatly to the value of its brand. The moment people think Google is misbehaving is the moment they’ll consider taking their business elsewhere.

As concerned members of the public — concerned customers, from Google’s viewpoint — there are things we can do to help keep Google honest. First, we can insist on transparency, that Google reveal what it is blocking and why. Rosen describes some transparency mechanisms that are in place, such as Google’s use of the Chilling Effects website.

Second, when we use Google’s services, we can try to minimize our switching costs, so that moving to an alternative service is a realistic possibility. The less we’re locked in to Google’s service, the less we’ll feel forced to keep using those services even if the company’s behavior changes. And of course we should think carefully about switching costs in all our technology decisions, even when larger policy issues aren’t at stake.

Finally, we can make sure that Google knows we care about free speech, and about its corporate behavior generally. This means criticizing them when they slip up, and praising them when they do well. Most of all, it means debating their decisions — which Rosen’s article helpfully invites us to do.

Discerning Voter Intent in the Minnesota Recount

Minnesota election officials are hand-counting millions of ballots, as they perform a full recount in the ultra-close Senate race between Norm Coleman and Al Franken. Minnesota Public Radio offers a fascinating gallery of ballots that generated disputes about voter intent.

A good example is this one:

A scanning machine would see the Coleman and Franken bubbles both filled, and call this ballot an overvote. But this might be a Franken vote, if the voter filled in both slots by mistake, then wrote “No” next to Coleman’s name.

Other cases are more difficult, like this one:

Do we call this an overvote, because two bubbles are filled? Or do we give the vote to Coleman, because his bubble was filled in more completely?

Then there’s this ballot, which is destined to be famous if the recount descends into litigation:

[Insert your own joke here.]

This one raises yet another issue:

Here the problem is the fingerprint on the ballot. Election laws prohibit voters from putting distinguishing marks on their ballots, and marked ballots are declared invalid, for good reason: uniquely marked ballots can be identified later, allowing a criminal to pay the voter for voting “correctly” or punish him for voting “incorrectly”. Is the fingerprint here an identifying mark? And if so, how can you reject this ballot and accept the distinctive “Lizard People” ballot?

Many e-voting experts advocate optical-scan voting. The ballots above illustrate one argument against opscan: filling in the ballot is a free-form activity that can create ambiguous or identifiable ballots. This creates a problem in super-close elections, because ambiguous ballots may make it impossible to agree on who should have won the election.
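To see why, consider how a scanner might reduce hand-made marks to a machine-readable vote. The classification rule and thresholds below are invented for illustration, not taken from any certified tabulator, but the structure is enough to show where ambiguity creeps in.

    # Hypothetical mark-classification rule; the thresholds are made up.
    def classify_bubble(fill_fraction, filled_at=0.35, empty_below=0.10):
        if fill_fraction >= filled_at:
            return "filled"
        if fill_fraction < empty_below:
            return "empty"
        return "marginal"   # the zone where machines and humans can disagree

    def read_contest(fills):
        """fills maps candidate -> measured fill fraction of that candidate's bubble."""
        marks = {c: classify_bubble(f) for c, f in fills.items()}
        if any(m == "marginal" for m in marks.values()):
            return "needs human review"
        filled = [c for c, m in marks.items() if m == "filled"]
        if len(filled) == 1:
            return "vote for " + filled[0]
        return "overvote" if filled else "undervote"

    # Both bubbles clearly darkened; the handwritten "No" is invisible to this rule:
    print(read_contest({"Coleman": 0.9, "Franken": 0.8}))   # overvote
    # One bubble full, the other partly darkened:
    print(read_contest({"Coleman": 0.9, "Franken": 0.2}))   # needs human review

No threshold choice can recover the voter’s handwritten “No” or decide whether a fingerprint is an identifying mark; some ballots end up in front of people no matter what.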

Wearing my pure-scientist hat (which I still own, though it sometimes gets dusty), this is unsurprising: an election is a measurement process, and all measurement processes have built-in errors that can make the result uncertain. This is easily dealt with, by saying something like this: Candidate A won by 73 votes, plus or minus a 95% confidence interval of 281 votes. Or perhaps this: Candidate A won with 57% probability. Problem solved!
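For concreteness, here is one way such a probability statement could be computed, under the deliberately simplistic assumption that each disputed ballot independently breaks 50/50 between the two candidates. The vote totals below are made up for the example; they are not Minnesota’s numbers.

    # Toy model: candidate A leads by `margin` votes, and `ambiguous` ballots
    # could each go either way with equal probability. A's lead survives if the
    # net swing exceeds -margin; we use a normal approximation to the binomial.
    from statistics import NormalDist

    def win_probability(margin, ambiguous):
        swing_sd = ambiguous ** 0.5          # each ballot shifts the margin by +/-1
        return NormalDist(0, swing_sd).cdf(margin)

    print(f"{win_probability(73, 300):.0%}")    # ~100%: the lead is safe
    print(f"{win_probability(10, 3000):.0%}")   # ~57%: genuinely uncertain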

In the real world, of course, we need to declare exactly one candidate to be the winner, and a lot can be at stake in the decision. If the evidence is truly ambiguous, somebody is going to end up feeling cheated, and the most we can hope for is a sense that the rules were properly followed in determining the outcome.

Still, we need to keep this in perspective. By all reports, the number of ambiguous ballots in Minnesota is minuscule compared to the total number cast. Let’s hope that, even if some individual ballots don’t speak clearly, the ballots taken collectively leave no doubt as to the winner.

Low Hit Rate Isn't the Problem with TSA Screening

The TSA, which oversees U.S. airport security, comes in for a lot of criticism — much of it deserved. But sometimes commentators let their dislike for the TSA get the better of them, and they offer critiques that don’t stand up logically.

A good example is yesterday’s USA Today article on TSA’s behavioral screening program, and the commentary that followed it. The TSA program trained screeners to look for nervous and suspicious behavior, and to subject travellers exhibiting such behavior to more stringent security measures such as pat-down searches or short interviews.

Commentators condemned the TSA program because fewer than 1% of the selected travellers were ultimately arrested. Is this a sensible objection? I think not, for reasons I’ll explain below.

Before I explain why, let’s take a minute to set aside our general opinions about the TSA. Forget the mandatory shoe removal and toiletry-container nitpicking. Forget that time the screener was rude to you. Forget the slippery answers to inconvenient Constitutional questions. Forget the hours you have spent waiting in line. Put on your blinders please, just for now. We’ll take them off later.

Now suppose that TSA head Kip Hawley came to you and asked you to submit voluntarily to a pat-down search the next time you travel. And suppose you knew, with complete certainty, that if you agreed to the search, this would magically give the TSA a 0.1% chance of stopping a deadly crime. You’d agree to the search, wouldn’t you? Any reasonable person would accept the search to save (by assumption) at least 0.001 lives. This hypothetical TSA program is reasonable, even though it only has a 0.1% arrest rate. (I’m assuming here that an attack would cost only one life. Attacks that killed more people would justify searches with an even smaller arrest rate.)
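To spell out the arithmetic (every number here is either an assumption stated above or a made-up placeholder, not a real TSA figure):

    # Expected-value sketch for the hypothetical above.
    def expected_lives_saved(p_stop, lives_at_stake):
        return p_stop * lives_at_stake

    print(expected_lives_saved(0.001, 1))    # 0.001, the assumption above
    print(expected_lives_saved(0.001, 50))   # 0.05, same hit rate, deadlier attack (made up)

A low hit rate can still be worth it if what is being stopped is bad enough, which is why the arrest rate alone settles nothing.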

So the commentators’ critique is weak — but of course this doesn’t mean the TSA program should be seen as a success. The article says that the arrests the system generates are mostly for drug charges or carrying a false ID. Should a false-ID arrest be considered a success for the system? Certainly we don’t want to condone the use of false ID, but I’d bet most of these people are just trying to save money by flying on a ticket in another person’s name — which hardly makes them Public Enemy Number One. Is it really worth doing hundreds of searches to catch one such person? Are those searches really the best use of TSA screeners’ time? Probably not.

On the whole, I’m not sure I can say whether the behavioral screening program is a good idea. It apparently hasn’t caught any big fish yet, but it might have positive effects by deterring some serious crimes. We haven’t seen the data to support it, and we’ve learned to be skeptical of TSA claims that some security measure is necessary.

Now it’s time for the professor to call on one of the diehard civil libertarians in the class, who by this point are bouncing in their seats with both hands waving in the air. They’re dying to point out that our system, for good reason, doesn’t automatically accept claims by the authorities that searches or seizures are justified, and that our institutions are properly skeptical about expanding the scope of searches. They’re unhappy that the debate about this TSA program is happening after it was in place, rather than before it started. These are all good points.

The TSA’s behavioral screening is a rich topic for debate — but not because of its arrest rate.