
Archives for 2008

Three Flavors of Net Neutrality

When the Wall Street Journal claimed on Monday that Google was secretly backtracking on its net neutrality position, commentators were properly skeptical. Tim Lee (among others) argued that the Journal misunderstood what net neutrality means, and others pointed out gaps in the Journal’s reasoning — not to mention that the underlying claim about Google’s actions was based on nonpublic documents.

Part of the difficulty in this debate is that “net neutrality” can mean different things to different people. At least three flavors of “net neutrality” are identifiable among the Journal’s critics.

Net Neutrality as End-to-End Design: The first perspective sees neutrality as an engineering principle, akin to the end-to-end principle, saying that the network’s job is to carry the traffic it is paid to carry, and decisions about protocols and priorities should be made by endpoint systems. As Tim Lee puts it, “Network neutrality is a technical principle about the configuration of Internet routers.”

Net Neutrality as Nonexclusionary Business Practices: The second perspective sees neutrality as an economic principle, saying that network providers should not offer deals to one content provider unless they offer the same deal to all providers. Larry Lessig takes this position in his initial response to the Journal: “The zero discriminatory surcharge rules [which Lessig supports] are just that — rules against discriminatory surcharges — charging Google something different from what a network charges iFilm. The regulation I call for is a ‘MFN’ requirement — that everyone has the right to the rates of the most favored nation.”

Net Neutrality as Content Nondiscrimination: The third perspective sees neutrality as a free speech principle, saying that network providers should not discriminate among messages based on their content. We see less of this in the response to the Journal piece, though there are whiffs of it.

There are surely more perspectives, but these are the three I see most often. Feel free to offer alternatives in the comments.

To be clear, none of this is meant to suggest that critics of the Journal piece are wrong. If Tim says that Google’s plans don’t violate Definition A of net neutrality, and Larry says that those same plans don’t violate Definition B of net neutrality, Tim and Larry may both be right. Indeed, based on what little is known about Google’s plans, they may well be net-neutral under any reasonable definition. Or not, depending on how we fill in the details missing from the public reporting.

Which brings me to my biggest disappointment with the Journal story. The Journal said it had documents describing Google’s plans. Instead of writing an actually informative story, saying “Google is planning to do X”, the Journal instead wrote a gotcha story, saying “Google is planning to do some unspecified but embarrassing thing”. The Journal can do first-class reporting, when it wants to. That’s what it should have done here.

The Journal Misunderstands Content-Delivery Networks

There’s been a lot of buzz today about this Wall Street Journal article that reports on the shifting positions of some of the leading figures of the network neutrality movement. Specifically, it claims that Google, Microsoft, and Yahoo! have abandoned their prior commitment to network neutrality. It also claims that Larry Lessig has “softened” his support for network neutrality, and it implies that, because Lessig is an Obama advisor, his changing stance may portend a similar shift in the president-elect’s views, which would obviously be a big deal.

Unfortunately, the Journal seems to be confused about the contours of the network neutrality debate, and in the process it has mis-described the positions of at least two of the key players in the debate, Google and Lessig. Both were quick to clarify that their views have not changed.

At the heart of the dispute is a question I addressed in my recent Cato paper on network neutrality: do content delivery networks (CDNs) violate network neutrality? A CDN is a group of servers that improve website performance by storing content closer to the end user. The most famous is Akamai, which has servers distributed around the world and which sells its capacity to a wide variety of large website providers. When a user requests content from the website of a company that uses Akamai’s service, the user’s browser may be automatically re-directed to the nearest Akamai server. The result is faster load times for the user and reduced load on the original web server. Does this violate network neutrality? If you’ll forgive me for quoting myself, here’s how I addressed the question in my paper:

To understand how Akamai manages this feat, it’s helpful to know a bit more about what happens under the hood when a user loads a document from the Web. The Web browser must first translate the domain name (e.g., “cato.org”) into a corresponding IP address (72.32.118.3). It does this by querying a special computer called a domain name system (DNS) server. Only after the DNS server replies with the right IP address can the Web browser submit a request for the document. The process for accessing content via Akamai is the same except for one small difference: Akamai has special DNS servers that return the IP addresses of different Akamai Web servers depending on the user’s location and the load on nearby servers. The “intelligence” of Akamai’s network resides in these DNS servers.

Because this is done automatically, it may seem to users like “the network” is engaging in intelligent traffic management. But from a network router’s perspective, a DNS server is just another endpoint. No special modifications are needed to the routers at the core of the Internet to get Akamai to work, and Akamai’s design is certainly consistent with the end-to-end principle.

The success of Akamai has prompted some of the Internet’s largest firms to build CDN-style networks of their own. Google, Microsoft, and Yahoo have already started building networks of large data centers around the country (and the world) to ensure there is always a server close to each end user’s location. The next step is to sign deals to place servers within the networks of individual residential ISPs. This is a win-win-win scenario: customers get even faster response times, and both Google and the residential ISP save money on bandwidth.
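To make the DNS mechanism described above a bit more concrete, here is a small illustration: an ordinary endpoint doing an ordinary DNS lookup, which is all a CDN redirection amounts to from the network's point of view. The hostname is only a placeholder; point it at any CDN-hosted site and the addresses returned will typically depend on where, and through which resolver, the query is made.

```python
# An ordinary endpoint doing an ordinary DNS lookup; no router in the middle is
# doing anything special. The hostname is only a placeholder: substitute any
# CDN-hosted site, and the addresses returned will typically depend on where
# (and through which resolver) the query is made.
import socket

hostname = "www.example.com"   # placeholder; use a CDN-hosted site you know

canonical_name, aliases, addresses = socket.gethostbyname_ex(hostname)

print("canonical name:", canonical_name)  # often a CDN hostname reached via a CNAME
print("aliases:", aliases)                # the CNAME chain returned by ordinary DNS
print("addresses:", addresses)            # servers chosen by the CDN's DNS servers
```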

The Journal apparently got wind of this server-placement arrangement and interpreted it as a violation of network neutrality. But this is a misunderstanding of what network neutrality is and how CDNs work. Network neutrality is a technical principle about the configuration of Internet routers. It’s not about the business decisions of network owners. So if Google signs an agreement with a major ISP to get its content to customers more quickly, that doesn’t necessarily mean that a network neutrality violation has occurred. Rather, we have to look at how the speed-up was accomplished. If, for example, it was accomplished by upgrading the network between the ISP and Google, network neutrality advocates would have no reason to object. In contrast, if the ISP accomplished the speed-up by re-configuring its routers to route Google’s packets in preference to those from other sources, that would be a violation of network neutrality.
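To sharpen that distinction, here is a conceptual sketch, not real router code, with a purely hypothetical "preferred source": a neutral router forwards packets in arrival order, while a non-neutral router reorders its queue to favor one provider's traffic.

```python
# A conceptual sketch, not real router code: the "preferred source" below is a
# purely hypothetical label. A neutral router forwards packets in arrival order,
# while a non-neutral router reorders its queue to favor one provider's traffic.
PREFERRED_SOURCE = "provider-A"   # hypothetical provider paying for priority

def neutral_next(queue):
    """First come, first served: the router ignores who sent each packet."""
    return queue.pop(0)

def non_neutral_next(queue):
    """Serve the preferred source's packets ahead of everyone else's."""
    for i, packet in enumerate(queue):
        if packet["source"] == PREFERRED_SOURCE:
            return queue.pop(i)
    return queue.pop(0)

arrivals = [
    {"source": "provider-B", "payload": "video chunk"},
    {"source": "provider-A", "payload": "search result"},
    {"source": "provider-C", "payload": "web page"},
]
print(neutral_next(list(arrivals))["source"])       # provider-B: arrival order
print(non_neutral_next(list(arrivals))["source"])   # provider-A jumps the queue
```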

The Journal article had relatively few details about the deal Google is supposedly negotiating with residential ISPs, so it’s hard to say for sure which category it’s in. But what little description the Journal does give us—that the agreement would “place Google servers directly within the network of the service providers”—suggests that the agreement would not violate network neutrality. And indeed, over on its public policy blog, Google denies that its “edge caching” network violates network neutrality and reiterates its support for a neutral Internet. Don’t believe everything you read in the papers.

Election Transparency Project Finds Ballot-Counting Bug

Yesterday, Kim Zetter at Wired News reported an amazing e-voting story about lost ballots and the public advocates who found them.

Here’s a summary: Humboldt County, California, has an innovative program that puts scanned images of all the optical-scan ballots cast in the county on the Internet. In the online archive, citizens found 197 ballots that were not included in the official results of the November election. Investigation revealed that the ballots disappeared from the official count due to a programming error in central tabulation software supplied by Premier (formerly known as Diebold), the county’s e-voting vendor.

The details of the programming error are jaw-dropping. Here is Zetter’s deadpan description:

Premier explained that due to a programming problem, the first “deck” or batch of ballots that are counted by the GEMS software sometimes gets randomly deleted if any subsequent deck is intentionally deleted. The GEMS system names the first deck of ballots “deck 0”, with subsequent batches called “deck 1,” “deck 2,” etc. For some reason “deck 0” is sometimes erased from the system if any other deck is erased. Since it’s common for officials to intentionally erase a deck in the normal counting process if they’ve made an error and want to rescan a deck, the chance that a GEMS system containing this flaw will delete a batch of ballots is pretty high.

The system never provides any indication to election officials when it’s deleting a batch of ballots in this manner. The problem occurs with version 1.18.19 of the GEMS software, though it’s possible that other versions have the problem as well. [County election director Carolyn] Crnich said an official in the California secretary of state’s office told her the problem was still prevalent in version 1.18.22 of Premier’s software and wasn’t fixed until version 1.18.24.

Neither Premier nor the secretary of state’s office, which certifies voting systems for use in the state, has returned calls for comment about this.

After examining Humboldt’s database, Premier determined that the “deck 0” in Humboldt was deleted at some point in between processing decks 131 and 135, but so far Crnich has been unable to determine what caused the deletion. She said she did at one point abort deck 132, instead of deleting it, when she made a mistake with it, but that occurred before election day, and the “deck 0” batch of ballots was still in the system on November 23rd, after she’d aborted deck 132. She couldn’t recall deleting any other deck after election night or after the 23rd that might have caused “deck 0” to disappear in the manner Premier described.

The deletion of “deck 0” wasn’t the only problem with the GEMS system. As I mentioned previously, the audit log not only didn’t show that “deck 0” had been deleted, it never showed that the deck existed in the first place.

The system creates a “deck 0” for each ballot type that is scanned. This means, the system should have three “deck 0” entries in the log — one for vote-by-mail ballots, one for provisional ballots, and one for regular ballots cast at the precinct. Crnich found that the log did show a “deck 0” for provisional ballots and precinct-cast ballots but none for vote-by-mail ballots, even though the machine had printed a receipt at the time that an election worker had scanned the ballots into the machine. In fact, the regular audit log provides no record of any files that were deleted, including deck 132, which she intentionally deleted. She said she had to go back to a backup of the log, created before the election, to find any indication that “deck 0” had ever been created.

I don’t know which is more alarming: that the vendor failed to treat as an emergency a programming error that silently deletes ballots, or that the tabulator’s “audit log” looks more like an after-the-fact reconstruction of what-must-have-happened rather than a log of what actually did happen.
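For readers who want to see how a flaw of this shape can hide in otherwise plausible-looking code, here is a purely hypothetical sketch. The real GEMS source is not public, so this illustrates only the class of bug described in the article, not Premier's actual code.

```python
# A purely hypothetical sketch of the *class* of bug described above (the actual
# GEMS code is not public): deleting one deck silently takes "deck 0" with it,
# and the "audit log" records only what the software chooses to remember rather
# than every event as it happens.
class BuggyTabulator:
    def __init__(self):
        self.decks = {}          # deck number -> list of ballot records
        self.audit_log = []      # not an append-only record of every event

    def add_deck(self, number, ballots):
        self.decks[number] = ballots
        if number != 0:                      # hypothetical flaw: deck 0 never logged
            self.audit_log.append(f"scanned deck {number}")

    def delete_deck(self, number):
        self.decks.pop(number, None)
        self.decks.pop(0, None)              # hypothetical flaw: deck 0 vanishes too
        # ...and no deletion is ever written to the audit log

    def tally(self):
        return sum(len(ballots) for ballots in self.decks.values())

t = BuggyTabulator()
t.add_deck(0, ["ballot"] * 197)              # vote-by-mail ballots land in "deck 0"
t.add_deck(131, ["ballot"] * 300)
t.add_deck(132, ["ballot"] * 250)
t.delete_deck(132)                           # operator deletes a deck to rescan it
t.add_deck(132, ["ballot"] * 250)            # ...and rescans it, as officials often do
print(t.tally())                             # 550, not 747: 197 ballots silently gone
print(t.audit_log)                           # deck 0 never appears; no deletion logged
```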

The good news here is that Humboldt County’s opening of election records to the public paid off, when members of the public found important facts in the records that officials and the vendor had missed. If other jurisdictions opened their records, how many more errors would we find and fix?

On the future of voting technologies: simplicity vs. sophistication

Yesterday, I testified before a hearing of Colorado’s Election Reform Commission. I made a small plug, at the end of my testimony, for a future generation of electronic voting machines that would use crypto machinery for end-to-end / software-independent verification. Normally, the politicos tend to ignore this and focus on the immediately actionable stuff (e.g., current-generation DREs are unacceptably insecure; optical-scan is the best thing presently on the market). Not this time. I got a bunch of questions asking me to explain how a crypto voting system can be verifiable, how you can prove that the machine is behaving properly, and so forth. Pretty amazing. What I realized, however, is that it’s really hard to explain crypto machinery to non-CS people. I did my best, but it was clear from conversations afterward that a few minutes of Q&A did little to give them any confidence that crypto voting machinery really works.

Another of the speakers, Neil McBurnett, was talking about doing variable sampling-rate audits (as a function of how close the tally is). Afterward, he lamented to me, privately, how hard it is to explain basic concepts like what it means for something to be “statistically significant.”
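Here is a toy simulation of the intuition behind both points, variable sampling rates and statistical significance. It is an illustration only, not an actual risk-limiting audit calculation: it simply asks how often a random sample of ballots would look at least as good for the reported winner as the reported share, even if the race had really been a 50/50 tie. Wide margins are convincing with small samples; narrow margins need much larger ones.

```python
# A toy simulation of why close races require larger audit samples. The audit
# wants the sample to be statistically significant evidence against the
# possibility that the reported winner actually lost (modeled here as a true
# 50/50 race). This illustrates the intuition only; real risk-limiting audits
# use sharper statistics.
import random

def fluke_probability(reported_share, sample_size, trials=5_000):
    """Chance that a sample drawn from a true 50/50 race looks at least as good
    for the reported winner as the reported share (a rough one-sided p-value)."""
    threshold = reported_share * sample_size
    flukes = sum(
        sum(random.random() < 0.5 for _ in range(sample_size)) >= threshold
        for _ in range(trials)
    )
    return flukes / trials

# Wide margins are convincing with small samples; narrow margins need many more.
for reported_share in (0.60, 0.55, 0.51):
    for n in (50, 200, 800):
        print(f"reported {reported_share:.0%}, sample {n:>3}: "
              f"fluke chance ~ {fluke_probability(reported_share, n):.3f}")
```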

There’s a clear common theme here. How do we explain to the public the basic scientific theories that underlie the problems that voting systems face? My written testimony (reused from an earlier hearing in Texas) includes links to papers, and some people will follow up. Others won’t. My big question is whether we have a research challenge to invent progressively simpler systems that still have the right security properties, or whether we have an education challenge to explain that a certain amount of complexity is worthwhile for the good properties that can be achieved. (Uglier question: is it a desirable goal to weaken the security properties in return for greater simplicity? What security properties would you sacrifice?)

Certainly, with our own VoteBox system, which uses a variation on Benaloh’s voter-initiated ballot challenge mechanism, one of the big open questions is whether real voters, who just want to cast their votes and don’t care about the security mechanisms, will be tripped up by the extra question at the end that’s fundamental to the mechanism. We’re going to need to run human subject tests against these aspects of the machine design, and if they fail in practice, it’s going to be a trip back to the drawing board.
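For readers curious what that extra question buys, here is a minimal sketch of the cast-or-challenge idea, with toy parameters and a single machine standing in for the whole protocol; it conveys the mechanism Benaloh proposed, not VoteBox's actual code.

```python
# A minimal sketch of a Benaloh-style "cast or challenge" check, assuming the
# machine commits to an ElGamal encryption of the vote before the voter decides.
# The parameters and single-machine setup are simplified for illustration only.
import random

p, q, g = 2039, 1019, 4                  # toy group: p = 2q + 1, g of order q
x = random.randrange(1, q)               # election secret key (held by trustees)
h = pow(g, x, p)                         # election public key, known to everyone

def machine_encrypt(vote):
    """The voting machine encrypts the vote and remembers its randomness r."""
    r = random.randrange(1, q)
    ciphertext = (pow(g, r, p), (pow(g, vote, p) * pow(h, r, p)) % p)
    return ciphertext, r

def verify_challenge(ciphertext, claimed_vote, revealed_r):
    """Anyone can re-encrypt the claimed vote with the revealed r and compare."""
    c1 = pow(g, revealed_r, p)
    c2 = (pow(g, claimed_vote, p) * pow(h, revealed_r, p)) % p
    return (c1, c2) == ciphertext

# The machine commits to the ciphertext *before* learning whether the voter will
# cast or challenge, so it cannot encrypt a different vote without risking detection.
ciphertext, r = machine_encrypt(1)
if random.random() < 0.5:                # voter unpredictably chooses to challenge
    print("challenge passes:", verify_challenge(ciphertext, 1, r))
else:
    print("ballot cast:", ciphertext)    # r stays secret; ballot goes to the tally
```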

[Sidebar: I’m co-teaching a class on elections with Bob Stein (a political scientist) and Mike Byrne (a psychologist). The students are a mix of Rice undergrads, most of whom aren’t computer scientists. I experimentally built a lecture that began by teaching just enough number theory to explain how El Gamal cryptography works and how it allows for homomorphic vote tallying. Then I described how VoteBox uses this mechanism, and wrapped up with an explanation of how to do Benaloh-style challenges. I left out a lot of details, like how you generate large prime numbers, or how you construct NIZK proofs, but I seemed to have the class along with me for the lecture. If I can sell the idea of end-to-end cryptographic mechanisms to undergraduate non-science students, then there may yet be some hope.]
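In that spirit, here is a rough idea of how little code it takes to make homomorphic tallying concrete. This is a toy sketch with tiny, insecure parameters, assuming exponential ElGamal; a real system would use large groups (or elliptic curves), threshold decryption, and NIZK proofs that each ciphertext encrypts a valid vote.

```python
# A toy sketch of additively homomorphic ("exponential") ElGamal tallying.
# Parameters are tiny and insecure, purely for illustration.
import random

p, q, g = 2039, 1019, 4          # p = 2q + 1; g generates the order-q subgroup
x = random.randrange(1, q)       # election secret key (threshold-shared in practice)
h = pow(g, x, p)                 # election public key

def encrypt(vote):               # vote is 0 or 1; the vote sits in the exponent
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(g, vote, p) * pow(h, r, p)) % p

ballots = [encrypt(v) for v in (1, 0, 1, 1, 0, 1)]   # hypothetical cast votes

# Anyone can multiply ciphertexts componentwise to get an encryption of the sum,
# so no individual ballot ever needs to be decrypted.
c1, c2 = 1, 1
for a, b in ballots:
    c1, c2 = (c1 * a) % p, (c2 * b) % p

# Decryption yields g^tally; recover the tally by brute force, which is fine
# because the tally is at most the number of ballots.
g_tally = (c2 * pow(c1, q - x, p)) % p
tally = next(t for t in range(len(ballots) + 1) if pow(g, t, p) == g_tally)
print(tally)                     # 4
```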

Watching Google's Gatekeepers

Google’s legal team has extraordinary power to decide which videos can be seen by audiences around the world, according to Jeffrey Rosen’s piece, “Google’s Gatekeepers,” in yesterday’s New York Times Magazine. Google, of course, owns YouTube, which gives it the technical ability to block particular videos — though so many videos are submitted that it’s impractical to review them all in advance.

Some takedown requests are easy — content that is offensive and illegal (almost) everywhere will come down immediately once a complaint is received and processed. But Rosen focuses on more difficult cases, where a government asks YouTube to take down a video that expresses dissent or is otherwise inconvenient for that government. Sometimes these videos violate local laws, but more often their legal status is murky and in any case the laws in question may be contrary to widely accepted free speech principles.

Rosen worries that too much power to decide what can be seen is being concentrated in the hands of one company. He acknowledges that Google has behaved reasonably so far, but he worries about what might happen in the future.

I understand his point, but it’s hard to see an alternative that would be better in practice. If Google, as the owner of YouTube, is not going to have this power, then the power will have to be given to somebody else. Any nominations? I don’t have any.

What we’re left with, then, is Google making the decisions. But this doesn’t mean all of us are out in the cold, without influence. As consumers of Google’s services, we have a certain amount of leverage. And this is not just hypothetical — Google’s “don’t be evil” reputation contributes greatly to the value of its brand. The moment people think Google is misbehaving is the moment they’ll consider taking their business elsewhere.

As concerned members of the public — concerned customers, from Google’s viewpoint — there are things we can do to help keep Google honest. First, we can insist on transparency, that Google reveal what it is blocking and why. Rosen describes some transparency mechanisms that are in place, such as Google’s use of the Chilling Effects website.

Second, when we use Google’s services, we can try to minimize our switching costs, so that moving to an alternative service is a realistic possibility. The less we’re locked in to Google’s service, the less we’ll feel forced to keep using those services even if the company’s behavior changes. And of course we should think carefully about switching costs in all our technology decisions, even when larger policy issues aren’t at stake.

Finally, we can make sure that Google knows we care about free speech, and about its corporate behavior generally. This means criticizing the company when it slips up, and praising it when it does well. Most of all, it means debating its decisions — which Rosen’s article helpfully invites us to do.