November 23, 2024

Net Neutrality: When is Network Management "Reasonable"?

Last week the FCC released its much-awaited Notice of Proposed Rulemaking (NPRM) on network neutrality. As expected, the NPRM affirms past FCC neutrality principles, and adds two more. Here’s the key language:

1. Subject to reasonable network management, a provider of broadband Internet access service may not prevent any of its users from sending or receiving the lawful content of the user’s choice over the Internet.

2. Subject to reasonable network management, a provider of broadband Internet access service may not prevent any of its users from running the lawful applications or using the lawful services of the user’s choice.

3. Subject to reasonable network management, a provider of broadband Internet access service may not prevent any of its users from connecting to and using on its network the user’s choice of lawful devices that do not harm the network.

4. Subject to reasonable network management, a provider of broadband Internet access service may not deprive any of its users of the user’s entitlement to competition among network providers, application providers, service providers, and content providers.

5. Subject to reasonable network management, a provider of broadband Internet access service must treat lawful content, applications, and services in a nondiscriminatory manner.

6. Subject to reasonable network management, a provider of broadband Internet access service must disclose such information concerning network management and other practices as is reasonably required for users and content, application, and service providers to enjoy the protections specified in this part.

That’s a lot of policy packed into (relatively) few words. I expect that my colleagues and I will have a lot to say about these seemingly simple rules over the coming weeks.

Today I want to focus on the all-purpose exception for “reasonable network management”. Unpacking this term might tell us a lot about how the proposed rule would operate.

Here’s what the NPRM says:

Reasonable network management consists of: (a) reasonable practices employed by a provider of broadband Internet access to (i) reduce or mitigate the effects of congestion on its network or to address quality-of-service concerns; (ii) address traffic that is unwanted by users or harmful; (iii) prevent the transfer of unlawful content; or (iv) prevent the unlawful transfer of content; and (b) other reasonable network management practices.

The key word is “reasonable”, and in that respect the definition is nearly circular: in order to be “reasonable”, a network management practice must be (a) “reasonable” and directed toward certain specific ends, or (b) “reasonable”.

In the FCC’s defense, it does seek comments and suggestions on what the definition should be, and it does say that it intends to make case-by-case determinations in practice, as it did in the Comcast matter. Further, it rejects a “strict scrutiny” standard of the sort that David Robinson rightly criticized in a previous post.

“Reasonable” is hard to define because in real life every “network management” measure will have tradeoffs. For example, a measure intended to block copyright-infringing material would in practice make errors in both directions: it would block X% (less than 100%) of infringing material, while as a side-effect also blocking Y% (more than 0%) of non-infringing material. For what values of X and Y is such a measure “reasonable”? We don’t know.
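The X/Y tradeoff can be made concrete with a toy sketch. Everything here is invented for illustration: the "suspicion scores" stand in for whatever signal a hypothetical filter uses, and the thresholds are arbitrary.

```python
# Hypothetical sketch of the tradeoff inherent in any content filter:
# a stricter threshold blocks more infringing material (higher X) but
# also blocks more lawful material (higher Y). All numbers are invented.

def filter_rates(scores_infringing, scores_lawful, threshold):
    """Return (X, Y): the fraction of infringing content blocked, and
    the fraction of lawful content wrongly blocked, at this threshold."""
    x = sum(s >= threshold for s in scores_infringing) / len(scores_infringing)
    y = sum(s >= threshold for s in scores_lawful) / len(scores_lawful)
    return x, y

# Invented "suspicion scores" assigned by some hypothetical classifier.
infringing = [0.9, 0.8, 0.7, 0.4]   # content we'd like to block
lawful     = [0.1, 0.2, 0.6, 0.3]   # content we'd like to leave alone

strict  = filter_rates(infringing, lawful, 0.3)   # X = 1.0, Y = 0.5
lenient = filter_rates(infringing, lawful, 0.75)  # X = 0.5, Y = 0.0
```

No threshold makes both errors vanish; "reasonable" amounts to a judgment about which point on this curve is acceptable.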

Of course, declaring a vague standard rather than a bright-line rule can sometimes be good policy, especially where the facts on the ground are changing rapidly and it’s hard to predict what kind of details might turn out to be important in a dispute. Still, by choosing a case-by-case approach, the FCC is leaving us mostly in the dark about where it will draw the line between “reasonable” and “unreasonable”.

Sidekick Users' Data Lost: Blame the Cloud?

Users of Sidekick mobile phones saw much of their data disappear last week due to engineering problems at a Microsoft data center. Sidekick devices lose the contents of their memory when they don’t have power (e.g. when the battery is being changed), so all data is transmitted to a data center for permanent storage — which turned out not to be so permanent.

(The latest news is that some of the data, perhaps most of it, may turn out to be recoverable.)

A common response to this story is that this kind of danger is inherent in “cloud” computing services, where you rely on some service provider to take care of your data. But this misses the point, I think. Preserving data is difficult, and individual users tend to do a mediocre job of it. Admit it: You have lost your own data at some point. I know I have lost some of mine. A big, professionally run data center is much less likely to lose your data than you are.

It’s worth noting, too, that many cloud services face lower risk of this sort of problem. My email, for example, lives in the cloud — the “official copy” is on a central server, and copies are downloaded frequently to my desktop and laptop computers. If the server were to go up in flames, along with all of the server backups, I would still be in good shape, because I would still have copies of everything on my desktop and laptop.

For my email and similar services, the biggest risk to data integrity is not that the server will disappear altogether, but that the server will misbehave in subtle ways, causing my stored data to be corrupted over time. Thanks to the automatic synchronization between the server and my two clients (desktop and laptop), bad data could be replicated silently into all copies. In principle, some of the damage could be repaired later, using the server’s backups, but that’s a best-case scenario.
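One way a sync client could guard against silently replicating corruption is to keep content hashes from an earlier snapshot and flag unexpected changes to messages the user never touched. A minimal sketch, assuming the simplification that archived messages should never change; the function names and policy are invented, not how any real mail client works.

```python
import hashlib

def digest(message_bytes):
    """Content fingerprint recorded at sync time."""
    return hashlib.sha256(message_bytes).hexdigest()

def detect_silent_corruption(local_index, server_messages):
    """local_index: {msg_id: digest recorded at the last sync}.
    server_messages: {msg_id: current bytes on the server}.
    A message the user never edited should hash the same; any mismatch
    gets flagged instead of being silently replicated to every client."""
    suspicious = []
    for msg_id, stored_digest in local_index.items():
        current = server_messages.get(msg_id)
        if current is not None and digest(current) != stored_digest:
            suspicious.append(msg_id)
    return suspicious

index = {"msg1": digest(b"hello"), "msg2": digest(b"world")}
server = {"msg1": b"hello", "msg2": b"w0rld"}   # msg2 corrupted server-side
flagged = detect_silent_corruption(index, server)  # -> ["msg2"]
```

The point of the sketch is that detection requires an independent record of what the data used to be; a client that simply mirrors the server will faithfully mirror the corruption too.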

This risk, of buggy software corrupting data, has always been with us. The question is not whether problems will happen in the cloud — in any complex technology, trouble comes with the territory — but whether the cloud makes a problem worse.

PrivAds: Behavioral Advertising without Tracking

There’s an interesting new paper out of Stanford and NYU, about a system called “PrivAds” that tries to provide behavioral advertising on web sites, without having a central server gather detailed information about user behavior. If the paper’s approach turns out to work, it could have an important impact on the debate about online advertising and privacy.

Advertisers have obvious reasons to show you ads that match your interests. You can benefit too, if you see ads that are relevant to your needs, rather than ones you don’t care about. The problem, as I argued in my Congressional testimony, comes when sites track your activities, and build up detailed files on you, in order to do the targeting.

PrivAds tries to solve this problem by providing behavioral advertising without having any server track you. The idea is that your own browser will track you, and analyze your online activities to build a model of your interests, but your browser won’t reveal this information to anyone else. When a site wants to show you an interest-based ad, your browser will choose the ad from a portfolio of ads offered by the ad service.
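The local-selection step might look roughly like this. This is a toy sketch of the idea, not the paper's actual design: the interest categories, scoring rule, and ad format are all invented.

```python
# Toy sketch of browser-side ad selection: the interest profile is built
# and kept locally; only the chosen ad gets displayed. All names invented.

def update_profile(profile, visited_categories):
    """Build an interest model locally from the user's own browsing."""
    for category in visited_categories:
        profile[category] = profile.get(category, 0) + 1
    return profile

def pick_ad(profile, portfolio):
    """Choose the ad from the server-supplied portfolio that best matches
    the locally held profile; nothing about the profile is reported back."""
    return max(portfolio, key=lambda ad: profile.get(ad["category"], 0))

profile = update_profile({}, ["cycling", "cycling", "cooking"])
portfolio = [{"id": "ad1", "category": "cars"},
             {"id": "ad2", "category": "cycling"}]
chosen = pick_ad(profile, portfolio)   # the cycling ad wins
```

The privacy claim rests on the direction of data flow: the server ships a whole portfolio down, and the sensitive matching happens entirely on the client.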

The tricky part is how your browser can do all of this without incidentally leaking your activities to the server. For example, the ad service needs to know how many times each ad was shown. How can your browser report this without revealing which ads you saw? PrivAds offers a solution based on fancy cryptography, so that the ad service can aggregate reports from many users without being able to see any individual user’s report. Similarly, every interaction between your browser and the outside world must be engineered carefully, so that behavioral advertising can occur without the browser telegraphing your actions.
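The flavor of the aggregation trick can be illustrated with additive secret sharing, a stand-in for the paper's actual cryptography rather than a description of it: each browser splits its count into random shares sent to independent parties, each party sums the shares it receives, and combining the per-party totals reveals the aggregate without exposing anyone's report.

```python
import random

MOD = 2**31  # work modulo a fixed modulus so shares look uniformly random

def share(count, n_parties=2):
    """Split one user's ad-view count into n additive shares.
    Any n-1 of the shares together reveal nothing about the count."""
    shares = [random.randrange(MOD) for _ in range(n_parties - 1)]
    shares.append((count - sum(shares)) % MOD)
    return shares

def aggregate(all_user_shares):
    """Each party sums the shares it received; combining the per-party
    totals yields the aggregate count without exposing any user."""
    n_parties = len(all_user_shares[0])
    party_totals = [sum(user[i] for user in all_user_shares) % MOD
                    for i in range(n_parties)]
    return sum(party_totals) % MOD

users = [3, 0, 5, 1]                          # individual counts (secret)
total = aggregate([share(c) for c in users])  # equals 9, the true sum
```

The design choice worth noticing is that privacy here comes from non-collusion: as long as the parties don't pool their shares, no one reconstructs an individual report.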

It’s not clear at this point whether the PrivAds approach will work, in the sense of protecting privacy without reducing the effectiveness of ad targeting. It’s clear, though, that PrivAds is asking an important question.

If the PrivAds approach succeeds, demonstrating that behavioral advertising does not require tracking, this doesn’t mean that companies will stop wanting to track you — but it does mean that they won’t be able to use advertising as an excuse to track you.

Privacy as a Social Problem, Not a Technology Problem

Bob Blakley had an interesting post Monday, arguing that technologists tend to frame the privacy issue poorly. (I would add that many non-technologists use the same framing.) Here’s a sample:

That’s how privacy works; it’s not about secrecy, and it’s not about control: it’s about sociability. Privacy is a social good which we give to one another, not a social order in which we control one another.

Technologists hate this; social phenomena aren’t deterministic and programmers can’t write code to make them come out right. When technologists are faced with a social problem, they often respond by redefining the problem as a technical problem they think they can solve.

The privacy framing that’s going on in the technology industry today is this:

Social Frame: Privacy is a social problem; the solution is to ensure that people use sensitive personal information only in ways that are beneficial to the subject of the information.

BUT as technologists we can’t … control people’s behavior, so we can’t solve this problem. So instead let’s work on a problem that sounds similar:

Technology Frame: Privacy is a technology problem; since we can’t make people use sensitive personal information sociably, the solution is to ensure that people never see others’ sensitive personal information.

We technologists have tried to solve the privacy problem in this technology frame for about a decade now, and, not surprisingly (information wants to be free!) we have failed.

The technology frame isn’t the problem. Privacy is the problem. Society can and routinely does solve the privacy problem in the social frame, by getting the vast majority of people to behave sociably.

This is an excellent point, and one that technologists and policymakers would be wise to consider. Privacy depends, ultimately, on people and institutions showing a reasonable regard for the privacy interests of others.

Bob goes on to argue that technologies should be designed to help these social mechanisms work.

A sociable space is one in which people’s social and antisocial actions are exposed to scrutiny so that normal human social processes can work.

A space in which tagging a photograph publicizes not only the identities of the people in the photograph but also the identities of the person who took the photograph and the person who tagged the photograph is more sociable than a space in which the only identity revealed is that of the person in the photograph – because when the picture of Jimmy holding a martini washes up on the HR department’s desk, Jimmy will know that Johnny took it (at a private party) and Julie tagged him – and the conversations humans have developed over tens of thousands of years to handle these situations will take place.

Again, this is an excellent and underappreciated point. But we need to be careful how far we take it. If we go beyond Bob’s argument, and we say that good design of the kind he advocates can completely solve the online privacy problem, then we have gone too far.

Technology doesn’t just move old privacy problems online. It also creates new problems and exacerbates old ones. In the old days, Johnny and Julie might have taken a photo of Jimmy drinking at the office party, and snail-mailed the photo to HR. That would have been a pretty hostile act. Now, the same harm can arise from a small misunderstanding: Johnny and Julie might assume that HR is more tolerant, or that HR doesn’t watch Facebook; or they might not realize that a site allows HR to search for photos of Jimmy. A photo might be taken by Johnny and tagged by Julie, even though Johnny and Julie don’t know each other. All in all, the photo scenario is more likely to happen today than in the pre-Net age.

This is just one example of what James Grimmelmann calls Accidental Privacy Spills. Grimmelmann tells the story of a private email message that was forwarded and re-forwarded to thousands of people, not by malice but because many people made the seemingly harmless decision to forward it to a few friends. This would never have happened with a personal letter. (Personal letters are sometimes publicized against the wishes of the author, but that’s very rare and wouldn’t have happened in the case Grimmelmann describes.) As the cost of capturing, transmitting, storing, and searching photos and other digital information falls to near-zero, it’s only natural that more capturing, transmitting, storing, and searching of information will occur.

Good design is not the whole solution to our privacy problem. But design has the huge advantage that we can get started on it right away, without needing to reach some sweeping societal agreement about what the rules should be. If you’re designing a product, or deciding which product to use, you can support good privacy design today.

Introducing FedThread: Opening the Federal Register

Today we are rolling out FedThread, a new way of interacting with the Federal Register. It’s the latest civic technology project from our team at Princeton’s Center for Information Technology Policy.

The Federal Register is “[t]he official daily publication for rules, proposed rules, and notices of Federal agencies and organizations, as well as executive orders and other presidential documents.” It’s published by the U.S. government, five days a week. The Federal Register tells citizens what their government is doing, in a lot more detail than the news media do.

FedThread makes the Federal Register more open and accessible. FedThread gives users:

  • collaborative annotation: Users can attach a note to any paragraph of the Federal Register; a conversation thread hangs off of every paragraph.
  • advanced search: Users can search the Federal Register (going back to 2000) on full text, by date, agency, and other fields.
  • customized feeds: Any search can be turned into an RSS feed. The resulting feed will include any new items that match the search query. Feeds can be delivered by email as well.
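The search-to-feed feature can be sketched in a few lines. This is a minimal illustration in the spirit of FedThread's customized feeds, not its actual code: the entry fields and the naive matching rule are invented, and the real site's schema surely differs.

```python
# Sketch: turn a saved search into a bare-bones RSS 2.0 feed.
import xml.etree.ElementTree as ET

def matching_entries(entries, query):
    """Naive full-text match: keep entries whose text contains the query."""
    q = query.lower()
    return [e for e in entries
            if q in e["title"].lower() or q in e["body"].lower()]

def to_rss(entries, query):
    """Render the matches as a minimal RSS 2.0 document."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Search results: " + query
    for e in entries:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = e["title"]
        ET.SubElement(item, "description").text = e["body"]
    return ET.tostring(rss, encoding="unicode")

entries = [{"title": "EPA proposed rule", "body": "emissions standards"},
           {"title": "FCC notice", "body": "broadband deployment"}]
feed = to_rss(matching_entries(entries, "broadband"), "broadband")
```

Rerunning the search on each new day's issue and emitting only new matches is all it takes to make any query behave like a subscription.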

I think FedThread is a nice tool, but what’s most amazing to me is that the whole project took only ten days to create. Ten days ago we had no code, no HTML, no plan, not even a block diagram on a whiteboard. Today we launched a pretty good service.

How was this possible? Three things enabled it.

First, government provided the necessary data, for bulk download, in a format (XML) that’s easy for software to handle. This let us acquire and manipulate the underlying data (Federal Register contents) quickly. Folks at the Government Printing Office, National Archives and Records Administration, and Office of Science and Technology Policy all helped to make this possible. The roll-out of the government’s XML-based Federal Register site today is a significant step forward.
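Bulk XML of this sort is easy to work with, which is much of why the data format mattered. A sketch of the kind of extraction involved, with the caveat that the element names below are invented for illustration rather than taken from the real Federal Register schema, and that I use the standard library's ElementTree here where FedThread itself used lxml.

```python
import xml.etree.ElementTree as ET

# Invented stand-in for a bulk Federal Register file.
SAMPLE = """<FEDREG>
  <RULE><AGENCY>EPA</AGENCY><SUBJECT>Emissions standards</SUBJECT></RULE>
  <NOTICE><AGENCY>FCC</AGENCY><SUBJECT>Broadband data</SUBJECT></NOTICE>
</FEDREG>"""

def extract_documents(xml_text):
    """Pull (doc_type, agency, subject) tuples out of a Register-style file."""
    root = ET.fromstring(xml_text)
    return [(child.tag, child.findtext("AGENCY"), child.findtext("SUBJECT"))
            for child in root]

docs = extract_documents(SAMPLE)
# docs[0] -> ("RULE", "EPA", "Emissions standards")
```

When the government publishes structured data like this, the hard part of a project shifts from scraping to building features, which is what made a ten-day schedule plausible.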

Second, we had great tools, such as Linux, Apache, MySQL, Python, Django, jQuery, Datejs, and lxml. These tools are capable, flexible, and free, and they fit together in useful ways. More than once we faced a challenging engineering problem, only to find an existing tool that did almost exactly what we needed. When we needed a tool for managing inline discussion threads within a document, Adrian Holovaty, Jacob Kaplan-Moss and Jack Slocum graciously let us use their code from djangobook.com, which served as the basis for our system. Tools like these help small teams build big projects quickly.

Third, we have an amazing team. A project like this needs people who are super-smart, tireless, have great engineering judgment, and know how to work as a team. Joe Calandrino, Ari Feldman, Harlan Yu, and Bill Zeller all did fantastic work building the site. We set an insane schedule — at the start we guessed we had a 50% chance of having anything at all ready by today — and they raced ahead of the schedule, to the point that we expanded the project’s scope more than once. Great job, guys! Now please get some sleep.

We hope FedThread is a useful tool that brings more people into contact with the operations of their government — one small step in a larger trend of using technology to make government more transparent.