April 20, 2014

Harlan Yu


Collateral Freedom in China

OpenITP has just released a new report—Collateral Freedom—that studies the state of censorship circumvention tool usage in China today. From the report’s overview:

This report documents the experiences of 1,175 Chinese Internet users who are circumventing their country’s Internet censorship—and it carries a powerful message for developers and funders of censorship circumvention tools. We believe these results show an opportunity for the circumvention tech community to build stable, long term improvements in Internet freedom in China.

The circumvention tools that work best for these users are technologically diverse, but they are united by a shared political feature: the collateral cost of choosing to block them is prohibitive for China’s censors. Our survey respondents are relying not on tools that the Great Firewall can’t block, but rather on tools that the Chinese government does not want the Firewall to block. Internet freedom for these users is collateral freedom, built on technologies and platforms that the regime finds economically or politically indispensable.

Download the full report here: http://openitp.org/?q=node/44

The study was conducted by CITP alums David Robinson and me, along with Anne An. It was managed by OpenITP, and supported by Radio Free Asia’s Open Technology Fund. We wrote it primarily for developers and funders of censorship circumvention technology projects, but it is also designed to be accessible for non-technical policymakers who are interested in Internet freedom, and for China specialists without a technology background.


The New Ambiguity of "Open Government"

David Robinson and I have just released a draft paper—The New Ambiguity of “Open Government”—that describes, and tries to help solve, a key problem in recent discussions around online transparency. As the paper explains, the phrase “open government” has become ambiguous in a way that makes life harder for both advocates and policymakers, by combining the politics of transparency with the technologies of open data. We propose using new terminology that is politically neutral: the word adaptable to describe desirable features of data (and the word inert to describe their absence), separately from descriptions of the governments that use these technologies.

Clearer language will serve everyone well, and we hope this paper will spark a conversation among those who focus on civic transparency and innovation. Thanks to Justin Grimes and Josh Tauberer, for their helpful insight and discussions as we drafted this paper.

Download the full paper here.

Abstract:

“Open government” used to carry a hard political edge: it referred to politically sensitive disclosures of government information. The phrase was first used in the 1950s, in the debates leading up to passage of the Freedom of Information Act. But over the last few years, that traditional meaning has blurred, and has shifted toward technology.

Open technologies involve sharing data over the Internet, and all kinds of governments can use them, for all kinds of reasons. Recent public policies have stretched the label “open government” to reach any public sector use of these technologies. Thus, “open government data” might refer to data that makes the government as a whole more open (that is, more transparent), but might equally well refer to politically neutral public sector disclosures that are easy to reuse, but that may have nothing to do with public accountability. Today a regime can call itself “open” if it builds the right kind of web site—even if it does not become more accountable or transparent. This shift in vocabulary makes it harder for policymakers and activists to articulate clear priorities and make cogent demands.

This essay proposes a more useful way for participants on all sides to frame the debate: We separate the politics of open government from the technologies of open data. Technology can make public information more adaptable, empowering third parties to contribute in exciting new ways across many aspects of civic life. But technological enhancements will not resolve debates about the best priorities for civic life, and enhancements to government services are no substitute for public accountability.


Retiring FedThread

Nearly two years ago, the Federal Register was published in a structured XML format for the first time. This was a big deal in the open government world: the Federal Register, often called the daily newspaper of our federal government, is one of our government’s most widely read publications. And while it could previously be read in paper and PDF forms, it wasn’t easy to digitally manipulate. The XML release changed all this.

When we heard this was happening, four of us here at CITP—Ari Feldman, Bill Zeller, Joe Calandrino, and myself—decided to see how we might be able to improve how citizens could interact with the Federal Register. Our big idea was to make it easy for anyone to comment paragraph-by-paragraph on any of its documents, like a proposed regulation. The site, which we called FedThread, would provide an informal public forum for annotating these documents, and we hoped it would lead to useful online discussions about the merits and weaknesses of all kinds of federal regulatory activity. We also added other useful features, like a full-text search engine and custom RSS feeds. Building these features for the Federal Register became a straightforward task only because of the new XML version. We built the site in just eight days from conception to release.

Another trio of developers in SF also saw opportunities in this free machine-readable resource and developed their own project called GovPulse, which had already won the Sunlight Foundation’s Apps for America 2 contest. They were then approached by the staff of the Federal Register last summer to expand their site to create what would become the new online face of the publication, Federal Register 2.0. Their approach to user comments actually guided users into participating in the formal regulatory comment process—a great idea. Federal Register 2.0 included several features present in FedThread, and many more. Everything was done using open source tools, and made available to the public as open source.

This has left little reason for us to continue operating FedThread. It has continued to reliably provide the features we developed two years ago, but our regular users will find it straightforward to transition to the similar (and often superior) search and subscription features on Federal Register 2.0. So, we’re retiring FedThread. However, the code that we developed will continue to be available, and we hope that enterprising developers will find components to re-use in their own projects that benefit society. For instance, the general purpose paragraph-commenting code that we developed can be useful in a variety of projects. Of course, that code itself was an adaptation of the code supporting another open source project—the Django Book, a free set of documentation about the web framework that we were using to build FedThread (but this is what developers would call a “meta” observation).

Ideally, this is how hacking open government should work. Free, machine-readable data sets beget useful new ways for citizens to explore those data and make them useful to other citizens. Along the way, developers experiment with different ideas, some of which catch on and others of which serve as fodder for the next great idea. This happens faster than standard government contracting, and often produces more innovative results.

Finally, a big thanks to the GPO, NARA and the White House Open Government Initiative for making FedThread possible and for helping to demonstrate that this approach can work, and congratulations on the fantastic Federal Register 2.0.


What We Lose if We Lose Data.gov

In its latest proposal for the 2011 budget, Congress makes deep cuts to the Electronic Government Fund. This fund supports the continued development and upkeep of several key open government websites, including Data.gov, USASpending.gov and the IT Dashboard. An earlier proposal would have cut the funding from $34 million to $2 million this year, although the current proposal would allocate $17 million to the fund.

Reports say that major cuts to the e-government fund would force OMB to shut down these transparency sites. This would strike a significant blow to the open government movement, and I think it’s important to emphasize exactly why shuttering a site like Data.gov would be so detrimental to transparency.

On its face, Data.gov is a useful catalog. It helps people find the datasets that government has made available to the public. But the catalog is really a convenience that doesn’t necessarily need to be provided by the government itself. Since the vast majority of datasets are hosted on individual agency servers—not directly by Data.gov—private developers could potentially replicate the catalog with only a small amount of effort. So even if Data.gov goes offline, nearly all of the data still exist online, and a private developer could go rebuild a version of the catalog, maybe with even better features and interfaces.

But Data.gov also plays a crucial behind the scenes role, setting standards for open data and helping individual departments and agencies live up to those standards. Data.gov establishes a standard, cross-agency process for publishing raw datasets. The program gives agencies clear guidance on the mechanics and requirements for releasing each new dataset online.

There’s a Data.gov manual that formally documents and teaches this process. Each agency has a lead Data.gov point-of-contact, who’s responsible for identifying publishable datasets and for ensuring that when data is published, it meets information quality guidelines. Each dataset needs to be published with a well-defined set of common metadata fields, so that it can be organized and searched. Moreover, thanks to Data.gov, all the data is funneled through at least five stages of intermediate review—including national security and privacy reviews—before final approval and publication. That process isn’t quick, but it does help ensure that key goals are satisfied.

When agency staff have data they want to publish, they use a special part of the Data.gov website, which outside users never see, called the Data Management System (DMS). This back-end administrative interface allows agency points-of-contact to efficiently coordinate publishing activities agency-wide, and it gives individual data stewards a way to easily upload, view and maintain their own datasets.

My main concern is that this invaluable but underappreciated infrastructure will be lost when IT systems are de-funded. The individual roles and responsibilities, the informal norms and pressures, and perhaps even the tacit authority to put new datasets online would likely also disappear. The loss of structure would probably mean that sharply reduced amounts of data will be put online in the future. The datasets that do get published in an ad hoc way would likely lack the uniformity and quality that the current process creates.

Releasing a new dataset online is already a difficult task for many agencies. While the current standards and processes may be far from perfect, Data.gov provides agencies with a firm footing on which they can base their transparency efforts. I don’t know how much funding is necessary to maintain these critical back-end processes, but whatever Congress decides, it should budget sufficient funds—and direct that they be used—to preserve these critically important tools.


What are the Constitutional Limits on Online Tracking Regulations?

As the conceptual contours of Do Not Track are being worked out, an interesting question to consider is whether such a regulation—if promulgated—would survive a First Amendment challenge. Could Do Not Track be an unconstitutional restriction on the commercial speech of online tracking entities? The answer would of course depend on what restrictions a potential regulation would specify. However, it may also depend heavily on the outcome of a case currently in front of the Supreme Court—Sorrell v. IMS Health Inc.—that challenges the constitutionality of a Vermont medical privacy law.

The privacy law at issue would restrict pharmacies from selling prescription drug records to data mining companies for marketing purposes without the prescribing doctor’s consent. These drug records each contain extensive details about the doctor-patient relationship, including “the prescriber’s name and address, the name, dosage and quantity of the drug, the date and place the prescription is filled and the patient’s age and gender.” A doctor’s prescription record can be tracked very accurately over time, and while patient names are redacted, each patient is assigned a unique identifier so their prescription histories may also be tracked. Pharmacies have been selling these records to commercial data miners, who in turn aggregate the data and sell compilations to pharmaceutical companies, who then engage in direct marketing back to individual doctors using a practice known as “detailing.” Sound familiar yet? It’s essentially brick-and-mortar behavioral advertising, and a Do Not Track choice mechanism, for prescription drugs.

The Second Circuit recently struck down the Vermont law on First Amendment grounds, ruling first that the law is a regulation of commercial speech and second that the law’s restrictions fall on the wrong side of the Central Hudson test—the four-step analysis used to determine the constitutionality of commercial speech restrictions. This ruling clashes explicitly with two previous decisions in the First Circuit, in Ayotte and Mills, which deemed that similar medical privacy laws in Maine and New Hampshire were constitutional. As such, the Supreme Court decided in January to take the case and resolve the disagreement, and the oral argument is set for April 26th.

I’m not a lawyer, but it seems like the outcome of Sorrell could have a wide-ranging impact on current and future information privacy laws, including possible Do Not Track regulations. Indeed, the petitioners recognize the potentially broad implications of their case. From the petition:

“Information technology has created new and unprecedented opportunities for data mining companies to obtain, monitor, transfer, and use personal information. Indeed, one of the defining traits of the so-called “Information Age” is this ability to amass information about individuals. Computers have made the flow of data concerning everything from personal purchasing habits to real estate records easier to collect than ever before.”

One central question in the case is whether a restriction on access to these data for marketing purposes is a restriction on legitimate commercial speech. The Second Circuit believes it is, reasoning that even “dry information” sold for profit—and already in the hands of a private actor—is entitled to First Amendment protection. In contrast, the First Circuit in Ayotte posited that the information being exchanged has “itself become a commodity,” not unlike beef jerky, so such restrictions are only a limitation on commercial conduct—not speech—and therefore do not implicate any First Amendment concerns.

A major factual difference here, as compared to online privacy and tracking, is that pharmacies are required by many state and federal laws to collect and maintain prescription drug records, so there may be more compelling reasons for the state to restrict access to this information.

In the case of online privacy, it could be argued that Internet users are voluntarily supplying information to the tracking servers, even though many users probably don’t intend to do this, nor do they expect that this is occurring. Judge Livingston, in her dissent from the Second Circuit’s ruling in Sorrell, notes that different considerations apply where the government is “prohibiting a speaker from conveying information that the speaker already possesses,” distinguishing that from situations where the government restricts access to the information itself. In applying this to online communications, at what point does the server “possess” the user’s data—when the packets are received and are sitting in a buffer or when the packets are re-assembled and the data permanently stored? Is there a constitutional difference between restrictions on collection versus restrictions on use? The Supreme Court in 1965 in Zemel v. Rusk stated that “the right to speak and publish does not carry with it the unrestrained right to gather information.” To what extent does this apply to government restrictions of online tracking?

The constitutionality of state and federal information privacy laws has historically and consistently been called into question, and things would be no different if—and it’s a big if—Congress grants the FTC authority over online tracking. When considering technical standards and what “tracking” means, it’s worth keeping in mind the possible constitutional challenges insofar as state action may be involved, as some desirable options to curb online tracking may only be possible within a voluntary or self-regulatory framework. Where that line is drawn will depend on how the Supreme Court comes down in Sorrell and how broadly they decide the case.


Some Technical Clarifications About Do Not Track

When I last wrote here about Do Not Track in August, there were just a few rumblings about the possibility of a Do Not Track mechanism for online privacy. Fast forward four months, and Do Not Track has shot to the top of the privacy agenda among regulators in Washington. The FTC staff privacy report released in December endorsed the idea, and Congress was quick to hold a hearing on the issue earlier this month. Now, odds are quite good that some kind of Do Not Track legislation will be introduced early in this new congressional session.

While there isn’t yet a concrete proposal for Do Not Track on the table, much has already been written both in support of and against the idea in general, and it’s terrific to see the issue debated so widely. As I’ve been following along, I’ve noticed some technical confusion on a few points related to Do Not Track, and I’d like to address three of them here.

1. Do Not Track will most likely be based on an HTTP header.

I’ve read some people still suggesting that Do Not Track will be some form of a government-operated list or registry—perhaps of consumer names, device identifiers, tracking domains, or something else. This type of solution was suggested in an earlier conception of Do Not Track, and given its rhetorical likeness to the Do Not Call Registry, it’s a natural connection to make. But as I discussed in my earlier post—the details of which I won’t rehash here—a list mechanism is a relatively clumsy solution to this problem for a number of reasons.

A more elegant solution—and the one that many technologists seem to have coalesced around—is the use of a special HTTP header that simply tells the server whether the user is opting out of tracking for that Web request, i.e. the header can be set to either “on” or “off” for each request. If the header is “on,” the server would be responsible for honoring the user’s choice to not be tracked. Users would be able to control this choice through the preferences panel of the browser or the mobile platform.
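To make the mechanics concrete, here’s a minimal sketch of a client attaching such a header to each request. The header name “X-Do-Not-Track” and its values are purely illustrative, since no standard name or syntax has been settled yet, and in practice the browser itself, not a script, would add the header based on the user’s preference.

```python
# Illustrative only: a hypothetical opt-out header attached to every request.
import urllib.request

def fetch(url, do_not_track=True):
    req = urllib.request.Request(url)
    # "1" signals that the user opts out of tracking for this request.
    req.add_header("X-Do-Not-Track", "1" if do_not_track else "0")
    with urllib.request.urlopen(req) as resp:
        return resp.read()

page = fetch("http://example.com/", do_not_track=True)
```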

2. Do Not Track won’t require us to “re-engineer the Internet.”

It’s also been suggested that implementing Do Not Track in this way will require a substantial amount of additional work, possibly even rising to the level of “re-engineering the Internet.” This is decidedly false. The HTTP standard is an extensible one, and it “allows an open-ended set of… headers” to be defined for it. Indeed, custom HTTP headers are used in many Web applications today.

How much work will it take to implement Do Not Track using the header? Generally speaking, not too much. On the client-side, adding the ability to send the Do Not Track header is a relatively simple undertaking. For instance, it only took about 30 minutes of programming to add this functionality to a popular extension for the Firefox Web browser. Other plug-ins already exist. Implementing this functionality directly into the browser might take a little bit longer, but much of the work will be in designing a clear and easily understandable user interface for the option.

On the server-side, adding code to detect the header is also a reasonably easy task—it takes just a few extra lines of code in most popular Web frameworks. It could take more substantial work to program how the server behaves when the header is “on,” but this work is often already necessary even in the absence of Do Not Track. With industry self-regulation, compliant ad servers supposedly already handle the case where a user opts out of their behavioral advertising programs, the difference now being that the opt-out signal comes from a header rather than a cookie. (Of course, the FTC could require stricter standards for what opting-out means.)
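As a rough illustration of how little server-side plumbing this requires, here’s a sketch using Python’s bare WSGI interface rather than any particular framework. The header name (and therefore the HTTP_X_DO_NOT_TRACK key) is the same hypothetical one as above, and real compliance logic would of course be more involved.

```python
# Sketch: flag opted-out requests so the application can skip setting tracking cookies.
def dnt_middleware(app):
    def wrapped(environ, start_response):
        # WSGI exposes request headers as HTTP_* keys in the environ dict.
        environ["dnt.opt_out"] = environ.get("HTTP_X_DO_NOT_TRACK") == "1"
        return app(environ, start_response)
    return wrapped

def application(environ, start_response):
    headers = [("Content-Type", "text/plain")]
    if not environ.get("dnt.opt_out"):
        # Only set a persistent identifier when the user has not opted out.
        headers.append(("Set-Cookie", "uid=abc123; Max-Age=31536000"))
    start_response("200 OK", headers)
    return [b"Hello\n"]

application = dnt_middleware(application)
```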

Note also that contrary to some suggestions, the header mechanism doesn’t require consumers to identify who they are or otherwise authenticate to servers in order to gain tracking protection. Since the header is a simple on/off flag sent with every Web request, the server doesn’t need to maintain any persistent state about users or their devices’ opt-out preferences.

3. Microsoft’s new Tracking Protection feature isn’t the same as Do Not Track.

Last month, Microsoft announced that its next release of Internet Explorer will include a privacy feature called Tracking Protection. Mozilla is also reportedly considering a similar browser-based solution (although a later report makes it unclear whether they actually will). Browser vendors should be given credit for doing what they can from within their products to protect user privacy, but their efforts are distinct from the Do Not Track header proposal. Let me explain the major difference.

Browser-based features like Tracking Protection basically amount to blocking Web connections from known tracking domains that are compiled on a list. They don’t protect users from tracking by new domains (at least until they’re noticed and added to the tracking list) nor from “allowed” domains that are tracking users surreptitiously.

In contrast, the Do Not Track header compels servers to cooperate, to proactively refrain from any attempts to track the user. The header could be sent to all third-party domains, regardless of whether the domain is already known or whether it actually engages in tracking. With the header, users wouldn’t need to guess whether a domain should be blocked or not, and they wouldn’t have to risk either allowing tracking accidentally or blocking a useful feature.

Tracking Protection and other similar browser-based defenses like Adblock Plus and NoScript are reasonable, but incomplete, interim solutions. They should be viewed as complementary with Do Not Track. For entities under FTC jurisdiction, Do Not Track could put an effective end to the tracking arms race between those entities and browser-based defenses—a race that browsers (and thus consumers) are losing now and will be losing in the foreseeable future. For those entities outside FTC jurisdiction, blocking unwanted third parties is still a useful though leaky defense that maintains the status quo.

Information security experts like to preach “defense in depth” and it’s certainly vital in this case. Neither solution fully protects the user, so users really need both solutions to be available in order to gain more comprehensive protection. As such, the upcoming features in IE and Firefox should not be seen as a technical substitute for Do Not Track.

——

To reiterate: if the technology that implements Do Not Track ends up being an HTTP header, which I think it should be, it would be both technically feasible and relatively simple. It’s also distinct from recent browser announcements about privacy in that Do Not Track forces server cooperation, while browser-based defenses work alone to fend off tracking.

What other technical issues related to Do Not Track remain murky to readers? Feel free to leave comments here, or if you prefer on Twitter using the #dntrack tag and @harlanyu.


On Facebook Apps Leaking User Identities

The Wall Street Journal today reports that many Facebook applications are handing over user information—specifically, Facebook IDs—to online advertisers. Since a Facebook ID can easily be linked to a user’s real name, third party advertisers and their downstream partners can learn the names of people who load their advertisement from those leaky apps. This reportedly happens on all ten of Facebook’s most popular apps and many others.

The Journal article provides few technical details behind what they found, so here’s a bit more about what I think they’re reporting.

The content of a Facebook application, for example FarmVille, is loaded within an iframe on the Facebook page. An iframe essentially embeds one webpage (FarmVille) inside another (Facebook). This means that as you play FarmVille, your browser location bar will show http://apps.facebook.com/onthefarm, but the iframe content is actually controlled by the application developer, in this case by farmville.com.

The content loaded by farmville.com in the iframe contains the game alongside third party advertisements. When your browser goes to fetch the advertisement, it automatically forwards “referer” information—that is, the URL of the current page that’s loading the ad—to the third party advertiser. For FarmVille, the referer URL that’s sent will look something like:

http://fb-tc-2.farmville.com/flash.php?…fb_sig_user=[User’s Facebook ID]…

And there’s the issue. Because of the way Zynga (the makers of FarmVille) crafts some of its URLs to include the user’s Facebook ID, the browser will forward this identifying information on to third parties. I confirmed yesterday evening that using FarmVille does indeed transmit my Facebook ID to a few third parties, including Doubleclick, Interclick and socialvi.be.
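To see why this matters, consider how little work a third party on the receiving end would need to do to recover the ID. The snippet below is only an illustration: the URL mirrors the structure shown above, and the ID value is made up.

```python
# Hypothetical ad server parsing the Referer it receives from the browser.
from urllib.parse import urlsplit, parse_qs

referer = "http://fb-tc-2.farmville.com/flash.php?fb_sig_user=1234567890"
query = parse_qs(urlsplit(referer).query)
facebook_id = query.get("fb_sig_user", [None])[0]
print(facebook_id)  # "1234567890", which can then be linked to a real name
```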

Facebook policy prohibits application developers from passing this information to advertising networks and other third parties. In addition, Zynga’s privacy policy promises that “Zynga does not provide any Personally Identifiable Information to third-party advertising companies.”

But evidence clearly indicates otherwise.

What can be done about this? First, application developers like Zynga can simply stop including the user’s Facebook ID in the HTTP GET arguments, or they can place a “#” mark before the sensitive information in the URL so browsers don’t transmit this information automatically to third parties.
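The “#” fix works because browsers treat everything after the fragment marker as local to the page, so it never appears in the Referer header sent to third parties. A quick illustration, with a made-up URL:

```python
# The fragment stays in the browser; only the part before "#" would be sent as the Referer.
from urllib.parse import urldefrag

url = "http://fb-tc-2.farmville.com/flash.php#fb_sig_user=1234567890"
referer_sent, kept_in_browser = urldefrag(url)
print(referer_sent)      # http://fb-tc-2.farmville.com/flash.php
print(kept_in_browser)   # fb_sig_user=1234567890
```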

Second, Facebook can implement a proxy scheme, as proposed by Adrienne Felt more than two years ago, where applications would not receive real Facebook IDs but rather random placeholder IDs that are unique for each application. Then, application developers would be free to do whatever they want with the placeholder IDs, since they can no longer be linked back to real user names.
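One plausible way to build such placeholder IDs is to derive them from a secret key held only by the platform, so that the same user looks different to every application and no one without the key can map an ID back to a real account. This is just my own sketch, not Facebook’s implementation or the exact mechanism of Felt’s proposal.

```python
# Sketch: stable, per-application placeholder IDs derived with a keyed hash.
import hmac, hashlib

SECRET_KEY = b"platform-internal-secret"  # hypothetical key held only by Facebook

def placeholder_id(real_user_id: str, app_id: str) -> str:
    msg = f"{app_id}:{real_user_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()[:16]

print(placeholder_id("1234567890", "farmville"))  # differs from the ID the same user gets...
print(placeholder_id("1234567890", "mafiawars"))  # ...in any other application
```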

Third, browser vendors can give users easier and better control over when HTTP referer information is sent. As Chris Soghoian recently pointed out, browser vendors currently don’t make these controls very accessible to users, if at all. This isn’t a direct solution to the problem but it could help. You could imagine a privacy-enhancing opt-in browser feature that turns off the referer header in all cross-domain situations.

Some may argue that this leak, whether inadvertent or not, is relatively innocuous. But allowing advertisers and other third parties to easily and definitively correlate a real name with an otherwise “anonymous” IP address, cookie, or profile is a dangerous path forward for privacy. At the very least, Facebook and app developers need to be clear with users about their privacy rights and comply with their own stated policies.


Assessing PACER's Access Barriers

The U.S. Courts recently conducted a year-long assessment of their Electronic Public Access program which included a survey of PACER users. While the results of the assessment haven’t been formally published, the Third Branch Newsletter has an interview with Bankruptcy Judge J. Rich Leonard that discusses a few high-level findings of the survey. Judge Leonard has been heavily involved in shaping the evolution of PACER since its inception twenty years ago and continues to lead the program today.

The survey covered a wide range of PACER users—“the courts, the media, litigants, attorneys, researchers, and bulk data collectors”—and Judge Leonard claims they found “a remarkably high level of satisfaction”: around 80% of those surveyed were “satisfied” or “very satisfied” with the service.

If we compare public access before we had PACER to where we are now, there is clearly much success to celebrate. But the key question is not only whether current users are satisfied with the service but also whether PACER is reaching its entire audience of potential users. Are there artificial obstacles preventing potential PACER users—who admittedly would be difficult to poll—from using the service? The satisfaction statistic may be fine at face value, assuming that a representative sample of users were polled, but it could be misleading if it’s being used to gauge the overall success of PACER as a public access system.

One indicator of obstacles may be another statistic cited by Judge Leonard: “about 45% of PACER users also use CM/ECF,” the Courts’ electronic case management and filing system. To put it another way, nearly half of all PACER users are currently attorneys who practice federal law.

That number seems inordinately high to me and suggests that significant barriers to public access may exist. In particular, account registration requires all users to submit a valid credit card for billing (or, alternatively, a valid home address to receive log-in credentials and billing statements by mail). Even if users’ credit cards are never charged, this registration hurdle may already turn away many potential PACER users at the door.

The other barrier is obviously the cost itself. With a few exceptions, users are forced to pay a fee for each document they download, at a metered rate of eight cents per page. Judge Leonard asserts that “surprisingly, cost ranked way down” in the survey and that “most people thought they paid a fair price for what they got.”

But this doesn’t necessarily imply that cost isn’t a major impediment to access. It may just be that those surveyed—primarily lawyers—simply pass the cost of using PACER down to their clients and never bear the cost themselves. For the rest of PACER users who don’t have that luxury, the high cost of access can completely rule out certain kinds of legal research, or cause users to significantly ration and monitor their usage (as is the case even in the vast majority of our nation’s law libraries), or wholly deter users from ever using the service.

Judge Leonard rightly recognizes that it’s Congress that has authorized the collection of user fees, rather than using general taxpayer money, to fund the electronic public access program. But I wish the Courts would at least acknowledge that moving away from a fee-based model, to a system funded by general appropriations, would strengthen our judicial process and get us closer to securing each citizen’s right to equal protection under the law.

Rather than downplaying the barriers to public access, the Courts should work with Congress to establish a way forward to support a public access system that is truly open. They should study and report on the extent to which Congress already funds PACER indirectly, through Executive and Legislative branch PACER fee payments to the Judiciary, and re-appropriate those funds directly. If there is a funding shortfall, and I assume there will be, they should study the various options for closing that gap, such as additional direct appropriations or a slight increase in certain filing fees.

With our other two branches of government making great strides in openness and transparency with the help of technology, the Courts similarly need to transition away from a one-size-fits-all approach to information dissemination. Public access to the courts will be fundamentally transformed by a vigorous culture of civic innovation around federal court documents, and this will only happen if the Courts confront today’s access barriers head-on and break them down.

(Thanks to Daniel Schuman for pointing me to the original article.)


Do Not Track: Not as Simple as it Sounds

Over the past few weeks, regulators have rekindled their interest in an online Do Not Track proposal in hopes of better protecting consumer privacy. FTC Chairman Jon Leibowitz told a Senate Commerce subcommittee last month that Do Not Track is “one promising area” for regulatory action and that the Commission plans to issue a report in the fall about “whether this is one viable way to proceed.” Senator Mark Pryor (D-AR), who sits on the subcommittee, is also reportedly drafting a new privacy bill that includes some version of this idea, of empowering consumers with blanket opt-out powers over online tracking.

Details are sparse at this point about how a Do Not Track mechanism might actually be implemented. There are a variety of possible technical and regulatory approaches to the problem, each with its own difficulties and limitations, which I’ll discuss in this post.

An Adaptation of “Do Not Call”

Because of its name, Do Not Track draws immediate comparisons to arguably the most popular piece of consumer protection regulation ever instituted in the US—the National Do Not Call Registry. If the FTC were to take an analogous approach for online tracking, a consumer would register his device’s network identifier—its IP address—with the national registry. Online advertisers would then be prohibited from tracking devices that are identified by those IP addresses.

Of course, consumer devices rarely have persistent long-term IP addresses. Most ISPs assign IP addresses dynamically (using DHCP) and a single device might be assigned a new IP address every few minutes. Consumer devices often also share the same IP address at the same time (using NAT) so there’s no stable one-to-one mapping between IPs and devices. Things could be different with IPv6, where each device could have its own stable IP address, but the Do Not Call framework, directly applied, is not the best solution for today’s online world.

The comparison is still useful though, if only to caution against the assumption that Do Not Track will be as easy, or as successful, as Do Not Call. The differences between the problems at hand and the technologies involved are substantial.

A Registry of Tracking Domains

Back in 2007, a coalition of online consumer privacy groups lobbied for the creation of a national Do Not Track List. They proposed a reverse approach: online advertisers would be required to register with the FTC all domain names used to issue persistent identifiers to user devices. The FTC would then publish this list, and it would be up to the browser to protect users from being tracked by these domains. Notice that the onus here is fully on the browser—equipped with this list—to protect the user from being uniquely identified. Meanwhile, online advertisers would still have free rein to try any method they wish to track user behavior, so long as it happens from these tracking domains.

We’ve learned over the past couple of years that modern browsers, from a practical perspective, can be limited in their ability to protect the user from unique identification. The starkest example of this is the browser fingerprinting attack, which was popularized by the EFF earlier this year. In this attack, the tracking site runs a special script that gathers information about the browser’s configurations, which are unique enough to identify the browser instance in nearly every case. The attack takes advantage of the fact that much of the gathered information is used frequently for legitimate purposes—such as determining which plugins are available to the site—so a browser which blocks the release of this information would surely irritate the user. As these kinds of “side-channel” attacks grow in sophistication, major browser vendors might always be playing catch-up in the technical arms race, leaving most users vulnerable to some form of tracking by these domains.
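For intuition, here’s a toy sketch of the fingerprinting idea: attributes that browsers legitimately expose are combined and hashed into a stable identifier. In reality the collection happens in JavaScript on the tracking site, and the attribute values below are invented.

```python
# Toy fingerprint: the same browser configuration yields the same hash on every visit.
import hashlib, json

browser_attributes = {
    "user_agent": "Mozilla/5.0 (Windows NT 6.1) Firefox/3.6",
    "screen": "1280x800x24",
    "timezone_offset": -300,
    "plugins": ["Flash 10.1", "Java 1.6", "QuickTime 7.6"],
    "fonts": ["Arial", "Calibri", "Comic Sans MS"],
}

fingerprint = hashlib.sha256(
    json.dumps(browser_attributes, sort_keys=True).encode()
).hexdigest()
print(fingerprint)
```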

The x-notrack Header

If we believe that browsers, on their own, will be unable to fully protect users, then any effective Do Not Track proposal will need to place some restraints on server tracking behavior. Browsers could send a signal to the tracking server to indicate that the user does not want this particular interaction to be tracked. The signaling mechanism could be in the form of a standard pre-defined cookie field, or more likely, an HTTP header that marks the user’s tracking preference for each connection.

In the simplest case, the HTTP header—call it x-notrack—is a binary flag that can be turned on or off. The browser could enable x-notrack for every HTTP connection, or for connections to only third party sites, or for connections to some set of user-specified sites. Upon receiving the signal not to track, the site would be prevented, by FTC regulation, from setting any persistent identifiers on the user’s machine or using any other side-channel mechanism to uniquely identify the browser and track the interaction.

While this approach seems simple, it could raise a few complicated issues. One issue is bifurcation: nothing would prevent sites from offering limited content or features to users who choose to opt out of tracking. One could imagine a divided Web, where a user who turns on the x-notrack header for all HTTP connections—i.e. a blanket opt-out—would essentially turn off many of the useful features on the Web.

By being more judicious in the use of x-notrack, a user could permit silos of first-party tracking in exchange for individual feature-rich sites, while limiting widespread tracking by third parties. But many third parties offer useful services, like embedding videos or integrating social media features, and they might require that users disable x-notrack in order to access their services. Users could theoretically make a privacy choice for each third party, but such a reality seems antithetical to the motivations behind Do Not Track: to give consumers an easy mechanism to opt-out of harmful online tracking in one fell swoop.

The FTC could potentially remedy this scenario by including some provision for “tracking neutrality,” which would prohibit sites from unnecessarily discriminating against a user’s choice not to be tracked. I won’t get into the details here, but suffice it to say that crafting a narrow yet effective neutrality provision would be highly contentious.

Privacy Isn’t a Binary Choice

The underlying difficulty in designing a simple Do Not Track mechanism is the subjective nature of privacy. What one user considers harmful tracking might be completely reasonable to another. Privacy isn’t a single binary choice but rather a series of individually-considered decisions that each depend on who the tracking party is, how much information can be combined and what the user gets in return for being tracked. This makes the general concept of online Do Not Track—or any blanket opt-out regime—a fairly awkward fit. Users need simplicity, but whether simple controls can adequately capture the nuances of individual privacy preferences is an open question.

Another open question is whether browser vendors can eventually “win” the technical arms race against tracking technologies. If so, regulations might not be necessary, as innovative browsers could fully insulate users from unwanted tracking. While tracking technologies are currently winning this race, I wouldn’t call it a foregone conclusion.

The one thing we do know is this: Do Not Track is not as simple as it sounds. If regulators are serious about putting forth a proposal, and it sounds like they are, we need to start having a more robust conversation about the merits and ramifications of these issues.


Release Government Data, Early and Often

One of the key axioms of modern open government is that all public data should be published online in a raw but usable form. Usability in this case is aimed at software programmers. By making government datasets more usable, programmers are more likely to innovate in the civic sphere and build technologies, using the raw data, to enhance the relationships among citizens and with government.

The open government community has provided plenty of valuable guidance about what usability means for programmers. We proclaim that all datasets need to be: published in a format that is reasonably structured and machine-processable; well-documented; downloadable in bulk; authenticated using cryptographic digital signatures; version-controlled; permanent and citable; and the list goes on and on. These are all worthy principles to be sure, and all government datasets should strive to meet them.

But you’ll be hard-pressed to find any government datasets that exist with all of these principles pre-satisfied. While some are in better shape than others, most datasets would make programmers cringe. Data often only exist as informal working sets in proprietary Excel spreadsheets. Sometimes they are in structured databases, but schemas are undocumented, field values are ambiguous, and the semantics are only understood by the employee who created them. Datasets have errors and biases that are known but never explicitly corrected.

For a civil servant who is a data caretaker looking over the laundry list of publishing principles, there’s frequently a huge quality chasm between the dataset she owns and how people are asking to see it released. To her, publishing this data adequately just seems like a lot of extra work. The more attractive alternative is to put off the data publishing—it’s not in her job description or evaluations anyway—and move on to other work instead.

How can this chasm be bridged? A widely-adopted philosophy in software development and entrepreneurship would serve open government data well: release early and release often. And listen to your customers.

In the software development world, a working version of the product is pushed out as soon as possible even with known imperfections—an “alpha” release—so it can be subject to real use by early adopters. Early adopters can provide helpful feedback about what works, what’s broken, and what new features would be most useful to them. The software developers then iterate quickly. They incorporate the suggested fixes and features into their code and release an updated version of the product to their users. The virtuous cycle then starts again. Under this philosophy, software developers can be efficient about how to best improve their code where it matters, and users get software that works better and has more features they desire.

The “release early, release often” philosophy should be applied to government data. For the initial release, data caretakers should take the path of least resistance to get data out the door. This means publishing datasets in whatever format is most convenient, along with as much documentation as can reasonably be mustered. Documentation is especially important with an “alpha” dataset—proper warnings about its problems, instabilities and inductive limitations must be prominently displayed. (Of course, the usual privacy and legal caveats should also be applied.) Sometimes, the “alpha” release will be “good enough” for programmers to start their work, and this will minimize any superfluous work done by caretakers. This is the virtue of “release early.”

In other cases, programmers will need assistance using the dataset and will notice problem spots with the initial release. The dataset might be confusing, contain errors or be difficult to work with. A tight feedback mechanism allows the programmer to get help quickly and continue to innovate, while the data caretaker can fix problems based on real use cases and add clarifying metadata into an updated version of the dataset. Data quality and usability increases for those working with the dataset, both in and outside of government. That’s the virtue of “release often.”

And here is the big opportunity for government: no platform currently exists to engage the prime audience for government data—software programmers. Without a tight feedback mechanism, the virtuous cycle of mutual benefit cannot exist. Government is missing its best opportunity to improve data quality by neglecting useful feedback from programmers who are actually tinkering with the datasets. Society is losing out on potentially game-changing civic innovations, which otherwise would have been built if data were more usable and the uncertainty of failure reduced.

A terrific start in turning the corner would be for government to adopt an issue-tracking system for its datasets. As a public venue, it would help ensure that data caretakers are prompt in addressing developer concerns. It would also allow caretakers to organize feedback in a formal way. Such platforms are commonplace in any successful software development venture. The same needs to be true for government data in order to drive rapid quality improvements and increase developer engagement.