
Archives for 2009

Introducing RECAP: Turning PACER Around

With today’s technologies, government transparency means much more than the chance to read one document at a time. Citizens now expect to be able to download comprehensive government datasets that are machine-processable, open, and free. Unfortunately, government is much slower than industry when it comes to adopting new technologies. In recent years, private efforts have helped push government, the legislative and executive branches in particular, toward greater transparency. Thus far, the judiciary has seen relatively little action.

Today, we are excited to announce the public beta release of RECAP, a tool that will help bring an unprecedented level of transparency to the U.S. federal court system. RECAP is a plug-in for the Firefox web browser that makes it easier for users to share documents they have purchased from PACER, the court’s pay-to-play access system. With the plug-in installed, users still have to pay each time they use PACER, but whenever they do retrieve a PACER document, RECAP automatically and effortlessly donates a copy of that document to a public repository hosted at the Internet Archive. The documents in this repository are, in turn, shared with other RECAP users, who will be notified whenever documents they are looking for can be downloaded from the free public repository. RECAP helps users exercise their rights under copyright law, which expressly places government works in the public domain. It also helps users advance the public good by contributing to an extensive and freely available archive of public court documents.
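In pseudocode terms, the idea is a simple check-then-donate loop: prefer a free copy when the repository has one, otherwise buy from PACER and donate what you bought. Here is a toy Python sketch of that logic; the dictionaries and the flat per-document charge are stand-ins for illustration, and the real RECAP is of course a Firefox extension, not a Python script:

```python
# Toy sketch of RECAP's check-then-donate idea (hypothetical data
# structures; the real plug-in is a Firefox browser extension).

def fetch_document(doc_id, repository, pacer, wallet):
    """Return a court document, preferring the free public repository."""
    if doc_id in repository:            # a free copy was already donated
        return repository[doc_id]
    document = pacer[doc_id]            # otherwise purchase it from PACER
    wallet["spent"] += 0.08             # PACER charged $0.08/page in 2009
    repository[doc_id] = document       # donate a copy for everyone else
    return document

repository = {}
pacer = {"gov.uscourts.nysd.1234.5.0": b"%PDF-..."}
wallet = {"spent": 0.0}

first = fetch_document("gov.uscourts.nysd.1234.5.0", repository, pacer, wallet)
second = fetch_document("gov.uscourts.nysd.1234.5.0", repository, pacer, wallet)
print(wallet["spent"])  # 0.08 -- the community pays for each document once
```

The point of the sketch is the asymmetry: the first retrieval costs money, and every later retrieval of the same document is free for everyone.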

The project’s website, https://www.recapthelaw.org, has all of the details: how to install RECAP, a screencast of the plug-in in action, more discussion of why this issue matters, and a host of other goodies.

The repository already has over one million documents available for free download. Together, with the help of RECAP users, we can recapture truly public access to the court proceedings that give our laws their practical meaning.

Anonymization FAIL! Privacy Law FAIL!

I have uploaded my latest draft article, Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, to SSRN (look carefully for the download button, just above the title; it’s a little buried). Here is the abstract:

Computer scientists have recently undermined our faith in the privacy-protecting power of anonymization, the name for techniques for protecting the privacy of individuals in large databases by deleting information like names and social security numbers. These scientists have demonstrated they can often “reidentify” or “deanonymize” individuals hidden in anonymized data with astonishing ease. By understanding this research, we will realize we have made a mistake, labored beneath a fundamental misunderstanding, which has assured us much less privacy than we have assumed. This mistake pervades nearly every information privacy law, regulation, and debate, yet regulators and legal scholars have paid it scant attention. We must respond to the surprising failure of anonymization, and this Article provides the tools to do so.
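To see how easy these attacks can be, consider the classic linkage attack: a nominally anonymized table is joined to a public record on shared “quasi-identifiers” like ZIP code, birth date, and sex, and the deleted names come right back. A toy Python sketch, with all records invented for illustration:

```python
# Toy reidentification by linkage: an "anonymized" medical table
# (names removed) is joined to a public voter roll on quasi-identifiers.
# All records below are invented for illustration.

medical = [  # names deleted, so nominally "anonymous"
    {"zip": "02138", "dob": "1945-07-22", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "dob": "1980-01-15", "sex": "M", "diagnosis": "asthma"},
]
voters = [  # a public record, names included
    {"name": "Alice Example", "zip": "02138", "dob": "1945-07-22", "sex": "F"},
    {"name": "Bob Example",   "zip": "02139", "dob": "1980-01-15", "sex": "M"},
]

QUASI = ("zip", "dob", "sex")

def reidentify(medical, voters):
    """Join the two tables on quasi-identifiers, restoring names."""
    index = {tuple(v[k] for k in QUASI): v["name"] for v in voters}
    return {index[key]: m["diagnosis"]
            for m in medical
            if (key := tuple(m[k] for k in QUASI)) in index}

print(reidentify(medical, voters))
# {'Alice Example': 'hypertension', 'Bob Example': 'asthma'}
```

When the quasi-identifier combination is unique in the population, deleting the name column provides no protection at all; the join restores it.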

I have labored over this article for a long time, and I am very happy to finally share it publicly. Over the next week or so, I will write a few blog posts here summarizing the article’s high points and perhaps expanding on what I couldn’t get to in a mere 28,000 words.

Thanks to Ed, David, and everybody else at Princeton’s CITP for helping me develop this article during my visit earlier this year.

Please let me know what you think, either in these comments or by direct email.

Open Government Data: Starting to Judge the Results

Like many others who read this blog, I’ve spent some time over the last year trying to get more civic data online. I’ve argued that government’s failure to put machine-readable data online is the key roadblock that separates us from a world in which exciting, Web 2.0 style technologies enrich nearly every aspect of civic life. This is an empirical claim, and as more government data comes online, it is being tested.

Jay Nath is the “manager of innovation” for the City and County of San Francisco, working to put municipal data online and build a community of developers who can make the most of it. In a couple of recent blog posts, he has considered the empirical state of government data publishing efforts. Drawing on data from Washington DC, where officials led by then-city CTO Vivek Kundra have put a huge catalog of government data online, he analyzed usage statistics and found an 80/20 pattern of public use of online government data — enormous interest in crime statistics and 311-style service requests, but relatively little about housing code enforcement and almost none about city workers’ use of purchasing credit cards. Here’s the chart he made (larger version):

Note that this chart measures downloads, not traffic to downstream sites that may be reusing the data.
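The 80/20 pattern is easy to quantify: sort the catalog by download count and ask how few datasets carry most of the traffic. A toy Python sketch with invented numbers (these are not the actual DC figures):

```python
# Toy illustration of the 80/20 download pattern (all counts invented).
downloads = {
    "crime_incidents": 40000,
    "311_service_requests": 35000,
    "building_permits": 6000,
    "housing_code_enforcement": 1500,
    "purchase_card_transactions": 40,
}

total = sum(downloads.values())
share = 0
top = []
for name, count in sorted(downloads.items(), key=lambda kv: -kv[1]):
    top.append(name)
    share += count
    if share / total >= 0.8:  # stop once 80% of traffic is accounted for
        break

print(top)  # the handful of datasets carrying 80% of the traffic
```

With these made-up counts, two of the five datasets account for over 80 percent of downloads, which is the shape of the curve Nath observed.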

This analysis was part of a broader effort in San Francisco to begin measuring the return on investments in open government data. One simple measure, as many have remarked before, is the IT expenditure that government avoids when third-party innovators make it unnecessary to provide certain services or make certain investments. But this misses what seems, intuitively, to be the lion’s share of the benefit: new value that didn’t exist before, created by functionality that third-party innovators deliver but government would not. Another approach is to measure government responsiveness before and after effectiveness data begin to be published. Unfortunately, such measures are unlikely to be controlled: if services get worse, for example, it may have more to do with budget cuts than with any victory, or failure, of citizen monitoring.

Open government data advocates and activists have allies on the inside in a growing number of governmental contexts, from city hall to the White House. But for these allies to be successful, they will need to be able to point to concrete results, sooner and more urgently in the current economic climate than they might have had to otherwise. This holds a clear lesson for the activists: small, tangible steps that turn published government data into cost savings, measurable service improvements, or other concrete goods will “punch above their weight”: not only are they valuable in their own right, but they help favorably disposed civil servants make the case internally for more transparency and disclosure. Rather than aiming only for perfection and thinking about the long run, the volunteer community would benefit from seeking low-hanging fruit that will prove the concept of open government data and justify further investment.

Twittering for the Marines

The Marines recently issued an order banning social networking sites (Facebook, MySpace, Twitter, etc.). The Pentagon is reviewing this sort of policy across all services. This follows on the heels of a restrictive NFL policy along the same lines. Slashdot has a nice thread where, among other things, we learn that some military personnel will contract with off-base ISPs for private Internet connections.

There are really two separate security issues to be discussed here. First, there’s the issue that military personnel might inadvertently leak information that could be used by their adversaries. This is what the NFL is worried about. The Marines’ order makes no mention of such leaks, which would already be covered by existing rules and regulations, never mind continuing education (see, e.g., “loose lips sink ships”). Instead, our discussion will focus on the issue explicitly raised in the order: social networks as a vector for attackers to get at our military personnel.

For starters, there are other tools and techniques that can be used to protect people from visiting malicious web sites. There are blacklist services, such as Google’s Safe Browsing, built into any recent version of Firefox. There are also better browser architectures, like Google’s Chrome, that isolate one part of the browser from another. The military could easily require the use of a specific web browser. It could go one step further and provide sacrificial virtual machines, perhaps running on remote hosts and accessed via something like VNC, for personnel to surf the public Internet. A solution like this seems infinitely preferable to forcing personnel to use third-party ISPs on personal computers, where vulnerable machines may well be compromised yet go unnoticed by military sysadmins. (Or worse, the ISP could itself be compromised, giving a huge amount of intel to the enemy; contrast this with the military’s own networks and its own crypto, which are presumably designed to leak far less intel to a local eavesdropper.)
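A blacklist check is the simplest of these measures. Here is a minimal Python sketch of the idea, with a hypothetical hard-coded blocklist standing in for the continuously updated lists a service like Safe Browsing actually maintains:

```python
from urllib.parse import urlparse

# Hypothetical blocklist of known-malicious hosts; a real deployment
# would sync this from a service like Google's Safe Browsing.
BLOCKLIST = {"evil.example.net", "phish.example.org"}

def allowed(url):
    """Return True if the URL's host is not on the blocklist."""
    host = urlparse(url).hostname or ""
    # block the listed host itself and any of its subdomains
    return not any(host == bad or host.endswith("." + bad)
                   for bad in BLOCKLIST)

print(allowed("https://www.marines.mil/"))         # True
print(allowed("http://evil.example.net/payload"))  # False
print(allowed("http://login.phish.example.org/"))  # False
```

The hard part in practice isn’t this lookup; it’s keeping the list fresh, which is exactly what a centrally managed service buys you.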

Even better, the virtual machine / remote display technique allows military sysadmins to keep all kinds of forensic data. Users’ external network behavior creates a fantastic honeynet for capturing malicious payloads. If your personnel are being attacked, you want the evidence in hand to sort out who the attacker is and why you’re being attacked. That helps you block future attacks and formulate any counter-measures you might take. You could do this just as well for email programs as for web browsing. It might not work so well for games, but otherwise it’s a pretty powerful technique. (And, oh by the way, we’re talking about the military here, so personnel privacy isn’t as big a concern as it might be in other settings.)

It’s also important to consider the benefits of social networking. Military personnel are not machines. They’re people with spouses, children, and friends back home. Facebook is a remarkably efficient way to keep in touch with large numbers of friends without investing large amounts of time — ideal for the Marine, back from patrol, to get a nice chuckle when winding down before heading off to sleep.

In short, it’s problematic to ban social networking on “official” machines, which only pushes personnel to use these things on “unofficial” machines with “unofficial” ISPs, where you’re less likely to detect attacks and it’s harder to respond to them. Bring them in-house, in a controlled way, where you can better manage security issues and have happier personnel.

AP’s DRM Announcement: Much Ado About Nothing

Last week the Associated Press announced it would be developing some kind of online news registry to control use of news content. From AP’s press release:

The registry will employ a microformat for news developed by AP and which was endorsed two weeks ago by the Media Standards Trust, a London-based nonprofit research and development organization that has called on news organizations to adopt consistent news formats for online content. The microformat will essentially encapsulate AP and member content in an informational “wrapper” that includes a digital permissions framework that lets publishers specify how their content is to be used online and which also supplies the critical information needed to track and monitor its usage.

The registry also will enable content owners and publishers to more effectively manage and control digital use of their content, by providing detailed metrics on content consumption, payment services and enforcement support. It will support a variety of payment models, including pay walls.

It was hard to make sense of this, so I went looking for more information. AP posted a diagram of the system, which only adds to the confusion — your satisfaction with the diagram will be inversely proportional to your knowledge of the technology.

As far as I can tell, the underlying technology is based on hNews, a microformat for news, shown in the AP diagram, that was announced by AP and the Media Standards Trust two weeks before the recent AP announcement.

Unfortunately for AP, the hNews spec bears little resemblance to AP’s claims about it. hNews is a handy way of annotating news stories with information about the author, dateline, and so on. But it doesn’t “encapsulate” anything in a “wrapper”, nor does it do much of anything to facilitate metering, monitoring, or paywalls.

AP also says that hNews “includes a digital permissions framework that lets publishers specify how their content is to be used online.” This may sound like a restrictive DRM scheme, aimed at clawing back the rights copyright grants to users. But read the fine print. hNews does include a “rights” field that can be attached to an article, but the rights field uses ccREL, the Creative Commons Rights Expression Language, whose definition states unequivocally that it does not limit users’ rights already granted by copyright and can only convey further rights to the user. Here’s the ccREL definition, page 9:

Here are the License properties defined as part of ccREL:

  • cc:permits — permits a particular use of the Work above and beyond what default copyright law allows.
  • cc:prohibits — prohibits a particular use of the Work, specifically affecting the scope of the permissions provided by cc:permits (but not reducing rights granted under copyright).
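To make this concrete, here is roughly what an hNews-annotated story and its rights field look like, along with a minimal sketch of extracting the license. The markup is illustrative (class names follow my reading of the spec, with the rights field riding on an element carrying the `item-license` class; consult the published hNews draft for the authoritative list), and the parser is a stdlib toy, not a full microformat processor:

```python
from html.parser import HTMLParser

# Illustrative hNews-style markup; the story text and URLs are invented.
ARTICLE = """
<div class="hnews hentry">
  <h1 class="entry-title">City Council Approves Budget</h1>
  <p class="entry-content">The council voted 7-2 on Tuesday ...</p>
  <a class="item-license" rel="license"
     href="http://creativecommons.org/licenses/by/3.0/">CC BY 3.0</a>
</div>
"""

class LicenseFinder(HTMLParser):
    """Pull the href off any element carrying the item-license class."""
    def __init__(self):
        super().__init__()
        self.license_url = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "item-license" in (a.get("class") or "").split():
            self.license_url = a.get("href")

finder = LicenseFinder()
finder.feed(ARTICLE)
print(finder.license_url)
# http://creativecommons.org/licenses/by/3.0/
```

Note what the “framework” amounts to: a CSS class pointing at a license URL. There is nothing here that encrypts, wraps, meters, or enforces anything.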

It seems that there is much less to the AP’s announcement than meets the eye. If there’s a story here, it’s in the mismatch between the modest and reasonable underlying technology, and AP’s grandiose claims for it.