
What We Lose if We Lose Data.gov

In its latest proposal for the 2011 budget, Congress makes deep cuts to the Electronic Government Fund. This fund supports the continued development and upkeep of several key open government websites, including Data.gov, USASpending.gov, and the IT Dashboard. An earlier proposal would have cut the fund from $34 million to $2 million this year; the current proposal would allocate $17 million.

Reports say that major cuts to the e-government fund would force the Office of Management and Budget (OMB) to shut down these transparency sites. This would strike a significant blow to the open government movement, and I think it’s important to emphasize exactly why shuttering a site like Data.gov would be so detrimental to transparency.

On its face, Data.gov is a useful catalog. It helps people find the datasets that government has made available to the public. But the catalog is really a convenience that doesn’t necessarily need to be provided by the government itself. Since the vast majority of datasets are hosted on individual agency servers, not directly by Data.gov, private developers could replicate the catalog with relatively little effort. So even if Data.gov goes offline, nearly all of the data would still exist online, and a private developer could rebuild a version of the catalog, perhaps with even better features and interfaces.

But Data.gov also plays a crucial behind-the-scenes role, setting standards for open data and helping individual departments and agencies live up to those standards. Data.gov establishes a standard, cross-agency process for publishing raw datasets, and it gives agencies clear guidance on the mechanics and requirements for releasing each new dataset online.

There’s a Data.gov manual that formally documents and teaches this process. Each agency has a lead Data.gov point-of-contact, who’s responsible for identifying publishable datasets and for ensuring that when data is published, it meets information quality guidelines. Each dataset needs to be published with a well-defined set of common metadata fields, so that it can be organized and searched. Moreover, thanks to Data.gov, all the data is funneled through at least five stages of intermediate review—including national security and privacy reviews—before final approval and publication. That process isn’t quick, but it does help ensure that key goals are satisfied.
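To make the idea of common metadata concrete, here is a minimal sketch, in Python, of the kind of record an agency point-of-contact might fill out, together with a simple completeness check. The field names and sample values are my own illustration of the concept, not the actual Data.gov schema.

# Illustrative only: these field names approximate the kind of common
# metadata Data.gov asks for; they are not the official schema.
REQUIRED_FIELDS = ["title", "description", "agency", "keywords",
                   "date_released", "data_format", "access_url"]

dataset_record = {
    "title": "Airline On-Time Performance, 2010",          # placeholder values
    "description": "Monthly on-time arrival statistics reported by carriers.",
    "agency": "Department of Transportation",
    "keywords": ["aviation", "delays", "transportation"],
    "date_released": "2011-03-01",
    "data_format": "CSV",
    "access_url": "https://example.gov/data/ontime-2010.csv",
}

def missing_fields(record):
    """Return the required metadata fields that are absent or empty."""
    return [field for field in REQUIRED_FIELDS if not record.get(field)]

print(missing_fields(dataset_record))  # an empty list means the record is complete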

When agency staff have data they want to publish, they use a special part of the Data.gov website, which outside users never see, called the Data Management System (DMS). This back-end administrative interface allows agency points-of-contact to efficiently coordinate publishing activities agency-wide, and it gives individual data stewards a way to easily upload, view and maintain their own datasets.

My main concern is that this invaluable but underappreciated infrastructure will be lost when the IT systems behind it are defunded. The individual roles and responsibilities, the informal norms and pressures, and perhaps even the tacit authority to put new datasets online would likely disappear with it. Without that structure, far less data would be published in the future, and the datasets that do get released in an ad hoc way would likely lack the uniformity and quality that the current process creates.

Releasing a new dataset online is already a difficult task for many agencies. While the current standards and processes may be far from perfect, Data.gov provides agencies with a firm footing on which they can base their transparency efforts. I don’t know how much funding is necessary to maintain these critical back-end processes, but whatever Congress decides, it should budget sufficient funds—and direct that they be used—to preserve these critically important tools.

Release Government Data, Early and Often

One of the key axioms of modern open government is that all public data should be published online in a raw but usable form. Usability here is aimed at software programmers: when government datasets are easier to work with, programmers are more likely to innovate in the civic sphere, building technologies on top of the raw data that strengthen the relationships among citizens and between citizens and their government.

The open government community has provided plenty of valuable guidance about what usability means for programmers. We proclaim that all datasets need to be: published in a format that is reasonably structured and machine-processable; well-documented; downloadable in bulk; authenticated using cryptographic digital signatures; version-controlled; permanent and citable; and the list goes on and on. These are all worthy principles to be sure, and all government datasets should strive to meet them.
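As a rough illustration of what a couple of these principles look like from the programmer’s side, here is a sketch of fetching a bulk data file and verifying its integrity against a published SHA-256 checksum. The URL and checksum are placeholders I invented; a full cryptographic signature would involve a tool such as GnuPG rather than a bare hash.

import hashlib
import urllib.request

# Placeholder URL for a bulk data file and the checksum the agency would
# publish alongside it (both invented for illustration).
DATA_URL = "https://example.gov/data/spending-2011.csv"
EXPECTED_SHA256 = "replace-with-published-checksum"

def sha256_of(url):
    """Download a file in chunks and return its SHA-256 digest as hex."""
    digest = hashlib.sha256()
    with urllib.request.urlopen(url) as response:
        for chunk in iter(lambda: response.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

if sha256_of(DATA_URL) == EXPECTED_SHA256:
    print("Checksum matches: the bulk file is intact.")
else:
    print("Checksum mismatch: the file is corrupted or has changed.")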

But you’ll be hard-pressed to find any government dataset that satisfies all of these principles out of the box. While some are in better shape than others, most datasets would make programmers cringe. Data often exist only as informal working sets in proprietary Excel spreadsheets. Sometimes they live in structured databases, but the schemas are undocumented, field values are ambiguous, and the semantics are understood only by the employee who created them. Datasets have errors and biases that are known but never explicitly corrected.

For a civil servant acting as a data caretaker, looking over this laundry list of publishing principles, there’s frequently a huge quality chasm between the dataset she owns and the way people are asking to see it released. To her, publishing the data adequately just seems like a lot of extra work. The more attractive alternative is to put off the data publishing (it’s not in her job description or evaluations anyway) and move on to other work instead.

How can this chasm be bridged? A philosophy widely adopted in software development and entrepreneurship would serve open government data well: release early and release often. And listen to your customers.

In the software development world, a working version of the product is pushed out as soon as possible, known imperfections and all (an “alpha” release), so that it can be subject to real use by early adopters. Those early adopters provide helpful feedback about what works, what’s broken, and which new features would be most useful to them. The developers then iterate quickly: they incorporate the suggested fixes and features into their code and release an updated version to their users, and the virtuous cycle starts again. Under this philosophy, developers can focus their effort on improving the code where it matters most, and users get software that works better and has more of the features they want.

The “release early, release often” philosophy should be applied to government data. For the initial release, data caretakers should take the path of least resistance to get data out the door. This means publishing datasets in whatever format is most convenient, along with as much documentation as can reasonably be mustered. Documentation is especially important with an “alpha” dataset—proper warnings about its problems, instabilities and inductive limitations must be prominently displayed. (Of course, the usual privacy and legal caveats should also be applied.) Sometimes, the “alpha” release will be “good enough” for programmers to start their work, and this will minimize any superfluous work done by caretakers. This is the virtue of “release early.”
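To make the path of least resistance concrete, here is a sketch of what an initial “alpha” release might involve: exporting the working spreadsheet to CSV and generating a bare-bones data dictionary that lists each column with one sample value and a prominent warning. The file names and columns are hypothetical.

import csv

# Hypothetical CSV export of the caretaker's working spreadsheet.
SOURCE_CSV = "inspections_working_copy.csv"

with open(SOURCE_CSV, newline="") as f:
    first_row = next(csv.DictReader(f))  # one sample row is enough for a sketch

with open("DATA_DICTIONARY.txt", "w") as out:
    out.write("ALPHA RELEASE: fields are only lightly documented and values\n")
    out.write("may contain known errors. Use with caution.\n\n")
    for column, sample in first_row.items():
        out.write(f"{column}: e.g. {sample!r}  (full description to come)\n")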

In other cases, programmers will need assistance using the dataset and will notice problem spots in the initial release. The dataset might be confusing, contain errors, or be difficult to work with. A tight feedback mechanism allows the programmer to get help quickly and continue to innovate, while the data caretaker can fix problems based on real use cases and add clarifying metadata to an updated version of the dataset. Data quality and usability increase for everyone working with the dataset, both inside and outside of government. That’s the virtue of “release often.”

And here is the big opportunity for government: no platform currently exists to engage the prime audience for government data, software programmers. Without a tight feedback mechanism, the virtuous cycle of mutual benefit cannot exist. Government is missing its best opportunity to improve data quality by neglecting useful feedback from the programmers who are actually tinkering with the datasets. Society is losing out on potentially game-changing civic innovations that would otherwise be built if the data were more usable and the risk of wasted effort lower.

A terrific start in turning the corner would be for government to adopt an issue-tracking system for its datasets. As a public venue, it would help ensure that data caretakers are prompt in addressing developer concerns. It would also allow caretakers to organize feedback in a formal way. Such platforms are commonplace in any successful software development venture. The same needs to be true for government data in order to drive rapid quality improvements and increase developer engagement.
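There is no official platform like this yet, but as a sketch of how low the barrier could be, here is how a developer might file a report against a dataset whose caretaker accepts feedback through an ordinary public issue tracker, in this case the GitHub Issues API. The repository name, token, and issue contents are assumptions of mine, not an existing government service; any comparable tracker would do.

import json
import urllib.request

# Assumption: each published dataset has a public issue tracker, here a
# GitHub repository named after the dataset.
REPO = "example-agency/ontime-2010-dataset"
TOKEN = "YOUR_API_TOKEN"  # placeholder; never hard-code real credentials

issue = {
    "title": "Undocumented carrier codes in the 2010 file",
    "body": "Codes 'XX' and 'ZZ' appear in the carrier column but are not "
            "explained in the data dictionary.",
    "labels": ["data-quality"],
}

request = urllib.request.Request(
    f"https://api.github.com/repos/{REPO}/issues",
    data=json.dumps(issue).encode("utf-8"),
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    method="POST",
)
with urllib.request.urlopen(request) as response:
    print("Filed issue:", json.load(response)["html_url"])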

What Open Data Means to Marginalized Communities

Two symbols of this era of open data are President Obama’s Open Government Initiative, a directive that has led agencies to post their results online and open up datasets, and Ushahidi, a tool for crowdsourcing crisis information. While these tools are bringing openness to governance and crisis response respectively, I believe we have yet to find a good answer to the question: what does open data mean for the long-term social and economic development of poor and marginalized communities?

I came to Nairobi on a hunch. The hunch was that a small digital mapping experiment taking place in the Kibera slum would matter deeply, both for Kiberans who want to improve their community, and for practitioners keen to use technology to bring the voiceless into a conversation about how resources are allocated on their behalf.

So far I haven’t been disappointed. Map Kibera, an effort to create the first publicly available map of Kibera, is the brainchild of Mikel Maron, a technologist and OpenStreetMap founder, and Erica Hagen, a new media and development expert, and it is driven by a group of 13 intrepid mappers from the Kibera community. Phase I, carried out in partnership with SODNET (an incredible local technology-for-social-change group), was the creation of the initial map layer on OpenStreetMap (see Mikel’s recent presentation at Where 2.0). Phase II, with the generous support of UNICEF, will focus on making the map useful for even the most marginalized groups within the Kibera community.

What we have in mind is quite simple: add massive amounts of data to the map in three categories (health services, public safety/vulnerability, and informal education), then experiment with ways to increase awareness and the ability to advocate for better service provision. The resulting toolbox, which will combine no-tech approaches (drawing on printed maps) with tech-based ones (SMS reporting, Ushahidi, and new media creation), will help us collectively answer questions about how open data itself, and the narration of that data through citizen media and face-to-face conversations, can help even the most marginalized transform their communities.
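To give a flavor of what “adding data to the map” means in practice, here is a sketch of how a single facility recorded by a mapper might be represented with OpenStreetMap-style tags before upload. The coordinates and names are invented; the tag keys follow common OpenStreetMap conventions.

# Illustrative only: a health facility recorded by a Map Kibera mapper,
# expressed as an OpenStreetMap-style node with key/value tags.
clinic = {
    "lat": -1.3131,   # invented coordinates in the Kibera area
    "lon": 36.7880,
    "tags": {
        "amenity": "clinic",                 # common OSM tag for health facilities
        "name": "Example Community Clinic",  # placeholder name
        "operator": "Example CBO",
        "opening_hours": "Mo-Fr 08:00-17:00",
    },
}

def to_osm_xml(node_id, node):
    """Render the node as a minimal OSM XML fragment for review or upload."""
    tags = "\n".join(
        f'    <tag k="{k}" v="{v}"/>' for k, v in node["tags"].items()
    )
    return (
        f'<node id="{node_id}" lat="{node["lat"]}" lon="{node["lon"]}">\n'
        f"{tags}\n</node>"
    )

print(to_osm_xml(-1, clinic))  # negative ids conventionally mark new objects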

We hope the methodology we develop, which will be captured on our wiki, can be adapted by other communities around Kenya and in places like Haiti, where it is critical to enable Haitians to own their own vision of a renewed nation.
Cross-posted to the In An African Minute blog.

Introducing RECAP: Turning PACER Around

With today’s technologies, government transparency means much more than the chance to read one document at a time. Citizens today expect to be able to download comprehensive government datasets that are machine-processable, open and free. Unfortunately, government is much slower than industry when it comes to adopting new technologies. In recent years, private efforts have helped push government, the legislative and executive branches in particular, toward greater transparency. Thus far, the judiciary has seen relatively little action.

Today, we are excited to announce the public beta release of RECAP, a tool that will help bring an unprecedented level of transparency to the U.S. federal court system. RECAP is a plug-in for the Firefox web browser that makes it easier for users to share documents they have purchased from PACER, the courts’ pay-to-play access system. With the plug-in installed, users still have to pay each time they use PACER, but whenever they do retrieve a PACER document, RECAP automatically and effortlessly donates a copy of that document to a public repository hosted at the Internet Archive. The documents in this repository are, in turn, shared with other RECAP users, who are notified whenever documents they are looking for can be downloaded from the free public repository. RECAP helps users exercise their rights under copyright law, which expressly places works of the federal government in the public domain. It also helps users advance the public good by contributing to an extensive and freely available archive of public court documents.
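As a rough sketch of the “check the free repository first” idea, here is how a script might query the Internet Archive’s public search API for documents from a particular case before paying PACER. The collection name and the docket-number query are assumptions of mine, not a documented RECAP interface; the plug-in itself handles all of this transparently inside the browser.

import json
import urllib.parse
import urllib.request

# Assumption: RECAP documents sit in an Internet Archive collection that can
# be searched by docket number; both values below are placeholders.
query = 'collection:(usfederalcourts) AND "1:09-cv-00123"'
params = urllib.parse.urlencode({
    "q": query,
    "fl[]": "identifier",
    "rows": 10,
    "output": "json",
})
url = "https://archive.org/advancedsearch.php?" + params

with urllib.request.urlopen(url) as response:
    docs = json.load(response)["response"]["docs"]

if docs:
    for doc in docs:
        print("Free copy:", "https://archive.org/details/" + doc["identifier"])
else:
    print("Not in the public repository yet; a PACER purchase is required.")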

The project’s website, https://www.recapthelaw.org, has all of the details: how to install RECAP, a screencast of the plug-in in action, more discussion of why this issue matters, and a host of other goodies.

The repository already has over one million documents available for free download. With the help of RECAP users, we can recapture truly public access to the court proceedings that give our laws their practical meaning.