December 3, 2024

Final version of Government Data and the Invisible Hand

Thanks to the hard work of our patient editors at the Yale Journal of Law and Technology, my coauthors and I can now share the final version of our paper about online transparency, Government Data and the Invisible Hand.

If you have read the first version, you know that our paper is informed by a deep disappointment with the current state of the federal government’s Internet presence. A naive viewer, as we once were, might look at the chaos of clunky sites in .gov and entertain doubts about the webmasters who run those sites. But that would be—and was, on our part—a mistake. We’re happy to set the record straight today.

Barack Obama’s web team is certainly one of the best that has ever been assembled. His staff did a fantastic job on the campaign site, and produced an excellent, if slightly less dynamic, transition site at Change.gov. On its way to the White House, however, a team made up of many of the same people seemed to lose its mojo. The complaints about the new Whitehouse.gov site—slow to be updated, lacking in interactivity—are familiar to observers of other .gov sites throughout the government.

What happened? It’s not plausible to suppose that Obama’s staffers have somehow gotten worse as they have moved from campaign to transition to governance. Instead, they have faced an increasingly stringent and burdensome array of regulations as they have become progressively more official. The transition was a sort of intermediate phase in this respect, and the new team now faces the Presidential Records Act, the Paperwork Reduction Act, and a number of other pre-Internet statutory obligations. This experience teaches that the limitations of the federal web reflect the thicket of rules to which such sites are subject—not the hardworking people who labor under those rules.

One of the most exciting things about the new administration’s approach to online media is the way it seeks to enable federal webmasters to move beyond some of the limitations of dated policies, using their expertise to leverage government data online.

My coauthors and I look forward to continuing to work on these issues. We are humbled to recognize the remarkable reservoir of talent and energy that is being brought to bear on the problem, from both within and beyond government.

New USACM Policy Recommendations on Open Government

USACM is the Washington policy committee of the Association for Computing Machinery, the professional association that represents computer scientists and computing practitioners. Today, USACM released Policy Recommendations on Open Government. The recommendations offer simple, clear advice to help Congress and the new administration make government initiatives—like the pending recovery bill—transparent to citizens.

The leading recommendation is that data be published in formats that “promote analysis and reuse of the data”—in other words, machine-readable formats that give citizens, rather than only government, the chance to decide how the data will be analyzed and presented. Regular Freedom to Tinker readers may recall that we have made this argument here before: The proposed Recovery.gov should offer machine-readable data, rather than only government-issue “presentations” of it. Ed and I both took part in the working group that drafted these new recommendations, and we’re pleased to be able to share them with you now, while the issue is in the spotlight.

Today’s statement puts the weight of America’s computing professionals behind the push for machine-readable government data. It also sends a clear signal to the Executive Branch, and to Congress, that America’s computing professionals stand ready to help realize the full potential of new information technologies in government.

Here are the recommendations in full:

  • Data published by the government should be in formats and approaches that promote analysis and reuse of that data.
  • Data republished by the government that has been received or stored in a machine-readable format (such as online regulatory filings) should preserve the machine-readability of that data.
  • Information should be posted so as to also be accessible to citizens with limitations and disabilities.
  • Citizens should be able to download complete datasets of regulatory, legislative or other information, or appropriately chosen subsets of that information, when it is published by government.
  • Citizens should be able to directly access government-published datasets using standard methods such as queries via an API (Application Programming Interface).
  • Government bodies publishing data online should always seek to publish using data formats that do not include executable content.
  • Published content should be digitally signed or include attestation of publication/creation date, authenticity, and integrity.
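The last recommendation can be made concrete with a minimal sketch, using only Python’s standard library. A bare hash digest attests only to integrity; a real deployment would use full public-key signatures, which also attest to authenticity and publication date. The file contents and function names below are invented purely for illustration.

```python
import hashlib


def digest_for_publication(data: bytes) -> str:
    """Return a hex SHA-256 digest to publish alongside a dataset,
    so downstream users can check that their copy was not altered."""
    return hashlib.sha256(data).hexdigest()


def verify_download(data: bytes, published_digest: str) -> bool:
    """Recompute the digest over the downloaded bytes and compare."""
    return hashlib.sha256(data).hexdigest() == published_digest


# A toy "dataset" standing in for a published regulatory filing.
dataset = b"agency,award,amount\nDOT,Bridge Repair,1200000\n"
digest = digest_for_publication(dataset)

assert verify_download(dataset, digest)             # intact copy passes
assert not verify_download(dataset + b"x", digest)  # altered copy fails
```

Publishing the digest through a separate, trusted channel (or signing it) is what turns this integrity check into a meaningful attestation.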

New Site Tests Crowd-Sourced Transparency

Some of my colleagues here at CITP have written about the importance of open data formats for promoting government transparency and achieving government accountability. Another leading thinker in this area is my friend Jerry Brito, a George Mason University scholar who contributed a post here at Freedom to Tinker last year. Jerry wrote one of the first papers on the importance of mashups using government data. Now, Jerry and a few collaborators have put his ideas into action by building a site called Stimulus Watch that will facilitate crowd-sourced analysis of the hundreds of billions of dollars of deficit spending that President Obama has made a centerpiece of his economic agenda.

Jerry and his collaborators parsed a report containing more than 10,000 “shovel ready” spending proposals from the nation’s mayors. Many of these proposals will likely be funded if Congress approves Obama’s spending bill. Using the site, ordinary Americans across the country can review the proposals in their own metropolitan areas and provide feedback on which proposals deserve the highest priority. As the site grows in popularity, it may prove extremely valuable for federal officials deciding where to allocate money. And if there are turkeys like the “Bridge to Nowhere” among the mayors’ requests, the site will allow citizens to quickly identify and publicize these proposals and perhaps shame government officials into canceling them.

The (Ironic) Best Way to Make the Bailout Transparent

The next piece of proposed bailout legislation is called the American Recovery and Reinvestment Act of 2009. Chris Soghoian, who is covering the issue on his Surveillance State blog at CNET, brought the bill to my attention, particularly a provision requiring that a new web site called Recovery.gov “provide data on relevant economic, financial, grant, and contract information in user-friendly visual presentations to enhance public awareness of the use of funds made available in this Act.” As a group of colleagues and I suggested last year in Government Data and the Invisible Hand, there’s an easy way to make rules like this one a great deal more effective.

Ultimately, we all want information about bailout spending to be available in the most user-friendly way to the broadest range of citizens. But is a government monopoly on “presentations” of the data the best way to achieve that goal? Probably not. If Congress orders the federal bureaucracy to provide a web site for end users, then we will all have to live with the one web site it cooks up. Regular citizens would have more and better options for learning about the bailout if Congress told the executive branch to provide the relevant data in a structured, machine-readable format such as XML, so that many different sites could be built to analyze it. (A government site aimed at end users would also be fine. But we’re only apt to get machine-readable data if Congress makes it a requirement.)
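The contrast between a fixed presentation and reusable data can be made concrete with a minimal sketch in Python’s standard library. Everything here (the element names, the recipients, the dollar amounts) is invented for illustration; no actual Recovery.gov data format is assumed.

```python
import xml.etree.ElementTree as ET

# A hypothetical machine-readable spending feed; the schema is
# invented purely for illustration.
FEED = """
<awards>
  <award recipient="Example Bank" state="NY" amount="25000000000"/>
  <award recipient="Sample Motors" state="MI" amount="13400000000"/>
  <award recipient="Acme Insurance" state="CT" amount="40000000000"/>
</awards>
"""

root = ET.fromstring(FEED)

# With structured data, anyone can compute totals, per-state breakdowns,
# or any other presentation directly; no reverse-engineering of charts.
total = sum(int(a.get("amount")) for a in root.iter("award"))

by_state = {}
for a in root.iter("award"):
    state = a.get("state")
    by_state[state] = by_state.get(state, 0) + int(a.get("amount"))

print(f"total: ${total:,}")
```

The same file could feed a map, a time-series graph, a screen-reader-friendly table, or a Spanish-language front end, each built by whoever cares to build it.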

Why does this matter? Because without the underlying data, anyone who wants to provide a useful new tool for analysis must first try to reconstruct the underlying numbers from the “user-friendly visual presentations” or “printable reports” that the government publishes. Imagine trying to convert a nice-looking graph back into a list of figures, or trying to turn a printed transcript of a congressional debate into a searchable database of who said what and when. It’s not easy.

Once the computer-readable data is out there—whether straightforwardly published by the government officials who have it in the first place, or painstakingly recreated by volunteers who don’t—we know that a small army of volunteers and nonprofits stands ready to create tools that regular citizens, even those with no technical background at all, will find useful. This group of volunteers is itself a small constituency, but the things they make, like Govtrack, Open Congress, and Washington Watch, are used by a much broader population of interested citizens. The federal government might decide to put together a system for making maps or graphs. But what about an interactive one like this? What about three-dimensional animated visualizations over time? What about an interface that’s specially designed for blind users, who still want to organize and analyze the data but may be unable to benefit as most of us can from visualizations? There might be an interface in Spanish, the second most common American language, but what about one in Tagalog, the sixth most common?

There’s a deep and important irony here: The best way for government data to reach the broadest possible population is probably to release it in a form that nobody wants to read. XML files are called “machine-readable” because they make sense to a computer, rather than to human eyes. Releasing the data that way—so a variety of “user-friendly presentations,” to match the variety of possible users, can emerge—is what will give regular citizens the greatest power to understand and react to the bailout. It would be a travesty to make government the only source for interaction with bailout data—the transparency equivalent of central planning. It would be better for everyone, and easier, to let a thousand mashups bloom.

Taking Advantage of Citizen Contrarians

In my last post, I argued that sifting through citizens’ questions for the President is a job best done outside of government. More broadly, there’s a class of input that is good for government to receive, but that probably won’t be welcome at the staff level, where moment-to-moment success is more of a concern than long-term institutional thriving. Tough questions from citizens are in this category. So is unexpected, challenging, or contrarian citizen advice or policy input. A flood of messages telling the President “I’m in favor of what you already plan to do,” perhaps leavened with a sprinkling of “I respectfully disagree, but still like you anyway,” would make for great PR. Better yet, since such messages don’t offer action-guiding advice, they don’t actually drive any change whatsoever in what anyone in government—from the West Wing to the furthest corners of the executive branch—does.

Will the new administration set things up to run this way? I don’t know. Certainly, the cookie-cutter blandness of its responses to the first round of online citizen questions is not a promising sign. There’s no question that Obama himself sees some value in real, tough questions from the masses. But the immediate practical advantages of a choir that echoes the preacher may be a much more attractive prospect for his staff than the scrambling, searching, and actual policy rethinking that might have to follow tough questions or unexpected advice.

This outcome would be a lost opportunity precisely because there are pockets of untapped expertise, uncommon wisdom, and bright ideas out there. Surfacing these insights—the inputs that weren’t already going to be incorporated into the policy process, the thoughts that weren’t talking points during the campaign, the things we didn’t already know—is precisely what the new collaborative technologies have made possible.

On the other hand, in order for this to work, we need to be able to regard (at least some of) the surprising, unexpected or quirky citizen inputs as successes for the system that attracted them, rather than failures. We can already find out what the median voter thinks, without all these fancy new systems, and in any case, his or her opinion is unlikely to add new or unexpected value to the policy process.

Obamacto.org, a potential model for external sites that gather citizen input for government, has a leaderboard of suggested priorities for the new CTO, voted up by visitors to the site. The first three suggestions are net neutrality regulation, Patriot Act repeal and DMCA repeal—unsurprising major issues. Arguably, if enough people took part in the online voting, there would be some value in knowing how the online group had prioritized these familiar requests. But with the fourth item, things get interesting: it reads “complete the job on metrication that Ronald Reagan defunded.”

On the one hand, my first reaction to this is to laugh: Regardless of whether moving to the metric system would be a good idea, it doesn’t have nearly the political support today that would be needed for it to be a plausible priority for Obama’s CTO. Put another way, there’s no chance that persuading America to do this is the best use of the new administration’s political capital.

On the other hand, maybe that’s what these sorts of online fora are for: Changing which issues are on the table, and how we think about them. The netroots turned net neutrality into a mainstream political issue, and for all I know they (or some other constellation of political forces) could one day do the same for the drive to go metric.

Readers, commenters: What do you think? Are quirky inputs like the suggestion that Obama’s CTO focus on metrication a hopeful sign for the value new deliberative technologies can add in the political process? Or, are they a sign that we haven’t figured out how these technologies should work or how to use them?