April 26, 2024

Government Online: Outreach vs. Transparency

These days everybody in Washington seems to be jumping on the Twitter bandwagon. The latest jumpers are four House committees, according to Tech Daily Dose.

The committees, like a growing number of individual members’ offices, plan to use Twitter as a new tool to reach their audience and ensure transparency between the government and the public.

“I believe government works best when it is transparent and information is accessible to all….” [said a committee chair].

I’m all in favor of public officials using technology to communicate with us. But Twitter is a tool for outreach, not transparency.

Here’s the difference: outreach means government telling us what it wants us to hear; transparency means giving us the information that we, the citizens, want to get. An ideal government provides both outreach and transparency. Outreach lets officials share their knowledge about what is happening, and it lets them argue for particular policy choices — both of which are good. Transparency keeps government honest and responsive by helping us know what government is doing.

Twitter, with its one-way transmission of 140-character messages, may be useful for outreach, but it won’t give us transparency. So, Congressmembers: Thanks for Twittering, but please don’t forget about transparency.

(Interestingly, the students in my tech policy class were surprised to hear that any of the digerati had ever Twittered. The students think of Twitter as a tool for aging hepcat techno-poseurs. [Insert your own joke here.])

Meanwhile, the Obama team is having trouble transitioning its famous online outreach machinery into government, according to Jose Antonio Vargas’s story in the Washington Post:

WhiteHouse.gov, envisioned as the primary vehicle for President Obama to communicate with the online masses, has been overwhelmed by challenges that staffers did not foresee and technological problems they have yet to solve.

Obama, for example, would like to send out mass e-mail updates on presidential initiatives, but the White House does not have the technology in place to do so. The same goes for text messaging, another campaign staple.

Beyond the technological upgrades needed to enable text broadcasts, there are security and privacy rules to sort out involving the collection of cellphone numbers, according to Obama aides, who acknowledge being caught off guard by the strictures of government bureaucracy.

Here again we see a difference between outreach and transparency. Outreach, by its nature, must be directed by government. But transparency, which aims to offer citizens the information they want, is best embodied by vigorous activity outside of government, enabled by government providing free and open access to data. As we argued in our Invisible Hand paper, many things are inherently more difficult to do inside of government, so the key role of government is to enable a marketplace of ideas in the private sector, rather than doing the whole job.

Kundra Named As Federal CIO

Today, the Obama administration named Vivek Kundra as the Chief Information Officer of the U.S. government, a newly created position.

This is great news. Kundra, in his previous role as CTO of the District of Columbia, made great strides in opening the DC government by publishing government data. When he spoke at our Thursday Forum last fall, everyone was impressed by how quickly and effectively he had transformed the DC government’s approach to technology.

First, he set up an open Data Catalog, where lots of data collected by the DC government is freely available in standard formats. Second, he ran the Apps for Democracy contest, in which he challenged citizens to develop applications to take advantage of all the data that the DC government is publishing. The results were impressive—with 47 different apps submitted by citizens—and also inexpensive.
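
To make the “apps built on open data” idea concrete, here is a minimal sketch of what a contest-style app might look like: it fetches a dataset published in a standard, machine-readable format and tallies a simple statistic. This is only an illustration; the catalog URL and field names below are invented, not taken from the actual DC Data Catalog or from any contest entry.

    // Hypothetical sketch: a tiny civic app built on an open data catalog.
    // The URL and field names are invented for illustration; a real app would
    // use whatever datasets and schemas the catalog actually publishes.
    const CATALOG_URL = "https://data.example.gov/service-requests.json";

    async function summarizeOpenRequests() {
      const response = await fetch(CATALOG_URL); // the catalog serves plain JSON
      if (!response.ok) {
        throw new Error("Fetch failed: " + response.status);
      }
      const requests = await response.json(); // assume an array of records

      // Count open service requests per neighborhood.
      const counts = {};
      for (const r of requests) {
        if (r.status === "open") {
          counts[r.neighborhood] = (counts[r.neighborhood] || 0) + 1;
        }
      }
      console.table(counts);
    }

    summarizeOpenRequests().catch(console.error);

The point is less the code than the enabling condition: because the data is freely available in a standard format, an application like this can be written by anyone, without asking the government’s permission.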

Most impressively, in doing this he overcame the natural inertia of big city government. The Federal government will be even harder to budge, but with the right support from the top, Kundra could bring a new level of openness and tech-friendliness to the government.

New Internet? No Thanks.

Yesterday’s New York Times ran a piece, “Do We Need a New Internet?” suggesting that the Internet has too many security problems and should therefore be rebuilt.

The piece has been widely criticized in the technical blogosphere, so there’s no need for me to pile on. Anyway, I have already written about the redesign-the-Net meme. (See Internet So Crowded, Nobody Goes There Anymore.)

But I do want to discuss two widespread misconceptions that found their way into the Times piece.

First is the notion that today’s security problems are caused by weaknesses in the network itself. In fact, the vast majority of our problems occur on, and are caused by weaknesses in, the endpoint devices: computers, mobile phones, and other widgets that connect to the Net. The problem is not that the Net is broken or malfunctioning; it’s that the endpoint devices are misbehaving — so the best solution is to secure the endpoint devices. To borrow an analogy from Gene Spafford, if people are getting mugged at bus stops, the solution is not to buy armored buses.

(Of course, there are some security issues with the network itself, such as vulnerability of routing protocols and DNS. We should work on fixing those. But they aren’t the problems people normally complain about — and they aren’t the ones mentioned in the Times piece.)

The second misconception is that the founders of the Internet had no plan for protecting against the security attacks we see today. Actually, they did have a plan, one that was simple and, if executed flawlessly, would have been effective. The plan was that endpoint devices would not have remotely exploitable bugs.

This plan was plausible, but it turned out to be much harder to execute than the founders could have foreseen. It has become increasingly clear over time that developing complex Net-enabled software without exploitable bugs is well beyond the state of the art. The founders’ plan is not working perfectly. Maybe we need a new plan, or maybe we need to execute the original plan better, or maybe we should just muddle through. But let’s not forget that there was a plan, and it was reasonable in light of what was known at the time.

As I have said before, the Internet is important enough that it’s worthwhile having people think about how it might be redesigned, or how it might have been designed differently in the first place. The Net, like any large human-built institution, is far from perfect — but that doesn’t mean that we would be better off tearing it down and starting over.

Please participate in a research project — requires only one click

As part of a research project on web browser security, we are currently taking a “census” of browser installations. We hope you’ll agree to participate.

If you do participate, a small snippet of JavaScript will collect your browser’s settings and send them to our server. We will record a cryptographic hash of those settings in our research database. We will also store a non-unique cookie (saying only that you participated) in your browser. We will do all of this immediately if you click this link.

(If you want to see the JavaScript code we run on participants’ machines in advance, you can read it here.)
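
For readers curious about the mechanics, the sketch below shows roughly what a census snippet of this kind might do, written against modern browser APIs. It is an illustration of the general approach rather than the study’s actual code (which is linked above), and the submission URL is a made-up placeholder.

    // Illustrative sketch only; the real study's code is linked above.
    // The submission URL is a hypothetical placeholder.
    async function runBrowserCensus() {
      // Collect a handful of browser settings (a real census would gather more).
      const settings = JSON.stringify({
        userAgent: navigator.userAgent,
        language: navigator.language,
        screen: screen.width + "x" + screen.height + "x" + screen.colorDepth,
        timezoneOffset: new Date().getTimezoneOffset(),
      });

      // Record only a cryptographic hash of the settings, not the raw values.
      const digestBuffer = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(settings)
      );
      const hash = Array.from(new Uint8Array(digestBuffer))
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");

      // Send the hash to the research server.
      await fetch("https://census.example.edu/submit", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ hash: hash }),
      });

      // Mark participation with a non-unique cookie (the same value for everyone).
      document.cookie = "census_participant=1; max-age=31536000; path=/";
    }

    runBrowserCensus();

The hash and the non-unique cookie mirror the two commitments described above: only a hash of the settings is recorded, and the cookie says nothing beyond the fact of participation.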

[I revised this entry to be more clear about what we are doing. — Ed]

DRM In Retreat

Last week’s agreement between Apple and the major record companies to eliminate DRM (copy protection) in iTunes songs marks the effective end of DRM for recorded music. The major online music stores are now all DRM-free, and CDs still lack DRM, so consumers who acquire music will now expect it without DRM. That’s a sensible result, given the incompatibility and other problems caused by DRM, and it’s a good sign that the record companies are ready to retreat from DRM and get on with the job of reinventing themselves for the digital world.

In the movie world, DRM for stored content may also be in trouble. On DVDs, the CSS DRM scheme has long been a dead letter, technologically speaking. The Blu-ray scheme is stronger, but if Blu-ray doesn’t catch on, that strength won’t matter.

Interestingly, DRM is not retreating as quickly in systems that stream content on demand. This makes sense because the drawbacks of DRM are less salient in a streaming context: there is no need to maintain compatibility with old content; users can be assumed to be online so software can be updated whenever necessary; and users worry less about preserving access when they know they can stream the content again later. I’m not saying that DRM causes no problems with streaming, but I do think the problems are less serious than in a stored-content setting.

In some cases, streaming uses good old-fashioned incompatibility in place of DRM. For example, a stream might use a proprietary format, and the most convenient software for watching streams might lack a “save this video” button.

It remains to be seen how far DRM will retreat. Will it wither away entirely, or will it hang on in some applications?

Meanwhile, it’s interesting to see traditional DRM supporters back away from it. RIAA chief Mitch Bainwol now says that the RIAA is agnostic on DRM. And DRM cheerleader Bill Rosenblatt has relaunched his “DRM Watch” blog under the new title “Copyright and Technology”. The new blog’s first entry: iTunes going DRM-free.