December 22, 2024

Where are the Technologists on the EAC Advisory Board?

Barbara Simons, an accomplished computer scientist and e-voting expert, was recently appointed to the Election Assistance Commission (EAC) Board of Advisors. (The EAC is the U.S. Federal body responsible for voting technology standards, among other things.) This is good news.

The board has thirty-seven members, four of which are reserved for “members representing professionals in the field of science and technology”. These four positions are to be appointed by the Majority and Minority Leaders in the House and the Senate. (See page 2 of the Board’s charter.) Given the importance of voting technology issues to the EAC, it does seem like a good idea to reserve 10% of the advisory board positions for technologists. If anything, the number of technologist seats should be larger.

Barbara was appointed to the board position by the Senate Majority Leader, Harry Reid. Kudos to Senator Reid for appointing a genuine voting technology expert.

What about the other three seats for “professionals in the field of science and technology?” Sadly, the board’s membership list shows that these seats are not actually held by technical people. Barbara Arnwine holds the House Speaker’s seat, Tom Fuentes holds the House Minority Leader’s seat, and Wesley R. Kliner, Jr. holds the Senate Minority Leader’s seat. All three appear to be accomplished people who have something to offer on the board. But as far as I can tell they are not “professionals in the field of science and technology”, so their appropriate positions on the board would be somewhere in the other thirty-three seats.

What can be done? I wouldn’t go so far as to kick any current members off the board, even if that were possible. But when vacancies do become available, they should be filled by scientists or technologists, as dictated by the charter’s requirement of having four such people on the board.

The EAC is struggling with voting technology issues. They could surely use the advice of three more expert technologists.

Plenty of Blame to Go Around in Yahoo Music Shutdown

People have been heaping blame on Yahoo after it announced plans to shut down its Yahoo Music Store DRM servers on September 30. The practical effect of the shutdown is to make music purchased at the store unusable after a while.

Savvy customers tended to avoid buying music in forms like this, where a company had to keep distant servers running to keep the purchased music alive. But those customers who did buy – taking reassurances from Yahoo and the music industry at face value – are rightly angry. In the face of similar anger, Microsoft backtracked on plans to shutter its DRM servers. It looks like Yahoo will stay the course.

Yahoo deserves blame here, but let’s not forget who else contributed to this mess. Start with the record companies for pushing this kind of DRM, and the DRM agenda generally, despite the ample evidence that it would inconvenience paying customers without stopping infringement.

Even leaving aside past mistakes, copyright owners could step in now to help users, either by enticing Yahoo to keep its servers running, or by helping Yahoo create and distribute software that translates the music into a usable form. If I were a Yahoo Music customer, I would be complaining to the copyright owners now, and asking them to step in and stand behind their product.

Finally, let’s not forget the role of Congress. The knowledge of how to jailbreak Yahoo Music tracks and transform them into a stable, usable form exists and could easily be packaged in software form. But Congress made it illegal to circumvent Yahoo’s DRM, even to enable noninfringing use of a legitimately purchased song. And they made it illegal to distribute certain software tools to enable those uses. If Congress had paid more attention to consumer interests in drafting the Digital Millennium Copyright Act, or if it had passed any of the remedial legislation offered since the DMCA took effect, then the market could solve this Yahoo problem all on its own. If I were a Yahoo Music customer, I would be complaining to Congress now, and asking them to stop blocking consumer-friendly technologies.

And needless to say, I wouldn’t be buying DRM-encumbered songs any more.

UPDATE (July 29, 2008): Yahoo has now done the right thing, offering to give refunds or unencumbered MP3s to the stranded customers. I wonder how much this is costing Yahoo.

What's the Cyber in Cyber-Security?

Recently Barack Obama gave a speech on security, focusing on nuclear, biological, and infotech threats. It was a good, thoughtful speech, but I couldn’t help noticing how, in his discussion of the infotech threats, he promised to appoint a “National Cyber Advisor” to give the president advice about infotech threats. It’s now becoming standard Washington parlance to say “cyber” as a shorthand for what many of us would call “information security.” I won’t fault Obama for using the terminology spoken by the usual Washington experts. Still, it’s interesting to consider how Washington has developed its own terminology, and what that terminology reveals about the inside-the-beltway view of the information security problem.

The word “cyber” has interesting roots. It started with an old Greek word meaning (roughly) one who guides a boat, such as a pilot or rudder operator. Plato adapted this word to mean something like “governance”, on the basis that governing was like steering society. Already in ancient Greece, the term had taken on connotations of central government control.

Fast-forward to the twentieth century. Norbert Wiener foresaw the rise of sophisticated robots, and realized that a robot would need something like a brain to control its mechanisms, as your brain controls your body. Wiener predicted correctly that this kind of controller would be difficult to design and build, so he sought a word to describe the study of these “intelligent” controllers. Not finding a suitable word in English, he reached back to the old Greek word, which he transliterated into English as “cybernetics”. Notice the connection Wiener drew between governance and technological control.
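Wiener’s central idea – a controller that measures error and feeds back a correction – can be sketched in a few lines. This is a hypothetical illustration, not anything from Wiener’s own work; the function name, gain, and headings are invented for the example.

```python
# A minimal sketch of feedback control, the subject of Wiener's cybernetics:
# a proportional controller repeatedly corrects a boat's heading toward a
# setpoint, the way a helmsman (the root of the Greek "cyber-") steers.

def steer(heading: float, setpoint: float, gain: float = 0.5, steps: int = 50) -> float:
    """Apply a correction proportional to the remaining error at each step."""
    for _ in range(steps):
        error = setpoint - heading
        heading += gain * error  # the correction shrinks as the error shrinks
    return heading

print(steer(heading=10.0, setpoint=90.0))  # converges toward 90.0
```

The loop is the whole point: the controller never computes the answer directly; it just keeps observing and correcting, which is what Wiener meant by control.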

Enter William Gibson. In his early novels about the electronic future, he wanted a term for the “space” where online interactions happen. Failing to find a suitable word, he coined one – cyberspace – by borrowing “cyber” from Wiener. Gibson’s 1984 novel Neuromancer popularized the term. Many of the Net’s early adopters were fans of Gibson’s work, so cyberspace became a standard name for the place you went when you were on the Net.

The odd thing about this usage is that the Internet lacks the kind of central control system that is the subject matter of cybernetics. Gibson knew this – his vision of the Net was decentralized and chaotic – but he liked the term anyway.

As Gibson later recalled: “All I knew about the word ‘cyberspace’ when I coined it was that it seemed like an effective buzzword. It seemed evocative and essentially meaningless. It was suggestive of something, but had no real semantic meaning, even for me, as I saw it emerge on the page.”

Indeed, the term proved just as evocative for others as it was for Gibson, and it stuck.

As the Net grew, it was widely seen as ungovernable – which many people liked. John Perry Barlow’s “Declaration of Independence of Cyberspace” famously declared that governments have no place in cyberspace. Barlow notwithstanding, government did show up in cyberspace, but it has never come close to the kind of cybernetic control Wiener envisioned.

Meanwhile, the government’s security experts settled on a term, “information security”, or “infosec” for short, to describe the problem of securing information and digital systems. The term is widely used outside of government (along with similar terms “computer security” and “network security”) – the course I teach at Princeton on this topic is called “information security”, and many companies have Chief Information Security Officers to manage their security exposure.

So how did this term “cybersecurity” get mindshare, when we already had a useful term for the same thing? I’m not sure – give me your theories in the comments – but I wouldn’t be surprised if it reflects a military influence on government thinking. As both military and civilian organizations became wedded to digital technology, the military started preparing to defend certain national interests in an online setting. Military thinking on this topic naturally followed the modes of thought used for conventional warfare. Military units conduct reconnaissance; they maneuver over terrain; they use weapons where necessary. This mindset wants to think of security as defending some kind of terrain – and the terrain can only be cyberspace. If you’re defending cyberspace, you must be doing something called cybersecurity. Over time, “cybersecurity” somehow became “cyber security” and then just “cyber”.

Listening to Washington discussions about “cyber”, we often hear strategies designed to exert control or put government in a role of controlling, or at least steering, the evolution of technology. In this community, at least, the meaning of “cyber” has come full circle, back to Wiener’s vision of technocratic control, and Plato’s vision of government steering the ship.

The Decline of Localist Broadcasting Policies

Public policy, in the U.S. at least, has favored localism in broadcasting: programming on TV and radio stations is supposed to be aimed, at least in part, at the local community. Two recent events call this policy into question.

The first event is the debut of the Pandora application on the iPhone. Pandora is a personalized “music radio” service delivered over the Internet. You tell it which artists and songs you like, and it plays you the requested songs, plus other songs it thinks are similar. You can rate the songs it plays, thereby giving it more information about what you like. It’s not a jukebox – you can’t find out in advance what it’s going to play, and there are limits on how often it can play songs from the same artist or album – but it’s more personalized than broadcast radio. (Last.fm offers a similar service, also available now on the iPhone.)

Now you can get Pandora on your iPhone, so you can listen to Pandora on a battery-powered portable device that fits in your pocket – like a twenty-first century version of the old transistor radios, only this one plays a station designed especially for you. Why listen to music on broadcast radio when you can listen to this? Or to put it another way: why listen to music targeted at people who live near you, when you can listen to music targeted at people with tastes like yours?
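The kind of selection Pandora does – ranking candidate tracks by similarity to what the listener has liked, rather than by where the listener lives – can be sketched with a toy content-based recommender. Everything here is invented for illustration: the track names, the three “audio features”, and the vectors; Pandora’s actual Music Genome data is far richer.

```python
# Toy sketch of personalized selection: score each candidate track by its
# average cosine similarity to the tracks the listener has already liked.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# hypothetical audio features: (tempo, acousticness, energy), 0..1
liked = [(0.8, 0.2, 0.9), (0.7, 0.3, 0.8)]
candidates = {
    "track_a": (0.75, 0.25, 0.85),  # close to the listener's taste
    "track_b": (0.10, 0.90, 0.20),  # far from it
}

def score(features):
    # average similarity to everything the listener has liked so far
    return sum(cosine(features, l) for l in liked) / len(liked)

ranked = sorted(candidates, key=lambda t: score(candidates[t]), reverse=True)
print(ranked)  # track_a ranks above track_b
```

Rating a song simply adds (or down-weights) a vector in `liked`, which is why the service gets more personalized the more you rate.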

The second event I’ll point to is a statement from a group of Christian broadcasters, opposing a proposed FCC rule that would require radio stations to have local advisory boards that tell them how to tailor programming to the local community. [hat tip: Ars Technica] The Christian stations say, essentially, that their community is defined by a common interest rather than by geography.

Many people are like the Pandora or Christian radio listeners, in wanting to hear content aimed at their interests rather than just their location. Public policy ought to recognize this and give broadcasters more latitude to find their own communities rather than defining communities only by geography.

Now I’m not saying that there shouldn’t be local programming, or that people shouldn’t care what is happening in their neighborhoods. Most people care a lot about local issues and want some local programming. The local community is one of their communities of interest, but it’s not the only one. Let some stations serve local communities while others serve non-local communities. As long as there is demand for local programming – as there surely will be – the market will provide it, and new technologies will help people get it.

Indeed, one of the benefits of new technologies is that they let people stay in touch with far-away localities. When we were living in Palo Alto during my sabbatical, we wanted to stay in touch with events in the town of Princeton because we were planning to move back after a year. Thanks to the Web, we could stay in touch with both Palo Alto and Princeton. The one exception was that we couldn’t get New Jersey TV stations. We had satellite TV, so the nearby New York and Philadelphia stations were literally being transmitted to our Palo Alto house; but the satellite TV company said the FCC wouldn’t let us have those stations, because localist policy wanted us to watch San Francisco stations instead. Localist policy, perversely, pushed us away from local programming and kept us out of touch.

New technologies undermine the rationale for localist policies. It’s easier to get far-away content now – indeed the whole notion that content is bound to a place is fading away. With access to more content sources, there are more possible venues for local programming, making it less likely that local programming will be unavailable because of the whims or blind spots of a few station owners. It’s getting easier and cheaper to gather and distribute information, so more people have the means to produce local programming. In short, we’re looking at a future with more non-local programming and more local programming.

Transit Card Maker Sues Dutch University to Block Paper

NXP, which makes the Mifare transit cards used in several countries, has sued Radboud University Nijmegen (in the Netherlands) to block publication of a research paper, “A Practical Attack on the MIFARE Classic,” that is scheduled for publication at the ESORICS security conference in October. The new paper reportedly shows fatal security flaws in NXP’s Mifare Classic, which appears to be the world’s most commonly used contactless smartcard.

I wrote back in January about the flaws found by previous studies of Mifare. After the previous studies, there wasn’t much left to attack in Mifare Classic. The new paper, if its claims are correct, shows that it’s fairly easy to defeat Mifare Classic completely.
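One way to see how little margin the design ever had: Mifare Classic’s Crypto-1 cipher uses 48-bit keys, so even a naive brute-force search – ignoring the structural weaknesses the research papers actually exploit – is within practical reach. The trial rate below is an assumed figure for illustration, not a measured one.

```python
# Back-of-the-envelope: how long to exhaust a 48-bit keyspace?
KEYSPACE = 2 ** 48     # Crypto-1's key length
RATE = 10 ** 9         # assumed: one billion key trials per second

seconds = KEYSPACE / RATE
print(f"{seconds / 86400:.1f} days to exhaust the keyspace")  # roughly three days
```

The published attacks are far faster than this worst case, which is why the card is considered broken rather than merely weakened.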

It’s not clear what legal argument NXP is giving for trying to suppress the paper. There was a court hearing last week in Arnhem, but I haven’t seen any reports in the English-language press. Perhaps a Dutch-speaking reader can fill in more details. An NXP spokesman has called the paper “irresponsible”, but that assertion is hardly a legal justification for censoring the paper.

Predictably, a document purporting to be the censored paper showed up on Wikileaks, and BoingBoing linked to it. Then, for some reason, it disappeared from Wikileaks, though BoingBoing commenters quickly pointed out that it was still available in Google’s cache of Wikileaks, and also at Cryptome. But why go to a leak-site? The same article has been available on the Web all along at arXiv, a popular repository of sci/tech research preprints run by the Cornell University library.

[UPDATE (July 15): It appears that Wikileaks had the wrong paper, though one that came from the same Radboud group. The censored paper is called “Dismantling Mifare Classic”.]

As usual in these cases of censorship-by-lawsuit, it’s hard to see what NXP is trying to achieve with the suit. The research is already done and peer-reviewed. The suit will only broaden the paper’s readership. NXP’s approach will alienate the research community. The previous Radboud paper already criticizes NXP’s approach, in a paragraph written before the lawsuit:

We would like to stress that we notified NXP of our findings before publishing our results. Moreover, we gave them the opportunity to discuss with us how to publish our results without damaging their (and their customers) immediate interests. They did not take advantage of this offer.

What is really puzzling here is that the paper is not a huge advance over what has already been published. People following the literature on Mifare Classic – a larger group, thanks to the lawsuit – already know that the system is unsound. Had NXP reacted responsibly to this previous work, admitting the Mifare Classic problems and getting to work on migrating customers to newer, more secure products, none of this would have been necessary.

You’ve got to wonder what NXP was thinking. The lawsuit is almost certain to backfire: it will only boost the audience of the censored paper and of other papers criticizing Mifare Classic. Perhaps some executive got angry and wanted to sue the university out of spite. Things can’t be comfortable in the executive suite: NXP’s failure to get in front of the Mifare Classic problems will (rightly) erode customers’ trust in the company and its products.

UPDATE (July 18): The court ruled against NXP, so the researchers are free to publish. See Mrten’s comment below.