
Archives for 2008

Where are the Technologists on the EAC Advisory Board?

Barbara Simons, an accomplished computer scientist and e-voting expert, was recently appointed to the Election Assistance Commission (EAC) Board of Advisors. (The EAC is the U.S. Federal body responsible for voting technology standards, among other things.) This is good news.

The board has thirty-seven members, of which four positions are allocated for “members representing professionals in the field of science and technology”. These four positions are to be appointed by the Majority and Minority leaders in the House and the Senate. (See page 2 of the Board’s charter.) Given the importance of voting technology issues to the EAC, it does seem like a good idea to reserve roughly 10% of the advisory board positions for technologists. If anything, the number of technologist seats should be larger.

Barbara was appointed to the board position by the Senate Majority Leader, Harry Reid. Kudos to Senator Reid for appointing a genuine voting technology expert.

What about the other three seats for “professionals in the field of science and technology?” Sadly, the board’s membership list shows that these seats are not actually held by technical people. Barbara Arnwine holds the House Speaker’s seat, Tom Fuentes holds the House Minority Leader’s seat, and Wesley R. Kliner, Jr. holds the Senate Minority Leader’s seat. All three appear to be accomplished people who have something to offer on the board. But as far as I can tell they are not “professionals in the field of science and technology”, so their appropriate positions on the board would be somewhere in the other thirty-three seats.

What can be done? I wouldn’t go so far as to kick any current members off the board, even if that were possible. But when vacancies do become available, they should be filled by scientists or technologists, as dictated by the charter’s requirement of having four such people on the board.

The EAC is struggling with voting technology issues. They could surely use the advice of three more expert technologists.

License for an open-source voting system?

Back when we were putting together the grant proposal for ACCURATE, one of the questions that we asked ourselves, and which the NSF people asked us as well, was whether we would produce a “bright shiny object,” which is to say whether or not we would produce a functional voting machine that could ostensibly be put to use in a real election.  Our decision at the time, and it was certainly the correct decision, was that we would focus on innovating in the technology under the covers of a voting system, and that we might produce, at most, “research prototypes”.  The differences between a research prototype and a genuine commercial system are typically quite substantial, and it would be no different here with voting system prototypes.

At Rice we built a fairly substantial prototype that we call “VoteBox”; you can read more about it in a paper that will appear on Friday at Usenix Security.  To grossly summarize, our prototype feels a lot like a normal DRE voting system, but uses some nice cryptographic machinery to ensure that you don’t have to trust that the code is correct.  You can verify the correctness of a machine, on the fly, while the election is ongoing.  Our prototype is missing a couple features that you’d want from a commercial system, like write-in voting, but it’s far enough along that it’s been used in several human-factors experiments (CHI’08, Everett’07).
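To give a flavor of the kind of machinery involved, here is a minimal sketch of one ingredient of such a design: an append-only, hash-chained event log, in which each record commits to everything logged before it, so an observer can check after the fact that nothing was quietly altered or dropped.  This is only an illustration of the general idea behind the hash chaining in VoteBox’s logging layer; the class and method names below are invented for the example and are not taken from the VoteBox code.

    import hashlib
    import json

    class HashChainLog:
        """Append-only log in which each entry commits to the previous one.
        Illustrative sketch only; names are not from the VoteBox codebase."""

        def __init__(self):
            self.entries = []          # list of (payload, chained_hash) pairs
            self.last_hash = "0" * 64  # well-known starting value

        def append_event(self, payload: dict) -> str:
            # Each entry's hash covers the previous hash plus the new payload,
            # so removing or altering any earlier entry breaks every later hash.
            record = json.dumps({"prev": self.last_hash, "payload": payload},
                                sort_keys=True)
            digest = hashlib.sha256(record.encode()).hexdigest()
            self.entries.append((payload, digest))
            self.last_hash = digest
            return digest

        def verify(self) -> bool:
            # Anyone holding a copy of the log can recompute the whole chain.
            prev = "0" * 64
            for payload, digest in self.entries:
                record = json.dumps({"prev": prev, "payload": payload},
                                    sort_keys=True)
                if hashlib.sha256(record.encode()).hexdigest() != digest:
                    return False
                prev = digest
            return prev == self.last_hash

    # Example: log a few (already-encrypted) ballot events, then verify the chain.
    log = HashChainLog()
    log.append_event({"type": "cast-ballot", "ciphertext": "…"})
    log.append_event({"type": "challenge", "machine": 3})
    assert log.verify()

The real system layers much more on top of this (encrypted ballots, challenges, logs replicated across machines), but the sketch shows why an observer does not have to trust the machine’s code in order to trust the record it produces.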

This summer, our mission is to get this thing shipped as some sort of “open source” project.  Now we have several goals in this:

  • Allow other researchers to benefit from our infrastructure as a platform to do their own research.
  • Inspire commercial voting system vendors to build better products (i.e., solving the hard design problems for them, to reduce their cost for adopting innovative techniques).
  • Allow commercial voting system vendors to build on our source code, itself.

All well and good.  Now the question is how we should actually license the thing.  There are many, many different models under which we could proceed:

  • Closed source + patents + licenses.  This may or may not yield revenues for our university, and may or may not be attractive to vendors.  It’s clearly unattractive to other researchers, and would limit uptake of our system in places where we might not even think to look, such as outside the U.S.
  • Open source + a “not for commercial use” license.  This makes it a little easier for other researchers to pick up and modify the software, although ownership issues could get tricky.
  • Open source with a “BSD”-style license. A BSD-style license effectively says “do whatever you want, just give us credit for our work and you’re on your own if it doesn’t work.”   This sort of license tends to maximize the ease with which companies can adopt your work.
  • Open source with a “GPL”-style license.  The GPL has an interesting property for the voting system world: it makes any derivatives of the source code as open as the original code (unless a vendor reimplements it from scratch).  This sort of openness is something we want to encourage in voting system vendors, but it might reduce vendor willingness to use the codebase.
  • Open source with a “publication required” license.  Joe Hall suggested this as another option.  As with a BSD license, anybody could use the code in a product, but the company would be compelled to publish its source code, like a book.  Unlike the GPL, they would not be required to give up copyright or allow any downstream use of their code.

I did a quick survey of several open source voting systems.  Most are distributed under the GPL:

  • Adder
  • eVACS (old version is GPL; new version is proprietary)
  • Helios (code not yet released; most likely GPL according to Ben Adida)
  • OVC (GPL with extensions to require change histories be maintained)
  • Pvote

Civitas is distributed under a non-commercial-use only license.  VoteHere, at one point, opened its code for independent evaluation (but not reuse), but I can’t find it any more.  It was probably a variant on the non-commercial-use only theme.  Punchscan is distributed under a BSD-style license.

My question to the peanut gallery: what sort of license would you select for a bright, shiny new voting system project and why?

[Extra food for thought: The GPLv3 would have some interesting interactions with voting systems.  For starters, there’s a question of who, exactly, a “user” might be.  Is that the county clerk whose office bought it, or the person who ultimately votes on it?  Also, you have section 3, which forbids any attempt to limit reverse-engineering or “circumvention” of the product.  I suppose that means that garden-variety tampering with a voting machine would still violate various laws, but the vendor couldn’t sue you for it.  Perhaps more interesting is section 6, which talks about how source code must be made available when you ship compiled software.  A vendor could perhaps give the source code only to its “customers” without giving it to everybody (again, depending on who a “user” is).  Of course, any such customer is free under the GPL to redistribute the code to anybody else.  Voting vendors may be most scared away by section 11, which talks about compulsory patent licensing (depending, of course, on the particulars of their patent portfolios).]

Plenty of Blame to Go Around in Yahoo Music Shutdown

People have been heaping blame on Yahoo after it announced plans to shut down its Yahoo Music Store DRM servers on September 30. The practical effect of the shutdown is to make music purchased at the store unusable after a while.

Though savvy customers tended to avoid buying music in forms like this, where a company had to keep some distant servers running to keep the purchased music alive, those customers who did buy – taking reassurances from Yahoo and the music industry at face value – are rightly angry. In the face of similar anger, Microsoft backtracked on plans to shutter its DRM servers. It looks like Yahoo will stay the course.

Yahoo deserves blame here, but let’s not forget who else contributed to this mess. Start with the record companies for pushing this kind of DRM, and the DRM agenda generally, despite the ample evidence that it would inconvenience paying customers without stopping infringement.

Even leaving aside past mistakes, copyright owners could step in now to help users, either by enticing Yahoo to keep its servers running, or by helping Yahoo create and distribute software that translates the music into a usable form. If I were a Yahoo Music customer, I would be complaining to the copyright owners now, and asking them to step in and stand behind their product.

Finally, let’s not forget the role of Congress. The knowledge of how to jailbreak Yahoo Music tracks and transform them into a stable, usable form exists and could easily be packaged in software form. But Congress made it illegal to circumvent Yahoo’s DRM, even to enable noninfringing use of a legitimately purchased song. And they made it illegal to distribute certain software tools to enable those uses. If Congress had paid more attention to consumer interests in drafting the Digital Millennium Copyright Act, or if it had passed any of the remedial legislation offered since the DMCA took effect, then the market could solve this Yahoo problem all on its own. If I were a Yahoo Music customer, I would be complaining to Congress now, and asking them to stop blocking consumer-friendly technologies.

And needless to say, I wouldn’t be buying DRM-encumbered songs any more.

UPDATE (July 29, 2008): Yahoo has now done the right thing, offering to give refunds or unencumbered MP3s to the stranded customers. I wonder how much this is costing Yahoo.

What's the Cyber in Cyber-Security?

Recently Barack Obama gave a speech on security, focusing on nuclear, biological, and infotech threats. It was a good, thoughtful speech, but I couldn’t help noticing how, in his discussion of the infotech threats, he promised to appoint a “National Cyber Advisor” to give the president advice about infotech threats. It’s now becoming standard Washington parlance to say “cyber” as a shorthand for what many of us would call “information security.” I won’t fault Obama for using the terminology spoken by the usual Washington experts. Still, it’s interesting to consider how Washington has developed its own terminology, and what that terminology reveals about the inside-the-beltway view of the information security problem.

The word “cyber” has interesting roots. It started with an old Greek word meaning (roughly) one who guides a boat, such as a pilot or rudder operator. Plato adapted this word to mean something like “governance”, on the basis that governing was like steering society. Already in ancient Greece, the term had taken on connotations of central government control.

Fast-forward to the twentieth century. Norbert Wiener foresaw the rise of sophisticated robots, and realized that a robot would need something like a brain to control its mechanisms, as your brain controls your body. Wiener predicted correctly that this kind of controller would be difficult to design and build, so he sought a word to describe the study of these “intelligent” controllers. Not finding a suitable word in English, he reached back to the old Greek word, which he transliterated into English as “cybernetics”. Notice the connection Wiener drew between governance and technological control.

Enter William Gibson. In his early novels about the electronic future, he wanted a term for the “space” where online interactions happen. Failing to find a suitable word, he coined one – cyberspace – by borrowing “cyber” from Wiener. Gibson’s 1984 novel Neuromancer popularized the term. Many of the Net’s early adopters were fans of Gibson’s work, so cyberspace became a standard name for the place you went when you were on the Net.

The odd thing about this usage is that the Internet lacks the kind of central control system that is the subject matter of cybernetics. Gibson knew this – his vision of the Net was decentralized and chaotic – but he liked the term anyway.

As Gibson later put it: “All I knew about the word ‘cyberspace’ when I coined it, was that it seemed like an effective buzzword. It seemed evocative and essentially meaningless. It was suggestive of something, but had no real semantic meaning, even for me, as I saw it emerge on the page.”

Indeed, the term proved just as evocative for others as it was for Gibson, and it stuck.

As the Net grew, it was widely seen as ungovernable – which many people liked. John Perry Barlow’s “Declaration of Independence of Cyberspace” famously declared that governments have no place in cyberspace. Barlow notwithstanding, government did show up in cyberspace, but it has never come close to the kind of cybernetic control Wiener envisioned.

Meanwhile, the government’s security experts settled on a term, “information security”, or “infosec” for short, to describe the problem of securing information and digital systems. The term is widely used outside of government (along with similar terms “computer security” and “network security”) – the course I teach at Princeton on this topic is called “information security”, and many companies have Chief Information Security Officers to manage their security exposure.

So how did this term “cybersecurity” get mindshare, when we already had a useful term for the same thing? I’m not sure – give me your theories in the comments – but I wouldn’t be surprised if it reflects a military influence on government thinking. As both military and civilian organizations became wedded to digital technology, the military started preparing to defend certain national interests in an online setting. Military thinking on this topic naturally followed the modes of thought used for conventional warfare. Military units conduct reconnaissance; they maneuver over terrain; they use weapons where necessary. This mindset wants to think of security as defending some kind of terrain – and the terrain can only be cyberspace. If you’re defending cyberspace, you must be doing something called cybersecurity. Over time, “cybersecurity” somehow became “cyber security” and then just “cyber”.

Listening to Washington discussions about “cyber”, we often hear strategies designed to exert control or put government in a role of controlling, or at least steering, the evolution of technology. In this community, at least, the meaning of “cyber” has come full circle, back to Wiener’s vision of technocratic control, and Plato’s vision of government steering the ship.

The Decline of Localist Broadcasting Policies

Public policy, in the U.S. at least, has favored localism in broadcasting: programming on TV and radio stations is supposed to be aimed, at least in part, at the local community. Two recent events call this policy into question.

The first event is the debut of the Pandora application on the iPhone. Pandora is a personalized “music radio” service delivered over the Internet. You tell it which artists and songs you like, and it plays you the requested songs, plus other songs it thinks are similar. You can rate the songs it plays, thereby giving it more information about what you like. It’s not a jukebox – you can’t find out in advance what it’s going to play, and there are limits on how often it can play songs from the same artist or album – but it’s more personalized than broadcast radio. (Last.fm offers a similar service, also available now on the iPhone.)
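Just to make the mechanics concrete, here is a toy sketch of a station selector with the two properties described above: it weights its picks toward songs similar to your stated tastes, and it limits how often it returns to the same artist.  The catalog, scores, and function names are all invented for the example; this is not how Pandora actually works under the hood.

    import random
    from collections import deque

    # Toy catalog: each track has an artist and a similarity score to the
    # listener's stated tastes. In a real service the scores would come from
    # a recommendation model; here they are made up for illustration.
    CATALOG = [
        {"title": "Song A", "artist": "Artist 1", "similarity": 0.9},
        {"title": "Song B", "artist": "Artist 1", "similarity": 0.8},
        {"title": "Song C", "artist": "Artist 2", "similarity": 0.7},
        {"title": "Song D", "artist": "Artist 3", "similarity": 0.6},
    ]

    MAX_PLAYS_PER_ARTIST = 1   # within the recent-history window below
    HISTORY_WINDOW = 3         # how many recent plays we remember

    def next_track(history: deque) -> dict:
        """Pick the next track: favor similar songs, but skip artists who
        have already hit their limit in the recent history."""
        recent_artists = [t["artist"] for t in history]
        eligible = [t for t in CATALOG
                    if recent_artists.count(t["artist"]) < MAX_PLAYS_PER_ARTIST]
        if not eligible:
            eligible = CATALOG  # fall back rather than go silent
        # Weight the random choice by similarity, so the station is
        # personalized but not predictable in advance.
        weights = [t["similarity"] for t in eligible]
        return random.choices(eligible, weights=weights, k=1)[0]

    history = deque(maxlen=HISTORY_WINDOW)
    for _ in range(5):
        track = next_track(history)
        history.append(track)
        print(track["title"], "by", track["artist"])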

Now you can get Pandora on your iPhone, so you can listen to Pandora on a battery-powered portable device that fits in your pocket – like a twenty-first century version of the old transistor radios, only this one plays a station designed especially for you. Why listen to music on broadcast radio when you can listen to this? Or to put it another way: why listen to music targeted at people who live near you, when you can listen to music targeted at people with tastes like yours?

The second event I’ll point to is a statement from a group of Christian broadcasters, opposing a proposed FCC rule that would require radio stations to have local advisory boards that tell them how to tailor programming to the local community. [hat tip: Ars Technica] The Christian stations say, essentially, that their community is defined by a common interest rather than by geography.

Many people are like the Pandora or Christian radio listeners, in wanting to hear content aimed at their interests rather than just their location. Public policy ought to recognize this and give broadcasters more latitude to find their own communities rather than defining communities only by geography.

Now I’m not saying that there shouldn’t be local programming, or that people shouldn’t care what is happening in their neighborhoods. Most people care a lot about local issues and want some local programming. The local community is one of their communities of interest, but it’s not the only one. Let some stations serve local communities while others serve non-local communities. As long as there is demand for local programming – as there surely will be – the market will provide it, and new technologies will help people get it.

Indeed, one of the benefits of new technologies is that they let people stay in touch with far-away localities. When we were living in Palo Alto during my sabbatical, we wanted to stay in touch with events in the town of Princeton because we were planning to move back after a year. Thanks to the Web, we could stay in touch with both Palo Alto and Princeton. The one exception was that we couldn’t get New Jersey TV stations. We had satellite TV, so the nearby New York and Philadelphia stations were literally being transmitted to our Palo Alto house; but the satellite TV company said the FCC wouldn’t let us have those stations, because localist policy wanted us to watch San Francisco stations instead. Localist policy, perversely, pushed us away from local programming and kept us out of touch.

New technologies undermine the rationale for localist policies. It’s easier to get far-away content now – indeed the whole notion that content is bound to a place is fading away. With access to more content sources, there are more possible venues for local programming, making it less likely that local programming will be unavailable because of the whims or blind spots of a few station owners. It’s getting easier and cheaper to gather and distribute information, so more people have the means to produce local programming. In short, we’re looking at a future with more non-local programming and more local programming.