April 24, 2014


Art of Science, and Princeton Privacy Panel

Today I want to recommend two great things happening at Princeton, one of which is also on the Net.

Princeton’s second annual Art of Science exhibit was unveiled recently, and it’s terrific, just like last year. Here’s some background, from the online exhibit:

In the spring of 2006 we again asked the Princeton University community to submit images—and, for the first time, videos and sounds—produced in the course of research or incorporating tools and concepts from science. Out of nearly 150 entries from 16 departments, we selected 56 works to appear in the 2006 Art of Science exhibition.

The practices of science and art both involve the single-minded pursuit of those moments of discovery when what one perceives suddenly becomes more than the sum of its parts. Each piece in this exhibition is, in its own way, a record of such a moment. They range from the image that validates years of research, to the epiphany of beauty in the trash after a long day at the lab, to a painter’s meditation on the meaning of biological life.

You can view the exhibit online, but the best way to see it is in person, in the main hallway of the Friend Center on the Princeton campus. One of the highlights is outdoors: a fascinating metal object that looks for all the world like a modernist sculpture but was actually built as a prototype winding coil for a giant electromagnet that will control superhot plasma in a fusion energy experiment. (The online photo doesn’t do it justice.)

If you’re on the Princeton campus on Friday afternoon (June 2), you’ll want to see the panel discussion on “Privacy and Security in the Digital Age”, which I’ll be moderating. We have an all-star group of panelists:
* Dave Hitz (Founder, Network Appliance)
* Paul Misener (VP for Global Public Affairs, Amazon)
* Harriet Pearson (Chief Privacy Officer, IBM)
* Brad Smith (Senior VP and General Counsel, Microsoft)
It’s in 006 Friend, just downstairs from the Art of Science exhibit, from 2:00 to 3:00 on Friday.

These panelists are just a few of the distinguished Princeton alumni who will be on campus this weekend for Reunions.


Twenty-First Century Wiretapping: Your Dog Sees You Naked

Suppose the government were gathering information about your phone calls: who you talked to, when, and for how long. If that information were made available to human analysts, your privacy would be impacted. But what if the information were made available only to computer algorithms?

A similar question arose when Google introduced its Gmail service. When Gmail users read their mail, they see advertisements. Servers at Google select the ads based on the contents of the email messages being displayed. If the email talks about camping, the user might see ads for camping equipment. No person reads the email (other than the intended recipient) – but Google’s servers make decisions based on the email’s contents.
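To make the people-versus-algorithms distinction concrete, here is a toy sketch of content-based ad matching in Python. The keyword table and ad names are invented for illustration; Google's real system is of course far more elaborate, but the structure is the same: a program, not a person, reads the words and picks the ads.

```python
import re

# Toy content-based ad selection. No person reads the message; a program
# scores each ad against the words that appear in the email. The keyword
# table below is invented for illustration.
AD_KEYWORDS = {
    "camping gear": {"camping", "tent", "hiking"},
    "flight deals": {"flight", "airport", "travel"},
}

def select_ads(email_text, max_ads=2):
    words = set(re.findall(r"[a-z]+", email_text.lower()))
    # Rank ads by how many of their keywords appear in the message.
    scored = [(len(kws & words), ad) for ad, kws in AD_KEYWORDS.items()]
    return [ad for score, ad in sorted(scored, reverse=True) if score > 0][:max_ads]

print(select_ads("We should go camping; I'll bring the tent."))  # → ['camping gear']
```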

Some people saw this as a serious privacy problem. But others drew a line between access by people and by computers, seeing access by even sophisticated computer algorithms as a privacy non-event. One person quipped that “Worrying about a computer reading your email is like worrying about your dog seeing you naked.”

So should we worry about the government running computer algorithms on our call data? I can see two main reasons to object.

First, we might object to the government gathering and storing the information at all, even if the information is not (supposed to be) used for anything. Storing the data introduces risks of misuse, for example, that cannot exist if the data is not stored in the first place.

Second, we might object to actions triggered by the algorithms. For example, if the algorithms flag certain records to be viewed by human analysts, we might object to this access by humans. I’ll consider this issue of algorithm-triggered access in a future post – for now, I’ll just observe that the objection here is not to the access by algorithms, but to the access by humans that follows.

If these are the only objections to algorithmic analysis of our data, then it’s not the use of computer algorithms itself that troubles us. What really bothers us is access to our data by people, whether as part of the plan or as unplanned abuse.

If we could somehow separate the use of algorithms from the possibility of human-mediated privacy problems, then we could safely allow algorithms to crawl over our data. In practice, though, algorithmic analysis goes hand in hand with human access, so the question of how to apportion our discomfort is mostly of theoretical interest. It’s enough to object to the possible access by people, while being properly skeptical of claims that the data is not available to people.

The most interesting questions about computerized analysis arise when algorithms bring particular people and records to the attention of human analysts. That’s the topic of my next post.


Twenty-First Century Wiretapping: Storing Communications Data

Today I want to continue the series of posts about new technology and wiretapping (previous posts: 1, 2, 3), by talking about what is probably the simplest case, involving gathering and storage of data by government. Recall that I am not considering what is legal under current law, which is an important issue but is beyond my expertise. Instead, I am considering the public policy question of what rules, if any, should constrain the government’s actions.

Suppose the government gathered information about all phone calls, including the calling and called numbers and the duration of the call, and then stored that information in a giant database, in the hope that it might prove useful later in criminal investigations or foreign intelligence. Unlike the recently disclosed NSA call database, which is apparently data-mined, we’ll assume that the data isn’t used immediately but is only stored until it might be needed. Under what circumstances should this be allowed?

We can start by observing that government should not have free rein to store any data it likes, because storing data, even if it is not supposed to be accessed, still imposes some privacy harm on citizens. For example, the possibility of misuse must be taken seriously when so much data is at issue. Previously, I listed four types of costs imposed by wiretapping. At least two of those costs – the risk that the information will be abused, and the psychic cost of being watched (such as wondering about “How will this look?”) – apply to stored data, even if nobody is supposed to look at it.

It follows that, before storing such data, government should have to make some kind of showing that the expected value of storing the data outweighs the harms, and that there should be some kind of plan for minimizing the harms, for example by storing the data securely (even against rogue insiders) and discarding the data after some predefined time interval.
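The post doesn’t specify a mechanism, but the discard-after-a-predefined-interval safeguard is easy to picture in code. Here is a minimal Python sketch; the 180-day window and the record layout are my assumptions, not anything proposed above.

```python
import time

RETENTION_SECONDS = 180 * 24 * 3600  # hypothetical 180-day retention window

class CallRecordStore:
    """Minimal sketch of a record store that enforces a retention window."""

    def __init__(self):
        self.records = []  # (timestamp, caller, callee, duration_s)

    def add(self, ts, caller, callee, duration_s):
        self.records.append((ts, caller, callee, duration_s))

    def purge_expired(self, now=None):
        # Discard every record older than the retention window.
        now = time.time() if now is None else now
        cutoff = now - RETENTION_SECONDS
        self.records = [r for r in self.records if r[0] >= cutoff]
```

A real system would also need secure storage and audit logging, but even this much makes the “predefined time interval” idea concrete: the purge is a routine, enforceable step, not a promise.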

The most important safeguard would be an enforceable promise by government not to use the data without getting further permission (and showing sufficient cause). That promise might possibly be broken, but it changes the equation nevertheless by reducing the likelihood and scope of potential misuse.

To whom should the showing of cause be made? Presumably the answer is “a court”. The executive branch agency that wanted to store data would have to convince a court that the expected value of storing the data was sufficient, in light of the expected costs (including all costs/harms to citizens) of storing it. The expected costs would be higher if data about everyone were to be stored, and I would expect a court to require a fairly strong showing of significant benefit before authorizing the retention of so much data.

Part of the required showing, I think, would have to be an argument that there is not some way to store much less data and still get nearly the same benefit. An alternative to storing data on everybody is to store data only about people who are suspected of being bad guys and therefore are more likely to be targets of future investigations.

I won’t try to calibrate the precise weights to place on the tradeoff between the legitimate benefits of data retention and the costs. That’s a matter for debate, and presumably a legal framework would have to be more precise than I am. For now, I’m happy to establish the basic parameters and move on.

All of this gets more complicated when government wants to have computers analyze the stored data, as the NSA is apparently doing with phone call records. How to think about such analyses is the topic of the next post in the series.


Zfone Encrypts VoIP Calls

Phil Zimmermann, who created the PGP encryption software, and faced a government investigation as a result, now offers a new program, Zfone, that provides end-to-end encryption of computer-to-computer (VoIP) phone calls, according to a story in yesterday’s New York Times.

One of the tricky technical problems in encrypting communications is key exchange: how to get the two parties to agree on a secret key that only they know. This is often done with a cumbersome “public key infrastructure” (PKI), which wouldn’t work well for this application. Zfone has a clever key exchange protocol that dispenses with the PKI and instead relies on the two people reading short character strings to each other over the voice connection. This will provide a reasonably secure shared secret key, as long as the two people recognize each other’s voices.

(Homework problem for security students: What does the string-reading accomplish? Based on just the information here, how do you think the Zfone key exchange protocol works?)
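(Partial spoiler below.) One natural design, and roughly the approach taken by Zimmermann’s ZRTP protocol underlying Zfone, is an ephemeral Diffie-Hellman key exchange followed by a short authentication string (SAS): each side derives a short string from the exchange, and the two people read the strings to each other over the voice channel. A man in the middle who substituted his own keys would cause the two strings to differ, and faking the read-aloud step would require imitating the parties’ voices. Here is a toy Python sketch, with a deliberately insecure modulus and a simplified SAS derivation.

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters. The Mersenne prime 2**127 - 1 is far too
# small for real use; actual implementations use standardized large groups.
P = 2**127 - 1
G = 3

def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def sas(shared_secret):
    # Hash the shared secret down to a short string both parties can read
    # aloud. (Real ZRTP derives the SAS from a hash of the protocol
    # messages, not the bare secret.)
    return hashlib.sha256(str(shared_secret).encode()).hexdigest()[:4]

a_priv, a_pub = dh_keypair()   # Alice's ephemeral keypair
b_priv, b_pub = dh_keypair()   # Bob's ephemeral keypair
a_secret = pow(b_pub, a_priv, P)  # Alice combines Bob's public value
b_secret = pow(a_pub, b_priv, P)  # Bob combines Alice's public value
assert sas(a_secret) == sas(b_secret)  # both display the same short string
```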

In the middle of the article is this interesting passage:

But Mr. Zimmermann, 52, does not see those fearing government surveillance — or trying to evade it — as the primary market [for Zfone]. The next phase of the Internet’s spyware epidemic, he contends, will be software designed to eavesdrop on Internet telephone calls made by corporate users.

“They will have entire digital jukeboxes of covertly acquired telephone conversations, and suddenly someone in Eastern Europe is going to be very wealthy,” he said.

Though the article doesn’t say so directly, this passage seems to imply that Zfone can protect against spyware-based eavesdropping. That’s not right.

One of the challenges in using encryption is that the datastream is not protected before it is encrypted at the source, or after it is decrypted at the destination. If you and I are having a Zfone-protected conversation, spyware on your computer could capture your voice before it is encrypted for transmission to me, and could also capture my voice after it is decrypted on your computer. Zfone is helpless against this threat, as are other VoIP encryption schemes.

All of this points to an interesting consequence of strong encryption. As more and more communications are strongly encrypted, would-be spies have less to gain from wiretapping and more to gain from injecting malware into their targets’ computers. Yet another reason to expect a future with even more malware.


Twenty-First Century Wiretapping: Not So Hypothetical

Two weeks ago I started a series of posts (so far: 1, 2) about how new technologies change the policy issues around government wiretapping. I argued that technology changed the policy equation in two ways, by making storage much cheaper, and by enabling fancy computerized analyses of intercepted communications.

My plan was to work my way around to a carefully-constructed hypothetical that I designed to highlight these two issues – a hypothetical in which the government gathered a giant database of everybody’s phone call records and then did data mining on the database to identify suspected bad guys. I had to lay a bit more groundwork before getting to the hypothetical, but I was planning to get to it after a few more posts.

Events intervened – the “hypothetical” turned out, apparently, to be true – which makes my original plan moot. So let’s jump directly to the NSA call-database program. Today I’ll explain why it’s a perfect illustration of the policy issues in 21st century surveillance. In the next post I’ll start unpacking the larger policy issues, using the call record program as a running example.

The program illustrates the cheap-storage trend for obvious reasons: according to some sources, the NSA’s call record database is the biggest database in the world. This part of the program probably would not have been possible, within the NSA’s budget, until the last few years.

The data stored in the database is among the least sensitive (i.e., private) communications data around. This is not to say that it has no privacy value at all – all I mean is that other information, such as full contents of calls, would be much more sensitive. But even if information about who called whom is not particularly sensitive for most individual calls, the government might, in effect, make it up on volume. Modestly sensitive data, in enormous quantities, can add up to a big privacy problem – an issue that is much more important now that huge databases are feasible.

The other relevant technology trend is the use of automated algorithms, rather than people, to analyze communications traffic. With so many call records, and relatively few analysts, simple arithmetic dictates that the overwhelming majority of call records will never be seen by a human analyst. It’s all about what the automated algorithms do, and which information gets forwarded to a person.
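As a deliberately simplistic illustration of that forwarding step, imagine a program that scans every record and escalates only the handful matching some rule. The watchlist rule and the phone numbers below are invented for illustration.

```python
# A program scans all records; only the flagged few ever reach a human analyst.
WATCHLIST = {"+1-555-0100"}  # hypothetical numbers of interest

def flag_for_analyst(call_records):
    return [r for r in call_records if r["callee"] in WATCHLIST]

records = [
    {"caller": "+1-555-0199", "callee": "+1-555-0100", "duration_s": 60},
    {"caller": "+1-555-0198", "callee": "+1-555-0142", "duration_s": 30},
]
flagged = flag_for_analyst(records)
print(len(flagged), "of", len(records), "records escalated")  # 1 of 2
```

With hundreds of millions of records and a fixed number of analysts, everything turns on what rule replaces the toy watchlist, and on who reviews the records it forwards.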

I’ll start unpacking these issues in the next post, starting with the storage question. In the meantime, let me add my small voice to the public complaints about the NSA call record program. They ruined my beautiful hypothetical!


Princeton-Microsoft IP Conference Liveblog

Today I’m at the Princeton-Microsoft Intellectual Property Conference. I’ll be blogging some of the panels as they occur. There are parallel sessions, and I’m on one panel, so I can’t cover everything.

The first panel is on “Organizing the Public Interest”. Panelists are Yochai Benkler, David Einhorn, Margaret Hedstrom, Larry Lessig, and Gigi Sohn. The moderator is Paul Starr.

Yochai Benkler (Yale Law) speaks first. He has two themes: decentralization of creation, and emergence of a political movement around that creation. Possibility of altering the politics in three ways. First, the changing relationship between creators and users and growth in the number of creators changes how people relate to the rules. Second, we see existence proofs of the possible success of decentralized production: Linux, Skype, Flickr, Wikipedia. Third, a shift away from centralized, mass, broadcast media. He talks about political movements like free culture, Internet freedom, etc. He says these movements are coalescing and allying with each other and with other powers such as companies or nations. He is skeptical of the direct value of public reason/persuasion. He thinks instead that changing social practices will have a bigger impact in the long run.

David Einhorn (Counsel for the Jackson Laboratory, a research institution) speaks second. “I’m here to talk about mice.” Jackson Lab has lots of laboratory mice – the largest collection (community? inventory?) in the world. Fights developed around access to certain strains of mice. Gene sequences created in the lab are patentable, and research institutions are allowed to exploit those patents (even if the university was government-funded). This has led to some problems. There is an inherent tension between patent exploitation and other goals of universities (creation and open dissemination of knowledge). Lines of lab mice were patentable, and suddenly lawyers were involved whenever researchers wanted to get mice. It sounds to me like Jackson Lab is a kind of creative commons for mice. He tells stories about how patent negotiations have blocked some nonprofit research efforts.

Margaret Hedstrom (Univ. of Michigan) speaks third. She talks about the impact of IP law on libraries and archives, and how those communities have organized themselves. In the digital world, there has been a shift from buying copies of materials, to licensing materials – a shift from the default copyright rules to the rules that are in the license. This means, for instance, that libraries may not be able to lend out material, or may not be able to make archival copies. Some special provisions in the law apply to libraries and archives, but not to everybody who does archiving (e.g., the Internet Archive is in the gray area). The orphan works problem is a big deal for libraries and archives, and they are working to chip away at this and other narrow legal issues. They are also talking to academic authors, urging them to be more careful about which rights they assign to journals who publish their articles.

Larry Lessig (Stanford Law) speaks fourth. He starts by saying that most of his problems are caused by his allies, but his opponents are nicer and more predictable in some ways. Why? (1) Need to unite technologists and lawyers. (2) Need to unite libertarians and liberals. Regarding tech and law, the main conflict is about what constitutes success. He says technologists want 99.99% success, lawyers are happy with 60%. (I don’t think this is quite right.) He says that fair use and network neutrality are essentially the same issue, but they’re handled inconsistently. He dislikes the fair use system (though he likes fair use itself) because the cost and uncertainty of the system bias so strongly against use without permission, even when those uses ought to be fair – people don’t want to be right, they want to avoid having suits filed against them. Net neutrality, he says, is essentially the same problem as fair use, because it is about how to limit the ability of property owners who have monopoly power (i.e., copyright owners or ISPs) to use their monopoly property rights against the public interest. The challenge is how to keep the coalition together while addressing these issues.

Gigi Sohn (PublicKnowledge) is the last speaker. Her topic is “what it’s like to be a public interest advocate on the ground.” PublicKnowledge plays a key role in doing this, as part of a larger coalition. She lists six strategies that are used in practice to change the debate: (1) day to day, face to face advocacy with policymakers; (2) coalition-building with other NGOs, such as Consumers Union, librarians, etc., and especially industry (different sectors on different issues); (3) message-building, both push and pull communications; (4) grassroots organizing; (5) litigation, on offense and defense (with a shout-out to EFF); (6) working with scholars to build a theoretical framework on these topics. How has it worked? “We’ve been very good at stopping bad things”: broadcast flag, analog hole, database protection laws, etc. She says they/we haven’t been so successful at making good things happen.

Time for Q&A. Tobias Robison (“Precision Blogger”) asks Gigi how to get the financial clout needed to continue the fight. Gigi says it’s not so expensive to play defense.

Sandy Thatcher (head of Penn State University Press) asks how to reconcile the legitimate needs of copyright owners with their advocacy for narrower copyright. He suggests that university presses need the DMCA to survive. (I want to talk to him about that later!) Gigi says, as usual, that PK is interested in balance, not in abolishing the core of copyright. Margaret Hedstrom says that university presses are in a tough spot, and we don’t need to have as many university presses as we have. Yochai argues that university presses shouldn’t act just like commercial presses – if university presses are just like commercial presses why should universities and scholars have any special loyalty to them?

Anne-Marie Slaughter (Dean of the Woodrow Wilson School at Princeton) suggests that some people will be willing to take less money in exchange for the psychic satisfaction of helping people by spreading knowledge. She suggests that this is a way of showing leadership. Larry Lessig answers by arguing that many people, especially those with smaller market share, can benefit financially from allowing more access. Margaret Hedstrom gives another example of scholarly books released permissively, leading to more sales.

Wes Cohen from Duke University asserts that IP rulings (like Madey v. Duke, which vastly narrowed the experimental use exception in patent law) have had relatively little impact on the day-to-day practice of scientific research. He asks David Einhorn whether this matches his experience. David E. says that bench scientists “are going to do what they have always done” and people are basically ignoring these rules, just hoping that one research organization will sue another and that damages will be small anyway. But, he says, the law intrudes when one organization has to get research materials from another. He argues that this is a bad thing, especially when (as in most biotech research) both organizations are funded by the same government agency. Bill [didn't catch the last name], who runs tech transfer for the University of California, says that there have been problems getting access to stem cell lines.

The second panel is on the effect of patent law. Panelists are Kathy Strandburg, Susan Mann, Wesley Cohen, Stephen Burley, and Mario Biagioli. Moderator is Rochelle Dreyfuss.

First speaker is Susan Mann (Director of IP Policy, or something like that) at Microsoft. She talks about the relation between patent law and the structure of the software industry. She says people tend not to realize how the contours of patent law shape how companies develop and design products. She gives a chronology of when and why patent law came to be applied to software. She argues that patents are better suited than copyright and trade secret for certain purposes, because patents are public, are only protected if novel and nonobvious, apply to methods of computation, and are more amenable to use in standards. She advocates process-oriented reforms to raise patent quality.

Stephen Burley (biotech researcher and entrepreneur) speaks second. He tells some stories about “me-too drugs”. Example: one of the competitors of Viagra differs from the Viagra molecule by only one carbon atom. Because of the way the Viagra patent is written, the competitor could make their drug without licensing the Viagra patent. You might think this is pure free-riding, but in fact even these small differences have medical significance – in this case the drugs have the same primary effect but different side-effects. He tells another story where a new medical test cannot be independently validated by researchers because they can’t get a patent license. Here the patent is being used to prevent would-be customers from finding out about the quality of a product. (To a computer security researcher, this story sounds familiar.) He argues that the relatively free use of tools and materials in research has been hugely valuable.

Third speaker is Mario Biagioli (Harvard historian). He says that academic scientists have always been interested in patenting inventions, going back to Galileo, the Royal Society, Pascal, Huygens, and others. Galileo tried to patent the telescope. Early patents were given, not necessarily to inventors, but often to expert foreigners to give them an incentive to move. You might give a glassmaking patent to a Venetian glassmaker to give him an incentive to set up business in your city. Little explanation of how the invention worked was required, as long as the device or process produced the desired result. Novelty was not required. To get a patent, you didn’t need to invent something, you only needed to be the first to practice it in that particular place. The idea of specification – the requirement to describe the invention to the public in order to get a patent – was emphasized more recently.

Fourth speaker is Kathy Strandburg (DePaul Law). She emphasizes the social structure of science, which fosters incentives to create that are not accounted for in patent law. She argues that scientific creation is an inherently social process, with its own kind of economy of jobs and prestige. This process is pretty successful and we should be careful not to mess it up. She argues, too, that patent law doctrine hasn’t accounted adequately for innovation by users, and the tendency of users to share their innovations freely. She talks about researchers as users. When researchers are designing and using tools, they are acting as both scientists and users, so both of the factors mentioned so far will operate, to make the incentive bigger than the standard story would predict. All of this argues for a robust research use exemption – a common position that seems to be emerging from several speakers so far.

Fifth and final speaker is Wesley Cohen (Duke economist). He presents his research on the impact of patents on the development and use of biotech research tools. There has been lots of concern about patenting and overly strict licensing of research tools by universities. His group did empirical research on this topic, in the biotech realm. Here are the findings. (1) Few scientists actually check whether patents might apply to them, even when their institutions tell them to check. (2) When scientists were aware of a patent they needed to license, licenses were almost always available at no cost. (3) Only rarely do scientists change their research direction because of concern over others’ patents. (4) Though patents have little impact, the need to get research materials is a bigger impediment (scientists couldn’t get a required input 20% of the time), and leads more often to changes in research direction because of inability to get materials. (5) When scientists withheld materials from their peers, the most common reasons were (a) research business activity related to the material, and (b) competition between scientists. His bottom-line conclusion: “law on the books is not the same as law in action”.

Now for the Q&A. Several questions to Wes Cohen about the details of his study results. Yochai Benkler asks, in light of the apparent practical irrelevance of patents in biotech research, what would happen if the patent system started applying strongly to that research. Wes Cohen answers that this is not so likely to happen, because there is a norm of reciprocity now, and there will still be a need to maintain good relations between different groups and institutions. It seems to me that he isn’t arguing that Benkler’s hypothetical wouldn’t be harmful, just that the hypo is unlikely to happen. (Guy in the row behind me just fell asleep. I think the session is pretty interesting…)

After lunch, we have a speech by Sergio Sa Leitao, Brazil’s Minister of Cultural Policies. He speaks in favor of cultural diversity – “a read-only culture is not right for Brazil” – and how to reconcile it with IP. His theme is the need to face up to reality and figure out how to cope with changes brought on by technology. He talks specifically about the music industry, saying that they lost precious time trying to maintain a business model that was no longer relevant. He gives some history of IP diplomacy relating to cultural diversity, and argues for continued attention to this issue in international negotiations about IP policy. He speaks in favor of a UNESCO convention on cultural diversity.

In the last session of the day, I’ll be attending a panel on compulsory licensing. I’ll be on the panel, actually, so I won’t be liveblogging.


Ed Talks in SANE

Today, I gave a keynote at the SANE (System Administration and Network Engineering) conference, in Delft, the Netherlands. SANE has an interesting group of attendees, mostly high-end system and network jockeys, and people who like to hang around with them.

At the request of some attendees, I am providing a PDF of my slides, with a few images redacted to placate the copyright gods.

The talk was a quick overview of what I used to think of as the copyfight, but I now think of as the technologyfight. The first part of the talk set the stage, using two technologies as illustrations: the VCR, and Sony-BMG’s recent copy-protected CDs. I then switched gears and talked about the political/regulatory side of the techfight.

In the last part of the talk, I analogized the techfight to the Cold War. I did this with some trepidation, as I didn’t want to imply that the techfight is just like the Cold War or that it is as important as the Cold War was. But I think that the Cold War analogy is useful in thinking about the techfight.

The analogy works best in suggesting a strategy for those on the openness/technology/innovation/end-to-end side of the techfight. In the talk, I used the Cold War analogy to suggest a three-part strategy.

Part 1 is to contain. The West did not seek to win the Cold War by military action; instead it tried to contain the other side militarily so as to win in other ways. Similarly, the good guys in the techfight will not win with lawyers; but lawyers must be used when necessary to contain the other side. Kennan’s definition of containment is apt: “a long-term, patient but firm and vigilant containment of [the opponent’s] expansive tendencies”.

Part 2 is to explain. This means trying to influence public opinion by explaining the benefits of an open and free environment (in the Cold War, an open and free society) and by rebutting the other side’s arguments in favor of a more constraining, centrally planned system.

Part 3 is to create. Ultimately the West won the Cold War because people could see that ordinary citizens in the West had better, more creative, more satisfying lives. Similarly, the best strategy in the techfight is simply to show what technology can do – how it can improve the lives of ordinary citizens. This will be the decisive factor.

In the break afterward, somebody referred to a P.J. O’Rourke quote to the effect that the West won the Cold War because it, unlike its opponents, could provide its citizens with comfortable shoes. (If you’re the one who told me this, please remind me of your name.) No doubt O’Rourke was exaggerating for comic effect, but he did capture something important about the benefits of a free society and, by analogy, of a free and open technology ecosystem.

Another American approached me afterward and said that by talking about the Cold War as having been won by one side and lost by the other, I was portraying myself, to the largely European audience, as the stereotypical conservative American. I tried to avoid giving this impression (so as not to distract from my message), calling the good side of the Cold War “the West” and emphasizing the cultural rather than military aspects of the Cold War. I had worried a little about how people would react to my use of the Cold War analogy, but ultimately I decided that the analogy was just too useful to pass up. I think it worked.

All in all, it was great fun to meet the SANE folks and see Delft. Now back to real life.


ICANN Says No to .xxx

Susan Crawford reports that the ICANN board has voted not to proceed with creation of the .xxx domain. Susan, who is on ICANN’s board but voted against the decision, calls it a “low point” in ICANN’s history.

[Background: ICANN is a nonprofit organization that administers the Domain Name System (DNS), which translates human-readable Internet names like "www.freedom-to-tinker.com" into the numeric IP addresses like 192.168.1.4 that are actually used by the Internet Protocol. Accordingly, part of ICANN's job is to decide on the creation and management of new top-level domains like .info, .travel, and so on.]
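In code terms, the DNS behaves like a vast, distributed lookup table from names to addresses. Here is a toy Python sketch, reusing the example name and address from the bracketed note above; real resolution of course queries a hierarchy of name servers rather than a local dictionary.

```python
# Toy stand-in for the DNS: a single table mapping human-readable names
# to numeric addresses. The entry reuses the example from the text.
TOY_DNS = {
    "www.freedom-to-tinker.com": "192.168.1.4",
}

def resolve(name):
    """Return the numeric address for a name, or None if unknown."""
    return TOY_DNS.get(name)

print(resolve("www.freedom-to-tinker.com"))  # → 192.168.1.4
```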

ICANN had decided, some time back, to move toward a .xxx domain for adult content. The arrangements for .xxx seemed to be ready, but now ICANN has pulled the plug. The reason, apparently, is that the ICANN board was worried that ICM, the company that would have run .xxx, could not ensure that sites in the domain complied with all applicable laws. Note that this is a different standard than other domain managers would have to meet – nobody expects the managers of .com to ensure, proactively, that .com sites obey all of the national laws that might apply to them. And of course we all know why the standard was different: governments are touchy about porn.

Susan argues that the .xxx decision is a departure from ICANN’s proper role.

ICANN’s mission is to coordinate the allocation of domain names and numbers, while preserving the operational stability, reliability, and global interoperability of the Internet. The vision of a non-governmental body managing key (but narrow) technical coordination functions for the Internet remains in my view the approach most likely to reflect the needs of the Internet community.

[...]

I am not persuaded that there is any good technical-competency or financial-competency reason not to [proceed with .xxx].

The vision here is of ICANN as a technocratic standard-setter, not a policy-maker. But ICANN, in setting the .xxx process in motion, had already made a policy decision. As I wrote last year, ICANN had decided to create a top-level domain for adult content, when there wasn’t one for (say) religious organizations, or science institutes. ICANN has put itself in the position of choosing which kinds of domains will exist, and for what purposes. Here is Susan again:

ICANN’s current process for selecting new [top-level domains], and the artificial scarcity this process creates, continues to raise procedural concerns that should be avoided in the future. I am not in favor of the “beauty contest” approach taken by ICANN thus far, which relies heavily on relatively subjective and arbitrary criteria, and not enough on the technical merits of the applications. I believe this subjective approach generates conflict and is damaging to the technically-focused, non-governmental, bottom-up vision of ICANN activity. Additionally, both XXX and TEL raise substantial concerns about the merits of continuing to believe that ICANN has the ability to choose who should “sponsor” a particular domain or indeed that “sponsorship” is a meaningful concept in a diverse world. These are strings we are considering, and how they are used at the second level in the future and by whom should not be our concern, provided the entity responsible for running them continues to comply with global consensus policies and is technically competent.

We need to adopt an objective system for the selection of new [top-level domains], through creating minimum technical and financial requirements for registries. Good proposals have been put forward for improving this process, including the selection of a fixed number annually by lottery or auction from among technically-competent bidders.

One wonders what ICANN was thinking when it set off down the .xxx path in the first place. Creating .xxx was pretty clearly a public policy decision – though one might argue about that decision’s likely effects, it was clearly not a neutral standards decision. The result, inevitably, was pressure from governments to reverse course, and a lose-lose choice between losing face by giving in to government pressure, on the one hand, and ignoring governments’ objections and thereby strengthening the forces that would replace ICANN with some kind of government-based policy agency, on the other.

We can only hope that ICANN will learn from its .xxx mistake and think hard about what it is for and how it can pursue its legitimate goals.

avatar

Report Claims Very Serious Diebold Voting Machine Flaws

[This entry was written by Avi Rubin and Ed Felten.]

A report by Harri Hursti, released today at BlackBoxVoting, describes some very serious security flaws in Diebold voting machines. These are easily the most serious voting machine flaws we have seen to date – so serious that Hursti and BlackBoxVoting decided to redact some of the details in the report. (We know most or all of the redacted information.) Now that the report has been released, we want to help people understand its implications.

Replicating the report’s findings would require access to a Diebold voting machine, and some time, so we have not been able to do so ourselves. However, the report is consistent with everything we know about how these voting machines work, and we find it very plausible. Assuming the report is accurate, we want to summarize its lessons for voters and election administrators.

Implications of the Report’s Findings

The attacks described in Hursti’s report would allow anyone who had physical access to a voting machine for a few minutes to install malicious software code on that machine, using simple, widely available tools. The malicious code, once installed, would control all of the functions of the voting machine, including the counting of votes.

Hursti’s findings suggest the possibility of other attacks, not described in his report, that are even more worrisome.

In addition, compromised machines would be very difficult to detect or to repair. The normal procedure for installing software updates on the machines could not be trusted, because malicious code could cause that procedure to report success, without actually installing any updates. A technician who tried to update the machine’s software would be misled into thinking the update had been installed, when it actually had not.
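To make the verification problem concrete, here is a minimal sketch of how an update could be checked without trusting the machine’s self-report. This is our illustration, not Diebold’s actual procedure: the idea is that an auditor reads the installed image back over a channel the machine’s software cannot influence and compares cryptographic hashes. Note that even this only helps if the read-back path itself is trustworthy, which on a fully compromised machine it may not be.

```python
import hashlib

# Illustrative sketch (not Diebold's actual procedure): instead of trusting
# the machine to report "update succeeded", compare a cryptographic hash of
# the official update image against a hash of the image read back from the
# machine over an independent, trusted channel.
def update_verified(expected_image: bytes, read_back_image: bytes) -> bool:
    """True iff the image read back from the machine matches the update."""
    return (hashlib.sha256(expected_image).digest()
            == hashlib.sha256(read_back_image).digest())

official = b"voting firmware v2.0"
print(update_verified(official, official))                          # True
print(update_verified(official, official + b" + malicious patch"))  # False
```

The point of the sketch is that a bare success message from the machine proves nothing; only an out-of-band comparison of what is actually installed can.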

On election day, malicious software could refuse to function, or it could silently miscount votes.

What can we do now?

Election officials are in a very tough spot with this latest vulnerability. Since exploiting the weakness requires physical access to a machine, physical security is of the utmost importance. All Diebold AccuVote machines should be sequestered and kept under vigilant watch. This measure is not perfect, because some machines may already be compromised, and if the compromise was carried out by a clever attacker, there may be no way to tell. Worse yet, the usual method of patching software problems cannot be trusted in this case.

Where possible, precincts planning on using these machines should consider making paper backup systems available to prepare for the possibility of widespread failures on election day. The nature of this technology is that there is really no remedy from a denial of service attack, except to have a backup system in place. While voter verified paper trails and proper audit can be used to protect against incorrect results from corrupt machines, they cannot prevent an attack that renders the machines non-functional on election day.

Computer scientists have long criticized the use of general-purpose computers as voting machines, and this latest vulnerability highlights the reasoning behind that position. The attack is possible due to the very nature of the hardware on which the systems run, yet several high-profile studies failed to uncover it. With current technology, there is no way to account for all the ways a system might be vulnerable, and the discovery of a problem of this magnitude in the midst of primary season is the kind of scenario we have feared all along.

Timeline and Perspective

This is not the first time Diebold has faced serious security issues – though this problem appears to be the worst of them all. Here is a capsule history of Diebold security studies:

2001: Doug Jones produces a report highlighting design flaws in the machines that became the Diebold touchscreen voting machines.
July 24, 2003: Hopkins/Rice study finds many security flaws in Diebold machines, including ones that were pointed out by Doug Jones.
September 24, 2003: SAIC study finds serious flaws in Diebold voting machines. 2/3 of the report is redacted by the state of Maryland.
November 21, 2003: Ohio’s Compuware and InfoSentry reports find critical flaws in Diebold touchscreen voting machines.
January 20, 2004: RABA study finds serious security vulnerabilities in Diebold touchscreen voting machines.
November 2004: 37 states use Diebold touchscreen voting machines in the general election.
March 2006: Harri Hursti reports the most serious vulnerabilities discovered to date.

None of the previously published studies uncovered this flaw. Did SAIC find it? The answer may lie in the unredacted version of its report, but to date nobody outside of Maryland officials and SAIC has been able to see that report.

We believe that the question of whether DREs based on commodity hardware and operating systems should ever be used in elections needs serious consideration by government and election officials. As computer security experts, we believe that the known dangers and potentially unknown vulnerabilities are too great. We should not put ourselves in a position where, in the middle of primary season, the security of our voting systems comes into credible and legitimate question.

avatar

Twenty-First Century Wiretapping: Recording

Yesterday I started a thread on new wiretapping technologies, and their policy implications. Today I want to talk about how we should deal with the ability of governments to record and store huge numbers of intercepted messages.

In the old days, before huge, cheap digital storage devices existed, government would record an intercepted message only if it was likely to listen to that message eventually. Human analysts’ time was scarce, but recording media were costly too, so the cost of storage tended to limit the amount of recording.

Before too much longer, Moore’s Law will enable government to record every email and phone call it knows about, and to keep the recordings forever. The cost of storage will no longer be a limiting factor. Indeed, if storage is nearly free but analysts’ time is costly, then the cost-minimizing strategy is to record everything and sort it out later, rather than spending analyst time deciding in advance what to record.
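A back-of-envelope comparison makes the economics vivid. The numbers below are entirely invented for illustration; only the rough comparison between the two strategies matters.

```python
# All figures are invented placeholders; only the comparison is the point.
calls_per_day = 1_000_000
megabytes_per_call = 1                 # assume ~1 MB per recorded call
storage_cost_per_gb = 0.05             # dollars per gigabyte

# Strategy 1: record everything, sort it out later.
record_all_cost = calls_per_day * megabytes_per_call / 1000 * storage_cost_per_gb

# Strategy 2: pay analysts to decide, in advance, what to record.
analyst_cost_per_hour = 50             # dollars
seconds_to_triage_one_call = 10
triage_cost = (calls_per_day * seconds_to_triage_one_call / 3600
               * analyst_cost_per_hour)

print(f"record everything: ${record_all_cost:,.0f}/day")   # ~$50/day
print(f"triage first:      ${triage_cost:,.0f}/day")       # ~$139,000/day
```

Under any assumptions in this ballpark, recording everything is orders of magnitude cheaper than paying humans to filter first, which is exactly the pressure the post describes.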

Of course the government’s cost is not the only criterion that wiretap policy should consider. We also need to consider the effect on citizens.

Any nontrivial wiretap policy will sometimes eavesdrop on innocent citizens. Indeed, there is a plausible argument that a well-designed wiretap policy will mostly eavesdrop on innocent citizens. If we knew in advance, with certainty, that a particular communication would be part of a terrorist plot, then of course we would let government listen to that communication. But such certainty only exists in hypotheticals. In practice, the best we can hope for is that, based on the best available information, there is some known probability that the message will be part of a terrorist plot. If that probability is just barely less than 100%, we’ll be comfortable allowing eavesdropping on that message. If the probability is infinitesimal, we won’t allow eavesdropping. Somewhere in the middle there is a threshold probability, just high enough that we’re willing to allow eavesdropping. We’ll make the decision by weighing the potential benefit of hearing the bad guys’ conversations, against the costs and harms imposed by wiretapping, in light of the probability that we’ll overhear real bad guys. The key point here is that even the best wiretap policy will sometimes listen in on innocent people.
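The threshold reasoning above is just an expected-value calculation, and can be sketched directly. The benefit and harm figures are hypothetical placeholders, not real estimates; the sketch only shows the structure of the decision rule.

```python
# Hypothetical decision rule: eavesdrop when the expected benefit of hearing
# a real plot outweighs the expected harm of spying on an innocent
# conversation. Benefit/harm values are illustrative placeholders.
def should_eavesdrop(p_plot: float, benefit: float, harm: float) -> bool:
    """True iff the expected benefit of listening exceeds the expected harm."""
    return p_plot * benefit > (1 - p_plot) * harm

def threshold(benefit: float, harm: float) -> float:
    """The probability of a plot above which eavesdropping is worthwhile."""
    return harm / (benefit + harm)

# If catching a plot is worth 1000 units and wrongly eavesdropping costs 1,
# the threshold probability is low -- but never zero:
print(threshold(1000.0, 1.0))                  # ~0.000999
print(should_eavesdrop(0.5, 1000.0, 1.0))      # True
print(should_eavesdrop(0.0001, 1000.0, 1.0))   # False
```

Notice that whenever the threshold is below 100%, the rule will sometimes approve listening to conversations that turn out to be innocent, which is the key point of the paragraph above.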

(For now, I’m assuming that “we” have access to the best possible information, so that “we” can make these decisions. In practice the relevant information may be closely held (perhaps with good reason) and it matters greatly who does the deciding. I know these issues are important. But please humor me and let me set them aside for a bit longer.)

The drawbacks of wiretapping come in several flavors:
(1) Cost: Wiretapping costs money.
(2) Mission Creep: The scope of wiretapping programs (arguably) tends to increase over time, so today’s reasonable, well-balanced program will lead to tomorrow’s overreach.
(3) Abuse: Wiretaps can be (and have been) misused, by improperly spying on innocent people such as political opponents of the wiretappers, and by misusing information gleaned from wiretaps.
(4) Privacy Threat: Ordinary citizens will feel less comfortable, and will speak more cautiously, knowing that wiretappers might be listening.

Cheap, high capacity storage reduces the first drawback (cost) but increases all the others. The risk of abuse seems particularly serious. If government stores everything from now on, corrupt government officials, especially a few years down the road, will have tremendous power to peer into the lives of people they don’t like.

This risk is reason enough to insist that recording be limited, and that there be procedural safeguards against overzealous recording. What limits and safeguards are appropriate? That’s the topic of my next post.