Archives for 2006

Twenty-First Century Wiretapping: Your Dog Sees You Naked

Suppose the government were gathering information about your phone calls: who you talked to, when, and for how long. If that information were made available to human analysts, your privacy would be impacted. But what if the information were made available only to computer algorithms?

A similar question arose when Google introduced its Gmail service. When Gmail users read their mail, they see advertisements. Servers at Google select the ads based on the contents of the email messages being displayed. If the email talks about camping, the user might see ads for camping equipment. No person reads the email (other than the intended recipient) – but Google’s servers make decisions based on the email’s contents.

Some people saw this as a serious privacy problem. But others drew a line between access by people and by computers, seeing access by even sophisticated computer algorithms as a privacy non-event. One person quipped that “Worrying about a computer reading your email is like worrying about your dog seeing you naked.”

So should we worry about the government running computer algorithms on our call data? I can see two main reasons to object.

First, we might object to the government gathering and storing the information at all, even if the information is not (supposed to be) used for anything. Storing the data introduces risks of misuse, for example, that cannot exist if the data is not stored in the first place.

Second, we might object to actions triggered by the algorithms. For example, if the algorithms flag certain records to be viewed by human analysts, we might object to this access by humans. I’ll consider this issue of algorithm-triggered access in a future post – for now, I’ll just observe that the objection here is not to the access by algorithms, but to the access by humans that follows.

If these are the only objections to algorithmic analysis of our data, then it’s not the use of computer algorithms that troubles us. What really bothers us is access to our data by people, whether as part of the plan or as unplanned abuse.

If we could somehow separate the use of algorithms from the possibility of human-mediated privacy problems, then we could safely allow algorithms to crawl over our data. In practice, though, algorithmic analysis goes hand in hand with human access, so the question of how to apportion our discomfort is mostly of theoretical interest. It’s enough to object to the possible access by people, while being properly skeptical of claims that the data is not available to people.

The most interesting questions about computerized analysis arise when algorithms bring particular people and records to the attention of human analysts. That’s the topic of my next post.

Twenty-First Century Wiretapping: Storing Communications Data

Today I want to continue the post-series about new technology and wiretapping (previous posts: 1, 2, 3), by talking about what is probably the simplest case, involving gathering and storage of data by government. Recall that I am not considering what is legal under current law, which is an important issue but is beyond my expertise. Instead, I am considering the public policy question of what rules, if any, should constrain the government’s actions.

Suppose the government gathered information about all phone calls, including the calling and called numbers and the duration of the call, and then stored that information in a giant database, in the hope that it might prove useful later in criminal investigations or foreign intelligence. Unlike the recently disclosed NSA call database, which is apparently data-mined, we’ll assume that the data isn’t used immediately but is only stored until it might be needed. Under what circumstances should this be allowed?

We can start by observing that government should not have free rein to store any data it likes, because storing data, even if it is not supposed to be accessed, still imposes some privacy harm on citizens. For example, the possibility of misuse must be taken seriously where so much data is at issue. Previously, I listed four types of costs imposed by wiretapping. At least two of those costs – the risk that the information will be abused, and the psychic cost of being watched (such as wondering about “How will this look?”) – apply to stored data, even if nobody is supposed to look at it.

It follows that, before storing such data, government should have to make some kind of showing that the expected value of storing the data outweighs the harms, and that there should be some kind of plan for minimizing the harms, for example by storing the data securely (even against rogue insiders) and discarding the data after some predefined time interval.

The most important safeguard would be an enforceable promise by government not to use the data without getting further permission (and showing sufficient cause). That promise might possibly be broken, but it changes the equation nevertheless by reducing the likelihood and scope of potential misuse.

To whom should the showing of cause be made? Presumably the answer is “a court”. The executive branch agency that wanted to store data would have to convince a court that the expected value of storing the data was sufficient, in light of the expected costs (including all costs/harms to citizens) of storing it. The expected costs would be higher if data about everyone were to be stored, and I would expect a court to require a fairly strong showing of significant benefit before authorizing the retention of so much data.

Part of the required showing, I think, would have to be an argument that there is not some way to store much less data and still get nearly the same benefit. An alternative to storing data on everybody is to store data only about people who are suspected of being bad guys and therefore are more likely to be targets of future investigations.

I won’t try to calibrate the precise weights to place on the tradeoff between the legitimate benefits of data retention and the costs. That’s a matter for debate, and presumably a legal framework would have to be more precise than I am. For now, I’m happy to establish the basic parameters and move on.

All of this gets more complicated when government wants to have computers analyze the stored data, as the NSA is apparently doing with phone call records. How to think about such analyses is the topic of the next post in the series.

Zfone Encrypts VoIP Calls

Phil Zimmermann, who created the PGP encryption software and faced a government investigation as a result, now offers a new program, Zfone, that provides end-to-end encryption of computer-to-computer (VoIP) phone calls, according to a story in yesterday’s New York Times.

One of the tricky technical problems in encrypting communications is key exchange: how to get the two parties to agree on a secret key that only they know. This is often done with a cumbersome “public key infrastructure” (PKI), which wouldn’t work well for this application. Zfone has a clever key exchange protocol that dispenses with the PKI and instead relies on the two people reading short character strings to each other over the voice connection. This will provide a reasonably secure shared secret key, as long as the two people recognize each other’s voices.

(Homework problem for security students: What does the string-reading accomplish? Based on just the information here, how do you think the Zfone key exchange protocol works?)
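(Spoiler of sorts for the homework problem: since the article doesn’t describe the protocol, here is a minimal sketch of one plausible design, assuming – not confirmed by the post – a Diffie-Hellman exchange authenticated by a short string derived from the exchanged values. The toy parameters and function names are mine, chosen for illustration only.)

```python
import hashlib
import secrets

# Toy Diffie-Hellman parameters for illustration only -- far too small to be
# secure. A real protocol would use a standardized large prime group or
# elliptic curves.
P = 23   # small prime modulus
G = 5    # generator

def dh_keypair():
    """Generate an ephemeral Diffie-Hellman key pair."""
    priv = secrets.randbelow(P - 2) + 1   # private exponent in [1, P-2]
    pub = pow(G, priv, P)
    return priv, pub

def short_auth_string(pub_a, pub_b, shared, digits=4):
    """Hash the exchanged public values and the shared secret down to a
    short decimal string that both parties read aloud.  A man-in-the-middle
    has to run two separate DH exchanges, so the two endpoints end up with
    different strings and the mismatch is detected -- provided each person
    recognizes the other's voice."""
    h = hashlib.sha256()
    for v in (pub_a, pub_b, shared):
        h.update(str(v).encode())
    return f"{int.from_bytes(h.digest()[:4], 'big') % 10**digits:0{digits}d}"

# Alice and Bob each pick an ephemeral key and exchange public values in-band.
a_priv, a_pub = dh_keypair()
b_priv, b_pub = dh_keypair()

# Each side computes the shared secret from the other's public value,
# then derives the short authentication string and speaks it aloud.
sas_alice = short_auth_string(a_pub, b_pub, pow(b_pub, a_priv, P))
sas_bob = short_auth_string(a_pub, b_pub, pow(a_pub, b_priv, P))
print(sas_alice, sas_bob)   # match when there is no man-in-the-middle
```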

In the middle of the article is this interesting passage:

But Mr. Zimmermann, 52, does not see those fearing government surveillance — or trying to evade it — as the primary market [for Zfone]. The next phase of the Internet’s spyware epidemic, he contends, will be software designed to eavesdrop on Internet telephone calls made by corporate users.

“They will have entire digital jukeboxes of covertly acquired telephone conversations, and suddenly someone in Eastern Europe is going to be very wealthy,” he said.

Though the article doesn’t say so directly, this passage seems to imply that Zfone can protect against spyware-based eavesdropping. That’s not right.

One of the challenges in using encryption is that the datastream is not protected before it is encrypted at the source, or after it is decrypted at the destination. If you and I are having a Zfone-protected conversation, spyware on your computer could capture your voice before it is encrypted for transmission to me, and could also capture my voice after it is decrypted on your computer. Zfone is helpless against this threat, as are other VoIP encryption schemes.

All of this points to an interesting consequence of strong encryption. As more and more communications are strongly encrypted, would-be spies have less to gain from wiretapping and more to gain from injecting malware into their targets’ computers. Yet another reason to expect a future with even more malware.

Twenty-First Century Wiretapping: Not So Hypothetical

Two weeks ago I started a series of posts (so far: 1, 2) about how new technologies change the policy issues around government wiretapping. I argued that technology changed the policy equation in two ways, by making storage much cheaper, and by enabling fancy computerized analyses of intercepted communications.

My plan was to work my way around to a carefully-constructed hypothetical that I designed to highlight these two issues – a hypothetical in which the government gathered a giant database of everybody’s phone call records and then did data mining on the database to identify suspected bad guys. I had to lay a bit more groundwork before getting to the hypothetical, but I was planning to get to it after a few more posts.

Events intervened – the “hypothetical” turned out, apparently, to be true – which makes my original plan moot. So let’s jump directly to the NSA call-database program. Today I’ll explain why it’s a perfect illustration of the policy issues in 21st century surveillance. In the next post I’ll start unpacking the larger policy issues, using the call record program as a running example.

The program illustrates the cheap-storage trend for obvious reasons: according to some sources, the NSA’s call record database is the biggest database in the world. This part of the program probably would not have been possible, within the NSA’s budget, until the last few years.

The data stored in the database is among the least sensitive (i.e., private) communications data around. This is not to say that it has no privacy value at all – all I mean is that other information, such as full contents of calls, would be much more sensitive. But even if information about who called whom is not particularly sensitive for most individual calls, the government might, in effect, make it up on volume. Modestly sensitive data, in enormous quantities, can add up to a big privacy problem – an issue that is much more important now that huge databases are feasible.

The other relevant technology trend is the use of automated algorithms, rather than people, to analyze communications traffic. With so many call records, and relatively few analysts, simple arithmetic dictates that the overwhelming majority of call records will never be seen by a human analyst. It’s all about what the automated algorithms do, and which information gets forwarded to a person.

I’ll start unpacking these issues in the next post, starting with the storage question. In the meantime, let me add my small voice to the public complaints about the NSA call record program. They ruined my beautiful hypothetical!

Princeton-Microsoft IP Conference Liveblog

Today I’m at the Princeton-Microsoft Intellectual Property Conference. I’ll be blogging some of the panels as they occur. There are parallel sessions, and I’m on one panel, so I can’t cover everything.

The first panel is on “Organizing the Public Interest”. Panelists are Yochai Benkler, David Einhorn, Margaret Hedstrom, Larry Lessig, and Gigi Sohn. The moderator is Paul Starr.

Yochai Benkler (Yale Law) speaks first. He has two themes: decentralization of creation, and emergence of a political movement around that creation. Possibility of altering the politics in three ways. First, the changing relationship between creators and users and growth in the number of creators changes how people relate to the rules. Second, we see existence proofs of the possible success of decentralized production: Linux, Skype, Flickr, Wikipedia. Third, a shift away from centralized, mass, broadcast media. He talks about political movements like free culture, Internet freedom, etc. He says these movements are coalescing and allying with each other and with other powers such as companies or nations. He is skeptical of the direct value of public reason/persuasion. He thinks instead that changing social practices will have a bigger impact in the long run.

David Einhorn (Counsel for the Jackson Laboratory, a research institution) speaks second. “I’m here to talk about mice.” Jackson Lab has lots of laboratory mice – the largest collection (community? inventory?) in the world. Fights developed around access to certain strains of mice. Gene sequences created in the lab are patentable, and research institutions are allowed to exploit those patents (even if the university was government-funded). This has led to some problems. There is an inherent tension between patent exploitation and other goals of universities (creation and open dissemination of knowledge). Lines of lab mice were patentable, and suddenly lawyers were involved whenever researchers needed to get mice. It sounds to me like Jackson Lab is a kind of creative commons for mice. He tells stories about how patent negotiations have blocked some nonprofit research efforts.

Margaret Hedstrom (Univ. of Michigan) speaks third. She talks about the impact of IP law on libraries and archives, and how those communities have organized themselves. In the digital world, there has been a shift from buying copies of materials, to licensing materials – a shift from the default copyright rules to the rules that are in the license. This means, for instance, that libraries may not be able to lend out material, or may not be able to make archival copies. Some special provisions in the law apply to libraries and archives, but not to everybody who does archiving (e.g., the Internet Archive is in the gray area). The orphan works problem is a big deal for libraries and archives, and they are working to chip away at this and other narrow legal issues. They are also talking to academic authors, urging them to be more careful about which rights they assign to the journals that publish their articles.

Larry Lessig (Stanford Law) speaks fourth. He starts by saying that most of his problems are caused by his allies, but his opponents are nicer and more predictable in some ways. Why? (1) Need to unite technologists and lawyers. (2) Need to unite libertarians and liberals. Regarding tech and law, the main conflict is about what constitutes success. He says technologists want 99.99% success, lawyers are happy with 60%. (I don’t think this is quite right.) He says that fair use and network neutrality are essentially the same issue, but they’re handled inconsistently. He dislikes the fair use system (though he likes fair use itself) because the cost and uncertainty of the system bias so strongly against use without permission, even when those uses ought to be fair – people don’t want to be right, they want to avoid having suits filed against them. Net neutrality, he says, is essentially the same problem as fair use, because it is about how to limit the ability of property owners with monopoly power (i.e., copyright owners or ISPs) to use their monopoly property rights against the public interest. The challenge is how to keep the coalition together while addressing these issues.

Gigi Sohn (Public Knowledge) is the last speaker. Her topic is “what it’s like to be a public interest advocate on the ground.” Public Knowledge plays a key role in doing this, as part of a larger coalition. She lists six strategies that are used in practice to change the debate: (1) day to day, face to face advocacy with policymakers; (2) coalition-building with other NGOs, such as Consumers Union, librarians, etc., and especially industry (different sectors on different issues); (3) message-building, both push and pull communications; (4) grassroots organizing; (5) litigation, on offense and defense (with a shout-out to EFF); (6) working with scholars to build a theoretical framework on these topics. How has it worked? “We’ve been very good at stopping bad things”: broadcast flag, analog hole, database protection laws, etc. She says they/we haven’t been so successful at making good things happen.

Time for Q&A. Tobias Robison (“Precision Blogger”) asks Gigi how to get the financial clout needed to continue the fight. Gigi says it’s not so expensive to play defense.

Sandy Thatcher (head of Penn State University Press) asks how to reconcile the legitimate needs of copyright owners with their advocacy for narrower copyright. He suggests that university presses need the DMCA to survive. (I want to talk to him about that later!) Gigi says, as usual, that PK is interested in balance, not in abolishing the core of copyright. Margaret Hedstrom says that university presses are in a tough spot, and we don’t need to have as many university presses as we have. Yochai argues that university presses shouldn’t act just like commercial presses – if university presses are just like commercial presses why should universities and scholars have any special loyalty to them?

Anne-Marie Slaughter (Dean of the Woodrow Wilson School at Princeton) suggests that some people will be willing to take less money in exchange for the psychic satisfaction of helping people by spreading knowledge. She suggests that this is a way of showing leadership. Larry Lessig answers by arguing that many people, especially those with smaller market share, can benefit financially from allowing more access. Margaret Hedstrom gives another example of scholarly books released permissively, leading to more sales.

Wes Cohen from Duke University asserts that IP rulings (like Madey v. Duke, which vastly narrowed the experimental use exception in patent law) have had relatively little impact on the day-to-day practice of scientific research. He asks David Einhorn whether this matches his experience. David E. says that bench scientists “are going to do what they have always done” and people are basically ignoring these rules, just hoping that one research organization will sue another and that damages will be small anyway. But, he says, the law intrudes when one organization has to get research materials from another. He argues that this is a bad thing, especially when (as in most biotech research) both organizations are funded by the same government agency. Bill [didn’t catch the last name], who runs tech transfer for the University of California, says that there have been problems getting access to stem cell lines.

The second panel is on the effect of patent law. Panelists are Kathy Strandburg, Susan Mann, Wesley Cohen, Stephen Burley, and Mario Biagioli. Moderator is Rochelle Dreyfuss.

First speaker is Susan Mann (Director of IP Policy, or something like that) at Microsoft. She talks about the relation between patent law and the structure of the software industry. She says people tend not to realize how the contours of patent law shape how companies develop and design products. She gives a chronology of when and why patent law came to be applied to software. She argues that patents are better suited than copyright and trade secret for certain purposes, because patents are public, are only protected if novel and nonobvious, apply to methods of computation, and are more amenable to use in standards. She advocates process-oriented reforms to raise patent quality.

Stephen Burley (biotech researcher and entrepreneur) speaks second. He tells some stories about “me-too drugs”. Example: one of the competitors of Viagra differs from the Viagra molecule by only one carbon atom. Because of the way the Viagra patent is written, the competitor could make their drug without licensing the Viagra patent. You might think this is pure free-riding, but in fact even these small differences have medical significance – in this case the drugs have the same primary effect but different side-effects. He tells another story where a new medical test cannot be independently validated by researchers because they can’t get a patent license. Here the patent is being used to prevent would-be customers from finding out about the quality of a product. (To a computer security researcher, this story sounds familiar.) He argues that the relatively free use of tools and materials in research has been hugely valuable.

Third speaker is Mario Biagioli (Harvard historian). He says that academic scientists have always been interested in patenting inventions, going back to Galileo, the Royal Society, Pascal, Huygens, and others. Galileo tried to patent the telescope. Early patents were given, not necessarily to inventors, but often to expert foreigners to give them an incentive to move. You might give a glassmaking patent to a Venetian glassmaker to give him an incentive to set up business in your city. Little explanation of how the invention worked was required, as long as the device or process produced the desired result. Novelty was not required. To get a patent, you didn’t need to invent something, you only needed to be the first to practice it in that particular place. The idea of specification – the requirement to describe the invention to the public in order to get a patent – was emphasized more recently.

Fourth speaker is Kathy Strandburg (DePaul Law). She emphasizes the social structure of science, which fosters incentives to create that are not accounted for in patent law. She argues that scientific creation is an inherently social process, with its own kind of economy of jobs and prestige. This process is pretty successful and we should be careful not to mess it up. She argues, too, that patent law doctrine hasn’t accounted adequately for innovation by users, and the tendency of users to share their innovations freely. She talks about researchers as users. When researchers are designing and using tools, they are acting as both scientists and users, so both of the factors mentioned so far will operate, making the incentive bigger than the standard story would predict. All of this argues for a robust research use exemption – a common position that seems to be emerging from several speakers so far.

Fifth and final speaker is Wesley Cohen (Duke economist). He presents his research on the impact of patents on the development and use of biotech research tools. There has been lots of concern about patenting and overly strict licensing of research tools by universities. His group did empirical research on this topic, in the biotech realm. Here are the findings. (1) Few scientists actually check whether patents might apply to them, even when their institutions tell them to check. (2) When scientists were aware of a patent they needed to license, licenses were almost always available at no cost. (3) Only rarely do scientists change their research direction because of concern over others’ patents. (4) Though patents have little impact, the need to get research materials is a bigger impediment (scientists couldn’t get a required input 20% of the time), and leads more often to changes in research direction because of inability to get materials. (5) When scientists withheld materials from their peers, the most common reasons were (a) research business activity related to the material, and (b) competition between scientists. His bottom-line conclusion: “law on the books is not the same as law in action”.

Now for the Q&A. Several questions to Wes Cohen about the details of his study results. Yochai Benkler asks, in light of the apparent practical irrelevance of patents in biotech research, what would happen if the patent system started applying strongly to that research. Wes Cohen answers that this is not so likely to happen, because there is a norm of reciprocity now, and there will still be a need to maintain good relations between different groups and institutions. It seems to me that he isn’t arguing that Benkler’s hypothetical wouldn’t be harmful, just that the hypo is unlikely to happen. (Guy in the row behind me just fell asleep. I think the session is pretty interesting…)

After lunch, we have a speech by Sergio Sa Leitao, Brazil’s Minister of Cultural Policies. He speaks in favor of cultural diversity – “a read-only culture is not right for Brazil” – and how to reconcile it with IP. His theme is the need to face up to reality and figure out how to cope with changes brought on by technology. He talks specifically about the music industry, saying that it lost precious time trying to maintain a business model that was no longer relevant. He gives some history of IP diplomacy relating to cultural diversity, and argues for continued attention to this issue in international negotiations about IP policy. He speaks in favor of a UNESCO convention on cultural diversity.

In the last session of the day, I’ll be attending a panel on compulsory licensing. I’ll be on the panel, actually, so I won’t be liveblogging.