November 21, 2024

Botnet Briefing

Yesterday I spoke at a Washington briefing on botnets. The event was hosted by the Senate Science and Technology Caucus, and sponsored by ACM and Microsoft. Along with opening remarks by Senators Pryor and Bennett, there were short briefings by me, Phil Reitinger of Microsoft, and Scott O’Neal of the FBI.

(Botnets are coordinated computer intrusions, where the attacker installs a long-lived software agent or “bot” on many end-user computers. After being installed, the bots receive commands from the attacker through a command-and-control mechanism. You can think of bots as a more advanced form of the viruses and worms we saw previously.)
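A minimal sketch may clarify the architecture: the bot repeatedly polls a command-and-control channel and dispatches whatever instruction it receives to a handler. This is a deliberately harmless illustration, not real bot code; the fetch function, the command names, and the handlers are hypothetical stand-ins for whatever channel (IRC, HTTP, peer to peer) an actual botnet would use.

```python
# Simplified sketch of a bot's command-and-control (C&C) loop, for
# illustration only. The fetch function and command names are hypothetical;
# real bots receive commands over IRC, HTTP, or peer-to-peer protocols.

def run_bot(fetch_command, handlers, max_polls=1):
    """Poll the C&C channel and dispatch each command to a handler.

    fetch_command: callable returning the next command string, or None.
    handlers: dict mapping command names to handler functions.
    """
    results = []
    for _ in range(max_polls):
        cmd = fetch_command()          # contact the C&C channel
        if cmd is None:
            break                      # no pending instructions; go idle
        handler = handlers.get(cmd)
        if handler is not None:
            results.append(handler())  # carry out the command
    return results

# A harmless stand-in for the C&C channel: one queued "report" command.
queue = iter(["report"])
handlers = {"report": lambda: "status: idle"}
print(run_bot(lambda: next(queue, None), handlers, max_polls=5))
# prints ['status: idle']
```

The point of the sketch is the long-lived, attacker-directed loop: unlike a classic virus, the bot does nothing on its own until the botmaster issues a command.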

Botnets are a serious threat, but as usual in cybersecurity there is no obvious silver bullet against them. I gave a laundry list of possible anti-bot tactics, including a mix of technical, law enforcement, and policy approaches.

Phil Reitinger talked about Microsoft’s anti-botnet activities. These range from general efforts to improve software security, to distribution of patches and malicious code removal tools, to investigation of specific bot attacks. I was glad to hear him call out the need for basic research on computer security.

Scott O’Neal talked about the FBI’s fight against botnets, which he said followed the Bureau’s historical pattern in dealing with new types of crime. At first, they responded to specific attacks by investigating and trying to identify the perpetrators. Over time they have adopted new tactics, such as infiltrating the markets and fora where botmasters meet. Though he didn’t explicitly prioritize the different types of botnet (mis)use, it was clear that commercially motivated denial-of-service attacks were prominent in his mind.

Much of the audience consisted of Senate and House staffers, who are naturally interested in possible legislative approaches to the botnet problem. Beyond seeing that law enforcement has adequate resources, there isn’t much that needs to be done. Current laws such as the Computer Fraud and Abuse Act, and anti-fraud and anti-spam laws, already cover botnet attacks. The hard part is catching the bad guys in the first place.

The one legislative suggestion we heard was to reduce the threshold for criminal violation in the Computer Fraud and Abuse Act. Using computers without authorization is a crime, but there are threshold requirements to make sure that trivial offenses can’t bring down the big hammer of felony prosecution.

The concern is that a bad guy who breaks into a large number of computers and installs bots, but hasn’t yet used the bots to do harm, might be able to escape prosecution. He could still be prosecuted if certain types of bad intent can be proved, but where that is not possible he arguably might not meet the $5000 damage threshold. The law might be changed to allow prosecution when some designated number of computers is affected.

Paul Ohm has expressed skepticism about this kind of proposal. He points to a tendency to base cybersecurity policy on anecdote and worst-case predictions, even though a great deal of preventable harm is caused by simpler, more mundane attacks.

I’d like to see more data on how big a problem the current CFAA thresholds are. How many real bad guys have escaped CFAA prosecution? Of those who did, how many could have been prosecuted for other, equally serious violations? With data in hand, the cost-benefit tradeoffs in amending the CFAA will be easier to evaluate.

Senator Bennett, in his remarks, characterized cybersecurity as a long-term fight. “You guys have permanent job security…. You’re working on a problem that will never be solved.”

Art of Science, and Princeton Privacy Panel

Today I want to recommend two great things happening at Princeton, one of which is also on the Net.

Princeton’s second annual Art of Science exhibit was unveiled recently, and it’s terrific, just like last year. Here’s some background, from the online exhibit:

In the spring of 2006 we again asked the Princeton University community to submit images—and, for the first time, videos and sounds—produced in the course of research or incorporating tools and concepts from science. Out of nearly 150 entries from 16 departments, we selected 56 works to appear in the 2006 Art of Science exhibition.

The practices of science and art both involve the single-minded pursuit of those moments of discovery when what one perceives suddenly becomes more than the sum of its parts. Each piece in this exhibition is, in its own way, a record of such a moment. They range from the image that validates years of research, to the epiphany of beauty in the trash after a long day at the lab, to a painter’s meditation on the meaning of biological life.

You can view the exhibit online, but the best way to see it is in person, in the main hallway of the Friend Center on the Princeton campus. One of the highlights is outdoors: a fascinating metal object that looks for all the world like a modernist sculpture but was actually built as a prototype winding coil for a giant electromagnet that will control superhot plasma in a fusion energy experiment. (The online photo doesn’t do it justice.)

If you’re on the Princeton campus on Friday afternoon (June 2), you’ll want to see the panel discussion on “Privacy and Security in the Digital Age”, which I’ll be moderating. We have an all-star group of panelists:
* Dave Hitz (Founder, Network Appliance)
* Paul Misener (VP for Global Public Affairs, Amazon)
* Harriet Pearson (Chief Privacy Officer, IBM)
* Brad Smith (Senior VP and General Counsel, Microsoft)
It’s in 006 Friend, just downstairs from the Art of Science exhibit, from 2:00 to 3:00 on Friday.

These panelists are just a few of the distinguished Princeton alumni who will be on campus this weekend for Reunions.

Princeton-Microsoft IP Conference Liveblog

Today I’m at the Princeton-Microsoft Intellectual Property Conference. I’ll be blogging some of the panels as they occur. There are parallel sessions, and I’m on one panel, so I can’t cover everything.

The first panel is on “Organizing the Public Interest”. Panelists are Yochai Benkler, David Einhorn, Margaret Hedstrom, Larry Lessig, and Gigi Sohn. The moderator is Paul Starr.

Yochai Benkler (Yale Law) speaks first. He has two themes: the decentralization of creation, and the emergence of a political movement around that creation. He sees the possibility of altering the politics in three ways. First, the changing relationship between creators and users, and the growth in the number of creators, changes how people relate to the rules. Second, we see existence proofs of the possible success of decentralized production: Linux, Skype, Flickr, Wikipedia. Third, there is a shift away from centralized, mass, broadcast media. He talks about political movements like free culture, Internet freedom, etc. He says these movements are coalescing and allying with each other and with other powers such as companies or nations. He is skeptical of the direct value of public reason/persuasion. He thinks instead that changing social practices will have a bigger impact in the long run.

David Einhorn (Counsel for the Jackson Laboratory, a research institution) speaks second. “I’m here to talk about mice.” Jackson Lab has lots of laboratory mice – the largest collection (community? inventory?) in the world. Fights developed around access to certain strains of mice. Gene sequences created in the lab are patentable, and research institutions are allowed to exploit those patents (even if the research was government-funded). This has led to some problems. There is an inherent tension between patent exploitation and other goals of universities (creation and open dissemination of knowledge). Lines of lab mice were patentable, and suddenly lawyers were involved in what used to be simple transfers of mice between researchers. It sounds to me like Jackson Lab is a kind of creative commons for mice. He tells stories about how patent negotiations have blocked some nonprofit research efforts.

Margaret Hedstrom (Univ. of Michigan) speaks third. She talks about the impact of IP law on libraries and archives, and how those communities have organized themselves. In the digital world, there has been a shift from buying copies of materials, to licensing materials – a shift from the default copyright rules to the rules that are in the license. This means, for instance, that libraries may not be able to lend out material, or may not be able to make archival copies. Some special provisions in the law apply to libraries and archives, but not to everybody who does archiving (e.g., the Internet Archive is in the gray area). The orphan works problem is a big deal for libraries and archives, and they are working to chip away at this and other narrow legal issues. They are also talking to academic authors, urging them to be more careful about which rights they assign to journals who publish their articles.

Larry Lessig (Stanford Law) speaks fourth. He starts by saying that most of his problems are caused by his allies, but his opponents are nicer and more predictable in some ways. Why? (1) Need to unite technologists and lawyers. (2) Need to unite libertarians and liberals. Regarding tech and law, the main conflict is about what constitutes success. He says technologists want 99.99% success, lawyers are happy with 60%. (I don’t think this is quite right.) He says that fair use and network neutrality are essentially the same issue, but they’re handled inconsistently. He dislikes the fair use system (though he likes fair use itself) because the cost and uncertainty of the system bias so strongly against use without permission, even when those uses ought to be fair – people don’t want to be right, they want to avoid having suits filed against them. Net neutrality, he says, is essentially the same problem as fair use, because it is about how to limit the ability of property owners who have monopoly power (i.e., copyright owners or ISPs) to use their monopoly property rights against the public interest. The challenge is how to keep the coalition together while addressing these issues.

Gigi Sohn (PublicKnowledge) is the last speaker. Her topic is “what it’s like to be a public interest advocate on the ground.” PublicKnowledge plays a key role in doing this, as part of a larger coalition. She lists six strategies that are used in practice to change the debate: (1) day to day, face to face advocacy with policymakers; (2) coalition-building with other NGOs, such as Consumers Union, librarians, etc., and especially industry (different sectors on different issues); (3) message-building, both push and pull communications; (4) grassroots organizing; (5) litigation, on offense and defense (with a shout-out to EFF); (6) working with scholars to build a theoretical framework on these topics. How has it worked? “We’ve been very good at stopping bad things”: broadcast flag, analog hole, database protection laws, etc. She says they/we haven’t been so successful at making good things happen.

Time for Q&A. Tobias Robison (“Precision Blogger”) asks Gigi how to get the financial clout needed to continue the fight. Gigi says it’s not so expensive to play defense.

Sandy Thatcher (head of Penn State University Press) asks how to reconcile the legitimate needs of copyright owners with their advocacy for narrower copyright. He suggests that university presses need the DMCA to survive. (I want to talk to him about that later!) Gigi says, as usual, that PK is interested in balance, not in abolishing the core of copyright. Margaret Hedstrom says that university presses are in a tough spot, and we don’t need to have as many university presses as we have. Yochai argues that university presses shouldn’t act just like commercial presses – if university presses are just like commercial presses why should universities and scholars have any special loyalty to them?

Anne-Marie Slaughter (Dean of the Woodrow Wilson School at Princeton) suggests that some people will be willing to take less money in exchange for the psychic satisfaction of helping people by spreading knowledge. She suggests that this is a way of showing leadership. Larry Lessig answers by arguing that many people, especially those with smaller market share, can benefit financially from allowing more access. Margaret Hedstrom gives another example of scholarly books released permissively, leading to more sales.

Wes Cohen from Duke University asserts that IP rulings (like Madey v. Duke, which vastly narrowed the experimental use exception in patent law) have had relatively little impact on the day-to-day practice of scientific research. He asks David Einhorn whether this matches his experience. David E. says that bench scientists “are going to do what they have always done” and people are basically ignoring these rules, just hoping that if one research organization does sue another, the damages will be small anyway. But, he says, the law intrudes when one organization has to get research materials from another. He argues that this is a bad thing, especially when (as in most biotech research) both organizations are funded by the same government agency. Bill [didn’t catch the last name], who runs tech transfer for the University of California, says that there have been problems getting access to stem cell lines.

The second panel is on the effect of patent law. Panelists are Kathy Strandburg, Susan Mann, Wesley Cohen, Stephen Burley, and Mario Biagioli. Moderator is Rochelle Dreyfuss.

First speaker is Susan Mann (Director of IP Policy, or something like that) at Microsoft. She talks about the relation between patent law and the structure of the software industry. She says people tend not to realize how the contours of patent law shape how companies develop and design products. She gives a chronology of when and why patent law came to be applied to software. She argues that patents are better suited than copyright and trade secret for certain purposes, because patents are public, are only protected if novel and nonobvious, apply to methods of computation, and are more amenable to use in standards. She advocates process-oriented reforms to raise patent quality.

Stephen Burley (biotech researcher and entrepreneur) speaks second. He tells some stories about “me-too drugs”. Example: one of the competitors of Viagra differs from the Viagra molecule by only one carbon atom. Because of the way the Viagra patent is written, the competitor could make their drug without licensing the Viagra patent. You might think this is pure free-riding, but in fact even these small differences have medical significance – in this case the drugs have the same primary effect but different side-effects. He tells another story where a new medical test cannot be independently validated by researchers because they can’t get a patent license. Here the patent is being used to prevent would-be customers from finding out about the quality of a product. (To a computer security researcher, this story sounds familiar.) He argues that the relatively free use of tools and materials in research has been hugely valuable.

Third speaker is Mario Biagioli (Harvard historian). He says that academic scientists have always been interested in patenting inventions, going back to Galileo, the Royal Society, Pascal, Huygens, and others. Galileo tried to patent the telescope. Early patents were given, not necessarily to inventors, but often to expert foreigners to give them an incentive to move. You might give a glassmaking patent to a Venetian glassmaker to give him an incentive to set up business in your city. Little explanation of how the invention worked was required, as long as the device or process produced the desired result. Novelty was not required. To get a patent, you didn’t need to invent something, you only needed to be the first to practice it in that particular place. The idea of specification – the requirement to describe the invention to the public in order to get a patent – was emphasized more recently.

Fourth speaker is Kathy Strandburg (DePaul Law). She emphasizes the social structure of science, which fosters incentives to create that are not accounted for in patent law. She argues that scientific creation is an inherently social process, with its own kind of economy of jobs and prestige. This process is pretty successful and we should be careful not to mess it up. She argues, too, that patent law doctrine hasn’t accounted adequately for innovation by users, and the tendency of users to share their innovations freely. She talks about researchers as users. When researchers are designing and using tools, they are acting as both scientists and users, so both of the factors mentioned so far will operate, making the incentive bigger than the standard story would predict. All of this argues for a robust research use exemption – a common position that seems to be emerging from several speakers so far.

Fifth and final speaker is Wesley Cohen (Duke economist). He presents his research on the impact of patents on the development and use of biotech research tools. There has been lots of concern about patenting and overly strict licensing of research tools by universities. His group did empirical research on this topic, in the biotech realm. Here are the findings. (1) Few scientists actually check whether patents might apply to them, even when their institutions tell them to check. (2) When scientists were aware of a patent they needed to license, licenses were almost always available at no cost. (3) Only rarely do scientists change their research direction because of concern over others’ patents. (4) Though patents have little impact, the need to get research materials is a bigger impediment (scientists couldn’t get a required input 20% of the time), and leads more often to changes in research direction because of inability to get materials. (5) When scientists withheld materials from their peers, the most common reasons were (a) research business activity related to the material, and (b) competition between scientists. His bottom-line conclusion: “law on the books is not the same as law in action”.

Now for the Q&A. Several questions to Wes Cohen about the details of his study results. Yochai Benkler asks, in light of the apparent practical irrelevance of patents in biotech research, what would happen if the patent system started applying strongly to that research. Wes Cohen answers that this is not so likely to happen, because there is a norm of reciprocity now, and there will still be a need to maintain good relations between different groups and institutions. It seems to me that he isn’t arguing that Benkler’s hypothetical wouldn’t be harmful, just that the hypo is unlikely to happen. (Guy in the row behind me just fell asleep. I think the session is pretty interesting…)

After lunch, we have a speech by Sergio Sa Leitao, Brazil’s Minister of Cultural Policies. He speaks in favor of cultural diversity – “a read-only culture is not right for Brazil” – and how to reconcile it with IP. His theme is the need to face up to reality and figure out how to cope with changes brought on by technology. He talks specifically about the music industry, saying that it lost precious time trying to maintain a business model that was no longer relevant. He gives some history of IP diplomacy relating to cultural diversity, and argues for continued attention to this issue in international negotiations about IP policy. He speaks in favor of a UNESCO convention on cultural diversity.

In the last session of the day, I’ll be attending a panel on compulsory licensing. I’ll be on the panel, actually, so I won’t be liveblogging.

Ed Talks in SANE

Today, I gave a keynote at the SANE (System Administration and Network Engineering) conference, in Delft, the Netherlands. SANE has an interesting group of attendees, mostly high-end system and network jockeys, and people who like to hang around with them.

At the request of some attendees, I am providing a PDF of my slides, with a few images redacted to placate the copyright gods.

The talk was a quick overview of what I used to think of as the copyfight, but I now think of as the technologyfight. The first part of the talk set the stage, using two technologies as illustrations: the VCR, and Sony-BMG’s recent copy-protected CDs. I then switched gears and talked about the political/regulatory side of the techfight.

In the last part of the talk, I analogized the techfight to the Cold War. I did this with some trepidation, as I didn’t want to imply that the techfight is just like the Cold War or that it is as important as the Cold War was. But I think that the Cold War analogy is useful in thinking about the techfight.

The analogy works best in suggesting a strategy for those on the openness/technology/innovation/end-to-end side of the techfight. In the talk, I used the Cold War analogy to suggest a three-part strategy.

Part 1 is to contain. The West did not seek to win the Cold War by military action; instead it tried to contain the other side militarily so as to win in other ways. Similarly, the good guys in the techfight will not win with lawyers; but lawyers must be used when necessary to contain the other side. Kennan’s definition of containment is apt: “a long-term, patient but firm and vigilant containment of [the opponent’s] expansive tendencies”.

Part 2 is to explain. This means trying to influence public opinion by explaining the benefits of an open and free environment (in the Cold War, an open and free society) and by rebutting the other side’s arguments in favor of a more constraining, centrally planned system.

Part 3 is to create. Ultimately the West won the Cold War because people could see that ordinary citizens in the West had better, more creative, more satisfying lives. Similarly, the best strategy in the techfight is simply to show what technology can do – how it can improve the lives of ordinary citizens. This will be the decisive factor.

In the break afterward, somebody referred to a P.J. O’Rourke quote to the effect that the West won the Cold War because it, unlike its opponents, could provide its citizens with comfortable shoes. (If you’re the one who told me this, please remind me of your name.) No doubt O’Rourke was exaggerating for comic effect, but he did capture something important about the benefits of a free society and, by analogy, of a free and open technology ecosystem.

Another American approached me afterward and said that by talking about the Cold War as having been won by one side and lost by the other, I was portraying myself, to the largely European audience, as the stereotypical conservative American. I tried to avoid giving this impression (so as not to distract from my message), calling the good side of the Cold War “the West” and emphasizing the cultural rather than military aspects of the Cold War. I had worried a little about how people would react to my use of the Cold War analogy, but ultimately I decided that the analogy was just too useful to pass up. I think it worked.

All in all, it was great fun to meet the SANE folks and see Delft. Now back to real life.

Happy Endings

Cameron Wilson at the USACM Policy Blog writes about a Cato Institute event on copyright policy, which was held Wednesday. The panel on the DMCA was especially interesting. (audio download; audio stream; video stream)

Tim Lee, author of the recent Cato paper on the ill effects of the DMCA, spoke first.

The second speaker was Solveig Singleton of PFF, who offered some amazing arguments. Here is her response to the well-documented list of DMCA misuses:

Even if you set aside some of the errors in the Cato paper, you’re left with a set of examples, many of which have happy endings, without any change to the law. Ed Felten’s case, for example. There are other cases. There were lawsuits that were threatened but not brought. Lawsuits that were brought but ultimately failed. Lawsuits that succeeded but on grounds other than the DMCA.

(This is my transcription from the audio stream.)

To call the case of my colleagues and me a “happy ending” takes some real chutzpah. Let’s catalog the happy consequences of our case. One person lost his job, and another nearly did. Countless hours of pro bono lawyer time were consumed. Anonymous donors gave up large amounts of money to support our defense. I lost at least months of my professional life, and other colleagues did too. And after all this, the ending was that we were able to publish our work – something which, before the DMCA, we would have been able to do with no trouble at all.

In the end, yes, we were happy – in the same way one is happy to recover from food poisoning. Which is not really an argument in favor of food poisoning.

She goes on to argue for the efficacy of the DMCA, using the example of Apple’s FairPlay technology (which is used by the iTunes music store):

But … are they [Apple] going to be able to get music developers to the table to negotiate with them to help create this library [of music] if they can’t make some reasonable assurances that that content isn’t going to show up free everywhere else?

Never mind that all of the songs Apple sells are available for free on P2P networks, despite FairPlay and the DMCA. Never mind that FairPlay has a huge and widely known hole – the ability to burn songs to an unprotected CD – which Apple created deliberately.

It’s understandable that DMCA advocates don’t want to give a realistic, straightforward explanation of exactly why the DMCA is needed. If they tried to do so, it would become clear that the DMCA, as written, is poorly suited for their purpose. Instead, we get strawmen and arguments from counterfactual assumptions.

I’ll close with a quote from Emery Simon of the Business Software Alliance, another speaker on the same panel, making a claim so far off-base that I won’t even bother to rebut it:

[If not] for copy protection technologies, whether it’s Macrovision or CSS or Fairplay, my VCR and my television set would be devices no more useful to me than my car without gasoline.