Archives for May 2004

Must-Read Copyright Articles

Recently I read two great articles on copyright: Tim Wu’s Copyright’s Communications Policy and Mark Lemley’s Ex Ante Versus Ex Post Justifications for Intellectual Property.

Wu’s paper, which has already been praised widely in the copyright blogosphere, argues that copyright law, in addition to its well-known purpose of creating incentives for authors, has another component that amounts to a government policy on communications systems. This idea has been kicking around for some time, but Wu really nails it. His paper has a fascinating historical section describing what happened when new technologies, such as player pianos, radio, and cable TV, affected the copyright balance. In each case, after lots of legal maneuvering, a deal was cut between the incumbent industry and the challenger. Wu goes on to explain why this is the case, and what it all means for us today. There’s much more to this paper; a single paragraph can’t do it justice.

Lemley’s paper is a devastating critique of a new style of copyright-extension argument. The usual rationale for copyright is that it operates ex ante (which is lawyerspeak for beforehand): by promising authors a limited monopoly on copying and distribution of any work they might create in the future, we give them an incentive to create. After the work is created, the copyright monopoly leads to inefficiencies, but these are necessary because we have to keep our promise to the author. The goal of copyright is to keep others from free-riding on the author’s creative work.

Recently, we have begun hearing ex post arguments for copyright, saying that even for works that have already been created, the copyright monopoly is more efficient than a competitive market would be. Some of the arguments in favor of copyright term extension are of this flavor. Lemley rebuts these arguments very convincingly, showing that they (a) are theoretically unsound, (b) are contradicted by practical experience, and (c) reflect an odd anti-market, central-planning bias. Based on this description, you might think Lemley’s article is long and dense; but it’s short and surprisingly readable. (Don’t be fooled by the number of pages in the download – they’re mostly endnotes.)

Broadcast Flag for Radio

JD Lasica has an important story about an FCC proposal, backed by the recording industry, to impose a broadcast-flag mandate on the design of digital radios. As JD suggests, this issue deserves much more attention than it has gotten.

He also has copies of correspondence on this issue exchanged between RIAA president Cary Sherman and Consumer Electronics Association (CEA) CEO Gary Shapiro. Shapiro notes that this proposal directly contradicts the RIAA’s “Policy Principles on Digital Content,” which say this:

Technology and record companies believe that technical protection measures dictated by the government (legislation or regulations mandating how these technologies should be designed, function and deployed, and what devices must do to respond to them) are not practical. The imposition of technical mandates is not the best way to serve the long-term interests of record companies, technology companies, and consumers … The role of government, if needed at all, should be limited to enforcing compliance with voluntarily developed functional specifications reflecting consensus among affected interests.

The FCC’s proposal will be open for public comment between June 16 and July 16.

New Email Spying Tool

A company called didtheyreadit.com has launched a new email-spying tool that is generating some controversy, and should generate more. The company claims that its product lets you invisibly track what happens to email messages you send: how many times they are read; when, where (net address and geographic location), and for how long they are read; how many times they are forwarded, and so on.

The company has two sales pitches. They tell privacy-sensitive people that the purpose is to tell a message’s sender whether the message got through to its destination, as implied by their company name. But elsewhere, they tout the pervasiveness and invisibility of their tracking tool (from their home page: “email that you send is invisibly tracked so that recipients will never know you’re using didtheyreadit”).

Alex Halderman and I signed up for the free trial of the service, and sent tracked messages to a few people (with their consent), to figure out how the product works and how it is likely to fail in practice.

The product works by translating every tracked message into HTML format, and inserting a Web bug into the HTML. The Web bug is a one-pixel image file that is served by a web server at didtheyreadit.com. When the message recipient views the message on an HTML-enabled mailer, his viewing software will try to load the web bug image from the didtheyreadit server, thereby telling didtheyreadit.com that the email message is being viewed, and conveying the viewer’s network address, from which his geographic location may be deduced. The server responds to the request by streaming out a file very slowly (about eight bytes per second), apparently for as long as the mail viewer is willing to keep the connection open. When the user stops viewing the email message, his mail viewer gives up on loading the image; this closes the image-download connection, thereby telling didtheyreadit that the user has stopped viewing the message.
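The rewriting step described above can be sketched in a few lines of Python. This is a hypothetical illustration of the technique, not didtheyreadit's actual code; the function name is mine, and the endpoint path and `code` parameter are assumptions modeled on the example URL shown later in this post.

```python
import html

# Hypothetical tracking endpoint, modeled on the URL format observed
# in tracked messages. The service's real internals are unknown.
TRACKER = "http://didtheyreadit.com/index.php/worker"

def add_web_bug(plaintext_body: str, tracking_code: str) -> str:
    """Wrap a plaintext message in HTML and append a one-pixel
    "Web bug" image served by the tracking site. When the recipient's
    HTML-enabled mailer renders the message, it fetches this image,
    telling the tracking server that the message is being viewed."""
    bug = (f'<img src="{TRACKER}?code={tracking_code}" '
           f'width="1" height="1" />')
    return (f"<html><body><pre>{html.escape(plaintext_body)}</pre>"
            f"{bug}</body></html>")
```

The per-message `code` is what lets the server match an incoming image request back to a particular sent message; the slow-streaming response is then just a way of keeping that request open for as long as the message stays on the recipient's screen.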

This trick of putting Web bugs in email has been used by spammers for several years now. You can do it yourself, if you have a Web site. What’s new here is that this is being offered as a conveniently packaged product for ordinary consumers.

Because this is an existing trick, many users are already protected against it. You can protect yourself too, by telling your email-reading software to block loading of remote images in email messages. Some standard email-filtering or privacy-enhancement tools will also detect and disable Web bugs in email. So users of the didtheyreadit product can’t be assured that the tracking will work.

It’s also possible to detect these web bugs in your incoming email. If you look at the source code for the message, you’ll see an IMG tag, containing a URL at didtheyreadit.com. Here’s an example:

<img src="http://didtheyreadit.com/index.php/worker?code=e070494e8453d5a233b1a6e19810f" width="1" height="1" />

The code, “e0704…810f” in my example, will be different in each tracked message. You can generate spurious “viewings” of the tracked message by loading the URL into your browser. Or you can put a copy of the entire Web bug (the whole IMG tag shown above) into a Web page, or paste it into an unrelated email message, to confuse didtheyreadit’s servers about where the message went.
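Spotting these bugs in your own incoming mail can be automated. Here is a minimal sketch; the function name and the regex-based approach are mine (a real scanner would want a proper HTML parser), but messages generated by a service like this follow a predictable pattern, so a simple scan goes a long way.

```python
import re

# Match <img ...> tags and capture the src URL. Case-insensitive,
# tolerant of attribute order and single or double quotes.
IMG_SRC = re.compile(r'<img[^>]*\bsrc\s*=\s*["\']([^"\']+)["\']', re.I)

def find_tracking_bugs(html_source: str,
                       domains=("didtheyreadit.com",)) -> list:
    """Return the URLs of suspected Web-bug images in an HTML email:
    any image whose src points at a known tracking domain."""
    return [url for url in IMG_SRC.findall(html_source)
            if any(d in url for d in domains)]
```

A filter built on this could strip the offending IMG tag, or deliberately fetch the URL from a throwaway location to feed the tracker misleading data, as described above.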

Products like this sow the seeds of their own destruction, by triggering the adoption of technical measures that defeat them, and the creation of social norms that make their use unacceptable.

Penn State: No Servers in Dorms

Yesterday I attended the Educause Policy Conference in Washington, where I spoke on a panel on “Sharing Information and Controlling Content: Continuing Challenges for Higher Education.”

One of the most interesting parts of the day was a brief presentation by Russ Vaught, the Associate Vice Provost for IT at Penn State. He said that Penn State has a policy banning server software of all kinds from dormitory computers. No email servers; no web servers; no DNS servers; no chat servers; no servers of any kind. The policy is motivated by a fear that server software might be used to infringe copyrights.

This is a wrongheaded policy that undermines the basic educational mission of the university. As educators, we’re teaching our students to create, analyze, and disseminate ideas. We like nothing more than to see our students disseminating their ideas; and network servers are the greatest idea-disseminating technology ever invented. Keeping that technology away from our students is the last thing we should be doing.

The policy is especially harmful to computer science students, who would otherwise gain hands-on experience by managing their own computer systems. For example, it’s much easier to teach a student about email, and email security, if she has run an email server herself. At Penn State, that can’t happen.

The policy also seems to ignore some basic technical facts. Servers are a standard feature of computer systems, and most operating systems, including Windows, come with servers built in and turned on by default. Many homework assignments in computer science courses (including courses I teach) involve writing or running servers.

Penn State does provide a cumbersome bureaucratic process that can make limited exceptions to the server ban, but “only … in the rarest of circumstances” and then only for carefully constrained activities that are part of the coursework in a particular course.

Listening to Mr. Vaught’s presentation, and talking privately to a Penn State official later in the day, I got the strong impression that, at times, Penn State puts a higher priority on fighting infringement than on educating its students.

Still More About End-User Liability

At the risk of alienating readers, here is one more post about the advisability of imposing liability on end-users for harm to third parties that results from break-ins to the end-users’ computers. I promise this is the last post on this topic, at least for this week.

Rob Heverly, in a very interesting reply to my last post, focuses on the critical question regarding liability policy: who is in the best position to avert harm. Assuming a scenario where an adversary breaks into Alice’s computer, and uses it as a launching pad for attacks that harm Bob, the critical question is whether Alice or Bob is better positioned to prevent the harm to Bob.

Mr. Heverly (I won’t call him Rob because that’s too close to my hypothetical Bob’s name; and it’s an iron rule in security discussions that the second party in any example must be named Bob) says that it will always be easier for Bob to protect himself from the attack than for Alice to block the attack by preventing the compromise of her machine. I disagree. It’s not that his general rule is always wrong; but I think it will prove to be wrong often enough that one will have to look at individual cases. To analyze a specific case, we’ll have to look at a narrow class of attacks, evaluate the effectiveness and cost of Bob’s countermeasures against that attack, and compare that evaluation to what we know about Alice’s measures to protect herself. The result of such an evaluation is far from clear, even for straightforward attack classes such as spamming and simple denial of service attacks. Given our limited understanding of security technology, I don’t think experts will agree on the answer.

So the underlying policy question – whether to hold Alice liable for harm to Bob – depends on technical considerations that we don’t yet understand. Ultimately, the right answer may be different for different types of attacks; but drawing complicated distinctions between attack classes, and using different liability rules for different classes, would probably make the law too complicated. At this point, we just don’t know enough to mess with liability rules for end-users.