December 23, 2024

Aussie Judge Tweaks Kazaa Design

A judge in Australia has found Kazaa and associated parties liable for indirect copyright infringement, and has tentatively imposed a partial remedy that requires Kazaa to institute keyword-based filtering.

The liability finding is based on a conclusion that Kazaa improperly “authorized” infringement. This is roughly equivalent to a finding of indirect (i.e. contributory or vicarious) infringement under U.S. law. I’m not an expert in Australian law, so on this point I’ll refer you to Kim Weatherall’s recap.

As a remedy, the Kazaa parties will have to pay 90% of the copyright owners’ trial expenses, and will have to pay damages for infringement, in an amount to be determined by future proceedings. (According to Kim Weatherall, Australian law does not allow the copyright owners to reap automatic statutory damages as in the U.S. Instead, they must prove actual damages, although the damages are boosted somehow for infringements that are “flagrant”.)

More interestingly, the judge has ordered Kazaa to change the design of their product, by incorporating keyword-based filtering. Kazaa allows users to search for files corresponding to certain artist names and song titles. The required change would disallow search terms containing certain forbidden patterns.

Designing such a filter is much harder than it sounds, because there are so many artist names and song names. These two namespaces are so crowded that a great many common names given to non-infringing recordings are likely to contain forbidden patterns.

The judge’s order uses the example of the band Powderfinger. Presumably the modified version of Kazaa would ban searches with “Powderfinger” as part of the artist name. This is all well and good when the artist name is so distinctive. But what if the artist name is a character string that occurs frequently in names, such as “beck”, “smiths”, or “x”? (All are names of artists with copyrighted recordings.) Surely there will be false positives.

It’s even worse for song names. You would have to ban simple words and phrases, like “Birthday”, “Crazy”, “Morning”, “Sailing”, and “Los Angeles”, to name just a few. (All are titles of copyrighted recordings.)
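To see why pattern-based blocking over-blocks so badly, consider a minimal sketch of the kind of filter the order seems to contemplate. The banned list below is illustrative only, not drawn from the actual order:

```python
# Naive keyword filter: refuse any search whose text contains a banned
# artist or title pattern. Patterns here are examples from the post.
BANNED_PATTERNS = ["powderfinger", "beck", "smiths", "birthday", "crazy"]

def search_allowed(query: str) -> bool:
    """Return False if any banned pattern appears anywhere in the query."""
    q = query.lower()
    return not any(pattern in q for pattern in BANNED_PATTERNS)

# A distinctive name filters cleanly:
assert not search_allowed("Powderfinger - My Happiness")

# But common strings trigger false positives on non-infringing material:
assert not search_allowed("Jeff Beckman home recordings")   # matches "beck"
assert not search_allowed("grandma 80th birthday video")    # matches "birthday"
```

The shorter and more common the pattern, the more legitimate searches it swallows; the longer and more exact the pattern, the easier it is to dodge, which is the over- vs. under-inclusiveness tradeoff discussed below.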

The judge’s order asks the parties to agree on the details of how a filter will work. If they can’t agree on the details, the judge will decide. Given the enormous number of artist and song names, and the crowded namespace, there are a great many details to decide, balancing over- and under-inclusiveness. It’s hard to see how the parties can agree on all of the details, or how the judge can impose a detailed design. The only hope is to appoint some kind of independent arbiter to make these decisions.

Ultimately, I think the tradeoff between over- and under-inclusiveness will prove too difficult – the filters will either fail to block many infringing files, or will block many non-infringing files, or both.

This is the same kind of filtering that Judge Patel ordered Napster to use, after she found Napster liable for indirect infringement. It didn’t work for Napster. Users just changed the spelling of artist and song names, adopting standard misspellings (e.g., “Metallica” changed to “Metalica” or “MetalIGNOREica” or the Pig Latin “Itallicamay”), or encoding the titles somehow. Napster updated its filters to compensate, but was always one step behind. And Napster’s job was easier, because the filtering was done on Napster’s own computers. Kazaa will have to push updates to users’ computers every time it changes its filters.
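The evasion problem is easy to demonstrate: a substring blocklist is defeated by any trivial respelling, using exactly the tricks Napster users adopted.

```python
# A substring blocklist and the respellings that defeat it.
# The banned name is the example from the post.
BANNED = ["metallica"]

def blocked(title: str) -> bool:
    """True if the title contains any banned substring (case-insensitive)."""
    t = title.lower()
    return any(b in t for b in BANNED)

assert blocked("Metallica - Enter Sandman")            # exact name: caught
assert not blocked("Metalica - Enter Sandman")         # dropped letter: evades
assert not blocked("MetalIGNOREica - Enter Sandman")   # inserted junk: evades
assert not blocked("Itallicamay - Enter Sandman")      # Pig Latin: evades
```

Each filter update only invites the next round of respellings, so the filter operator is structurally one step behind.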

To the judge’s credit, he acknowledges that filtering will be imprecise and might even fail miserably. So he orders only that Kazaa must use filtering, not that the filtering must succeed in stopping infringement. As long as Kazaa makes its best effort to make the agreed-upon (or ordered) filtering scheme work, it will have satisfied the order, even if infringement goes on.

Kim Weatherall calls the judge’s decision “brave”, because it wades into technical design and imposes a remedy that requires an ongoing engagement between the parties, two things that courts normally try to avoid. I’m not optimistic about this remedy – it will impose costs on both sides and won’t do much to stop infringement. But at least the judge didn’t just order Kazaa to stop all infringement, an order with which no general-purpose communication technology could ever hope to comply.

In the end, the redesign may be moot, as the prospect of financial damages may kill Kazaa before the redesign must occur. Kazaa is probably dying anyway, as users switch to newer services. From now on, the purpose of Kazaa, in the words of the classic poster, may be to serve as a warning to others.

DMCA, and Disrupting the Darknet

Fred von Lohmann’s paper argues that the DMCA has failed to keep infringing copies of copyrighted works from reaching the masses. Fred argues that the DMCA has not prevented “protected” files from being ripped, and that once those files are ripped they appear on the darknet where they are available to everyone. I think Fred is right that the DMCA and the DRM (anti-copying) technologies it supports have failed utterly to keep material off the darknet.

Over at the Picker MobBlog, several people have suggested an alternate rationale for the DMCA: that it might help raise the cost and difficulty of using the darknet. The argument is that even if the DMCA doesn’t help keep content from reaching the darknet, it may help stop material on the darknet from reaching end users.

I don’t think this rationale works. Certainly, copyright owners are using lawsuits and technical attacks in an attempt to disrupt the darknet. They have sued many end users and a few makers of technologies used for darknet filesharing. They have launched technical attacks including monitoring, spoofing, and perhaps even limited denial of service attacks. The disruption campaign is having a nonzero effect. But as far as I can tell, the DMCA plays no role in this campaign and does nothing to bolster it.

Why? Because nobody on the darknet is violating the DMCA. Files arrive on the darknet having already been stripped of any technical protection measures (TPMs, in the DMCA lingo). TPMs just aren’t present on the darknet. And you can’t circumvent a TPM that isn’t there.

To be sure, many darknet users break the law, and some makers of darknet technologies apparently break the law too. But they don’t break the DMCA; and indeed the legal attacks on the darknet have all been based on old-fashioned direct copyright infringement by end users, and contributory or vicarious infringement by technology makers. Even if there were no DMCA, the same legal and technical arms race would be going on, with the same results.

Though it has little if anything to do with the DMCA, the darknet technology arms race is an interesting topic in itself. In fact, I’m currently writing a paper about it, with my students Alex Halderman and Harlan Yu.

DMCA: An Avoidable Failure

In his new paper, Fred von Lohmann argues that the Digital Millennium Copyright Act of 1998, when evaluated on its own terms, is a failure. Its advocates said it would prevent widespread online copyright infringement; and it has not done so.

Fred is right on target in diagnosing the DMCA’s failure to do what its advocates predicted. What Fred doesn’t say, though, is that this failure should have been utterly predictable – it should have been obvious when the DMCA was grinding through Congress that things would end up like this.

Let’s look at the three assumptions that underlie the darknet argument [quoting Fred]:

  1. Any widely distributed object will be available to some fraction of users in a form that permits copying.
  2. Users will copy objects if it is possible and interesting to do so.
  3. Users are connected by high-bandwidth channels.

When the DMCA passed in 1998, #1 was obviously true, and #3 was about to become true. #2 was the least certain; but if #2 turned out to be false then no DMCA-like law would be necessary anyway. So why didn’t people see this failure coming in advance?

The answer is that many people did, but Congress ignored them. The failure scenario Fred describes was already conventional wisdom among independent computer security experts by 1998. Within the community, conversations about the DMCA were not about whether it would work – everybody knew it wouldn’t – but about why Washington couldn’t see what seemed obvious to us.

When the Darknet paper was published in 2001, people in the community cheered. Not because the paper had much to say to the security community – the paper’s main argument had long been conventional wisdom – but because the paper made the argument in a clear and accessible way, and because, most of all, the authors worked for a big IT company.

For quite a while, employees of big IT companies had privately denigrated DRM and the DMCA, but had been unwilling to say so in public. Put a microphone in front of them and they would dodge questions, change the subject, or say what their employer’s official policy was. But catch them in the hotel bar afterward and they would tell a different story. Everybody knew that dissenting from the corporate line was a bad career move; and nobody wanted to be the first to do it.

And so the Darknet paper caused quite a stir outside the security community, catalyzing a valuable conversation, to which Fred’s paper is a valuable contribution. It’s an interesting intellectual exercise to weigh the consequences of the DMCA in an alternate universe where it actually prevents online infringement; but if we restrict ourselves to the facts on the ground, Fred has a very strong argument.

The DMCA has failed to prevent online infringement; and that failure should have been predictable. To me, the most interesting question is how our policymakers can avoid making this kind of mistake again.

Measuring the DMCA Against the Darknet

Next week I’ll be participating in a group discussion of Fred von Lohmann’s new paper, “Measuring the DMCA Against the Darknet”, over at the Picker MobBlog. Other participants will include Julie Cohen, Wendy Gordon, Doug Lichtman, Jessica Litman, Bill Patry, Bill Rosenblatt, Larry Solum, Jim Speta, Rebecca Tushnet, and Tim Wu.

I’m looking forward to a lively debate. I’ll cross-post my entries here, with appropriate links back to the discussion over there.

HD-DVD Camp Disses Blu-Ray DRM

Proponents of HD-DVD, one of the two competing next-gen DVD standards, have harsh words for the newly announced DRM technologies adopted by the competing Blu-Ray standard, according to a Consumer Electronics Daily article quoted by an AVS Forum commenter.

[Fox engineering head Andy] Setos confirmed BD+ [one of the newly announced Blu-Ray technologies] was based on the Self-Protecting Digital Content (SPDC) encryption developed by San Francisco’s Cryptography Research. That system, which provides “renewable security” in the event AACS is hacked, was rejected for HD DVD over concerns about playability and reliability issues (CED Aug 2 p1). BDA [the Blu-Ray group] obviously had a different conclusion, Setos said.

[Hitachi advisor Mark] Knox also took a shot at the BD+ version of SPDC, calling its “Virtual Machine” concept “a goldmine for hackers.” He said the Virtual Machine “must have access to critical security info, so any malicious code designed to run on this VM would also have access. In the words of one of the more high-tech guys ‘This feeble attempt to shut the one door on hackers is going to open up a lot of windows instead.’”

There’s an interesting technical issue behind this. SPDC’s designers say that most DRM schemes are weak because a fixed DRM design is built into player devices; and once that design is broken – as it inevitably will be – the players are forever vulnerable. Rather than using a fixed DRM design, SPDC builds into the player device a small operating system. (They call it a lightweight virtual machine, but if you look at what it does it’s clearly an operating system.) Every piece of content can come with a computer program, packaged right on the disc with the content, which the operating system loads and runs when the content is loaded. These programs can also store data and software permanently on the player. (SPDC specifications aren’t available, but they have a semi-technical white paper and a partial security analysis.)

The idea is that rather than baking a single DRM scheme into the player, you can ship out a new DRM scheme whenever you ship out a disc. Different content publishers can use different DRM schemes, by shipping different programs on their discs. So, the argument goes, the system is more “renewable”.
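The model described above can be sketched roughly as follows. Since the SPDC specifications aren’t public, every name and interface here is invented for illustration; this is a caricature of the architecture, not the real system.

```python
# Hypothetical sketch of the SPDC/BD+ model: the player exposes a small
# runtime, and each disc ships its own DRM program that the runtime loads
# and executes before playback. All interfaces here are assumptions.

class PlayerRuntime:
    """Stand-in for the player's 'virtual machine' (really a small OS)."""

    def __init__(self):
        # Disc programs may keep state on the player across insertions --
        # the same capability that raises the privacy concerns noted below.
        self.persistent_store = {}

    def play_disc(self, disc):
        # The disc's own program, not the player, decides whether to play.
        drm_program = disc["drm_program"]
        if drm_program(self):
            return "playing " + disc["title"]
        return "playback refused"

def publisher_drm_v1(runtime):
    """One publisher's check; a later disc could ship a different scheme."""
    runtime.persistent_store["last_scheme"] = "v1"
    return True  # a real program would verify keys, player state, etc.

player = PlayerRuntime()
disc = {"title": "Example Film", "drm_program": publisher_drm_v1}
print(player.play_disc(disc))  # prints "playing Example Film"
```

The point of the sketch is the trust inversion: the player runs whatever code the disc supplies, so the security of every disc now rests on the runtime itself, which is the attack surface the next paragraph describes.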

The drawback for content publishers is that adversaries can switch from attacking the DRM to attacking the operating system. If somebody finds a security bug in the operating system (and, let’s face it, OS security bugs aren’t exactly unprecedented), they can exploit it to undermine any and all DRM, or to publish discs that break users’ players, or to cause other types of harm.

There are also risks for users. The SPDC documents talk about the programs having access to permanent storage on the player, and connecting to the Internet. This means a disc could install software that watches how you use your player, and reports that information to somebody across the Net. Other undesirable behaviors are possible too. And there’s nothing much the user can do to prevent them – content publishers, in the name of security, will try to prevent reverse engineering of their programs or the spread of information about what they do – and even the player manufacturer won’t be able to promise users that programs running on the player will be well-behaved.

Even beyond this, you have all of the usual reliability problems that arise on operating systems that store data and run code on behalf of independent software vendors. Users generally cope with such problems by learning about how the OS works and tweaking its configuration; but this strategy won’t work too well if the workings of the OS are supposed to be secret.

The HD-DVD advocates are right that SPDC (aka BD+) opens a real can of worms. Unless the SPDC/BD+ specifications are released, I for one won’t trust that the system is secure and stable enough to make anybody happy.