December 22, 2024

DRM Wars: The Next Generation

Last week at the Usenix Security Symposium, I gave an invited talk with the same title as this post. The gist of the talk was that the debate about DRM (copy protection) technologies, which has been stalemated for years now, will soon enter a new phase. I’ll spend this post, and one or two more, explaining why.

Public policy about DRM offers a spectrum of choices. On one end of the spectrum are policies that bolster DRM, by requiring or subsidizing it, or by giving legal advantages to companies that use it. On the other end of the spectrum are policies that hinder DRM, by banning or regulating it. In the middle is the hands-off policy, where the law doesn’t mention DRM, companies are free to develop DRM if they want, and other companies and individuals are free to work around the DRM for lawful purposes. In the U.S. and most other developed countries, the move has been toward DRM-bolstering laws, such as the U.S. DMCA.

The usual argument in favor of bolstering DRM is that DRM retards peer-to-peer copyright infringement. This argument has always been bunk – every worthwhile song, movie, and TV show is available via P2P, and there is no convincing practical or theoretical evidence that DRM can stop P2P infringement. Policymakers have either believed naively that the next generation of DRM would be different, or accepted vague talk about speedbumps and keeping honest people honest.

At last, this is starting to change. Policymakers, and music and movie companies, are starting to realize that DRM won’t solve their P2P infringement problems. And so the usual argument for DRM-bolstering laws is losing its force.

You might expect the response to be a move away from DRM-bolstering laws. Instead, advocates of DRM-bolstering laws have switched to two new arguments. First, they argue that DRM enables price discrimination – business models that charge different customers different prices for a product – and that price discrimination benefits society, at least sometimes. Second, they argue that DRM helps platform developers lock in their customers, as Apple has done with its iPod/iTunes products, and that lock-in increases the incentive to develop platforms. I won’t address the merits or limitations of these arguments here – I’m just observing that they’re replacing the P2P piracy bogeyman in the rhetoric of DMCA boosters.
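To see what the price discrimination argument is pointing at, here is a toy worked example; the numbers are mine, purely for illustration, not from any of the advocates’ arguments. With a single posted price, a seller may rationally price low-value customers out of the market entirely, while discriminatory pricing lets them be served:

```python
# Toy numbers, purely illustrative: two buyers value a song at $10 and $3,
# and the marginal cost of a digital copy is $0.
buyer_values = [10, 3]

# With one posted price, the seller compares revenue at each candidate price:
# price $10 sells one copy (revenue $10); price $3 sells two (revenue $6).
best_uniform_revenue = max(p * sum(1 for v in buyer_values if v >= p)
                           for p in buyer_values)  # -> $10; the $3 buyer goes unserved

# With perfect price discrimination, each buyer pays her own valuation,
# so both are served and total surplus rises from $10 to $13.
discrimination_revenue = sum(buyer_values)  # -> $13

print(best_uniform_revenue, discrimination_revenue)
```

Whether DRM is actually necessary to sustain pricing like this is, of course, part of the merits question being set aside here.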

Interestingly, these new arguments have little or nothing to do with copyright. The maker of almost any product would like to price discriminate, or to lock customers in to its product. Accordingly, we can expect the debate over DRM policy to come unmoored from copyright, with people on both sides making arguments unrelated to copyright and its goals. The implications of this change are pretty interesting. They’ll be the topic of my next post.

Rethinking DRM Dystopia

Thanks to Ed for the flattering introduction – now if only I can live up to it! It’s an honor (and a little intimidating) to be guest blogging on FTT after several years as an avid reader. I’ve never blogged before, but I am looking forward to the thoughtful, user-driven exchanges and high transparency that blogs in general, and FTT in particular, seem to cultivate. Please consider yourself, dear reader, every bit as warmly invited to comment and engage with my posts as you are with Ed’s and Alex’s.

I want to use this first post to flag something that startled me, and to speculate a little about the lessons that might be drawn from it. I was surprised to read recently that Zune, Microsoft’s new music service, will probably scan users’ iTunes libraries and automatically buy for them (at Microsoft’s expense) copies of any protected music they own on the iTunes service.

Let’s suppose, for the sake of argument, that this early report is right – that Microsoft is, in fact, going to make an offer to all iTunes users to replicate their libraries of FairPlay-protected iTunes music on the new Zune service at no added cost to the users. Several questions of fact leap to mind. Did Microsoft obtain the licensing rights to all of the music that is for sale on iTunes? If not, there will be some iTunes music that is not portable to the new service. Will copyright holders get the same amount from Microsoft, when their songs are re-purchased on behalf of migrating iTunes users, as they get when a user makes a normal purchase of the same track in the Zune system? The copyright holders have a substantial incentive to offer Microsoft a discount on this kind of “buy out” mass purchasing. As Ed pointed out to me, it is unlikely that users would otherwise choose to re-purchase all of their music, at full price, out of their own pockets simply in order to be able to move from iTunes to Zune. By discounting their tracks to enable migration to a new service, the copyright holders would be helping create a second viable mass platform for online music sales – a move that would, in the long run, probably increase their sales.

I have spent a fair amount of time and energy worrying about dystopian scenarios in which a single vertically integrated platform, protected by legally-reinforced DRM technologies, locks users in and deprives them not only of first-order options (like the ability to copy songs to a second computer), but also of the second-order freedom to migrate away from a platform whose DRM provisions, catalog, or other features ultimately compare unfavorably to alternative platforms.

Of course, as it has turned out, the dominant DRM platform at the moment, FairPlay, actually does let people make copies of their songs on multiple computers. It is, in general, a fair bit less restrictive than the systems some of us have worried consumers might ultimately be saddled with. Indeed, the relatively permissive structure of FairPlay DRM is very likely one of the factors that has contributed to Apple’s success in a marketplace where many more restrictive systems have failed to take hold. But the dominance of Apple’s whole shiny white realm of vertical integration in the digital music market has still made it seem as though it would be hard to opt against Apple, even if the platform were to get worse or if better platforms were to emerge to challenge it.

But now it seems that it may actually be easy as pie for any iTunes user to leave the Apple platform. The cost of the Zune player, which will presumably be exclusive to the Zune music service just as the iPod is to iTunes, is a significant factor; but given how often users end up replacing their iPods anyway, buying a new player may not change the long-run cost equation much for a typical user.

What are the lessons here? Personally, I feel like I underestimated the power of the market to solve the possible problems raised by DRM. It appears that the “lock in” phenomenon creates a powerful incentive for competitors to invest heavily in acquiring new users, even to the point of buying them out. Microsoft is obviously the most powerful player in the technology field, and perhaps some will argue it is unique in its ability to make this kind of an offer. But I doubt that – if the Zune launch is a success, it will set a powerful precedent that DRM buyouts can be worthwhile. And even if Microsoft were unique in its ability to offer a buyout, the result in this case is that we’ll have two solid, competing platforms, each one vertically integrated. It’s no stretch of the imagination to think Apple may respond with a similar offer to lure Zune users to iTunes.

Bottom line: Markets are often surprisingly good at sorting out this kind of thing. Technology policy watchers underestimate the power of competition at our peril. It’s easy to see Microsoft or Apple as established firms coasting on their vertically integrated dominance, but the Zune buyout is a powerful reminder that that is not what it feels like inside this, or most any other, business. These firms, even the biggest, best, and most dominant, are constantly working hard to outdo one another. Consumers often do very well as a result… even in a world of DRM.

Net Neutrality: Strike While the Iron Is Hot?

Bill Herman at the Public Knowledge blog has an interesting response to my net neutrality paper. As he notes, my paper was mostly about the technical details surrounding neutrality, with a short policy recommendation at the end. Here’s the last paragraph of my paper:

There is a good policy argument in favor of doing nothing and letting the situation develop further. The present situation, with the network neutrality issue on the table in Washington but no rules yet adopted, is in many ways ideal. ISPs, knowing that discriminating now would make regulation seem more necessary, are on their best behavior; and with no rules yet adopted we don’t have to face the difficult issues of line-drawing and enforcement. Enacting strong regulation now would risk side-effects, and passing toothless regulation now would remove the threat of regulation. If it is possible to maintain the threat of regulation while leaving the issue unresolved, time will teach us more about what regulation, if any, is needed.

Herman argues that waiting is a mistake, because the neutrality issue is in play now and that can’t continue for long. Normally, issues like these are controlled by a small group of legislative committee members, staffers, interest groups, and lobbyists; but occasionally an issue opens up for wider debate, giving broader constituencies influence over what happens. That’s when most of the important policy changes happen. Herman argues that the net neutrality issue is open now, and that if we don’t act it will close again and we (the public) will lose our influence on the issue.

He makes a good point: the issue won’t stay in the public eye forever, and when it leaves the public eye change will be more difficult. But I don’t think it follows that we should enact strong neutrality regulation now. There are several reasons for this.

Tim Lee offers one reason in his response to Herman. Here’s Tim:

So let’s say Herman is right and the good guys have limited resources with which to wage this fight. What happens once network neutrality is the law of the land, Public Knowledge has moved onto its next legislative issue, and the only guys in the room at FCC hearings on network neutrality implementation are telco lawyers and lobbyists? The FCC will interpret the statute in a way that’s friendly to the telecom industry, for precisely the reasons Herman identifies. Over time, “network neutrality” will be redefined and reinterpreted to mean something the telcos can live with.

But it’s worse than that, because the telcos aren’t likely to stop at rendering the law toothless. They’re likely to continue lobbying for additional changes to the rules—by the FCC or Congress—that help them exclude new competitors and cement their monopoly power. Don’t believe me? Look at the history of cable franchising. Look at the way the CAB helped cartelize the airline industry, and the ICC cartelized surface transportation. Look at FCC regulation of telephone service and the broadcast spectrum. All of those regulatory regimes were initially designed to control oligopolistic industries too, and each of them ended up becoming part of the problem.

I’m wary of Herman’s argument for other reasons too. Most of all, I’m not sure we know how to write neutrality regulations that will have the effects we want. I’m all in favor of neutrality as a principle, but it’s one thing to have a goal and another thing entirely to know how to write rules that will achieve that goal in practice. I worry that we’ll adopt well-intentioned neutrality regulations that we’ll regret later – and if the issue is frozen later it will be even harder to undo our mistakes. Waiting will help us learn more about the problem and how to fix it.

Finally, I worry that Congress will enact toothless rules or vague statements of principle, and then declare that the issue has been taken care of. That’s not what I’m advocating; but I’m afraid it’s what we’ll get if we insist that Congress pass a net neutrality bill this year.

In any case, odds are good that the issue will be stalemated, and we’ll have to wait for the new Congress, next year, before anything happens.

Why Do Innovation Clusters Form?

Recently I attended a very interesting conference about high-tech innovation and public policy, with experts in various fields. (Such a conference will be either boring or fascinating, depending on who exactly is invited. This one was great.)

One topic of discussion was how innovation clusters form. “Innovation cluster” is the rather awkward term for a place where high-tech companies are concentrated. Silicon Valley is the biggest and best-known example.

It’s easy to understand why innovative people and companies tend to cluster. Companies spin out of other companies. People who move to an area to work for one company can easily jump to another one that forms nearby. Support services develop, such as law firms that specialize in supporting start-up companies or certain industries. Nerds like to live near other nerds. So once a cluster gets going, it tends to grow.

But why do clusters form in certain places and not others? We can study existing clusters to see what makes them different. For example, we know that clusters have more patent lawyers and fewer bowling alleys, per capita, than other places. But that doesn’t answer the question. Thinking that patent lawyers cause innovation is like thinking that ants cause picnics. What we want to know is not how existing clusters look, but how the birth of a cluster looks.

So what causes clusters to be born? Various arguments have been advanced. Technical universities can be catalysts, like Stanford in Silicon Valley. Weather and quality of life matter. Cheap land helps. Some argue that government-funded technology companies can be a nucleus – and perhaps funding cuts force previously government-funded engineers to improvise. Cultural factors, such as a general tolerance for experimentation and failure, can make a difference.

Simple luck plays a role, too. Even if all else is equal, a cluster will start forming somewhere first. The feedback cycle will start there, pulling resources away from other places. And that one place will pull ahead, for no particular reason except that it happened to reach critical mass first.
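The luck story is easy to make concrete. Here is a minimal simulation sketch; everything in it (ten places, the 1.5 feedback exponent) is an assumption chosen for illustration, not data. Each new firm picks a location with probability that grows faster than linearly in the number of firms already there, so one place reaches critical mass and takes nearly everything – but which place wins differs from run to run:

```python
import random

def simulate_cluster_race(num_places=10, num_firms=10_000, feedback=1.5, seed=None):
    """Urn-model sketch of the feedback cycle: each new firm locates in a
    place with probability proportional to (firms already there) ** feedback.
    An exponent above 1 means attraction grows faster than size."""
    rng = random.Random(seed)
    counts = [1] * num_places  # identical starting conditions: one seed firm each
    for _ in range(num_firms):
        weights = [c ** feedback for c in counts]
        pick = rng.uniform(0, sum(weights))
        for i, w in enumerate(weights):
            pick -= w
            if pick <= 0:
                counts[i] += 1
                break
    return counts

# Three runs: a runaway winner emerges each time, but it's a different
# place each time -- identical conditions, different luck.
for run in range(3):
    print(sorted(simulate_cluster_race(seed=run), reverse=True))
```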

We like to have explanations for everything that happens. So naturally we’ll find it easy to discount the role of luck, and give credit instead to other factors. But I suspect that luck is more important than most people think.

Long-Tail Innovation

Recently I saw a great little talk by Cory Ondrejka on the long tail of innovation. (He followed up with a blog entry.)

For those not in the know, “long tail” is one of the current buzzphrases of tech punditry. The term was coined by Chris Anderson in a famous Wired article. The idea is that in markets for creative works, niche works account for a surprisingly large fraction of consumer demand. For example, Anderson writes that about one-fourth of Amazon’s book sales come from titles not among the 135,000 most popular. These books may sell in ones and twos, but there are so many of them that collectively they make up a big part of the market.

Traditional businesses have generally done poorly at meeting this demand. A bricks-and-mortar book superstore stocks at most 135,000 titles, leaving at least one-fourth of the demand unmet. But online stores like Amazon can offer a much larger catalog, opening up the market to these long tail works.
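Anderson’s one-fourth figure is at least consistent with a simple heavy-tailed model. As a back-of-the-envelope check (the Zipf assumption and the catalog size are mine, not Anderson’s): if sales at popularity rank r are proportional to 1/r and the full catalog runs to a few million titles, the share of sales beyond rank 135,000 comes out in the same ballpark:

```python
import math

EULER_GAMMA = 0.5772156649

def harmonic(n):
    # H_n is approximately ln(n) + gamma for large n; good enough for a sketch
    return math.log(n) + EULER_GAMMA

catalog_size = 3_000_000  # assumed total catalog size (not from the article)
head_cutoff = 135_000     # the rank cutoff cited above

# Under a 1/r (Zipf) sales curve, the share of sales beyond the cutoff
# is (H_N - H_K) / H_N.
tail_share = 1 - harmonic(head_cutoff) / harmonic(catalog_size)
print(f"share of sales beyond rank {head_cutoff:,}: {tail_share:.0%}")  # roughly 20%
```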

Second Life, the virtual world run by Cory’s company, Linden Lab, lets users define the behavior of virtual items by writing software code in a special scripting language. Surprisingly many users do this, and the demand for scripted objects looks like a long tail distribution. If this is true for software innovation in general, Cory asked, what are the implications for business and for public policy?

The implications for public policy are interesting. Much of the innovation in the long tail is not motivated mainly by profit – the authors know that their work will not be popular. Policymakers should remember that not all valuable creativity is financially motivated.

But innovation can be deterred by imposing costs on it. The key issue is transaction costs. If you have to pay $200 to somebody before you can innovate, or if you have to involve lawyers, the innovation won’t happen. Or, just as likely, the innovation will happen anyway, and policymakers will wonder why so many people are ignoring the law. That’s what has happened with music remixes, and it could happen again for code.
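To put rough numbers on that: suppose, purely as an assumption, that the values of would-be innovations follow the same kind of heavy-tailed distribution as long-tail sales. Then a fixed $200 toll screens out nearly all of them, because nearly all of them are individually small:

```python
import random

random.seed(0)
FEE = 200  # the fixed transaction cost from the example above

# Assumed value distribution: heavy-tailed (Pareto), so most innovations are
# worth little on their own while a few are worth a great deal.
values = [10 * random.paretovariate(1.1) for _ in range(100_000)]

blocked = sum(1 for v in values if v <= FEE)
print(f"{blocked / len(values):.1%} of would-be innovations are priced out")  # ~96%
```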