January 19, 2006

Understanding the Newts

Recently I’ve been trying to figure out the politics of technology policy. There seem to be regularly drawn battle lines in Congress, but for the most part tech policy doesn’t play out as a Republican vs. Democratic or liberal vs. conservative conflict.

Henry Farrell, in a recent post at Crooked Timber, put his finger on one important factor. This was part of a larger online seminar on Chris Mooney’s book “The Republican War on Science” (which I won’t discuss here). Here’s the core of Henry Farrell’s observation:

There’s a strand of Republican thinking – represented most prominently by Newt Gingrich and by various Republican-affiliated techno-libertarians – that has a much more complicated attitude to science. Chris [Mooney] more or less admits in the book that he doesn’t get Newt, who on the one hand helped gut OTA [the Office of Technology Assessment] (or at the very least stood passively to one side as it was gutted) but on the other hand has been a proponent of more funding for many areas of the sciences. I want to argue that getting Newt is important.

What drives Newt and people like him? Why are they so vigorously in favour of some kinds of science, and so opposed to others? The answer lies, I think, in an almost blindly optimistic set of beliefs about technology and its likely consequences when combined with individual freedom. Technology doesn’t equal science of course; this viewpoint is sometimes pro-science, sometimes anti- and sometimes orthogonal to science as it’s usually practiced. Combining some half-baked sociology with some half arsed intellectual history, I want to argue that there is a pervasive strain of libertarian thought (strongly influenced by a certain kind of science fiction) that sees future technological development as likely to empower individuals, and thus as being highly attractive. When science suggests a future of limitless possibilities for individuals, people with this orientation tend to be vigorously in its favour. When, instead, science suggests that there are limits to how technology can be developed, or problems that aren’t readily solved by technological means, people with this orientation tend either to discount it or to be actively hostile to it.

This mindset is especially dicey when applied to technology policy. It’s one thing to believe, as Farrell implies here, that technology can always subdue nature. That view at least reflects a consistent faith in the power of technology. But in tech policy issues, we’re not thinking so much about technology vs. nature as about technology vs. other technology. And in a technology vs. technology battle, an unshakable faith in technology isn’t much of a guide to action.

Consider Farrell’s example of the Strategic Defense Initiative, the original Reagan-era plan to develop strong defenses against ballistic missile attacks. At the time, belief that SDI would succeed was a pretty good litmus test for this kind of techno-utopianism. Most reputable scientists said that SDI wasn’t feasible, and they turned out to be right. But the killer argument against SDI was that enemies would adapt to SDI technologies by deploying decoys, or countermeasures, or alternatives to ballistic missiles such as suitcase bombs. SDI was an attempt to defeat technology with technology.

The same is true in the copyright wars. Some techno-utopians see technology – especially DRM – as the solution. The MPAA’s rhetoric about DRM often hits this note – Jack Valenti is a master at professing faith that technology will solve the industry’s problems. But DRM tries to defeat technology with technology, so faith in technology doesn’t get you very far. To make good policy, you need to understand the technologies on both sides of the battle, as well as the surrounding technical landscape, well enough to predict how the technical battle will play out.

The political challenge here is how to defuse the dangerous instincts of the less-informed techno-utopians. How can we preserve their general faith in technology while helping them see why it won’t solve all human problems?

Princeton-Microsoft Intellectual Property Conference

Please join us for the 2006 Princeton University – Microsoft Intellectual Property Conference, Creativity & I.P. Law: How Intellectual Property Fosters or Hinders Creative Work, May 18-19 at Princeton University. This public conference will explore a number of strategies for dealing with IP issues facing creative workers in the fields of information technology, biotechnology, the arts, and archiving/humanities.

The conference is co-sponsored by the Center for Arts and Cultural Policy Studies, the Program in Law and Public Affairs, and the Center for Information Technology Policy at the Woodrow Wilson School of Public and International Affairs and funded by the Microsoft Corporation, with additional support from the Rockefeller Foundation.

The conference features keynote addresses from Lawrence Lessig, Professor of Law at Stanford Law School, and Raymond Gilmartin, former CEO of Merck, Inc. A plenary address will be delivered by Sérgio Sá Leitão, Secretary for Cultural Policies at the Ministry of Culture, Brazil.

Six panels, bringing together experts from various disciplines and sectors, will examine the following topics:

  • Organizing the public interest
  • The construction of authorship
  • Patents and creativity
  • Tacit knowledge and the pragmatics of creative work: can IP law keep up?
  • Compulsory licensing: a solution to multiple-rights-induced gridlock?
  • New models of innovation: blurring boundaries and balancing conflicting norms

We expect the conference to generate a number of significant research initiatives designed to collect and analyze empirical data on the relationship between intellectual property regimes and the practices of creative workers.

Registration for the conference is strongly encouraged as space is limited for some events. For additional information and to register, please visit the conference web site. Online registration will be available beginning Friday, April 14.

We hope to see you in May.

Stanley N. Katz, Director, Center for Arts and Cultural Policy Studies
Paul J. DiMaggio, Research Director, Center for Arts and Cultural Policy Studies
Edward W. Felten, Director, Center for Information Technology Policy

Interoperability, and the Birth of the Web

Tim Berners-Lee was here yesterday, and he told some interesting stories about the birth and growth of the Web.

I was particularly intrigued by his description of the environment at CERN, where he worked during the relevant years. CERN was (and still is) the European particle physics research lab. It had a permanent staff, but there was also a constant flow, in and out, of researchers and groups from various countries and institutes. These people generally brought their own computers, or used the same kinds of computers as at their home institutions. This meant that the CERN computer network was a constantly changing hodgepodge of different systems.

In this environment, interoperability – the ability to make all of these systems work together, by using the same protocols and data formats – is necessary to accomplish much of anything. And so the computing people, including Tim B-L, were constantly working to design software that would allow disparate systems to work together.
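
To make the interoperability point concrete, here is a minimal sketch (in Python, which is just my choice of language for illustration) of what a shared, plain-text protocol buys you: any machine that can open a TCP socket and speak HTTP can talk to any web server, regardless of the hardware or operating system on either end. The host name below is only an example.

    import socket

    HOST = "example.com"  # hypothetical target; any web server speaks the same protocol

    # Speak plain-text HTTP/1.0 over an ordinary TCP socket.
    with socket.create_connection((HOST, 80), timeout=10) as sock:
        request = f"GET / HTTP/1.0\r\nHost: {HOST}\r\n\r\n"
        sock.sendall(request.encode("ascii"))

        # HTTP/1.0 servers close the connection when the response is complete.
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # Print just the status line of the reply.
    print(response.decode("utf-8", errors="replace").splitlines()[0])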

This was, in some respects, the ideal environment for developing something like the web. You had a constant flow of people in and out, so institutional memory couldn’t just live in people’s heads but had to be written down. These people were scientists, so they wanted to write down what they knew in a way that would be accessible to many others. You had a diverse and constantly changing network, so your technical solution would have to be simple and workable across a range of architectures. And you had a clever technical staff.

One wonders where the equivalent place is today. Perhaps there is a place with the right ingredients to catalyze the growth of the next generation of online communication/collaboration tools. Perhaps CERN is still that place. Or perhaps our tools have evolved to the point where there doesn’t have to be a single place, and the same thing can happen via some Wiki/chat/CVS site.

Guns vs. Random Bits

Last week Tim Wu gave an interesting lecture here at Princeton – the first in our infotech policy lecture series – entitled “Who Controls the Internet?”, based on his recent book of the same title, co-authored with Jack Goldsmith. In the talk, Tim argued that national governments will have a larger role than most people think, for good or ill, in the development and use of digital technologies.

Governments have always derived power from their ability to use force against their citizens. Despite claims that digital technologies would disempower government, Tim argued that it is now becoming clear that governments retain the same sort of power they have always had, and that technology doesn’t open borders as widely as you might think.

An illustrative example is the Great Firewall of China. The Chinese government has put technologies in place to block its citizens’ access to certain information and to monitor their communications. There are privacy-enhancing technologies that could give Chinese citizens access to the open Web and allow them to communicate privately. For example, they could encrypt all of their Internet traffic and pass it through a chain of intermediaries, so that all the government’s monitors saw was a stream of encrypted bits.
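
To make that concrete, here is a toy sketch of the layered-encryption idea in Python, using the third-party cryptography package; the three relays, their keys, and the message are all invented for illustration. Each intermediary holds one key and can peel off only its own layer, so an observer on the wire sees nothing but an opaque blob.

    from cryptography.fernet import Fernet

    relay_keys = [Fernet.generate_key() for _ in range(3)]  # one key per relay in the chain

    def wrap(message: bytes, keys) -> bytes:
        # Encrypt for the last relay first, so the first relay's layer ends up outermost.
        for key in reversed(keys):
            message = Fernet(key).encrypt(message)
        return message

    def peel(blob: bytes, key: bytes) -> bytes:
        # A relay can remove only the layer encrypted with its own key.
        return Fernet(key).decrypt(blob)

    packet = wrap(b"meet at the usual place", relay_keys)
    # On the wire, `packet` is an opaque blob; only the chain, applied in order, recovers it.
    for key in relay_keys:
        packet = peel(packet, key)
    print(packet)  # b'meet at the usual place'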

Such technologies work as a technical matter, but they don’t provide much comfort in practice, because people know that using them – conspicuously trafficking in encrypted data – could lead to a visit from the police. Guns trump ciphers.

At the end of the lecture, Tim Lee (who happened to be in town) asked an important question: how much do civil liberties change this equation? If government can arbitrarily punish citizens, then it can deter the use of privacy-enhancing technologies. But are limits on government power, such as the presumption of innocence and restrictions on search and seizure, enough to turn the tables in practice?

From a technology standpoint, the key issue is whether citizens have the right to send and receive random (or random-looking) bits, without being compelled to explain what they are really doing. Any kind of private or anonymous communication can be packaged, via encryption, to look like random bits, so the right to communicate random bits (plus the right to use a programmable computer to pre- and post-process messages) gives people the ability to communicate out of the view of government.
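
As a small illustration of the “random-looking” point, here is a Python sketch (again using the third-party cryptography package; the sample text is made up) showing that raw ciphertext from a modern cipher looks, at least by a crude byte-entropy measure, just like genuinely random bytes.

    import math
    import os
    from collections import Counter
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def bits_per_byte(data: bytes) -> float:
        # Shannon entropy of the byte distribution; 8.0 means perfectly uniform.
        counts = Counter(data)
        n = len(data)
        return -sum(c / n * math.log2(c / n) for c in counts.values())

    plaintext = b"A perfectly ordinary political pamphlet. " * 4000

    key, nonce = os.urandom(32), os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    ciphertext = encryptor.update(plaintext) + encryptor.finalize()

    print(f"plaintext:  {bits_per_byte(plaintext):.2f} bits/byte")                   # well below 8
    print(f"ciphertext: {bits_per_byte(ciphertext):.2f} bits/byte")                  # close to 8.00
    print(f"random:     {bits_per_byte(os.urandom(len(plaintext))):.2f} bits/byte")  # close to 8.00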

My sense is that civil liberties, including the right to communicate random bits, go a long way toward empowering citizens to communicate out of the view of government. It stands to reason that people who are more free offline will tend to be more free online as well.

Which raises another question that Tim Wu didn’t have time to address at any length: can a repressive country walk the tightrope by retaining control over its citizens’ access to political information and debate, while giving them enough autonomy online to reap the economic benefits of the Net? Tim hinted that he thought the answer might be yes. I’m looking forward to reading “Who Controls the Internet?” to see more discussion of this point.

Korean Music Industry Puts Negative Value on DRM

The Korean music industry has negotiated a deal that puts a monetary price on the inconvenience customers experience due to Digital Restrictions Management (DRM) technology. According to a DRM Watch story:

In an agreement with the Korea Music Producers’ Association (KMPA), [the online service] Soribada will charge users KRW 500 (US $0.51) for DRM-protected music tracks and KRW 700 ($0.72) for non-DRM-protected tracks….

How should we interpret this deal? DRM Watch starts out on the right track but then goes terribly wrong:

The above figures can be read in a number of ways. Most importantly, they reflect the idea that users can do less with DRM-protected tracks than with unprotected ones, including some things that provide a better user experience and/or are allowed under Korea’s copyright laws.

But beyond that, those figures imply that KMPA is assuming a piracy rate for unprotected tracks of 40% relative to the piracy rate for DRM-protected tracks. Put another way, if KMPA assumes almost zero piracy for protected tracks, then it is assuming that for every unprotected track purchased, 0.4 tracks are illegally copied. We would be interested to know if there were any quantitatively analytic basis for that 40%.

To see what is wrong with this logic, let’s apply the same argument to an analogous situation. Suppose a first-class air ticket to Chicago costs $720, and a coach ticket costs $510. We cannot conclude that the airline expects 40% of first-class tickets to be stolen! The price differential merely encodes the fact that customers value the first-class seat more than the coach seat.
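
For the record, here is the arithmetic behind both readings, spelled out in a few lines of Python (the airfare figures are, of course, just my hypothetical):

    # Prices from the KMPA deal, in KRW.
    drm_price, non_drm_price = 500, 700
    premium = (non_drm_price - drm_price) / drm_price
    print(f"non-DRM premium: {premium:.0%}")  # 40%

    # DRM Watch's reading: the premium compensates for lost sales, i.e. each
    # non-DRM purchase "carries" this many illegally copied tracks.
    implied_copies_per_sale = non_drm_price / drm_price - 1  # 0.4

    # The same arithmetic applied to the airfare example would "show" that the
    # airline expects about 0.4 stolen seats per first-class ticket sold.
    coach, first_class = 510, 720
    print(f"first-class premium: {(first_class - coach) / coach:.0%}")  # about 41%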

In the same way, if non-DRM songs cost more than DRM songs, we can safely conclude that customers like non-DRM songs better.

It’s tempting to say that the 40% price difference reflects the value of the functionality that the average customer loses due to DRM. That’s more plausible than DRM Watch’s theory, but it’s still not quite right, because the price difference may be a price discrimination strategy.

Price discrimination by versioning is a standard tactic in information markets. For example, software companies often sell “standard” and “pro” versions of their products, where the standard version is just the pro version with some features disabled. High-end customers buy the pro version, and more cost-conscious customers buy the standard. By having two versions, the vendor can extract more revenue from the high-end customers, while still extracting some non-zero revenue from the cost-conscious customers.
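
Here is a toy numerical sketch of why versioning pays. The segment sizes and valuations below are invented, not drawn from the KMPA deal; the point is only that two versions can extract more revenue than the best single price for the full product.

    # Two customer segments with different willingness to pay (hypothetical numbers).
    segments = {
        "high_end": {"count": 100, "full": 700, "limited": 450},  # DRM costs them a lot of value
        "budget":   {"count": 300, "full": 520, "limited": 500},
    }

    def revenue(p_full, p_limited):
        # Each segment buys whichever version gives it the largest
        # non-negative surplus, or nothing if both surpluses are negative.
        total = 0
        for seg in segments.values():
            surplus_full = seg["full"] - p_full
            surplus_limited = seg["limited"] - p_limited
            if max(surplus_full, surplus_limited) < 0:
                continue
            price = p_full if surplus_full >= surplus_limited else p_limited
            total += price * seg["count"]
        return total

    print(revenue(520, float("inf")))  # best single price for the full version: 208000
    print(revenue(700, 500))           # full + limited versions: 220000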

KMPA’s two-tier pricing looks like a straightforward example of product versioning. The non-DRM version is for higher-end customers who know they like the song and are willing to pay for flexible use of it. The DRM version is for cost-conscious customers who might not be entirely sure they will like the song.

If this is a versioning strategy by KMPA, it may make sense for them to deliberately reduce the usefulness of the DRM version, even beyond the inherent limits of DRM. Think of the software vendor with standard and pro versions – the limitations of the standard version are not dictated by technical necessity but are chosen strategically by the vendor. The same may be true here – KMPA may have an incentive to make the DRM version less useful than it could be.

It’s worth noting that KMPA can rationally choose this versioning strategy even if it knows that DRM does nothing to stop copyright infringement. Indeed, the versioning strategy may be rational even if DRM causes infringement. All we can conclude from KMPA’s pricing strategy is that DRM reduces customer value. But we knew that already.