Archives for 2006

Interoperability, and the Birth of the Web

Tim Berners-Lee was here yesterday, and he told some interesting stories about the birth and growth of the Web.

I was particularly intrigued by his description of the environment at CERN, where he worked during the relevant years. CERN was (and still is) the European nuclear physics research lab. It had a permanent staff, but there was also a constant flow, in and out, of researchers and groups from various countries and institutes. These people generally brought their own computers, or used the same kinds of computers as at their home institutions. This meant that the CERN computer network was a constantly changing hodgepodge of different systems.

In this environment, interoperability – the ability to make all of these systems work together by using the same protocols and data formats – was necessary to accomplish much of anything. And so the computing people, including Tim B-L, were constantly working to design software that would allow disparate systems to work together.

This was, in some respects, the ideal environment for developing something like the web. You had a constant flow of people in and out, so institutional memory couldn’t just live in people’s heads but had to be written down. These people were scientists, so they wanted to write down what they knew in a way that would be accessible to many others. You had a diverse and constantly changing network, so your technical solution would have to be simple and workable across a range of architectures. And you had a clever technical staff.

One wonders where the equivalent place is today. Perhaps there is a place with the right ingredients to catalyze the growth of the next generation of online communication/collaboration tools. Perhaps CERN is still that place. Or perhaps our tools have evolved to the point where there doesn’t have to be a single place, but this can happen via some Wiki/chat/CVS site.

Guns vs. Random Bits

Last week Tim Wu gave an interesting lecture here at Princeton – the first in our infotech policy lecture series – entitled “Who Controls the Internet?”, based on his recent book of the same title, co-authored with Jack Goldsmith. In the talk, Tim argued that national governments will have a larger role than most people think, for good or ill, in the development and use of digital technologies.

Governments have always derived power from their ability to use force against their citizens. Despite claims that digital technologies would disempower government, Tim argued that it is now becoming clear that governments have the same sort of power they have always had. He argued that technology doesn’t open borders as widely as you might think.

An illustrative example is the Great Firewall of China. The Chinese government has put in place technologies to block their citizens’ access to certain information and to monitor their citizens’ communications. There are privacy-enhancing technologies that could give Chinese citizens access to the open Web and allow them to communicate privately. For example, they could encrypt all of their Internet traffic and pass it through a chain of intermediaries, so that all the government monitors saw was a stream of encrypted bits.
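To make the mechanism concrete, here is a minimal sketch of layered encryption through a chain of intermediaries. It uses simple XOR pads so that it runs with nothing beyond Python’s standard library – a real system such as Tor layers authenticated public-key encryption instead – and all names are illustrative:

```python
import os

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    # XOR each message byte with a pad byte (a toy stand-in for real encryption)
    return bytes(a ^ b for a, b in zip(data, pad))

message = b"a perfectly ordinary message"

# One random pad per intermediary; each relay knows only its own pad.
pads = [os.urandom(len(message)) for _ in range(3)]

# The sender wraps the message in one layer of encryption per relay.
wrapped = message
for pad in pads:
    wrapped = xor_bytes(wrapped, pad)

# On the wire, a monitor sees only a uniformly random byte string
# that reveals nothing about the plaintext.
print(wrapped.hex())

# Each relay peels off its own layer and forwards the remainder;
# only after the last layer is removed does the message reappear.
for pad in pads:
    wrapped = xor_bytes(wrapped, pad)

assert wrapped == message
```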

Such technologies work as a technical matter, but they don’t provide much comfort in practice, because people know that using such technologies – conspicuously trafficking in encrypted data – could lead to a visit from the police. Guns trump ciphers.

At the end of the lecture, Tim Lee (who happened to be in town) asked an important question: how much do civil liberties change this equation? If government can arbitrarily punish citizens, then it can deter the use of privacy-enhancing technologies. But are limits on government power, such as the presumption of innocence and limits on search and seizure, enough to turn the tables in practice?

From a technology standpoint, the key issue is whether citizens have the right to send and receive random (or random-looking) bits, without being compelled to explain what they are really doing. Any kind of private or anonymous communication can be packaged, via encryption, to look like random bits, so the right to communicate random bits (plus the right to use a programmable computer to pre- and post-process messages) gives people the ability to communicate out of the view of government.

My sense is that civil liberties, including the right to communicate random bits, go a long way in empowering citizens to communicate out of the view of government. It stands to reason that people who are more free offline will tend to be more free online as well.

Which raises another question that Tim Wu didn’t have time to address at any length: can a repressive country walk the tightrope by retaining control over its citizens’ access to political information and debate, while giving them enough autonomy online to reap the economic benefits of the Net? Tim hinted that he thought the answer might be yes. I’m looking forward to reading “Who Controls the Internet?” to see more discussion of this point.

Korean Music Industry Puts Negative Value on DRM

The Korean music industry has negotiated a deal that puts a monetary price on the inconvenience customers experience due to Digital Restrictions Management (DRM) technology. According to a DRM Watch story:

In an agreement with the Korea Music Producers’ Association (KMPA), [the online service] Soribada will charge users KRW 500 (US $0.51) for DRM-protected music tracks and KRW 700 ($0.72) for non-DRM-protected tracks….

How should we interpret this deal? DRM Watch starts out on the right track but then goes terribly wrong:

The above figures can be read in a number of ways. Most importantly, they reflect the idea that users can do less with DRM-protected tracks than with unprotected ones, including some things that provide a better user experience and/or are allowed under Korea’s copyright laws.

But beyond that, those figures imply that KMPA is assuming a piracy rate for unprotected tracks of 40% relative to the piracy rate for DRM-protected tracks. Put another way, if KMPA assumes almost zero piracy for protected tracks, then it is assuming that for every unprotected track purchased, 0.4 tracks are illegally copied. We would be interested to know if there were any quantitatively analytic basis for that 40%.

To see what is wrong with this logic, let’s apply the same argument to an analogous situation. Suppose a first-class air ticket to Chicago costs $720, and a coach ticket costs $510, mirroring the Korean prices. We cannot conclude that the airline expects 40% of first-class tickets to be stolen! The price differential merely encodes the fact that customers value the first-class seat more than the coach seat.

In the same way, if non-DRM songs cost more than DRM songs, we can safely conclude that customers like non-DRM songs better.

It’s tempting to say that the 40% price difference reflects the value of the functionality that the average customer loses due to DRM. That’s more plausible than DRM Watch’s theory, but it’s still not quite right, because the price difference may be a price discrimination strategy.

Price discrimination by versioning is a standard tactic in information markets. For example, software companies often sell “standard” and “pro” versions of their products, where the standard version is just the pro version with some features disabled. High-end customers buy the pro version, and more cost-conscious customers buy the standard. By having two versions, the vendor can extract more revenue from the high-end customers, while still extracting some non-zero revenue from the cost-conscious customers.
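A toy calculation makes the logic concrete. The numbers below are invented for illustration (they merely echo the 700/500 pricing), but they show how offering two versions at two prices can beat any single price:

```python
# Two customer segments (sizes and valuations invented for illustration):
# 40 high-end customers and 60 cost-conscious ones, each with a maximum
# willingness to pay for the full (non-DRM) and restricted (DRM) versions.
segments = [(40, {"full": 900, "restricted": 600}),
            (60, {"full": 500, "restricted": 500})]

def revenue(price_full, price_restricted):
    total = 0
    for count, values in segments:
        # Each customer buys whichever version leaves the larger surplus,
        # and buys nothing if both surpluses are negative.
        options = [(values["full"] - price_full, price_full),
                   (values["restricted"] - price_restricted, price_restricted)]
        surplus, price = max(options)
        if surplus >= 0:
            total += count * price
    return total

UNAVAILABLE = 10**9  # an unaffordable price, used to withdraw a version

print(revenue(700, UNAVAILABLE))  # full version only, at 700: 28000
print(revenue(500, UNAVAILABLE))  # full version only, at 500: 50000
print(revenue(700, 500))          # both versions, KMPA-style:  58000
```

The two-version seller collects 700 from every high-end customer while still collecting 500 from everyone else – more revenue than either single price yields.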

KMPA’s two-tier pricing looks like a straightforward example of product versioning. The non-DRM version is for higher-end customers who know they like the song and are willing to pay for flexible use of it. The DRM version is for cost-conscious customers who might not be entirely sure they will like the song.

If this is a versioning strategy by KMPA, it may make sense for them to deliberately reduce the usefulness of the DRM version, even beyond the inherent limits of DRM. Think of the software vendor with standard and pro versions – the limitations of the standard version are not dictated by technical necessity but are chosen strategically by the vendor. The same may be true here – KMPA may have an incentive to make the DRM version less useful than it could be.

It’s worth noting that KMPA can rationally choose this versioning strategy even if it knows that DRM does nothing to stop copyright infringement. Indeed, the versioning strategy may be rational even if DRM causes infringement. All we can conclude from KMPA’s pricing strategy is that DRM reduces customer value. But we knew that already.

Conscientious Objection in P2P

One argument made against using P2P systems like Grokster was that by using them you might participate in the distribution of bad content such as infringing files, hate speech, or child porn. If you use the Web to distribute or read content, you play no part in distributing anything you find objectionable – you only distribute a file if you choose to do so. P2P, the argument goes, is different.

Today I want to consider what you can do if you want to use P2P to access files, but you want to avoid participating in any way in the distribution of bad files. When I say a file is “bad” I mean only that you, personally, have a strong moral objection to it, so that you do not want to participate in its distribution. Different people will have different ideas about which files (if any) are bad. Saying that a file is bad is not the same as saying that it should be banned or that others should not be allowed to distribute it – choosing not to do something yourself is not the same as banning others from doing it. So this is not about censorship.

The original design of BitTorrent was friendly to those who wanted to avoid distributing bad files. You could distribute any files you liked, and by default you would automatically redistribute any file that you had downloaded. But you wouldn’t find yourself distributing any bad files (unless you downloaded bad files yourself), or even helping anybody find bad files. Others could read or publish what they wanted, but you wouldn’t help them unless you wanted to.

This is unlike Grokster or Gnutella, where your computer would (by default at least) help to construct an index that would help people find files of all types, including some bad files. You might think that’s fine and choose to participate in it, but then again you might be unhappy if the proportion of bad files that you were helping to index was too high for your taste, or their content too vile. Because BitTorrent didn’t have a built-in index, you could use it without running into this issue.

But then, about ten months ago, a new “trackerless” version of BitTorrent came along. This version had a big distributed index, provided cooperatively by the computers of everybody who was using BitTorrent. After this change, if you were using BitTorrent, you were helping to index files. (Strictly speaking, you would be providing “tracker information” for the files; I’m using “index” as shorthand.) Some of those files might be bad.

To be precise, you would be helping to index a small, and randomly chosen, subset of all the BitTorrent files in the world. And if it came to your attention that one of those files was bad, you could choose not to participate in indexing it, by simply refusing to respond to index queries about that file. Standard BitTorrent software doesn’t support this refusal tactic, but the tactic is possible given how the BitTorrent protocol is designed.

Your refusal to provide index information for a file would not, by itself, make the file unavailable. BitTorrent stores index information redundantly, so other people could answer the index queries that you refused to answer. Only if all (or too many) of the people assigned to index a file refused to do so would that file disappear.
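Here is a simplified model of the refusal tactic and of why redundancy blunts it. This is not the real BitTorrent/Mainline DHT wire protocol – the structure and names below are invented for illustration:

```python
import random

class IndexNode:
    """A node that answers 'who has this file?' queries for files assigned to it."""
    def __init__(self):
        self.index = {}       # infohash -> set of peer addresses
        self.refused = set()  # infohashes this node declines to index

    def announce(self, infohash, peer):
        if infohash not in self.refused:
            self.index.setdefault(infohash, set()).add(peer)

    def lookup(self, infohash):
        # A conscientious objector simply returns nothing for refused files.
        if infohash in self.refused:
            return set()
        return self.index.get(infohash, set())

# Index information is stored redundantly on several randomly assigned nodes.
nodes = [IndexNode() for _ in range(8)]
assigned = random.sample(nodes, 3)  # three nodes hold the index for this file
for node in assigned:
    node.announce("badfile", "peer-at-1.2.3.4")

# One node refuses to index the file...
assigned[0].refused.add("badfile")
assigned[0].index.pop("badfile", None)

# ...but a querier who asks all the assigned nodes still finds the peer.
print(set().union(*(node.lookup("badfile") for node in assigned)))  # the peer

# Only if every assigned node refuses does the file become unfindable.
for node in assigned:
    node.refused.add("badfile")
    node.index.pop("badfile", None)
print(set().union(*(node.lookup("badfile") for node in assigned)))  # set()
```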

If lots of people started refusing to index files they thought were bad, this would amount to a kind of jury system, in which each file was assigned to a random set of BitTorrent “citizens” who voted (by indexing, or refusing to do so) on whether the file should be available. If too many jurors voted to suppress a file, it would disappear.
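To get a rough feel for the numbers, assume – purely for illustration – that each juror refuses independently with probability p and that the index for each file is replicated on n nodes:

```python
# A file disappears only if all n of its randomly chosen indexers refuse,
# which happens with probability p**n under this independence assumption.
n = 3
for p in (0.1, 0.5, 0.9):
    print(f"p={p}: suppressed with probability {p**n:.3f}")
```

Even if half of all users object to a file, three-way replication keeps it available 87.5% of the time; suppression requires something close to consensus among the jurors.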

By now, some of you are jumping up and down, shaking your fingers at me. This is an affront to free speech, you’re saying – every file should be available to everybody. To which I reply: don’t blame me. This is the way BitTorrent is designed. By switching to the trackerless protocol, BitTorrent’s designers created this possibility. And the problem – if you consider it one – can be fixed. How to fix it is a topic for another day.

The French DRM Law, and the Right to Interoperate

Thanks to Bernard Lang for yesterday’s discussion of the proposed French DRM law. The proposed law has been widely criticized in the U.S. press. Assuming Dr. Lang’s translation is correct, this criticism is mostly (but not entirely) off the mark.

Apple’s iTunes and iPod are good examples of the type of product that would be affected. Critics of the proposed law claim that (a) the proposal would increase infringement on record company copyrights, and (b) the proposal would strip Apple of its intellectual property.

The first claim is easily disposed of. iTunes songs are easily copied – everybody knows that iTunes lets you copy songs to unprotected CDs. And more to the point, record companies already sell all of their music in an unprotected format – the compact disc – which accounts for the vast majority of music sales. These songs are all on the P2P networks already, and any small difference in the difficulty of copying iTunes songs isn’t going to change that.

The second claim is the more interesting one. Some critics of the proposal claim that it would force Apple to publish the source code for iTunes. I don’t see that requirement in the proposed text. All the text requires is that Apple release enough information for other companies to make products that interoperate with iTunes. Apple can do this without publishing its source code. Apple can document the file format in which iTunes songs are stored, or it can create an interface that other DRM programs can invoke if they want to work with iTunes, or it can find another way to enable interoperation.
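To be clear about how modest this requirement could be, here is a hypothetical sketch of such an interface. Every name below is invented – Apple has published no such API – but an interface of roughly this shape would let rivals interoperate without seeing a line of iTunes source code:

```python
from abc import ABC, abstractmethod

class DrmInterop(ABC):
    """A hypothetical interface a DRM vendor could publish so that rival
    players can handle its protected files without any source code release."""

    @abstractmethod
    def can_open(self, file_bytes: bytes) -> bool:
        """Report whether this module recognizes the protected file format."""

    @abstractmethod
    def decrypt_track(self, file_bytes: bytes, license_key: bytes) -> bytes:
        """Return the raw audio for a track, given a valid license."""

    @abstractmethod
    def usage_rules(self, file_bytes: bytes) -> dict:
        """Return the usage rules (copy counts, device limits) for the track."""

# A third-party player would code against DrmInterop and load the vendor's
# implementation as a plug-in, never seeing the vendor's internals.
```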

The key issue is whether third-party products can interoperate with iTunes. As Bernard Lang argued yesterday, current law does not give Apple the exclusive right to interoperate with Apple products. To change this, by creating such an exclusive right, would be a big change in public policy – one the proposed law would avoid, with its pro-interoperation provisions.

Interoperation was also a big theme in the important new DMCA white paper issued last week by the Cato Institute (and written by Tim Lee). Cato argues that the DMCA anticircumvention provisions have given incumbent companies an effective right to veto the development of interoperable products, and have thereby blocked innovation. France, wisely, wants to avoid this problem.

(Some commentators have argued that granting an exclusive right to interoperate can be efficient in some circumstances. Even if they’re right, it seems like bad policy to grant that right so indirectly, or to condition it on the presence of copyrighted content or on the use of certain kinds of access control technologies.)

But this is where the French proposal overreaches. Rather than simply protecting the ability of other companies to interoperate with iTunes, by keeping their path free of legal barriers, the proposal would require Apple to take affirmative steps to help rivals interoperate.

Imposing that obligation on Apple is not necessary, in my view. iTunes is not very complicated, so others should be able to figure out how to interoperate, for example by reverse engineering iTunes, as long as the law clearly allows them to do so. The disclosure obligation, though less onerous than critics say, won’t provide much extra benefit, so it’s not worth imposing its cost on Apple and others.

The best policy is for government to stay out of DRM decisionmaking altogether. Let companies like Apple develop DRM schemes. Let others interoperate with those schemes, if they can figure out how. Ensure competition, and let the market decide which products will succeed, and which DRM schemes are viable. This is the essence of the Cato report, and of the USACM DRM principles. It’s my view, too.