

Recommended Reading: The Success of Open Source

It’s easy to construct arguments that open source software can’t succeed. Why would people work for free to make something that they could get paid for? Who will do the dirty work? Who will do tech support? How can customers trust a “vendor” that is so diffuse and loosely organized?

And yet, open source has had some important successes. Apache dominates the market for web server software. Linux and its kin are serious players in the server operating system market. Linux is even a factor in the desktop OS market. How can this be reconciled with what we know about economics and sociology?

Many articles and books have been written about this puzzle. To my mind, Steven Weber’s book “The Success of Open Source” is the best. Weber explores the open source puzzle systematically, breaking it down into interesting subquestions and exploring answers. One of the book’s virtues is that it doesn’t claim to have complete answers; but it does present and dissect partial answers and hints. This is a book that could merit a full book club discussion, if people are interested.


Recommended Reading: Crime-Facilitating Speech

Eugene Volokh has an interesting new paper about Crime-Facilitating Speech (abridged version): “speech [that] provides information that makes it easier to commit crimes, torts, or other harms”. He argues convincingly that many free-speech cases pertain to crime-facilitating speech. Somebody wants to prevent speech because it may facilitate crime, but others argue that the speech has beneficial effects too. When should such speech be allowed?

The paper is a long and detailed discussion of these issues, with many examples. In the end, he asserts that crime-facilitating speech should be allowed except where (a) “the speech is said to a few people who the speaker knows are likely to use it to commit a crime or to escape punishment”, (b) the speech “has virtually no noncriminal uses”, or (c) “the speech facilitates extraordinarily serious harms, such as nuclear or biological attacks”. But don’t just read the end – if you have time, it’s well worth the effort to understand how he got there.

What struck me is how many of the examples relate to computer security or copyright enforcement. Many security researchers feel that the applied side of the field has become a legal minefield. Papers like this illustrate how that happened. The paper’s recommendations, if followed, would go a long way toward making legitimate research and publication safer.


ICANN Challenged on .xxx Domain

The U.S. government has joined other governments and groups in asking ICANN to delay implementation of a new “.xxx” top-level domain, according to a BBC story.

Adding a .xxx domain would make little difference in web users’ experiences. Those who want to find porn can easily find it already; and those who want to avoid it can easily avoid it. It might seem at first that the domain will create more “space” for porn sites. But there’s already “space” on the web for any new site, of any type, that somebody wants to create. The issue here is not whether sites can exist, but what they can call themselves.

Adding .xxx won’t make much difference in how sites are named, either. I wouldn’t be happy to see a porn site at freedom-to-tinker.xxx; nor would the operator of that site be happy to see my site here at freedom-to-tinker.com. The duplication just causes confusion. Your serious profit-oriented porn purveyor will want to own both the .com and .xxx versions of his site’s URL; and there’s nothing to stop him from owning both.

Note also that the naming system does not provide an easy way for end users to get a catalog of all names that end in a particular suffix, so a .xxx domain would not automatically serve as a directory of porn sites. In any case, anybody can build an index of sites that fall into a particular category, and such indices surely exist for porn sites already.

The main effect of adding .xxx would be to let sites signal that they have hard-core content. That’s reportedly the reason adult theaters started labeling their content “XXX” in the first place – so customers who wanted such content could learn where to find it.

That kind of signaling is a lousy reason to create a new top-level domain. There are plenty of other ways to signal. For example, sites that wanted to signal their XXX nature could offer their home page at xxx.sitename.com in addition to www.sitename.com. But ICANN has chosen to create .xxx anyway.
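To see how cheap that kind of signaling would be, here is a rough sketch, in Python, of how a filter or browser could check for such a self-labeling subdomain. The convention and the domain name are hypothetical; the point is only that an ordinary DNS lookup suffices:

    import socket

    def site_signals_adult_content(domain):
        # Check whether the site publishes a self-labeling "xxx."
        # name alongside its ordinary "www." name. This convention
        # is hypothetical; sites would have to opt in to it.
        try:
            socket.gethostbyname("xxx." + domain)
            return True   # the self-labeling name resolves
        except socket.gaierror:
            return False  # no such name is published

    # Hypothetical example domain; no real site is implied.
    if site_signals_adult_content("example.com"):
        print("site labels itself as adult content")

No new top-level domain, and no ICANN action, would be needed to deploy a convention like this.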

Which brings us to the governments’ objections. Perhaps they object to .xxx as legitimizing the existence of porn on the net. Or perhaps they object to the creation of a mechanism that will make it easier for people to find porn.

These objections aren’t totally frivolous. There’s no top-level domain for religious groups, or for science, or for civic associations. Why create one for porn? And surely the private sector can fill the need for porn-signaling technology. Why is ICANN doing this? (Governments haven’t objected to ICANN’s decisions before, even though those decisions often made no more sense than this decision does. But that doesn’t mean ICANN is managing the namespace well.)

And so ICANN’s seemingly arbitrary management of the naming system brings it into conflict with governments. This is a sticky situation for ICANN. It is nominally in charge of Internet naming, but its legitimacy as a “government” for the net has always been shaky, and it has to worry about losing what legitimacy it has if the U.S. joins the other governments that want to replace it with some kind of consortium of nations.

The U.S. government is asking ICANN to delay implementation of .xxx so it can study the issue. We all know what that means. Expect .xxx to fade away quietly as the study period never ends.


DMCA, and Disrupting the Darknet

Fred von Lohmann’s paper argues that the DMCA has failed to keep infringing copies of copyrighted works from reaching the masses. Fred argues that the DMCA has not prevented “protected” files from being ripped, and that once those files are ripped they appear on the darknet where they are available to everyone. I think Fred is right that the DMCA and the DRM (anti-copying) technologies it supports have failed utterly to keep material off the darknet.

Over at the Picker MobBlog, several people have suggested an alternate rationale for the DMCA: that it might help raise the cost and difficulty of using the darknet. The argument is that even if the DMCA doesn’t help keep content from reaching the darknet, it may help stop material on the darknet from reaching end users.

I don’t think this rationale works. Certainly, copyright owners are using lawsuits and technical attacks in an attempt to disrupt the darknet. They have sued many end users and a few makers of technologies used for darknet filesharing. They have launched technical attacks including monitoring, spoofing, and perhaps even limited denial-of-service attacks. The disruption campaign is having a nonzero effect. But as far as I can tell, the DMCA plays no role in this campaign and does nothing to bolster it.

Why? Because nobody on the darknet is violating the DMCA. Files arrive on the darknet having already been stripped of any technical protection measures (TPMs, in the DMCA lingo). TPMs just aren’t present on the darknet. And you can’t circumvent a TPM that isn’t there.

To be sure, many darknet users break the law, and some makers of darknet technologies apparently break the law too. But they don’t break the DMCA; and indeed the legal attacks on the darknet have all been based on old-fashioned direct copyright infringement by end users, and contributory or vicarious infringement by technology makers. Even if there were no DMCA, the same legal and technical arms race would be going on, with the same results.

Though it has little if anything to do with the DMCA, the darknet technology arms race is an interesting topic in itself. In fact, I’m currently writing a paper about it, with my students Alex Halderman and Harlan Yu.


DMCA: An Avoidable Failure

In his new paper, Fred von Lohmann argues that the Digital Millennium Copyright Act of 1998, when evaluated on its own terms, is a failure. Its advocates said it would prevent widespread online copyright infringement; and it has not done so.

Fred is right on target in diagnosing the DMCA’s failure to do what its advocates predicted. What Fred doesn’t say, though, is that this failure should have been utterly predictable – it should have been obvious when the DMCA was grinding through Congress that things would end up like this.

Let’s look at the three assumptions that underlie the darknet argument [quoting Fred]:

  1. Any widely distributed object will be available to some fraction of users in a form that permits copying.
  2. Users will copy objects if it is possible and interesting to do so.
  3. Users are connected by high-bandwidth channels.

When the DMCA passed in 1998, #1 was obviously true, and #3 was about to become true. #2 was the least certain; but if #2 turned out to be false then no DMCA-like law would be necessary anyway. So why didn’t people see this failure coming in advance?

The answer is that many people did, but Congress ignored them. The failure scenario Fred describes was already conventional wisdom among independent computer security experts by 1998. Within the community, conversations about the DMCA were not about whether it would work – everybody knew it wouldn’t – but about why Washington couldn’t see what seemed obvious to us.

When the Darknet paper was published in 2002, people in the community cheered. Not because the paper had much to say to the security community – the paper’s main argument had long been conventional wisdom – but because the paper made the argument in a clear and accessible way, and because, most of all, the authors worked for a big IT company.

For quite a while, employees of big IT companies had privately denigrated DRM and the DMCA, but had been unwilling to say so in public. Put a microphone in front of them and they would dodge questions, change the subject, or say what their employer’s official policy was. But catch them in the hotel bar afterward and they would tell a different story. Everybody knew that dissenting from the corporate line was a bad career move; and nobody wanted to be the first to do it.

And so the Darknet paper caused quite a stir outside the security community, catalyzing a valuable conversation, to which Fred’s paper is a valuable contribution. It’s an interesting intellectual exercise to weigh the consequences of the DMCA in an alternate universe where it actually prevents online infringement; but if we restrict ourselves to the facts on the ground, Fred has a very strong argument.

The DMCA has failed to prevent online infringement; and that failure should have been predictable. To me, the most interesting question is how our policymakers can avoid making this kind of mistake again.


Measuring the DMCA Against the Darknet

Next week I’ll be participating in a group discussion of Fred von Lohmann’s new paper, “Measuring the DMCA Against the Darknet”, over at the Picker MobBlog. Other participants will include Julie Cohen, Wendy Gordon, Doug Lichtman, Jessica Litman, Bill Patry, Bill Rosenblatt, Larry Solum, Jim Speta, Rebecca Tushnet, and Tim Wu.

I’m looking forward to a lively debate. I’ll cross-post my entries here, with appropriate links back to the discussion over there.


HD-DVD Camp Disses Blu-Ray DRM

Proponents of HD-DVD, one of the two competing next-gen DVD standards, have harsh words for the newly announced DRM technologies adopted by the competing Blu-Ray standard, according to a Consumer Electronics Daily article quoted by an AVS Forum commenter.

[Fox engineering head Andy] Setos confirmed BD+ [one of the newly announced Blu-Ray technologies] was based on the Self-Protecting Digital Content (SPDC) encryption developed by San Francisco’s Cryptography Research. That system, which provides “renewable security” in the event AACS is hacked, was rejected for HD DVD over concerns about playability and reliability issues (CED Aug 2 p1). BDA [the Blu-Ray group] obviously had a different conclusion, Setos said.

[Hitachi advisor Mark] Knox also took a shot at the BD+ version of SPDC, calling its “Virtual Machine” concept “a goldmine for hackers.” He said the Virtual Machine “must have access to critical security info, so any malicious code designed to run on this VM would also have access. In the words of one of the more high-tech guys ‘This feeble attempt to shut the one door on hackers is going to open up a lot of windows instead.’”

There’s an interesting technical issue behind this. SPDC’s designers say that most DRM schemes are weak because a fixed DRM design is built in to player devices; and once that design is broken – as it inevitably will be – the players are forever vulnerable. Rather than using a fixed DRM design, SPDC builds into the player device a small operating system. (They call it a lightweight virtual machine, but if you look at what it does it’s clearly an operating system.) Every piece of content can come with a computer program, packaged right on the disc with the content, which the operating system loads and runs when the content is loaded. These programs can also store data and software permanently on the player. (SPDC specifications aren’t available, but they have a semi-technical white paper and a partial security analysis.)

The idea is that rather than baking a single DRM scheme into the player, you can ship out a new DRM scheme whenever you ship out a disc. Different content publishers can use different DRM schemes, by shipping different programs on their discs. So, the argument goes, the system is more “renewable”.
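Since the specifications aren’t available, the details are guesswork, but the architecture can be sketched in miniature. Here’s a toy version in Python, with every name hypothetical: the player exposes a small runtime with persistent storage, and each disc ships a program that decides whether playback proceeds:

    # Toy sketch of the SPDC/BD+ idea. All names are hypothetical;
    # the real SPDC specifications are not public.

    class PlayerVM:
        """Stand-in for the player's embedded runtime."""
        def __init__(self):
            self.persistent_store = {}   # survives across discs

        def run_disc_program(self, disc_program, content):
            # The player just executes whatever code the disc supplies;
            # that code decides whether and how playback proceeds.
            return disc_program(self, content)

    def studio_drm_program(vm, content):
        # A publisher-supplied program: it can read and write the
        # player's permanent storage and enforce any policy it likes.
        plays = vm.persistent_store.get("play_count", 0)
        vm.persistent_store["play_count"] = plays + 1
        if plays >= 3:
            raise PermissionError("this disc's program refuses playback")
        return content   # in reality: descramble and hand off the frames

    vm = PlayerVM()
    vm.run_disc_program(studio_drm_program, content="movie bits")

Notice where the trust lies: the player runs publisher-supplied code with access to its own permanent storage. That is exactly what makes the scheme flexible, and exactly what creates the risks described next.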

The drawback for content publishers is that adversaries can switch from attacking the DRM to attacking the operating system. If somebody finds a security bug in the operating system (and, let’s face it, OS security bugs aren’t exactly unprecedented), they can exploit it to undermine any and all DRM, or to publish discs that break users’ players, or to cause other types of harm.

There are also risks for users. The SPDC documents talk about the programs having access to permanent storage on the player, and connecting to the Internet. This means a disc could install software that watches how you use your player, and reports that information to somebody across the Net. Other undesirable behaviors are possible too. And there’s nothing much the user can do to prevent them – content publishers, in the name of security, will try to prevent reverse engineering of their programs or the spread of information about what they do – and even the player manufacturer won’t be able to promise users that programs running on the player will be well-behaved.

Even beyond this, you have all of the usual reliability problems that arise on operating systems that store data and run code on behalf of independent software vendors. Users generally cope with such problems by learning about how the OS works and tweaking its configuration; but this strategy won’t work too well if the workings of the OS are supposed to be secret.

The HD-DVD advocates are right that SPDC (aka BD+) opens a real can of worms. Unless the SPDC/BD+ specifications are released, I for one won’t trust that the system is secure and stable enough to make anybody happy.


Blu-Ray Tries to Out-DRM HD-DVD

Blu-Ray, one of the two competing next-gen DVD standards, has decided to up the ante by adopting even more fruitless anti-copying mechanisms than the rival HD-DVD system. Blu-Ray will join HD-DVD in using the AACS technology (with its competition-limiting digital imprimatur), and will add two more technologies, called ROM-Mark and BD+.

ROM-Mark claims to put a hidden mark on all licensed discs. The mark will be detected by Blu-Ray players, which will refuse to play discs that don’t have it. But, somehow, it is supposed to be impossible for unlicensed disc makers to put marks on their discs. It’s not at all clear how this is supposed to work, but systems of this sort have always failed in the past, because it has always proved possible to make an exact copy of a licensed disc (including the mark).
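The ROM-Mark design hasn’t been disclosed, but the logical structure of any purely data-based mark, and the reason exact copying defeats it, can be sketched. Everything below is hypothetical:

    import hmac, hashlib

    SECRET = b"licensed-presser-key"   # hypothetical key held by licensed plants

    def press_disc(content: bytes) -> bytes:
        # A licensed plant appends a mark derived from the secret.
        mark = hmac.new(SECRET, content, hashlib.sha256).digest()
        return content + mark

    def player_accepts(disc: bytes) -> bool:
        # The player recomputes the mark and refuses unmarked discs.
        content, mark = disc[:-32], disc[-32:]
        expected = hmac.new(SECRET, content, hashlib.sha256).digest()
        return hmac.compare_digest(mark, expected)

    licensed = press_disc(b"movie")
    print(player_accepts(licensed))    # True

    # The catch: a bit-for-bit copy carries the mark along with it.
    bootleg = bytes(licensed)          # exact copy, mark included
    print(player_accepts(bootleg))     # True -- copying defeats the check

To avoid this, the mark would have to live somewhere a copier can’t reach, such as in the physical structure of the disc, and history suggests that copiers eventually reach it anyway.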

BD+ will apparently allow the central Blu-Ray entity to update the anti-copying software in Blu-Ray players. This kind of updatability will inevitably add to the cost, complexity, and fragility of Blu-Ray players. Trying to do this raises some nasty technical issues that may not be solvable. I would like to find out more about how they think they can make this happen, especially for (say) cheap, portable players. (This technology was reportedly Fox’s reason for joining the Blu-Ray camp.)

As always, content will be copied regardless of what they try to do, and the main effect of these technologies will be to make player devices more expensive and less reliable, and to limit entry to the market for the devices. My guess is that some movie studio people actually believe these technologies will stop copying; and some know the technology won’t stop copying but want the power to limit entry.

Both groups must be happy to see the Blu-Ray and HD-DVD camps competing to make the most extravagant copy-prevention promises. To law-abiding consumers, each step in this bidding war means more expensive, less capable technologies.


Hollywood Controlling Parts of Windows Vista Design

A recent white paper (2MB Word file) from Microsoft details the planned “output content protection” in the upcoming Windows Vista (previously known as Longhorn) operating system product. It’s a remarkable document, illustrating the real costs of Hollywood’s quest to redesign the PC’s video hardware and software.

The document reveals that movie studios will have explicit veto power over what is included in some parts of Vista. For example, pages 22-24 describe the “High Bandwidth Cipher”, which will be used to encrypt video data as it passes across the PC’s internal PCIe bus. Hollywood will allow the use of the AES cipher, but many PCs won’t be able to run AES fast enough, leading to stutter in the video. People are free to design their own ciphers, but those ciphers must go through an approval process before being included in Windows Vista. The second criterion for acceptance is this:

Content industry acceptance
The evidence must be presented to Hollywood and other content owners, and they must agree that it provides the required level of security. Written proof from at least three of the major Hollywood studios is required.

The document also describes how rational designs are made more expensive and complicated, or ruled out entirely, by the “robustness” rules Hollywood is demanding. Here’s an example, from page 27:

Given the data throughput possible with PCIe, there is a new class of discrete graphics cards that, to reduce costs, do not have much memory on the board. They use system memory accessed over the PCIe bus.

In the limit, this lack of local memory means that, for example, to decode, de-interlace, and render a frame of HD may require that an HD frame be sent backward and forward over the PCIe bus many times – it could be as many as 10 times.

The frames of premium content are required to be [encrypted] as they pass over the PCIe bus to system memory, and decrypted when they safely return to the graphics chip. It is the responsibility of the graphics chip to perform the encryption and decryption.

Depending on the hardware implementation, the on-chip cipher engine [which wouldn't be necessary absent the "robustness" requirements] might, or might not, go fast enough to encrypt the 3 GByte/sec (in each direction) memory data bandwidth.
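A back-of-the-envelope check shows how demanding that is. Assuming an uncompressed HD frame of 1920x1080 pixels at 4 bytes per pixel, 30 frames per second, and the quoted 10 round trips per frame (all of these parameters are my assumptions, not the white paper’s):

    # Rough check of the quoted PCIe bandwidth figure.
    width, height   = 1920, 1080    # HD frame
    bytes_per_pixel = 4             # assumed uncompressed format
    fps             = 30
    round_trips     = 10            # "as many as 10 times," per the quote

    frame_bytes = width * height * bytes_per_pixel        # ~8.3 MB per frame
    per_direction = frame_bytes * fps * round_trips
    print(f"{per_direction / 1e9:.1f} GB/s each direction")   # ~2.5 GB/s

That lands in the same ballpark as the quoted 3 GByte/sec, and every one of those bytes would have to pass through the graphics chip’s cipher engine twice.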

These are just a few examples from a document that describes one compromise after another, in which performance, cost, and flexibility are sacrificed in a futile effort to prevent video content from leaking to the darknet. And the cost is high. As just one example, nearly all of us will have to discard our PC’s monitors and buy new ones to take advantage of new features that Microsoft could provide – more easily and at lower cost – on our existing monitors, if Hollywood would only allow it.

There can be little doubt that Microsoft is doing this because Hollywood demands it; and there won’t be much doubt among independent security experts that none of these compromises will make a dent in the availability of infringing video online. Law-abiding people will be paying more for PCs, and doing less with them, because of the Hollywood-decreed micromanagement of graphics system design.


DRM Textbooks Offered to Princeton Students

There’s a story going around the blogosphere that Princeton is experimenting with DRMed e-textbooks. Here’s an example:

Princeton University, intellectual home of Edward Felten and Alex Halderman, has evidently begun to experiment with DRM’d textbooks. According to this post, there are quite a few digital restrictions being managed:

  • Textbook is locked to the computer where you downloaded it from;
  • Copying and burning to CD is prohibited;
  • Printing is limited to small passages;
  • Unless otherwise stated, textbook activation expires after 5 months (*gasp*);
  • Activated textbooks are not returnable;
  • Buyback is not possible.

There’s an official press release from the publishers for download here.

Several people have written, asking for my opinion on this.

First, a correction. As far as I can tell, Princeton University has no part in this experiment. The Princeton University Store, a bookstore that is located on the edge of the campus but is not affiliated with the University, will be the entity offering DRMed textbooks. The DRM company’s press release tries to leave the impression that Princeton University itself is involved, but this appears to be incorrect.

In any case, I don’t see a reason to object to the U-Store offering these e-books, as long as students are informed about the DRM limitations and can still get the dead-tree version instead. It’s hard to see the value proposition for students in the DRMed version, unless the price is very low. It appears the price will be about two-thirds of the new-book price, which is obviously a bad deal. Our students are smart enough to know which version to buy – and the faculty will be happy to advise them if they’re not sure.
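To make the arithmetic explicit: assume a hypothetical $100 new textbook and a buyback rate of about half the purchase price (both numbers are illustrative assumptions, not figures from the U-Store):

    new_price   = 100.00            # hypothetical new-book price
    ebook_price = new_price * 2/3   # roughly the reported DRM price
    buyback     = new_price * 0.50  # assumed buyback rate; varies in practice

    paper_net = new_price - buyback # use it all semester, then sell it back
    print(f"DRM e-book (expires, no resale): ${ebook_price:.2f}")
    print(f"Paper copy, net of buyback:      ${paper_net:.2f}")

Under those assumptions the expiring, locked-down e-book costs more than the paper copy nets after buyback, and the paper copy never stops working.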

I don’t object to other people wasting their money developing products that consumers won’t want. People waste their money on foolish schemes every day. I wish for their sake that they would be smarter. But why should I object to this product or try to stop it? A product this weak will die on its own.

The problem with DRM is not that bad products can be offered, but that public policy sometimes protects bad products by thwarting the free market and the free flow of ideas. The market will kill DRM, if the market is allowed to operate.

UPDATE (August 12): The DRM vendor announced yesterday that usage restrictions will be eased somewhat. The expiration time has been extended to at least twelve months (longer for some publishers), and restrictions on printing have been loosened in some cases.