
Hatch "Clarifies" His Position

Senator Orrin Hatch issued a short press release yesterday, backtracking from his previous (mis-)statement about remedies for copyright infringement. There are some interesting tidbits in the release, which I quote here in full, with the surprising bits italicized:

HATCH COMMENTS ON COPYRIGHT ENFORCEMENT

Washington – Sen. Orrin G. Hatch (R-Utah), Chairman of the Senate Judiciary Committee, today issued the following statement:

“I am very concerned about Internet piracy of personal and copyrighted materials, and I want to find effective solutions to these problems.

“I made my comments at yesterday’s hearing because I think that industry is not doing enough to help us find effective ways to stop people from using computers to steal copyrighted, personal or sensitive materials. I do not favor extreme remedies – unless no moderate remedies can be found. I asked the interested industries to help us find those moderate remedies.”

We can assume that every word of the release was chosen carefully, since it was issued in writing by Hatch’s office to clarify his position after a previous misstatement.

It’s significant, then, that he wants technology to prevent not only copyright infringement but also “piracy” of “personal or sensitive” information.

Note also that he does not entirely disavow his previous statement that appeared to advocate vigilante destruction of the computers of suspected violators – he still favors “extreme remedies” if “moderate remedies” prove infeasible, an eventuality that seems likely given his apparent belief that we have no moderate remedies today.

If the mainstream press is paying attention, they ought to find this alarming, since much of what they do involves collecting and publishing information that some people would prefer to call “personal or sensitive”. If “extreme remedies” for copyright infringement are a bad idea, “extreme remedies” for making truthful statements about other people are even worse.

Layers

Lawrence Solum and Minn Chung have a new paper, “The Layers Principle: Internet Architecture and the Law,” in which they argue that layering is an essential part of the Internet’s architecture and that Internet regulation should therefore respect the Internet’s layered nature. It’s a long paper, so no short commentary can do it justice, but here are a few reactions.

First, there is no doubt that layering is a central design principle of the Internet, or of any well-designed multipurpose network. When we teach computer science students about networks, layering is one of the most important concepts we try to convey. Solum and Chung are right on target about the importance of layering.
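For readers who want the idea made concrete, here is a toy Python sketch of layered encapsulation (my own illustration, not taken from the paper): each layer wraps the data it receives from the layer above in its own header, and the receiving stack peels the headers off in reverse order, with no layer touching a header that belongs to another layer.

    # Toy layering sketch: each layer adds its own header on the way down
    # and strips only its own header on the way up.

    def app_send(message: str) -> bytes:
        return message.encode("utf-8")

    def transport_send(payload: bytes, port: int) -> bytes:
        return f"TCP|dst_port={port}|".encode() + payload

    def network_send(segment: bytes, dst_ip: str) -> bytes:
        return f"IP|dst={dst_ip}|".encode() + segment

    def link_send(packet: bytes, dst_mac: str) -> bytes:
        return f"ETH|dst={dst_mac}|".encode() + packet

    # Sending: the message descends the stack, gaining one header per layer.
    frame = link_send(
        network_send(
            transport_send(app_send("hello"), port=80),
            dst_ip="10.0.0.2"),
        dst_mac="aa:bb:cc:dd:ee:ff")

    # Receiving: each layer strips only its own header, in reverse order.
    packet = frame.split(b"|", 2)[2]     # link layer removes the ETH header
    segment = packet.split(b"|", 2)[2]   # network layer removes the IP header
    payload = segment.split(b"|", 2)[2]  # transport layer removes the TCP header
    print(payload.decode("utf-8"))       # -> hello

The payoff of this discipline is that any one layer can be redesigned or replaced without disturbing the layers above and below it.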

They’re on shakier ground, though, when they relate their layering principle to the end-to-end principle that Lessig has popularized in the legal/policy world. (The end-to-end principle says that most of the “brains” in the Internet should be at the endpoints, e.g. in end users’ computers, rather than in the core of the network itself.) Solum and Chung say that end-to-end is a simple consequence of their layering principle. That’s true, but only because the end-to-end principle is built into their assumptions, in a subtle way, from the beginning. In their account, layering occurs only at the endpoints, and not in the network itself. While this is not entirely accurate, it’s not far wrong, since the layering is much deeper at the endpoints than in the core of the Net. But the reason this is true is that the Net is designed on the end-to-end principle. There are alternative designs that use deep layering everywhere, but those were not chosen because they would have violated the end-to-end principle. End-to-end is not necessarily a consequence of layering; but end-to-end is, tautologically, a consequence of the kind of end-to-end-style layering that Solum and Chung assume.
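A toy sketch, in the same notation as above (again my illustration, not the paper’s), shows what that endpoint/core asymmetry looks like: a core router parses only the link and network headers and forwards everything above them untouched, so the deep stacks live at the edges while the middle of the Net stays shallow.

    # A toy core router: it strips the old link header, reads the network
    # header to choose the next hop, and re-wraps the packet. It never
    # parses the transport header or the payload; those belong to the
    # endpoints.

    frame = b"ETH|dst=aa:bb:cc:dd:ee:ff|IP|dst=10.0.0.2|TCP|dst_port=80|hello"

    def router_forward(frame: bytes, next_hop_mac: str) -> bytes:
        packet = frame.split(b"|", 2)[2]   # strip the old link header
        dst_ip = packet.split(b"|", 2)[1]  # b"dst=10.0.0.2"; a route lookup would go here
        return f"ETH|dst={next_hop_mac}|".encode() + packet

    print(router_forward(frame, "11:22:33:44:55:66"))
    # same IP, TCP, and payload bytes, with a fresh link-layer header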

Layering and end-to-end, then, are both useful rules of thumb for understanding how the Internet works. It follows, naturally, that regulation of the Net should be consistent with both principles. Any regulatory proposal, in any sphere of human activity, is backed by a story about how the proposed regulation will lead to a desirable result. Unless that story makes sense – unless it is rooted in an understanding of how the sphere being regulated actually works – the proposal is almost certainly a bad one. So regulatory plans that are inconsistent with end-to-end or layering are usually unwise.

Of course, these two rules of thumb don’t give us the complete picture. The Net is more complicated, and sometimes a deeper understanding is needed to evaluate a policy proposal. For example, a few widespread and helpful practices such as Network Address Translation violate both the end-to-end principle and layering (see the sketch below); and so a ban on address translation would be consistent with end-to-end and layering, but inconsistent with the actual Internet. Rules of thumb are at best a lesser substitute for detailed knowledge about how the Net really works. Thus far, we have done a poor job of incorporating that knowledge into the regulatory process. Solum and Chung’s paper has its flaws, but it is a step in the right direction.
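To see concretely how Network Address Translation cuts across the layers, here is a simplified sketch (my own toy illustration, not anything from the paper): a NAT box is nominally a network-layer device, but to share one public address among many private hosts it must read and rewrite transport-layer port numbers, and it hides the true endpoint addresses from the far end, which breaks end-to-end as well.

    # Simplified NAT sketch: the box rewrites the network-layer source
    # address AND the transport-layer source port, reaching up into a
    # layer that, under strict layering, is none of its business.

    PUBLIC_IP = "203.0.113.5"

    class NatBox:
        def __init__(self) -> None:
            self.next_port = 40000
            self.table = {}  # public port -> (private ip, private port)

        def outbound(self, pkt: dict) -> dict:
            public_port = self.next_port
            self.next_port += 1
            self.table[public_port] = (pkt["src_ip"], pkt["src_port"])
            return {**pkt, "src_ip": PUBLIC_IP, "src_port": public_port}

        def inbound(self, pkt: dict) -> dict:
            private_ip, private_port = self.table[pkt["dst_port"]]
            return {**pkt, "dst_ip": private_ip, "dst_port": private_port}

    nat = NatBox()
    out = nat.outbound({"src_ip": "192.168.1.7", "src_port": 5555,
                        "dst_ip": "198.51.100.9", "dst_port": 80})
    print(out)   # the far end sees 203.0.113.5:40000, not the real host
    back = nat.inbound({"src_ip": "198.51.100.9", "src_port": 80,
                        "dst_ip": PUBLIC_IP, "dst_port": 40000})
    print(back)  # the reply is delivered back to 192.168.1.7:5555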

[UPDATE (Sept. 11, 2003): Looking back at this entry, I realize that by devoting most of my “ink” to my area of disagreement with Solum and Chung, I might have given the impression that I didn’t like their paper. Quite the contrary. It’s a very good paper overall, and anyone serious about Internet policy should read it.]

DRM and Black Boxes

Lisa Rein has posted (with permission) a video of my short presentation at the Berkeley DRM conference. I talked about the push to turn technologies into “black boxes” that the public is not allowed to study, understand, or discuss, and how that paralyzes public debate on important issues such as electronic voting.

RIAA/Student Suits Back in the News

Jesse Jordan, one of the students sued by the RIAA, is back in the news. It’s not that anything new has happened; it’s just that Jordan and his father are complaining about the unfairness of the suit and of the $12,000 settlement.

It’s true, as Seth Finkelstein observes, that continuing to fight the suit was a lose-lose proposition for Jordan. Even if he won, his legal bills would have far exceeded the $12,000 for which he settled (and the odds were poor that the court would have ordered the plaintiffs to cover those bills).

The plaintiffs’ contributory infringement claim against Jordan, based on the assertion that he ran a “Napster-like network” (which was really just an ordinary search engine), was indeed questionable. If that were the only claim against him, then I would agree that the suit looked a bit like a shakedown.

But let’s not forget the plaintiffs’ other claim, that Jordan was a direct infringer, based on his alleged redistribution of hundreds of copyrighted works from his own computer. If proven, this claim would have cost Jordan much more than $12,000 in damages. And it seems reasonable to assume that the direct infringement claim was not baseless, especially given that Jordan has not denied it.

If so, then the only unfair aspect of Jordan’s story is that he was singled out, from among all of the direct infringers out there, as the target of a lawsuit. In other words, the problem is that a great many direct infringers are out there, any of whom could be sued at the industry’s whim.

A huge gulf has developed between the ubiquity of casual file sharing and the law’s treatment of it as a Very Serious Offense, and this cannot go on forever. Something has to give. Either the law will change, or the industry will sue file sharers into submission, or both. So far we have an uneasy truce that nobody likes.

UPDATE (3:50 PM): I originally wrote that Jordan would have had to pay the plaintiffs’ legal bills if he lost, but they wouldn’t have to pay his if he won. Louis Trager pointed out that that was incorrect, so I have corrected the text. The Copyright Act allows a court to order the losing party to pay the winning party’s legal costs, regardless of which party wins. In other words, Jordan might have had his legal bills covered, if he won his case. But of course that would be unlikely absent a total victory; and total victory would have been a long shot given the direct infringement claim.

"If It's Not Snake Oil, It's Pretty Awesome"

In today’s Los Angeles Times, Jon Healey writes about a new DRM proposal from a company called Music Public Broadcasting. The company’s claims, which are not substantiated in the story, give off a distinct aroma of snake oil.

The warning signs are all there. First, there is the flamboyant, self-promoting entrepreneur, newly arrived from another field. In this case, it’s a guy named Hank Risan, who was previously a dealer in high-end musical instruments.

“He is a very flamboyant guy, and he does things with a level of style that I don’t think is duplicated in the fretted-instrument industry,” said Stanley Jay, president of Mandolin Bros. Ltd., another elite dealer of stringed instruments. “In this industry, to make yourself stand apart, you need to be self-promotional. And he does that extremely well.”

Second, there’s the vaguely articulated theoretical breakthrough, described in mystical terms unintelligible to experts in the field:

Risan drew on his mathematical skills to come up with a different approach to the problem of unauthorized recording. Drawing on a branch of topology known as network theory, Risan said he could look at the networks a computer uses to move data internally and “visualize how to protect the copyrighted material as it transfers through those networks.”

The firm claims that its technology controls those pathways, letting copyright owners dictate what can and can’t be copied. “We control pathways that don’t even exist yet,” Risan said.

Third, there is evidence that the product hasn’t really been demonstrated or explained to its customers. But if it actually turns out to work, they are of course eager to buy it.

Zach Zalon of Radio Free Virgin, the online radio arm of Virgin Group, said he would love to license technology that prevented his stations’ Webcasts from being recorded by “stream ripping” programs. Stream rippers break through every anti-piracy program on the market, Zalon said, “so if you could somehow defeat that, it’s fantastic.”

An executive at a major record company who’s seen the technology for protecting streams and CDs said he was impressed, although he’s not sure the demonstration can be duplicated in the real world. “If it’s not snake oil, it’s pretty awesome,” he said.

And finally, the new product claims to invalidate an accepted, fundamental principle in the field – but without really explaining how it does so.

But as piracy experts are fond of saying, anything that can be played on a computer can be recorded, regardless of how it’s protected. Encrypted streams and downloads must be unscrambled to be heard on a computer’s speakers or shown on its screen. And there are several programs that can intercept music or video on its way to the speakers or screen after it’s been unscrambled.
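The underlying point is easy to see in a toy sketch (mine, purely illustrative, with XOR standing in for real encryption): however strong the scrambling, the player must produce plaintext samples on their way to the speakers, and any code sitting on that path can record them.

    # Toy version of "anything playable is recordable": the stream is
    # scrambled in transit, but the player must descramble it before the
    # samples can reach the sound card, and code on that path sees the
    # plaintext. XOR stands in for real encryption.

    KEY = 0x5A

    def scramble(data: bytes) -> bytes:
        return bytes(b ^ KEY for b in data)

    descramble = scramble  # XOR is its own inverse

    captured = bytearray()

    def speaker(samples: bytes) -> None:
        pass  # stands in for the sound card

    def tapped_speaker(samples: bytes) -> None:
        captured.extend(samples)  # the "stream ripper" tapping the path
        speaker(samples)

    original = b"some PCM audio samples"
    protected = scramble(original)         # what travels over the network
    tapped_speaker(descramble(protected))  # what any player must do to play it
    assert bytes(captured) == original     # the plaintext was recoverable

That, in essence, is the accepted principle that Risan’s company claims to have overturned.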

As always, the burden of proof should be on those who are making the extravagant technical claims. If Risan and his company ever substantiate their claims, by explaining at a detailed technical level how their products prevent capture of audio streams, then those claims will deserve respect. Until then, skepticism is the best course.