
Archives for June 2003


P2P Evolution to Accelerate

The Washington Post online has a nice summary/directory of articles on the RIAA’s upcoming crackdown on peer-to-peer file sharers. The crackdown seems like a risky move, but it seems the industry can’t think of anything else to do about their P2P problem.

When the industry sued Napster into oblivion, Napster was replaced, hydra-like, by a newer generation of P2P systems that are apparently resistant to the tactics that took down Napster.

The RIAA’s new crackdown, if it works, will most likely cause yet another step in the evolution of P2P systems. P2P systems that provide only weak anonymity protection for their users will fade away, replaced by a new generation of P2P technology that resists the RIAA’s new tactics.

The RIAA’s new tactic is to join a P2P network semi-anonymously, and then to pierce the anonymity of people who are offering files. There are two countermeasures that can frustrate this tactic, and the use of these countermeasures is already starting to grow slowly.

The first countermeasure is to provide stronger anonymity protection for users, to prevent investigators from so easily unmasking users who are sharing files.

The second countermeasure is to share files only among small friends-and-family groups, making it difficult for investigators to join the group. If every P2P user is a member of a few of these overlapping small groups, then files can still diffuse from place to place fairly quickly.
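That diffusion claim is easy to check with a toy simulation. The sketch below is mine, not a description of any real P2P system, and the group sizes and counts are arbitrary assumptions:

```python
import random

def simulate_diffusion(num_users=1000, group_size=8, groups_per_user=3, seed=0):
    """Toy model: each user joins a few small, overlapping sharing groups,
    and a file spreads only inside groups. Returns (rounds, users_reached)."""
    rng = random.Random(seed)
    num_groups = max(1, num_users * groups_per_user // group_size)
    groups = [[] for _ in range(num_groups)]
    membership = [[] for _ in range(num_users)]
    for user in range(num_users):
        for g in rng.sample(range(num_groups), groups_per_user):
            groups[g].append(user)
            membership[user].append(g)
    has_file = {0}  # a single user seeds the file
    rounds = 0
    while len(has_file) < num_users:
        reachable = set()
        for user in has_file:
            for g in membership[user]:
                reachable.update(groups[g])  # sharing stays inside each group
        if reachable <= has_file:
            break  # diffusion stalled: remaining users are unreachable
        has_file |= reachable
        rounds += 1
    return rounds, len(has_file)

rounds, reached = simulate_diffusion()
print(f"file reached {reached} of 1000 users in {rounds} rounds")
```

With parameters like these, a file seeded at one user typically reaches essentially the whole population within a handful of rounds, which is the point: small overlapping groups slow down investigators far more than they slow down diffusion.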

All of this must look pretty unfair from the RIAA’s point of view. No matter how strong the RIAA’s legal and ethical arguments against file sharing are, people will continue to share files as long as they view it as a basically benign activity. It seems to me that only a change in public attitudes, or a change in the basic legal structure of copyright, can solve the file sharing problem.


A Modest Proposal

Now that the Supreme Court has ruled that Congress can condition Federal funding for libraries on the libraries’ use of censorware (i.e., that a law called CIPA is consistent with the constitution), it’s time to take a serious look at the deficiencies of censorware, and what can be done about them.

Suppose you’re a librarian who wants to comply with CIPA, but otherwise you want your patrons to have access to as much material on the Net as possible. From your standpoint, the popular censorware products have four problems. (1) They block some unobjectionable material. (2) They fail to block some material that is obscene or harmful to minors. (3) They try to block material that Congress does not require to be blocked, such as certain political speech. (4) They don’t let you find out what they block.

(1) and (2) are just facts of life – no technology can eliminate these problems. But (3) and (4) are solvable – it’s possible to build a censorware program that doesn’t try to block anything except as required by the law, and it’s possible for a program’s vendor to reveal what their product blocks. But of course it’s unlikely that the main censorware vendors will give you (3) or (4).

So why doesn’t somebody create an open-source censorware program that is minimally compliant with CIPA? This would give librarians a better option, and it would put pressure on the existing vendors to narrow their blocking lists and to say what they block.
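To make the idea concrete, here is a minimal sketch of what such a program’s core policy could look like. Everything here is hypothetical – the blocklist entries are placeholders, and a real product would need capture of actual web traffic, human review of the list, and a published, auditable list file. The point is only that the policy itself can be tiny, transparent, and no broader than the law requires:

```python
# Sketch of a "minimally compliant" filter: a published, auditable list of
# exact hostnames, blocking nothing else, with a librarian override
# (CIPA permits disabling the filter for adults on request).

BLOCKLIST = {                     # hypothetical placeholder entries;
    "example-obscene-site.com",   # a real list would be openly published
    "another-blocked-site.net",   # and reviewed, entry by entry
}

def is_blocked(hostname: str, unblock_override: bool = False) -> bool:
    """Block only hosts on the published list; an explicit librarian
    override always allows access."""
    if unblock_override:
        return False
    return hostname.lower() in BLOCKLIST

print(is_blocked("example-obscene-site.com"))   # True: on the list
print(is_blocked("politics.example.org"))       # False: political speech is never blocked
print(is_blocked("example-obscene-site.com", unblock_override=True))  # False
```

Because both the code and the list would be open, any librarian (or patron) could inspect exactly what is blocked and why – precisely the property the commercial products refuse to provide.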

I can understand why people would have been hesitant to create such a program in the past. Most people who want to minimize the intrusiveness of censorware have thus far done so by not using censorware; so there hasn’t been much of a market for a narrowly tailored product. But that may change as librarians are forced to use censorware.

Also, censorware opponents have found the lameness and overbreadth of existing censorware useful, especially in court. But now, in libraries at least, that usefulness is mostly past, and it’s time to figure out how to cope with CIPA in the least harmful way. More librarian-friendly censorware seems like a good start.

[Note: I must admit that I’m not entirely convinced by my own argument here. But I do think it has some merit and deserves discussion, and nobody else seemed to be saying it. So let the flaming begin!]


Hatch “Clarifies” His Position

Senator Orrin Hatch issued a short press release yesterday, backtracking from his previous (mis-)statement about remedies for copyright infringement. There are some interesting tidbits in the release, which I quote here in full, with the surprising bits italicized:


Washington – Sen. Orrin G. Hatch (R-Utah), Chairman of the Senate Judiciary Committee, today issued the following statement:

“I am very concerned about Internet piracy of personal and copyrighted materials, and I want to find effective solutions to these problems.

“I made my comments at yesterday’s hearing because I think that industry is not doing enough to help us find effective ways to stop people from using computers to steal copyrighted, personal or sensitive materials. I do not favor extreme remedies – unless no moderate remedies can be found. I asked the interested industries to help us find those moderate remedies.”

We can assume that every word of the release was chosen carefully, since it was issued in writing by Hatch’s office to clarify his position after a previous misstatement.

It’s significant, then, that he wants technology to prevent not only copyright infringement but also “piracy” of “personal or sensitive” information.

Note also that he does not entirely disavow his previous statement that appeared to advocate vigilante destruction of the computers of suspected violators – he still favors “extreme remedies” if “moderate remedies” prove infeasible, an eventuality that seems likely given his apparent belief that we have no moderate remedies today.

If the mainstream press is paying attention, they ought to find this alarming, since much of what they do involves collecting and publishing information that some people would prefer to call “personal or sensitive”. If “extreme remedies” for copyright infringement are a bad idea, “extreme remedies” for making truthful statements about other people are even worse.



The Layers Principle

Lawrence Solum and Minn Chung have a new paper, “The Layers Principle: Internet Architecture and the Law,” in which they argue that layering is an essential part of the Internet’s architecture and that Internet regulation should therefore respect the Internet’s layered nature. It’s a long paper, so no short commentary can do it justice, but here are a few reactions.

First, there is no doubt that layering is a central design principle of the Internet, or of any well-designed multipurpose network. When we teach computer science students about networks, layering is one of the most important concepts we try to convey. Solum and Chung are right on target about the importance of layering.
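For readers who haven’t sat through that networking lecture, the idea can be sketched in a few lines. This toy uses text tags where real protocols use binary headers, so the layer names are illustrative only:

```python
def wrap(layer: str, payload: str) -> str:
    """Add this layer's 'header'; everything beneath it is opaque payload."""
    return f"[{layer}|{payload}]"

def unwrap(packet: str) -> tuple[str, str]:
    """Strip the outermost header, returning (layer, inner payload)."""
    layer, payload = packet[1:-1].split("|", 1)
    return layer, payload

# Sender's stack: application data descends through the layers.
packet = wrap("ethernet", wrap("ip", wrap("tcp", "GET /index.html")))
print(packet)  # [ethernet|[ip|[tcp|GET /index.html]]]

# A router in the core peels only what it needs: link and network headers.
layer, rest = unwrap(packet)
layer, rest = unwrap(rest)
print(layer)   # ip -- the router never looks at tcp or the application data
```

Each layer treats everything beneath it as an opaque payload, and a router only ever unwraps down to the network layer – which is exactly where the end-to-end principle enters the picture.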

They’re on shakier ground, though, when they relate their layering principle to the end-to-end principle that Lessig has popularized in the legal/policy world. (The end-to-end principle says that most of the “brains” in the Internet should be at the endpoints, e.g. in end users’ computers, rather than in the core of the network itself.)

Solum and Chung say that end-to-end is a simple consequence of their layering principle. That’s true, but only because the end-to-end principle is built into their assumptions, in a subtle way, from the beginning. In their account, layering occurs only at the endpoints, and not in the network itself. While this is not entirely accurate, it’s not far wrong, since the layering is much deeper at the endpoints than in the core of the Net. But the reason this is true is that the Net is designed on the end-to-end principle. There are alternative designs that use deep layering everywhere, but those were not chosen because they would have violated the end-to-end principle. End-to-end is not necessarily a consequence of layering; rather, end-to-end is, tautologically, a consequence of the kind of end-to-end style layering that Solum and Chung assume.

Layering and end-to-end, then, are both useful rules of thumb for understanding how the Internet works. It follows, naturally, that regulation of the Net should be consistent with both principles. Any regulatory proposal, in any sphere of human activity, is backed by a story about how the proposed regulation will lead to a desirable result. And unless that story makes sense – unless it is rooted in an understanding of how the sphere being regulated actually works – then the proposal is almost certainly a bad one. So regulatory plans that are inconsistent with end-to-end or layering are usually unwise.

Of course, these two rules of thumb don’t give us the complete picture. The Net is more complicated, and sometimes a deeper understanding is needed to evaluate a policy proposal. For example, a few widespread and helpful practices such as Network Address Translation violate both the end-to-end principle and layering; and so a ban on address translation would be consistent with end-to-end and layering, but inconsistent with the actual Internet. Rules of thumb are at best a lesser substitute for detailed knowledge about how the Net really works. Thus far, we have done a poor job of incorporating that knowledge into the regulatory process. Solum and Chung’s paper has its flaws, but it is a step in the right direction.
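A toy model shows why NAT is a layering violation: the translator, nominally a network-layer box, has to read and rewrite transport-layer port numbers to do its job. The class, addresses, and port numbers below are illustrative, not any real implementation:

```python
# Toy NAT: a middlebox rewrites source address/port, peeking at the
# transport layer (ports) to do it -- a layering violation that nonetheless
# lets many private hosts share one public address.

PUBLIC_IP = "203.0.113.5"  # documentation-range address, used as an example

class Nat:
    def __init__(self):
        self.table = {}       # (private_ip, private_port) -> public_port
        self.reverse = {}     # public_port -> (private_ip, private_port)
        self.next_port = 40000

    def outbound(self, src_ip, src_port):
        """Rewrite an outgoing packet's source to the shared public address."""
        key = (src_ip, src_port)
        if key not in self.table:
            self.table[key] = self.next_port
            self.reverse[self.next_port] = key
            self.next_port += 1
        return PUBLIC_IP, self.table[key]

    def inbound(self, dst_port):
        """Map a reply back to the private host; unsolicited packets get None."""
        return self.reverse.get(dst_port)

nat = Nat()
print(nat.outbound("192.168.1.10", 5000))  # ('203.0.113.5', 40000)
print(nat.outbound("192.168.1.11", 5000))  # ('203.0.113.5', 40001)
print(nat.inbound(40000))                  # ('192.168.1.10', 5000)
print(nat.inbound(12345))                  # None: dropped
```

The last line is also why NAT violates end-to-end: an endpoint behind the translator cannot receive an unsolicited connection, so the middlebox, not the endpoints, decides what communication is possible.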

[UPDATE (Sept. 11, 2003): Looking back at this entry, I realize that by devoting most of my “ink” to my area of disagreement with Solum and Chung, I might have given the impression that I didn’t like their paper. Quite the contrary. It’s a very good paper overall, and anyone serious about Internet policy should read it.]


DRM and Black Boxes

Lisa Rein has posted (with permission) a video of my short presentation at the Berkeley DRM conference. I talked about the push to turn technologies into “black boxes” that the public is not allowed to study, understand, or discuss, and how that paralyzes public debate on important issues such as electronic voting.


RIAA/Student Suits Back in the News

Jesse Jordan, one of the students sued by the RIAA, is back in the news. It’s not that anything new has happened; it’s just that Jordan and his father are complaining about the unfairness of the suit and of the $12,000 settlement.

It’s true, as Seth Finkelstein observes, that continuing to fight the suit was a lose-lose proposition for Jordan. Even if he won, his legal bills would have far exceeded the $12,000 for which he settled (and the odds are poor that the court would order the plaintiffs to cover his legal bills).

The plaintiffs’ contributory infringement claim against Jordan, based on the assertion that he ran a “Napster-like network” (which was really just an ordinary search engine), was indeed questionable. If that were the only claim against him, then I would agree that the suit looked a bit like a shakedown.

But let’s not forget the plaintiffs’ other claim, that Jordan was a direct infringer, based on his alleged redistribution of hundreds of copyrighted works from his own computer. If proven, this claim would have cost Jordan much more than $12,000 in damages. And it seems reasonable to assume that the direct infringement claim was not baseless, especially given that Jordan has not denied it.

If so, then the only unfair aspect of Jordan’s story is that he was singled out, from among all of the direct infringers out there, as the target of a lawsuit. In other words, the problem is that a great many direct infringers are out there, any of whom could be sued at the industry’s whim.

A huge gulf has developed between the ubiquity of casual file sharing and the law’s treatment of it as a Very Serious Offense; and this cannot go on forever. Something has to give. Either the law will change, or the industry will sue file sharers into submission, or both. So far we have an uneasy truce that nobody likes.

UPDATE (3:50 PM): I originally wrote that Jordan would have had to pay the plaintiffs’ legal bills if he lost, but they wouldn’t have to pay his if he won. Louis Trager pointed out that that was incorrect, so I have corrected the text. The Copyright Act allows a court to order the losing party to pay the winning party’s legal costs, regardless of which party wins. In other words, Jordan might have had his legal bills covered, if he won his case. But of course that would be unlikely absent a total victory; and total victory would have been a long shot given the direct infringement claim.


“If It’s Not Snake Oil, It’s Pretty Awesome”

In today’s Los Angeles Times, Jon Healey writes about a new DRM proposal from a company called Music Public Broadcasting. The company’s claims, which are not substantiated in the story, give off a distinct aroma of snake oil.

The warning signs are all there. First, there is the flamboyant, self-promoting entrepreneur, newly arrived from another field. In this case, it’s a guy named Hank Risan, who was previously a dealer in high-end musical instruments.

“He is a very flamboyant guy, and he does things with a level of style that I don’t think is duplicated in the fretted-instrument industry,” said Stanley Jay, president of Mandolin Bros. Ltd., another elite dealer of stringed instruments. “In this industry, to make yourself stand apart, you need to be self-promotional. And he does that extremely well.”

Second, there’s the vaguely articulated theoretical breakthrough, described in mystical terms unintelligible to experts in the field:

Risan drew on his mathematical skills to come up with a different approach to the problem of unauthorized recording. Drawing on a branch of topology known as network theory, Risan said he could look at the networks a computer uses to move data internally and “visualize how to protect the copyrighted material as it transfers through those networks.”

The firm claims that its technology controls those pathways, letting copyright owners dictate what can and can’t be copied. “We control pathways that don’t even exist yet,” Risan said.

Third, there is evidence that the product hasn’t been demonstrated or explained to its customers – though if it actually turns out to work, they are of course eager to buy it.

Zach Zalon of Radio Free Virgin, the online radio arm of Virgin Group, said he would love to license technology that prevented his stations’ Webcasts from being recorded by “stream ripping” programs. Stream rippers break through every anti-piracy program on the market, Zalon said, “so if you could somehow defeat that, it’s fantastic.”

An executive at a major record company who’s seen the technology for protecting streams and CDs said he was impressed, although he’s not sure the demonstration can be duplicated in the real world. “If it’s not snake oil, it’s pretty awesome,” he said.

And finally, the new product claims to invalidate an accepted, fundamental principle in the field – but without really explaining how it does so.

But as piracy experts are fond of saying, anything that can be played on a computer can be recorded, regardless of how it’s protected. Encrypted streams and downloads must be unscrambled to be heard on a computer’s speakers or shown on its screen. And there are several programs that can intercept music or video on its way to the speakers or screen after it’s been unscrambled.
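The experts’ point can be made concrete with a toy model. XOR stands in for real encryption here, and the “driver” is just a function, but the structure is the same on a real machine: whatever sits between decryption and the hardware sees plaintext:

```python
# Why "anything playable is recordable": to be heard, a stream must be
# decrypted somewhere on the user's machine, and any code between the
# decrypter and the sound hardware sees the plaintext.

KEY = 0x5A  # toy key; XOR stands in for real encryption

def encrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)

def decrypt(data: bytes) -> bytes:
    return bytes(b ^ KEY for b in data)  # XOR is its own inverse

captured = bytearray()

def speaker_driver(pcm: bytes):
    """Stands in for the audio device. A 'stream ripper' hooks here."""
    captured.extend(pcm)  # the tap: plaintext audio, post-decryption

def play(protected_stream: bytes):
    # Decryption must happen for the sound to be playable at all.
    speaker_driver(decrypt(protected_stream))

audio = b"some pcm samples"
play(encrypt(audio))
print(bytes(captured) == audio)  # True: the encryption never mattered to the ripper
```

No amount of cleverness in the encryption step changes this picture; as long as the user’s machine must produce plaintext audio, a capture point exists.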

As always, the burden of proof should be on those who are making the extravagant technical claims. If Risan and his company ever substantiate their claims, by explaining at a detailed technical level why their products prevent capture of audio streams, then those claims will deserve respect. Until they do that, skepticism is, as always, the best course.


Lessons from the SCO/IBM Dispute

Conventional wisdom about the SCO/IBM dustup is that it demonstrates a serious flaw in the open-source model – an asserted lack of “quality control” on open-source code that leaves end users open to potential copyright and patent infringement suits. If any pimply-faced teenager can contribute code to open-source projects, how can you be sure that that code isn’t copyrighted or patented by somebody?

SCO charges that IBM took code from a SCO-owned version of Unix and copied it into the open-source Linux operating system, in violation of a contract between IBM and SCO. There is also some ambiguous evidence that SCO may own copyrights on some of the allegedly-copied code, in which case IBM might be liable for copyright infringement.

It may well turn out that SCO’s claims are hooey, in which case the only lesson to be learned is that we shouldn’t take the claims of desperate companies too seriously. But let’s assume, just for the sake of argument, that SCO is right, and that IBM, in violation of contracts and copyrights, did copy code without permission into Linux. What lesson do these (hypothetical) facts have to teach?

Assuming that SCO’s charges are correct, the moral of the story is not, as the conventional wisdom would have it, to avoid software that comes from pimply-faced teenagers. Quite the contrary. The moral is to be wary of software from big, established companies like IBM. In SCO’s story, the pimply-faced teenagers are bystanders – the gray-haired guys in expensive suits are the crooks.

More likely, though, the fact that SCO’s story involves their code ending up in an open-source IBM product, rather than a closed-source one, is just a red herring. IBM would have had just as large an incentive to copy code into a closed-source product, and doing so would have reduced the chance of getting caught. Nobody has offered a plausible reason why the open-source nature of the end product matters.

Now let’s turn to SCO’s argument that ordinary Linux users might be liable for infringing SCO’s copyrights, even if they didn’t know that Linux contained SCO’s code. It’s hard to see how the merits of this argument depend on the fact that Linux is open-source. SCO’s arguments would seem to apply just as well to customers who made copies of closed-source IBM products (presumably, with IBM’s permission but without SCO’s). Once again, the open-source issue seems to be irrelevant.

Now it may well be that open-source products are more prone to copyright infringement or patent infringement than closed-source products. That’s an important question; but I don’t see how the SCO/IBM dispute will help us answer it.


How To Annoy Your Mother-in-Law

Look up her age here. Then send her an email informing her that anyone on the Net can do the same.

UPDATE (9:00 PM): How to run up your mother-in-law’s AOL bill: tell her she can look up her friends’ ages.


Privacy, Blogging, and Conflict of Interest

Blogging can create the most interesting conflicts of interest. Here is a particularly juicy example:

William Safire’s column in today’s New York Times questions the motives of the new LifeLog program at DARPA. (DARPA, the Defense Advanced Research Projects Agency, is the part of the U.S. Department of Defense (DoD) that funds external research and development.)

LifeLog is a latter-day version of the Memex, which was proposed by Vannevar Bush in his famous 1945 Atlantic Monthly article, “As We May Think.” Bush foresaw the Memex as a sort of universal aid to memory that would help you remember everything you had seen and heard. If you couldn’t remember the name of that great Vietnamese restaurant your brother told you about last month, your Memex would know.

Bush realized that the technology to build a real Memex was far in the future, from his perspective in 1945. As of yet, nobody has built a real Memex, because the basic technology hasn’t been available. But that is about to change. Recording devices are getting cheaper and smaller, storage devices are getting cheaper and more capacious, and wireless communication is knitting devices together. Within a few years, it will be possible to build the first Memex. Inevitably, someone will do so.
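The indexing core of a Memex is almost trivial to sketch; the hard parts are capture, scale, and privacy. The entries below are invented for illustration:

```python
# Toy Memex: log timestamped entries, then recall them later by keyword.

from datetime import datetime

class Memex:
    """Minimal memory aid: timestamped entries plus keyword recall."""
    def __init__(self):
        self.entries = []  # list of (timestamp, text)

    def record(self, text, when=None):
        self.entries.append((when or datetime.now(), text))

    def recall(self, *keywords):
        """Return entries mentioning every keyword, case-insensitively."""
        return [(t, e) for t, e in self.entries
                if all(k.lower() in e.lower() for k in keywords)]

log = Memex()
log.record("Brother recommended Pho Saigon, a Vietnamese place on Nassau St.")
log.record("Lunch meeting about the LifeLog proposal.")
hits = log.recall("vietnamese", "brother")
print(hits[0][1])  # finds the restaurant entry
```

What a few lines can’t do, of course, is capture everything you see and hear in the first place – that is the part that cheap sensors and storage are only now making feasible.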

The DARPA LifeLog program is trying to build a smart Memex. LifeLog is supposed to be smart, so that it can figure out the context of actions, so as to help you recall more accurately and naturally.

LifeLog makes Safire nervous:

But wouldn’t the ubiquitous partner be embarrassing at times? Relax, says the program description, presumably written by Dr. Doug Gage, who didn’t answer my calls, e-mails or frantic telepathy. “The goal of the data collection is to `see what I see’ rather than to `see me.’ Users are in complete control of their own data-collection efforts, decide when to turn the sensors on or off and decide who will share the data.”

That’s just dandy for the personal privacy of the “user,” who would be led to believe he controlled the only copy of his infinitely detailed profile. But what about the “use-ee” — the person that [LifeLog’s] user is looking at, listening to, sniffing or conspiring with to blow up the world?

The human user may have opt-in control of the wireless wire he is secretly wearing, but all the people who come in contact with [LifeLog] and its willing user-spy would be ill-used without their knowledge. Result: Everybody would be snooping on everybody else, taping and sharing that data with the government and the last media conglomerate left standing.

Now we come to the conflicts of interest. Safire laments his inability to talk to DARPA program manager Doug Gage. It so happens that I discussed this very topic with Dr. Gage on Monday – and that I have an audio recording of that conversation! One of my colleagues made the recording, with Dr. Gage’s consent, as a Memex-style aid to memory. [But was his consent really uncoerced, since it might look hypocritical for him to withhold consent under the circumstances? Discuss.]

I would be lying if I said that the thought of publishing the tape never crossed my mind. But it seems obvious that publishing the tape would be unfair to Dr. Gage. He clearly saw me as just another computer scientist. He probably didn’t know that as a blogger I sometimes wear the hat of a pseudo-journalist. It seems unfair to act like a journalist when he was treating me as a non-journalist.

At this point I should probably tell you that I was meeting with Dr. Gage because I’m considering applying to him for funding to do research on how to make LifeLog, and Memexes in general, more privacy-friendly. (The LifeLog announcement explicitly invites proposals for such privacy research.) Publishing the tape would not endear me to the man who will ultimately decide whether to fund my research, so my decision not to publish it cannot be entirely disinterested.

On the other hand, publishing the tape would provide a perfect illustration of the need for the very research I want to fund, by illustrating how one person’s Memex records information that another person considers private. This is exactly the problem that the research is supposed to address. Not publishing the tape just reinforces the counter-argument that the research is not necessary because people can be trusted to respect each others’ confidences.

[In case you’re wondering, there is nothing shocking on the tape. If anything, Mr. Safire would probably find its contents mildly reassuring.]

Clearly, the shrewdly self-interested course of action for me is to write about all of these angles, without actually publishing the tape, and to throw in a gratuitous link to one of my own relevant research papers. Fortunately I would never stoop to that level.