
Archives for February 2005

Forecast for Infotech Policy in the New Congress

Cameron Wilson, Director of the ACM Public Policy Office in Washington, looks at changes (already made or widely reported) in the new Congress and what they tell us about likely legislative action. (He co-writes the ACM U.S. Public Policy Blog, which is quite good.)

He mentions four hot areas. The first is regulation of peer-to-peer technologies. Once the Supreme Court’s decision in Grokster comes down, expect Congress to spring into action, to protect whichever side claims to be endangered by the decision. A likely focal point for this is the new Intellectual Property subcommittee of the Senate Judiciary Committee. (The subcommittee will be chaired by Sen. Orrin Hatch, who has not been shy about regulating infotech in the name of copyright. He championed the Induce Act.) This issue will start out being about P2P but could easily expand to regulate a wider class of technologies.

The second area is telecom. Sen. Ted Stevens is the new chair of the Senate Commerce Committee, and he seems eager to work on a big revision of the Telecom Act of 1996. This will be a battle royal involving many interest groups, and telecom policy wonks will be fully absorbed. Regulation of non-telecom infotech products seems likely to creep into the bill, given the technological convergence of telecom with the Internet.

The third area is privacy. The Real ID bill, which standardizes state driver’s licenses to create what amounts to a de facto national ID card, is controversial but seems likely to become law. The recent ChoicePoint privacy scandal may drive further privacy legislation. Congress is likely to do something about spyware as well.

The fourth area is security and reliability of systems. Many people on the Hill will want to weigh in on this issue, but it’s not clear what action will be taken. There are also questions over which committees have jurisdiction. Many of us hope that PITAC’s report on the sad state of cybersecurity research funding will trigger some action.

As someone famous said, it’s hard to make predictions, especially about the future. There will surely be surprises. About the only thing we can be sure of is that infotech policy will get even more attention in this Congress than in the last one.

More on Ad-Blocking

I’m on the road today, so I don’t have a long post for you. (Good news: I’m in Rome. Bad news: It’s Rome, New York.)

Instead, let me point you to an interesting exchange about copyright and ad-blocking software on my course blog, in which “Archer” opens with a discussion of copyright and advertising revenue, and Harlan Yu responds by asking whether distributing Firefox AdBlock is a contributory infringement.

There’s plenty of interesting writing on the course blog. Check it out!

UPDATE (Feb. 28): Another student, “Unsuspecting Innocent,” has more on this topic.

Can P2P Nets Be Poisoned?

Christin, Weigend, and Chuang have an interesting new paper on corruption of files in P2P networks. Some files are corrupted accidentally (they call this “pollution”), and some might be corrupted deliberately (“poisoning”) by copyright owners or their agents. The paper measures the availability of popular, infringing files on the eDonkey, Overnet, Gnutella, and FastTrack networks, and simulates the effect of different pollution strategies that might be used.

The paper studied a few popular files for which corruption efforts were not occurring (or at least not succeeding). Polluted versions of these files were found, especially on FastTrack, but they weren’t a barrier to user access, because non-corrupted files tend to have more replicas available than polluted files do, and the systems return the files with more replicas first.
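
To make that ranking effect concrete, here is a minimal sketch of replica-count ordering in a P2P client’s search results, with made-up hashes and counts – my own illustration, not code or data from the paper:

```python
# Minimal sketch of replica-count ranking in a P2P client's search results.
# File hashes and replica counts are hypothetical, not data from the paper.

from dataclasses import dataclass

@dataclass
class SearchResult:
    file_hash: str    # identifies one version (replica set) of the file
    replicas: int     # number of peers offering this exact version
    corrupted: bool   # ground truth, unknown to the client

results = [
    SearchResult("a1b2", replicas=340, corrupted=False),  # genuine copy
    SearchResult("c3d4", replicas=12,  corrupted=True),   # polluted decoy
    SearchResult("e5f6", replicas=7,   corrupted=True),   # polluted decoy
]

# Clients typically present the most-replicated version first, so the
# genuine file, with far more replicas, is what most users download.
for r in sorted(results, key=lambda r: r.replicas, reverse=True):
    print(r.file_hash, r.replicas, "corrupted" if r.corrupted else "ok")
```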

They move on to simulate the effect of various pollution strategies. They conclude that a sufficiently sophisticated pollution strategy, which injects different decoy versions of a file at different times, and injects many replicas of the same decoy at the same time, would significantly reduce user access to targeted files.
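
Here is a toy Monte Carlo model of why that strategy works – my own sketch with invented parameters, not the authors’ simulator:

```python
# Toy model of a "sophisticated" poisoning strategy: the attacker keeps
# several decoy versions alive at once, each injected with many replicas,
# so decoys dominate the replica ranking. All parameters are made up.

import random

GENUINE_REPLICAS = 300   # replicas of the one genuine version
DECOY_REPLICAS = 500     # replicas injected per decoy version
DECOYS_ACTIVE = 5        # decoy versions alive at any given time

def user_download():
    """One user picks among versions, weighted by replica count."""
    versions = [("genuine", GENUINE_REPLICAS)]
    versions += [(f"decoy{i}", DECOY_REPLICAS) for i in range(DECOYS_ACTIVE)]
    total = sum(w for _, w in versions)
    pick = random.uniform(0, total)
    for name, w in versions:
        pick -= w
        if pick <= 0:
            return name

trials = 10_000
bad = sum(user_download() != "genuine" for _ in range(trials))
print(f"{bad / trials:.0%} of downloads hit a decoy")  # roughly 89% here
```

With these made-up numbers, decoys absorb roughly nine downloads in ten; the paper’s actual simulations are far more detailed, but the intuition is the same.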

Some P2P programs use simple reputation systems to try to distinguish corrupted files from non-corrupted ones; the paper argues that these will be ineffective against its best pollution strategy. But the authors also note that better reputation systems could detect their sophisticated poisoning strategy.
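
To see what a simple reputation system might look like, and why rotating decoys undermines it, here is a per-hash voting sketch – my own illustration, not the scheme the paper evaluates:

```python
# Sketch of a simple per-hash reputation filter, and why rotating decoys
# defeats it: each freshly injected decoy hash has no vote history yet.
# This is an illustration, not the scheme evaluated in the paper.

from collections import defaultdict

votes = defaultdict(lambda: {"good": 0, "bad": 0})

def report(file_hash: str, ok: bool) -> None:
    """A peer reports whether a downloaded version played correctly."""
    votes[file_hash]["good" if ok else "bad"] += 1

def trusted(file_hash: str, min_votes: int = 10) -> bool:
    """Trust a version only once enough peers have vouched for it."""
    v = votes[file_hash]
    total = v["good"] + v["bad"]
    if total < min_votes:
        return False  # unknown versions are suspect...
    return v["good"] / total > 0.8

# An old decoy accumulates bad votes and is filtered out:
for _ in range(20):
    report("old_decoy", ok=False)
print(trusted("old_decoy"))    # False

# ...but a freshly injected decoy has no history at all, and neither
# does a newly shared genuine file, so the filter alone can't tell
# them apart. That is the gap a rotating-decoy attacker exploits.
print(trusted("fresh_decoy"))  # False, but so is every new genuine file
```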

They don’t say anything more about the arms race between reputation technologies and pollution technologies. My guess is that in the long run reputation systems will win, and poisoning strategies will lose their viability. In the meantime, though, it looks like copyright owners have much to gain from poisoning.

[UPDATE (6:45 PM): I changed the second paragraph to eliminate an error that was caused by my misreading of the paper. Originally I said, incorrectly, that the study found little if any evidence of pollution for the files they studied. In fact, they chose those files because they were not subject to pollution. Thanks to Cypherpunk, Joe Hall, and Nicolas Christin for pointing out my error.]

Google AutoLink: Doesn’t Cross the Line, Yet

Google’s new Toolbar includes a feature called AutoLink that adds hyperlinks to certain kinds of content in Web pages. For example, if it spots a street address in the page, it links the address to Google Maps; if it spots a book’s ISBN, it links that to the book’s page on Amazon.
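
Mechanically, this kind of rewriting is just pattern matching over page text. Here is a minimal sketch of the idea, using my own regex and URL template rather than Google’s actual implementation (which, among other things, also has to avoid text that already sits inside a link):

```python
# Minimal sketch of AutoLink-style rewriting: find ISBN-like strings in
# page text and wrap them in hyperlinks. The regex and URL template are
# my own simplifications, not Google's implementation.

import re

ISBN10 = re.compile(r"\bISBN[:\s]*([0-9]{9}[0-9Xx])\b")

def autolink_isbns(html: str) -> str:
    def wrap(m: re.Match) -> str:
        isbn = m.group(1)
        return f'<a href="https://www.amazon.com/dp/{isbn}">{m.group(0)}</a>'
    return ISBN10.sub(wrap, html)

page = "<p>My favorite book is ISBN 0131103628.</p>"
print(autolink_isbns(page))
# <p>My favorite book is
# <a href="https://www.amazon.com/dp/0131103628">ISBN 0131103628</a>.</p>
```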

Some people are unhappy about this, seeing it as a violation of Google’s famous “don’t be evil” rule. I don’t see the problem – at least not yet. (Others think it is a strategic mistake by Google, which it may be. I’m not considering that question here.)

Thus far, AutoLink is applied to a page only when the user installs certain versions of the Google Toolbar. The user also has an explicit switch to turn AutoLink on and off. Nothing is going to happen unless the user asks for it, so it’s hard to see how the interests of sophisticated users are harmed. (The story might be different for novice users, if the feature is presented in a way that tends to confuse them about which hyperlinks came from the original site author. I haven’t looked at that issue.)

What about the interests of website authors? If Google is rewriting my site without my permission, that may affect my copyright interests in my page. In the same way that ad-blocking software may harm a site author’s business, an overly aggressive page-rewriting tool could erode the site author’s ability to generate revenue via hyperlinks on the page. For example, if I mention a product and hyperlink to it via an online store’s affiliates program, I can get revenue if one of my readers clicks through the link and buys the product. If Google adds hyperlinks to other stores (or, worse yet, replaces my hyperlink with one of theirs), this will cost me money.

A site author could argue that the rewritten page is a derivative work, created without permission, and thus infringes copyright. Can the user be held responsible for this? Can Google?

I don’t know the legally correct answer to that question, but it’s interesting to think about whether it’s fair to allow the kind of rewriting that Google is doing. (I’m using “fair” in the ordinary sense here, not in the legalistic sense of “fair use”.) I think that what Google is doing now is fair. Links are added only at the user’s request and only in limited circumstances (street addresses, package tracking numbers, ISBNs, and vehicle ID numbers), and the original site author’s links are apparently left intact. If the system expands too far, it could cross the line, but it hasn’t done that yet.

(Andrew Grossman at Tech Liberation Front has an interesting post about this, if you can ignore the gratuitous ad hominem attacks. He’s right that this shouldn’t be seen as an antitrust issue.)

Broadcast Flag in Court

Tomorrow the DC Circuit will hear arguments in the case challenging the FCC’s authority to impose the Broadcast Flag regulation. The case will determine whether the FCC can control the design of computers, in the name of copyright. It will also determine whether the ill-conceived Broadcast Flag rule will be imposed.

Today’s New York Times has a disappointing story, by Tom Zeller Jr., rehashing the arguments about the Broadcast Flag. I say it’s disappointing because it reiterates without comment the MPAA’s logically disconnected hash of arguments about the Broadcast Flag. I think the press has a responsibility, at the very least, not to let logically fallacious arguments pass without comment.

The article starts by describing Mike Godwin downloading an episode of the Showtime series “Huff.” After some scene-setting, we read this:

The M.P.A.A. has argued that without the broadcast flag rule, content creators would have no incentive to provide digital content over the airwaves, because people could simply pluck video streams out of the air and redistribute them to millions of viewers over the Internet.

“It’s very simple,” said Fritz Attaway, a vice president and Washington general counsel for the M.P.A.A. “Without the broadcast flag, high-value content would migrate to where it could be protected.”

In practical terms, such “protected” places would be cable and satellite systems where digital content can be more easily scrambled, encrypted or otherwise controlled, leaving broadcast networks at a distinct disadvantage in the new digital marketplace.

The fallacy here should be pretty obvious. “Huff” is already distributed only in a “protected” place – a premium cable channel – and it’s available to infringing downloaders anyway. (Other cable and satellite offerings are similarly available on P2P.) This is not evidence that cable-like protection is needed for broadcast. To the contrary, it’s evidence that the “protection” of cable-like DRM is illusory.

Similarly, the article repeats without comment the MPAA argument that they will be forced to withhold high-resolution broadcast service unless the Broadcast Flag is imposed. This argument couldn’t be more wrong in its view of broadcasters’ incentives.

In fact, P2P infringement gives broadcasters a powerful incentive to offer higher-quality, higher-resolution content. High-res content makes legitimate broadcast service more attractive to viewers. P2P versions can’t match these increases in resolution, because doing so would make P2P files much bigger, clogging P2P systems with enormous files and making downloads much slower. If broadcasters have to “compete against free,” their best hope is to actually compete, by improving their product – especially when the competitor can’t match the improvement.
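
Some back-of-the-envelope arithmetic makes the point; the bitrates below are my own ballpark figures for illustration, not numbers from the article:

```python
# Rough file sizes for a one-hour program at different bitrates.
# The bitrates are ballpark figures chosen for illustration only.

HOUR_SECONDS = 3600

def size_gb(bitrate_mbps: float) -> float:
    """File size in gigabytes for one hour of video at a given bitrate."""
    return bitrate_mbps * HOUR_SECONDS / 8 / 1000  # megabits -> MB -> GB

print(f"Typical SD rip (~1 Mbps):       {size_gb(1):.1f} GB")   # ~0.5 GB
print(f"High-res broadcast (~12 Mbps):  {size_gb(12):.1f} GB")  # ~5.4 GB
```

A tenfold-or-more jump in file size is exactly the kind of improvement that is cheap for a broadcaster to deliver over the air and painful to move through a P2P network.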

If the Broadcast Flag actually did reduce infringement, then imposing it would only reduce broadcasters’ incentive to switch to high-res broadcast. Looking at the evidence, though, it could hardly be more clear that the Broadcast Flag won’t reduce the availability of P2P content at all. Even ignoring the Flag’s many technical loopholes, the best it could possibly offer is the same level of protection that cable content gets today. The evidence is overwhelming that that level is insufficient to keep programs off the P2P networks. Remember Huff?

The real story here, for an enterprising reporter, lies in how the MPAA convinced the FCC to mandate the Broadcast Flag despite offering only these weak arguments in the public proceeding.