Archives for September 2005

Net Governance Debate Heats Up

European countries surprised the U.S. Wednesday by suggesting that an international body rather than the U.S. government should have ultimate control over certain Internet functions. According to Tom Wright’s story in the International Herald Tribune,

The United States lost its only ally [at the U.N.’s World Summit on the Information Society] late Wednesday when the EU made a surprise proposal to create an intergovernmental body that would set principles for running the Internet. Currently the U.S. Commerce Department approves changes to the Internet’s “root zone files”, which are administered by the Internet Corporation for Assigned Names and Numbers, or Icann, a nonprofit organization based in Marina del Rey, California.

As often happens, this discussion seems to confuse control over Internet naming with control over the Internet as a whole. Note the juxtaposition: the EU wants a new body to “set principles for running the Internet”; currently the U.S. controls naming via Icann.

This battle would be simpler and less intense if it were only about naming. What is really at issue is who will have the perceived legitimacy to regulate the Internet. The U.S. fears a U.N.-based regulator, as do I. Much of the international community fears and resents U.S. hegemony over the Net. (General anti-Americanism plays a role too, as in the Inquirer’s op-ed.)

The U.S. would have cleaner hands in this debate if it swore off broad regulation of the Net. It’s hard for the U.S. to argue against creating a new Internet regulator when the U.S. itself looks eager to regulate the Net. Suspicion is strong that the U.S. will regulate the Net to the advantage of its entertainment and e-commerce industries. Here’s the Register’s story:

The UN’s special adviser for internet governance, Nitin Desai, told us that the issue of control was particularly stark for developing nations, where the internet is not so much an entertainment or e-commerce medium but a vital part of the country’s infrastructure.

[Brazilian] Ambassador Porto clarified that point further: “Nowadays our voting system in Brazil is based on ICTs [information and communication technologies], our tax collection system is based on ICTs, our public health system is based on ICTs. For us, the internet is much more than entertainment, it is vital for our constituencies, for our parliament in Brazil, for our society in Brazil.” With such a vital resource, he asked, “how can one country control the Internet?”

The U.S. says flatly that it will not agree to an international governance scheme at this time.

If the U.S. doesn’t budge, and the international group tries to go ahead on its own, we might see a split, where a new entity I’ll call “UNCANN” coexists with ICANN, with each of the two claiming authority over Internet naming. This won’t break the Internet, since each user will choose to pay attention to either UNCANN or ICANN. To the extent that UNCANN and ICANN assign names differently, there will be some confusion when UNCANN users talk to ICANN users. I wouldn’t expect many differences, though, so the creation of UNCANN probably wouldn’t matter much, except in two respects. First, the choice to point one’s naming software at UNCANN or ICANN would probably take on symbolic importance, even if it made little practical difference. Second, UNCANN’s aura of legitimacy as a naming authority would make it easier for UNCANN to issue regulatory decrees that were taken seriously by the states that would ultimately have to implement them.
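The mechanics of such a split can be sketched with a toy model (the names and addresses below are invented for illustration): each user’s resolver trusts exactly one root, and conflicts arise only where the two authorities’ name assignments differ.

```python
# Toy model of a naming split: two root authorities, each user trusts one.
# All names and addresses here are invented examples.
icann_root = {"example.org": "192.0.2.1"}
uncann_root = {"example.org": "192.0.2.1", "example.union": "192.0.2.9"}

def resolve(name, trusted_root):
    """Look a name up in whichever authority's table the user trusts."""
    return trusted_root.get(name)

# Where the authorities agree, both groups of users see the same Internet...
assert resolve("example.org", icann_root) == resolve("example.org", uncann_root)
# ...but a name assigned by only one authority is invisible to the other's users.
assert resolve("example.union", icann_root) is None
```

This is why the split wouldn’t “break” the Internet: as long as the two tables mostly agree, most lookups behave identically regardless of which root a user trusts.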

This last issue, of regulatory legitimacy, is the really important one. All the talk about naming is a smokescreen.

My guess is that the Geneva meeting will break up with much grumbling but no resolution of this issue. The EU and the rest of the international group won’t move ahead with their own naming authority, and the U.S. will tread more carefully in the future. That’s the best outcome we can hope for in the short term.

In the longer term, this issue will have to be resolved somehow. Until it is, many people around the world will keep asking the question, “Who runs the Internet?”, and not liking the answer.

The Pizzaright Principle

Lately, lots of bogus arguments for copyright expansion have been floating around. A handy detector for bogus arguments is the Pizzaright Principle.

Pizzaright – the exclusive right to sell pizza – is a new kind of intellectual property right. Pizzaright law, if adopted, would make it illegal to make or serve a pizza without a license from the pizzaright owner.

Creating a pizzaright would be terrible policy, of course. We’re much better off letting the market decide who can make and sell pizza.

The Pizzaright Principle says that if you make an argument for expanding copyright or creating new kinds of intellectual property rights, and if your argument serves equally well as an argument for pizzaright, then your argument is defective. It proves too much. Whatever your argument is, it had better rest on some difference between pizzaright and the exclusive right you want to create.

Let’s apply the Pizzaright Principle to two well-known bogus arguments for intellectual property expansion.

Suppose Alice argues that extending the term of copyright is good, because it gives the copyright owner a revenue stream that can be invested in creating new works. She could equally well argue that pizzaright is good, because it gives the pizzaright owner a revenue stream that can be invested in creating new pizzas.

(The flaw in Alice’s argument is that the decision whether to invest in a new copyrighted work, or a new pizza, is rationally based only on the cost of the investment and the expected payoff. Making a transfer payment to the would-be investor doesn’t change his decision, assuming that capital markets are efficient.)
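To make the flaw concrete, here is a toy version of the investment decision, with invented numbers: a transfer payment changes the investor’s wealth, but not the comparison of cost to expected payoff that drives the decision.

```python
# Toy investment decision, with invented numbers.
def will_invest(wealth, cost, expected_payoff):
    # Assuming efficient capital markets, the investor can borrow at the
    # market rate, so current wealth drops out: invest iff payoff > cost.
    return expected_payoff > cost

cost, payoff = 100_000, 80_000   # a marginal new work (or new pizza)

# A transfer payment (e.g., extra revenue from a longer copyright term)
# raises the investor's wealth but leaves the decision unchanged.
assert will_invest(wealth=0, cost=cost, expected_payoff=payoff) == \
       will_invest(wealth=50_000, cost=cost, expected_payoff=payoff)
```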

Suppose that Bob argues that the profitability of broadcasting may be about to decrease, so broadcasters should be given new intellectual property rights. He could equally well argue that if the pizza business has become less profitable, a pizzaright should be created.

(The flaw in Bob’s argument is the failure to show that the new right furthers the interests of society as a whole, as opposed to the narrow interests of the broadcasters or pizzamakers.)

The Pizzaright Principle is surprisingly useful. Try it out on the next IP expansion argument you hear.

Secure Flight: Shifting Goals, Vague Plan

The Transportation Security Administration (TSA) released Friday a previously confidential report by the Secure Flight Working Group (SFWG), an independent expert committee on which I served. The committee’s charter was to study the privacy implications of the Secure Flight program. The final report is critical of TSA’s management of Secure Flight.

(Besides me, the committee members were Martin Abrams, Linda Ackerman, James Dempsey, Daniel Gallington, Lauren Gelman, Steven Lilienthal, Bruce Schneier, and Anna Slomovic. Members received security clearances and had access to non-public information; but everything I write here is based on public information. I should note that although the report was meant to reflect the consensus of the committee members, readers should not assume that every individual member agrees with everything said in the report.)

Secure Flight is a successor to existing programs that do three jobs. First, they vet air passengers against a no-fly list, which contains the names of people who are believed to pose a danger to aviation and so are not allowed to fly. Second, they vet passengers against a watch list, which contains the names of people who are believed to pose a more modest danger and so are subject to a secondary search at the security checkpoint. Third, they vet passengers’ reservations against the CAPPS I criteria, and subject those who meet the criteria to a secondary search. (The precise CAPPS I criteria are not public, but it is widely believed that the criteria include whether the passenger paid cash for the ticket, whether the ticket is one-way, and other factors.)
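The three vetting steps described above can be sketched as follows. The list contents and screening criteria here are invented stand-ins, since the real lists and the CAPPS I criteria are not public.

```python
# Sketch of the three existing vetting steps. List entries and the
# screening criteria below are invented placeholders, not the real ones.
NO_FLY = {"dangerous person"}       # believed to pose a danger to aviation
WATCH = {"suspicious person"}       # believed to pose a more modest danger

def vet(passenger, paid_cash, one_way):
    if passenger in NO_FLY:
        return "denied boarding"
    if passenger in WATCH:
        return "secondary search"
    if paid_cash and one_way:       # stand-ins for the non-public CAPPS I criteria
        return "secondary search"
    return "cleared"
```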

The key section of the report is on pages 5-6. Here’s the beginning of that section:

The SFWG found that TSA has failed to answer certain key questions about Secure Flight: First and foremost, TSA has not articulated what the specific goals of Secure Flight are. Based on the limited test results presented to us, we cannot assess whether even the general goal of evaluating passengers for the risk they represent to aviation security is a realistic or feasible one or how TSA proposes to achieve it. We do not know how much or what kind of personal information the system will collect or how data from various sources will flow through the system.

The lack of clear goals for the program is a serious problem (p. 5):

The TSA is under a Congressional mandate to match domestic airline passenger lists against the consolidated terrorist watch list. TSA has failed to specify with consistency whether watch list matching is the only goal of Secure Flight at this stage. The Secure Flight Capabilities and Testing Overview, dated February 9, 2005 (a non-public document given to the SFWG), states in the Appendix that the program is not looking for unknown terrorists and has no intention of doing so. On June 29, 2005, Justin Oberman (Assistant Administrator, Secure Flight/Registered Traveler [at TSA]) testified to a Congressional committee that another goal proposed for Secure Flight is its use to establish “Mechanisms for … violent criminal data vetting.” Finally, TSA has never been forthcoming about whether it has an additional, implicit goal – the tracking of terrorism suspects (whose presence on the terrorist watch list does not necessarily signify intention to commit violence on a flight).

The report also notes that TSA had not answered questions about what the system’s architecture would be, whether Secure Flight would be linked to other TSA systems, whether and how the system would use commercial data sources, and how oversight would work. TSA had not provided enough information to evaluate the security of Secure Flight’s computer systems and databases.

The report ends with these recommendations:

Congress should prohibit live testing of Secure Flight until it receives the following from the [Homeland Security Secretary].

First, a written statement of the goals of Secure Flight signed by the Secretary of DHS that only can be changed on the Secretary’s order. Accompanying documentation should include: (1) a description of the technology, policy and processes in place to ensure that the system is only used to achieve the stated goals; (2) a schematic that describes exactly what data is collected, from what entities, and how it flows through the system; (3) rules that describe who has access to the data and under what circumstances; and (4) specific procedures for destruction of the data. There should also be an assurance that someone has been appointed with sufficient independence and power to ensure that the system development and subsequent use follow the documented procedures.

In conclusion, we believe live testing of Secure Flight should not commence until there has been adequate time to review, comment, and conduct a public debate on the additional documentation outlined above.

Speaking for myself, I joined the committee with an open mind. A system along the general lines of Secure Flight might make sense, and might properly balance security with privacy. I wanted to see whether Secure Flight could be justified. I wanted to hear someone make the case for Secure Flight. TSA had said that it was gathering evidence and doing analysis to do so.

In the end, TSA never did make a case for Secure Flight. I still have the same questions I had at the beginning. But now I have less confidence that TSA can successfully run a program like Secure Flight.

Google Print, Damages and Incentives

There’s been lots of discussion online of this week’s lawsuit filed against Google by a group of authors, over the Google Print project. Google Print is scanning in books from four large libraries, indexing the books’ contents, and letting people do Google-style searches on the books’ contents. Search results show short snippets from the books, but won’t let users extract long portions. Google will withdraw any book from the program at the request of the copyright holder. As I understand it, scanning was already underway when the suit was filed.

The authors claim that scanning the books violates their copyright. Google claims the project is fair use. Everybody agrees that Google Print is a cool project that will benefit the public – but it might be illegal anyway.

Expert commentators disagree about the merits of the case. Jonathan Band thinks Google should win. William Patry thinks the authors should win. Who am I to argue with either of them? The bottom line is that nobody knows what will happen.

So Google was taking a risk by starting the project. The risk is larger than you might think, because if Google loses, it won’t just have to reimburse the authors for the economic harm they have suffered. Instead, Google will have to pay statutory damages of up to $30,000 for every book that has been scanned. That adds up quickly! (I don’t know how many books Google has scanned so far, but I assume it’s a nontrivial number.)

You might wonder why copyright law imposes such a high penalty for an act – scanning one book – that causes relatively little harm. It’s a good question. If Google loses, it makes economic sense to make Google pay for the harm it has caused (and to impose an injunction against future scanning). This gives Google the right incentive, to weigh the expected cost of harm to the authors against the project’s overall value.

Imposing statutory damages makes technologists like Google too cautious. Even if a new technology creates great value while doing little harm, and the technologist has a strong (but not slam-dunk) fair use case, the risk of statutory damages may deter the technology’s release. That’s inefficient.
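A back-of-the-envelope comparison, using invented numbers, shows the size of the distortion: the same lawsuit risk that is modest under harm-based damages becomes enormous under statutory damages.

```python
# Back-of-the-envelope comparison. Every number below is a hypothetical
# assumption for illustration, except the $30,000 statutory maximum.
books = 1_000_000            # assumed number of books scanned
p_lose = 0.3                 # assumed chance the fair use defense fails
harm_per_book = 1            # assumed actual economic harm per book, in dollars
statutory_per_book = 30_000  # statutory maximum per infringed work

harm_based_exposure = p_lose * harm_per_book * books       # ~$300 thousand
statutory_exposure = p_lose * statutory_per_book * books   # ~$9 billion
```

Under these assumptions, a project whose value to the public comfortably exceeds the expected harm would still be deterred by the statutory-damages figure.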

Some iffy technologies should be deterred, if they create relatively little value for the harm they do, or if the technologist has a weak fair use case. But statutory damages deter too many new technologies.

[Law and economics mavens may object that under some conditions it is efficient to impose higher damages. That’s true, but I don’t think those conditions apply here. I don’t have space to address this point further, but please feel free to discuss it in the comments.]

In light of the risk Google is facing, it’s surprising that Google went ahead with the project. Maybe Google will decide now that discretion is the better part of valor, and will settle the case, stopping Google Print in exchange for the withdrawal of the lawsuit.

The good news, in the long run at least, is that this case will remind policymakers of the value of a robust fair use privilege.

Who Is An ISP?

There’s talk in Washington about a major new telecommunications bill, to update the Telecom Act of 1996. A discussion draft of the bill is floating around.

The bill defines three types of services: Internet service (called “Broadband Internet Transmission Service” or BITS for short); VoIP; and broadband television. It lays down specific regulations for each type of service, and delegates regulatory power to the FCC.

In bills like this, much of the action is in the definitions. How you’re regulated depends on which of the definitions you satisfy, if any. The definitions essentially define the markets in which companies can compete.

Here’s how the Internet service market is defined:

The term “BITS” or “broadband Internet transmission service” –
(A) means a packet-switched service that is offered to the public, or [effectively offered to the public], with or without a fee, and that, regardless of the facilities used –
(i) is transmitted in a packet-based protocol, including TCP/IP or a successor protocol; and
(ii) provides to subscribers the capability to send and receive packetized information; …

The term “BITS provider” means any person who provides or offers to provide BITS, either directly or through an affiliate.

The term “packet-switched service” means a service that routes or forwards packets, frames, cells, or other data units based on the identification, address, or other routing information contained in the packets, frames, cells, or other data units.

The definition of BITS includes ordinary Internet Service Providers, as we would expect. But that’s not all. It seems to include public chat servers, which deliver discrete messages to specified destination users. It seems to include overlay networks like Tor, which provide anonymous communication over the Internet using a packet-based protocol. As Susan Crawford observes, it seems to cover nodes in ad hoc mesh networks. It even seems to include anybody running an open WiFi access point.
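To see how little it takes to satisfy the definition, here is a toy message relay (names and messages invented): it forwards data units based on the address information they contain, which appears to be all the definition of a “packet-switched service” requires.

```python
# A toy message relay. Read literally, it "forwards ... data units based on
# the ... address ... contained in" them, so it seems to meet the bill's
# definition of a packet-switched service. Names and payloads are invented.
mailboxes = {}   # destination user -> queued payloads

def forward(frame):
    """Deliver a data unit based solely on the address it carries."""
    mailboxes.setdefault(frame["to"], []).append(frame["payload"])

def fetch(user):
    """Hand a user everything queued for their address."""
    return mailboxes.pop(user, [])

forward({"to": "alice", "payload": "hello"})
forward({"to": "alice", "payload": "world"})
```

A chat server, a Tor node, or an open WiFi access point does essentially this, at scale, which is why each seems to fall within the definition.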

What happens to you if you’re a BITS provider? You have to register with the FCC and hope your registration is approved; you have to comply with consumer protection requirements (“including service appointments and responses to service interruptions and outages”); and you have to comply with privacy regulations which, ironically, require you to keep track of who your users are so you can send them annual notices telling them that you are not storing personal information about them.

I doubt the bill’s drafters meant to include chat or Tor as BITS providers. The definition can probably be rewritten to exclude cases like these.

A more interesting question is whether they meant to include open access points. It’s hard to justify applying heavyweight regulation to the individuals or small businesses who run access points. And it seems likely that many would ignore the regulations anyway, just as most consumers seem to ignore the existing rules that require an FCC license to use the neighborhood-range walkie-talkies sold at Wal-Mart.

The root of the problem is the assumption that Internet connectivity will be provided only by large institutions that can amortize regulatory compliance costs over a large subscriber base. If this bill passes, that will be a self-fulfilling prophecy – only large institutions will be able to offer Internet service.