
Google Print, Damages and Incentives

There’s been lots of discussion online of this week’s lawsuit filed against Google by a group of authors over the Google Print project. Google Print is scanning in books from four large libraries, indexing their contents, and letting people do Google-style searches over them. Search results show short snippets from the books, but won’t let users extract long portions. Google will withdraw any book from the program at the request of the copyright holder. As I understand it, scanning was already underway when the suit was filed.

The authors claim that scanning the books violates their copyright. Google claims the project is fair use. Everybody agrees that Google Print is a cool project that will benefit the public – but it might be illegal anyway.

Expert commentators disagree about the merits of the case. Jonathan Band thinks Google should win. William Patry thinks the authors should win. Who am I to argue with either of them? The bottom line is that nobody knows what will happen.

So Google was taking a risk by starting the project. The risk is larger than you might think, because if Google loses, it won’t just have to reimburse the authors for the economic harm they have suffered. Instead, Google will have to pay statutory damages of up to $30,000 for every book that has been scanned. That adds up quickly! (I don’t know how many books Google has scanned so far, but I assume it’s a nontrivial number.)
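To get a feel for how fast that exposure grows, here is a back-of-the-envelope sketch. The book counts are purely hypothetical, since we don’t know how many books have actually been scanned; only the $30,000-per-work ceiling comes from the discussion above.

```python
# Back-of-the-envelope sketch of the statutory-damages exposure.
# The book counts below are hypothetical guesses; only the
# $30,000-per-work ceiling comes from the statute discussed above.

PER_WORK_CAP = 30_000  # statutory maximum per infringed work, in dollars

for books_scanned in (10_000, 100_000, 1_000_000):
    exposure = books_scanned * PER_WORK_CAP
    print(f"{books_scanned:>9,} books -> up to ${exposure:,} in statutory damages")
```

Even the smallest of these guesses yields a nine-figure worst case.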

You might wonder why copyright law imposes such a high penalty for an act – scanning one book – that causes relatively little harm. It’s a good question. If Google loses, it makes economic sense to make Google pay for the harm it has caused (and to impose an injunction against future scanning). That gives Google the right incentive: to weigh the expected cost of the harm to authors against the project’s overall value.

But statutory damages bear no necessary relation to the actual harm, and that makes technologists like Google too cautious. Even if a new technology creates great value while doing little harm, and the technologist has a strong (but not slam-dunk) fair use case, the risk of statutory damages may deter the technology’s release. That’s inefficient.

Some iffy technologies should be deterred, if they create relatively little value for the harm they do, or if the technologist has a weak fair use case. But statutory damages deter too many new technologies.
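A toy expected-value comparison may make the over-deterrence point concrete. Every number below (project value, number of books, per-book harm, odds of losing) is invented for illustration; the only figure taken from above is the $30,000 statutory cap.

```python
# Toy model of the launch decision; every number is invented for illustration.

project_value  = 100_000_000  # value the project creates (hypothetical)
books          = 100_000      # works involved (hypothetical)
harm_per_book  = 10           # actual economic harm per work if infringing (hypothetical)
statutory_cap  = 30_000       # statutory damages per work (from the statute)
p_lose         = 0.3          # chance the fair use defense fails (hypothetical)

expected_actual    = p_lose * books * harm_per_book   # liability if damages track harm
expected_statutory = p_lose * books * statutory_cap   # liability under statutory damages

print(f"Expected liability, actual-harm rule:       ${expected_actual:,.0f}")
print(f"Expected liability, statutory-damages rule: ${expected_statutory:,.0f}")
print("Worth launching under the actual-harm rule?      ", project_value > expected_actual)
print("Worth launching under the statutory-damages rule?", project_value > expected_statutory)
```

In this made-up scenario the project is worth launching even after paying for the expected harm, but the statutory-damages exposure alone makes it look like a losing bet.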

[Law and economics mavens may object that under some conditions it is efficient to impose higher damages. That’s true, but I don’t think those conditions apply here. I don’t have space to address this point further, but please feel free to discuss it in the comments.]

In light of the risk Google is facing, it’s surprising that Google went ahead with the project. Maybe Google will decide now that discretion is the better part of valor, and will settle the case, stopping Google Print in exchange for the withdrawal of the lawsuit.

The good news, in the long run at least, is that this case will remind policymakers of the value of a robust fair use privilege.

Who Is An ISP?

There’s talk in Washington about a major new telecommunications bill, to update the Telecom Act of 1996. A discussion draft of the bill is floating around.

The bill defines three types of services: Internet service (called “Broadband Internet Transmission Service” or BITS for short); VoIP; and broadband television. It lays down specific regulations for each type of service, and delegates regulatory power to the FCC.

In bills like this, much of the action is in the definitions. How you’re regulated depends on which of the definitions you satisfy, if any. The definitions essentially define the markets in which companies can compete.

Here’s how the Internet service market is defined:

The term “BITS” or “broadband Internet transmission service” –
(A) means a packet-switched service that is offered to the public, or [effectively offered to the public], with or without a fee, and that, regardless of the facilities used –
(i) is transmitted in a packet-based protocol, including TCP/IP or a successor protocol; and
(ii) provides to subscribers the capability to send and receive packetized information; …

The term “BITS provider” means any person who provides or offers to provide BITS, either directly or through an affiliate.

The term “packet-switched service” means a service that routes or forwards packets, frames, cells, or other data units based on the identification, address, or other routing information contained in the packets, frames, cells, or other data units.

The definition of BITS includes ordinary Internet Service Providers, as we would expect. But that’s not all. It seems to include public chat servers, which deliver discrete messages to specified destination users. It seems to include overlay networks like Tor, which provide anonymous communication over the Internet using a packet-based protocol. As Susan Crawford observes, it seems to cover nodes in ad hoc mesh networks. It even seems to include anybody running an open WiFi access point.
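To see how little machinery it takes to “route or forward packets … based on the address … contained in the packets,” consider the following minimal sketch. The wire format and port number are invented; the point is only that, read literally, even a toy relay like this arguably satisfies the definition of a packet-switched service.

```python
# Minimal sketch of something that "routes or forwards packets ... based on
# the address ... contained in the packets."  The wire format (destination
# prefixed to each datagram) and the port number are invented for illustration.

import socket

LISTEN_ADDR = ("0.0.0.0", 9999)  # hypothetical port for the relay

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(LISTEN_ADDR)

while True:
    data, _src = sock.recvfrom(65535)
    # First line of each datagram names the destination, e.g. b"203.0.113.5:7000\n<payload>"
    header, _, payload = data.partition(b"\n")
    host, _, port = header.decode().partition(":")
    # Forward the payload based on the routing information carried in the data unit.
    sock.sendto(payload, (host, int(port)))
```

A chat server, a Tor node, or a home access point is doing some more elaborate version of this same loop, which is why they all seem to fall within the definition.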

What happens to you if you’re a BITS provider? You have to register with the FCC and hope your registration is approved; you have to comply with consumer protection requirements (“including service appointments and responses to service interruptions and outages”); and you have to comply with privacy regulations which, ironically, require you to keep track of who your users are so you can send them annual notices telling them that you are not storing personal information about them.

I doubt the bill’s drafters meant to include chat or Tor as BITS providers. The definition can probably be rewritten to exclude cases like these.

A more interesting question is whether they meant to include open access points. It’s hard to justify applying heavyweight regulation to the individuals or small businesses who run access points. And it seems likely that many would ignore the regulations anyway, just as most consumers seem to ignore the existing rules that require an FCC license to use the neighborhood-range walkie-talkies sold at Wal-Mart.

The root of the problem is the assumption that Internet connectivity will be provided only by large institutions that can amortize regulatory compliance costs over a large subscriber base. If this bill passes, that will be a self-fulfilling prophecy – only large institutions will be able to offer Internet service.

Movie Studios Form DRM Lab

Hollywood argues – or at least strongly implies – that technology companies could stop copyright infringement if they wanted to, but have chosen not to do so. I have often wondered whether Hollywood really believes this, or whether the claim is just a ploy to gain political advantage.

Such a ploy might be very effective if it worked. Imagine that you somehow convinced policymakers that the auto industry could make cars that operated with no energy source at all. You could then demand that the auto industry make all sorts of concessions in energy policy, and you could continue to criticize them for foot-dragging no matter how much they did.

If you were using this ploy, the dumbest thing you could do is to set up your own “Perpetual Motion Labs” to develop no-energy-source cars. Your lab would fail, of course, and its failure would demonstrate that your argument was bogus all along. You would only set up the lab if you thought that perpetual-motion cars were pretty easy to build.

Which brings us to the movie industry’s announcement, yesterday, that they will set up “MovieLabs”, a $30 million research effort to develop effective anti-copying technologies. The only sensible explanation for this move is that Hollywood really believes that there are easily-discovered anti-copying technologies that the technology industry has failed to find.

So Hollywood is still in denial about digital copying.

The pressure will be on MovieLabs to find strong anti-copying technologies, because a failure by MovieLabs can’t be blamed on the tech industry. Failure will show, instead, that stopping digital copying is much harder than Hollywood thought. And MovieLabs will fail, just as Perpetual Motion Labs would.

When MovieLabs fails, expect the spinners to emerge again, telling us that MovieLabs has a great technology that it can’t tell us about, or that there’s a great technology that isn’t quite finished, or that the goal all along was not to stop P2P copying but only to reduce some narrow, insignificant form of copying. Expect, most of all, that MovieLabs will go to almost any length to avoid independent evaluation of its technologies.

This is a chance for Hollywood to learn what the rest of us already know – that cheap and easy copying is an unavoidable side-effect of the digital revolution.

P2P Still Growing; Traffic Shifts to eDonkey

CacheLogic has released a new report, in presentation form, on peer-to-peer traffic trends, based on measurements of networks worldwide. (The interesting part starts at slide 5.)

P2P traffic continued to grow in 2005. As expected, there was no dropoff after the Grokster decision.

Traffic continues to shift away from the FastTrack network (used by Kazaa and others), mostly toward eDonkey. BitTorrent is still quite popular but has lost some usage share. Gnutella showed unexpected growth in the U.S., though its share is still small.

CacheLogic speculates, plausibly, that these trends reflect a usage shift away from systems that faced heavier legal attacks. FastTrack saw several legal attacks, including the Grokster litigation, along with many lawsuits against individual users. BitTorrent itself didn’t come under legal attack, but some sites hosting directories of (mostly) infringing BitTorrent files were shut down. eDonkey came in for fewer legal attacks, and the lawyers mostly ignored Gnutella as insignificant; these systems grew in popularity. So far in 2005, legal attacks have shifted users from one system to another, but they haven’t reduced overall P2P activity.

Another factor in the data, which CacheLogic doesn’t say as much about, is a possible shift toward distribution of larger files. The CacheLogic traffic data count the total number of bytes transferred, so large files are weighted much more heavily than small files. This factor will tend to inflate the apparent importance of BitTorrent and eDonkey, which transfer large files efficiently, at the expense of FastTrack and Gnutella, which don’t cope as well with large files. Video files, which tend to be large, are more common on BitTorrent and eDonkey. Overall, video accounted for about 61% of P2P traffic, and audio for 11%. Given the size disparity between video and audio, it seems likely that the majority of content (measured by number of files, or by dollar value, or by minutes of video/audio content) was still audio.
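A quick hypothetical calculation shows how much byte-counting can skew the picture. The file counts and sizes below are invented, not CacheLogic’s data; they just contrast a network carrying a few large video files with one carrying many small audio files.

```python
# Hypothetical illustration of how byte-counting inflates the apparent share
# of networks that carry large (video) files.  These are made-up numbers,
# not CacheLogic's data.

networks = {
    # name: (files transferred, average file size in MB)
    "BitTorrent/eDonkey (mostly video)": (1_000, 700),   # e.g. ~700 MB video files
    "FastTrack/Gnutella (mostly audio)": (10_000, 5),    # e.g. ~5 MB audio files
}

total_files = sum(n for n, _ in networks.values())
total_bytes = sum(n * size for n, size in networks.values())

for name, (n, size) in networks.items():
    share_by_files = n / total_files
    share_by_bytes = (n * size) / total_bytes
    print(f"{name}: {share_by_files:5.1%} of files, {share_by_bytes:5.1%} of bytes")
```

In this made-up example the audio-heavy network carries over 90% of the files but under 10% of the bytes, which is why a byte-weighted measure can understate how much content still moves over it.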

The report closes by predicting the continued growth of P2P, which seems like a pretty safe bet. It notes that copyright owners are now jumping on the P2P bandwagon, having learned the lesson of BitTorrent, which is that P2P is a very efficient way to distribute files, especially large files. As for users,

End users love P2P as it gives them access to the media they want, when they want it and at high speed …

Will the copyright owners’ authorized P2P systems give users the access and flexibility they have come to expect? If not, users will stick with other P2P systems that do.

Secrecy in Science

There’s an interesting dispute between astronomers about who deserves credit for discovering a solar system object called 2003EL61. Its existence was first announced by Spanish astronomers, but another team in the U.S. believes that the Spaniards may have learned about the object due to an information leak from the U.S. team.

The U.S. team’s account appears on their web page and was in yesterday’s NY Times. The short version is that the U.S. team published an advance abstract about their paper, which called the object by a temporary name that encoded the date it had been discovered. They later realized that an obscure website contained a full activity log for the telescope they had used, which allowed anybody with a web browser to learn exactly where the telescope had been pointing on the date of the discovery. This in turn allowed the object’s orbit to be calculated, enabling anybody to point their telescope at the object and “discover” it. Just after the abstract was released, the Spanish team apparently visited the telescope log website; a few days later, they announced that they had discovered the object.

If this account is true, it’s clearly a breach of scientific ethics by the Spaniards. The seriousness of the breach depends on other circumstances which we don’t know, such as the possibility that the Spaniards had already discovered the object independently and were merely checking whether the Americans’ object was the same one. (If so, their announcement should have said that the American team had discovered the object independently.)

[UPDATE (Sept. 15): The Spanish team has now released their version of the story. They say they discovered the object on their own. When the U.S. group’s abstract, containing a name for the object, appeared on the Net, the Spaniards did a Google search for the object name. The search showed a bunch of sky coordinates. They tried to figure out whether any of those coordinates corresponded to the object they had seen, but they were unable to tell one way or the other. So they went ahead with their own announcement as planned.

This is not inconsistent with the U.S. team’s story, so it seems most likely to me that both stories are true. If so, then I was too hasty in inferring a breach of ethics, for which I apologize. I should have realized that the Spanish team might have been unable to tell whether the objects were the same.]

When this happened, the American team hastily went public with another discovery, of an object called 2003UB313 which may be the tenth planet in our solar system. This raised the obvious question of why the team had withheld the announcement of this new object for as long as they did. The team’s website has an impassioned defense of the delay:

Good science is a careful and deliberate process. The time from discovery to announcement in a scientific paper can be a couple of years. For all of our past discoveries, we have described the objects in scientific papers before publicly announcing the objects’ existence, and we have made that announcement in under nine months…. Our intent in all cases is to go from discovery to announcement in under nine months. We think that is a pretty fast pace.

One could object to the above by noting that the existence of these objects is never in doubt, so why not just announce the existence immediately upon discovery and continue observing to learn more? This way other astronomers could also study the new object. There are two reasons we don’t do this. First, we have dedicated a substantial part of our careers to this survey precisely so that we can discover and have the first crack at studying the large objects in the outer solar system. The discovery itself contains little of scientific interest. Almost all of the science that we are interested in doing comes from studying the object in detail after discovery. Announcing the existence of the objects and letting other astronomers get the first detailed observations of these objects would ruin the entire scientific point of spending so much effort on our survey. Some have argued that doing things this way “harms science” by not letting others make observations of the objects that we find. It is difficult to understand how a nine month delay in studying an object that no one would even know existed otherwise is in any way harmful to science!

Many other types of astronomical surveys are done for precisely the same reasons. Astronomers survey the skies looking for ever higher redshift galaxies. When they find them they study them and write a scientific paper. When the paper comes out other astronomers learn of the distant galaxy and they too study it. Other astronomers cull large databases such as the 2MASS infrared survey to find rare objects like brown dwarves. When they find them they study them and write a scientific paper. When the paper comes out other astronomers learn of the brown dwarves and they study them in perhaps different ways. Still other astronomers look around nearby stars for the elusive signs of directly detectable extrasolar planets. When they find one they study it and write a scientific paper….. You get the point. This is the way that the entire field of astronomy – and probably all of science – works. It’s a very effective system; people who put in the tremendous effort to find these rare objects are rewarded with getting to be the first to study them scientifically. Astronomers who are unwilling or unable to put in the effort to search for the objects still get to study them after a small delay.

This describes an interesting dynamic that seems to occur in all scientific fields – I have seen it plenty of times in computer science – where researchers withhold results from their colleagues for a while, to ensure that they get a headstart on the followup research. That’s basically what happens when an astronomer delays announcing the discovery of an object, in order to do followup analyses of the object for publication.

The argument against this secrecy is pretty simple: announcing the first result would let more people do followup work, making the followup work both quicker and more complete on average. Scientific discovery would benefit.

The argument for this kind of secrecy is more subtle. The amount of credit one gets for a scientific result doesn’t always correlate with the difficulty of getting the result. If a result is difficult to get but doesn’t create much credit to the discoverer, then there is an insufficient incentive to look for that result. The incentive is boosted if the discoverer gets an advantage in doing followup work, for example by keeping the original result secret for a while. So secrecy may increase the incentive to do certain kinds of research.

Note that there isn’t much incentive to keep low-effort / high-credit research secret, because there are probably plenty of competing scientists who are racing to do such work and announce it first. The incentive to keep secrets is biggest for high-effort / low-credit research which enables low-effort / high-credit followup work. And this is exactly the case where incentives most need to be boosted.
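A toy payoff calculation may make this quadrant argument concrete. All of the numbers are invented; they just describe a high-effort, low-credit discovery that enables high-credit followup work.

```python
# Toy payoff model for the secrecy argument; all numbers are invented.
# A scientist undertakes the survey only if the expected credit exceeds the effort.

search_effort    = 10  # cost of the survey (high-effort discovery)
discovery_credit = 2   # credit for merely announcing the object (low credit)
followup_credit  = 15  # credit for the detailed followup studies (high credit)

# Announce immediately: rivals race to do the followup, so the discoverer
# captures little beyond the discovery credit.
payoff_immediate = discovery_credit

# Hold the result briefly: the discoverer gets first crack at the followup
# work and captures that credit too.
payoff_with_delay = discovery_credit + followup_credit

print("Survey pays off with immediate disclosure:", payoff_immediate > search_effort)   # False
print("Survey pays off with a secrecy window:    ", payoff_with_delay > search_effort)  # True
```

In this made-up case the survey isn’t worth doing if the object must be announced immediately, but a short secrecy window makes it worthwhile, which is the incentive effect the astronomers are describing.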

Michael Madison compares the astronomers’ tradeoff between publication and secrecy to the tradeoff an inventor faces between keeping an invention secret and filing for a patent. As a matter of law, discovered scientific facts are not patentable, and that’s a good thing.

As Madison notes, science does have its own sort of “intellectual property” system that tries to align incentives for the public good. There is a general incentive to publish results for the public good – scientific credit goes to those who publish. Secrecy is sometimes accepted in cases where secret-keeping is needed to boost incentives, but the system is designed to limit this secrecy to cases where it is really needed.

But this system isn’t perfect. As the astronomers note, the price of secrecy is that followup work by others is delayed. Sometimes the delay isn’t too serious – 2003UB313 will still be plodding along in its orbit and there will be plenty of time to study it later. But sometimes delay is a bigger deal, as when an astronomical object is short-lived and cannot be studied at all later. Another example, which arises more often in computer security, is when the discovery is about an ongoing risk to the public which can be mitigated more quickly if it is more widely known. Scientific ethics tend to require at least partial publication in cases like these.

What’s most notable about the scientific system is that it works pretty well, at least within the subject matter of science, and it does so without much involvement by laws or lawyers.