November 26, 2024

Study Shows DMCA Takedowns Based on Inconclusive Evidence

A new study by Michael Piatek, Yoshi Kohno, and Arvind Krishnamurthy at the University of Washington shows that copyright owners’ representatives sometimes send DMCA takedown notices where there is no infringement – and even to printers and other devices that don’t download any music or movies. The authors of the study received more than 400 spurious takedown notices.

Technical details are summarized in the study’s FAQ:

Downloading a file from BitTorrent is a two step process. First, a new user contacts a central coordinator [a “tracker” – Ed] that maintains a list of all other users currently downloading a file and obtains a list of other downloaders. Next, the new user contacts those peers, requesting file data and sharing it with others. Actual downloading and/or sharing of copyrighted material occurs only during the second step, but our experiments show that some monitoring techniques rely only on the reports of the central coordinator to determine whether or not a user is infringing. In these cases whether or not a peer is actually participating is not verified directly. In our paper, we describe techniques that exploit this lack of direct verification, allowing us to frame arbitrary Internet users.
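To make the framing technique concrete, here is a minimal sketch, in Python, of the kind of tracker announce the researchers describe. The tracker URL, torrent hash, and framed address below are all hypothetical placeholders. The original tracker protocol lets a client assert its own address via an optional “ip” parameter, so a tracker that honors that parameter – and a monitor that trusts the tracker’s peer list – can be induced to attribute participation to a machine that never sent or received a byte of file data.

    import urllib.parse
    import urllib.request

    TRACKER = "http://tracker.example.org/announce"  # hypothetical tracker
    INFO_HASH = bytes.fromhex("aa" * 20)             # placeholder 20-byte torrent hash

    params = {
        "info_hash": INFO_HASH,              # identifies the torrent to the tracker
        "peer_id": b"-XX0001-000000000000",  # arbitrary 20-byte client id
        "port": 6881,
        "uploaded": 0,
        "downloaded": 0,
        "left": 700000000,                   # claim (roughly) the whole file is still needed
        # Reporting someone else's address here is the heart of the framing
        # attack; a tracker that honors "ip" never checks it against the
        # connection's real source address.
        "ip": "198.51.100.7",                # the framed address (a TEST-NET example)
    }

    url = TRACKER + "?" + urllib.parse.urlencode(params)
    with urllib.request.urlopen(url) as resp:
        print(resp.read()[:200])  # bencoded reply; the framed IP now sits in the peer list

A monitor that never performs the second step – actually exchanging data with the listed peers – has no way to tell an announce like this apart from a genuine one.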

The existence of erroneous takedowns is not news – anybody who has seen the current system operating knows that some notices are just wrong, for example referring to unused IP addresses. Somewhat more interesting is the result that it is pretty easy to “frame” somebody so they get takedown notices despite doing nothing wrong. Given this, it would be a mistake to infer a pattern of infringement based solely on the existence of takedown notices. More evidence should be required before imposing punishment.

Now it’s not entirely crazy to send some kind of soft “warning” to a user based on the kind of evidence described in the Washington paper. Most of the people who received such warnings would probably be infringers, and if it’s nothing more than a warning (“Hey, it looks like you might be infringing. Don’t infringe.”) it could be effective, especially if the recipients know that with a bit more work the copyright owner could gather stronger evidence. Such a system could make sense, as long as everybody understood that warnings were not evidence of infringement.

So are copyright owners overstepping the law when they send takedown notices based on inconclusive evidence? Only a lawyer can say for sure. I’ve read the statute and it’s not clear to me. Readers who have an informed opinion on this question are encouraged to speak up in the comments.

Whether or not copyright owners can send warnings based on inconclusive evidence, the notification letters they actually send imply that there is strong evidence of infringement. Here’s an excerpt from a letter sent to the University of Washington about one of the (non-infringing) study computers:

XXX, Inc. swears under penalty of perjury that YYY Corporation has authorized XXX to act as its non-exclusive agent for copyright infringement notification. XXX’s search of the protocol listed below has detected infringements of YYY’s copyright interests on your IP addresses as detailed in the attached report.

XXX has reasonable good faith belief that use of the material in the manner complained of in the attached report is not authorized by YYY, its agents, or the law. The information provided herein is accurate to the best of our knowledge. Therefore, this letter is an official notification to effect removal of the detected infringement listed in the attached report. The attached documentation specifies the exact location of the infringement.

The statement that the search “has detected infringements … on your IP addresses” is not accurate, and the later reference to “the detected infringement” also misleads. The letter contains details of the purported infringement, which once again give the false impression that the letter’s sender has verified that infringement was actually occurring:

Evidentiary Information:
Notice ID: xx-xxxxxxxx
Recent Infringement Timestamp: 5 May 2008 20:54:30 GMT
Infringed Work: Iron Man
Infringing FileName: Iron Man TS Kvcd(A Karmadrome Release)KVCD by DangerDee
Infringing FileSize: 834197878
Protocol: BitTorrent
Infringing URL: http://tmts.org.uk/xbtit/announce.php
Infringers IP Address: xx.xx.xxx.xxx
Infringer’s DNS Name: d-xx-xx-xxx-xxx.dhcp4.washington.edu
Infringer’s User Name:
Initial Infringement Timestamp: 4 May 2008 20:22:51 GMT

The obvious question at this point is why the copyright owners don’t do the extra work to verify that the target of the letter is actually transferring copyrighted content. There are several possibilities. Perhaps BitTorrent clients can recognize and shun the detector computers. Perhaps the copyright owners don’t want to participate in an act of infringement by sending or receiving copyrighted material (which would be necessary to confirm that the targeted computer is actually willing to transfer it). Perhaps it simply serves their interests better to send lots of weak accusations rather than fewer, stronger ones. Whatever the reason, until copyright owners change their practices, DMCA notices should not be considered strong evidence of infringement.
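For what it’s worth, the first step of that extra work is not enormous. Here is a rough sketch, in Python, of directly checking whether a peer reported by the tracker will even complete a BitTorrent handshake for the torrent in question. The peer address and hash are hypothetical placeholders, and a real verifier would have to go further and request actual file pieces – which is exactly where the participation-in-infringement worry above comes in.

    import socket

    PEER = ("203.0.113.5", 6881)          # hypothetical address from the tracker's peer list
    INFO_HASH = bytes.fromhex("aa" * 20)  # placeholder 20-byte torrent hash
    PEER_ID = b"-XX0001-000000000000"     # our own arbitrary 20-byte id

    # Handshake layout: <pstrlen=19><"BitTorrent protocol"><8 reserved bytes><info_hash><peer_id>
    handshake = bytes([19]) + b"BitTorrent protocol" + bytes(8) + INFO_HASH + PEER_ID

    with socket.create_connection(PEER, timeout=10) as s:
        s.sendall(handshake)
        reply = s.recv(68)  # a complete handshake reply is 68 bytes

    # A peer that echoes the same info_hash is at least claiming to serve this
    # torrent; mere presence on the tracker's list establishes nothing of the sort.
    if len(reply) == 68 and reply[28:48] == INFO_HASH:
        print("peer completed the handshake for this torrent")
    else:
        print("no handshake - the tracker's report was not confirmed")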

Government Data and the Invisible Hand

David Robinson, Harlan Yu, Bill Zeller, and I have a new paper about how to use infotech to make government more transparent. We make specific suggestions, some of them counter-intuitive, about how to make this happen. The final version of our paper will appear in the Fall issue of the Yale Journal of Law and Technology. The best way to summarize it is to quote the introduction:

If the next Presidential administration really wants to embrace the potential of Internet-enabled government transparency, it should follow a counter-intuitive but ultimately compelling strategy: reduce the federal role in presenting important government information to citizens. Today, government bodies consider their own websites to be a higher priority than technical infrastructures that open up their data for others to use. We argue that this understanding is a mistake. It would be preferable for government to understand providing reusable data, rather than providing websites, as the core of its online publishing responsibility.

In the current Presidential cycle, all three candidates have indicated that they think the federal government could make better use of the Internet. Barack Obama’s platform explicitly endorses “making government data available online in universally accessible formats.” Hillary Clinton, meanwhile, remarked that she wants to see much more government information online. John McCain, although expressing excitement about the Internet, has allowed that he would like to delegate the issue, possibly to a vice-president.

But the situation to which these candidates are responding – the wide gap between the exciting uses of Internet technology by private parties, on the one hand, and the government’s lagging technical infrastructure on the other – is not new. The federal government has shown itself consistently unable to keep pace with the fast-evolving power of the Internet.

In order for public data to benefit from the same innovation and dynamism that characterize private parties’ use of the Internet, the federal government must reimagine its role as an information provider. Rather than struggling, as it currently does, to design sites that meet each end-user need, it should focus on creating a simple, reliable and publicly accessible infrastructure that “exposes” the underlying data. Private actors, either nonprofit or commercial, are better suited to deliver government information to citizens and can constantly create and reshape the tools individuals use to find and leverage public data. The best way to ensure that the government allows private parties to compete on equal terms in the provision of government data is to require that federal websites themselves use the same open systems for accessing the underlying data as they make available to the public at large.

Our approach follows the engineering principle of separating data from interaction, which is commonly used in constructing websites. Government must provide data, but we argue that websites that provide interactive access for the public can best be built by private parties. This approach is especially important given recent advances in interaction, which go far beyond merely offering data for viewing, to offer services such as advanced search, automated content analysis, cross-indexing with other data sources, and data visualization tools. These tools are promising but it is far from obvious how best to combine them to maximize the public value of government data. Given this uncertainty, the best policy is not to hope government will choose the one best way, but to rely on private parties with their vibrant marketplace of engineering ideas to discover what works.
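To illustrate the division of labor the paper advocates, here is a minimal sketch, in Python, of what a private party could do once government exposes raw, machine-readable records. The feed URL and field names are hypothetical placeholders, not a real government service.

    import json
    import urllib.request

    FEED = "https://data.example.gov/grants.json"  # hypothetical bulk-data feed

    with urllib.request.urlopen(FEED) as resp:
        records = json.load(resp)

    # One of many possible private presentations: the ten largest grants.
    # A different site might visualize the same feed, or cross-index it with
    # other data sources, without asking the government to build anything new.
    for rec in sorted(records, key=lambda r: r["amount"], reverse=True)[:10]:
        print(f"{rec['agency']:<30} ${rec['amount']:>14,.2f}")

The government’s job in this picture is only to keep the feed accurate, stable, and open; every question of interface and presentation is left to the marketplace of sites built on top of it.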

To read more, see our preprint on SSRN.

The Microsoft Case: The Second Browser War

Today I’ll wrap up my series of posts looking back at the Microsoft Case, by looking at the Second Browser War that is now heating up.

The First Browser War, of course, started in the mid-1990s with the rise of Netscape and its Navigator browser. Microsoft was slow to spot the importance of the Web and raced to catch up. With version 3 of its Internet Explorer browser, released in 1996, Microsoft reached technical parity with Netscape. This was not enough to capture market share – most users stuck with the familiar Navigator – and Microsoft responded by adopting the tactics that provoked the antitrust case. With the help of these tactics, Microsoft won the first browser war, capturing the lion’s share of the browser market as Navigator was sold to AOL and then faded into obscurity.

On its way over the cliff, Netscape spun off an open source version of its browser, dubbing it Mozilla, after the original code name for Netscape’s browser. Over time, the Mozilla project released other software and renamed its browser Mozilla Firefox. Microsoft, basking in its browser-war victory and high market share, moved its attention elsewhere while Firefox improved steadily. Now Firefox market share is around 15% and growing, and many commentators see Firefox as technically superior to current versions of Internet Explorer. Lately, Microsoft has been paying renewed attention to Internet Explorer and the browser market. This may be the start of a Second Browser War.

It’s interesting to contrast the Second Browser War with the First. I see four main differences.

First, Firefox is an open-source project, whereas Navigator was not. The impact of open source here is not in its zero price – in the First Browser War, both browsers had zero price – but in its organization. Firefox is developed and maintained by a loosely organized coalition of programmers, many of whom work for for-profit companies. There is also a central Mozilla organization, which has its own revenue stream (coming mostly from Google in exchange for Firefox driving search traffic to Google), but the central organization plays a much smaller role in browser development than Netscape did. Mozilla, not needing to pay all of its developers from browser revenue, has a much lower “burn rate” than Netscape did and is therefore much more resistant to attacks on its revenue stream. Indeed, the Firefox technology will survive, and maybe even prosper, even if the central organization is destroyed. In short, an open source competitor is much harder to kill.

The second difference is that this time Microsoft starts with most of the market share, whereas before it had very little. Market share tends to be stable – customers stick with the familiar, unless they have a good reason to switch – so the initial leader has a significant advantage. Microsoft might be able to win the Second Browser War, at least in a market-share sense, just by maintaining technical parity.

The third difference is that technology has advanced a lot in the intervening decade. One implication is that web-based applications are more widespread and practical than before. (But note that participants in the First Browser War probably overestimated the practicality of web-based apps.) This has to be a big issue for Microsoft – the rise of web-based apps reduces its Windows monopoly power – so if anything Microsoft has a stronger incentive to fight hard in the new browser war.

The final difference is that the Second Browser War will be fought in the shadow of the antitrust case. Microsoft will not use all the tactics it used last time but will probably focus more on technical innovation to produce a browser that is at least good enough that customers won’t switch to Firefox. If Firefox responds by innovating more itself, the result will be an innovation race that will benefit consumers.

The First Browser War brought a flood of innovation, along with some unsavory tactics. If the Second Browser War brings us the same kind of innovation, in a fair fight, we’ll all be better off, and the browsers of 2018 will be better than we expected.

The Microsoft Case: The Government’s Theory, in Hindsight

Continuing my series of posts on the tenth anniversary of the Microsoft antitrust case, I want to look today at the government’s theory of the case, and how it looks with ten years of hindsight.

The source of Microsoft’s power in Windows was what the government dubbed the “applications barrier to entry”. Users chose their operating system in order to get the application software they wanted. Windows had by far the biggest and best selection of applications, due to its high market share (over 95% on the PC platform). To enter the PC OS market, a company would not only have to develop a competitive operating system but would also have to entice application developers to port their applications to the new system, which would be very slow and expensive if not impossible. This barrier to entry, coupled with its high market share, gave Microsoft monopoly power.

The rise of the browser, specifically Netscape Navigator and its built-in Java engine, threatened to reduce the applications barrier to entry, the government claimed. Software would be written to run in the browser rather than using the operating system’s services directly, and such software would run immediately on any new operating system as soon as the browser was ported to the new system. Cross-platform browsers would reduce the applications barrier to entry and thereby weaken Microsoft’s Windows monopoly. The government accused Microsoft of acting anticompetitively to sabotage the development of cross-platform browser technology.

The imminent flowering of browser-based applications was widely predicted at the time, and the evidence showed that top executives at Netscape, Microsoft, and Sun seemed to believe it. Yet we know in hindsight that things didn’t unfold that way: browser-based applications were not a big trend in 1998-2003. Why not? There are two possible explanations. Either the government was right and Microsoft did succeed in squashing the trend toward browser-based applications, or the government and the conventional wisdom were both wrong and there was really no trend to squash.

This highlights one of the main difficulties in antitrust analysis: hypothetical worlds. To evaluate the key issue of whether consumers and competition were harmed, one always needs to compare the actual world against a hypothetical world in which the defendant did not commit the accused acts. What would have happened if Microsoft had simply competed to produce the best Internet Explorer browser? It’s a fascinating question which we can never answer with certainty.

What actually happened, after Microsoft’s accused acts, the lawsuit, and the settlement, in the years since the case was filed? Netscape crumbled. The browser market became quiet; Microsoft tweaked Internet Explorer here and there but the pace of innovation was much slower than it had been during the browser war. Then the open source browser Mozilla Firefox arose from the ashes of Netscape. Firefox was slow to start but gained momentum as its developer community grew. When Firefox passed 10% market share and (arguably) exceeded IE technically, Microsoft stepped up the pace of its browser work, leading to what might be another browser war.

We also saw, finally, the rise of browser-based applications that had been predicted a decade ago. Today browser-based applications are all the rage. The applications barrier to entry is starting to shrink, though the barrier will still be significant until browser-based office suites reach parity with Microsoft Office. In short, the scenario the government predicted (absent Microsoft’s accused acts) is developing now, ten years later.

Why now? One reason is the state of technology. Today’s browser-based applications simply couldn’t have run on the computers of 1998, but today’s computers have the horsepower to handle browser-based apps and more is known about how to make them work. Another reason, perhaps, is that Microsoft is not acting against Firefox in the way it acted against Netscape a decade ago. A new browser war – in which Microsoft and Firefox compete to make the most attractive product – is the best outcome for consumers.

Life doesn’t always offer do-overs, but we may get a do-over on the browser war, and this time it looks like Microsoft will take the high road.

The Microsoft Case: A Window Into the Software Industry

This week I’m publishing reflections on the Microsoft antitrust case, which was filed ten years ago. Today I want to consider how the case changed the public view of the software industry.

Microsoft’s internal emails were a key part of the government’s evidence. The emails painted a vivid picture of how the company made its strategy decisions. Executives discussed frankly how “it will be very hard to increase browser market share on the merits of [Internet Explorer] alone. It will be very important to leverage the OS asset to make people use IE”. Often the tone was one of controlling customers and sabotaging competitors, rather than technical innovation.

Probably the most cringe-inducing metaphor in the whole case was “knifing the baby”. Here’s a trial dispatch from Business Week:

In particularly colorful testimony on Nov. 5 [1998], [Apple VP Avie] Tevanian described an April, 1997, meeting between two Apple and two Microsoft officials. Tevanian, who was not at the meeting, said Microsoft officials suggested that Apple abandon its business of providing “playback” software that enables users to view multimedia content on their computers. Instead, they offered Apple the much smaller portion of the market for the tools that developers use to create the content. In Apple’s mind, though, the playback software was its baby.

According to Tevanian, Apple executive Peter Hoddie asked Microsoft officials, “‘Are you asking us to kill playback? Are you asking us to knife the baby?'” He said Microsoft official Christopher Phillips responded, “‘Yes, we want you to knife the baby.’ It was very clear.”

Stories like this shredded the public perception of software companies as idealistic lab-coated technical innovators. It wasn’t just Microsoft whose reputation took a beating – it was Apple who gave us the baby-knifing metaphor. One shrewd observer told me at the time that the difference between Microsoft and its competitors was not motive but opportunity – the other companies would have done what Microsoft did, if they had the chance.

None of these companies were as crude and brutal as they looked in court – litigation has a way of highlighting the extremes – but there was more than a grain of truth to the idea that software markets are driven by power and dealmaking, along with engineering. Another classic moment in the trial came when a Microsoft lawyer was cross-examining Netscape CEO Jim Barksdale about emails written by Netscape founder and Silicon Valley superhero Jim Clark. The lawyer asked Barksdale whether he regarded Clark as “a truthful man”. Barksdale paused before answering, “I regard him as a salesman.”