
Archives for 2008

Satellite Case Raises Questions about the Rule of Law

My friend Julian Sanchez reports on a September 29 ruling by a federal magistrate judge that retailers will not be required to disclose the names of customers who purchased open satellite hardware that is currently the subject of a copyright lawsuit. The plaintiff, Echostar, sought the records as part of its discovery procedures in a lawsuit against Freetech, a firm that manufactures consumer equipment capable of receiving free satellite content. The equipment attracted Echostar’s attention because, with minor modifications, Freetech’s devices can also be used to illicitly receive Echostar’s proprietary video content. Echostar contends that satellite piracy—not the interception of legitimate free content—is the primary purpose of Freetech’s boxes. And it argues that this makes them illegal circumvention devices under the Digital Millennium Copyright Act.

The ruling is a small victory for customer privacy. But as Julian notes, the case still has some troubling implications. First, Echostar claims it can demonstrate that Freetech has been colluding with satellite pirates to ensure its boxes can be used to illegally intercept Echostar content, but it is a long way from proving that allegation in court. Unfortunately, the very existence of the lawsuit has had a devastating impact on Freetech’s business: its sales have dropped about 90 percent in recent years. In other words, Echostar has nearly destroyed Freetech’s business long before proving that Freetech did anything wrong.

Second, and more fundamentally, it appears that Freetech’s liability may turn on whether there is a “commercially significant” use of Freetech’s products other than satellite piracy. As Fred von Lohmann points out, that has the troubling implication that Freetech’s liability under copyright law is dependent on the actions of customers over whom it has no control. As Fred puts it, this means that under the DMCA, a manufacturer could “go to bed selling a legitimate product and wake up liable.” It’s a basic principle of law that people should be liable for what they do, not for what third parties do without their knowledge or consent.

None of this is to condone satellite piracy. I have little sympathy for people who face legal penalties for obtaining proprietary satellite content without paying for it. But there are legal principles that are even more important than enforcing copyright. The progress of the Echostar case so far suggests that in its zeal to protect the rights of content owners, copyright law is trampling on those principles. That is one more reason to think that the DMCA’s anti-circumvention provisions need to be reformed or repealed.

Political Information Overload and the New Filtering

[We’re pleased to introduce Luis Villa as a guest blogger. Luis is a law student at Columbia Law School, focusing on law and technology, including intellectual property, telecommunications, privacy, and e-commerce. Outside of class he serves as Editor-in-Chief of the Science and Technology Law Review. Before law school, Luis did great work on open source projects, and spent some time as “geek in residence” at the Berkman Center. — Ed]

[A big thanks to Ed, Alex, and Tim for the invitation to participate at Freedom To Tinker, and the gracious introduction. I’m looking forward to my stint here. — Luis]

A couple of weeks ago at the Web 2.0 Expo NY, I more-or-less stumbled into a speech by Clay Shirky titled “It’s Not Information Overload, It’s Filter Failure.” Clay argues that there has always been a lot of information, so our modern complaints about information overload are more properly ascribed to a breakdown in the filters – physical, economic, and social – that used to keep information at bay. This isn’t exactly a shockingly new observation, but now that Clay has put it in my head I’m seeing filters (or their absence) everywhere.

In particular, I’m seeing lots of great examples in online politics. We’ve probably never been so deluged by political information as we are now, but Clay would argue that this is not because there is more information – after all, virtually everyone has had political opinions for ages. Instead, he’d say that the old filters that kept those opinions private have become less effective. For example, social standards used to say ‘no politics at the dinner table’, and economics used to keep every Luis, Ed, and Alex from starting a newspaper with an editorial page. This has changed – social norms about politics have been relaxed, and ‘net economics have allowed for the blooming of a million blogs and a billion tweets.

Online political filtering dates back at least to Slashdot’s early attempts to moderate commenters, and criticism of them stretches back nearly as far. But the new deluge of political commentary from everyone you know (and everyone you don’t) rarely has filtering mechanisms, norms, or economics baked in yet. To a certain extent, we’re witnessing the birth of those new filters right now. Among the attempts at a ‘new filtering’ that I’ve seen lately:

  • The previously linked election.twitter.com. This is typical of the twitter ‘ambient intimacy’ approach to filtering – everything is so short and so transient that your brain does the filtering for you (or so it is claimed), giving you a 100,000-foot view of the mumblings and grumblings of a previously unfathomably vast number of people.
  • fivethirtyeight.com: an attempt to filter the noise of thousands of polls into one or two meaningful numbers by applying mathematical techniques originally developed for the analysis of baseball players. The exact algorithms aren’t disclosed, but the general methodologies have been discussed (a rough sketch of this style of weighting appears after this list).
  • The C-Span Debate Hub: this has not reached its full potential yet, but it uses some Tufte-ian tricks to pull data out of the debates, and (in theory) their video editing tool could allow for extensive discussion of any one piece of the debate, instead of the debate as a whole – surely a way to get some interesting collection and filtering.
  • Google’s ‘In Quotes’: this takes a first step in filtering (gathering all candidate quotes in one place, from disparate, messy sources) but then doesn’t build on that.
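fivethirtyeight’s actual model is not public, so the following is only a rough, hypothetical sketch of the general approach the poll-aggregation item above alludes to: combine many noisy polls into a single number by weighting each poll by sample size and by how recently it was taken. The field names, constants, and numbers are invented for illustration.

```python
# A hypothetical illustration only: fivethirtyeight's real model is not public.
# The general idea is to combine many noisy polls into one number, weighting
# each poll by sample size and recency. All names and constants are made up.
from dataclasses import dataclass
from datetime import date

@dataclass
class Poll:
    candidate_a: float   # percent support for candidate A
    candidate_b: float   # percent support for candidate B
    sample_size: int
    end_date: date

def weighted_margin(polls, today, half_life_days=14.0):
    """Weighted average of (A - B) margins: recent, large polls count more."""
    total_weight = 0.0
    weighted_sum = 0.0
    for p in polls:
        age_days = (today - p.end_date).days
        recency = 0.5 ** (age_days / half_life_days)   # older polls decay
        weight = recency * (p.sample_size ** 0.5)      # larger samples count more
        weighted_sum += weight * (p.candidate_a - p.candidate_b)
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0

# Toy usage: three made-up polls from the fall of 2008.
polls = [
    Poll(51.0, 47.0, 1000, date(2008, 10, 28)),
    Poll(49.0, 48.0, 600, date(2008, 10, 25)),
    Poll(46.0, 50.0, 800, date(2008, 9, 1)),
]
print(weighted_margin(polls, today=date(2008, 11, 1)))
```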

Unfortunately, I have no deep insights to add here. Some shallow observations and questions, instead:

  • All filters have impacts – keeping politics away from the dinner table tended to mute objections to the status quo, the ‘objectivity’ of the modern news media filter may have its own pernicious effects, and arguably information mangled by PowerPoint can blow up Space Shuttles. Have the designers of these new political filters thought about the information they are and are not presenting? What biases are being introduced? How can those be reduced or made more transparent?
  • In at least some of these examples the mechanisms by which the filtering occurs are not a matter of record (538’s math) or are not well understood (twitter’s crowd/minimal attention psychology). Does/should that matter? What if these filters became ‘dominant’ in any sense? Should we demand the source for political filtering algorithms?
  • The more ‘fact-based’ filters (538, In Quotes) seem more successful, or at least more coherent and comprehensive. Are opinions still just too hard to filter with software, or are there other factors at work here?
  • Slashdot’s nearly ten-year-old comment moderation system is still quite possibly the least bad filter out there. None of the ‘new’ politics-related filters (that I know of) pulls together reputation, meta-moderation, and filtering the way Slashdot does (a toy sketch of such a system appears after this list). Are there systemic reasons (usability, economics, etc.?) why these new tools seem so (relatively) naive?
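To make the last point a little more concrete, here is a toy sketch of the three ingredients that item mentions working together. This is not Slashdot’s actual code or algorithm; every name and constant below is invented for illustration.

```python
# A toy sketch, not Slashdot's actual system: per-comment scores from
# moderators, moderator influence ("karma") adjusted by meta-moderation,
# and a reader-chosen score threshold that does the final filtering.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    score: float = 1.0      # comments start with a small positive score

@dataclass
class Moderator:
    karma: float = 1.0      # influence earned (or lost) via meta-moderation

def moderate(comment: Comment, mod: Moderator, delta: int) -> None:
    """Apply a +1/-1 moderation, scaled by how trusted the moderator is."""
    comment.score += delta * mod.karma

def meta_moderate(mod: Moderator, was_fair: bool) -> None:
    """Other users rate past moderations; unfair ones cost future influence."""
    mod.karma = max(0.1, mod.karma + (0.1 if was_fair else -0.2))

def visible(comments: list, threshold: float) -> list:
    """The reader's filter: show only comments scoring at or above threshold."""
    return [c for c in comments if c.score >= threshold]
```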

We’re entering an interesting time. Our political process is becoming both less and more mediated – more ‘susceptible to software’, in Dave Weinberger’s phrase. Computer scientists, software interaction designers, and policy/process wonks would all do well to think early and often about the filters and values embedded in this software, and how we can (and can’t) ‘tinker’ with them to get the results we’d like to see.

Judge Suppresses Report on Voting Machine Security

A judge of the New Jersey Superior Court has prohibited the scheduled release of a report on the security and accuracy of the Sequoia AVC Advantage voting machine. Last June, Judge Linda Feinberg ordered Sequoia Voting Systems to turn over its source code to me (serving as an expert witness, assisted by a team of computer scientists) for a thorough examination. At that time she also ordered that we could publish our report 30 days after delivering it to the Court–which should have been today.

Three weeks after we delivered the report, on September 24th Judge Feinberg ordered us not to release it. This is part of a lawsuit filed by the Rutgers Constitutional Litigation Clinic seeking the decommissioning of all of New Jersey’s voting computers. New Jersey mostly uses Sequoia AVC Advantage direct-recording electronic (DRE) models. None of those DREs can be audited: they do not produce a voter-verified paper ballot that permits each voter to create a durable paper record of her electoral choices before casting her ballot electronically on a DRE. The legal basis for the lawsuit is quite simple: because there is no way to know whether the DRE voting computer is actually counting votes as cast, there is no proof that the voting computers comply with the constitution or with statutory laws that require that all votes be counted as cast.

The question of whether this report can legally be suppressed was already argued once in this Court, in June 2008, and the Court concluded then that it should be released; I will discuss this below. But as a matter of basic policy–of running a democracy–the public and legislators who want to know the basic facts about the reliability of their elections need to be able to read reports such as this one. Members of the New Jersey Legislature–who need to act now because the NJ Secretary of State is not in compliance with laws the legislature passed in 2005–have asked to read this report, but they are precluded by the Court’s order. Members of the public must decide now, in time to request an absentee ballot, whether to cast their ballots absentee (counted by optical scan) or to vote on paperless DRE voting machines. Citizens also need information so that they can communicate to their legislators their opinions about how New Jersey should conduct elections. Even the Governor and the Secretary of State of New Jersey are not permitted, by the Court’s order, to read this report in order to inform their policy making.

Examination of the AVC Advantage. In the spring of 2008, Judge Linda Feinberg ordered the defendants (officials of the State of New Jersey) to provide to the plaintiffs: (a) Sequoia AVC Advantage voting machines, (b) the source code to those voting machines, and (c) other specified information. The Sequoia Voting Systems company, which had not been a party to the lawsuit, objected to the examination of their source code by the plaintiffs’ experts, on the grounds that the source code contained trade secrets. The Court recognized that concern, and crafted a Protective Order that permitted the plaintiffs’ experts to examine the source code while protecting the trade secrets within it. However, the Court Order, issued by Judge Feinberg on June 20, does permit the plaintiffs’ experts to release this report to the public at a specified time (which has now arrived). In fact, the clause of this Order that permits the release of the report was the subject of lengthy legal argument in May-June 2008, and the plaintiffs’ experts were not willing to examine the AVC Advantage machines under conditions that prevent public discussion of their findings.

I served as the plaintiffs’ expert witness and led an examination team including myself and five other computer scientists (Maia Ginsburg, Harri Hursti, Brian Kernighan, Chris Richards, and Gang Tan). We examined the voting machines and source code during July-August 2008. On September 2nd we provided to the Court (and to the defendants and to Sequoia) a lengthy report concerning the accuracy and security of the Sequoia AVC Advantage. The terms of the Court’s Protective Order of June 20 permit us to release the report today, October 2nd.

However, on September 24 Judge Feinberg, “with great reluctance,” orally ordered the plaintiffs not to release the report on October 2nd, and not to publicly discuss their conclusions from the study. She did so after the attorney for Sequoia grossly mischaracterized our report. In order to respect the Judge’s temporary stay, I cannot now comment further on what the report does contain.

The plaintiffs are deeply troubled by the Court’s issuance of what is essentially a temporary restraining order restricting speech, without any motion or briefing whatsoever. Issuing such an order is an extreme measure, which should be done only in rare circumstances, and only if the moving party has satisfied its high burden of showing both imminent harm and likelihood of success on the merits. Those two requirements have not been satisfied, nor can they be. The plaintiffs have asked the Court to reconsider its decision to suppress our report. The Court will likely hear arguments on this issue sometime in October. We hope and expect that the Court will soon permit publication of our report.

Why Patent Exhaustion Matters

In Tuesday’s post, I explained why I thought Quanta v. LG was a good decision as a matter of law. Today I’d like to talk about why it’s an important outcome from a policy perspective.

The function of the patent exhaustion doctrine is to ensure that the lanes of commerce do not become clogged with excessive legal restrictions. It has parallels in other areas of the law. Copyright has the first sale doctrine, which says that a copyright holder’s rights in a particular copy of a creative work end when that copy is sold (or given away) by the copyright holder. Similarly, in property law, the law tends to look skeptically on what the lawyers call “servitudes running with chattels.” In plain English, this means that if someone sells you a car together with a contract stipulating that it never be driven on Tuesdays, and you turn around and sell that car to me without making me sign a similar contract, I’m not bound by the no-driving-on-Tuesdays rule. The original owner may be able to sue you for breach of contract, but nobody would say I’m guilty of theft for using the car in contravention of a contract I didn’t sign, even if you signed a contract saying otherwise.

To see why this is important, imagine trying to run a pawn shop or used bookstore in a world where every item comes with a license agreement limiting how the property may be used. Tracking, complying with, and enforcing such restrictions would be prohibitively expensive, both for the parties involved and for the legal system, so the courts naturally frown on efforts to encumber property with restrictive covenants.

The same considerations apply in the patent system. One of the biggest challenges facing the patent system right now—especially in the IT industry—is the fragmentation of patent rights. The combination of more patents, broader patents, and increasingly complex products has made patent clearance extremely difficult for high-tech firms. Patent exhaustion helps because it reduces the number of times a given patent needs to be licensed for any given consumer product. A firm can be reasonably certain that if an upstream supplier has already licensed a particular patent, it doesn’t need to negotiate a license itself. That’s a good thing because negotiating patent agreements is far from costless. Licensing a patent once, at one point in the supply chain, is almost certainly more efficient than requiring a fresh patent license at every step in the chain.

It’s important to remember that the purpose of the patent system is to ensure inventors are adequately rewarded for their inventions, not to give patent holders the power to micro-manage the entire production process. The patent system should ensure patent holders get the royalties to which they are entitled, but it should otherwise stay out of the way so that downstream manufacturers can spend their resources on engineers rather than patent lawyers. The Supreme Court’s decision in Quanta was a step in the right direction.

Popular Websites Vulnerable to Cross-Site Request Forgery Attacks

Update Oct 15, 2008 We’ve modified the paper to reflect the fact that the New York Times has fixed this problem. We also clarified that our server-side protection techniques do not protect against active network attackers.

Update Oct 1, 2008 The New York Times has fixed this problem. All of the problems mentioned below have now been fixed.

Today Ed Felten and I (Bill Zeller) are announcing four previously unpublished Cross-Site Request Forgery (CSRF) vulnerabilities. We’ve described these attacks in detail in a technical report titled Cross-Site Request Forgeries: Exploitation and Prevention.

We found four major vulnerabilities on four different sites. These vulnerabilities include what we believe is the first CSRF vulnerability that allows the transfer of funds from a financial institution. We contacted all the sites involved and gave them ample time to correct these issues. Three of these sites have fixed the vulnerabilities listed below; one has not.

CSRF vulnerabilities occur when a website allows an authenticated user to perform a sensitive action but does not verify that the user herself is invoking that action. The key to understanding CSRF attacks is to recognize that websites typically don’t verify that a request came from an authorized user. Instead they verify only that the request came from the browser of an authorized user. Because browsers run code sent by multiple sites, there is a danger that one site will (unbeknownst to the user) send a request to a second site, and the second site will mistakenly think that the user authorized the request.

If a user visits an attacker’s website, the attacker can force the user’s browser to send a request to a page that performs a sensitive action on behalf of the user. The target website sees a request coming from an authenticated user and happily performs the action, whether or not the user actually invoked it. CSRF attacks are sometimes confused with Cross-Site Scripting (XSS) attacks, but they are very different: a site completely protected against XSS is still vulnerable to CSRF if no CSRF protections are in place. For more background on CSRF, see Shiflett, Grossman, Wikipedia, or OWASP.
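To make the pattern concrete, here is a minimal sketch of the kind of endpoint CSRF exploits. It is written against Flask purely for illustration; the framework, route, and field names are my own and are not taken from our paper or from any of the sites discussed below.

```python
# A minimal sketch of a CSRF-vulnerable endpoint, written against Flask for
# concreteness. The framework, route, and field names are illustrative only;
# they are not taken from our paper or from any of the sites discussed below.
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

def move_money(user, destination, amount):
    ...  # stand-in for the real transfer logic

@app.route("/transfer", methods=["POST"])
def transfer():
    user = session.get("user")
    if user is None:
        return "not logged in", 403
    # The session cookie proves the *browser* belongs to a logged-in user,
    # but not that the user intended this request: a hidden, auto-submitting
    # form on an attacker's page can POST here, and the browser will attach
    # the cookie automatically.
    move_money(user, request.form["to"], request.form["amount"])
    return "transfer complete"
```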

We describe the four vulnerabilities below:

1. ING Direct (ingdirect.com)

Status: Fixed

We found a vulnerability on ING’s website that allowed additional accounts to be created on behalf of an arbitrary user. We were also able to transfer funds out of users’ bank accounts. We believe this is the first CSRF vulnerability to allow the transfer of funds from a financial institution. Specific details are described in our paper.

2. YouTube (youtube.com)

Status: Fixed

We discovered CSRF vulnerabilities in nearly every action a user could perform on YouTube. An attacker could have added videos to a user’s "Favorites," added himself to a user’s "Friend" or "Family" list, sent arbitrary messages on the user’s behalf, flagged videos as inappropriate, automatically shared a video with a user’s contacts, subscribed a user to a "channel" (a set of videos published by one person or group) and added videos to a user’s "QuickList" (a list of videos a user intends to watch at a later point). Specific details are described in our paper.

3. MetaFilter (metafilter.com)

Status: Fixed

A vulnerability existed on Metafilter that allowed an attacker to take control of a user’s account. A forged request could be used to set a user’s email address to the attacker’s address. A second forged request could then be used to activate the "Forgot Password" action, which would send the user’s password to the attacker’s email address. Specific details are described in our paper.

(MetaFilter fixed this vulnerability in less than two days. We appreciate the fact that MetaFilter contacted us to let us know the problem had been fixed.)

4. The New York Times (nytimes.com)

Status: Not Fixed at the time of writing. We contacted the New York Times in September 2007, and as of September 24, 2008, the vulnerability still existed. (Update: this problem has since been fixed.)

A vulnerability in the New York Times’s website allows an attacker to find out the email address of an arbitrary user. This takes advantage of the NYTimes’s "Email This" feature, which allows a user to send an email about a story to an arbitrary recipient. This email contains the logged-in user’s email address. An attacker can forge a request to activate the "Email This" feature while setting his own email address as the recipient. When a user visits the attacker’s page, an email will be sent to the attacker’s email address containing the user’s email address. This attack can be used for identification (e.g., finding the email addresses of all users who visit an attacker’s site) or for spam. This attack is particularly dangerous because of the large number of users who have NYTimes accounts and because the NYTimes keeps users logged in for over a year.

TimesPeople, a social networking site launched by the New York Times on September 23, 2008, is also vulnerable to CSRF attacks.

We hope the New York Times will decide to fix these vulnerabilities now that they have been made public. (Update: the New York Times appears to have fixed the problems detailed above.)

Mitigation

Our paper provides recommendations for preventing these attacks. We provide a server-side plugin for the PHP MVC framework Code Igniter that can completely prevent CSRF. We also provide a client-side Firefox extension that can protect users from certain types of CSRF attacks (non-GET request attacks).
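The plugin mentioned above is written for PHP and Code Igniter, and its details are in the paper. Purely as an illustration of the general server-side approach (a secret per-session token that every state-changing request must echo back), here is a minimal, hypothetical sketch in Python using Flask; the names and details below are mine, not the plugin’s.

```python
# A generic sketch of the token ("synchronizer token") defense, not the
# Code Igniter plugin from the paper. Each session gets a secret token that
# an attacker's page cannot read; forms embed it, and POSTs that fail to
# echo it back are rejected.
import hmac
import secrets
from flask import Flask, request, session, abort

app = Flask(__name__)
app.secret_key = "replace-with-a-real-secret"

def csrf_token():
    """Return this session's token, minting one on first use; templates
    would include it in every form as a hidden field."""
    if "csrf_token" not in session:
        session["csrf_token"] = secrets.token_hex(16)
    return session["csrf_token"]

@app.before_request
def check_csrf():
    # Reject any state-changing request that does not carry the session's token.
    if request.method == "POST":
        sent = request.form.get("csrf_token", "")
        expected = session.get("csrf_token", "")
        if not expected or not hmac.compare_digest(sent, expected):
            abort(403)
```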

The Takeaway

We’ve found CSRF vulnerabilities in sites that have a huge incentive to do security correctly. If you’re in charge of a website and haven’t specifically protected against CSRF, chances are you’re vulnerable.

The academic literature on CSRF attacks has been rapidly expanding over the last two years and we encourage you to see our bibliography for references to other work. On the industry side, I’d like to especially thank Chris Shiflett and Jeremiah Grossman for tirelessly working to educate developers about CSRF attacks.