Archives for October 2008

Opting In (or Out) is Hard to Do

Thanks to Ed and his fellow bloggers for welcoming me to the blog. I’m thrilled to have this opportunity, because as a law professor who writes about software as a regulator of behavior (most often through the substantive lenses of information privacy, computer crime, and criminal procedure), I often need to vet my theories and test my technical understanding with computer scientists and other techies, and this will be a great place to do it.

This past summer, I wrote an article (available for download online) about ISP surveillance, arguing that recent moves by NebuAd/Charter, Phorm, AT&T, and Comcast augur a coming wave of unprecedented, invasive deep-packet inspection. I won’t reargue the entire paper here (the thesis is no doubt much less surprising to the average Freedom to Tinker reader than to the average lawyer) but you can read two bloggy summaries I wrote here and here or listen to a summary I gave in a radio interview. (For summaries by others, see [1] [2] [3] [4]).

Two weeks ago, Verizon and AT&T told Congress that they would monitor for marketing purposes only users who had opted in. According to Verizon VP Tom Tauke, “[B]efore a company captures certain Internet-usage data for targeted or customized advertising purposes, it should obtain meaningful, affirmative consent from consumers.”

I applaud this announcement, but I’m curious how the ISPs will implement this promise. It seems like there are two architectural puzzles here: how does the user convey consent, and how does the provider distinguish between the packets of consenting and nonconsenting users? For an ISP, neither step is nearly as straightforward as it is for a web provider like Google, which can simply set and check cookies. For the first piece, I suppose a user can click a check box on a web-based form or respond to an e-mail, letting the ISP know he would like to opt in. These solutions seem clumsy, however, and ISPs probably want a system that is as seamless and easy to use as possible, to maximize the number of people opting in.

Once ISPs have a “white list” of users who have opted in, how do they turn this into on-the-fly discretionary packet sniffing? Do they map white-listed users to IP addresses and add these to a filter, or is there a risk that things will get out of sync during DHCP lease renewals? Can they use cookies, perhaps redirecting every HTTP session to an ISP-run web server first using 301 HTTP status codes? (This seems to be the way Phorm implements opt-out, according to Richard Clayton’s illuminating analysis.) Do any of these solutions scale for an ISP with hundreds of thousands of users?
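To make the IP-mapping approach concrete, here is a minimal, purely hypothetical sketch (the class and method names are my own invention, not any ISP’s actual system) of a whitelist keyed by account and mapped to dynamically assigned addresses. The sync hazard mentioned above shows up directly: if the DHCP callback is ever missed, packets get classified under a stale lease.

```python
# Hypothetical sketch of opt-in filtering at an ISP.
# Consent is recorded per account; DHCP events keep an
# ip -> account map current so packets can be classified.

class OptInFilter:
    def __init__(self):
        self.opted_in_accounts = set()  # accounts with affirmative consent
        self.ip_to_account = {}         # current DHCP leases: ip -> account

    def record_consent(self, account_id):
        """Called when a user opts in (e.g. via a web form)."""
        self.opted_in_accounts.add(account_id)

    def on_dhcp_lease(self, ip, account_id):
        """Called on every lease grant or renewal. Overwrites any
        stale entry for the same IP -- if this hook is ever missed,
        consenting and nonconsenting traffic get confused."""
        self.ip_to_account[ip] = account_id

    def may_inspect(self, packet_src_ip):
        """True only if the packet's source IP currently belongs
        to an account that opted in."""
        account = self.ip_to_account.get(packet_src_ip)
        return account is not None and account in self.opted_in_accounts
```

Even in this toy version, the scaling question is visible: every packet requires a lookup, and correctness depends on the lease map never lagging behind the DHCP server.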

And are things any easier if the ISP adopts an opt-out system instead?

Satellite Piracy, Mod Chips, and the Freedom to Tinker

Tom Lee makes an interesting point about the satellite case I wrote about on Saturday: the problem facing EchoStar and other satellite TV providers is strikingly similar to the challenges that video game console manufacturers have faced for many years. There’s a grey market in “mod chips” for video game consoles. Typically, they’re sold in a form that only allows them to be used for legitimate purposes. But many users purchase the mod chips and then immediately download new software that allows them to play illicit copies of copyrighted video games. It’s unclear exactly how the DMCA applies in this kind of case.

But as Tom notes, this dilemma is likely to get more common over time. As hardware gets cheaper and more powerful, companies are increasingly going to build their products using off-the-shelf hardware and custom software. And that will mean that increasingly, the hardware needed to do legitimate reverse engineering will be identical to the hardware needed to circumvent copy protection. The only way to prevent people from getting their hands on “circumvention devices” will be to prevent them from purchasing any hardware capable of interoperating with a manufacturer’s product without its permission.

Policymakers, then, face a fundamental choice. We can have a society in which reverse engineering for legitimate purposes is permitted, at the cost of some amount of illicit circumvention of copy protection schemes. Or we can have a society in which any unauthorized tinkering with copy-protected technologies is presumptively illegal. The latter position has the consequence of making copy protection more than just a copyright enforcement device (and a lousy one at that): it gives platform designers de facto control over who may build hardware devices that interoperate with their own. Thus far, Congress and the courts have chosen the latter option. You can probably infer from this blog’s title where many of its contributors stand.

Satellite Case Raises Questions about the Rule of Law

My friend Julian Sanchez reports on a September 29 ruling by a federal magistrate judge that retailers will not be required to disclose the names of customers who purchased open satellite hardware that is currently the subject of a copyright lawsuit. The plaintiff, Echostar, sought the records as part of discovery in a lawsuit against Freetech, a firm that manufactures consumer equipment capable of receiving free satellite content. The equipment attracted Echostar’s attention because, with minor modifications, Freetech’s devices can also be used to illicitly receive Echostar’s proprietary video content. Echostar contends that satellite piracy—not the interception of legitimate free content—is the primary purpose of Freetech’s boxes. And it argues that this makes them illegal circumvention devices under the Digital Millennium Copyright Act.

The ruling is a small victory for customer privacy. But as Julian notes, the case still has some troubling implications. Echostar claims it can demonstrate that Freetech has been colluding with satellite pirates to ensure its boxes can be used to illegally intercept Echostar content. But it is a long way from proving that allegation in court. Unfortunately, the very existence of the lawsuit has had a devastating impact on Freetech’s business: the company’s sales have dropped about 90 percent in recent years. In other words, Echostar has nearly destroyed Freetech’s business long before actually proving that Freetech did anything wrong.

Second, and more fundamentally, it appears that Freetech’s liability may turn on whether there is a “commercially significant” use of Freetech’s products other than satellite piracy. As Fred von Lohmann points out, that has the troubling implication that Freetech’s liability under copyright law is dependent on the actions of customers over whom it has no control. As Fred puts it, this means that under the DMCA, a manufacturer could “go to bed selling a legitimate product and wake up liable.” It’s a basic principle of law that people should be liable for what they do, not for what third parties do without their knowledge or consent.

None of this is to condone satellite piracy. I have little sympathy for people who face legal penalties for obtaining proprietary satellite content without paying for it. But there are legal principles that are even more important than enforcing copyright. The progress of the Echostar case so far suggests that in its zeal to protect the rights of content owners, copyright law is trampling on those principles. Which is one more reason to think that the DMCA’s anti-circumvention provisions need to be reformed or repealed.

Political Information Overload and the New Filtering

[We’re pleased to introduce Luis Villa as a guest blogger. Luis is a law student at Columbia Law School, focusing on law and technology, including intellectual property, telecommunications, privacy, and e-commerce. Outside of class he serves as Editor-in-Chief of the Science and Technology Law Review. Before law school, Luis did great work on open source projects, and spent some time as “geek in residence” at the Berkman Center. — Ed]

[A big thanks to Ed, Alex, and Tim for the invitation to participate at Freedom To Tinker, and the gracious introduction. I’m looking forward to my stint here. — Luis]

A couple weeks ago at the Web 2.0 Expo NY, I more-or-less stumbled into a speech by Clay Shirky titled “It’s Not Information Overload, It’s Filter Failure.” Clay argues that there has always been a lot of information, so our modern complaints about information overload are more properly ascribed to a breakdown in the filters – physical, economic, and social – that used to keep information at bay. This isn’t exactly a shockingly new observation, but now that Clay put it in my head I’m seeing filters (or their absence) everywhere.

In particular, I’m seeing lots of great examples in online politics. We’ve probably never been so deluged by political information as we are now, but Clay would argue that this is not because there is more information – after all, virtually everyone has had political opinions for ages. Instead, he’d say that the old filters that kept those opinions private have become less effective. For example, social standards used to say ‘no politics at the dinner table’, and economics used to keep every Luis, Ed, and Alex from starting a newspaper with an editorial page. This has changed – social norms about politics have been relaxed, and ‘net economics have allowed for the blooming of a million blogs and a billion tweets.

Online political filtering dates back at least to Slashdot’s early attempts to moderate commenters, and criticism of them stretches back nearly as far. But the new deluge of political commentary from everyone you know (and everyone you don’t) rarely has filtering mechanisms, norms, or economics baked in yet. To a certain extent, we’re witnessing the birth of those new filters right now. Among the attempts at a ‘new filtering’ that I’ve seen lately:

  • The previously linked election.twitter.com. This is typical of the twitter ‘ambient intimacy’ approach to filtering – everything is so short and so transient that your brain does the filtering for you (or so it is claimed), giving you a 100,000-foot view of the mumblings and grumblings of a previously unfathomably vast number of people.
  • fivethirtyeight.com: an attempt to filter the noise of the thousands of polls into one or two meaningful numbers by applying mathematical techniques originally developed for analysis of baseball players. The exact algorithms aren’t disclosed, but the general methodologies have been discussed.
  • The C-Span Debate Hub: this has not reached its full potential yet, but it uses some Tufte-ian tricks to pull data out of the debates, and (in theory) their video editing tool could allow for extensive discussion of any one piece of the debate, instead of the debate as a whole – surely a way to get some interesting collection and filtering.
  • Google’s ‘In Quotes’: this takes one first step in filtering (gathering all candidate quotes in one place, from disparate, messy sources) but then doesn’t build on that.
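The fivethirtyeight.com example above is a filter whose exact algorithms, as noted, are not disclosed. Purely as an illustration of the general idea – turning thousands of noisy polls into one number – here is a sketch that weights each poll by sample size and recency. The weighting scheme is my own assumption for the example, not 538’s actual method.

```python
import math

# Illustrative only: the real FiveThirtyEight model is not public.
# Each poll is weighted by the square root of its sample size
# (diminishing returns on respondents) and decays exponentially
# with age, halving every `half_life_days` days.

def weighted_poll_average(polls, half_life_days=14.0):
    """polls: list of (candidate_share_pct, sample_size, age_days)."""
    num = den = 0.0
    for share, n, age in polls:
        weight = math.sqrt(n) * 0.5 ** (age / half_life_days)
        num += weight * share
        den += weight
    return num / den
```

Even this toy filter makes Luis’s transparency question concrete: the half-life and sample-size weighting are editorial choices, and a reader of the single output number never sees them.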

Unfortunately, I have no deep insights to add here. Some shallow observations and questions, instead:

  • All filters have impacts – keeping politics away from the dinner table tended to mute objections to the status quo, the ‘objectivity’ of the modern news media filter may have its own pernicious effects, and arguably information mangled by PowerPoint can blow up Space Shuttles. Have the designers of these new political filters thought about the information they are and are not presenting? What biases are being introduced? How can those be reduced or made more transparent?
  • In at least some of these examples the mechanisms by which the filtering occurs are not a matter of record (538’s math) or are not well understood (twitter’s crowd/minimal attention psychology). Does/should that matter? What if these filters became ‘dominant’ in any sense? Should we demand the source for political filtering algorithms?
  • The more ‘fact-based’ filters (538, inquotes) seem more successful, or at least more coherent and comprehensive. Are opinions still just too hard to filter with software or are there other factors at work here?
  • Slashdot’s nearly ten-year-old comment moderation system is still quite possibly the least bad filter out there. None of the ‘new’ politics-related filters (that I know of) pulls together reputation, meta-moderation, and filtering like Slashdot does. Are there systemic reasons (usability, economics, etc.?) why these new tools seem so (relatively) naive?

We’re entering an interesting time. Our political process is becoming both less and more mediated – more ‘susceptible to software’ in Dave Weinberger’s phrase. Computer scientists, software interaction designers, and policy/process wonks would all do well to think early and often about the filters and values embedded in this software, and how we can (and can’t) ‘tinker’ with them to get the results we’d like to see.

Judge Suppresses Report on Voting Machine Security

A judge of the New Jersey Superior Court has prohibited the scheduled release of a report on the security and accuracy of the Sequoia AVC Advantage voting machine. Last June, Judge Linda Feinberg ordered Sequoia Voting Systems to turn over its source code to me (serving as an expert witness, assisted by a team of computer scientists) for a thorough examination. At that time she also ordered that we could publish our report 30 days after delivering it to the Court–which should have been today.

Three weeks after we delivered the report, on September 24th Judge Feinberg ordered us not to release it. This is part of a lawsuit filed by the Rutgers Constitutional Litigation Clinic, seeking to decommission all of New Jersey’s voting computers. New Jersey mostly uses Sequoia AVC Advantage direct-recording electronic (DRE) models. None of those DREs can be audited: they do not produce a voter-verified paper ballot, which would permit each voter to create a durable paper record of her electoral choices before casting her ballot electronically. The legal basis for the lawsuit is quite simple: because there is no way to know whether a DRE voting computer is actually counting votes as cast, there is no proof that the voting computers comply with the constitution or with statutory law requiring that all votes be counted as cast.

The question of whether this report can legally be suppressed was already argued once in this Court, in June 2008, and the Court concluded then that it should be released; I will discuss this below. But as a matter of basic policy–of running a democracy–the public and legislators who want to know the basic facts about the reliability of their elections need to be able to read reports such as this one. Members of the New Jersey Legislature–who need to act now because the NJ Secretary of State is not in compliance with laws the legislature passed in 2005–have asked to read this report, but they are precluded by the Court’s order. Members of the public must decide now, in time to request an absentee ballot, whether to cast their ballot by absentee (counted by optical scan) or to vote on paperless DRE voting machines. Citizens also need information so that they can communicate to their legislators their opinions about how New Jersey should conduct elections. Even the Governor and the Secretary of State of New Jersey are not permitted, by the Court’s order, to read this report in order to inform their policy making.

Examination of the AVC Advantage. In the spring of 2008, Judge Linda Feinberg ordered the defendants (officials of the State of New Jersey) to provide to the plaintiffs: (a) Sequoia AVC Advantage voting machines, (b) the source code to those voting machines, and (c) other specified information. The Sequoia Voting Systems company, which had not been a party to the lawsuit, objected to the examination of its source code by the plaintiffs’ experts, on the grounds that the source code contained trade secrets. The Court recognized that concern, and crafted a Protective Order that permitted the plaintiffs’ experts to examine the source code while protecting the trade secrets within it. However, the Court Order, issued by Judge Feinberg on June 20, does permit the plaintiffs’ experts to release this report to the public at a specified time (which has now arrived). In fact, the clause of this Order that permits the release of the report was the subject of lengthy legal argument in May-June 2008, and the plaintiffs’ experts were not willing to examine the AVC Advantage machines under conditions that would prevent public discussion of their findings.

I served as the plaintiffs’ expert witness and led an examination team including myself and 5 other computer scientists (Maia Ginsburg, Harri Hursti, Brian Kernighan, Chris Richards, and Gang Tan). We examined the voting machines and source code during July-August 2008. On September 2nd we provided to the Court (and to the defendants and to Sequoia) a lengthy report concerning the accuracy and security of the Sequoia AVC Advantage. The terms of the Court’s Protective Order of June 20 permit us to release the report today, October 2nd.

However, on September 24 Judge Feinberg, “with great reluctance,” orally ordered the plaintiffs not to release the report on October 2nd, and not to publicly discuss their conclusions from the study. She did so after the attorney for Sequoia grossly mischaracterized our report. In order to respect the Judge’s temporary stay, I cannot now comment further on what the report does contain.

The plaintiffs are deeply troubled by the Court’s issuance of what is essentially a temporary restraining order restricting speech, without any motion or briefing whatsoever. Issuing such an order is an extreme measure, which should be done only in rare circumstances, and only if the moving party has satisfied its high burden of showing both imminent harm and likelihood of success on the merits. Those two requirements have not been satisfied, nor can they be. The plaintiffs have asked the Court to reconsider its decision to suppress our report. The Court will likely hear arguments on this issue sometime in October. We hope and expect that the Court will soon permit publication of our report.