
Archives for 2008

Counting Electronic Votes in Secret

Things are not looking good for open government when it comes to observing poll workers on Election Night. Our state election laws, written for the old lever machines, now apply to Sequoia electronic voting machines. Andrew Appel and I have been asking a straightforward question: Can ordinary members of the public watch the procedures used by poll workers to count the votes?

I submitted a formal request to the Board of Elections of Mercer County (where Princeton University is located), seeking permission to watch the poll workers when they close the polls (on Sequoia AVC Advantage voting computers) and announce the results. They said no!

The Election Board said this election is “too important” to permit extra people in the polling place.

They even went so far as to suggest that my written application was fraudulent. I applied on behalf of five people: two Princeton University students, two professors, and myself. Out of an abundance of caution, I requested authorization in the form of “challenger badges,” which the Board of Elections can issue at its discretion. By phone, I explained our interest in merely watching the poll workers.

Of course we understand that they might not want extra people getting in the way on Election Night — that’s why we took measures to get special authorization. To ensure that we could be lawfully present, we asked for challenger badges as non-partisan proponents and opponents of two Public Questions on the ballot, as permitted by NJSA 19:7-2. My request was entirely in compliance with state law, as all the prospective challengers are registered to vote in Mercer County.

In spite of this, the Board expressed reluctance, based on the identities of the prospective challengers. In particular, they cited Andrew’s status as an expert on Sequoia voting machines as a “concern,” and provided assurances that Sequoia has fixed all the problems he identified in past elections.

Other counties in New Jersey permit members of the public to watch the poll workers “read” the election results. Combined with Judge Feinberg’s decision to suppress Andrew’s report on the security of the Sequoia machines, Mercer County conveys the unfortunate impression that it does not welcome scrutiny of its electronic voting process.

Piracy Statistics and the Importance of Journalistic Skepticism

If you’ve paid attention to copyright debates in recent years, you’ve probably seen advocates for more restrictive copyright laws claim that “counterfeiting and piracy” cost the US economy as much as $250 billion. When pressed, those who make these kinds of claims are inevitably vague about exactly where these figures come from. For example, I contacted Thomas Sydnor, the author of the paper I linked above, and he was able to point me to a 2002 press release from the FBI, which claims that “losses to counterfeiting are estimated at $200-250 billion a year in U.S. business losses.”

There are a couple of things that are notable about this. In the first place, notice that the press release says counterfeiting, which is an entirely different issue from copyright infringement. Passing stronger copyright legislation in order to stop counterfeiting is a non sequitur.

But the more serious issue is that the FBI can’t actually explain how it arrived at these figures. And indeed, it appears that nobody knows who came up with these figures or how they were computed. Julian Sanchez has done some sleuthing and found that these figures have been floating around inside the Beltway for decades. Julian contacted the FBI, which wasn’t able to point to any specific source. Further investigation led him to a 1993 Forbes article:

Ars eagerly hunted down that issue and found a short article on counterfeiting, in which the reader is informed that “counterfeit merchandise” is “a $200 billion enterprise worldwide and growing faster than many of the industries it’s preying on.” No further source is given.

Quite possibly, the authors of the article called up an industry group like the IACC and got a ballpark guess. At any rate, there is nothing to indicate that Forbes itself had produced the estimate, Mr. Conyers’ assertion notwithstanding. What is very clear, however, is that even assuming the figure is accurate, it is not an estimate of the cost to the U.S. economy of IP piracy. It’s an estimate of the size of the entire global market in counterfeit goods. Despite the efforts of several witnesses to equate them, it is plainly not on par with the earlier calculation by the ITC that many had also cited.

It’s not surprising that no one is able to cite a credible source, because the figure is plainly absurd. For example, the Institute for Policy Innovation, a group that pushes for more restrictive copyright law, has claimed that copyright infringement costs the economy $58.0 billion. As I’ve written before, these estimates vastly overstate losses because IPI used a dubious methodology that double- and triple-counts each lost sale. The actual figure, even accepting some of the dubious assumptions in the IPI estimate, is almost certainly less than $20 billion. But whether it’s $10, $20, or $58 billion, it’s certainly not $250 billion.
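To see how double- and triple-counting inflates a loss estimate, here is a minimal sketch with made-up numbers (these are illustrative dollar amounts, not IPI’s actual figures or methodology). The trick is that a product’s retail price already contains the value added at every earlier stage of the supply chain, so summing each stage’s gross revenue as a separate “loss” counts the upstream value several times over:

```python
# Illustrative only: hypothetical per-copy dollar amounts, not IPI's data.
production_value = 10.0   # value added by the producer
wholesale_markup = 5.0    # value added by the distributor
retail_markup = 10.0      # value added by the retailer
retail_price = production_value + wholesale_markup + retail_markup  # 25.0

# Sensible accounting: one displaced sale costs the economy at most the
# final retail price, once.
true_loss_per_copy = retail_price

# Double/triple counting: treating each stage's *gross revenue* as its own
# separate loss re-counts the upstream value embedded in every later stage.
inflated_loss_per_copy = (
    production_value                                          # producer revenue
    + (production_value + wholesale_markup)                   # wholesale revenue
    + (production_value + wholesale_markup + retail_markup)   # retail revenue
)

print(true_loss_per_copy)       # 25.0
print(inflated_loss_per_copy)   # 50.0, double the real per-copy loss
```

With these toy numbers the methodology exactly doubles the true loss; the overstatement factor depends on how the markups are split, but it is always greater than one.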

There are a couple of important lessons here. One concerns the importance of careful scholarship. Before citing any statistic, you should have a clear understanding of what that figure is measuring, who calculated it, and how. The fact that this figure has been repeated so many times inside the Beltway suggests that the people using it have not been doing their homework. It’s not surprising that lobbyists cite the largest figures they can find, but public servants have a duty to be more skeptical.

The more important lesson is for the journalistic profession. Far too many reporters at reputable media outlets credulously repeat these figures in news stories without paying enough attention to where they come from. If a statistic is provided by a party with a vested interest in the subject of a story—if, say, a content industry group provides a statistic on the costs of piracy—reporters should double-check that figure against more reputable sources. And, sadly, a government agency isn’t always a reliable source. Agencies like the BLS and BEA that are in the business of collecting official statistics are generally reliable. But it’s not safe to assume that other agencies have done their homework. The FBI, for example, has made little effort to correct the record on the $250 billion figure, despite the fact that it is regularly cited as the source of the figure and has admitted that it can’t explain where the figure comes from.

Julian gives all the gory details on the origins of the $250 billion figure. He also digs into the oft-repeated claim that piracy costs 750,000 jobs, which dates back even further (to 1986) and is no more credible. And he offers some interesting theoretical reasons to think that the costs of copyright infringement are much, much less than $250 billion.

Lessons from the Fall of NebuAd

With three Congressional hearings held within the past four months, U.S. legislators have expressed increased concern about the handling of private online information. As Paul Ohm mentioned yesterday, the recent scrutiny has focused mainly on the ability of ISPs to intercept and analyze the online traffic of their users—in a word, surveillance. For ISPs, one goal of surveillance is to yield new sources of revenue; so when a Silicon Valley startup called NebuAd approached ISPs last spring with its behavioral advertising technology, many were quick to sign on. But by summer’s end, the company had lost all of its ISP partners, its CEO had resigned, and it announced its intention to pursue “more traditional” advertising channels.

How did this happen and what can we learn from this episode?

The trio of high-profile hearings in Congress brought the issue of ISP surveillance into the public spotlight. Despite no new privacy legislation even being proposed in the area, the firm sentiment among the Committees’ members, particularly Rep. Edward “When did you stop beating the consumer?” Markey (D-MA), was enough to spawn more negative PR than the partner ISPs could handle. The lesson here, as it often is, is that regulation is not the only way, and rarely even the best way, of dealing with bad actors, especially in highly innovative sectors like Internet technology. Proposed regulation of third-party online advertising by the New York State Assembly last year, for example, would have placed an undue compliance burden on legitimate online businesses while providing few tangible privacy benefits. Proponents of net neutrality legislation may want to heed this episode as a cautionary tale, especially in light of Comcast’s recent shift to more reasonable traffic management techniques.

Behind the scenes, the work of investigative technologists was key in substantiating the extent of consumer harm that, I presume, caught the eye of Congress members and their staffers. A damaging report by technologist Robb Topolski, published a month before the first hearing, exposed many of NebuAd’s most egregious practices, such as IP packet forgery. Such technical efforts are critical in unveiling opaque consumer harms that may be difficult for lay users to detect themselves. To return to net neutrality, ISP monitoring projects such as EFF’s Switzerland testing tool and others will be essential in keeping network management practices in check. (Incidentally and perhaps not coincidentally, Topolski was also the first to reveal Comcast’s use of TCP reset packets to kill BitTorrent connections.)

ISPs and other online service providers are pushing for industry self-regulation in behavioral advertising, but it is not at all clear whether self-regulation will be sufficient to protect consumer privacy. Indeed, even the FTC favors self-regulatory principles, but the question of what “opt-in” actually means will determine the extent of consumer protection. Self-regulation seems unlikely in any case to protect consumers from unwittingly “opting-in” to traffic monitoring. ISPs have a monetary incentive to enroll their customers into monitoring and standard tricks will probably get the job done. We all have experience signing fine-print contracts without reading them, clicking blindly through browser-based security warnings, or otherwise sacrificing our privacy for trivial rewards and discounts (or even just a bar of chocolate).

Interestingly enough, a parallel fight is being waged in Europe over the exact same issue but with starkly contrasting results. Although Phorm develops online surveillance technologies for targeted advertising similar to NebuAd’s, a UK regulator recently declared that Phorm’s technologies could be introduced “in a lawful, appropriate and transparent fashion” given “the knowledge and agreement of the customer.” As a result, Phorm has continued its trials of its Internet surveillance technology on British Telecom subscribers.

Why these two storylines have diverged so significantly is not apparent to me. One thought is that Phorm got itself in front of the issue of business legitimacy—whereas U.S. regulators saw NebuAd as a rogue business from the start, Phorm has been an active participant on the IAB’s Behavioural Advertising Task Force to develop industry best practices. Another thought is that the fight over Phorm is far from over, since the European Commission is continuing its own investigation under EU laws. I hope readers here, who are more informed than I am about the regulatory landscape in the EU and UK, can provide additional hypotheses about why Phorm has, thus far, not suffered the same fate as NebuAd.

Opting In (or Out) is Hard to Do

Thanks to Ed and his fellow bloggers for welcoming me to the blog. I’m thrilled to have this opportunity, because as a law professor who writes about software as a regulator of behavior (most often through the substantive lenses of information privacy, computer crime, and criminal procedure), I often need to vet my theories and test my technical understanding with computer scientists and other techies, and this will be a great place to do it.

This past summer, I wrote an article (available for download online) about ISP surveillance, arguing that recent moves by NebuAd/Charter, Phorm, AT&T, and Comcast augur a coming wave of unprecedented, invasive deep-packet inspection. I won’t reargue the entire paper here (the thesis is no doubt much less surprising to the average Freedom to Tinker reader than to the average lawyer) but you can read two bloggy summaries I wrote here and here or listen to a summary I gave in a radio interview. (For summaries by others, see [1] [2] [3] [4]).

Two weeks ago, Verizon and AT&T told Congress that they would monitor for marketing purposes only users who had opted in. According to Verizon VP Tom Tauke, “[B]efore a company captures certain Internet-usage data for targeted or customized advertising purposes, it should obtain meaningful, affirmative consent from consumers.”

I applaud this announcement, but I’m curious how the ISPs will implement this promise. It seems like there are two architectural puzzles here: how does the user convey consent, and how does the provider distinguish between the packets of consenting and nonconsenting users? For an ISP, neither step is nearly as straightforward as it is for a web provider like Google, which can simply set and check cookies. For the first piece, I suppose a user can click a check box on a web-based form or respond to an e-mail, letting the ISP know he would like to opt in. These solutions seem clumsy, however, and ISPs probably want a system that is as seamless and easy to use as possible, to maximize the number of people opting in.

Once ISPs have a “white list” of users who have opted in, how do they turn this into on-the-fly discretionary packet sniffing? Do they map white-listed users to IP addresses and add these to a filter, or is there a risk that things will get out of sync during DHCP lease renewals? Can they use cookies, perhaps redirecting every HTTP session to an ISP-run web server first using HTTP 301 status codes? (This seems to be the way Phorm implements opt-out, according to Richard Clayton’s illuminating analysis.) Do any of these solutions scale for an ISP with hundreds of thousands of users?
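The IP-mapping approach and its synchronization hazard can be sketched in a few lines. This is a toy model, not any ISP’s actual system; the account names, addresses, and the assumption that the DHCP server reports every lease event to the filter are all hypothetical:

```python
# Minimal sketch of opt-in filtering keyed by IP address (hypothetical).
# Assumes the DHCP server notifies the filter on every lease grant/renewal,
# so the IP -> account mapping stays in sync with address reassignments.

opted_in_accounts = {"alice"}   # accounts that affirmatively opted in
ip_to_account = {}              # maintained from DHCP lease events

def on_dhcp_lease(ip, account):
    """Record which account currently holds this IP address.

    Overwriting the old binding is essential: if a stale entry lingers
    after a lease expires, traffic from the address's new holder would be
    judged under the previous customer's consent state.
    """
    ip_to_account[ip] = account

def may_monitor(packet_src_ip):
    """Sniff a packet only if its source IP maps to a consenting account."""
    account = ip_to_account.get(packet_src_ip)
    return account is not None and account in opted_in_accounts

on_dhcp_lease("10.0.0.7", "alice")
print(may_monitor("10.0.0.7"))    # alice opted in: monitoring allowed
on_dhcp_lease("10.0.0.7", "bob")  # lease turns over; bob gets the address
print(may_monitor("10.0.0.7"))    # bob did not opt in: monitoring blocked
```

The sketch makes the failure mode concrete: everything hinges on the lease-event feed. Miss one renewal message and the filter silently monitors a non-consenting customer, which is exactly the out-of-sync risk raised above.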

And are things any easier if the ISP adopts an opt-out system instead?

Satellite Piracy, Mod Chips, and the Freedom to Tinker

Tom Lee makes an interesting point about the satellite case I wrote about on Saturday: the problem facing EchoStar and other satellite manufacturers is strikingly similar to the challenges that have been faced for many years by video game console manufacturers. There’s a grey market in “mod chips” for video game consoles. Typically, they’re sold in a form that only allows them to be used for legitimate purposes. But many users purchase the mod chips and then immediately download new software that allows them to play illicit copies of copyrighted video games. It’s unclear exactly how the DMCA applies in this kind of case.

But as Tom notes, this dilemma is likely to get more common over time. As hardware gets cheaper and more powerful, companies are increasingly going to build their products using off-the-shelf hardware and custom software. And that will mean that increasingly, the hardware needed to do legitimate reverse engineering will be identical to the hardware needed to circumvent copy protection. The only way to prevent people from getting their hands on “circumvention devices” will be to prevent them from purchasing any hardware capable of interoperating with a manufacturer’s product without its permission.

Policymakers, then, face a fundamental choice. We can have a society in which reverse engineering for legitimate purposes is permitted, at the cost of some amount of illicit circumvention of copy protection schemes. Or we can have a society in which any unauthorized tinkering with copy-protected technologies is presumptively illegal. This latter position has the consequence of making copy protection more than just a copyright enforcement device (and a lousy one at that). It gives platform designers de facto control over who may build hardware devices that interoperate with their own. Thus far, Congress and the courts have chosen this latter option. You can probably infer from this blog’s title where many of its contributors stand.