
Archives for September 2009


Breaking Vanish: A Story of Security Research in Action

Today, seven colleagues and I released a new paper, “Defeating Vanish with Low-Cost Sybil Attacks Against Large DHTs”. The paper’s authors are Scott Wolchok (Michigan), Owen Hofmann (Texas), Nadia Heninger (Princeton), me, Alex Halderman (Michigan), Christopher Rossbach (Texas), Brent Waters (Texas), and Emmett Witchel (Texas).

Our paper is the next chapter in an interesting story about the making, breaking, and possible fixing of security systems.

The story started with a system called Vanish, designed by a team at the University of Washington (Roxana Geambasu, Yoshi Kohno, Amit Levy, and Hank Levy). Vanish tries to provide “vanishing data objects” (VDOs) that can be created at any time but will only be usable within a short time window (typically eight hours) after their creation. This is an unusual kind of security guarantee: the VDO can be read by anybody who sees it in the first eight hours, but after that period expires the VDO is supposed to be unrecoverable.

Vanish uses a clever design to do this. It takes your data and encrypts it, using a fresh random encryption key. It then splits the key into shares, so that a quorum of shares (say, seven out of ten) is required to reconstruct the key. It takes the shares and stores them at random locations in a giant worldwide system called the Vuze DHT. The Vuze DHT throws away items after eight hours. After that the shares are gone, so the key cannot be reconstructed, so the VDO cannot be decrypted — at least in theory.
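For readers who want the recipe in concrete terms, here is a minimal sketch in Python. It is my own simplification, not the actual Vanish code: the 7-of-10 parameters, the prime field, and the dictionary standing in for the DHT are illustrative choices, a real implementation would draw randomness from a secure source, and the modular-inverse call needs Python 3.8 or later.

    # Sketch of the Vanish recipe (illustration only): generate a fresh random
    # key, split it with a threshold scheme, and scatter the shares across a
    # store whose entries expire. Encrypting the data under the key is omitted.
    import os, random, hashlib

    PRIME = 2**127 - 1   # field large enough to hold a 15-byte key

    def split_secret(secret, n, k):
        """Shamir-style split: any k of the n shares recover the secret."""
        # (a real implementation would draw coefficients from a secure RNG)
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        def poly(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover_secret(shares):
        """Lagrange interpolation at x = 0 recovers the constant term."""
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    key = int.from_bytes(os.urandom(15), "big")    # fresh random key, below PRIME
    shares = split_secret(key, n=10, k=7)          # 7-of-10 quorum
    # "Store" each share under a pseudorandom location, as a stand-in for the DHT.
    dht = {hashlib.sha1(repr(s).encode()).hexdigest(): s for s in shares}

    assert recover_secret(shares[:7]) == key       # any 7 shares reconstruct the key

In the real system the data itself is encrypted under the key before the key is discarded, and the shares are pushed to pseudorandom locations in the Vuze DHT rather than into a local dictionary; once the DHT expires the shares, no quorum can ever be reassembled.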

What is this Vuze DHT? It’s a worldwide peer-to-peer network, containing a million or so computers, that was set up by Vuze, a company that uses the BitTorrent protocol to distribute (licensed) video content. Vuze needs a giant data store for its own purposes, to help peers find the videos they want, and this data store happens to be open so that Vanish can use it. The million-computer extent of the Vuze data store was important, because it gave the Vanish designers a big haystack in which to hide their needles.

Vanish debuted on July 20 with a splashy New York Times article. Reading the article, Alex Halderman and I realized that some of our past thinking about how to extract information from large distributed data structures might be applied to attack Vanish. Alex’s student Scott Wolchok grabbed the project and started doing experiments to see how much information could be extracted from the Vuze DHT. If we could monitor Vuze and continuously record almost all of its contents, then we could build a Wayback Machine for Vuze that would let us decrypt VDOs that were supposedly expired, thereby defeating Vanish’s security guarantees.
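To see why cheap identities threaten this design, consider a toy model (entirely my own, with made-up parameters, and nothing like the real Vuze protocol): if each stored value is replicated to the node IDs closest to its key, an attacker who controls even a modest fraction of identities observes, and can archive, most of what passes through the DHT.

    # Toy model of building a "Wayback Machine" for a DHT with Sybil nodes.
    # All parameters are invented; this is not the Vuze protocol.
    import random

    NODE_BITS, HONEST, SYBIL, REPLICAS, STORES = 32, 2000, 200, 20, 2000

    honest = {random.getrandbits(NODE_BITS) for _ in range(HONEST)}
    sybils = {random.getrandbits(NODE_BITS) for _ in range(SYBIL)}
    nodes = list(honest | sybils)

    def holders(key, r):
        # Kademlia-style XOR distance decides which nodes store a value.
        return sorted(nodes, key=lambda n: n ^ key)[:r]

    archive = []   # everything the attacker's nodes ever see, kept forever
    for _ in range(STORES):
        key = random.getrandbits(NODE_BITS)
        if any(node in sybils for node in holders(key, REPLICAS)):
            archive.append(key)

    print(f"Sybils are {SYBIL / (HONEST + SYBIL):.0%} of node IDs "
          f"but observed {len(archive) / STORES:.0%} of stored values")

With these invented numbers, Sybil identities amounting to less than a tenth of the network typically see well over three quarters of stored values. The real attack had to cope with many Vuze protocol details that this toy ignores, but the underlying economics are the same: identities are cheap, so observation is cheap.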

Scott’s experiments progressed rapidly, and by early August we were pretty sure that we were close to demonstrating a break of Vanish. The Vanish authors were due to present their work in a few days, at the Usenix Security conference in Montreal, and we hoped to demonstrate a break by then. The question was whether Scott’s already heroic sleep-deprived experimental odyssey would reach its destination in time.

We didn’t want to ambush the Vanish authors with our break, so we took them aside at the conference and told them about our preliminary results. This led to some interesting technical discussions with the Vanish team about technical details of Vuze and Vanish, and about some alternative designs for Vuze and Vanish that might better resist attacks. We agreed to keep them up to date on any new results, so they could address the issue in their talk.

As it turned out, we didn’t establish a break before the Vanish team’s conference presentation, so they did not have to modify their presentation much, and Scott finally got to catch up on his sleep. Later, we realized that evidence to establish a break had actually been in our experimental logs before the Vanish talk, but we hadn’t been clever enough to spot it at the time. Science is hard.

Some time later, I ran into my ex-student Brent Waters, who is now on the faculty at the University of Texas. I mentioned to Brent that Scott, Alex, and I had been studying attacks on Vanish and we thought we were pretty close to making an attack work. Amazingly, Brent and some Texas colleagues (Owen Hofmann, Christopher Rossbach, and Emmett Witchel) had also been studying Vanish and had independently devised attacks that were pretty similar to what Scott, Alex, and I had.

We decided that it made sense to join up with the Texas team, work together on finishing and testing the attacks, and then write a joint paper. Nadia Heninger at Princeton did some valuable modeling to help us understand our experimental results, so we added her to the team.

Today we are releasing our joint paper. It describes our attacks and demonstrates that the attacks do indeed defeat Vanish. We have a working system that can decrypt Vanishing data objects (made with the original version of Vanish) after they are supposedly unrecoverable.

Our paper also discusses what went wrong in the original Vanish design. The people who designed Vanish are smart and experienced, but they obviously made some kind of mistake in their original work that led them to believe that Vanish was secure — a belief that we now know is incorrect. Our paper talks about where we think the Vanish authors went wrong, and what security practitioners can learn from the Vanish experience so far.

Meanwhile, the Vanish authors went back to the drawing board and came up with a bunch of improvements to Vanish and Vuze that make our attacks much more expensive. They wrote their own paper about their experience with Vanish and their new modifications to it.

Where does this leave us?

For now, Vanish should be considered too risky to rely on. The standard for security is not “no currently demonstrated attacks”, it is “strong evidence that the system resists all reasonable attacks”. By updating Vanish to resist our attacks, the Vanish authors showed that their system is not a dead letter. But in my view they are still some distance from showing that Vanish is secure. Given the complexity of underlying technologies such as Vuze, I wouldn’t be surprised if more attacks turn out to be possible. The latest version of Vanish might turn out to be sound, or to be unsound, or the whole approach might turn out to be flawed. It’s too early to tell.

Vanish is an interesting approach to a real problem. Whether this approach will turn out to work is still an open question. It’s good to explore this question — and I’m glad that the Vanish authors and others are doing so. At this point, Vanish is of real scientific interest, but I wouldn’t rely on it to secure my data.

[Update (Sept. 30, 2009): I rewrote the paragraphs describing our discussions with the Vanish team at the conference. The original version may have given the wrong impression about our intentions.]


Android Open Source Model Has a Short Circuit

[Update: Google subsequently worked out a mechanism that allows Cyanogen and others to distribute their mods separate from the Google Apps.]

Last year, Google entered the mobile phone market with a Linux-based mobile operating system. The company brought together device manufacturers and carriers in the Open Handset Alliance, explaining that, “Together we have developed Android™, the first complete, open, and free mobile platform.” There has been considerable engagement from the open source developer community, as well as significant uptake from consumers. Android may have even been instrumental in motivating competing open platforms like LiMo. In addition to the underlying open source operating system, Google chose to package essential (but proprietary) applications with Android-based handsets. These applications include most of the things that make the handsets useful (including basic functions to sync with the data network). This two-tier system of rights has created a minor controversy.

A group of smart open source developers created a modified version of the Android+Apps package, called Cyanogen. It incorporated many useful and performance-enhancing updates to the Android OS, and included unchanged versions of the proprietary Apps. If Cyanogen hadn’t included the Apps, the package would have been essentially useless, given that Google doesn’t appear to provide a means to install the Apps on a device that has only a basic OS. As Cyanogen gained popularity, Google decided that it could no longer watch the project distribute their copyright-protected works. The lawyers at Google decided that they needed to send a Cease & Desist letter to the Cyanogen developer, which caused him to take the files off of his site and spurred backlash from the developer community.

Android represents a careful balance on the part of Google, in which the company seeks to foster open platforms but maintain control over its proprietary (but free) services. Google has stated as much, in response to the current debate. Android is an exciting alternative to the largely closed-source model that has dominated the mobile market to date. Google closely integrated their Apps with the operating system in a way that makes for a tremendously useful platform, but in doing so hampered the ability of third-party developers to fully contribute to the system. Perhaps the problem is simply that they did not choose the right location to draw the line between open vs. closed source — or free-to-distribute vs. not.

The latter distinction might offer a way out of the conundrum. Google could certainly grant blanket rights to third-parties to redistribute unchanged versions of their Apps. This might compromise their ability to make certain business arrangements with carriers or handset providers in which they package the software for a fee. That may or may not be worth it from their business perspective, but they could have trouble making the claim that Android is a “complete, open, and free mobile platform” if they don’t find a way to make it work for developers.

This all takes place in the context of a larger debate over the extent to which mobile platforms should be open — voluntarily or via regulatory mandate. Google and Apple have been arguing via letters to the FCC about whether or not Apple should allow the Google Voice application in the iPhone App Store. However, it is yet to be determined whether the Commission has the jurisdiction and political will to do anything about the issue. There is a fascinating sideshow in that particular dispute, in which AT&T has made the very novel claim that Google Voice violates network neutrality (well, either that or common carriage — they’ll take whichever argument they can win). Google has replied. This is a topic for another day, but suffice to say the clear regulatory distinctions between telephone networks, broadband, and devices have become muddied.

(Cross-posted to Managing Miracles)


The Markey Net Neutrality Bill: Least Restrictive Network Management?

It’s an exciting time in the net neutrality debate. FCC Chairman Julius Genachowski’s speech on Monday promised a new FCC proceeding that will aim to create a formal rule to replace the Commission’s existing policy statement.

Meanwhile, net neutrality advocates in Congress are pondering new legislation for two reasons: First, there is a debate about whether the FCC currently has enough authority to enforce a net neutrality rule. Second, regardless of whether the Commission has such authority today or doesn’t, some would rather see net neutrality rules etched into statute than leave them to the uncertainties of the rulemaking process under this and future Commissions.

One legislative proposal comes from Rep. Ed Markey and colleagues. Called the Internet Freedom Preservation Act of 2009, its current draft is available on the Free Press web site.

I favor the broad goals that motivate this bill — an Internet that remains friendly to innovation and broadly available. But I personally believe the current draft of this bill would be a mistake, because it embodies a very optimistic view of the FCC’s ability to wield regulatory authority and avoid regulatory capture, not only under the current administration but also over the long-run future. It puts a huge amount of statutory weight behind the vague-till-now idea of “reasonable network management” — something that the FCC’s policy statement (and many participants in the debate) have said ISPs should be permitted to do, but whose meaning remains unsettled. Indeed, Ed raised questions back in 2006 about just how hard it might be to decide what this phrase should mean.

The section of the Markey bill that would be labeled as section 12 (d) in statute says that a network management practice

. . . is a reasonable practice only if it furthers a critically important interest, is narrowly tailored to further that interest, and is the means of furthering that interest that is the least restrictive, least discriminatory, and least constricting of consumer choice available.

This language — particularly the trio of “leasts” — puts the FCC in a position to intervene if, in the Commission’s judgment, any alternative course of action would have been better for consumers than the one an ISP actually took. Normally, to call something “reasonable” means that it is within the broad range of possibilities that might make sense to an imagined “reasonable person.” This bill’s definition of “reasonable” is very different, since on its terms there is no scope for discretion within reasonableness — the single best option is the only one deemed reasonable by the statute.

The bill’s language may sound familiar — it is a modified form of the judicial “strict scrutiny” standard the courts use to review government action when the state uses a suspect classification (such as race) or burdens a fundamental right (such as free speech in certain contexts). In those cases, the question is whether or not a “compelling governmental interest” justifies the policy under review. Here, however, it’s not totally clear whose interest, in what, must be compelling in order for a given network management practice to count as reasonable. We are discussing the actions of ISPs, who are generally public companies. Do their interests in profit maximization count as compelling? Shareholders certainly think so. What about their interests in R&D? Or does the statute mean to single out the public’s interest in the general goods outlined in section 12 (a), such as “protect[ing] the open and interconnected nature of broadband networks”?

I fear the bill would spur a food fight among ISPs, each of whom could complain about what the others were doing. Such a battle would raise the probability that those ISPs with the most effective lobbying shops will prevail over those with the most attractive offerings for consumers, if and when the two diverge.

Why use the phrase “reasonable network management” to describe this exacting standard? I think the most likely answer is simply that many participants in the net neutrality debate use the phrase as a shorthand term for whatever should be allowed — so that “reasonable” turns out to mean “permitted.”

There is also an interesting secondary conversation to be had here about whether it’s smart to bar in statute, as the Markey bill would, “. . .any offering that. . . prioritizes traffic over that of other such providers,” which could be read to bar evenhanded offers of prioritized packet routing to any customer who wants to pay a premium, something many net neutrality advocates (including, e.g., Prof. Lessig) have said they think is fine.

My bottom line is that we ought to speak clearly. It might or might not make sense to let the FCC intervene whenever it finds ISPs’ network management to be less than perfect (I think it would not, but recognize the question is debatable). But whatever its merits, a standard like that — removing ISP discretion — deserves a name of its own. Perhaps “least restrictive network management”?

Cross-posted at the Yale ISP Blog.


Netflix's Impending (But Still Avoidable) Multi-Million Dollar Privacy Blunder

In my last post, I had promised to say more about my article on the limits of anonymization and the power of reidentification. Although I haven’t said anything for a few weeks, others have, and I especially appreciate posts by Susannah Fox, Seth Schoen, and Nate Anderson. Not only have these people summarized my article well, they have also added a lot of insightful commentary, and I commend these three posts to you.

Today brings news relating to one of the central examples in my paper: Netflix has announced plans to commit a privacy blunder that could cost it millions of dollars in fines and civil damages.

In my article, I focus on Netflix’s 2006 decision to release millions of records containing the movie rating preferences of “anonymized” users to the public, in order to fuel a crowd-sourcing competition called the Netflix Prize. The Netflix Prize has been a huge win for Netflix’s public relations, but it has also been a win for academics, who have used the data to improve the science of guessing human behavior from past preferences.

The Netflix Prize was also a watershed event for reidentification research because Arvind Narayanan and Vitaly Shmatikov of U. Texas revealed that they could reidentify some of the “anonymized” users with ease, proving that we are more uniquely tied to our movie rating preferences than intuition would suggest. In my paper, I argue that we should worry about this privacy breach even if we don’t think movie ratings are terribly sensitive, because it can be used to enable other, more terrifying privacy breaches.

I never argue, however, that Netflix deserves punishment or sanction for having released this data. In my opinion, Netflix acted pretty responsibly. It consulted with computer scientists in a (failed) attempt to anonymize successfully. It tried perturbing the data in order to make reidentification harder. And other experts seem to have been surprised by how easy it was for Narayanan and Shmatikov to reidentify. Even with the benefit of hindsight, I find nothing to blame in how Netflix handled the privacy implications of what it did.

Although I give Netflix a pass for its past privacy breach, I am astonished to learn from the New York Times that the company plans a second act:

The new contest is going to present the contestants with demographic and behavioral data, and they will be asked to model individuals’ “taste profiles,” the company said. The data set of more than 100 million entries will include information about renters’ ages, gender, ZIP codes, genre ratings and previously chosen movies. Unlike the first challenge, the contest will have no specific accuracy target. Instead, $500,000 will be awarded to the team in the lead after six months, and $500,000 to the leader after 18 months.

Netflix should cancel this new, irresponsible contest, which it has dubbed Netflix Prize 2. Researchers have known for more than a decade that gender plus ZIP code plus birthdate uniquely identifies a significant percentage of Americans (87%, according to Latanya Sweeney’s famous study). True, Netflix plans to release age, not birthdate, but simple arithmetic shows that for many people in the country, gender plus ZIP code plus age will narrow their private movie preferences down to at most a few hundred people. Netflix needs to understand the concept of “information entropy”: even if it is not revealing information tied to a single person, it is revealing information tied to so few that we should consider this a privacy breach.
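The simple arithmetic goes roughly like this (the population, ZIP code count, and age range are assumed round numbers, so treat the result as an order-of-magnitude estimate rather than a census figure):

    # Back-of-the-envelope anonymity-set estimate (assumed round numbers).
    us_population = 307_000_000   # approximate 2009 U.S. population
    zip_codes     = 43_000        # approximate number of ZIP codes
    genders       = 2
    ages          = 80            # distinct ages likely to appear in the data

    per_bucket = us_population / (zip_codes * genders * ages)
    print(f"about {per_bucket:.0f} people per (ZIP, gender, age) combination")
    # prints roughly 45; lightly populated ZIP codes will have far fewer

Averages also understate the risk: in a lightly populated ZIP code, a given age and gender may describe only one person.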

I have no doubt that researchers will be able to use the techniques of Narayanan and Shmatikov, together with databases revealing sex, zip code, and age, to tie many people directly to these supposedly anonymized new records.

Because of this, if it releases the data, Netflix might be breaking the law. The Video Privacy Protection Act (VPPA), 18 USC 2710, prohibits a “video tape service provider” (a broadly defined term) from revealing “personally identifiable information” about its customers. Aggrieved customers can sue providers under the VPPA and courts can order “not less than $2500” in damages for each violation. If somebody brings a class action lawsuit under this statute, Netflix might face millions of dollars in damages.

The FTC might also decide to fine Netflix for violating its privacy policy as an unfair business practice.

Either a lawsuit under the VPPA or an FTC investigation would turn, in large part, on one sentence in Netflix’s privacy policy: “We may also disclose and otherwise use, on an anonymous basis, movie ratings, consumption habits, commentary, reviews and other non-personal information about customers.” If sued or investigated, Netflix will surely argue that its acts are immunized by the policy, because the data is disclosed “on an anonymous basis.” While this argument might have carried the day in 2006, before Narayanan and Shmatikov conducted their study, the argument is much weaker in 2009, now that Netflix has many reasons to know better, including in part, my paper and the publicity surrounding it. A weak argument is made even weaker if Netflix includes the kind of data–ZIP code, age, and gender–that we have known for over a decade fails to anonymize.

The good news is Netflix has time to avoid this multi-million dollar privacy blunder. As far as I can tell, the Netflix Prize 2 has not yet been launched.

Dear Netflix executives: Don’t do this to your customers, and don’t do this to your shareholders. Cancel the Netflix Prize 2, while you still have the chance.


Improving the Government's User Interface

The White House’s attempts to gather input from citizens have hit some bumps, wrote Anand Giridharadas recently in the New York Times. This administration has done far more than its predecessors to let citizens provide input directly to government via the Internet, but they haven’t always received the input they expected. Giridharadas writes:

During the transition, the administration created an online “Citizen’s Briefing Book” for people to submit ideas to the president…. They received 44,000 proposals and 1.4 million votes for those proposals. The results were quietly published, but they were embarrassing…

In the middle of two wars and an economic meltdown, the highest-ranking idea was to legalize marijuana, an idea nearly twice as popular as repealing the Bush tax cuts on the wealthy. Legalizing online poker topped the technology ideas, twice as popular as nationwide wi-fi. Revoking the Church of Scientology’s tax-exempt status garnered three times more votes than raising funding for childhood cancer.

Once in power, the White House crowdsourced again. In March, its Office of Science and Technology Policy hosted an online “brainstorm” about making government more transparent. Good ideas came; but a stunning number had no connection to transparency, with many calls for marijuana legalization and a raging (and groundless) debate about the authenticity of President Obama’s birth certificate.

It’s obvious what happened: relatively small groups of highly motivated people visited the site, and their input outweighed the discussion of more pressing national issues. This is not a new phenomenon — there’s a long history of organized groups sending letters out of proportion with their numbers.

Now, these groups obviously have the right to speak, and the fact that some groups proved to be better organized and motivated than others is useful information for policymakers to have. But if that is all that policymakers learn, we have lost an important opportunity. Government needs to hear from these groups, but it needs to hear from the rest of the public too.

It’s tempting to decide that this is inevitable, and that online harvesting of public opinion will have little value. But I think that goes too far.

What the administration’s experience teaches, I think, is that measuring public opinion online is difficult, and that the most obvious measurement methods can run into trouble. Instead of giving up, the best response is to think harder about how to gather information and how to analyze the information that is available. What works for a small, organized group, or even a political campaign, won’t necessarily work for the United States as a whole. What we need are new interfaces, new analysis methods, and experiments to reveal what tends to work.

Designing user interfaces is almost always harder than it looks. Designing the user interface of government is an enormous challenge, but getting it right can yield enormous benefits.


NY Times Should Report on NY Times Ad Malware

Yesterday morning, while reading the New York Times online, I was confronted with an attempted security attack, apparently delivered through an advertisement. A window popped up, mimicking an antivirus scanner. After “scanning” my computer, it reported finding viruses and invited me to download a free antivirus scanner. The displays implied, without quite saying so, that the messages came from my antivirus vendor and that the download would come from there too. Knowing how these things work, I recognized it right away as an attack, probably carried by an ad. So I didn’t click on anything, and I’m fairly certain my computer wasn’t infected.

I wasn’t the only person who saw this attack. The Times posted a brief note on its site yesterday, and followed up today with a longer blog post.

What is interesting about the Times’s response is that it consists of security warnings, rather than journalism. Security warnings are good as far as they go; the Times owed that much to its users, at least. But it’s also newsworthy that a major, respected news site was facilitating cybercrime, even unintentionally. Somebody should report on this story — and who better than the Times itself?

It’s probably an interesting story, involving the ugly underside of the online ad business. Most likely, ad space in the Times was sold and, presumably, resold to an actual attacker; or a legitimate ad placement service was penetrated. Either way, other people are at risk of the same attack. Even better, the story opens up issues such as the difficulties of securing the web, what vendors are doing to improve matters, what the bad guys are trying to achieve, and what happens to the victims.

An enterprising technology reporter might find a fascinating story here — and it’s right under the noses of the Times staff. Let’s hope they jump on it.

UPDATE (Sept. 15): As Barry points out in the comments below, the Times wrote a good article the day after this post appeared. It turns out that the booby-trapped ad was not sold through an ad network, as one might have expected. Instead, the ad space was sold directly by the Times, to a party who was pretending to be Vonage. The perpetrators ran Vonage ads for a while, then switched over to serving the malicious ads.


Finnish Court Orders Re-Vote After E-Voting Snafu

The Supreme Administrative Court of Finland has ruled that three municipal elections, the first in Finland to use electronic voting, must be redone because of voting machine problems. (English summary; ruling in Finnish)

The troubles started with a usability problem, which caused 232 voters (about 2%) to leave the voting booth without fully casting their ballots. Electronic Frontiers Finland explains what went wrong:

It seems that the system required the voter to insert a smart card to identify the voter, type in their selected candidate number, then press “ok”, check the candidate details on the screen, and then press “ok” again. Some voters did not press “ok” for the second time, but instead removed their smart card from the voting terminal prematurely, causing their ballots not to be cast.

This usability issue was exacerbated by Ministry of Justice instructions, which specifically said that in order to cancel the voting process, the user should click on “cancel” and after that, remove the smart card. Thus, some voters did not realise that their vote had not been registered.

If you want to see what this looks like for a voter, check out the online demo of the voting process, from the Finnish Ministry of Justice (in English).

Well-designed voting systems tend to have a prominent, clearly labeled control or action that the voter uses to officially cast his or her vote. This might be a big red “CAST VOTE” button. The Finnish system mistakenly reused the same “OK” button from earlier in the process, making voter mistakes more likely. Adding to the problem, the voter’s smart card was protruding from the front of the machine, making it all too easy for a voter to grab the card and walk away.

No voting machine can stop a “fleeing voter” scenario, where a voter simply walks away during the voting process (we conventionally say “fleeing” even if the voter leaves by mistake), but some systems are much better than others in this respect. Diebold’s touchscreen voting machines, for all their faults, got this design element right, pulling the voter’s smart card all of the way into the machine and ejecting it only when the voter was supposed to leave — thus turning the voter’s desire to return the smart card into a countermeasure against premature voter departure, rather than a cause of it. (ATM machines often use this same trick of holding the card inside the machine to stop the user from grabbing the card and walking away at the wrong time.) Some older lever machines use an even simpler method against fleeing voters: the same big red handle that casts the ballot also opens the curtains so the voter can leave.

I’d be curious to know what the rules are about fleeing voters in Finland. I know that New Jersey procedures say that if a voter leaves without performing the final step of pushing the “Cast Vote” button, poll workers are supposed to push the button on the voter’s behalf (without looking at the voter’s choices). Crucially, the design of the New Jersey voting machine (for all its faults) makes it almost certain that such a non-cast ballot will be discovered promptly — the machine makes a noise when the ballot is cast, and the machine will complain if the poll worker tries to enable the next voter’s ballot before the previous voter’s ballot has been cast.

It seems likely that the Finnish machine, in addition to its usability problems that led to fleeing voters, had other design/process problems that made a non-completed ballot less noticeable to poll workers. (I don’t know this for sure; the answer isn’t in any English-language document I have seen.)

Fortunately, the damage was not as bad as it might have been, because the e-voting system was used in only three municipalities, as a pilot program, rather than nationwide. Presumably, nationwide use of the flawed system is now unlikely.


Consolidation in E-Voting Market: ES&S Buys Premier

Yesterday Diebold sold its e-voting division, known as Premier Election Systems, to ES&S, one of Premier’s competitors. The price was low: about $5 million.

ES&S is reportedly the largest e-voting company, and Premier was the second-largest, so the deal represents a substantial consolidation in the market. The odds of one major e-voting company breaking from the pack and embracing up-to-date security engineering are now even slimmer than before. Premier had seemed like the company most likely to change its ways.

The sale represents the end of an embarrassing era for Diebold. The company must have had high hopes when it first bought a small e-voting company, but the new Diebold e-voting division never approached the parent company’s standards for security and product quality. Over time the small e-voting division became an embarrassment, and the parent company distanced itself by renaming the division from Diebold to Premier and publicizing the division’s independence. Now Diebold is finally rid of its e-voting division and can return to doing what it does relatively well.


Finding and Fixing Errors in Google's Book Catalog

There was a fascinating exchange about errors in Google’s book catalog over at the Language Log recently. We rarely see such an open and constructive discussion of errors in large data sets, so this is an unusual opportunity to learn about how errors arise and what can be done about them.

The exchange started with Geoffrey Nunberg pointing to many errors in the metadata associated with Google’s book search project. (Here “metadata” refers to the kind of information that would have been on a card in the card catalog of a traditional library: a book’s date of publication, subject classification, and so on.) Some of the errors are pretty amusing, including Dickens writing books before he was born, a Bob Dylan biography published in the nineteenth century, and Moby Dick classified under “computers”. Nunberg called this a “train wreck” and blamed Google’s overaggressive use of computer analysis to extract bibliographic information from scanned images.

Things really got interesting when Google’s Jon Orwant replied (note that the red text starting “GN” is Nunberg’s response to Orwant), with an extraordinarily open and constructive discussion of how the errors described by Nunberg arose, and the problems Google faces in trying to ensure accuracy of a huge dataset drawn from diverse sources.

Orwant starts, for example, by acknowledging that Google’s metadata probably contains millions of errors. But he asserts that that is to be expected, at least at first: “we’ve learned the hard way that when you’re dealing with a trillion metadata fields, one-in-a-million errors happen a million times over.” If you take catalogs from many sources and aggregate them into a single meta-catalog — more or less what Google is doing — you’ll inherit all the errors of your sources, unless you’re extraordinarily clever and diligent in comparing different sources to sniff out likely errors.
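To make that concrete, here is one example of the kind of cheap, automated cross-check that can sniff out likely errors. It is purely illustrative; the field names and records are invented, and whatever Google actually runs is surely far more elaborate.

    # Illustrative cross-check: flag books dated before their author's birth.
    AUTHOR_BORN = {"Charles Dickens": 1812}   # reference data from another source

    records = [
        {"title": "A Tale of Two Cities", "author": "Charles Dickens", "year": 1859},
        {"title": "Great Expectations",   "author": "Charles Dickens", "year": 1800},  # deliberately bogus date
    ]

    for rec in records:
        born = AUTHOR_BORN.get(rec["author"])
        if born is not None and rec["year"] < born:
            print("likely metadata error:", rec["title"], rec["year"])

A rule like this would catch the Dickens example mentioned above, though each such rule addresses only one narrow class of error.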

To make things worse, the very power and flexibility of a digital index can raise the visibility of the errors that do exist, by making them easy to find. Want to find all of the books, anywhere in the world, written by Charles Dickens and (wrongly thought to be) published before 1850? Just type a simple query. Google’s search technology did a lot to help Nunberg find errors. But it’s not just error-hunters who will find more errors — if a more powerful metadata search facility is more useful, researchers will rely on it more, and will therefore be tripped up by more errors.

What’s most interesting to me is a seeming difference in mindset between critics like Nunberg on the one hand, and Google on the other. Nunberg thinks of Google’s metadata catalog as a fixed product that has some (unfortunately large) number of errors, whereas Google sees the catalog as a work in progress, subject to continual improvement. Even calling Google’s metadata a “catalog” seems to connote a level of completion and immutability that Google might not assert. An electronic “card catalog” can change every day — a good thing if the changes are strict improvements such as error fixes — in a way that a traditional card catalog wouldn’t.

Over time, the errors Nunberg reported will be fixed, and as a side effect some errors with similar causes will be fixed too. Whether that is good enough remains to be seen.


When spammers try to go legitimate

I hate to sound like a broken record, complaining about professional mail distribution / spam-houses that are entirely unwilling to require their customers to follow a strict opt-in discipline. But I’m going to complain again and I’m going to name names.

Today, I got a spam touting a Citrix product (“Free virtualization training for you and your students!”). This message arrived in my mailbox with an unsubscribe link hosted by xmr3, which bounced me back to a page at Citrix. The Citrix page then asked me for assorted personal information (name, email, country, employer). There was also a mailto link from xmr3 allowing me to opt out.

At no time did I ever opt into any communication from Citrix. I’ve never done business with them. I don’t know anybody who works there. I could care less about their product.

What’s wrong here? A seemingly legitimate company is sending out spam to people who have never requested anything from them. They’re not using any of the tactics that spammers normally employ to hide themselves. They’re not advertising drugs for sexual dysfunction or replicas of expensive watches. Maybe they got my email by surfing through faculty web pages. Maybe they got my email from some conference registration list. They’ve used a dubious third party to distribute the spam, one that provides no way to report that a client is violating its terms of service (nor can those terms of service be found anywhere on its home page).

Based on this, it’s easy to advocate technical countermeasures (e.g., black-hole treatment for and or improvements to laws (the message appears to be superficially compliant with the CAN-SPAM act, but a detailed analysis would take more time than it’s worth). My hope is that we can maybe also apply some measure of shame. Citrix, as a company, should be embarrassed and ashamed to advertise itself this way. If it ever became culturally acceptable for companies to do this sort of thing, then the deluge of “legitimate” spam will be intolerable.