April 19, 2014

ISS Caught in the Middle in Cisco Security Flap

The cybersecurity world is buzzing with news about Cisco’s attempt to silence Michael Lynn’s discussion of a serious security flaw in the company’s product. Here’s the chronology, which I have pieced together from news reports (so the obvious caveats apply):

Michael Lynn worked for ISS, a company that sells security scanning software. In the course of his work, he found a serious security flaw in IOS, the operating system that runs on Cisco’s routers. (Routers are specialized computers that shunt Internet packets from link to link, getting them gradually from source to destination. Cisco is the leading maker of routers.)

It has long been believed that a buffer overflow bug (the most common type of security bug) in IOS could be exploited by a remote party to crash the router, but not to seize control of it. What Lynn discovered is a way for an attacker to leverage a buffer overflow bug in IOS into full control over the router. Buffer overflow bugs are common, and Cisco routers handle nearly all Internet traffic, so this is a big problem.

Lynn was planning to discuss this in a presentation Wednesday at the Black Hat conference. At the last minute Cisco convinced ISS (Lynn’s employer) to cancel the talk. Cisco employees ripped Lynn’s paper out of every copy of the already-printed conference proceedings, and ISS ordered Lynn to talk about another topic during his already-scheduled slot in the Black Hat conference schedule.

Lynn quit his ISS job and gave a presentation about the Cisco flaw.

Cisco ran to court, asking for an injunction barring Lynn from further disclosing the information. They argued that the information was a trade secret and Lynn had obtained it illegally by reverse engineering.

The parties have now agreed that Lynn will destroy any documents or files he has on the topic, and will refrain from disclosing the information to anyone. The Black Hat organizers will destroy their videotape of Lynn’s presentation.

What distinguishes this from the standard “vendor tries to silence security researcher” narrative is the role of ISS. Recall that Lynn did his research as an ISS employee. This kind of research is critical to ISS’s business – it has to know about flaws before it can help protect its customers from them. Which means that ISS can’t be happy with the assertion that the research done in ISS’s lab was illegal.

So it looks like all of the parties lose. Cisco failed to cover up its security vulnerability, and only drew more attention with the legal threats. Lynn is out of a job. And ISS is the big loser, with its research enterprise potentially at risk.

The public, on the other hand, got useful information about the (in)security of the Internet infrastructure. Despite Cisco’s legal action, the information is out there – Lynn’s PowerPoint presentation is already available at Cryptome.

[Updated at 11:10 AM with minor modification to the description of what Lynn discovered, and to add the last sentence about the information reaching the public via Cryptome.]

Update (1:10 PM): The FBI is investigating whether Lynn committed a crime by giving his talk. The possible crime, apparently, was the alleged disclosure of ISS trade secrets.

U.S. Computer Science Malaise

There’s a debate going on now among U.S. computer science researchers and educators, about whether the U.S. as a nation is serious about maintaining its lead in computer science. We have been the envy of the world, drawing most of the world’s best and brightest in the field to our country, and laying the foundations of a huge industry that has fostered wealth and national power. But there is a growing sense within the field that all of this may be changing. This sense of malaise is a common topic around faculty water coolers across the country, and in speeches by industry figures like Bill Gates and Vint Cerf.

Whatever the cause – and more on that below – there are two main symptoms. First is a sharp decrease in funding for computer science research, especially in strategic areas such as cybersecurity. For example, DARPA, the Defense Department research agency that funded the early Internet and other breakthroughs, has cut its support for university computer science research by more than 40% in the last three years, and has redirected the remaining funding toward short-term advanced development efforts. Corporate research is not picking up the slack.

The second symptom, which in my view is more worrisome, is the sharp decrease in the number of students majoring in computer science. One reputable survey found a 60% drop in the last four years. One would have expected a drop after the dotcom crash – computer science enrollments have historically tracked industry business cycles – but this is a big drop! (At Princeton, we’ve been working hard to make our program more compelling, so we have seen a much smaller decrease.)

All this despite fundamentals that seem sound. Our research ideas seem as strong as ever (though research is inherently a hit-and-miss affair), and the job market for our graduates is still very strong, though not as overheated as a few years ago. Our curricula aren’t perfect but are better than ever. So what’s the problem?

The consensus seems to be that computer science has gotten a bad rap as a haven for antisocial, twinkie-fed nerds who spend their nights alone in cubicles wordlessly writing code, and their days snoring and drooling on office couches. Who would want to be one of them? Those of us in the field know that this stereotype is silly; that computer scientists do many things beyond coding; that we work in groups and like to have fun; and that nowadays computer science plays a role in almost every field of human endeavor.

Proposed remedies abound, most of them attempts to show people who computer scientists really are and what we really do. Stereotypes take a long time to overcome, but there’s no better time than the present to get started.

UPDATE (July 28): My colleagues Sanjeev Arora and Bernard Chazelle have a thoughtful essay on this issue in the August issue of Communications of the ACM.

Privacy, Price Discrimination, and Identification

Recently it was reported that Disney World is fingerprinting its customers. This raised obvious privacy concerns. People wondered why Disney would need that information, and what they were going to do with it.

As Eric Rescorla noted, the answer is almost surely price discrimination. Disney sells multi-day tickets at a discount. They don’t want people to buy (say) a ten-day ticket, use it for two days, and then resell the ticket to somebody else. Disney makes about $200 more by selling five separate two-day tickets than by selling a single ten-day ticket. To stop this, they fingerprint the users of such tickets and verify that the fingerprint associated with a ticket doesn’t change from day to day.
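The arithmetic behind the resale incentive is easy to sketch. The ticket prices below are invented for illustration (the original reports don’t give Disney’s actual prices); the key assumption is that the per-day price falls as the ticket length grows:

```python
# Hypothetical gate prices, in dollars. Multi-day tickets are
# discounted per day, so one long ticket costs far less than the
# several short tickets it could be resold as.
PRICE_2_DAY = 80.0
PRICE_10_DAY = 200.0

revenue_if_resold = PRICE_10_DAY       # one 10-day ticket shared via resale
revenue_if_separate = 5 * PRICE_2_DAY  # five separate 2-day tickets

# Revenue Disney loses on each ticket that gets resold this way:
print(revenue_if_separate - revenue_if_resold)  # -> 200.0
```

Fingerprint checks close exactly this gap: they make the ten-day ticket worth its discounted price only to a single customer.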

Price discrimination often leads to privacy worries, because some price discrimination strategies rely on the ability to identify individual customers so the seller knows what price to charge them. Such privacy worries seem to be intensifying as technology advances, since it is becoming easier to keep records about individual customers, easier to get information about customers from outside sources, and easier to design and manage complex price discrimination strategies.

On the other hand, some forms of price discrimination don’t depend on identifying customers. For example, early-bird discounts at restaurants cause customers to self-select into categories based on willingness to pay (those willing to come at an inconvenient time to get a lower price vs. those not willing) without needing to identify individuals.

Disney’s type of price discrimination falls into a middle ground. They don’t need to know who you are; all they need to know is that you are the same person who used the ticket yesterday. I think it’s possible to build a fingerprint-based system that stores just enough information to verify that a newly-presented fingerprint is the same one seen before, but without storing the fingerprint itself or even information useful in reconstructing or forging it. That would let Disney get what it needs to prevent ticket resale, without compromising customers’ fingerprints.
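A minimal sketch of such a system: store only a salted hash of the fingerprint template, never the template itself. This sketch assumes the scanner yields a stable byte string for the same finger; real fingerprint readings are noisy, so a deployed system would need a fuzzy extractor or “secure sketch” rather than a plain hash, but the privacy idea is the same.

```python
import hashlib
import os

def enroll(template: bytes) -> tuple[bytes, bytes]:
    """Return (salt, digest) to store with the ticket.
    The raw template is discarded after enrollment."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + template).digest()
    return salt, digest

def verify(template: bytes, salt: bytes, digest: bytes) -> bool:
    """Check a newly presented template against the stored record."""
    return hashlib.sha256(salt + template).digest() == digest
```

The stored record lets the gate answer “same finger as yesterday?” but, because the hash is one-way, it is of little direct use for reconstructing or forging the fingerprint itself.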

If this is possible, why isn’t Disney doing it? I can only guess, but I can think of two reasons. First, in designing identity-based systems, people seem to gravitate to designs that try to extract a “true identity”, despite the fact that this is more privacy-compromising and is often unnecessary. Second, if Disney sees customer privacy mainly as a public-relations issue, then they don’t have much incentive to design a more privacy-protective system, when ordinary customers can’t easily tell the difference.

Researchers have been saying for years that identification technologies can be designed cleverly to minimize unneeded information flows; but this suggestion hasn’t had much effect. Perhaps bad publicity over information leaks will cause companies to be more careful.

Thee and Ay

It’s not often that you learn something about yourself from a stranger’s blog. But that’s what happened to me on Friday. I was sifting through a list of new links to this blog (thanks to Technorati), and I found an entry on a blog called Serendipity, about the way I pronounce the word “the”. It turns out that my pronunciation of “the” is inconsistent, in an interesting way. In fact, in a single eight-minute public talk, I pronounce “the” in four different ways.

(Could there possibly be a less enticing premise for a blog entry than how the blog’s author pronounces the word “the”? Well, I think the details turn out to be interesting. And it’s my blog.)

Here’s the background. The article “the” in English is pronounced in two different ways, unreduced (“thee”), and reduced (“thuh”). The standard is to use the unreduced form when the next word starts with a vowel sound (“thee elephant”), and the reduced form when the next word starts with a consonant sound (“thuh dog”).
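The textbook rule can be written as a toy function. Note that the rule keys on the next word’s initial *sound*, not its spelling, so a letter-based check needs exceptions; the small exception lists here are illustrative, and a real system would consult a pronouncing dictionary:

```python
# Unreduced "thee" before a vowel sound, reduced "thuh" before a
# consonant sound. Spelling is only a rough proxy for sound.
VOWEL_SOUND = {"hour", "honest", "heir"}          # silent 'h'
CONSONANT_SOUND = {"university", "user", "one"}   # vowel letter, "y"/"w" sound

def article_the(next_word: str) -> str:
    w = next_word.lower()
    if w in VOWEL_SOUND:
        return "thee"
    if w in CONSONANT_SOUND:
        return "thuh"
    return "thee" if w[0] in "aeiou" else "thuh"

print(article_the("elephant"))    # thee
print(article_the("dog"))         # thuh
print(article_the("university"))  # thuh
```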

After Mark Liberman discussed this on the Language Log, readers pointed out that George W. Bush sometimes pronounces ‘a’ as the unreduced “ay” before a consonant. Bush did this a few times in his speech nominating John Roberts to the Supreme Court. Roberts also used one “thee” and one “ay” before consonants in the ensuing Q&A session.

Then Chris Waigl remembered, somehow, that she had heard me do something similar in a recorded talk. So she dug up an eight-minute recording of me speaking at the 2002 Berkeley DRM conference, and analyzed each use of “a” and “the”. She even color-coded the transcript.

It turns out that I pronounced “the” before a consonant four different ways. Sometimes I used “thee”, sometimes I used “thuh”, sometimes I used “thee” and corrected myself to “thuh”, and sometimes I used “thuh” and corrected myself to “thee”.

Why do I do this? I have no idea. I have been listening to myself ever since I read this, and I do indeed mix reduced and unreduced “the” and “a” before consonants. I haven’t caught myself correcting one to the other, but then again I probably wouldn’t notice if I did.

And now I’m listening to every speaker I hear, to see whether they do it too. Do you?

Harry Potter and the Half-Baked Plan

Despite J.K. Rowling’s decision not to offer the new Harry Potter book in e-book format, it took less than a day for fans to scan the book and assemble an unauthorized electronic version, which is reportedly circulating on the Internet.

If Rowling thought that her decision against e-book release would prevent infringement, then she needs to learn more about Muggle technology. (It’s not certain that her e-book decision was driven by infringement worries. Kids’ books apparently sell much worse as e-books than comparable adult books do, so she might have thought there would be insufficient demand for the e-book. But really – insufficient demand for Harry Potter this week? Not likely.)

It’s a common mistake to think that digital distribution leads to infringement, so that one can prevent infringement by sticking with analog distribution. Hollywood made this argument in the broadcast flag proceeding, saying that the switch to digital broadcasting of television would make the infringement problem so much worse – and the FCC even bought it.

As Harry Potter teaches us, what enables online infringement is not digital release of the work, but digital redistribution by users. And a work can be redistributed digitally, regardless of whether it was originally released in digital or analog form. Analog books can be scanned digitally; analog audio can be recorded digitally; analog video can be camcorded digitally. The resulting digital copies can be redistributed.

(This phenomenon is sometimes called the “analog hole”, but that term is misleading because the copyability of analog information is not an exception to the normal rule but a continuation of it. Objects made of copper are subject to gravity, but we don’t call that fact the “copper hole”. We just call it gravity, and we know that all objects are subject to it. Similarly, analog information is subject to digital copying because all information is subject to digital copying.)

If anything, releasing a work in digital form will reduce online infringement, by giving people who want a digital copy a way to pay for it. Having analog and digital versions that offer different value propositions to customers also enables tricky pricing strategies that can capture more revenue. Copyright owners can lead the digital parade or sit on the sidelines and watch it go by; but one way or another, there is going to be a parade.

Who'll Stop the Spam-Bots?

The FTC has initiated Operation Spam Zombies, a program that asks ISPs to work harder to detect and isolate spam-bots on their customers’ computers. Randy Picker has a good discussion of this.

A bot is a malicious, long-lived software agent that sits on a computer and carries out commands at the behest of a remote bad guy. (Bots are sometimes called zombies. This makes for more colorful headlines, but the cognoscenti prefer “bot”.) Bots are surprisingly common; perhaps 1% of computers on the Internet are infected by bots.

Like any successful parasite, a bot tries to limit its impact on its host. A bot that uses too many resources, or that too obviously destabilizes its host system, is more likely to be detected and eradicated by the user. So a clever bot tries to be unobtrusive.

One of the main uses of bots is for sending spam. Bot-initiated spam comes from ordinary users’ machines, with only a modest volume coming from each machine; so it is difficult to stop. Nowadays the majority of spam probably comes from bots.

Spam-bots exhibit the classic economic externality of Internet security. A bot on your machine doesn’t bother you much. It mostly harms other people, most of whom you don’t know; so you lack a sufficient incentive to find and remove bots on your system.

What the FTC hopes is that ISPs will be willing to do what users aren’t. The FTC is urging ISPs to monitor their networks for telltale spam-bot activity, and then to take action, up to and including quarantining infected machines (i.e., cutting off or reducing their network connectivity).

It would be good if ISPs did more about the spam-bot problem. But unfortunately, the same externality applies to ISPs as to users. If an ISP’s customer hosts a spam-bot, most of the spam sent by the bot goes to other ISPs, so the harm from that spam-bot falls mostly on others. ISPs will have an insufficient incentive to fight bots, just as users do.

A really clever spam-bot could make this externality worse, by making sure not to direct any spam to the local ISP. That would reduce the local ISP’s incentive to stop the bot to almost zero. Indeed, it would give the ISP a disincentive to remove the bot, since removing the bot would lower costs for the ISP’s competitors, leading to tougher price competition and lower profits for the ISP.
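The incentive gap can be made concrete with a toy model. All of the numbers below are invented for illustration: assume each spam message costs the *receiving* ISP a small handling cost, while finding and quarantining a bot costs the *hosting* ISP a fixed amount.

```python
COST_PER_SPAM = 0.001       # dollars of handling cost per message received
CLEANUP_COST = 5.00         # dollars for the hosting ISP to quarantine one bot
MESSAGES_PER_BOT = 100_000  # spam sent by one bot per month

def local_benefit_of_cleanup(local_fraction: float) -> float:
    """Monthly handling cost the hosting ISP avoids by removing one bot,
    when local_fraction of that bot's spam lands on its own customers."""
    return local_fraction * MESSAGES_PER_BOT * COST_PER_SPAM

# A naive bot spraying spam uniformly across, say, 50 similar ISPs:
print(local_benefit_of_cleanup(1 / 50))  # roughly $2 -- below the $5 cleanup cost

# A clever bot that never spams its host's own customers:
print(local_benefit_of_cleanup(0.0))     # 0.0 -- no direct incentive at all
```

In this model the hosting ISP never recoups the cleanup cost on its own; the other $98 of monthly harm lands on everyone else, which is the externality in a nutshell.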

That said, there is some hope for ISP-based steps against bot-spam. There aren’t too many big ISPs, so they may be able to agree to take steps against bot-spam. And voluntary steps may help to stave off unpleasant government regulation, which is also in the interest of the big ISPs.

There are interesting technical issues here too. If ISPs start monitoring aggressively for bots, the bots will get stealthier, kicking off an interesting arms race. But that’s a topic for another day.

What is Spyware?

Recently the Anti-Spyware Coalition released a document defining spyware and related terms. This is an impressive-sounding group, convened by CDT and including companies like HP, Microsoft, and Yahoo.

Here is their central definition:

Spyware and Other Potentially Unwanted Technologies

Technologies implemented in ways that impair users’ control over:

  • Material changes that affect their user experience, privacy, or system security
  • Use of their system resources, including what programs are installed on their computers
  • Collection, use and distribution of their personal or otherwise sensitive information

These are items that users will want to be informed about, and which the user, with appropriate authority from the owner of the system, should be able to easily remove or disable.

What’s interesting about this definition is that it’s not exactly a definition – it’s a description of things that users won’t like, along with assertions about what users will want, and what users should be able to do. How is it that this impressive group could only manage an indirect, somewhat vague definition for spyware?

The answer is that spyware is a surprisingly slippery concept.

Consider a program that lurks on your computer, watching which websites you browse and showing you ads based on your browsing history. Such a program might be spyware. But if you gave your informed consent to the program’s installation and operation, then public policy shouldn’t interfere. (Note: informed consent means that the consequences of accepting the program are conveyed to you fully and accurately.) So behaviors like monitoring and ad targeting aren’t enough, by themselves, to make a program spyware.

Now consider the same program, which comes bundled with a useful program that you want for some other purpose. The two programs are offered only together, you have to agree to take them both in order to get either one, and there is no way to uninstall one without uninstalling the other too. You give your informed consent to the bundle. (Bundling can raise antitrust problems under certain conditions, but I’ll ignore that issue here.) The company offering you the useful program is selling it for a price that is paid not in dollars but in allowing the adware to run. That in itself is no reason for public policy to object.

What makes spyware objectionable is not the technology, but the fact that it is installed without informed consent. Spyware is not a particular technology. Instead, it is any technology that is delivered via particular business practices. Understanding this is the key to regulating spyware.

Sometimes the software is installed with no consent at all. Installing and running software on a user’s computer, without seeking consent or even telling the user, must be illegal under existing laws such as the Computer Fraud and Abuse Act. There is no need to change the law to deal with this kind of spyware.

Sometimes “consent” is obtained, but only by deceiving the user. What the user gets is not what he thinks he agreed to. For example, the user might be shown a false or strongly misleading description of what the software will do; or important facts, such as the impossibility of uninstalling a program, might be withheld from the user. Here the issue is deception. As I understand it, deceptive business practices are generally illegal. (If spyware practices are not illegal, we may need to expand the legal rules against business deception.) What we need from government is vigilant enforcement against companies that use deceptive business practices in the installation of their software.

That, I think, is about as far as the law should go in fighting spyware. We may get more anti-spyware laws anyway, as Congress tries to show that it is doing something about the problem. But when it comes to laws, more is not always better.

The good news is that we probably don’t need complicated new laws to fight spyware. The laws we have can do enough – or at least they can do as much as the law can hope to do.

(If you’re not running an antispyware tool on your computer, you should be. There are several good options. Spybot Search & Destroy is a good free spyware remover for Windows.)

HD-DVD Requires Digital Imprimatur

Last week I wrote about the antitrust issues raised by the use of encryption to “protect” content. Here’s a concrete example.

HD-DVD, one of the two candidates for the next-gen DVD format, uses a “content protection” technology called AACS. And AACS, it turns out, requires a digital imprimatur on any content before it can be published.

(The imprimatur – the term is Latin for “let it be printed” – was an early technology of censorship. The original imprimatur was a stamp of approval granted by a Catholic bishop to certify that a work was free from doctrinal or moral error. In some times and places, it was illegal to print a work that didn’t have an imprimatur. Today, the term refers to any system in which a central entity must approve works before they can be published.)

The technical details are in the AACS Pre-recorded Video Book Specification. The digital imprimatur is called a “content certificate” (see p. 5 for overview), and is created “at a secure facility operated by [the AACS organization]” (p. 8). It is forbidden to publish any work without an imprimatur, and player devices are forbidden to play any work that lacks an imprimatur.

Like the original imprimatur, the AACS one can be revoked retroactively. AACS calls this “content revocation”. Every disc that is manufactured is required to carry an up-to-date list of revoked works. Player devices are required to keep track of which works have been revoked, and to refuse to play revoked works.

The AACS documents avoid giving a rationale for this feature. The closest they come to a rationale is a statement that the system was designed so that “[c]ompliant players can authenticate that content came from an authorized, licensed replicator” (p. 1). But the system as described does not seem designed for that goal – if it were, the disc would be signed (and the signature possibly revoked) by the replicator, not by the central AACS organization. Also, the actual design replaces “can authenticate” by “must authenticate, and must refuse to play if authentication fails”.

The goal of HD-DVD is to become the dominant format for release of movies. If this happens, the HD-DVD/AACS imprimatur will be ripe for anticompetitive abuses. Who will decide when the imprimatur will be used, and how? Apparently it will be the AACS organization. We don’t know how that organization is run, but we know that its founding members are Disney, IBM, Intel, Microsoft, Panasonic, Sony, Toshiba, and Warner Brothers. A briefing on the AACS site explains the “AACS Structure” by listing the founders.

I hope the antitrust authorities are watching this very closely. I hope, too, that consumers are watching and will vote with their dollars against this kind of system.

Controlling Software Updates

Randy Picker questions part of the computer science professors’ Grokster brief (of which I was a co-signer), in which we wrote:

Even assuming that Respondents have the right and ability to deliver such software to end users, there can be no way to ensure that software updates are installed, and stay installed. End users ultimately have control over which software is on their computers. If an end user does not want a software update, there is no way to make her take it.

This point mattered because Hollywood had suggested that Grokster should have used its software-update facility to deploy filtering software. (Apparently there is some dispute over whether Grokster had such a facility. I don’t know who is right on that factual question.)

Picker wonders whether ordinary users can really exercise this control in practice. As he notes, the user can disconnect from the net, but that’s too high a price for most people to pay. So how can users prevent updates?

The easiest method is simply to write-protect the program’s files or directories, so that they can’t be changed. Alternatively, the user can make a backup copy of the software (perhaps by copying it to another directory) and restore the backup when an update is installed.
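On a Unix-like system, the write-protection step can be sketched in a few lines: walk the program’s install directory and clear every write-permission bit. The install path in the usage comment is a made-up example, and of course an updater running with sufficient privilege could flip the bits back; the point is only that an ordinary user has this lever.

```python
import os
import stat

WRITE_BITS = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH

def write_protect(root: str) -> None:
    """Clear write permissions on a directory tree, blocking in-place
    modification (including self-updates) by unprivileged processes."""
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames + dirnames:
            path = os.path.join(dirpath, name)
            os.chmod(path, os.stat(path).st_mode & ~WRITE_BITS)
    # Protect the root directory itself, so files can't be replaced.
    os.chmod(root, os.stat(root).st_mode & ~WRITE_BITS)

# write_protect("/opt/example-p2p-app")  # hypothetical install location
```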

Standard system security tools are also useful for controlling automatic updates. Autonomously self-updating programs look a lot like malicious code – the program code changes on its own (like a virus infection); the program makes network connections to odd places at odd times (like spyware); the program downloads and installs code without asking the user (like a malicious bot). Security tools specialize in identifying and blocking such behaviors, and the tools are reasonably configurable. Personal firewalls, for example, can block a program from making unapproved network connections. Some firewalls even do this by default.

Finally, a skilled person can figure out how to patch the program to disable the auto-update feature. He can then encapsulate this knowledge in a simple tool, so that other users can disable their auto-update by downloading the tool and double-clicking it. (This tool may violate copyright by modifying the program; but if we trusted users to obey copyright law we wouldn’t be having this conversation.)

The bottom line is that in computer security, possession is nine-tenths of control. Whoever has physical access to a device can control what it does. Whoever has physical control of a computer can control what software is installed on it. And users have physical control of their PCs.

A followup question is whether you can program the software to shut itself off if the user blocks updates for too long. As far as I know, nobody is claiming that Grokster had such a capability, but in principle a P2P system could be designed to (try to) work that way. This raises interesting issues too, but I’m approaching my word count limit so I’ll have to address them another day.

Michigan Email Registry as a Tax on Bulk Emailers

I wrote on Friday about the new registry of kids’ email addresses being set up by the state of Michigan. I wasn’t impressed. A commenter pointed out an important fact I missed: emailers have to pay a fee of $0.007 to screen each address against the list.

(One of the occupational hazards of blogging is the risk of posting without meticulous research. Fortunately such oversights can be corrected, as I’m doing here. My apologies to readers who were temporarily misled.)

I still worry that the list will leak information about kids’ email addresses to emailers. The fee will raise the cost of fishing expeditions designed to guess which addresses are on the list, to the point where it probably won’t be worthwhile for an emailer to launch blind guessing attacks. But emailers will still learn the status of addresses that are already on their lists.

The main effect of the fee is to turn the whole program into a tax on bulk emailing. The tax operates even if only a few kids’ addresses are registered, so parents worried about leaking their kids’ addresses can safely decline to register them. So let’s look at this as a tax scheme rather than a child protection program.

It’s an oddly structured tax, charging a bulk emailer $0.007 for each email address he mails to within a thirty-day period. (Emailers must re-check their lists every thirty days to avoid a violation.) And it only applies to bulk emailers who promote products that kids aren’t allowed to own. That includes some products whose promotion the state is particularly eager to control (e.g., drugs and gambling) as well as some products that are innocuous but inappropriate for kids (e.g., vehicles).

Why isn’t this structured simply as a tax on bulk email? We’d have to ask a free speech lawyer to be sure, but I wonder whether a tax on speech, and especially a tax that applies only to some kinds of speech, would be constitutionally suspect. Connecting it to the state interest in protecting kids from harmful commercial speech may provide some cover. (I could be off base here. If so, I’m sure commenters will set me straight.)

The tax seems unlikely to generate much revenue for the state. Including administrative costs, it may cost the state money. Presumably the goal is to change emailers’ incentives.

The incentive effect on outlaw spammers will be zero. They’ll just ignore this law, and add it to the list of laws they are already violating.

Gray hat spammers, who operate aboveboard and call themselves legitimate businesspeople, will see their costs increase. The tax will impose a fixed cost per address on their list, but independent of the number of messages sent to that address within a month. Adding to fixed costs will tend to cause consolidation in the gray hat spammer business – if two spammers share a list, they’ll only have to pay the tax once. It’s getting harder and harder to be a gray hat spammer already; this will only squeeze the gray hats further.
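The fixed-cost structure, and the consolidation incentive it creates, is easy to see in numbers. The list size below is invented; only the $0.007-per-address fee comes from the program itself:

```python
FEE_PER_ADDRESS = 0.007  # dollars per address, per thirty-day period
LIST_SIZE = 1_000_000    # hypothetical gray-hat mailing list

cost_per_period = FEE_PER_ADDRESS * LIST_SIZE
print(cost_per_period)   # about $7,000 every thirty days

# The fee is per address, not per message, so mailing the same list
# more often costs nothing extra -- and two spammers who share one
# list split a single screening fee:
print(cost_per_period / 2)  # about $3,500 each, favoring consolidation
```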

With the tax angle added, the Michigan program might turn out to be good policy. But I still wouldn’t register my own kid.