Archives for July 2005

Who'll Stop the Spam-Bots?

The FTC has initiated Operation Spam Zombies, a program that asks ISPs to work harder to detect and isolate spam-bots on their customers’ computers. Randy Picker has a good discussion of this.

A bot is a malicious, long-lived software agent that sits on a computer and carries out commands at the behest of a remote bad guy. (Bots are sometimes called zombies. This makes for more colorful headlines, but the cognoscenti prefer “bot”.) Bots are surprisingly common; perhaps 1% of computers on the Internet are infected by bots.

Like any successful parasite, a bot tries to limit its impact on its host. A bot that uses too many resources, or that too obviously destabilizes its host system, is more likely to be detected and eradicated by the user. So a clever bot tries to be unobtrusive.

One of the main uses of bots is for sending spam. Bot-initiated spam comes from ordinary users’ machines, with only a modest volume coming from each machine; so it is difficult to stop. Nowadays the majority of spam probably comes from bots.

Spam-bots exhibit the classic economic externality of Internet security. A bot on your machine doesn’t bother you much. It mostly harms other people, most of whom you don’t know; so you lack a sufficient incentive to find and remove bots on your system.

What the FTC hopes is that ISPs will be willing to do what users aren’t. The FTC is urging ISPs to monitor their networks for telltale spam-bot activity, and then to take action, up to and including quarantining infected machines (i.e., cutting off or reducing their network connectivity).
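To make this concrete, here is a minimal sketch, in Python, of the kind of monitoring the FTC has in mind: flag customer machines that contact an unusually large number of distinct mail servers, a classic spam-bot signature. The flow-log format and the threshold here are hypothetical; real ISP tooling is far more elaborate.

    import csv

    SMTP_PORT = 25
    THRESHOLD = 500  # distinct SMTP destinations per log window; illustrative only

    def flag_suspected_bots(flow_log_path):
        """Return customer IPs that contacted many distinct mail servers."""
        smtp_peers = {}  # customer IP -> set of destination IPs
        with open(flow_log_path, newline="") as f:
            # assumed columns: src_ip, dst_ip, dst_port
            for row in csv.DictReader(f):
                if int(row["dst_port"]) == SMTP_PORT:
                    smtp_peers.setdefault(row["src_ip"], set()).add(row["dst_ip"])
        return [ip for ip, peers in smtp_peers.items() if len(peers) > THRESHOLD]

An ISP could then quarantine the flagged machines, or simply notify the customers that their computers appear to be infected.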

It would be good if ISPs did more about the spam-bot problem. But unfortunately, the same externality applies to ISPs as to users. If an ISP’s customer hosts a spam-bot, most of the spam sent by the bot goes to other ISPs, so the harm from that spam-bot falls mostly on others. ISPs will have an insufficient incentive to fight bots, just as users do.

A really clever spam-bot could make this externality worse, by making sure not to direct any spam to the local ISP. That would reduce the local ISP’s incentive to stop the bot to almost zero. Indeed, it would give the ISP a disincentive to remove the bot, since removing the bot would lower costs for the ISP’s competitors, leading to tougher price competition and lower profits for the ISP.

That said, there is some hope for ISP-based steps against bot-spam. There aren’t too many big ISPs, so they may be able to agree to take steps against bot-spam. And voluntary steps may help to stave off unpleasant government regulation, which is also in the interest of the big ISPs.

There are interesting technical issues here too. If ISPs start monitoring aggressively for bots, the bots will get stealthier, kicking off an interesting arms race. But that’s a topic for another day.

What is Spyware?

Recently the Anti-Spyware Coalition released a document defining spyware and related terms. This is an impressive-sounding group, convened by CDT and including companies like HP, Microsoft, and Yahoo.

Here is their central definition:

Spyware and Other Potentially Unwanted Technologies

Technologies implemented in ways that impair users’ control over:

  • Material changes that affect their user experience, privacy, or system security
  • Use of their system resources, including what programs are installed on their computers
  • Collection, use and distribution of their personal or otherwise sensitive information

These are items that users will want to be informed about, and which the user, with appropriate authority from the owner of the system, should be able to easily remove or disable.

What’s interesting about this definition is that it’s not exactly a definition – it’s a description of things that users won’t like, along with assertions about what users will want, and what users should be able to do. How is it that this impressive group could only manage an indirect, somewhat vague definition for spyware?

The answer is that spyware is a surprisingly slippery concept.

Consider a program that lurks on your computer, watching which websites you browse and showing you ads based on your browsing history. Such a program might be spyware. But if you gave your informed consent to the program’s installation and operation, then public policy shouldn’t interfere. (Note: informed consent means that the consequences of accepting the program are conveyed to you fully and accurately.) So behaviors like monitoring and ad targeting aren’t enough, by themselves, to make a program spyware.

Now consider the same program, which comes bundled with a useful program that you want for some other purpose. The two programs are offered only together, you have to agree to take them both in order to get either one, and there is no way to uninstall one without uninstalling the other too. You give your informed consent to the bundle. (Bundling can raise antitrust problems under certain conditions, but I’ll ignore that issue here.) The company offering you the useful program is selling it for a price that is paid not in dollars but in allowing the adware to run. That in itself is no reason for public policy to object.

What makes spyware objectionable is not the technology, but the fact that it is installed without informed consent. Spyware is not a particular technology. Instead, it is any technology that is delivered via particular business practices. Understanding this is the key to regulating spyware.

Sometimes the software is installed with no consent at all. Installing and running software on a user’s computer, without seeking consent or even telling the user, must be illegal under existing laws such as the Computer Fraud and Abuse Act. There is no need to change the law to deal with this kind of spyware.

Sometimes “consent” is obtained, but only by deceiving the user. What the user gets is not what he thinks he agreed to. For example, the user might be shown a false or strongly misleading description of what the software will do; or important facts, such as the impossibility of uninstalling a program, might be withheld from the user. Here the issue is deception. As I understand it, deceptive business practices are generally illegal. (If spyware practices are not illegal, we may need to expand the legal rules against business deception.) What we need from government is vigilant enforcement against companies that use deceptive business practices in the installation of their software.

That, I think, is about as far as the law should go in fighting spyware. We may get more anti-spyware laws anyway, as Congress tries to show that it is doing something about the problem. But when it comes to laws, more is not always better.

The good news is that we probably don’t need complicated new laws to fight spyware. The laws we have can do enough – or at least they can do as much as the law can hope to do.

(If you’re not running an antispyware tool on your computer, you should be. There are several good options. Spybot Search & Destroy is a good free spyware remover for Windows.)

HD-DVD Requires Digital Imprimatur

Last week I wrote about the antitrust issues raised by the use of encryption to “protect” content. Here’s a concrete example.

HD-DVD, one of the two candidates for the next-gen DVD format, uses a “content protection” technology called AACS. And AACS, it turns out, requires a digital imprimatur on any content before it can be published.

(The imprimatur – the term is Latin for “let it be printed” – was an early technology of censorship. The original imprimatur was a stamp of approval granted by a Catholic bishop to certify that a work was free from doctrinal or moral error. In some times and places, it was illegal to print a work that didn’t have an imprimatur. Today, the term refers to any system in which a central entity must approve works before they can be published.)

The technical details are in the AACS Pre-recorded Video Book Specification. The digital imprimatur is called a “content certificate” (see p. 5 for overview), and is created “at a secure facility operated by [the AACS organization]” (p. 8). It is forbidden to publish any work without an imprimatur, and player devices are forbidden to play any work that lacks an imprimatur.

Like the original imprimatur, the AACS one can be revoked retroactively. AACS calls this “content revocation”. Every disc that is manufactured is required to carry an up-to-date list of revoked works. Player devices are required to keep track of which works have been revoked, and to refuse to play revoked works.
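The player-side logic the spec describes is simple to sketch. What follows is an illustration, not the actual AACS protocol: real content certificates use public-key signatures issued by the AACS organization, so an HMAC stands in here just to make the sketch runnable, and the keys and data layout are invented. A disc plays only if its certificate is present, valid, and unrevoked.

    import hashlib
    import hmac

    AUTHORITY_KEY = b"central-authority-secret"  # held only by the central authority

    def issue_certificate(content_id):
        """What the 'secure facility' does: sign the work's identifier."""
        return hmac.new(AUTHORITY_KEY, content_id, hashlib.sha256).hexdigest()

    def may_play(content_id, certificate, revocation_list):
        """Player-side check: valid imprimatur required, revocations honored."""
        expected = issue_certificate(content_id)
        if not hmac.compare_digest(certificate or "", expected):
            return False  # missing or forged certificate: refuse to play
        if content_id in revocation_list:
            return False  # imprimatur revoked retroactively: refuse to play
        return True

    # cert = issue_certificate(b"some-movie")
    # may_play(b"some-movie", cert, revocation_list=set())            -> True
    # may_play(b"some-movie", cert, revocation_list={b"some-movie"})  -> False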

The AACS documents avoid giving a rationale for this feature. The closest they come to a rationale is a statement that the system was designed so that “[c]ompliant players can authenticate that content came from an authorized, licensed replicator” (p. 1). But the system as described does not seem designed for that goal – if it were, the disc would be signed (and the signature possibly revoked) by the replicator, not by the central AACS organization. Also, the actual design replaces “can authenticate” by “must authenticate, and must refuse to play if authentication fails”.

The goal of HD-DVD is to become the dominant format for release of movies. If this happens, the HD-DVD/AACS imprimatur will be ripe for anticompetitive abuses. Who will decide when the imprimatur will be used, and how? Apparently it will be the AACS organization. We don’t know how that organization is run, but we know that its founding members are Disney, IBM, Intel, Microsoft, Panasonic, Sony, Toshiba, and Warner Brothers. A briefing on the AACS site explains the “AACS Structure” by listing the founders.

I hope the antitrust authorities are watching this very closely. I hope, too, that consumers are watching and will vote with their dollars against this kind of system.

Controlling Software Updates

Randy Picker questions part of the computer science professors’ Grokster brief (of which I was a co-signer), in which we wrote:

Even assuming that Respondents have the right and ability to deliver such software to end users, there can be no way to ensure that software updates are installed, and stay installed. End users ultimately have control over which software is on their computers. If an end user does not want a software update, there is no way to make her take it.

This point mattered because Hollywood had suggested that Grokster should have used its software-update facility to deploy filtering software. (Apparently there is some dispute over whether Grokster had such a facility. I don’t know who is right on that factual question.)

Picker wonders whether ordinary users can really exercise this control in practice. As he notes, the user can disconnect from the net, but that’s too high a price for most people to pay. So how can users prevent updates?

The easiest method is simply to write-protect the program’s files or directories, so that they can’t be changed. Alternatively, the user can make a backup copy of the software (perhaps by copying it to another directory) and restore the backup when an update is installed.
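Both techniques take only a few lines of code. Here is a minimal sketch in Python, assuming a hypothetical install directory: clearing the write bits blocks in-place updates, and the snapshot lets the user roll an unwanted update back.

    import os
    import shutil
    import stat

    APP_DIR = "/home/user/p2p-app"        # hypothetical install location
    BACKUP_DIR = "/home/user/p2p-app.bak"

    def write_protect(directory):
        """Clear write permission on every file so an updater can't modify them."""
        for root, _dirs, files in os.walk(directory):
            for name in files:
                path = os.path.join(root, name)
                mode = os.stat(path).st_mode
                os.chmod(path, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

    def snapshot():
        """Keep a pristine copy of the program to restore later."""
        shutil.copytree(APP_DIR, BACKUP_DIR, dirs_exist_ok=True)

    def restore():
        """Undo an unwanted update by restoring the pristine copy."""
        shutil.copytree(BACKUP_DIR, APP_DIR, dirs_exist_ok=True)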

Standard system security tools are also useful for controlling automatic updates. Autonomously self-updating programs look a lot like malicious code – the program code changes on its own (like a virus infection); the program makes network connections to odd places at odd times (like spyware); the program downloads and installs code without asking the user (like a malicious bot). Security tools specialize in identifying and blocking such behaviors, and the tools are reasonably configurable. Personal firewalls, for example, can block a program from making unapproved network connections. Some firewalls even do this by default.
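The first of those signals – program code changing on its own – is especially easy to check for. Here is a simplified version of what file-integrity tools do: hash the program’s files at install time, then re-check later. The path is hypothetical.

    import hashlib
    import os

    def hash_tree(directory):
        """Map each file path to a SHA-256 digest of its contents."""
        digests = {}
        for root, _dirs, files in os.walk(directory):
            for name in files:
                path = os.path.join(root, name)
                with open(path, "rb") as f:
                    digests[path] = hashlib.sha256(f.read()).hexdigest()
        return digests

    baseline = hash_tree("/home/user/p2p-app")  # recorded at install time
    # ... later ...
    current = hash_tree("/home/user/p2p-app")
    changed = [p for p, d in current.items() if baseline.get(p) != d]
    if changed:
        print("program files changed on their own:", changed)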

Finally, a skilled person can figure out how to patch the program to disable the auto-update feature. He can then encapsulate this knowledge in a simple tool, so that other users can disable their auto-update by downloading the tool and double-clicking it. (This tool may violate copyright by modifying the program; but if we trusted users to obey copyright law we wouldn’t be having this conversation.)

The bottom line is that in computer security, possession is nine-tenths of control. Whoever has physical control of a computer ultimately controls what software runs on it. And users have physical control of their PCs.

A followup question is whether you can program the software to shut itself off if the user blocks updates for too long. As far as I know, nobody is claiming that Grokster had such a capability, but in principle a P2P system could be designed to (try to) work that way. This raises interesting issues too, but I’m approaching my word count limit so I’ll have to address them another day.

Michigan Email Registry as a Tax on Bulk Emailers

I wrote on Friday about the new registry of kids’ email addresses being set up by the state of Michigan. I wasn’t impressed. A commenter pointed out an important fact I missed: emailers have to pay a fee of $0.007 to screen each address against the list.

(One of the occupational hazards of blogging is the risk of posting without meticulous research. Fortunately such oversights can be corrected, as I’m doing here. My apologies to readers who were temporarily misled.)

I still worry that the list will leak information about kids’ email addresses to emailers. The fee will raise the cost of fishing expeditions designed to guess which addresses are on the list, to the point where it probably won’t be worthwhile for an emailer to launch blind guessing attacks. But emailers will still learn the status of addresses that are already on their lists.
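To see why blind guessing stops paying, assume a hit rate: if only one random guess in 100,000 lands on a registered address, each registered address costs hundreds of dollars to discover. A short sketch, with the hit rate purely assumed:

    FEE = 0.007                    # dollars per address screened
    hit_rate = 1 / 100_000         # assumed chance a random guess is registered
    cost_per_hit = FEE / hit_rate  # dollars to discover one registered address
    print(f"${cost_per_hit:,.0f} per registered address found")  # $700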

The main effect of the fee is to turn the whole program into a tax on bulk emailing. The tax operates even if only a few kids’ addresses are registered, so parents worried about leaking their kids’ addresses can safely decline to register them. So let’s look at this as a tax scheme rather than a child protection program.

It’s an oddly structured tax, charging a bulk emailer $0.007 for each email address he mails to within a thirty-day period. (Emailers must re-check their lists every thirty days to avoid a violation.) And it only applies to bulk emailers who promote products that kids aren’t allowed to own. That includes some products whose promotion the state is particularly eager to control (e.g., drugs and gambling) as well as some products that are innocuous but inappropriate for kids (e.g., vehicles).
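Some back-of-the-envelope arithmetic shows what the tax amounts to, for an illustrative list size. Because the screening must be repeated every thirty days, the fee is a recurring monthly cost per address, regardless of how many messages are sent.

    FEE_PER_ADDRESS = 0.007  # dollars, per address, per 30-day screening

    list_size = 1_000_000             # hypothetical mailing list
    monthly_cost = list_size * FEE_PER_ADDRESS
    annual_cost = monthly_cost * 12   # approximate; screening runs on a 30-day cycle

    print(f"monthly screening cost: ${monthly_cost:,.0f}")  # $7,000
    print(f"annual screening cost:  ${annual_cost:,.0f}")   # $84,000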

Why isn’t this structured simply as a tax on bulk email? We’d have to ask a free speech lawyer to be sure, but I wonder whether a tax on speech, and especially a tax that applies only to some kinds of speech, would be constitutionally suspect. Connecting it to the state interest in protecting kids from harmful commercial speech may provide some cover. (I could be off base here. If so, I’m sure commenters will set me straight.)

The tax seems unlikely to generate much revenue for the state. Including administrative costs, it may cost the state money. Presumably the goal is to change emailers’ incentives.

The incentive effect on outlaw spammers will be zero. They’ll just ignore this law, and add it to the list of laws they are already violating.

Gray hat spammers, who operate aboveboard and call themselves legitimate businesspeople, will see their costs increase. The tax imposes a fixed cost for each address on their list, independent of the number of messages sent to that address within a month. Adding to fixed costs will tend to cause consolidation in the gray hat spammer business – if two spammers share a list, they’ll only have to pay the tax once. It’s getting harder and harder to be a gray hat spammer already; this will only squeeze the gray hats further.

With the tax angle added, the Michigan program might turn out to be good policy. But I still wouldn’t register my own kid.