November 29, 2024

Controlling Software Updates

Randy Picker questions part of the computer science professors’ Grokster brief (of which I was a co-signer), in which we wrote:

Even assuming that Respondents have the right and ability to deliver such software to end users, there can be no way to ensure that software updates are installed, and stay installed. End users ultimately have control over which software is on their computers. If an end user does not want a software update, there is no way to make her take it.

This point mattered because Hollywood had suggested that Grokster should have used its software-update facility to deploy filtering software. (Apparently there is some dispute over whether Grokster had such a facility. I don’t know who is right on that factual question.)

Picker wonders whether ordinary users can really exercise this control in practice. As he notes, the user can disconnect from the net, but that’s too high a price for most people to pay. So how can users prevent updates?

The easiest method is simply to write-protect the program’s files or directories, so that they can’t be changed. Alternatively, the user can make a backup copy of the software (perhaps by copying it to another directory) and restore the backup when an update is installed.
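For the curious, both approaches are a few lines of code. This is a minimal sketch in Python; the install and backup directories are hypothetical, and a real setup would use whatever permissions mechanism the user's operating system provides:

```python
import os
import shutil
import stat

APP_DIR = "/opt/exampleapp"          # hypothetical install directory
BACKUP_DIR = "/opt/exampleapp.bak"   # pristine copy saved by the user

def write_protect(path):
    """Strip write permission from every file under path, so the
    program's updater cannot overwrite its own files."""
    for root, _dirs, files in os.walk(path):
        for name in files:
            full = os.path.join(root, name)
            mode = os.stat(full).st_mode
            os.chmod(full, mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))

def restore_backup(app_dir=APP_DIR, backup_dir=BACKUP_DIR):
    """Discard whatever an update installed and put the saved copy back."""
    shutil.rmtree(app_dir)
    shutil.copytree(backup_dir, app_dir)
```

Either way, the updated code never gets a foothold: it can't be written in the first place, or it gets thrown away afterward.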

Standard system security tools are also useful for controlling automatic updates. Autonomously self-updating programs look a lot like malicious code – the program code changes on its own (like a virus infection); the program makes network connections to odd places at odd times (like spyware); the program downloads and installs code without asking the user (like a malicious bot). Security tools specialize in identifying and blocking such behaviors, and the tools are reasonably configurable. Personal firewalls, for example, can block a program from making unapproved network connections. Some firewalls even do this by default.

Finally, a skilled person can figure out how to patch the program to disable the auto-update feature. He can then encapsulate this knowledge in a simple tool, so that other users can disable their auto-update by downloading the tool and double-clicking it. (This tool may violate copyright by modifying the program; but if we trusted users to obey copyright law we wouldn’t be having this conversation.)

The bottom line is that in computer security, possession is nine-tenths of control. Whoever has physical control of a computer can control what it does, including what software is installed on it. And users have physical control of their PCs.

A followup question is whether you can program the software to shut itself off if the user blocks updates for too long. As far as I know, nobody is claiming that Grokster had such a capability, but in principle a P2P system could be designed to (try to) work that way. This raises interesting issues too, but I’m approaching my word count limit so I’ll have to address them another day.

Michigan Email Registry as a Tax on Bulk Emailers

I wrote on Friday about the new registry of kids’ email addresses being set up by the state of Michigan. I wasn’t impressed. A commenter pointed out an important fact I missed: emailers have to pay a fee of $0.007 to screen each address against the list.

(One of the occupational hazards of blogging is the risk of posting without meticulous research. Fortunately such oversights can be corrected, as I’m doing here. My apologies to readers who were temporarily misled.)

I still worry that the list will leak information about kids’ email addresses to emailers. The fee will raise the cost of fishing expeditions designed to guess which addresses are on the list, to the point where it probably won’t be worthwhile for an emailer to launch blind guessing attacks. But emailers will still learn the status of addresses that are already on their lists.
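A back-of-the-envelope calculation shows why blind guessing stops paying. The hit rate below is purely an assumption for illustration:

```python
FEE = 0.007             # dollars per address screened (from the program's fee)
HIT_RATE = 1 / 100_000  # assumed odds a randomly guessed address is registered

# Expected screening fees paid per registered address discovered by guessing.
cost_per_discovery = FEE / HIT_RATE
print(cost_per_discovery)  # about $700 per address found -- an unattractive attack
```

Compare that to the near-zero cost of checking an address the emailer already has: for addresses already on his list, the fee buys him the registration status directly.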

The main effect of the fee is to turn the whole program into a tax on bulk emailing. The tax operates even if only a few kids’ addresses are registered, so parents worried about leaking their kids’ addresses can safely decline to register them. So let’s look at this as a tax scheme rather than a child protection program.

It’s an oddly structured tax, charging a bulk emailer $0.007 for each email address he mails to within a thirty-day period. (Emailers must re-check their lists every thirty days to avoid a violation.) And it only applies to bulk emailers who promote products that kids aren’t allowed to own. That includes some products whose promotion the state is particularly eager to control (e.g., drugs and gambling) as well as some products that are innocuous but inappropriate for kids (e.g., vehicles).
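Some illustrative arithmetic, using a hypothetical one-million-address list; note that the fee is charged per address per screening, regardless of how many messages go to that address:

```python
FEE_PER_ADDRESS = 0.007   # dollars per address screened (from the program's fee)
LIST_SIZE = 1_000_000     # hypothetical bulk mailing list
CHECKS_PER_YEAR = 12      # lists must be re-screened every thirty days

monthly_cost = FEE_PER_ADDRESS * LIST_SIZE    # about $7,000 per check
annual_cost = monthly_cost * CHECKS_PER_YEAR  # about $84,000 per year
print(monthly_cost, annual_cost)
```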

Why isn’t this structured simply as a tax on bulk email? We’d have to ask a free speech lawyer to be sure, but I wonder whether a tax on speech, and especially a tax that applies only to some kinds of speech, would be constitutionally suspect. Connecting it to the state interest in protecting kids from harmful commercial speech may provide some cover. (I could be off base here. If so, I’m sure commenters will set me straight.)

The tax seems unlikely to generate much revenue for the state. Including administrative costs, it may cost the state money. Presumably the goal is to change emailers’ incentives.

The incentive effect on outlaw spammers will be zero. They’ll just ignore this law, and add it to the list of laws they are already violating.

Gray hat spammers, who operate aboveboard and call themselves legitimate businesspeople, will see their costs increase. The tax imposes a fixed cost per address on their lists, independent of the number of messages sent to that address within a month. Adding to fixed costs will tend to cause consolidation in the gray hat spammer business – if two spammers share a list, they only have to pay the tax once. It’s getting harder and harder to be a gray hat spammer already; this will only squeeze the gray hats further.

With the tax angle added, the Michigan program might turn out to be good policy. But I still wouldn’t register my own kid.

Encryption and Copying

Last week I criticized Richard Posner for saying that labeling content and adding filtering to P2P apps would do much to reduce infringement on P2P networks. In responding to comments, Judge Posner unfortunately makes a very similar mistake:

Several pointed out correctly that tags on software files, indicating that the file is copyrighted, can probably be removed; and this suggests that only encryption, preventing copying, is likely to be effective in protecting the intellectual property rights of the owner of the copyright.

The error is rooted in the phrase “encryption, preventing copying”. Encryption does nothing to prevent copying – nor is it intended to. Encrypted data can be readily copied. Once decrypted, the plaintext data can again be readily copied. Encryption prevents one and only one thing – decryption without knowledge of the secret key.

It’s easy to see, then, why encryption has so little value in preventing infringement. You can ship content to customers in encrypted form, and the content won’t be decrypted in transit. But if you want to play the content, you have to decrypt it. And this means two things. First, the decrypted content will exist on the customer’s premises, where it can be readily copied. Second, the decryption key (and any other knowledge needed to decrypt) will exist on the customer’s premises, where it can be reverse-engineered. Either of these facts by itself would allow decrypted content to leak onto the Internet. So it’s not surprising that every significant encryption-based anticopying scheme has failed.
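The point is easy to demonstrate. The sketch below uses a toy XOR cipher – not real cryptography – to stand in for any encryption scheme; the key and content are made up:

```python
import itertools

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR "encryption" for illustration only -- not real cryptography.
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))

key = b"secret"
plaintext = b"some copyrighted content"
ciphertext = xor_cipher(plaintext, key)

# Copying the encrypted data is trivial -- encryption doesn't even try to stop this.
pirated_ciphertext = bytes(ciphertext)
assert pirated_ciphertext == ciphertext

# And any player device necessarily holds the key, so the plaintext gets
# recovered on the customer's premises, where it can be copied just as trivially.
decrypted = xor_cipher(pirated_ciphertext, key)
assert decrypted == plaintext
```

Swap in any real cipher for xor_cipher and the two assertions still hold: nothing about encryption interferes with either copy operation.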

We need to recognize that these are not failures of implementation. Nor do they follow from the (incorrect) claim that every code can be broken. The problem is more fundamental: encryption does not stop copying.

Why do copyright owners keep building encryption-based systems? The answer is not technical but legal and economic. Encryption does not prevent infringement, but it does provide a basis for legal stratagems. If content is encrypted, then anyone who wants to build a content-player device needs to know the decryption key. If you make the decryption key a trade secret, you can control entry to the market for players, by giving the key only to acceptable parties who will agree to your licensing terms. This ought to raise antitrust concerns in some cases, but the antitrust authorities have not shown much interest in scrutinizing such arrangements.

To his credit, Judge Posner recognizes the problems that result from anticompetitive use of encryption technology.

But this in turn presents the spectre of overprotection of [copyright owners’] rights. Copyright is limited in term and, more important (given the length of the term), is limited in other ways as well, such as by the right to make one copy for personal use and, in particular, the right of “fair use,” which permits a significant degree of unauthorized copying. To the extent that encryption creates an impenetrable wall to copying, it eliminates these limitations on copyright. In addition, encryption efforts generate countervailing circumvention efforts, touching off an arms race that may create more costs than benefits.

Once we recognize this landscape, we can get down to the hard work of defining a sensible policy.

RIAA Saber-Rattling against Antispoofing Technologies?

The RIAA has fired a shot across the bow of P2P companies whose products incorporate anti-spoofing technologies, according to a story (subscribers only) in Friday’s National Journal Tech Daily, by Sarah Lai Stirland. The statement came at a Washington panel on the implications of the Grokster decision.

“There’s definitely a lot of spoofing going on on the networks, and nobody thinks that that’s not fair game,” said Cary Sherman, president of the Recording Industry Association of America, on Friday. “Some networks actually put out some anti-spoofing filters to enable people to get around the spoofs, and that may well be a sign of intent.”

The comment came in answer to a question about the kinds of lawsuits that might be brought in the wake of the high court’s decision.

What Sherman is suggesting is that if a P2P vendor includes anti-spoofing technology in their product, that action demonstrates an intent to facilitate infringement, making the vendor liable as an indirect infringer under Grokster.

Perhaps Sherman is asserting that anti-spoofing technologies lack substantial noninfringing uses, and so do not qualify for the Sony Betamax safe harbor. This is wrong in general. It’s well known that some of the files on P2P systems are of low audio or video quality, or are mislabeled altogether. This is true of both infringing and non-infringing files. A technology that can predict which files will have low quality, or which users will be sources of low quality files, will help users find what they want. Spoof files are just low quality files that are inserted deliberately, so technologies that reject low-quality files will tend to reject spoof files, and vice versa.
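Here is a sketch of what such a generic quality filter might look like. The metadata fields and thresholds are hypothetical; the point is that nothing in the logic is infringement-specific:

```python
from dataclasses import dataclass

@dataclass
class FileInfo:
    name: str
    bitrate_kbps: int    # reported audio/video bitrate
    bad_reports: int     # user complaints ("silent", "wrong song", ...)
    downloads: int

def looks_low_quality(f: FileInfo) -> bool:
    """Reject files with a very low bitrate or a high complaint rate.
    Spoof files are deliberately low-quality, so the same test catches
    them as a side effect."""
    complaint_rate = f.bad_reports / max(f.downloads, 1)
    return f.bitrate_kbps < 64 or complaint_rate > 0.5

results = [
    FileInfo("home_movie.avi", 320, 1, 200),  # legitimate, high quality
    FileInfo("song.mp3", 32, 150, 180),       # spoof: noise at a low bitrate
]
kept = [f.name for f in results if not looks_low_quality(f)]
print(kept)  # ['home_movie.avi']
```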

Of course some particular vendor might introduce such a filter for bad reasons, because they want to abet infringement. But one cannot infer such intent merely from the presence of the filter.

One popular interpretation of Grokster is that the Court said a company’s overall business practices, rather than its technology, will determine its liability. That seems to follow from the Court’s refusal to revise the Sony Betamax rule. And yet Sherman’s complaint here is all about technology choices. Is this the precursor to lawsuits against undesired technologies?

Michigan Debuts Counterproductive Do-Not-Spam List for Kids

The state of Michigan has a new registry of kids’ email addresses in the state. Parents can put their kids’ addresses on the list. It’s illegal to send to addresses on the list any email solicitations for products that kids aren’t allowed to buy (alcohol, guns, gambling, vehicles, etc.). The site has been accepting registrations since July 1, and emailers must comply starting August 1.

This is a kids’ version of the Do-Not-Email list that the Federal Trade Commission considered last year. The FTC decided, wisely, not to proceed with its list. (Disclosure: I worked with the FTC as a consultant on this issue.) What bothered the FTC (and should have bothered Michigan) about this issue is the possibility that unscrupulous emailers will use the list as a source of addresses to target with spam. In the worst case, signing up for the list could make your spam problem worse, not better.

The Michigan system doesn’t just give the list to emailers – that would be a disaster – but instead provides a service that allows emailers to upload their mailing lists to a state-run server that sends the list back after removing any registered addresses. (Emailers who are sufficiently trusted by the state can apparently get a list of hashed addresses, allowing them to scrub their own lists.)

The problem is that an emailer can compare his initial list against the scrubbed version. Any address that is on the former but not the latter must be the address of a registered kid. By this trick the emailer can build a list of kids’ email addresses. The state may outlaw this, but it seems hard to stop it from happening, especially because the state appears to require emailers everywhere in the world to scrub their lists.
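The comparison attack takes only a few lines. Everything below – the addresses, the registry contents – is hypothetical, but the set arithmetic is the whole trick:

```python
# What the emailer submits to the scrubbing service.
emailer_list = {"alice@example.com", "bob@example.com", "carol@example.com"}

def state_scrub(addresses, registry):
    # What the state's service does: return the list minus registered addresses.
    return addresses - registry

registry = {"bob@example.com"}   # hypothetical registered kid
scrubbed = state_scrub(emailer_list, registry)

# The difference between the submitted and returned lists reveals exactly
# which of the emailer's addresses belong to registered kids.
leaked_kids = emailer_list - scrubbed
print(leaked_kids)  # {'bob@example.com'}
```

The hashed-list variant fares no better: an emailer holding hashed registry entries can hash each address on his own list and look for matches, leaking the same information.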

If I lived in Michigan, I wouldn’t register my kid’s address.

UPDATE (July 13): A commenter points out that the Michigan program imposes a charge of $0.007 per address on emailers. I missed this fact originally, and it changes the analysis significantly. See my later post for details.