November 30, 2024

Michigan Email Registry as a Tax on Bulk Emailers

I wrote on Friday about the new registry of kids’ email addresses being set up by the state of Michigan. I wasn’t impressed. A commenter pointed out an important fact I missed: emailers have to pay a fee of $0.007 to screen each address against the list.

(One of the occupational hazards of blogging is the risk of posting without meticulous research. Fortunately such oversights can be corrected, as I’m doing here. My apologies to readers who were temporarily misled.)

I still worry that the list will leak information about kids’ email addresses to emailers. The fee will raise the cost of fishing expeditions designed to guess which addresses are on the list, to the point where it probably won’t be worthwhile for an emailer to launch blind guessing attacks. But emailers will still learn the status of addresses that are already on their lists.

The main effect of the fee is to turn the whole program into a tax on bulk emailing. The tax operates even if only a few kids’ addresses are registered, so parents worried about leaking their kids’ addresses can safely decline to register them. So let’s look at this as a tax scheme rather than a child protection program.

It’s an oddly structured tax, charging a bulk emailer $0.007 for each email address he mails to within a thirty-day period. (Emailers must re-check their lists every thirty days to avoid a violation.) And it only applies to bulk emailers who promote products that kids aren’t allowed to own. That includes some products whose promotion the state is particularly eager to control (e.g., drugs and gambling) as well as some products that are innocuous but inappropriate for kids (e.g., vehicles).
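The arithmetic of the fee is worth making concrete. Here's a quick sketch (the function name and the million-address example are mine, not Michigan's):

```python
# Michigan's fee: $0.007 per address screened, and lists must be
# re-screened every thirty days to stay in compliance.
FEE_PER_ADDRESS = 0.007

def monthly_screening_cost(num_addresses):
    """Cost of one 30-day screening pass over a mailing list."""
    return num_addresses * FEE_PER_ADDRESS

# A million-address list costs about $7,000 per month to screen.
cost = monthly_screening_cost(1_000_000)
```

So a bulk emailer with a million addresses pays roughly $7,000 every thirty days just to stay legal.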

Why isn’t this structured simply as a tax on bulk email? We’d have to ask a free speech lawyer to be sure, but I wonder whether a tax on speech, and especially a tax that applies only to some kinds of speech, would be constitutionally suspect. Connecting it to the state interest in protecting kids from harmful commercial speech may provide some cover. (I could be off base here. If so, I’m sure commenters will set me straight.)

The tax seems unlikely to generate much revenue for the state. Including administrative costs, it may cost the state money. Presumably the goal is to change emailers’ incentives.

The incentive effect on outlaw spammers will be zero. They’ll just ignore this law, and add it to the list of laws they are already violating.

Gray hat spammers, who operate aboveboard and call themselves legitimate businesspeople, will see their costs increase. The tax imposes a fixed cost per address on their lists, independent of the number of messages sent to that address within a month. Adding to fixed costs will tend to cause consolidation in the gray hat spammer business – if two spammers share a list, they’ll only have to pay the tax once. It’s getting harder and harder to be a gray hat spammer already; this will only squeeze the gray hats further.
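The consolidation incentive is easy to illustrate with made-up numbers (the addresses and list sizes below are hypothetical):

```python
# Since the fee is charged per address screened, two spammers who merge
# their lists pay for overlapping addresses -- and for the screening
# operation itself -- only once.
FEE = 0.007  # dollars per address per 30-day screening

def screening_cost(addresses):
    return len(addresses) * FEE

list_a = {f"a{i}@example.com" for i in range(600)}
# list_b shares 200 addresses with list_a
list_b = {f"b{i}@example.com" for i in range(500)} | {f"a{i}@example.com" for i in range(200)}

separate = screening_cost(list_a) + screening_cost(list_b)  # two separate passes
shared = screening_cost(list_a | list_b)                    # one merged pass
```

The merged operation screens 1,100 distinct addresses instead of paying for 1,300, so sharing always wins whenever the lists overlap.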

With the tax angle added, the Michigan program might turn out to be good policy. But I still wouldn’t register my own kid.

Encryption and Copying

Last week I criticized Richard Posner for saying that labeling content and adding filtering to P2P apps would do much to reduce infringement on P2P networks. In responding to comments, Judge Posner unfortunately makes a very similar mistake:

Several pointed out correctly that tags on software files, indicating that the file is copyrighted, can probably be removed; and this suggests that only encryption, preventing copying, is likely to be effective in protecting the intellectual property rights of the owner of the copyright.

The error is rooted in the phrase “encryption, preventing copying”. Encryption does nothing to prevent copying – nor is it intended to. Encrypted data can be readily copied. Once decrypted, the plaintext data can again be readily copied. Encryption prevents one and only one thing – decryption without knowledge of the secret key.

It’s easy to see, then, why encryption has so little value in preventing infringement. You can ship content to customers in encrypted form, and the content won’t be decrypted in transit. But if you want to play the content, you have to decrypt it. And this means two things. First, the decrypted content will exist on the customer’s premises, where it can be readily copied. Second, the decryption key (and any other knowledge needed to decrypt) will exist on the customer’s premises, where it can be reverse-engineered. Either of these facts by itself would allow decrypted content to leak onto the Internet. So it’s not surprising that every significant encryption-based anticopying scheme has failed.
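The point can be shown in a few lines of code. This is a toy XOR "cipher", not any real DRM scheme – the names and key are invented – but the lesson holds for strong cryptography too: both the ciphertext and the decrypted plaintext are ordinary bytes, and ordinary bytes copy perfectly.

```python
# Toy illustration: encryption protects content against *reading* in
# transit, but does nothing to prevent *copying* at any stage.
from itertools import cycle

def xor_crypt(data, key):
    """Trivial XOR stream 'cipher' -- encrypting and decrypting are the same op."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret"
content = b"some copyrighted song data"
ciphertext = xor_crypt(content, key)

pirated_ciphertext = ciphertext[:]           # copying encrypted data: trivial
player_output = xor_crypt(ciphertext, key)   # the player must decrypt to play...
pirated_plaintext = player_output[:]         # ...and then the plaintext copies too
```

Nothing in the scheme ever refuses to copy; the only thing it refuses to do is decrypt without the key – and the player, by design, has the key.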

We need to recognize that these are not failures of implementation. Nor do they follow from the (incorrect) claim that every code can be broken. The problem is more fundamental: encryption does not stop copying.

Why do copyright owners keep building encryption-based systems? The answer is not technical but legal and economic. Encryption does not prevent infringement, but it does provide a basis for legal stratagems. If content is encrypted, then anyone who wants to build a content-player device needs to know the decryption key. If you make the decryption key a trade secret, you can control entry to the market for players, by giving the key only to acceptable parties who will agree to your licensing terms. This ought to raise antitrust concerns in some cases, but the antitrust authorities have not shown much interest in scrutinizing such arrangements.

To his credit, Judge Posner recognizes the problems that result from anticompetitive use of encryption technology.

But this in turn presents the spectre of overprotection of [copyright owners’] rights. Copyright is limited in term and, more important (given the length of the term), is limited in other ways as well, such as by the right to make one copy for personal use and, in particular, the right of “fair use,” which permits a significant degree of unauthorized copying. To the extent that encryption creates an impenetrable wall to copying, it eliminates these limitations on copyright. In addition, encryption efforts generate countervailing circumvention efforts, touching off an arms race that may create more costs than benefits.

Once we recognize this landscape, we can get down to the hard work of defining a sensible policy.

RIAA Saber-Rattling against Antispoofing Technologies?

The RIAA has fired a shot across the bow of P2P companies whose products incorporate anti-spoofing technologies, according to a story (subscribers only) in Friday’s National Journal Tech Daily, by Sarah Lai Stirland. The statement came at a Washington panel on the implications of the Grokster decision.

“There’s definitely a lot of spoofing going on on the networks, and nobody thinks that that’s not fair game,” said Cary Sherman, president of the Recording Industry Association of America, on Friday. “Some networks actually put out some anti-spoofing filters to enable people to get around the spoofs, and that may well be a sign of intent.”

The comment came in answer to a question about the kinds of lawsuits that might be brought in the wake of the high court’s decision.

What Sherman is suggesting is that if a P2P vendor includes anti-spoofing technology in its product, that action demonstrates an intent to facilitate infringement, making the vendor liable as an indirect infringer under Grokster.

Perhaps Sherman is asserting that anti-spoofing technologies lack substantial noninfringing uses, and so do not qualify for the Sony Betamax safe harbor. This is wrong in general. It’s well known that some of the files on P2P systems are of low audio or video quality, or are mislabelled altogether. This is true of both infringing and non-infringing files. A technology that can predict which files will have low quality, or which users will be sources of low quality files, will help users find what they want. Spoof files are just low quality files that are inserted deliberately, so technologies that reject low-quality files will tend to reject spoof files, and vice versa.

Of course some particular vendor might introduce such a filter for bad reasons, because they want to abet infringement. But one cannot infer such intent merely from the presence of the filter.

One popular interpretation of Grokster is that the Court said a company’s overall business practices, rather than its technology, will determine its liability. That seems to follow from the Court’s refusal to revise the Sony Betamax rule. And yet Sherman’s complaint here is all about technology choices. Is this the precursor to lawsuits against undesired technologies?

Michigan Debuts Counterproductive Do-Not-Spam List for Kids

The state of Michigan has a new registry of kids’ email addresses in the state. Parents can put their kids’ addresses on the list. It’s illegal to send addresses on the list any email solicitations for products that kids aren’t allowed to buy (alcohol, guns, gambling, vehicles, etc.). The site has been accepting registrations since July 1, and emailers must comply starting August 1.

This is a kids’ version of the Do-Not-Email list that the Federal Trade Commission considered last year. The FTC decided, wisely, not to proceed with its list. (Disclosure: I worked with the FTC as a consultant on this issue.) What bothered the FTC (and should have bothered Michigan) about this issue is the possibility that unscrupulous emailers will use the list as a source of addresses to target with spam. In the worst case, signing up for the list could make your spam problem worse, not better.

The Michigan system doesn’t just give the list to emailers – that would be a disaster – but instead provides a service that allows emailers to upload their mailing lists to a state-run server that sends the list back after removing any registered addresses. (Emailers who are sufficiently trusted by the state can apparently get a list of hashed addresses, allowing them to scrub their own lists.)
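Michigan hasn't published the details of the hashed-address option, but the general shape of such a scheme is easy to sketch (the use of SHA-256, the normalization step, and the example addresses are all my assumptions):

```python
# Hypothetical sketch of hash-based scrubbing: the state releases hashes
# of registered addresses rather than the addresses themselves.
import hashlib

def hash_address(addr):
    # Normalize, then hash, so trivial case differences don't defeat matching.
    return hashlib.sha256(addr.strip().lower().encode()).hexdigest()

# What a trusted emailer might receive from the state:
registry_hashes = {hash_address("kid@example.com")}

mailing_list = ["kid@example.com", "Adult@Example.com"]
scrubbed = [a for a in mailing_list if hash_address(a) not in registry_hashes]
```

Note that hashing only hides addresses the emailer doesn't already know: for any address already on his list, he can hash it himself and test membership, learning exactly which of his addresses belong to registered kids.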

The problem is that an emailer can compare his initial list against the scrubbed version. Any address that is on the former but not the latter must be the address of a registered kid. By this trick the emailer can build a list of kids’ email addresses. The state may outlaw this, but it seems hard to stop it from happening, especially because the state appears to require emailers everywhere in the world to scrub their lists.
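The leak is simple set arithmetic (the addresses below are hypothetical):

```python
# The emailer never sees the registry, but can reconstruct its overlap
# with his own list by differencing the lists he does see.
original = {"adult1@example.com", "kid@example.com", "adult2@example.com"}
registry = {"kid@example.com"}        # held by the state, never shared

scrubbed = original - registry        # what the scrubbing service returns
leaked_kids = original - scrubbed     # what the emailer can infer anyway
```

No amount of server-side care changes this: the difference between input and output is, by construction, exactly the set of registered addresses on the emailer's list.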

If I lived in Michigan, I wouldn’t register my kid’s address.

UPDATE (July 13): A commenter points out that the Michigan program imposes a charge of $0.007 per address on emailers. I missed this fact originally, and it changes the analysis significantly. See my later post for details.

Chess Computer Crushes Elite Human Player

Last week Hydra, a chess-playing computer, completed its rout of Michael Adams, the seventh-ranked human player in the world. Hydra won five of six games, and Adams barely escaped with a draw in the other game. ChessBase has the details, including a page where you can play through the six games.

It’s time to admit that computers play better chess than people.

This may seem inevitable in hindsight, but for the longest time people insisted that human chess players had something special which computers could never duplicate. That was true, up to a point. Computers have never succeeded at approaching chess the way people do. The best human players make subtle, intuitive judgments that are probably based on pattern-matching deep in their neural circuitry. Often an elite player cannot verbalize how he knows that one configuration of pieces is dangerous when another nearly identical configuration is not. He just knows. He does calculate in the “if he does this, I’ll do that, then he’ll do this, …” fashion, but only when necessary.

Every attempt to transplant human “intelligence” into a chess computer has failed miserably. Computers understand very little about chess. They rely instead on rudimentary judgment about chess positions, coupled with prodigious calculation, looking ahead at hundreds of millions or billions of possible board positions.

Chess players classify game situations into two categories, “tactical” and “positional”. Tactical situations feature direct, violent clashes between pieces, and call mostly for calculation, with intuition as a backstop. Positional situations are slow and subtle, requiring deep judgments and long maneuvers. Everybody expected computers to excel at tactics. The big surprise is that the computer approach seems to work well in positional situations too. Somehow, calculation can substitute for judgment, even when conditions seem to require judgment.

This is not to say that it’s easy to create a chess computer that plays as well as Hydra. Quite the contrary. Great effort has been spent on perfecting computer chess algorithms. That effort has gone not to teaching computers about chess, but to improving the algorithms for deciding when to cut off calculations and when to calculate more deeply. Indeed, algorithmic improvements have been a much bigger factor even than Moore’s Law over the years.
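The core of the "calculate, then cut off" approach is depth-limited search with alpha-beta pruning. Here's a minimal sketch – not Hydra's actual algorithm, and the game tree here is abstract rather than chess:

```python
# Depth-limited negamax with alpha-beta pruning. An int is a leaf score
# (from the viewpoint of the player to move there); a list is a position
# with its child positions.
def negamax(node, depth, alpha, beta):
    if isinstance(node, int):       # leaf: static evaluation
        return node
    if depth == 0:                  # cutoff: stop calculating here
        return 0                    # (a real engine evaluates the position)
    best = float("-inf")
    for child in node:
        best = max(best, -negamax(child, depth - 1, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:           # prune: the opponent would never allow this line
            break
    return best

tree = [[3, 5], [2, 9]]             # tiny two-ply example
best_score = negamax(tree, 3, float("-inf"), float("inf"))
```

The algorithmic improvements mentioned above live mostly in those two decision points – when to cut off (the `depth == 0` test, refined into selective extensions and reductions) and what to prune (the `alpha >= beta` test, sharpened by better move ordering).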

Chess computers have succeeded by ignoring what human chessplayers do best, and doing instead what computers do best. And what computers do best is to run programs written by very clever human programmers.