
Judge Strikes Down COPA

Last week a Federal judge struck down COPA, a law requiring adult websites to use age verification technology. The ruling by Senior Judge Lowell A. Reed Jr. held COPA unconstitutional because it is more restrictive of speech (but no more effective) than the alternative of allowing private parties to use filtering software.

This is the end of a long legal process that started with the passage of COPA in 1998. The ACLU, along with various authors and publishers, immediately filed suit challenging COPA, and Judge Reed blocked enforcement of the law. The case was appealed up to the Supreme Court, which generally supported Judge Reed’s ruling but remanded the case to him for further proceedings because enough time had passed that the technological facts might have changed. Judge Reed held another trial last fall, at which I testified. Now he has ruled, again, that COPA is unconstitutional.

The policy issue behind COPA is how to keep kids from seeing harmful-to-minors (HTM) material. Some speech is legally obscene, which means it is so icky that it does not qualify for First Amendment free speech protection. HTM material is not obscene – adults have a legally protected right to read it – but is icky enough that kids don’t have a right to see it. In other words, there is a First Amendment right to transmit HTM material to adults but not to kids.

Congress has tried more than once to pass laws keeping kids away from HTM material online. The first attempt, the Communications Decency Act (CDA), was struck down by the Supreme Court in 1997. When Congress responded by passing COPA in 1998, it used the Court’s CDA ruling as a roadmap in writing the new law, in the hope that doing so would make COPA consistent with free speech.

Unlike the previous CDA ruling, Judge Reed’s new COPA ruling doesn’t seem to give Congress a roadmap for creating a new statute that would pass constitutional muster. COPA required sites publishing HTM material to use age screening technology to try to keep kids out. The judge compared COPA’s approach to an alternative in which individual computer owners had the option of using content filtering software. He found that COPA’s approach was more restrictive of protected speech and less effective in keeping kids away from HTM material. That was enough to make COPA, as a content-based restriction on speech, unconstitutional.

Two things make the judge’s ruling relatively roadmap-free. First, it is based heavily on factual findings that Congress cannot change – things like the relative effectiveness of filtering and the amount of HTM material that originates overseas beyond the effective reach of U.S. law. (Filtering operates on all material, while COPA’s requirements could have been ignored by many overseas sites.) Second, the alternative it offers requires only voluntary private action, not legislation.

Congress has already passed laws requiring schools and libraries to use content filters, as a condition of getting Federal funding and with certain safeguards that are supposed to protect adult access. The courts have upheld such laws. It’s not clear what more Congress can do. Judge Reed’s filtering alternative is less restrictive because it is voluntary, so that computers that aren’t used by kids, or on which parents have other ways of protecting kids against HTM material, can get unfiltered access. An adult who wants to get HTM material will be able to get it.

Doubtless Congress will make noise about this issue in the upcoming election year. Protecting kids from the nasty Internet is too attractive politically to pass up. Expect hearings to be held and bills to be introduced; but the odds that we’ll get a new law that makes much difference seem pretty low.

Testifying at E-Voting Hearing

I’m testifying about the Holt e-voting bill this morning, at a hearing of the U.S. House of Representatives, Committee on House Administration, Subcommittee on Elections. I haven’t found a webcast URL, but you can read my written testimony.

OLPC: Too Much Innovation?

The One Laptop Per Child (OLPC) project is rightly getting lots of attention in the tech world. The idea – putting serious computing and communication technologies into the hands of kids all over the world – could be transformative, if it works.

Recently our security reading group at Princeton studied BitFrost, the security architecture for OLPC. After the discussion I couldn’t help thinking that BitFrost seemed too innovative.

“Too innovative?” you ask. What’s wrong with innovation? Let me explain. Though tech pundits often praise “innovation” in the abstract, the fact is that most would-be innovations fail. In engineering, most new ideas either don’t work or aren’t really an improvement over the status quo. Sometimes the same “new” idea pops up over and over, reinvented each time by someone who doesn’t know about the idea’s past failures.

In the long run, failures are weeded out and the few successes catch on, so the world gets better. But in the short run most innovations fail, which makes the urge to innovate dangerous.

Fred Brooks, in his groundbreaking The Mythical Man-Month, referred to the second-system effect:

An architect’s first work is apt to be spare and clean. He knows he doesn’t know what he’s doing, so he does it carefully and with great restraint.

As he designs the first work, frill after frill and embellishment after embellishment occur to him. These get stored away to be used “next time.” Sooner or later the first system is finished, and the architect, with firm confidence and a demonstrated mastery of that class of systems, is ready to build a second system.

This second is the most dangerous system a man ever designs. When he does his third and later ones, his prior experiences will confirm each other as to the general characteristics of such systems, and their differences will identify those parts of his experience that are particular and not generalizable.

The general tendency is to over-design the second system, using all the ideas and frills that were cautiously sidetracked on the first one. The result, as Ovid says, is a “big pile.”

The danger, in the second system, is the desire to reinvent everything, to replace the flawed but serviceable approaches of the past. The third-system designer, having learned his (or her – things have changed since Brooks wrote) lesson, knows to innovate freely in the lab, but in a product only where innovation is necessary.

But here’s the OLPC security specification (lines 115-118):

What makes the OLPC XO laptops radically different is that they represent the first time that all these security measures have been carefully put together on a system slated to be introduced to tens or hundreds of millions of users.

OLPC needs to be innovative in some areas, but I don’t think security is one of them. Sure, it would be nice to have a better security model, but until we know that model is workable in practice, it seems risky to try it out on millions of kids.

Viacom, YouTube, and Privacy

Yesterday’s top tech policy story was the copyright lawsuit filed by Viacom, the parent company of Comedy Central, MTV, and Paramount Pictures, against YouTube and its owner Google. Viacom’s complaint accuses YouTube of direct, contributory, and vicarious copyright infringement, and of inducing infringement. The complaint tries to paint YouTube as a descendant of Napster and Grokster.

Viacom argues generally that YouTube should have done more to help it detect and stop infringement. Interestingly, Viacom points to the privacy features of YouTube as part of the problem, in paragraph 43 of the complaint:

In addition, YouTube is deliberately interfering with copyright owners’ ability to find infringing videos even after they are added to YouTube’s library. YouTube offers a feature that allows users to designate “friends” who are the only persons allowed to see videos they upload, preventing copyright owners from finding infringing videos with this limitation…. Thus, Plaintiffs cannot necessarily find all infringing videos to protect their rights through searching, even though that is the only avenue YouTube makes available to copyright owners. Moreover, YouTube still makes the hidden infringing videos available for viewing through YouTube features like the embed, share, and friends functions. For example, many users are sharing full-length copies of copyrighted works and stating plainly in the description “Add me as a friend to watch.”

Users have many good reasons to want to limit access to noninfringing uploaded videos, for example to make home movies available to family members but not to the general public. It would be a shame, and YouTube would be much less useful, if there were no way to limit access. Equivalently, if any copyright owner could override the limits, there would be no privacy anymore – remember that we’re all copyright owners.
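
To make the feature concrete, here is a minimal sketch, in Python, of the kind of friends-only visibility check the complaint describes. The Video structure and its field names are hypothetical illustrations for this post, not YouTube’s actual implementation, which isn’t public.

    from dataclasses import dataclass, field

    @dataclass
    class Video:
        owner: str
        private: bool = False                      # visible only to designated friends?
        friends: set[str] = field(default_factory=set)

    def can_view(video: Video, viewer: str) -> bool:
        """Return True if the viewer may watch the video."""
        if not video.private:
            return True                            # public videos: anyone may watch
        return viewer == video.owner or viewer in video.friends

    # A home movie shared only with a family member:
    home_movie = Video(owner="alice", private=True, friends={"bob"})
    assert can_view(home_movie, "bob")             # the designated friend can watch
    assert not can_view(home_movie, "stranger")    # everyone else, including a
                                                   # copyright owner's crawler, cannot

Notice that the very same check that hides an infringing upload from a copyright owner’s search also hides a family’s home movie from strangers; the code cannot tell the difference.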

Is Viacom really arguing that YouTube shouldn’t let people limit access to uploaded material? Viacom doesn’t say this directly, though it is one plausible reading of their argument. Another reading is that they think YouTube should have an extra obligation to police and/or filter material that isn’t viewable by the public.

Either way, it’s troubling to see YouTube’s privacy features used to attack the site’s legality, when we know those features have plenty of uses other than hiding infringement. Will future entrepreneurs shy away from providing private communication, out of fear that it will be used to brand them as infringers? If the courts aren’t careful, that will be one effect of Viacom’s suit.

Protect E-Voting — Support H.R. 811

After a long fight, we have reached the point where a major e-voting reform bill has a chance to become U.S. law. I’m referring to H.R. 811, sponsored by my Congressman, Rush Holt, and co-sponsored by many others. After reading the bill carefully, and discussing the arguments of its supporters and critics with students and colleagues, I am convinced that it is a very good bill that deserves our support.

The main provisions of the bill would require e-voting technologies to have a paper ballot that is (a) voter-verified, (b) privacy-preserving, and (c) durable. Paper ballots would be hand-recounted, and compared to the electronic count, at randomly-selected precincts after every election.

The most important decision in writing such a bill is which technologies should be categorically banned. The bill would allow (properly designed) optical scan systems, touch-screen systems with a suitable paper trail, and all-paper systems. Paperless touchscreens and lever machines would be banned.

Some activists have argued that the bill doesn’t go far enough. A few say that all use of computers in voting should be banned. I think that’s a mistake, because it sacrifices the security benefits computers can provide, if they’re used well.

Others argue that touch-screen voting machines should be banned even if they have good paper trails. I think that goes too far. Touchscreens can be a useful part of a good voting system, if they’re used in the right context and with a good paper trail. We shouldn’t let the worst of today’s insecure paperless touchscreens – machines that should never have been certified in the first place, and anyway would be banned by the Holt Bill for lacking a suitable paper ballot – sour us on the better uses of touchscreens that are possible.

One of the best parts of the bill is its random audit requirement, under which the paper ballots in at least 3% of precincts (more in close races) are hand counted and compared to the electronic records. This serves two useful purposes: detecting error or fraud that might have affected the election result, and providing a routine quality-control check on the vote-counting process. This part of the bill reflects a balance between the states’ freedom to run their own elections and the national interest in sound election management.
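
To illustrate the mechanics, here is a short Python sketch of such an audit. The 3% baseline comes from the bill; the specific escalation tiers for close races shown here are illustrative assumptions, not the bill’s actual thresholds.

    import random

    def audit_sample(precincts: list[str], margin: float) -> list[str]:
        """Randomly choose precincts whose paper ballots get a hand count.

        margin is the winning margin as a fraction of votes cast.
        """
        if margin < 0.01:
            rate = 0.10    # assumed tier for very close races (illustrative only)
        elif margin < 0.02:
            rate = 0.05    # assumed tier (illustrative only)
        else:
            rate = 0.03    # the bill's baseline: 3% of precincts
        k = max(1, round(rate * len(precincts)))
        return random.sample(precincts, k)

    def discrepancies(selected, paper, electronic):
        """Flag precincts where the hand count disagrees with the machine tally."""
        return {p: (paper[p], electronic[p])
                for p in selected if paper[p] != electronic[p]}

The essential property is that the precincts are drawn at random after the electronic results are reported, so anyone who tampered with the electronic count cannot know in advance which precincts will be checked against the paper.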

On the whole this is a good, strong bill. I support it, and I urge you to support it too.