I haven’t been posting much lately, due to a high-intensity project that has sucked up all of my time. But now that’s over, so I should return to normal posting pace soon.
Why Aren't Virus Attacks Worse?
Dan Simon notes a scary NYT op-ed, “Terrorism and the Biology Lab,” by Henry C. Kelly. Kelly argues convincingly that ordinary molecular biology students will soon be able to make evil bio-weapons. Simon points out the analogy to computer viruses, which are easily made and easily released. If serious bio-weapons become as common as computer viruses, we are indeed in deep trouble.
Eric Rescorla responds by noting that the computer viruses we actually see do relatively little damage, at least compared to what they might have done. Really malicious viruses, that is, ones engineered to do maximum damage, are rare. What we see instead are viruses designed to get attention and to show that the author could have done damage. The most likely explanation is that the authors of well-known viruses have written them as a sort of (twisted) intellectual exercise rather than out of spite. [By the way, don’t miss the comments on Eric’s post.]
This reminds me of a series of conversations I had a few years ago with a hotshot mo-bio professor, about the national-security implications of bio-attacks versus cyber-attacks. I started out convinced that the cyber-attack threat, while real, was overstated; but bio-attacks terrified me. He had the converse view, that bio-attacks were possible but overhyped, while cyber-attacks were the real nightmare scenario. Each of us tried to reassure the other that really large-scale malicious attacks of the type we knew best (cyber- for me, bio- for him) were harder to carry out, and less likely, than commonly believed.
It seems to me that both of us, having spent many days in the lab, understood how hard it really is to make a novel, sophisticated technology work as planned. Since nightmare attacks are, by definition, novel and sophisticated and thus not fully testable in advance, the odds are pretty high that something would go “wrong” for the attacker. With a better understanding of how software can go wrong, I fully appreciated the cyber-attacker’s problem; and with a better understanding of how bio-experiments can go wrong, my colleague fully appreciated the bio-attacker’s problem. If there is any reassurance here, it is in the likelihood that any would-be attacker will miss some detail and his attack will fizzle.
Aimster Loses
As expected, the Seventh Circuit Court of Appeals has upheld a lower court’s temporary injunction against the Aimster file-sharing service. The Court’s opinion was written by Judge Richard Posner.
I noted three interesting things in the opinion. First, the court seemed unimpressed with Aimster’s legal representation. At several points the opinion notes arguments that Aimster could have made but didn’t, or important factual questions on which Aimster failed to present any evidence. For example, Aimster apparently never presented evidence that its system is ever used for noninfringing purposes.
Second, the opinion states, in a surprisingly offhand manner, that it’s illegal to fast-forward through the commercials when you’re replaying a taped TV show. “Commercial-skipping … amounted to creating an unauthorized derivative work … namely a commercial-free copy that would reduce the copyright owner’s income from his original program…”
Finally, the opinion makes much of the fact that Aimster traffic uses end-to-end encryption so that the traffic cannot be observed by anybody, including Aimster itself. Why did Aimster choose a design that prevented Aimster itself from seeing the traffic? The opinion assumes that Aimster did this because it wanted to remain ignorant of the infringing nature of the traffic. That may well be the real reason for Aimster’s use of encryption.
But there is another good reason to use end-to-end encryption in such a service. Users might want to transfer sensitive but noninfringing materials. If so, they would want those transfers to be protected by encryption. The transfer could in principle be decrypted and then reencrypted at an intermediate point such as an Aimster server. Decrypting at the server would indeed allow Aimster to detect infringement, but it has several engineering disadvantages, including the extra processing time required for the additional decryption and reencryption, and the risk of the data being compromised in case of a break-in at the server. The opinion hints at all of this; but apparently Aimster did not offer arguments on this point.
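To make the tradeoff concrete, here is a toy sketch (using a deliberately insecure XOR-keystream cipher for illustration, and made-up key names) contrasting the two designs: with end-to-end encryption the relay forwards ciphertext it cannot read, while the decrypt-and-reencrypt design requires two extra cipher passes and exposes the plaintext at the server.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudorandom bytes from key (toy construction, NOT secure)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR with a keystream: the same function encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(ks, data))

msg = b"sensitive but noninfringing file"

# End-to-end: sender and receiver share a key; the relay only forwards
# ciphertext and cannot inspect the transfer.
e2e_key = b"shared-by-endpoints"
ct = xor_cipher(e2e_key, msg)
assert xor_cipher(e2e_key, ct) == msg   # only the endpoints recover msg

# Hop-by-hop: the relay decrypts with the sender-relay key, inspects the
# plaintext, then reencrypts with the relay-receiver key. That is two
# extra cipher passes, and the plaintext now sits on the server.
k1, k2 = b"sender-to-relay", b"relay-to-receiver"
hop1 = xor_cipher(k1, msg)
plaintext_at_relay = xor_cipher(k1, hop1)   # server could scan this
hop2 = xor_cipher(k2, plaintext_at_relay)
assert xor_cipher(k2, hop2) == msg
```

The extra work per transfer is exactly the two middle calls, and `plaintext_at_relay` is the data a break-in at the server would expose.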
The opinion says, instead, that a service provider has a limited duty to redesign a service to prevent infringement, where the cost of adopting a different design must be weighed against the amount of infringement that it would prevent.
Even when there are noninfringing uses of an Internet file-sharing service, moreover, if the infringing uses are substantial then to avoid liability as a contributory infringer the provider of the service must show that it would have been disproportionately costly for him to eliminate or at least reduce substantially the infringing uses.
And so one more reading of the Supreme Court’s Sony Betamax decision is now on the table.
P2P Evolution to Accelerate
The Washington Post online has a nice summary/directory of articles on the RIAA’s upcoming crackdown on peer-to-peer file sharers. The crackdown seems like a risky move, but it seems the industry can’t think of anything else to do about its P2P problem.
When the industry sued Napster into oblivion, Napster was replaced, hydra-like, by a newer generation of P2P systems that are apparently resistant to the tactics that took down Napster.
The RIAA’s new crackdown, if it works, will most likely cause yet another step in the evolution of P2P systems. P2P systems that provide only weak anonymity protection for their users will fade away, replaced by a new generation of P2P technology that resists the RIAA’s new tactics.
The RIAA’s new tactic is to join a P2P network semi-anonymously, and then to pierce the anonymity of people who are offering files. There are two countermeasures that can frustrate this tactic, and the use of these countermeasures is already starting to grow slowly.
The first countermeasure is to provide stronger anonymity protection for users, to prevent investigators from so easily unmasking users who are sharing files.
The second countermeasure is to share files only among small friends-and-family groups, making it difficult for investigators to join the group. If every P2P user is a member of a few of these overlapping small groups, then files can still diffuse from place to place fairly quickly.
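The diffusion claim is easy to check with a small simulation. The sketch below (with made-up users and groups) models a network of overlapping three-person groups and shows that a file seeded with one user can reach everyone, hop by hop, even though no single group is large enough for an investigator to slip into unnoticed.

```python
from collections import deque

# Hypothetical network: each user belongs to a few small friends-and-family
# groups, and a file can be copied only between members of a common group.
groups = [
    {"alice", "bob", "carol"},
    {"carol", "dave", "erin"},
    {"erin", "frank", "grace"},
    {"grace", "heidi", "alice"},
]

def reachable(seed: str) -> set:
    """Return the set of users the file can diffuse to from seed,
    hopping through shared members of overlapping groups."""
    have = {seed}
    frontier = deque([seed])
    while frontier:
        user = frontier.popleft()
        for g in groups:
            if user in g:
                for peer in g - have:
                    have.add(peer)
                    frontier.append(peer)
    return have

# A file seeded with alice eventually reaches all eight users, even though
# every transfer happens inside a group of three people who know each other.
print(sorted(reachable("alice")))
```

The overlap members (carol, erin, grace, alice) act as bridges, so the file spreads network-wide while an outside investigator never witnesses a transfer.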
All of this must look pretty unfair from the RIAA’s point of view. No matter how strong the RIAA’s legal and ethical arguments against file sharing are, people will continue to share files as long as they view it as a basically benign activity. It seems to me that only a change in public attitudes, or a change in the basic legal structure of copyright, can solve the file sharing problem.
A Modest Proposal
Now that the Supreme Court has ruled that Congress can condition Federal funding for libraries on the libraries’ use of censorware (i.e., that a law called CIPA is consistent with the Constitution), it’s time to take a serious look at the deficiencies of censorware, and what can be done about them.
Suppose you’re a librarian who wants to comply with CIPA, but otherwise you want your patrons to have access to as much material on the Net as possible. From your standpoint, the popular censorware products have four problems. (1) They block some unobjectionable material. (2) They fail to block some material that is obscene or harmful to minors. (3) They try to block material that Congress does not require to be blocked, such as certain political speech. (4) They don’t let you find out what they block.
(1) and (2) are just facts of life – no technology can eliminate these problems. But (3) and (4) are solvable – it’s possible to build a censorware program that doesn’t try to block anything except as required by the law, and it’s possible for a program’s vendor to reveal what their product blocks. But of course it’s unlikely that the main censorware vendors will give you (3) or (4).
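To illustrate what solving (3) and (4) would look like, here is a minimal sketch of such a filter (the blocked domains are placeholders, not a real list): it blocks exactly the hosts on a single published list and nothing else, and anyone can inspect that list.

```python
from urllib.parse import urlparse

# Placeholder entries standing in for a list limited to material the law
# actually requires blocking -- no hidden extra categories (problem 3).
BLOCKLIST = {
    "example-obscene-site.com",
    "example-harmful-to-minors.net",
}

def is_blocked(url: str) -> bool:
    """Block exactly the listed hosts and their subdomains, nothing more."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

def published_blocklist() -> list:
    """Problem (4) solved: the full blocklist is open to inspection."""
    return sorted(BLOCKLIST)

assert is_blocked("http://example-obscene-site.com/page")
assert not is_blocked("http://politics.example.org/speech")
```

An open-source project built on this pattern would still suffer from problems (1) and (2), since the list itself can be wrong, but the policy it enforces would at least be transparent.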
So why doesn’t somebody create an open-source censorware program that is minimally compliant with CIPA? This would give librarians a better option, and it would put pressure on the existing vendors to narrow their blocking lists and to say what they block.
I can understand why people would have been hesitant to create such a program in the past. Most people who want to minimize the intrusiveness of censorware have thus far done so by not using censorware; so there hasn’t been much of a market for a narrowly tailored product. But that may change as librarians are forced to use censorware.
Also, censorware opponents have found the lameness and overbreadth of existing censorware useful, especially in court. But now, in libraries at least, that usefulness is mostly past, and it’s time to figure out how to cope with CIPA in the least harmful way. More librarian-friendly censorware seems like a good start.
[Note: I must admit that I’m not entirely convinced by my own argument here. But I do think it has some merit and deserves discussion, and nobody else seemed to be saying it. So let the flaming begin!]