November 21, 2024

Microsoft Ruling Released Early

Update (8:42 PM): The item below, which I am leaving here only to maintain a complete record, was INCORRECT. It was based on an inaccurate report from a reader, which was discovered when I asked the reader a few more questions. At this point, although the ruling was put on the Court’s website early, there is no evidence that the Court’s email was also released early.

======

[INCORRECT ITEM:]

Earlier I wrote about Friday’s Microsoft ruling being available at a hidden URL on the Court’s site at 2:40 PM, about two hours before the official release time.

Reader [name deleted] reports receiving the Court’s emailed release of the ruling at about 3:15 PM, more than an hour before the scheduled release. (I received it at about 5:00 PM, but the message was listed as sent at 3:15 PM.)

Previous rulings in the case had been released after the stock market closed on a Friday, and this ruling was announced as following the same schedule. It’s not clear why it was released early. It seems unlikely that the judge changed her mind about when to release it. Perhaps the plan was to release it at 4:30, but once it was clear that the ruling had leaked from the website, somebody decided to send the email early.

Any other theories?

Wiley's Super-Worm

Brandon Wiley writes about the possibility of a “super-worm” that would use sophisticated methods to infect a large fraction of Internet hosts, and to maintain and evolve the infection over time. This is scary stuff. I have two comments to add.

First, the worst case is probably even worse than Wiley suggests. His paper may only scratch the surface of what a really sophisticated bad guy could do.

Second, Wiley’s paper points out the double-edged nature of basic security technology. The methods we use to protect ourselves against attacks – encryption, redundancy, decentralization, code patching – are the same methods that Wiley’s bad guy would use to protect himself against our counterattacks. To counterattack, we would need to understand the flaws in these methods, and to know how to attack them. If we ban or stigmatize discussion of these flaws, we put ourselves at risk.
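To make the decentralization point concrete, here is a toy sketch of my own (it is not taken from Wiley's paper, and the node counts and removal fraction are arbitrary). It compares a worm whose infected hosts all report to a single controller with one whose hosts form a random peer-to-peer mesh, removes the most-connected hosts in each, as a counterattacker naturally would, and measures how much of each network remains in one connected piece.

# Toy illustration (my own sketch, not from Wiley's paper) of why a
# decentralized command structure resists counterattack better than a
# centralized one. We build a star network (one controller) and a random
# peer-to-peer mesh, remove the most-connected 30% of nodes in each --
# the natural targets for defenders -- and report the size of the largest
# surviving connected cluster.
import random
from collections import deque

def build_adjacency(n, edges):
    """Undirected adjacency sets for n nodes, ignoring self-loops."""
    adj = {i: set() for i in range(n)}
    for a, b in edges:
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def largest_component_after_removal(adj, removed):
    """Size of the largest connected component once 'removed' nodes are gone."""
    alive = set(adj) - removed
    seen, best = set(), 0
    for start in alive:
        if start in seen:
            continue
        seen.add(start)
        queue, size = deque([start]), 0
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in adj[node]:
                if nbr in alive and nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

def attack(adj, fraction):
    """Remove the most-connected nodes first, as a counterattacker would."""
    by_degree = sorted(adj, key=lambda n: len(adj[n]), reverse=True)
    removed = set(by_degree[: int(len(adj) * fraction)])
    return largest_component_after_removal(adj, removed)

random.seed(1)
N = 1000

star = build_adjacency(N, [(0, i) for i in range(1, N)])         # one controller
mesh = build_adjacency(N, [(i, random.randrange(N))              # random peers
                           for i in range(N) for _ in range(4)])

print("centralized, largest surviving cluster:", attack(star, 0.30))
print("decentralized, largest surviving cluster:", attack(mesh, 0.30))

The centralized network falls apart as soon as its hub is removed, while the mesh keeps most of its surviving hosts in one cluster. That resilience is exactly why a sophisticated attacker would favor decentralization, and why defenders need to understand the same techniques.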

Slate: Nigerian Scam Emails Explained

Brendan Koerner at Slate explains why we’re all getting so many Nigerian scam emails. Most of them really do come from Nigeria, though the rest of their story is of course fictional.

Discovery vs. Creation

Last week I had yet another DMCA debate, this time at the Chicago International Intellectual Property Conference. Afterward, I had an interesting conversation with Kathy Strandburg of DePaul Law School, about the different mindsets of DMCA supporters and opponents.

DMCA supporters seem to think of security technology as reflecting the decisions of its creators, while opponents (including me) think of technological progress in terms of discovery.

Two examples may help illustrate this distinction. First, consider the inclusion of a spell checker in Microsoft Word. This is a decision that Microsoft made. There is no law of nature saying that word processors must include spell checkers, but Microsoft evaluated the pros and cons and then decided to do it that way.

Second, consider Einstein’s statement that E equals mc-squared. Einstein didn’t decide that E should equal mc-squared; he discovered it. E had always been equal to mc-squared, and it would remain so regardless of what Einstein said or did. He didn’t create that fact; he was simply the first one to figure out that it was true.

I tend to think of computer security as a process of discovery. If I figure out that a certain system is insecure, that is a discovery. I didn’t make the system insecure; it was always insecure, and all I did was point out that fact. Nothing I did could make such a system secure, just as nothing Einstein did could have made E equal mc-cubed.

DMCA supporters sometimes seem to think of computer security as the result of a collective decision by experts. It is as if we all, simply by acting as though a system were secure, could make it really be secure. If you think this way, then declaring a widely deployed technology to be insecure amounts to deciding to make it insecure, which would indeed be a stupid and wasteful decision, and one that might sensibly be banned. But if you think this way, then in my view you don’t really understand computer security.

When you’re making a film, or writing a song, or drafting a statute, or negotiating a contract, you’re making decisions. It might be natural for people who make films, songs, statutes, or contracts to try to apply their understanding of their own fields to the world of technology. They can decide that such an approach makes sense, but ultimately they will discover that it does not.

One More on Biometrics

Simson Garfinkel offers a practical perspective on biometrics, at CSO Magazine.