
Archives for 2003

Swarthmore Students Re-Publish Diebold Memos

A group of Swarthmore students has published a damning series of internal memos from electronic-voting vendor Diebold. The memos appear to document cavalier treatment of security issues by Diebold, and the use of non-certified software in real elections. Diebold, claiming that the students are infringing copyright, has sent a series of DMCA takedown letters to Swarthmore. Swarthmore is apparently shutting off the Internet access of students who publish the memos. The students are responding by finding new places to post the memos, setting up what Ernest Miller calls a “whack-a-mole game”. (See, for example, posts from Ernest Miller and Aaron Swartz.)

Here is my question for the lawyers: Is this really copyright infringement? I know that copyright attaches even to pedestrian writings like business memos. But don’t the students have some kind of fair use argument? It seems to me that their purpose is noncommercial, and it can hardly be said that they are depriving Diebold of the opportunity to sell the memos to the public. The students would therefore seem to have a decent argument on at least two of the four fair-use factors, which suggests this may well be fair use.

Even if the students are breaking the law, what Diebold is doing in trying to suppress the memos certainly doesn’t further the goals underlying copyright law. A trade secret argument from Diebold would seem to make more sense here, although the students would seem to have a free-speech counterargument, bolstered by the strong public interest in knowing how our votes are counted.

Can any of my lawyer readers (or fellow bloggers) help clear up these issues?

Rescorla on Airport ID Checks

Eric Rescorla, at Educated Guesswork, notes a flaw in the security process at U.S. airports – the information used to verify a passenger’s ID is not the same information used to look them up in a suspicious-persons database.

Let’s say that you’re a dangerous Canadian terrorist, bearing the clearly suspicious name “Guy Lafleur”. Now, the American government is aware of your activities and puts you on the CAPPS blacklist to stop you from boarding the plane. Further, let’s assume that you’re too incompetent to get a fake ID….

You have someone who’s not on the blacklist buy you a ticket under an innocuous assumed name, say “Babe Ruth”. This is perfectly legitimate and quite easy to do…. Then, the day before the flight you go onto the web and get your boarding pass. You print out two copies, one with your real name and one with the innocuous fake name. Remember, it’s just a web page, so it’s easy to modify. When you go to the airport, you show the security agent your “Guy Lafleur” boarding pass and your real ID. He verifies that they match but doesn’t check the watchlist, because his only job is to verify that you have a valid-looking boarding pass and that it matches your ID. Then, when you go to board the plane, you give the gate agent your real boarding pass. Since they don’t check ID, you can just walk onboard.

What’s happened is that whoever designed this system violated a basic security principle that’s one of the first things protocol designers learn: information you’re using to make a decision has to be the information you verify. Unfortunately, that’s not the case here. The identity that’s being verified is what’s written on a piece of paper and the identity that’s being used to check the watchlist is in some computer database which isn’t tied to the paper in any way other than your computer and printer, which are easy to subvert.
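To make the principle concrete, here is a minimal, hypothetical sketch in Python of the flow Rescorla describes. The function names and data structures are my own illustration, not any real airline or CAPPS system; the point is only that the watchlist decision and the ID verification are applied to two pieces of information that are never bound together.

```python
# Hypothetical model of the flawed flow (names and structures are
# illustrative only, not a real airline or screening system).

WATCHLIST = {"Guy Lafleur"}  # the name the screening system is looking for

def issue_boarding_pass(reservation_name):
    """Web check-in: the watchlist decision happens here, against the
    reservation name, and the result is just a printable web page."""
    if reservation_name in WATCHLIST:
        raise PermissionError("flagged for extra screening")
    return {"name": reservation_name}

def checkpoint_accepts(printed_name, id_name):
    """Security checkpoint: verifies that the printout matches the
    photo ID, but never consults the watchlist."""
    return printed_name == id_name

def gate_accepts(boarding_pass):
    """Gate: scans the genuine boarding pass; no ID check here."""
    return boarding_pass["name"] not in WATCHLIST

# The attack: buy the ticket under a clean name, print the pass twice,
# and edit one copy at home to carry your real name.
real_pass = issue_boarding_pass("Babe Ruth")   # clears the watchlist check
forged_printout = {"name": "Guy Lafleur"}      # home-edited copy

print(checkpoint_accepts(forged_printout["name"], "Guy Lafleur"))  # True: matches real ID
print(gate_accepts(real_pass))                                     # True: clean name boards
```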

In a later post, he discusses some ways to fix the problem.

Warning Fatigue

One of the many problems facing security engineers is warning fatigue – the tendency of users who have seen too many security warnings to start ignoring the warnings altogether. Good designers think carefully about every warning they display, knowing that each added warning will dilute the warnings that were already there.

Warning fatigue is a significant security problem today. Users are so conditioned to warning boxes that they click them away, unread, as if instinctively swatting a fly.

Which brings us to H.R. 2752, the “Author, Consumer, and Computer Owner Protection and Security (ACCOPS) Act of 2003”, introduced in the House of Representatives in July, and discussed by Declan McCullagh in his latest column. The bill would require a security warning, and user consent, before allowing the download of any “software that, when installed on the user’s computer, enables 3rd parties to store data on that computer, or use that computer to search other computers’ contents over the Internet.”

Most users already know that downloading software is potentially risky. Most users are already accustomed to swatting away warning boxes telling them so. One more warning is unlikely to deter the would-be KaZaa downloader.

This is especially true given that the same warning would have to be placed on many other types of programs that meet the bill’s criteria, including operating systems and web browsers. The ACCOPS warning will be just another of those dialog boxes that nobody reads.

Reading the Broadcast Flag Rules

With the FCC apparently about to announce Broadcast Flag rules, there has been a flurry of letters to the FCC and legislators about the harm such rules would do. The Flag is clearly a bad idea: it will raise the price of digital TV decoders and retard innovation in decoder design, but it won’t make a dent in infringement. It’s also pretty much inevitable that the FCC will issue rules anyway – and soon.

It’s worth noting, though, that we don’t know exactly what the FCC’s rules will say, and that the details can make a big difference. When the FCC does issue its rules, we’ll need to read them carefully to see exactly how much harm they will do.

Here is my guide to what to look for in the rules:

First, look at the criteria that an anti-copying technology must meet to be on the list of approved technologies. Must a technology give copyright owners control over all uses of content; or is a technology allowed to support legal uses such as time-shifting; or is it required to support such uses?

Second, look at who decides which technologies can be on the approved list. Whoever makes this decision will control entry into the market for digital TV decoders. Is this up to the movie and TV industries; or does an administrative body like the FCC decide; or is each vendor responsible for determining whether their own technology meets the requirements?

Third, see whether the regulatory process allows for the possibility that no suitable anti-copying technology exists. Will the mandate be delayed if no strong anti-copying technology exists; or do the rules require that some technology be certified by a certain date, even if none is up to par?

Finally, look at which types of devices are subject to design mandates. To be covered, must a device be primarily designed for decoding digital TV; or is it enough for it to be merely capable of doing so? Do the mandates apply broadly to “downstream devices”? And is something a “downstream device” based on what it is primarily designed to do, or on what it is merely capable of doing?

This last issue is the most important, since it defines how broadly the rule will interfere with technological progress. The worst-case scenario is an overbroad rule that ends up micro-managing the design of general-purpose technologies like personal computers and the Internet. I know the FCC means well, but I wish I could say I was 100% sure that they won’t make that mistake.

Recommended Reading

Ernest Miller, who has written lots of great stuff for LawMeme, now has his very own blog at importance.typepad.com.