
Archives for October 2003


WaPo Confused On CD-DRM

Today’s Washington Post runs an odd, self-rebutting story about the sales of the copy-protected Anthony Hamilton CD – the same CD that Alex Halderman wrote about, leading to SunnComm’s on-again, off-again lawsuit threat.

The article begins by noting that the CD had an unusually small post-release drop-off: sales fell 23% in the first week, where a 40-60% decline is more typical. There are several reasons this might have happened: the album was heavily promoted, it was priced at $13.98, and it had good word of mouth. But the article tries to argue that the SunnComm DRM technology was a big part of the cause.

The article proceeds to rebut its own argument, by undercutting any mechanism by which the DRM could have reduced copying. Did the DRM keep the music off peer-to-peer networks? No. “Songs from Hamilton’s CD appeared on unauthorized song-sharing Internet services, such as Kazaa, before the release date…” Did the DRM keep people from making CD-to-CD copies? No. “Though buyers of the Hamilton CD are allowed to make three copies, nothing prevents them from copying the copied CDs.”

Was the DRM unobtrusive? Here the reporter seems to misread one of the Amazon reviews, implying that the reviewer preferred DRM to non-DRM discs:

“I give this CD four stars only because of the copyright protection,” wrote one reviewer. “This CD didn’t play too well on my computer until I downloaded some kind of license agreement, and was connected to the Internet. Otherwise, it’s very good.”

It should be clear enough from this quote (and if you’re not sure, go read the full review on Amazon) that this reviewer saw the DRM as a negative. And at least two other reviewers at Amazon say flatly that the CD did not work in their players.

The topper, though, is the last paragraph, which shows a reporter or editor asleep at the switch:

A Princeton University graduate student distributed a paper on the Internet shortly after the CD’s release demonstrating, he argued, how the copy-protection could be broken. But Jacobs, who initially threatened to sue the student before backing off, said his technology is meant to thwart casual copying, not determined hackers.

What’s with the “he argued”? The claims in the student’s paper are factual in nature, and could easily have been checked. SunnComm even admits that the claims are accurate.

And how can the reporter let pass the statement by Jacobs implying that only “determined hackers” would be able to thwart the technology? We’re talking about pressing the shift key, which is hardly beyond the capabilities of casual users.

We’ve come to expect this kind of distortion from SunnComm’s press releases. Why are we reading it in the Washington Post?


DMCA Exemptions Granted, Problems Remain

The U.S. Copyright Office has issued its report, creating exemptions to the DMCA’s anti-circumvention provisions for the next three years. The exemptions allow people to circumvent access control technologies under certain closely constrained conditions. The exemption rulemaking, which happens every three years, was created by Congress as a kind of safety valve, intended to keep the DMCA from stifling fair use too severely.

This time around, exemptions were granted for (1) access to the “block-lists” of censorware products, and (2) works protected by various types of broken or obsolete access control mechanisms.

My own exemption request, asking for exemptions for information security researchers, was denied as expected.

It is abundantly clear by now that the DMCA has had a chilling effect on legitimate research related to access control technologies. When researchers ask Washington for a solution to this problem, they have so far gotten a Catch-22 answer. When we ask Congress to do something, we are told to seek an exemption in the Copyright Office rulemaking. But when we petitioned the Copyright Office for an exemption in the 2000 rulemaking, we were told that the Copyright Office did not have the power to grant the kind of exemption we had requested.

So this time, I wrote an exemption request that was designed to end the Catch-22 – to entice the Copyright Office to either (a) grant an exemption for researchers, or (b) state flatly that Congress had not given it the power to grant any kind of useful research exemption. As I read the Copyright Office’s findings (see pages 14-15 of the short version, or pages 86-89 of the extended dance version; they designate my request as number 3), they have essentially said (b) – exemptions of the type I requested “cannot be considered.”


Broadcast Flag Confusion

In today’s New York Times, Stephen Labaton reports on the continuing controversy over the FCC’s impending Broadcast Flag rules. In the midst of a back-and-forth about the rules, Labaton writes this:

An F.C.C. official said, for instance, that the broadcast flag could contain software code that was recognized by computer routers in a way that the program would self-destruct after passing through three routers while being e-mailed by a user.

Somebody is really confused here about how the Internet works. Maybe it’s the reporter, or maybe it’s the FCC source, or maybe (God forbid) both.

If this statement bears any connection to reality, it’s cause for serious worry. I can’t think of any way of translating the statement into a technically coherent form that doesn’t involve the FCC redesigning the basic workings of the Internet.

UPDATE (8:55 PM): Seth Schoen has solved the mystery; see his comment. The mystery sentence looks like a very confused attempt to explain the fact that DTCP-over-IP sets the Time-To-Live field on its IP packets equal to three.
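There is nothing exotic in that mechanism: Time-To-Live is an ordinary IP header field that every router already decrements, and DTCP-over-IP simply sets it low so packets expire after a few hops. A minimal sketch of how a sender caps TTL, using standard socket options (nothing DTCP-specific about the code):

```python
import socket

# Create a TCP socket and cap its IP Time-To-Live at 3, as DTCP-over-IP does.
# Every router that forwards a packet decrements TTL by one and discards the
# packet when it hits zero -- so the traffic dies after about three hops.
# This is ordinary, long-standing IP behavior, not "self-destruct" software
# that routers specially recognize.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 3)
print(sock.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))  # -> 3
```

In other words, the restriction lives entirely in the sending device; the routers in between just do what they have always done.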


Remote Controls for Traffic Lights

Many cities have installed systems that let emergency vehicles control traffic lights via infrared remote controls, thereby getting to the scene of an emergency more quickly. This is good.

Yesterday’s Detroit News, in a story by Jodi Upton, reports on the availability of remote controls that allow ordinary citizens to control the same traffic lights. Now traffic engineers worry that selfish people will use the remotes to disrupt the flow of traffic.

As Eric Rescorla notes, this could have been avoided by using cryptography in the design of the original system. Instead, we’re likely to see a crackdown on the distribution of the remote controls, and the predictable black market in the banned devices.
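The standard fix of the kind Rescorla means is challenge-response authentication: the traffic light issues a fresh random challenge, and only a transmitter holding the secret key can answer it, so recording and replaying an authorized signal buys an attacker nothing. A minimal sketch (the key and function names are illustrative, assuming a shared secret provisioned to the emergency fleet):

```python
import hashlib
import hmac
import os

# Hypothetical shared secret, provisioned by the city to emergency vehicles.
SECRET_KEY = b"city-issued-fleet-secret"

def make_challenge() -> bytes:
    """The traffic light emits a fresh random nonce for each attempt."""
    return os.urandom(16)

def respond(challenge: bytes) -> bytes:
    """An authorized transmitter answers with an HMAC over the challenge."""
    return hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes) -> bool:
    """The light accepts only a response keyed to its own fresh challenge."""
    expected = hmac.new(SECRET_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
answer = respond(challenge)
print(verify(challenge, answer))            # authorized transmitter: True
print(verify(make_challenge(), answer))     # replayed old answer: False
```

Because each challenge is used once, a gray-market remote that merely mimics the infrared signal format cannot produce valid answers without the key.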

This seems like a classic example of the harm caused by deploying a technology without considering how it might be abused. It would be interesting to know why this happened. Did the vendor not stop to think about the potential for abuse? Did they think that nobody would ever figure out how to abuse the system? Did they fail to realize that anti-abuse technologies were available? I wish I knew.



As an experiment, and in the hopes of defraying the cost of running this site, I have started sticking ads onto this site’s individual entry pages. The service uses some kind of algorithm, based on the pages’ content, to decide which ads to place on each page.

Please let me know what you think.


Swarthmore Bans Indirect Links

Ernest Miller reports that Swarthmore is now yanking the Net connections of students who link to a page that links to a page containing the infamous Diebold memos.

So Swarthmore students can’t make a two-hop link to the memos (i.e., a link to a link to the memos). Can they make a three-hop link, say by linking to Ernest Miller’s report? Can they make a four-hop link, say by linking to the page you are reading right now? Can they make a five-hop link, say by linking to my personal home page? Maybe some enterprising Swarthmore student will do an experiment to find out.

UPDATE (1:40 PM, Oct. 27): James Grimmelmann at LawMeme says that Diebold’s own home page contains a five-hop link.


Swarthmore Students Re-Publish Diebold Memos

A group of Swarthmore students has published a damning series of internal memos from electronic-voting vendor Diebold. The memos appear to document cavalier treatment of security issues by Diebold, and the use of non-certified software in real elections. Diebold, claiming that the students are infringing copyright, has sent a series of DMCA takedown letters to Swarthmore. Swarthmore is apparently shutting off the Internet access of students who publish the memos. The students are responding by finding new places to repost the memos, setting up what Ernest Miller calls a “whack-a-mole game”. (See, for example, posts from Ernest Miller and Aaron Swartz.)

Here is my question for the lawyers: Is this really copyright infringement? I know that copyright attaches even to pedestrian writings like business memos. But don’t the students have some kind of fair use argument? It seems to me that their purpose is noncommercial; and it can hardly be said that they are depriving Diebold of the opportunity to sell the memos to the public. The students would thus seem to have a decent argument on at least two of the four fair-use factors, which suggests this might be fair use.

Even if the students are breaking the law, what Diebold is doing in trying to suppress the memos certainly doesn’t further the goals underlying copyright law. A trade secret argument from Diebold would seem to make more sense here, although the students would seem to have a free-speech counterargument, bolstered by the strong public interest in knowing how our votes are counted.

Can any of my lawyer readers (or fellow bloggers) help clear up these issues?


Rescorla on Airport ID Checks

Eric Rescorla, at Educated Guesswork, notes a flaw in the security process at U.S. airports – the information used to verify a passenger’s ID is not the same information used to look them up in a suspicious-persons database.

Let’s say that you’re a dangerous Canadian terrorist, bearing the clearly suspicious name “Guy Lafleur”. Now, the American government is aware of your activities and puts you on the CAPPS blacklist to stop you from boarding the plane. Further, let’s assume that you’re too incompetent to get a fake ID….

You have someone who’s not on the blacklist buy you a ticket under an innocuous assumed name, say “Babe Ruth”. This is perfectly legitimate and quite easy to do…. Then, the day before the flight you go onto the web and get your boarding pass. You print out two copies, one with your real name and one with the innocuous fake name. Remember, it’s just a web page, so it’s easy to modify. When you go to the airport, you show the security agent your “Guy Lafleur” boarding pass and your real ID. He verifies that they match but doesn’t check the watchlist, because his only job is to verify that you have a valid-looking boarding pass and that it matches your ID. Then, when you go to board the plane, you give the gate agent your real boarding pass. Since they don’t check ID, you can just walk onboard.

What’s happened is that whoever designed this system violated a basic security principle that’s one of the first things protocol designers learn: information you’re using to make a decision has to be the information you verify. Unfortunately, that’s not the case here. The identity that’s being verified is what’s written on a piece of paper and the identity that’s being used to check the watchlist is in some computer database which isn’t tied to the paper in any way other than your computer and printer, which are easy to subvert.

In a later post, he discusses some ways to fix the problem.
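One natural fix along the lines he discusses is to make the boarding pass tamper-evident, so that the name shown to the ID checker is cryptographically bound to the name the airline actually ran against the watchlist. A minimal sketch using a MAC computed at booking time (the key and field names are illustrative, not any airline's actual system):

```python
import hashlib
import hmac

# Hypothetical signing key shared by the airline and checkpoint scanners.
AIRLINE_KEY = b"airline-pass-signing-key"

def issue_pass(name: str, flight: str) -> dict:
    """Airline screens `name` against the watchlist, then signs the pass."""
    payload = f"{name}|{flight}".encode()
    tag = hmac.new(AIRLINE_KEY, payload, hashlib.sha256).hexdigest()
    return {"name": name, "flight": flight, "tag": tag}

def checkpoint_verify(boarding_pass: dict, id_name: str) -> bool:
    """Screener checks the ID matches the pass AND the pass is authentic."""
    payload = f"{boarding_pass['name']}|{boarding_pass['flight']}".encode()
    expected = hmac.new(AIRLINE_KEY, payload, hashlib.sha256).hexdigest()
    return (boarding_pass["name"] == id_name
            and hmac.compare_digest(expected, boarding_pass["tag"]))

real = issue_pass("Babe Ruth", "UA100")
forged = dict(real, name="Guy Lafleur")  # attacker edits the printed name

print(checkpoint_verify(real, "Babe Ruth"))      # authentic pass: True
print(checkpoint_verify(forged, "Guy Lafleur"))  # edited web page: False
```

With the signature in place, the name the checkpoint verifies is the same name the watchlist check was run on, restoring the principle Rescorla describes.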


Warning Fatigue

One of the many problems facing security engineers is warning fatigue – the tendency of users who have seen too many security warnings to start ignoring the warnings altogether. Good designers think carefully about every warning they display, knowing that each added warning will dilute the warnings that were already there.

Warning fatigue is a significant security problem today. Users are so conditioned to warning boxes that they click them away, unread, as if instinctively swatting a fly.

Which brings us to H.R. 2752, the “Author, Consumer, and Computer Owner Protection and Security (ACCOPS) Act of 2003”, introduced in the House of Representatives in July, and discussed by Declan McCullagh in his latest column. The bill would require a security warning, and user consent, before allowing the download of any “software that, when installed on the user’s computer, enables 3rd parties to store data on that computer, or use that computer to search other computers’ contents over the Internet.”

Most users already know that downloading software is potentially risky. Most users are already accustomed to swatting away warning boxes telling them so. One more warning is unlikely to deter the would-be KaZaa downloader.

This is especially true given that the same warning would have to be placed on many other types of programs that meet the bill’s criteria, including operating systems and web browsers. The ACCOPS warning will be just another of those dialog boxes that nobody reads.


Reading the Broadcast Flag Rules

With the FCC apparently about to announce Broadcast Flag rules, there has been a flurry of letters to the FCC and legislators about the harm such rules would do. The Flag is clearly a bad idea: it will raise the price of digital TV decoders and retard innovation in decoder design, but it won’t make a dent in infringement. It’s also pretty much inevitable that the FCC will issue rules anyway – and soon.

It’s worth noting, though, that we don’t know exactly what the FCC’s rules will say, and that the details can make a big difference. When the FCC does issue its rules, we’ll need to read them carefully to see exactly how much harm they will do.

Here is my guide to what to look for in the rules:

First, look at the criteria that an anti-copying technology must meet to be on the list of approved technologies. Must a technology give copyright owners control over all uses of content; or is a technology allowed to support legal uses such as time-shifting; or is it required to support such uses?

Second, look at who decides which technologies can be on the approved list. Whoever makes this decision will control entry into the market for digital TV decoders. Is this up to the movie and TV industries; or does an administrative body like the FCC decide; or is each vendor responsible for determining whether their own technology meets the requirements?

Third, see whether the regulatory process allows for the possibility that no suitable anti-copying technology exists. Will the mandate be delayed if no strong anti-copying technology exists; or do the rules require that some technology be certified by a certain date, even if none is up to par?

Finally, look at which types of devices are subject to design mandates. To be covered, must a device be primarily designed for decoding digital TV; or is it enough for it to be merely capable of doing so? Do the mandates apply broadly to “downstream devices”? And is something a “downstream device” based on what it is primarily designed to do, or on what it is merely capable of doing?

This last issue is the most important, since it defines how broadly the rule will interfere with technological progress. The worst-case scenario is an overbroad rule that ends up micro-managing the design of general-purpose technologies like personal computers and the Internet. I know the FCC means well, but I wish I could say I was 100% sure that they won’t make that mistake.