Broadcast Flag Scorecard

Before the FCC issued its Broadcast Flag Order, I wrote a post on “Reading the Broadcast Flag Rules”, in which I recommended reading the eventual Order carefully since “the details can make a big difference.” I pointed to four specific choices the FCC had to make.

Let’s look at how the FCC chose. For each of the four issues I identified, I’ll quote in italics my previous posting, and then I’ll summarize the FCC’s action.

First, look at the criteria that an anti-copying technology must meet to be on the list of approved technologies. Must a technology give copyright owners control over all uses of content; or is a technology allowed [to] support legal uses such as time-shifting; or is it required to support such uses?

The Order says that technologies must prevent “indiscriminate redistribution”, but it isn’t precise about what that term means. The exact scope of permissible redistribution is deferred to a later rulemaking. There is also some language expressing a desire not to control copying within the home, but that desire may not be backed by a formal requirement.

Verdict: This issue is still unresolved; perhaps the later rulemaking will clarify it.

Second, look at who decides which technologies can be on the approved list. Whoever makes this decision will control entry into the market for digital TV decoders. Is this up to the movie and TV industries; or does an administrative body like the FCC decide; or is each vendor responsible for determining whether their own technology meets the requirements?

This issue was deferred to a later rulemaking process, so we don’t know what the final answer will be. The FCC does appear to understand the danger inherent in letting the entertainment industry control the list.

The Order does establish an interim approval mechanism, in which the FCC makes the final decisions, after a streamlined filing and counter-filing process by the affected parties.

Verdict: This issue was deferred until later, but the FCC seems to be leaning in the right direction.

Third, see whether the regulatory process allows for the possibility that no suitable anti-copying technology exists. Will the mandate be delayed if no strong anti-copying technology exists; or do the rules require that some technology be certified by a certain date, even if none is up to par?

The Order doesn’t address this issue head-on. It does say that to qualify, a technology need only resist attacks by ordinary users using widely available tools. This decision, along with the lack of precision about the scope of home copying that will be allowed, makes it easier to find a compliant technology later.

Verdict: This issue was not specifically addressed; it may be clarified in the later rulemaking.

Finally, look at which types of devices are subject to design mandates. To be covered, must a device be primarily designed for decoding digital TV; or is it enough for it to be merely capable of doing so? Do the mandates apply broadly to “downstream devices”? And is something a “downstream device” based on what it is primarily designed to do, or on what it is merely capable of doing?

This last issue is the most important, since it defines how broadly the rule will interfere with technological progress. The worst-case scenario is an overbroad rule that ends up micro-managing the design of general-purpose technologies like personal computers and the Internet. I know the FCC means well, but I wish I could say I was 100% sure that they won’t make that mistake.

The Order regulates Digital TV demodulators, as well as Transport Stream Processors (which take the demodulated signal and separate it into its digital audio, video, and metadata components).

The impact on general-purpose computers is a bit hard to determine. It appears that if a computer contains a DTV receiver card, the communications between that card and the rest of the computer would be regulated. This would then impact the design of any applications or device drivers that handle the DTV stream coming from the card.
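To make the scope issue concrete, here is a minimal sketch, in Python and purely for illustration, of the kind of gatekeeping a compliant transport stream processor or DTV driver would have to do. The descriptor tag value, the function names, and the output interface are my own assumptions for the sketch, not the Order’s requirements or the actual ATSC signaling.

```python
# Hypothetical sketch of a "compliant" transport stream processor.
# The tag value and the output interface are illustrative assumptions.

RC_DESCRIPTOR_TAG = 0xAA  # assumed tag for a redistribution-control descriptor


def flag_is_set(descriptors):
    """Return True if the (assumed) redistribution-control descriptor appears
    in the program's descriptor loop. Each descriptor is a (tag, payload) pair."""
    return any(tag == RC_DESCRIPTOR_TAG for tag, _payload in descriptors)


def route_program(descriptors, payload, outputs):
    """Pass the demodulated program to each output, but send flagged content
    only to outputs that claim an approved protection technology."""
    flagged = flag_is_set(descriptors)
    for out in outputs:
        if not flagged or out.is_approved_protection:
            out.send(payload)
        # Otherwise: plain file writes, generic driver interfaces, and network
        # sockets never see the flagged content; this is exactly the point at
        # which applications and device drivers get pulled into the rule.
```

In other words, once the flag check is mandatory, every piece of software that touches the stream after the demodulator has to be written around it.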

Verdict: The FCC seems to have been trying to limit the negative impact of the Order by limiting its scope, but some broad impacts seem to be inevitable side-effects of mandating any kind of Flag.

Bottom line: The FCC’s order will be harmful; but it could have been much, much worse.

The Broadcast Flag, and Threat Model Confusion

The FCC has mandated “broadcast flag” technology, which will limit technical options for the designers of digital TV tuners and related products. This is intended to reduce online redistribution of digital TV content, but it is likely to have little or no actual effect on the availability of infringing content on the Net.

The FCC is committing the classic mistake of not having a clear threat model. As I explained in more detail in a previous post, a “threat model” is a clearly defined explanation of what a security system is trying to prevent, and of the capabilities and motives of the people who are trying to defeat it. For a system like the broadcast flag, there are two threat models to choose from. Either you are trying to keep the average consumer from giving content to his friends and neighbors (the “casual copying” threat model), or you are trying to keep the content off of Internet distribution systems like KaZaa (the “Napsterization” threat model). You choose a threat model, and then you design a technology that prevents the threat you have chosen.

If you choose the casual copying model, your DRM technology needs to be strong enough to resist attack by average consumers only; but your technology will not address the Napsterization threat. If you choose the Napsterization threat model, then you have to be able to stop every would-be infringer from ripping the content, because if even one person manages to rip the content and upload it onto the Net, the content becomes available to everybody who wants it.
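A rough back-of-the-envelope calculation shows why the Napsterization threat model is so much harder to satisfy; the numbers below are made-up assumptions, used only to illustrate the “one rip is enough” point.

```python
# Illustration with assumed numbers: if even a tiny fraction of viewers can
# and will defeat the protection, the chance that nobody uploads a given
# program is essentially zero.

def prob_at_least_one_rip(viewers, p_defeat):
    """Probability that at least one of `viewers` independently defeats the
    protection and uploads the content."""
    return 1 - (1 - p_defeat) ** viewers

# Suppose only 1 in 100,000 viewers is able and willing to defeat the flag,
# and a prime-time broadcast reaches 5 million viewers.
print(prob_at_least_one_rip(5_000_000, 1e-5))  # prints a value of about 1.0
```

Under those (invented) assumptions, the content is virtually certain to end up on the Net anyway, no matter how well the flag deters the average viewer.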

The FCC seems to be trying to have it both ways. They have mandated technologies that are only strong enough to prevent casual copying by typical consumers. But at the same time, they somehow expect those technologies to prevent Napsterization. This incoherence is evident throughout the FCC’s broadcast flag order. At several points the two incompatible threat models appear in the same paragraph; here is an example:

19. We recognize the concerns of commenters regarding potential vulnerabilities in a flag-based protection system. We are equally mindful of the fact that it is difficult if not impossible to construct a content protection scheme that is impervious to attack or circumvention. We believe, however, that the benefits achieved by creation of a flag-based system – creating a “speed bump” mechanism to prevent indiscriminate redistribution of broadcast content and ensure the continued availability of high value content to broadcast outlets – outweighs the potential vulnerabilities cited by commenters….

(emphasis added) The error here should be clear – a “speed bump” cannot prevent “indiscriminate redistribution”.

(I’ll have more to say about the broadcast flag in subsequent posts.)

Election Day

It’s Election Day, and we here in Mercer County may have cast our last votes on the big old battleship-gray lever voting machines. Next election, we’re supposed to be using a new all-electronic system, without any of the necessary safeguards such as a voter-verifiable paper trail or public inspection of software code.

WaPo Confused On CD-DRM

Today’s Washington Post runs an odd, self-rebutting story about the sales of the copy-protected Anthony Hamilton CD – the same CD that Alex Halderman wrote about, leading to SunnComm’s on-again, off-again lawsuit threat.

The article begins by saying that the CD had an unusually small post-release drop-off: sales fell 23% in the first week, where a 40-60% decline is more typical. There are several reasons this might have happened: the album was heavily promoted, it was priced at $13.98, and it had good word of mouth. But the article tries to argue that the SunnComm DRM technology was a big part of the cause.

The article proceeds to rebut its own argument by undercutting any mechanism by which the DRM could have reduced copying. Did the DRM keep the music off peer-to-peer networks? No. “Songs from Hamilton’s CD appeared on unauthorized song-sharing Internet services, such as Kazaa, before the release date…” Did the DRM keep people from making CD-to-CD copies? No. “Though buyers of the Hamilton CD are allowed to make three copies, nothing prevents them from copying the copied CDs”.

Was the DRM unobtrusive? Here the reporter seems to misread one of the Amazon reviews, implying that the reviewer preferred DRM to non-DRM discs:

“I give this CD four stars only because of the copyright protection,” wrote one reviewer. “This CD didn’t play too well on my computer until I downloaded some kind of license agreement, and was connected to the Internet. Otherwise, it’s very good.”

It should be clear enough from this quote (and if you’re not sure, go read the full review on Amazon) that this reviewer saw the DRM as a negative. And at least two other reviewers at Amazon say flatly that the CD did not work in their players.

The topper, though, is the last paragraph, which shows a reporter or editor asleep at the switch:

A Princeton University graduate student distributed a paper on the Internet shortly after the CD’s release demonstrating, he argued, how the copy-protection could be broken. But Jacobs, who initially threatened to sue the student before backing off, said his technology is meant to thwart casual copying, not determined hackers.

What’s with the “he argued”? The claims in the student’s paper are factual in nature, and could easily have been checked. SunnComm even admits that the claims are accurate.

And how can the reporter let pass the statement by Jacobs implying that only “determined hackers” would be able to thwart the technology? We’re talking about holding down the shift key while inserting the disc, which stops the protection software from loading and is hardly beyond the capabilities of casual users.

We’ve come to expect this kind of distortion from SunnComm’s press releases. Why are we reading it in the Washington Post?

DMCA Exemptions Granted, Problems Remain

The U.S. Copyright Office has issued its report, creating exemptions to the DMCA’s anti-circumvention provisions for the next three years. The exemptions allow people to circumvent access control technologies under certain closely constrained conditions. The exemption rulemaking, which happens every three years, was created by Congress as a kind of safety valve, intended to keep the DMCA from stifling fair use too severely.

This time around, exemptions were granted for (1) access to the “block-lists” of censorware products, and (2) works protected by various types of broken or obsolete access control mechanisms.

My own exemption request, asking for exemptions for information security researchers, was denied as expected.

It is abundantly clear by now that the DMCA has had a chilling effect on legitimate research related to access control technologies. When researchers ask Washington for a solution to this problem, they have so far gotten a Catch-22 answer. When we ask Congress to do something, we are told to seek an exemption in the Copyright Office rulemaking. But when we petitioned the Copyright Office for an exemption in the 2000 rulemaking, we were told that the Copyright Office did not have the power to grant the kind of exemption we had requested.

So this time, I wrote an exemption request that was designed to end the Catch-22 – to entice the Copyright Office to either (a) grant an exemption for researchers, or (b) state flatly that Congress had not given it the power to grant any kind of useful research exemption. As I read the Copyright Office’s findings (see pages 14-15 of the short version, or pages 86-89 of the extended dance version; they designate my request as number 3), they have essentially said (b) – exemptions of the type I requested “cannot be considered.”