
Archives for March 2011

Web Browsers and Comodo Disclose A Successful Certificate Authority Attack, Perhaps From Iran

Today, the public learned of a previously undisclosed compromise of a trusted Certificate Authority — one of the entities that issues certificates attesting to the identity of “secure” web sites. Last week, Comodo quietly issued a command via its certificate revocation servers designed to tell browsers to no longer accept 9 certificates. This is fairly standard practice, and certificates are occasionally revoked for a variety of reasons. What was unique about this case is that it was followed by very rapid updates from several browser makers (Google, Mozilla, and Microsoft) that go above and beyond the normal revocation process and hard-code the certificates into a “do not trust” list. Mozilla went so far as to force this update in as the final change to its source code before shipping its major new release, Firefox 4, yesterday.
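Mechanically, such a hard-coded blocklist sits in front of the normal revocation machinery: the browser computes the certificate's fingerprint and refuses it outright if it matches. Here is a minimal sketch in Python; the fingerprint shown is a placeholder, not one of the real revoked certificates:

```python
import hashlib
import ssl

# Hypothetical hard-coded blocklist of certificate fingerprints.
# (Placeholder value -- the real revoked certificates' hashes are not shown.)
BLOCKED_FINGERPRINTS = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def fingerprint(der_cert: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_cert).hexdigest()

def is_blocked(pem_cert: str) -> bool:
    """Reject the certificate outright if it is on the hard-coded list,
    regardless of what CRL/OCSP revocation checks would say."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return fingerprint(der) in BLOCKED_FINGERPRINTS
```

Unlike a CRL or OCSP lookup, this check needs no network round trip, which is why it keeps working even when an attacker can interfere with revocation traffic.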

This implied that the certificates were likely malicious, and may even have been used by a third party to impersonate secure sites. Here at Freedom to Tinker we have explored several ways in which the current browser security system is prone to vulnerabilities [1, 2, 3, 4, 5, 6].

Clearly, something exceptional happened behind the scenes. Security hacker Jacob Appelbaum did some fantastic detective work using the EFF’s SSL Observatory data and discovered that all of the certificates in question originated from Comodo — perhaps from one of the many affiliated companies that issues certificates under Comodo’s authority via their “Registration Authority” (RA) program. Evidently, someone had figured out how to successfully attack Comodo or one of their RAs, or had colluded with them in getting some invalid certs.

Appelbaum’s pressure helped motivate a statement from Mozilla [and a follow-up] and a statement from Microsoft that gave a little more detail. This afternoon, Comodo released more details about the incident, including the domains for which rogue certificates were issued: mail.google.com, www.google.com, login.yahoo.com (3 certs), login.skype.com, addons.mozilla.org, login.live.com, and “global trustee”. Comodo noted:

“The attack came from several IP addresses, but mainly from Iran.”

“this was likely to be a state-driven attack”

[Update: Someone claiming to be the hacker has posted a manifesto. At least one security researcher finds the claim to be credible.]

It is clear that the domains in question are among the most attractive targets for someone who wants to surveil the personal communications of many people online by inserting themselves as a “man in the middle.” I don’t have any deep insights on Comodo’s analysis of the attack’s origins, but it seems plausible. (I should note that although Comodo claims that only one of the certificates was “seen live on the Internet”, their mechanism for detecting this relies on the attacker not taking some basic precautions that would be well within the means and expertise of someone executing this attack.) [update: Jacob Appelbaum also noted this, and has explained the technical details]
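The “seen live” detection depends on Comodo's revocation servers actually receiving a lookup for the rogue certificate. Because browsers typically “soft-fail”, accepting a certificate when the revocation responder can't be reached, an attacker in a man-in-the-middle position can simply drop that traffic. A toy sketch of the logic (the hostname and return strings are illustrative, not any browser's actual behavior):

```python
import socket

def revocation_status(ocsp_host: str, timeout: float = 3.0) -> str:
    """Sketch of a browser's typical 'soft-fail' OCSP check: if the
    responder is unreachable, the certificate is accepted anyway,
    and the CA never sees the lookup that would reveal the attack."""
    try:
        with socket.create_connection((ocsp_host, 80), timeout=timeout):
            pass  # in reality: send an OCSP request and parse the reply
        return "checked"  # the CA's logs now record this lookup
    except OSError:
        return "soft-fail: accepted unchecked"  # attack stays invisible
```

An attacker who can substitute a rogue certificate can usually also black-hole the victim's revocation queries, so the CA's logs show nothing.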

What does this tell us about the current security model for web browsing? This instance highlights a few issues:

  • Too many entities have CA powers: As the SSL Observatory project helped demonstrate, there are thousands of entities in the world that have the ability to issue certificates. Some of these are trusted directly by browsers, and others inherit their authority. We don’t even know who many of them are, because such delegation of authority — either via “subordinate certificates” or via “registration authorities” — is not publicly disclosed. The more such entities there are, the more points of vulnerability there are.
  • The current system does not limit damage: Any entity that can issue a certificate can issue a certificate for any domain in the world. That means that a vulnerability at one point is a vulnerability for all.
  • Governments are a threat: All the major web browsers currently trust many government agencies as Certificate Authorities. These include agencies in jurisdictions such as Tunisia, Turkey, the UAE, and China, which some argue are hostile to free speech. Hardware products exist and are marketed explicitly for government surveillance via “man in the middle” attacks.
  • Comodo in particular has a bad track record with their RA program: The structure of “Registration Authorities” has led to poor or nonexistent validation in the past, but Mozilla and the other browsers have so far refused to take any action to remove Comodo or put them on probation.
  • We need to step up efforts on a fix: Obviously the current state of affairs is not ideal. As Appelbaum notes, efforts like DANE, CAA, HASTLS, and Monkeysphere deserve our attention.
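The first two points above can be made concrete with a toy model: standard validation asks only whether some trusted authority signed the certificate, while a per-domain binding (in the spirit of pinning or DANE) records which issuer a domain is expected to use. All CA and domain names below are made up for illustration:

```python
# Toy model of the trust problem; names are hypothetical.
TRUSTED_CAS = {"Comodo RA", "ExampleGov CA", "BigRoot Inc"}

def browser_accepts(issuer: str, domain: str) -> bool:
    """The current model: ANY trusted issuer can vouch for ANY domain,
    so a compromise at one CA endangers every site on the web."""
    return issuer in TRUSTED_CAS

# A per-domain binding limits the damage a single rogue issuer can do.
PINS = {"mail.example.com": "BigRoot Inc"}

def pinned_accepts(issuer: str, domain: str) -> bool:
    """Accept only the issuer this domain is known to use, falling back
    to ordinary validation for unpinned domains."""
    expected = PINS.get(domain)
    if expected is None:
        return browser_accepts(issuer, domain)
    return issuer == expected
```

In this model a certificate for mail.example.com signed by a compromised “ExampleGov CA” passes ordinary validation but fails the pinned check, which is exactly the damage-limiting property the current system lacks.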

[Update: Jacob Appelbaum has posted his response to the Comodo announcement, criticizing some aspects of their response and the browsers.]

[Update: A few more details are revealed in this Comodo blog post, including the fact that “an attacker obtained the username and password of a Comodo Trusted Partner in Southern Europe.”]

[Update: Mozilla has made Appelbaum’s bug report publicly visible, along with the back-and-forth between him and Mozilla before the situation was made public. There are also some interesting details in the Mozilla bug report that tracked the patch for the certificate blacklist. There is yet another bug that contains the actual certificates that were issued. Discussion about what Mozilla should do in further response to this incident is proceeding in the newsgroup.]

[Update: I talked about this issue on Marketplace Tech Report.]

You may also be interested in an October 22, 2010 event that we hosted on the policy and technology issues related to online trust (streaming video available).

Google Should Stand up for Fair Use in Books Fight

On Tuesday Judge Denny Chin rejected a proposed settlement in the Google Book Search case. My write-up for Ars Technica is here.

The question everyone is asking is what comes next. The conventional wisdom seems to be that the parties will go back to the bargaining table and hammer out a third iteration of the settlement. It’s also possible that the parties will try to appeal the rejection of the current settlement. Still, in case anyone at Google is reading this, I’d like to make a pitch for the third option: litigate!

Google has long been at the forefront of efforts to shape copyright law in ways that encourage innovation. When the authors and publishers first sued Google back in 2005, I was quick to defend the scanning of books under copyright’s fair use doctrine. And I still think that position is correct.

Unfortunately, in 2008 Google saw an opportunity to make a separate truce with the publishing industry that placed Google at the center of the book business and left everyone else out in the cold. Because of the peculiarities of class action law, the settlement would have given Google the legal right to use hundreds of thousands of “orphan” works without actually getting permission from their copyright holders. Competitors who wanted the same deal would have had no realistic way of getting it. Googlers are a smart bunch, and so they took what was obviously a good deal for them even though it was bad for fair use and online innovation.

Now the deal is no longer on the table, and it’s not clear if it can be salvaged. Judge Chin suggested that he might approve a new, “opt-in” settlement. But switching to an opt-in rule would undermine the very thing that made the deal so appealing to Google in the first place: the freedom to incorporate works whose copyright status was unclear. Take that away, and it’s not clear that Google Book Search can exist at all.

Moreover, I think the failure of the settlement may strengthen Google’s fair use argument. Fair use exists as a kind of safety valve for the copyright system, to ensure that it does not damage free speech, innovation, and other values. Although formally speaking judges are supposed to run through the famous four factor test to determine what counts as a fair use, in practice an important factor is whether the judge perceives the defendant as having acted in good faith. Google has now spent three years looking for a way to build its Book Search project using something other than fair use, and come up empty. This underscores the stakes of the fair use fight: if Judge Chin ruled against Google’s fair use argument, it would mean that it was effectively impossible to build a book search engine as comprehensive as the one Google has built. That outcome doesn’t seem consistent with the constitution’s command that copyright promote the progress of science and the useful arts.

In any event, Google may not have much choice. If it signs an “opt-in” settlement with the Authors Guild and the Association of American Publishers, it’s likely to face a fresh round of lawsuits from other copyright holders who aren’t members of those organizations — and they might not be as willing to settle for a token sum. So if Google thinks its fair use argument is a winner, it might as well test it now before it’s paid out any settlement money. And if it’s not, then this business might be too expensive for Google to be in at all.

Seals on NJ voting machines, as of 2011

Part of a multipart series starting here.

During the NJ voting-machines trial, plaintiffs’ expert witness Roger Johnston testified that the State’s attempt to secure its AVC Advantage voting machines was completely ineffective: the seals were ill-chosen, the all-important seal use protocol was entirely missing, and anyway the physical design of this voting machine makes it practically impossible to secure using seals.

Of course, the plaintiffs’ case covered many things other than security seals. And even if the seals could work perfectly, how could citizens know that fraudulent vote-miscounting software hadn’t been perfectly sealed into the voting machine?

Still, it was evident from Judge Linda Feinberg’s ruling, in her Opinion of February 2010, that she took very seriously Dr. Johnston’s testimony about the importance of a seal use protocol. She ordered,


For a system of tamper-evident seals to provide effective protection seals must be consistently installed, they must be truly tamper-evident, and they must be consistently inspected. While the new seals proposed by the State will provide enhanced security and protection against intruders, it is critical for the State to develop a seal protocol, in writing, and to provide appropriate training for individuals charged with seal inspection. Without a seal-use protocol, the effectiveness of tamper-evident seals is significantly reduced.

The court directs the State to develop a seal-use protocol. This shall include a training curriculum and standardized procedures for the recording of serial numbers and maintenance of appropriate serial number records.

(With regard to other issues, she ordered improvements to the security of computers used to prepare ballot definitions and aggregate vote totals; criminal background checks for workers who maintain and transport voting machines; better security for voting machines when they are stored at polling places before elections; that election computers not be connected to the Internet; and better training for election workers in “protocols for the chain of custody and maintenance of election records.”)

Judge Feinberg gave the State until July 2010 to come up with a seal-use protocol. The State missed this deadline, but upon being reminded of it, submitted to the Court some woefully inadequate sketches of such a protocol. The Court rejected these sketches and told the State to come up with a real protocol. In September 2010 the State tried again with a lengthier document that was still short on specifics, and the Court again found it inadequate. In October 2010 the State tried again, asking for another 12-month extension, which the judge granted. In addition, the State proposed some new seal protocols but asked the Court not to show them to Plaintiffs’ experts, which is most unusual in the tradition of Anglo-American law, where the Court is supposed to hear from both sides before making a finding of fact. As of March 2011, Judge Feinberg had not yet decided whether the State has a seal-use protocol in compliance with her Order.

I’ve been observing the New Jersey Division of Elections quite closely over the past few years, as this litigation has dragged on. In some things they do a pretty good job: they are competent at voter registration, and they maintain enough polling places that the lines don’t get long—and these are basics of election administration that we should not take for granted. But with regard to the security of their voting machines, they just don’t get it. These direct-recording electronic voting machines are inherently insecure, and between 2008 and 2010 the State applied no fewer than seven different ad-hoc “patches” to try to secure them: four different seal regimes, followed by three different documents claiming to be seal-use protocols.

Is the New Jersey Division of Elections deliberately stalling, preserving insecure elections by dragging this case out, always proposing too little, too late and always requesting another extension? Or do they just not care, so through their lack of attention they always propose too little, too late and always request another extension? Even if the Division of Elections could come up with a seal use protocol that the Court would accept, how could we believe that these Keystone Kops could have the follow-through, the “security culture”, to execute such a protocol in the decades to come?

These voting machines are inherently insecure. The State claims they could be made secure with good seals. That’s not true: even with perfect seals and a perfectly executed seal-use protocol, there is the danger of locking fraudulent software securely into the voting machine! But even on its own flawed terms–trying to solve the problem with seals instead of with an inherently auditable technology–the State is failing to execute.