September 2011

DigiNotar Hack Highlights the Critical Failures of our SSL Web Security Model

This past week, the Dutch company DigiNotar admitted that their servers were hacked in June of 2011. DigiNotar is no ordinary company, and this was no ordinary hack. DigiNotar is one of the “certificate authorities” entrusted by web browsers to certify to users that they are securely connecting to web sites. Without this assurance, users could have their communications intercepted by any nefarious entity that managed to insert itself into the network between them and the sites they seek to reach.

It appears that DigiNotar did not deserve to be trusted with the responsibility of issuing SSL certificates, because its systems allowed an outside hacker to break in and issue himself certificates for any domain he wished. He did so for dozens of domains, including *.google.com and www.cia.gov. Anyone with possession of these certificates and control over the network path between you and the outside world could, for example, view all of your traffic to Gmail. The attacker in this case seems to be the same person who similarly compromised certificate-issuing servers for the company Comodo back in March. He has posted a new manifesto, and he claims to have compromised four other certificate authorities. All signs point to the conclusion that this person is an Iranian national who supports the current regime, or is a member of the regime itself.
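
To make the failure mode concrete: a browser accepts a certificate if any one of the hundreds of roots in its trust store vouches for it, with no notion of which CA is the “right” one for a given site. The following is a minimal sketch of that check in Python (the hostname is merely illustrative):

```python
# A sketch of the trust decision a browser makes: validation succeeds if
# *any* root in the platform trust store vouches for the certificate, so a
# rogue DigiNotar-signed certificate for *.google.com would pass this same
# check on any system that still trusts DigiNotar's root.
import socket
import ssl

def issuer_of(hostname: str, port: int = 443) -> dict:
    """Connect with default (system) trust settings and report which CA
    signed the server's certificate."""
    ctx = ssl.create_default_context()  # loads the platform's root store
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of relative distinguished names; flatten it
    return dict(rdn[0] for rdn in cert["issuer"])

if __name__ == "__main__":
    print(issuer_of("www.google.com"))
```

Note that nothing in this check ties www.google.com to any particular CA; that missing binding is exactly what the proposals discussed in the comments below attempt to supply.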

The Comodo breach was deeply troubling, and the DigiNotar compromise is far worse. First, this new break-in affected all of DigiNotar’s core certificate servers, whereas the Comodo breach was more contained. Second, it afforded the attacker the ability to issue not only baseline “domain validated” certificates but also higher-security “extended validation” certificates, and even the special certificates used by the Dutch government to secure itself (see the Dutch government’s fact sheet on the incident). However, the damage was by no means limited to the Netherlands, because any certificate authority can issue certificates for any domain. Third, unlike in the Comodo case, we have actual evidence of these certificates being deployed against users in the real world; they appear to have been used widely against Iranian users on many different Iranian internet service providers. Finally, and perhaps most damning for DigiNotar, the break-in was not detected for a whole month, and was then not disclosed to the public for almost two more months (see the timeline at the end of this incident report by Fox-IT). The public’s security was put at risk, and browser vendors were prevented from implementing fixes, because they were kept in the dark. Indeed, DigiNotar seems to have intended never to disclose the problem, and was only forced to do so after a perceptive Iranian Google user noticed that their connections were being hijacked.

The most frightening thing about this episode is not just that a particular certificate authority allowed a hacker to critically compromise its operations, or that the company failed to disclose this to the affected public. More fundamentally, it reminds us that our web security model is prone to failure across the board, as I noted at the time of the Comodo breach.

I recently spoke on the subject at USENIX Security 2011 as part of the panel “SSL/TLS Certificates: Threat or Menace?” (video and audio here, if you scroll down to Friday at 11:00 a.m., and slides here).

Comments

  1. I’m putting in a vote for Convergence.

    Also, this video may be of interest to some of you: http://youtube.com/watch?v=Z7Wl2FW2TcA It’s Moxie Marlinspike’s presentation at Black Hat ’11 and, in addition to some entertainingly delivered information on the Comodo attack, SSL’s history, and internet security in general, Moxie gives a description of what he found out about the ‘Iranian’ hacker.

    I’m in no position to determine this individual’s/group’s geographic origin, but the information Moxie presented is relevant and interesting.

  2. Why is the IP address displayed for the Yahoo server (I have Flagfox on my browser) coming up as if it is based in Iran?!?!?! Does this have to do with the recent cert hacks?!?!? Anyone else getting this??

  3. So Mozilla’s response to this discovery was quick and, from what I know, effective: they made a prompt announcement, provided information about how to nix the root certificate for this CA, and a few days later released a software update which pulled the cert completely from their browser.

    As I understand it, this effectively makes their certs useless for a big chunk of web users out there who are using Firefox. If IE and Chrome had responded similarly, all their certs would have been rendered useless in a short time frame.

    The obvious side effect of this, to me, seems to be that DigiNotar is basically no longer a CA. Their entire business selling certificates is destroyed (to say nothing of the less tangible damage to their reputation in their other businesses).

    This sounds like a pretty severe punishment for a CA – they are effectively Out of the Game, at least until such time as their certs are back in the major browsers.

    So my question is: is it really such a broken system if the penalties for failure are so significant? I guess I am assuming DigiNotar will basically be out of the CA business forever as a result of this, so I feel like the high level of accountability that CAs have in this system means that it is somewhat self-correcting.

    That is obviously based on a series of other assumptions – that the other major browser developers will ruthlessly pull DigiNotar certs – that might not be true.

    All that said, I totally agree there are too many entities issuing certificates, and with the other summary dot points as well!
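
    To make concrete what “pulling the cert” means mechanically: trust is nothing more than the set of root certificates a client’s verifier will chain to, so shipping an update whose bundle omits DigiNotar’s root instantly invalidates every certificate issued under it. A minimal sketch, assuming a hypothetical curated_roots.pem file from which that root has been removed:

    ```python
    # A sketch of client-side distrust: verify against a curated root bundle
    # instead of the platform's default store. 'curated_roots.pem' is a
    # hypothetical file with DigiNotar's root certificate deleted.
    import socket
    import ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verifies chain + hostname
    ctx.load_verify_locations(cafile="curated_roots.pem")

    # Any server whose chain ends at the removed root now fails the handshake
    # with CERTIFICATE_VERIFY_FAILED; every other site works as before.
    with socket.create_connection(("www.example.com", 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
            print(tls.version())
    ```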

    • I know that Chrome has been updated to remove DigiNotar certs.
      http://googlechromereleases.blogspot.com/2011/09/stable-channel-update.html

      Microsoft are making suitable noises, too.
      http://www.microsoft.com/technet/security/advisory/2607712.mspx

      So, yes, it looks like the other browsers are taking similar steps. I agree, it looks like DigiNotar’s CA days are numbered.

      I believe that this is the first time that the Browser makers have “nuked a CA from orbit” like this. It should serve as a reminder to the others of what can happen. Whether they pay attention is another question.

      • It’s good that IE & Chrome & Firefox made a quick update. But there are plenty of users out there running old browsers, or who have auto-updates turned off. And those folks are still vulnerable, and will stay that way effectively forever.

        Also, what about browsers in mobile devices? I have no idea what set of CAs is recognized by a BlackBerry, Android, or iPhone. My Android hasn’t done an update that I’ve noticed, and given how long the lead time seems to be from when Google releases an update until it gets through Motorola and Verizon, it may be a while.

        There’s still a fundamental problem, and fixing it in the end-user devices isn’t very effective. It needs to be fixed in the infrastructure.

        • After posting the above comment, I ran across this article: http://www.securelist.com/en/blog/208193111/Why_Diginotar_may_turn_out_more_important_than_Stuxnet which says in part:

          Mobile devices — While browsers for desktops and laptops are receiving updates to blacklist these CAs it remains very quiet on the mobile front. This is especially worrisome as *.android.com is one of the targeted domains in this attack. Here’s a simple guideline: If a device can do email or web browsing then the CAs need to be revoked on that device.

          Apple — So far it’s not known if Apple is even planning on revoking these CAs. I don’t understand why Apple is keeping radio silence on this and quite frankly it’s unacceptable. Using third party web browsers/email clients is the way to go.

          So it looks like my hypothesis may be correct…

    • “So my question is: is it really such a broken system if the penalties for failure are so significant?”

      The major browsers responded quite satisfactorily, and my hope is that this will serve as a strong incentive for CAs to behave better. That being said, there have been several previous instances that led to no action on the part of the browsers. And even if the incentives are right, the attack surface is so large that someone is bound to make a mistake somewhere (and they might not even notice it in time to inform the browser vendors).

      And in any case, there was a three-month period during which the CA trust model was completely compromised by this person. This could have been prevented or mitigated in various ways by improving or replacing our current PKI architecture, but as it stands we’re wide open to this happening again.

      And even the swift action by the major desktop browsers doesn’t fix this particular exploit for all users. For instance, the complicated process for pushing updates to Android, coupled with the lack of any user-customizable trust preferences, means that today and for the foreseeable future most Android users are still at risk from these known-bad certificates:
      http://code.google.com/p/android/issues/detail?id=11231#c92

      The critical failure of the model is that when one part of it fails, the whole system falls apart… and the likelihood of another such failure is uncomfortably high.

      • Jenora Feuer says

        Of course, another real worry here is that the wrong lesson will be learned.

        After all, the public disclosure of this incident has resulted in DigiNotar essentially being removed as a valid CA, to the point where nobody is likely to use them.

        When the penalties for failure are so significant, as the previous poster mentioned, the more likely result is not improved security but better-hidden failures, a cover-up that the people who broke into the system in the first place are often willing to help with.

        The next time this happens, the breach may stay hidden even longer than three months. If the penalty for your failure becoming known is the complete irrelevance of that business unit, the company may literally have nothing left to lose by hiding it, no matter what sorts of legal penalties are set up. The business is dead anyway; might as well lie to keep it running as long as possible…

  4. Nicolas Christin says

    One possible solution is to trust certificates only when a majority of independent observers believe they are trustworthy, rather than taking the word of a given CA. This is roughly what is implemented by Perspectives and its recent follow-up, Convergence. (Perspectives was developed at CMU; Convergence, an extension of Perspectives, is developed by an independent researcher known by the alias Moxie Marlinspike.)

    It seems like the perfect time to accelerate deployment of such a model.
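
    A minimal sketch of that majority-vote idea, assuming hypothetical notary servers that each report the certificate they see for a site from their own network vantage point (real Perspectives/Convergence notaries speak a signed HTTP protocol, which is omitted here):

    ```python
    # Sketch of notary-style validation: compare the certificate we see with
    # what several independent vantage points see, and accept only on
    # majority agreement. A man-in-the-middle near the user cannot forge
    # what distant notaries observe.
    import hashlib
    import socket
    import ssl
    from collections import Counter

    def leaf_fingerprint(hostname: str, port: int = 443) -> str:
        """SHA-256 of the server's certificate, fetched without CA
        validation; the point is to compare observations, not signatures."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((hostname, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                der = tls.getpeercert(binary_form=True)
        return hashlib.sha256(der).hexdigest()

    def majority_agrees(my_view: str, notary_views: list) -> bool:
        """Accept only if most notaries saw the same certificate we did."""
        votes = Counter(notary_views)
        return votes[my_view] > len(notary_views) / 2
    ```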

    • Yes, Perspectives/Convergence is interesting. EFF is planning to release a “distributed” version of their SSL Observatory aimed at making further improvements to this approach.

      However, I think it’s hard to get this right from a user interface perspective… it’s also hard to make it possible for “average” users to understand or express anything about the community of authorities they trust.

      I am more excited about efforts like DANE (which puts certificate information in signed DNS records) and HSTS (a trust-on-first-use model that could be modified to allow sites to indicate a whitelist of authoritative CAs for any given domain).
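
      For the DANE approach, here is a rough sketch of the client-side check, using the dnspython package; it handles only the common “full certificate, SHA-256” TLSA parameters and omits the DNSSEC signature validation that a real client would also require:

      ```python
      # Sketch of a DANE/TLSA lookup: the domain owner publishes the expected
      # certificate hash in DNS, so the client need not take a CA's word.
      import hashlib

      import dns.resolver  # pip install dnspython

      def tlsa_matches(hostname: str, cert_der: bytes, port: int = 443) -> bool:
          name = f"_{port}._tcp.{hostname}"
          try:
              answers = dns.resolver.resolve(name, "TLSA")
          except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
              return False  # no DANE policy published for this service
          digest = hashlib.sha256(cert_der).digest()
          for record in answers:
              # selector 0 = full certificate, matching type 1 = SHA-256
              if record.selector == 0 and record.mtype == 1 and record.cert == digest:
                  return True
          return False
      ```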

      • Instead of trying to have a global consensus about what name is served by what host, another approach is to have each individual web browser gain confidence in information based on what it has seen in the past. For example, browsers might raise a red flag if the CA information for google.com is different today than it was yesterday.

        http://blog.lexspoon.org/2011/04/i-like-latest-ideas-from-dan-wallach.html

        In addition to gaining information based on where you’ve personally gone in the past, it can also help to have information from your friends about what sites they have seen.

        http://www.waterken.com/dev/YURL/Definition/
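
        A minimal sketch of that trust-on-first-use idea: remember the certificate fingerprint seen for each host, and flag any later change (the pin-file location is just illustrative). As the reply below points out, legitimate certificate roll-over makes this check noisy in practice.

        ```python
        # Sketch of trust-on-first-use pinning, the core of tools like
        # Certificate Patrol: trust and record a host's certificate the
        # first time it is seen, and raise a red flag whenever it changes.
        import hashlib
        import json
        import os

        PIN_FILE = os.path.expanduser("~/.cert_pins.json")  # illustrative

        def check_pin(hostname: str, cert_der: bytes) -> bool:
            """True if the certificate matches the recorded pin (or is new)."""
            pins = {}
            if os.path.exists(PIN_FILE):
                with open(PIN_FILE) as f:
                    pins = json.load(f)
            fingerprint = hashlib.sha256(cert_der).hexdigest()
            if hostname not in pins:  # first use: trust it and remember it
                pins[hostname] = fingerprint
                with open(PIN_FILE, "w") as f:
                    json.dump(pins, f)
                return True
            return pins[hostname] == fingerprint  # a change is a red flag
        ```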

        • What does a user do when an extension like Certificate Patrol tells them that the certificate has changed? We know from user studies that users ignore security errors. This is particularly true when there are many false positives, which is the case with extensions like Certificate Patrol because sites regularly change their certs (for roll-over and other purposes).

          Do you really think that users are likely to actively manage lists of trusted acquaintances? If that were the case, PGP would have already caught on (indeed, PGP/GPG keys have been supported in TLS for a while now).

          I put both of these ideas in the “good idea in principle, but not likely to be adopted” category.

  5. David Karger says

    This seems like a great opportunity to apply redundancy. How hard would it be to adapt SSL to use multiple layers of encryption from several distinct providers?

    • The encryption did not fail… The connection to the interception point was perfectly encrypted (and so was the outbound connection to Google). The problem is that the browser believed that the intercepting machine was Google.

      <<insert CA race to the bottom rant here>>

      Here, because a CA failed, a hacker could generate a certificate and present that to a browser, saying “I’m Google, believe me, DigiNotar says so.” To get a browser to actually connect to that “fake server” requires either a DNS or a routing hack, but apparently that too was accomplished in Iran.
      (Is the Iran government to blame for the interception? Anonymous Iran? Who knows.)
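
      A minimal sketch separating the two properties at issue: TLS gives you an encrypted channel to somebody even with all identity checks disabled, while the chain and hostname checks are what bind that somebody to a name, and those are the checks the rogue certificate defeated:

      ```python
      # Sketch of encryption vs. authentication in TLS (hostname illustrative).
      import socket
      import ssl

      def encrypted_but_unauthenticated(hostname: str, port: int = 443):
          """All identity checks disabled: the traffic is encrypted, but the
          peer could be anyone, including an interception box."""
          ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
          ctx.check_hostname = False
          ctx.verify_mode = ssl.CERT_NONE
          with socket.create_connection((hostname, port), timeout=10) as sock:
              with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                  return tls.cipher()  # strong cipher, unknown peer

      def encrypted_and_authenticated(hostname: str, port: int = 443):
          """Default settings: the chain must end at a trusted root AND the
          certificate's name must match. A DigiNotar-signed *.google.com
          certificate passed both checks, the failure described above."""
          ctx = ssl.create_default_context()
          with socket.create_connection((hostname, port), timeout=10) as sock:
              with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
                  return tls.cipher()
      ```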