March 26, 2017

Building a better CA infrastructure

As several Tor project authors, Ben Adida, and many others have written, our certificate authority infrastructure has a fundamental flaw: any one CA, anywhere on the planet, can issue a certificate for any web site, anywhere else on the planet. This was tolerable when the only game in town was VeriSign, but now it’s untenable. So what solutions are available?

First, some non-solutions: extended validation certs do nothing useful. Will users be properly trained to look for the extra changes in browser behavior, and to scream when those changes are absent because a site presents only a normal cert? Fat chance. Similarly, certificate revocation lists buy you nothing if you can’t actually download them (a notable issue if you’re stuck behind the firewall of somebody who wants to attack you).

A straightforward idea is to track the certs you see over time and generate a prominent warning if you see something anomalous. This is available as a fully-functioning Firefox extension, Certificate Patrol. This should be built into every browser.
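A minimal sketch of that idea: trust-on-first-use pinning in the spirit of Certificate Patrol. The store filename and the return values are invented for illustration; a real implementation would track the full certificate chain, expiry, and more.

```python
# Trust-on-first-use certificate pinning sketch (Certificate Patrol-style).
# The on-disk store format and filename are illustrative only.
import hashlib
import json
import os

STORE = "seen_certs.json"  # hypothetical local pin store

def check_cert(host: str, der_cert: bytes) -> str:
    """Return 'first-seen', 'match', or 'CHANGED' for this host's certificate."""
    fp = hashlib.sha256(der_cert).hexdigest()
    pins = {}
    if os.path.exists(STORE):
        with open(STORE) as f:
            pins = json.load(f)
    if host not in pins:
        pins[host] = fp  # trust on first use: remember what we saw
        with open(STORE, "w") as f:
            json.dump(pins, f)
        return "first-seen"
    return "match" if pins[host] == fp else "CHANGED"
```

A browser built on this would show nothing on "match", a mild notice on "first-seen", and a prominent warning on "CHANGED".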

In addition to your first-hand personal observations, why not leverage other resources on the network to make their own observations? For example, while Google is crawling the web, it can easily save SSL/TLS certificates when it sees them, and browsers could use a real-time API much like Google SafeBrowsing. A research group at CMU has already built something like this, which they call a network notary. In essence, you can have multiple network services, running from different vantage points in the network, all telling you whether the cryptographic credentials you got match what others are seeing. Of course, if you’re stuck behind an attacker’s firewall, the attacker will similarly filter out all these sites.
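The notary idea reduces to a quorum check across vantage points. In this sketch the quorum threshold is an invented parameter; the real CMU system (Perspectives) is considerably more elaborate, keeping signed observation histories per notary.

```python
# Perspectives-style notary check: several vantage points each report the
# certificate fingerprint they observed for a host; the client proceeds only
# if enough of them agree with the fingerprint it was handed directly.

def notary_verdict(my_fp: str, notary_fps: list[str], quorum: float = 0.75) -> bool:
    """True if at least `quorum` of the notaries saw the same fingerprint we did."""
    if not notary_fps:
        return False  # no independent observations: can't corroborate
    agree = sum(1 for fp in notary_fps if fp == my_fp)
    return agree / len(notary_fps) >= quorum
```

Note the failure mode described above: an attacker's firewall that blocks the notaries leaves `notary_fps` empty, and the client must decide what to do with "can't corroborate".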

UPDATE: Google is now doing almost exactly what I suggested.

There are a variety of other proposals out there, notably trying to leverage DNSSEC to enhance or supplant SSL/TLS certificates. Since DNSSEC provides more control over your DNS records, it also provides more control over who can issue SSL/TLS certificates for your web site. If and when DNSSEC becomes universally supported, this would be a bit harder for attacker firewalls to filter without breaking everything, so I certainly hope this takes off.
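The core of the DNSSEC approach (standardized as DANE/TLSA, RFC 6698) is simple: the domain owner publishes a hash of the server certificate in a DNSSEC-signed record, and the client checks the certificate it received against it. This sketch shows only the matching step; real TLSA records also carry usage, selector, and matching-type fields.

```python
# DANE/TLSA-style matching sketch: compare the presented certificate against
# a hash published in (DNSSEC-signed) DNS. Record retrieval and DNSSEC
# validation are omitted.
import hashlib

def tlsa_matches(der_cert: bytes, published_sha256_hex: str) -> bool:
    """True if the presented certificate hashes to the published record."""
    return hashlib.sha256(der_cert).hexdigest() == published_sha256_hex
```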

Let’s say that future browsers properly use all of these tricks and can unquestionably determine for you, with perfect accuracy, when you’re getting a bogus connection. Your browser will display an impressive error dialog and refuse to load the web site. Is that sufficient? This will certainly break all the hotel WiFi systems that want to redirect you to an internal site where they can charge you to access the network. (Arguably, this sort of functionality belongs elsewhere in the software stack, such as through IEEE 802.21, notably used to connect AT&T iPhones to the WiFi service at Starbucks.) Beyond that, though, should the browser just steadfastly refuse to allow the connection? I’ve been to at least one organization whose internal WiFi network insists that it proxy all of your https sessions and, in fact, issues fabricated certificates that you’re expected to configure your browser to trust. We need to support that sort of thing when it’s required, but again, it would perhaps best be supported by some kind of side-channel protocol extension, not by doing a deliberate MITM attack on the crypto protocol.

Corner cases aside, what if you’re truly in a hostile environment and your browser has genuinely detected a network adversary? Should the browser refuse the connection, or should there be some other option? And if so, what would that be? Should the browser perhaps allow the connection (with much gnashing of teeth and throbbing red borders on the window)? Should previous cookies and saved state be hidden away? Should web sites like Gmail and Facebook allow users to have two separate passwords, one for “genuine” login and a separate one for “Yes, I’m in a hostile location, but I need to send and receive email in a limited but still useful fashion?”

[Editor’s note: you may also be interested in the many prior posts on this topic by Freedom to Tinker contributors: 1, 2, 3, 4, 5, 6, 7, 8 — as well as the “Emerging Threats to Online Trust: The Role of Public Policy and Browser Certificates” event that CITP hosted in DC last year with policymakers, industry, and activists.]


  1. … for use on hostile networks, is probably one of the best purely-technological solutions I’ve heard to the issue, at least for certain classes of applications (e-mail might not even be the best example here).

    for applications that really demand it — for online banking and the like — I personally think the idea of service providers distributing keys out-of-band is one that deserves more attention: imagine if, when you signed up for a bank and it sent you an ATM card, it also sent you a certificate for its HTTP server on a USB stick, with an installer that added it to your OS or browser’s certificate store, perhaps with an authentication system designed in to ensure that the device was authentic.

    this, at least, makes it difficult to scale the compromise of a CA to the point where it breaks the entire Internet, and aligns the trust model of HTTPS with a trust model people are used to: dealing with companies that they’re doing business with.
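The out-of-band distribution idea above can be sketched as an installer that checks the certificate on the USB stick against a short code mailed separately on paper before pinning it. The code format (first 8 hex digits of the SHA-256) and the pin-store shape are invented for illustration.

```python
# Sketch of out-of-band bank certificate installation: the bank mails a
# certificate on a USB stick and a short verification code on paper; the
# installer checks the two against each other before pinning the cert.
import hashlib

def install_bank_cert(der_cert: bytes, paper_code: str,
                      pin_store: dict, host: str) -> bool:
    """Pin the cert for `host` only if it matches the paper code."""
    digest = hashlib.sha256(der_cert).hexdigest()
    if digest[:8] != paper_code.lower():
        return False  # stick may be tampered with or counterfeit
    pin_store[host] = digest
    return True
```

Splitting the trust across two delivery channels (post and USB) means an attacker has to compromise both to install a bogus key.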

  2. Jim Lyon says:

    The current security infrastructure basically attempts to answer the question “is the site that presented this certificate authentic?” When you visit a site over and over, it ignores previous experience and attempts to answer the question just using the presented certificate.

    A much better approach is for a user’s computer system to remember some history. Every day, when you visit GMail (for example), your computer sees the same certificate. But suppose that one day, while you’re traveling in some authoritarian country, your computer receives a different certificate when visiting GMail. This certificate is ostensibly valid, but rooted in a different CA than what you’ve seen before. If this occurs, you should be very suspicious, regardless of the “trusted” status of the certificate or CA.

    Similarly, if you’ve been visiting a site with a self-signed certificate for the past five years, and you’ve been getting good service from them, you probably have more trust in their identity from this untrusted certificate than you would if you suddenly saw a trusted cert.

    Experience is important — in most cases, even more important than credentials. If you’ve had a friend for years, and you meet someone else who presents lots of identity cards that prove that he’s your friend, you just won’t believe him. Experience will trump credentials nearly every time.

    This is not to argue that credentials in general, and CAs in particular, are worthless. They are quite useful during the early phases of a relationship when you haven’t yet developed trust.

    • That’s precisely what Certificate Patrol is all about and why I endorsed it in the main body of the article. The only catch is if you’re starting behind a firewall where you never get a proper view of the Internet, that mechanism alone would be insufficient to help you. That’s why I’m intrigued by solutions based around DNSSEC, although I’m sure it’s far from being a panacea on its own, particularly if you allow for man-in-the-middle attacks where the attacker systematically resigns things that others signed, in an attempt to virtualize your view of the outside universe. (In other words, I don’t think any of this is going to offer effective countermeasures when you live in a totalitarian country.)

      • If we have a system behind a firewall, it is possible to have the firewall lie consistently to the system, such that the system does not know it is lied to. If we insert fake DNSSEC root certificates and fake CA root certificates, all kinds of spoofs can be done.

        The only way around the issue is having alternative communication channels (CD-ROMs, USB sticks), and then we can only detect manipulation. (Iff the software is doing proper checking!)

    • A much better approach is for a user’s computer system to remember some history. Every day, when you visit GMail (for example), your computer sees the same certificate. But suppose that one day, while you’re traveling in some authoritarian country, your computer receives a different certificate when visiting GMail. This certificate is ostensibly valid, but rooted in a different CA than what you’ve seen before. If this occurs, you should be very suspicious, regardless of the “trusted” status of the certificate or CA.

      Say instead of the chain of trust looking like

      – A Systems Level 1 CA (root cert)
      – A Systems Level 2 CA

      it looks like

      – B’s Discount CA Level 1 CA (root cert)
      – B’s Discount CA RA
      – “A Systems Level 1 CA” cert issued by attacker (matches everything in the real cert except for serial number/fingerprint/key)
      – “A Systems Level 2 CA” cert issued by attacker (matches everything in the real cert except for serial number/fingerprint/key)

      how would you design the UI of a notary system’s client to pick up on this as a legitimately suspicious change and report it to the user in a maximally effective way, but not alert the user when a site legitimately changes CAs?

      • …which doesn’t exist today.

        Soghoian and Stamm suggest other metrics for detecting fishy changes of CA… primarily a change of country of operations:

        Certified Lies: Detecting and Defeating Government Interception Attacks Against SSL (2010)

      • dwallach says:

        One possibility is to use the server’s existing public key (not any CA) to endorse the new public key alongside the new cert. That way, if a server (for whatever reason) really wants to switch to a new CA provider before the old cert has expired, the transition will be something that network notaries (and/or local policy engines) can use as a signal in what starts to look not entirely unlike a spam classification system. Some signals tell you a public key is valid while others tell you a public key is fishy.

        • while that does sound like a good solution to the technical problem associated with attacks of that form, I didn’t intend for the question to be about that. but ultimately, in the Comodo intrusion, nothing technical broke down (besides the RA’s SQL); what broke down is the non-technical process by which entities in the real world are bound to certificates. presumably you can’t design any notary system to be 100% resilient to false positives; at some point, you’re going to have to expose your suspicions to the user without silently taking action.

          how do you expose “normally we’d expect Company You’ve Never Heard Of to say that a given certificate belongs to Example Bank, N.A. directly, but today, Company You’ve Never Heard Of [and, indeed, is being spoofed by an attacker] is being vouched for by someone different who still says that the certificate belongs to Example Bank, N.A.” to the user as an error worthy of suspicion?
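One way to think about the detection question in this sub-thread is the signal-combining approach Dan mentions above: score each observed change (key, issuer, CA country of operations) rather than making a single yes/no call. The weights and threshold here are invented purely for illustration.

```python
# Sketch: combine change signals into a suspicion score instead of a binary
# verdict. Weights and threshold are invented; a real system would tune them.

def suspicion_score(old: dict, new: dict) -> int:
    """old/new carry 'key_fp', 'issuer', and 'ca_country' for a host."""
    score = 0
    if old["key_fp"] != new["key_fp"]:
        score += 2  # the server's key itself changed
    if old["issuer"] != new["issuer"]:
        score += 2  # a different CA is now vouching for the site
    if old["ca_country"] != new["ca_country"]:
        score += 3  # CA jurisdiction changed (the Soghoian/Stamm signal)
    return score

WARN_THRESHOLD = 4  # invented: one change alone is a soft notice;
                    # key and issuer changing together crosses the line

def suspicious(old: dict, new: dict) -> bool:
    return suspicion_score(old, new) >= WARN_THRESHOLD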

  3. These are all good long term, but short term, how about fixing CA incompetence? Why should anyone be issuing Google or Yahoo certificates without checking with Google and Yahoo first? Why are they issuing them when the existing certificates have months or years left to go? The Comodo certificate debacle should have been detected the moment the domains were asked for, on the basis that some Italian reseller is not going to be the place where these domains get purchased.

    FWIW, my UK banks now use the IC in my debit card and a separate card reader to authenticate logon and all transactions that move money out. The logon is vulnerable to MITM, but the transfer isn’t. Of course, this introduces a new failure mode that I have encountered once: get your cards stolen and there is no way to transfer money out of your account and into another account for which you have cards. As a result my backup account (with a card secure at home) is now kept with a float large enough to keep me going until the main a/c gets a new card.

  4. In addition to the antisecurity issues you mention, there’s the whole concept of browser-based signon, which makes little or no sense in the context of newer mobile devices. (Yes, I’m tired of connecting to a hotel or airport network, especially a “free” one, only to find out that neither my email nor any of my apps work until I fire up a browser and sign in. And sometimes not even then, because my local cache has been polluted by whatever the network said before it got the browser-based signin. Then there’s the matter of per-device charges, so that a hotel guest with a phone, a laptop and an e-book reader is supposed to fork over $40 a day…)

  5. Tony Finch says:

    The IETF “DNS-based Authentication of Named Entities (DANE)” working group is developing a standard way to use DNSSEC to shore up TLS.

  6. Jim Lyon says:

    @Dan: “… use the server’s existing public key (not any CA) to endorse …”

    You’re definitely onto something here. The entire point of the CA infrastructure is to help you to determine whether a specific key belongs to a specific entity. If you’ve grown to trust a specific public key, it doesn’t matter whether the organization changes CAs or not. Conversely, if you see a simultaneous change in both the public key and the CA presented by a web site, you should be very suspicious.

    I could easily see a world in which we expect changes in public keys to be accompanied by a certificate derived from the old public key, as a means of encouraging trust of the new PK.

    @Mathfox: If you spend your entire life behind the same lying firewall, I agree things are completely hopeless. So let’s restrict our focus to those who travel between such firewalls and the outside world. Doing a good job with this will probably add a small amount of discouragement to creating such firewalls in the first place.
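The key-continuity endorsement Dan and Jim describe above can be sketched in a few lines: the outgoing key signs the incoming public key, and a client that pinned the old key checks that signature. This uses the third-party `cryptography` package, and Ed25519 stands in for whatever key type a server would actually use.

```python
# Key-continuity sketch: the server's old key endorses its replacement,
# so clients that trust the old key can accept the rotation. Ed25519 and
# the function names are illustrative choices, not part of any standard.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def endorse(old_private_key, new_public_bytes: bytes) -> bytes:
    """Server side: the outgoing key signs the incoming public key."""
    return old_private_key.sign(new_public_bytes)

def endorsed_by(old_public_key, new_public_bytes: bytes, sig: bytes) -> bool:
    """Client side: does the key we pinned vouch for the new key?"""
    try:
        old_public_key.verify(sig, new_public_bytes)
        return True
    except InvalidSignature:
        return False
```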

  7. Lex Spoon says:

    For the second trick you describe, where you leverage Google’s view of the Internet, imagine if Google results gave you not just a list of URLs but also a list of certificates that Google had seen for the resulting pages. Then when the browser follows one of the links, it could verify that the site is certified by the same key that Google saw.

    This is the Y property, and it forms a much better basis for secure URLs than do certificate authorities.
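Lex’s suggestion amounts to the browser comparing the certificate a site presents against what the crawler saw when it indexed the page. A minimal sketch, with an invented `cert_sha256` field standing in for whatever format search results might actually carry:

```python
# Sketch: search results carry the certificate fingerprint the crawler saw;
# the browser compares it to what the site presents when the link is followed.
import hashlib

def verify_followed_link(search_result: dict, presented_der: bytes) -> str:
    """Return 'match', 'MISMATCH', or 'no-data' for a followed link."""
    expected = search_result.get("cert_sha256")
    if expected is None:
        return "no-data"  # crawler never saw this page over TLS
    fp = hashlib.sha256(presented_der).hexdigest()
    return "match" if fp == expected else "MISMATCH"
```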