April 23, 2024

Web Security Trust Models

[This is part of a series of posts on this topic: 1, 2, 3, 4, 5, 6, 7, 8.]

Last week, Ed described the current debate over whether Mozilla should allow an organization that is allegedly controlled by the Chinese government to be a default trusted certificate authority. The post prompted some very insightful feedback, including questions about alternative trust models. I will try to lay out the different types of models at a high level, and I encourage corrections or clarifications. It’s worth re-stating that what we’re talking about is how you, as a web user, know that the site you are talking to is who it claims to be (and if it is, you can be confident that your other security measures, like end-to-end encryption, are actually working).

Flat and Inflexible
This is the model we use now. Your browser comes pre-loaded with a list of Certificate Authorities that it will trust to guarantee the authenticity of web sites you visit. For instance, Mozilla (represented by the little red dragon in the diagram) ships Firefox with a list of pre-approved CAs. Each browser vendor makes its own list (here is Mozilla’s policy for how to get added). The other major browsers use the same model and have themselves already allowed CNNIC to become trusted for their users. This is a flat model because each CA has just as much authority as the others, thus each effectively sits at the “root” of authority. Indeed any of the CAs can sign certificates for any entity in the world (hence the asterisk in each). They do not coordinate with each other, and can sign a certificate for an entity even if another CA has already done so. Furthermore, they can confer this god-like power on other entities without oversight or the prior knowledge of the end users or the entities being signed for.
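
As a concrete illustration of the flat list, here is a minimal sketch, assuming Python 3 and the operating system's default trust store (browsers ship their own bundles, but the structure is the same): every root it prints is equally trusted for every domain on the web.

    # Minimal sketch: enumerate the root CAs in this machine's default trust
    # store. Each one can vouch for any site, with no coordination required.
    import ssl

    ctx = ssl.create_default_context()   # loads the platform's default roots
    roots = ctx.get_ca_certs()           # parsed root certificates
    print(f"{len(roots)} roots, each trusted for every domain")
    for ca in roots[:5]:
        subject = dict(pair for rdn in ca["subject"] for pair in rdn)
        print(" ", subject.get("organizationName", subject.get("commonName")))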

This is also an inflexible model because there is no reasonable way to impose finer-grained control on the authority of the CAs. The standard used is called X.509. It doesn’t allow you to trust Verisign to a greater or lesser extent than the Chinese government — it is essentially all or nothing for each. You also can’t tell your browser to trust CNNIC only for sites in China (although domain name constraints do exist in the standard, they are not widely implemented). It is also inflexible because most browsers intentionally make it difficult for a user to change the certificate list. It might be possible to partially mitigate some of the CA/X.509 shortcomings by implementing more constraints, improving the user interface, adding “out of band” certificate checks (like Perspectives), or generating more paranoid certificate warnings (like Certificate Patrol).
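
The standard does define a name constraints extension that could scope a CA this way; a sketch of what it looks like, assuming the third-party pyca/cryptography package (illustration only, not a working CA):

    # Sketch: the X.509 NameConstraints extension that could limit a CA to the
    # .cn namespace. Few CA certificates carry it, and verifier support is uneven.
    from cryptography import x509

    cn_only = x509.NameConstraints(
        permitted_subtrees=[x509.DNSName("cn")],  # matches example.cn, www.example.cn, ...
        excluded_subtrees=None,
    )
    # If a CA certificate carried this extension (marked critical), a conforming
    # verifier would reject any certificate it signed for a name outside .cn:
    #   builder = builder.add_extension(cn_only, critical=True)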

Decentralized and Dependent
In the early days of the web, an alternative approach already existed. This model did away entirely with a default set of external trusted entities and gave complete control to the individual. The idea was that you would start by trusting only people you “knew” (smiley faces in the diagram) in order to begin building a “web of trust.” You then extend this web by trusting those people to vouch for others that you haven’t met (kind of like a secure virtual version of Goodfellas). This makes it a fundamentally decentralized model. There is nothing preventing certain entities from gaining the trust of many people and thereby becoming de facto Certificate Authorities, but this has only happened within technically proficient communities, and in the case of USENIX the service was eventually discontinued.
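
As a toy sketch of this idea (hypothetical names, no real protocol), trust in such a model is simply reachability through a chain of introductions:

    # Toy sketch: you trust whoever you can reach through people who vouch
    # for one another, out to some maximum number of introductions.
    from collections import deque

    vouches = {                       # who has signed whose key (made-up data)
        "me": ["alice", "bob"],
        "alice": ["carol"],
        "carol": ["dave"],
    }

    def trusted(start, target, max_hops=2):
        queue, seen = deque([(start, 0)]), {start}
        while queue:
            person, hops = queue.popleft()
            if person == target:
                return True
            if hops < max_hops:
                for nxt in vouches.get(person, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, hops + 1))
        return False

    print(trusted("me", "carol"))   # True: alice vouches for carol
    print(trusted("me", "dave"))    # False: dave is three introductions away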

So, this is a system that is highly dependent on having some connection with whoever you want to communicate with. It has enjoyed some limited success via the PGP family of standards, but mostly for applications such as email or in more constrained situations like inter/intra-enterprise security. It is possible that with the boom in online social networks there is a new opportunity to renew interest in a web-of-trust style security architecture. The approach seems less practical for general web security because it requires the user to have some existing trust relationship with a site before using it securely. It is not necessarily an impossible approach (the mod_openpgp and mod_gnutls projects show some technical promise), but as a practical matter wide-scale adoption of a “web of trust” style security model for the web seems unlikely.

Hierarchical and Delegated
A third approach starts with a single highly trusted root and delegates authority recursively. Any authority can only issue certificates for itself or the entities that fall “underneath” it, thus limiting the god-like power of the flat model. This also pushes signing power closer to the authenticated sites themselves. It is possible that this authority could be placed directly in their hands, rather than requiring an external authority to approve of each new certificate or domain. Note that I am describing this in a very domain-centric way. If we are willing to fully buy into the domain hierarchy way of thinking about web security, there may be a viable implementation path for this model.
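
To make the delegation rule concrete, a minimal sketch (a hypothetical helper, not an existing API): an authority may vouch only for names underneath its own zone.

    # Sketch of the delegation rule in the hierarchical model.
    def may_issue(authority_zone: str, name: str) -> bool:
        return name == authority_zone or name.endswith("." + authority_zone)

    print(may_issue("cn", "example.cn"))              # True: the .cn registry can vouch for example.cn
    print(may_issue("cn", "example.com"))             # False: outside its branch of the hierarchy
    print(may_issue("example.cn", "www.example.cn"))  # True: a site can sign for its own subdomains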

Perhaps the greatest example of this delegation approach to web governance is the existing Domain Name System. Decisions at the root of DNS are governed by the international non-profit ICANN, which assigns authority to Top Level Domains (e.g., .com, .net, .cn), which then delegate further through a system of registrars. The biggest problem with tying site authentication to DNS is that DNS is deeply insecure. However, within the next year a more secure version of DNS, DNSSEC, is scheduled to be deployed at the DNS root. Any DNSSEC query can be verified by following the chain of authority back to the root, and the contents of the response are guaranteed to be unaltered along that chain of trust. The question is whether this infrastructure can also be used to distribute site certificates, which would provide hierarchical site authentication (and, in turn, permit encryption of traffic). CNNIC happens to also be the registry for the .cn TLD, so under this model it would be restricted to creating certificates for .cn domains. This approach is advocated by Dan Kaminsky (interview, presentation) and Paul Vixie (here, here). I’ve also found posts by Eric Rescorla and Jason Roysdon informative.
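
As a sketch of how that verification works, assuming the third-party dnspython package and a hypothetical signed zone (a full validator repeats this check, following DS records all the way to the root):

    # Sketch: fetch a zone's DNSKEY RRset and check that it verifies under the
    # zone's own keys. Error handling and chasing the chain to the root omitted.
    import dns.dnssec
    import dns.message
    import dns.name
    import dns.query
    import dns.rdatatype

    zone = dns.name.from_text("example.cn.")    # hypothetical signed zone
    query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
    response = dns.query.tcp(query, "8.8.8.8")  # any DNSSEC-aware resolver

    # The answer section carries the DNSKEY RRset and its RRSIG (order assumed here).
    dnskey_rrset, rrsig_rrset = response.answer[0], response.answer[1]
    try:
        dns.dnssec.validate(dnskey_rrset, rrsig_rrset, {zone: dnskey_rrset})
        print("DNSKEY RRset verifies for", zone)
    except dns.dnssec.ValidationFailure:
        print("signature check failed")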

If implemented via DNSSEC, this approach would thoroughly bind web site authentication to the DNS hierarchy, and the only assurance it would provide is that you are communicating with the person who registered the domain you are visiting. It would not provide any additional verification of who that person is, as Certificate Authorities theoretically could (but in practice don’t). Certificates were originally envisioned as a way to guarantee that a particular real-world entity was behind the site in question, but market pressures caused CAs to cut corners on the verification process. Most CAs now offer “Domain Validation” (DV) certificates that are issued without any human intervention and simply verify that the person requesting the certificate has control of the domain in question. These certificates are treated no differently than more rigorously verified certificates, so for all intents and purposes the DNSSEC certificate delegation model would provide at least the services of the current CA model. One exception is Extended Validation certificates, which require the CA to perform more rigorous checks and cause the browser URL bar to take on a “green glow”. It should, however, be noted that there are some security flaws with the current implementation.
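
For illustration, a sketch (assuming the pyca/cryptography package and a hypothetical site.pem file) of reading the policy OIDs that are, in practice, the main machine-readable difference between a DV and an EV certificate:

    # Sketch: list the certificate-policy OIDs a certificate asserts. An EV
    # certificate carries its issuer's EV policy OID (like the one quoted in
    # the comments below); browsers keep a list of which OIDs earn the green bar.
    from cryptography import x509
    from cryptography.x509.oid import ExtensionOID

    cert = x509.load_pem_x509_certificate(open("site.pem", "rb").read())
    policies = cert.extensions.get_extension_for_oid(
        ExtensionOID.CERTIFICATE_POLICIES
    ).value
    for policy in policies:
        print(policy.policy_identifier.dotted_string)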

[Update: I discuss the DNSSEC approach in more detail here]

Open Questions
Are there appropriate stopgap measures on the existing CA model that can limit authority of certain political entities? Are there viable user interface improvements? Are users aware enough of these issues to do anything meaningful with more information about certificates? Does the hierarchical model force us to trust ICANN, and do we? Does the DNS hierarchy appropriately allocate authority? Is domain name enough of a proxy for identity that a DNS-based system makes sense? Do we need better ways of independently validating a person’s identity and binding that to their public key? Even if an alternative model is better, how do we motivate adoption?

Comments

  1. Dan Kaminsky says

    EV has one job, and one job only: To prevent http://www.bank-of-america.com from being mistaken for http://www.bankofamerica.com. That’s it. That’s all it does. That’s all it was designed to do.

    Some idiot marketers screwed up royally and made bigger claims. Barth and Jackson put them in their place, and Sotirov and Zusman implemented the attacks based on what they wrote about.

    It’s actually a really fun exercise to try to figure out how to realistically block https:// (EV) sites from linking against https:// (DV) sites. Guess what — you can’t. You suddenly find yourself with an undeployable certificate.

    This is the problem with X.509. Everywhere you turn, deployment blocking friction. We have to get past this.

    • You could block non-EV sites by enforcing use of the EV certificate policy (2.16.840.1.114028.10.1.2) across each certification path required by a given page. For this to work, you’d have to allow anyPolicy, since many (if not most) EV CAs include the anyPolicy OID in their certificates. The enforcement would need to be processed from the trust anchor to the server certificate to be effective, which gets back to the lack of TA constraints.

      • Oops, there are lots of EV policy OIDs. Same approach could work, but would require a set larger than one OID. Requiring a policy and supplying an initial policy set would be effective, however.

        • Dan Kaminsky says

          Mixed content warnings were hard enough to deal with. If moving a site to EV meant tracking down every possible include on every possible page, sites just would not be moved to EV. The technology would have *zero* adoption.

          Phishing is a much bigger problem than DV leakage. It was right that they did not let the perfect be the enemy of the good. It was *ridiculous* that they oversold it (and kudos to Sotirov and Zusman for finding hard evidence of the CAs doing just that).

          • Anonymous says

            For the initial deployment this is certainly true, but at some point achieving EV-only should be possible. It’d be nice to have an option to require EV. If users had a means of forcing the issue, motivation to move to EV-only might be generated.

  2. Dan Kaminsky says

    One of the big points in my 2008 DNS talk is that we are *already* depending on DNS for our security model. The Same Origin Policy which isolates web sites from one another effectively uses domain names as user principals. So, whether you like it or not, most of the web (and email, and database connections, and…well, everything) is already DNS dependent.

    I actually do like EV — they did a lot right with it. But there is nothing that prevents the certificate validated by DNSSEC from also containing EV data. I like the added assurance too, and there’s no reason not to support it. But at the core, we need a system that scales trust as well as DNS scales connectivity. That system is DNSSEC.

  3. I believe that, even with the constraints of the public X.509 structure, it is possible to give the end user more control, along with a more nuanced view of trust. The flat/inflexible model could be retrofitted to be more distributed and flexible.

    For example:
    Implement granular filters at a per-root-CA level. E.g. trust CNNIC only for ServerAuth of servers in .cn domains. Since a TLD doesn’t provide enough granularity, this would ideally have an online/cached whitelist/blacklist capability, to allow subscriptions.
    Within the filters, provide the concept of partial, or suspicious, trust, and provide a UI for non-technical users to see it. For example, browsers currently display extended validation certificates differently from regular certificates, effectively a three-tier model (EV cert, regular SSL cert, no cert). This could be extended to multiple levels.
    Provide a way for technically sophisticated users and organizations to bundle sets of root CAs and filters. E.g. trust anything that root CA 1 vouches for, trust root CA 2 only for ServerAuth on .cn domains and suspiciously trust it for all other domains, trust but mark suspicious anything that root CA 3 vouches for…
    From there, I would expect that privacy-oriented organizations would provide pre-made bundles, or that bundles could be built on the fly by answering a questionnaire.
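
    A hypothetical sketch of what such a bundle might look like (no browser exposes anything like this today; names and fields are made up for illustration):

        # Hypothetical per-root filter bundle.
        TRUST_BUNDLE = {
            "Root CA 1": {"trust": "full"},
            "CNNIC":     {"trust": "full", "usages": ["serverAuth"], "domains": [".cn"]},
            "Root CA 3": {"trust": "suspicious"},  # accept, but warn the user
        }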

    To drive adoption, provide a ballot box when the user profile is first generated, much like the mechanism that IE uses to let users add alternate search engines.

    • Dan Kaminsky says

      So, everybody else gets to issue whatever certificates they want, but China gets restricted?

      Are you joking?

      This is ridiculous. In what universe is this China’s first CA certificate?

  4. Adam Shostack says

    What about key persistence as a way to provide “stopgap measures on the existing CA model that can limit authority of certain political entities?”

    If the key for google.com (signed by “verisign”) is suddenly replaced by one signed by CNNIC, then some users might be able to realize something is wrong.

    Of course, there are real issues in asking real people to understand what a cert is, what a CA is, but those issues are present & endemic today.

    Adam

    (Speaking only for myself)

    • That’s a possible incremental improvement to the current model. It’s part of the functionality of the Certificate Patrol extension that I referred to above. However, as you note, this assumes people have some minimal understanding of certificates and PKI… which is not a safe assumption. The Certificate Patrol page notes:

      Revealing this and other inner workings of X.509 to end users is deemed as being too difficult for them to handle. You however are an advanced user, who wants to keep track on when certificates are updated and make sure none of the many authorities you involuntarily need to trust to have a working web browsing experience, abuses your trust allowing someone to read into your HTTPS communications by means of a subtle man in the middle attack.

      The other concern is that users become numb to security warnings and just blindly click “ok.”

    • Dan Kaminsky says

      ==
      If the key for google.com (signed by “verisign”) is suddenly replaced by one signed by CNNIC, then some users might be able to realize something is wrong.
      ==

      First of all, we need to stop lying to ourselves that users can handle such things.

      Secondly, and more importantly, certificates aren’t actually always shared or static. If I have a thousand SSL accelerators, by both security theory *and* certificate licensing I often have a thousand certs.

      Yup.

  5. Crosbie Fitch says

    Here’s an introduction to a ‘better way of independently validating a person’s identity and binding that to their public key’: Ideating Identity.

  6. Anonymous, and prefer to remain so says

    The “Decentralized and Dependent” section should have mentioned the CAcert web of trust. Quite a few web sites use their certificates, and several Linux distributions include their root certificate, although CAcert haven’t yet managed to convince any major browser makers to do so.