December 21, 2024

Web Certification Fail: Bad Assumptions Lead to Bad Technology

It should be abundantly clear, from two recent posts here, that the current model for certifying the identity of web sites is deeply flawed. When you connect to a web site, and your browser displays an https URL and a happy lock or key icon indicating a secure connection, the odds that you’re connecting to an impostor site, despite your browser’s best efforts, are uncomfortably high.

How did this happen? The last two posts unpacked some of the detailed problems with the current system. Today I want to explore the root cause: today’s system is based on wildly unrealistic assumptions about organizations and trust.

The theory behind the system is simple. Browser vendors will identify a set of Certificate Authorities (CAs) who are trusted to certify identities. Browsers will automatically accept any identity certificate issued by any of the trusted CAs.

The first step in making this system work is identifying some CA who is trusted by everybody in the world.

If that last sentence didn’t strike you as odd, go back and read it again. That’s right, the system assumes that there is some party who is trusted by everyone in the world — a spectacularly naive assumption.

Network engineers like to joke about the “evil bit”, a hypothetical label put on each network packet, indicating whether the packet is evil. (See RFC 3514, Steve Bellovin’s classic parody standards document codifying the evil bit. I’ve always loved that the official Internet standards series accepts parody standards.) Well, the “trusted bit” for certificate authorities is pretty much the same as the evil bit, only applied to organizations rather than network packets. Yet somehow we ended up with a design that relies on this “trusted bit”.

The reason, in part, is unclear thinking about institutional trust, abetted by the unclear language we often use in discussing trust online. For example, we tend to conflate two meanings of the word “trusted”. The first meaning of “trusted”, which is the everyday meaning, implies a judgment that a party is unlikely to misbehave. The second meaning of “trusted”, more common in military security settings, is a factual statement that someone is vulnerable to misbehavior by another. In an ideal world, we would make sure that someone was trusted in the first sense before they became trusted in the second sense; that is, we would make sure that someone was unlikely to misbehave before we made ourselves vulnerable to their misbehavior. This isn’t easy to do — and we will forget entirely to do it if we confuse the two meanings of trusted.

The second linguistic problem is the use of the passive-voice construction “A is trusted to do X” rather than the active form “B trusts A to do X.” The passive form is problematic because it doesn’t say who is doing the trusting. Consider these two statements: (A) “CNNIC is a trusted certificate authority.” (B) “Everyone trusts CNNIC to be a certificate authority.” The first statement might sound plausible, but the second is obviously false.

If you try to explain to yourself why the existing web certification system is sound, while avoiding the two errors above (confusing two senses of “trusted”, and failing to say who is doing the trusting), you’ll see pretty quickly that the argument for the current system is tenuous at best. You’ll see, too, that we can’t fix the system by using different cryptography — what we need are new institutional arrangements.

Comments

  1. The relevancy of this to the average user is very limited. The average user doesn’t even know what the URL bar is for:

    http://www.readwriteweb.com/archives/facebook_wants_to_be_your_one_true_login.php#comments

    If so many people can mistake that page for Facebook, who can really tell the difference between http://www.freedom-to-tinker.com and http://www.freedom-totinker.com?

    SSL certs are largely a money-making enterprise between browser makers and SSL cert sellers. Some evidence:

    1. Self-signed certs cause scary SSL warnings in ALL browsers–for example, in Google Chrome, a bright red screen. Is there really a need for such a warning? A page with a self-signed cert is at worst as secure as a regular non-ssl http page. Why isn’t there a bright warning red screen for all plain http pages? Why not treat self-signed certs the same way as they do plain http pages?

    2. What is the economic justification behind charging 3x more for a 3-year SSL cert than a 1-year SSL cert? The only difference is a few bits that say expires-in-2013 instead of expires-in-2011. This kind of pricing indicates strong monopoly power in the cert market.

    3. Why do browser and cert companies keep pushing the “Lock icon = secure site” paradigm? As we all know, it indicates very little about security; if your computer is already compromised, or the computer you are talking to is compromised, or the computer you’re talking to does something insecure with the information you send it, the SSL cert won’t do anything to help you.

    • Curt Sampson says

      Your point 1 is a very popular misconception, but it is dead wrong. When you accept a self-signed cert these days, there’s a reasonable probability that you’ve just given an attacker permission to monitor and change the data in your session.

      I summarize the problem in a recent RISKS digest post entitled Encryption Considered Harmful. (The name is a bit of a computing science in-joke in response to a poster who claimed that encryption without authentication was a desirable thing.)

      The lack of understanding of the need for authentication in secure communications appears to me to be one of the biggest problems now facing the security community, and the security of everyone who uses computers to communicate.

    • Curt Sampson says

      I’m sorry, I misread your point 1 above. Treating sessions encrypted with self-signed certs as we do unencrypted sessions might be a reasonable approach, but I’m not sure the benefits are worth the downsides.

      First, it can’t be treated exactly as an unencrypted session; it needs to be treated as an unencrypted session started from an encrypted session. That is to say, the user needs to be warned that he’s now accessing an “insecure” site. This is because the user, if he typed in an https URL, is expecting a secure site. As well, we really should at that point offer the user a chance to verify the fingerprint of the certificate and use it; a self-signed cert is perfectly secure if you know the fingerprint through other channels and the fingerprint of the cert you’re presented with matches that.

      The benefits? Well, a user can already fairly easily just delete the ‘s’ from the HTTPS in the URL he’s attempting to access, if an unencrypted version of the page is available. So it doesn’t help much there. The one extra bit of real convenience is when accessing a page which is served only via https. This is rare enough, I think, that we’d need to ask: is it really worth all of the extra work, and the possibility of new errors introduced by these new procedures?
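
      For the curious, here is a rough sketch of that out-of-band fingerprint check in Python, using only the standard library. The host and the expected fingerprint are made-up placeholders; a real implementation would live in the browser rather than in a script like this:

        import hashlib
        import ssl

        # Placeholders: the real fingerprint must come from a channel you already
        # trust (mail, phone, in person), not from the connection itself.
        HOST, PORT = "selfsigned.example.net", 443
        EXPECTED_SHA256 = "3f:2a:..."   # truncated placeholder, obtained out of band

        def server_cert_sha256(host, port):
            # Fetch the certificate without CA validation (so self-signed certs work),
            # then hash its DER encoding.
            pem = ssl.get_server_certificate((host, port))
            der = ssl.PEM_cert_to_DER_cert(pem)
            digest = hashlib.sha256(der).hexdigest()
            return ":".join(digest[i:i + 2] for i in range(0, len(digest), 2))

        if server_cert_sha256(HOST, PORT) == EXPECTED_SHA256.lower():
            print("Fingerprint matches the one verified out of band; proceed.")
        else:
            print("Fingerprint mismatch; do NOT proceed.")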

  2. Ed, I’m having a lot of trouble understanding your concern. Nobody’s forced to trust any CA. Internet Explorer, for one, allows a user to customize its list of trusted root CAs, and I’d be astonished if the other major browsers didn’t have similar functionality.

    Of course, the vast majority of users aren’t sophisticated enough, and/or can’t be bothered, to do that. Instead, they choose to rely on their browser vendor’s choice of default policy. That seems perfectly sensible on their part. And in fact, it seems to have worked pretty well so far, unless I’ve missed a report somewhere of a successful, damaging attack based on a popular browser’s poor choice of default root CA list.

    Now, you may not care for the particular defaults that the major browsers have chosen, or for the customization options they’ve offered users. But in neither case would that indicate a problem with the general browser security model, or with the assumptions behind it. Rather, you’d be expressing dissatisfaction with the specific tradeoffs users and browser vendors have made between security and manageability.

    Moreover, those tradeoffs would be fairly easy for browser vendors to adjust, if a substantial empirical case were to be made for the necessity of doing so. But to be frank, I don’t think you’ve even made that case, let alone the case for radically revising the underlying security model.

  3. Larry Seltzer says

    The first thing that needs to be said about the attitude in this story is that it confuses identity with trustworthiness. No site is trustworthy simply because you know what it is, and all SSL does is identify the party to a communication (well, that and encryption).

    The other point is that conventional SSL has become completely debased by dirt-cheap domain-validated SSL certificates. These certs have no identity in them other than the domain for which they are sold. So you get the lock icon on http://www.searching-your-hard-disk-right-now.com; what does that prove, other than that you’re on that web site?

    EV-SSL does change this dynamic by setting standards for identification and verification by the CA. A true identity has to be in the cert and the CA has to verify it, and it’s a non-trivial process. This is why the certificates are expensive. It’s also good that they’re expensive, because that makes them less desirable as hit-and-run fraud tools, which conventional SSL certs have too often become.

    • Larry, I agree with your observations about what SSL/X509 provides and about the practical race-to-the-bottom on DV certs. Those issues are discussed in the earlier posts, and I’d be interested in your thoughts there.

      I don’t think you’re right about this post confusing identity with trustworthiness. The post is about how these things interact. Ed is discussing “Certificate Authorities (CAs) who are trusted to certify identities.” The trust in question is about who can vouch for identity, and there’s no claim here that it goes further.

  4. “MakRober” said the following in a discussion on mozilla.dev.tech.crypto:

    It appears to me that what we have here is a clash between the concepts of trust held by two sides: in the world of crypto product architects, trust is created by a promise, and it takes a proven malfeasance for it to expire. In real world, a promise is not enough to create trust. There, it is earned by actions and can be lost by mere suspicion.

  5. Ed,

    You do an excellent job of naming the problem. When I look at my computer’s trusted root certificate list, I see 27 trusted certificates, many from organizations I’ve never heard of.

    I believe that a major part of the solution to the problem will involve remembering past interactions: If I’ve connected to my bank’s web site for years with no problems, and the next time I connect it uses the same certificate as last time, that’s a strong indicator that there is no man in the middle. On the other hand, if I connect and it presents a different certificate, I ought to be suspicious, even if some third party claims it’s OK.

    This is similar to the way human interactions work: You don’t demand to see your friend Bob’s birth certificate every time you meet him; in fact, you’ve probably never asked to see it. You trust that he is Bob because he is the same person that you knew as Bob yesterday. The trust relationship is built up over years of interactions.
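
    A minimal sketch of that remember-the-last-certificate idea (sometimes called trust-on-first-use) in Python; the file name and helpers are invented for illustration, and a real tool would also need a way to handle legitimate certificate changes:

      import hashlib
      import json
      import ssl
      from pathlib import Path

      STORE = Path("known_hosts_tls.json")   # hypothetical local history file

      def fingerprint(host, port=443):
          # Fetch the presented certificate (no CA validation) and hash its DER form.
          pem = ssl.get_server_certificate((host, port))
          return hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem)).hexdigest()

      def check_host(host):
          history = json.loads(STORE.read_text()) if STORE.exists() else {}
          seen_now = fingerprint(host)
          seen_before = history.get(host)
          if seen_before is not None and seen_before != seen_now:
              print(f"WARNING: {host} presented a different certificate than last time.")
              return False                   # don't silently replace the remembered value
          if seen_before is None:
              print(f"First visit to {host}; remembering its certificate.")
          history[host] = seen_now
          STORE.write_text(json.dumps(history))
          return True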

    For another interesting take on the issue, see Security Snake Oil by Matthew Paul Thomas. For a suggestion that even now we may not be asking the right questions, see What’s Your Threat Model by Ian Grigg.

    Jim Lyon

    • The problem with using past interactions to drive security decisions is that it does not work well in the computer world. When I first meet a person, say at a party, I do not immediately give them the keys to my house. But when I first connect to a website claiming to be my bank’s website, I have to give it full credentials in order to get access to my account. On the current Internet, there is no “courting” period with the website during which trust is built. You either trust it or you do not.

      • Joe,

        You point out that using history is not helpful when there is no history to use. That’s true, but it does not negate my point that history is useful when it does exist.

        It’s also not true that there is no courting period on the internet. Many banks limit the size, frequency and types of transactions that you can perform immediately after signing up. This is an example of slowly building trust based on history. Similarly, there are many web sites that I’ve signed up with using unique user names, email addresses, and passwords, just because I don’t know how far I can trust them.

      • John Millington says

        “when I first connect to a website claiming to be my bank’s website…”

        This is a case where the whole question of trusted introducers shouldn’t even be coming up. We don’t need CAs for banking. Everyone meets their bank in the real world at least once, and there are numerous opportunities for out-of-band communication. It’s not hard to print a key fingerprint on mailed statements, have it posted on a sign on the wall at the bank, etc.

        And not only should you be able to certify them, they ought to be able to certify you. The last time I opened a bank account, they checked my id.
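
        For what it’s worth, the fingerprint a bank might print on a statement or lobby sign is trivial to compute from its certificate. Here is a sketch that assumes the third-party Python cryptography package and a hypothetical certificate file name:

          from pathlib import Path
          from cryptography import x509
          from cryptography.hazmat.primitives import hashes

          # "bank_server.pem" is a hypothetical file name for the bank's own certificate.
          cert = x509.load_pem_x509_certificate(Path("bank_server.pem").read_bytes())
          fp = cert.fingerprint(hashes.SHA256()).hex()
          print("SHA-256 fingerprint:",
                ":".join(fp[i:i + 2] for i in range(0, len(fp), 2)).upper())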

        • Anonymous says

          Why not just dispense with all of the CA, certificates, SSL, etc. nonsense and just do this?

          Start with the USB Crypto Key: a device that appears to computers as a thumbdrive, but with these peculiar properties:

          a) Its filesystem root contains a public.key file full of numbers and two subdirectories, enc and dec, and if you reformat it these mysteriously come back immediately, though the subdirectories will be empty.

          b) if a file is written to enc, a second file mysteriously appears there with a predictably-altered file name that is the result of encrypting the first file with the private key that corresponds to the public.key.

          c) if a file is written to dec, a second file mysteriously appears there with a predictably-altered file name that is the result of decrypting the first file.

          This isn’t hard to design, physically: when attached to a host computer, it draws power from the USB connection (as many USB gadgets do) to run a small onboard microprocessor (a 6502 ought to suffice) that performs the encryption, decryption, and so on.

          How to use it? Well, imagine that when you got a bank account you got one of these, and the bank’s site used Java or Flash that identified and authenticated account holders via the key. The key would be used to encrypt and decrypt communications with the bank’s web site. Note that the private key is never visible even to the user’s PC (and any malware residing thereupon), let alone anyone else (except that the bank may have retained a copy). It’s also easy to make the transactions immune to replay attacks (sequence numbers might even suffice), and since a given bank customer has a fixed key pair for interacting with the bank, MITM attacks are right out. A phishing web site also can’t get at the private key (and the USB key may have ways of validating/invalidating the bank’s web site too).
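
          Here is a rough sketch of the underlying protocol idea, not the file-based interface described above: the private key stays on the token, each response covers a fresh challenge plus a monotonic counter, and the bank rejects replays. Signing stands in for the encrypt-with-the-private-key step; it assumes the third-party Python cryptography package, and every name is invented for illustration:

            import os
            from cryptography.exceptions import InvalidSignature
            from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

            class UsbToken:
                """Stands in for the hardware; the private key never leaves it."""
                def __init__(self):
                    self._key = Ed25519PrivateKey.generate()
                    self.public_key = self._key.public_key()   # bank records this at enrollment
                    self._counter = 0                          # monotonic counter defeats replays

                def sign(self, challenge):
                    self._counter += 1
                    msg = self._counter.to_bytes(8, "big") + challenge
                    return self._counter, self._key.sign(msg)

            class Bank:
                def __init__(self, enrolled_public_key):
                    self._pub, self._last = enrolled_public_key, 0

                def verify(self, challenge, counter, signature):
                    if counter <= self._last:                  # replayed or stale message
                        return False
                    try:
                        self._pub.verify(signature, counter.to_bytes(8, "big") + challenge)
                    except InvalidSignature:
                        return False
                    self._last = counter
                    return True

            token = UsbToken()
            bank = Bank(token.public_key)
            challenge = os.urandom(32)                         # fresh per login or transaction
            counter, sig = token.sign(challenge)
            print(bank.verify(challenge, counter, sig))        # True: accepted once
            print(bank.verify(challenge, counter, sig))        # False: replay rejected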

          In fact, one can imagine this USB key being usable at a future Wal-Mart checkout in place of a swipe card terminal. Plug into the socket, okay the transaction, wait for the response, unplug. Wal-Mart can’t get at your private key. No one looking over your shoulder can either. The key is vulnerable to physical theft, though; if it’s to be usable at stores, maybe it should have a PIN.

          Of course, if you’re going to do that, you might as well replace all debit and credit cards with something like this. Card cloning and fraud become virtually impossible without physically stealing someone’s key. And if someone loses their key, they can go to the bank to have it invalidated and replaced, just as with a lost or stolen card now.