October 30, 2024

The Security Mindset and "Harmless Failures"

Bruce Schneier has an interesting new essay about how security people see the world. Here’s a sample:

Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.

I replied: “What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.”

Security requires a particular mindset. Security professionals – at least the good ones – see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice. They just can’t help it.

This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.

I’ve often speculated about how much of this is innate, and how much is teachable. In general, I think it’s a particular way of looking at the world, and that it’s far easier to teach someone domain expertise – cryptography or software security or safecracking or document forgery – than it is to teach someone a security mindset.

The ant farm story illustrates another aspect of the security mindset. Your first reaction to the ant story might have been, “So what? What’s so harmful about sending a package of ordinary ants to an unsuspecting person?” Even Bruce Schneier, who has the security mindset in spades, doesn’t point to any terrible consequence of misdirecting the tube of ants. (You might worry about the ants’ welfare, but in that case ant farms are already problematic.) If you have the security mindset, you’ll probably find the possibility of ant misdirection to be irritating; you’ll feel that something should have been done about it; and you’ll probably file it away in your mental attic, in case it becomes relevant later.

This interest in “harmless failures” – cases where an adversary can cause an anomalous but not directly harmful outcome – is another hallmark of the security mindset. Not all “harmless failures” lead to big trouble, but it’s surprising how often a clever adversary can pile up a stack of seemingly harmless failures into a dangerous tower of trouble. Harmless failures are bad hygiene. We try to stamp them out when we can.

To see why, consider the donotreply.com email story that hit the press recently. When companies send out commercial email (e.g., an airline notifying a passenger of a flight delay) and they don’t want the recipient to reply to the email, they often put in a bogus From address in the donotreply.com domain. A clever guy registered the domain donotreply.com, thereby receiving all email addressed to donotreply.com. This included “bounce” replies to misaddressed emails, some of which contained copies of the original email, with information such as bank account statements, site information about military bases in Iraq, and so on. Misdirected ants might not be too dangerous, but misdirected email can cause no end of trouble.

The people who put donotreply.com email addresses into their outgoing email must have known that they didn’t control the donotreply.com domain, so they must have thought of any reply messages directed there as harmless failures. Having gotten that far, there are two ways to avoid trouble. The first way is to think carefully about the traffic that might go to donotreply.com, and realize that some of it is actually dangerous. The second way is to think, “This looks like a harmless failure, but we should avoid it anyway. No good can come of this.” The first way protects you if you’re clever; the second way always protects you.
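
One concrete way to take the second, more conservative path is to keep no-reply addresses on a domain the sender actually controls, so that replies and bounces come back to a mailbox you own rather than to whoever registers a bogus domain. Here is a minimal sketch in Python, with illustrative addresses (example.com stands in for the sender’s own domain; it is also an RFC 2606 reserved name, so it is safe to use in examples):

    from email.message import EmailMessage

    def build_notification(recipient: str, body: str) -> EmailMessage:
        """Build a notification whose reply and bounce traffic stays on our own domain."""
        msg = EmailMessage()
        # example.com stands in for a domain the sender controls; bounces and
        # misguided replies therefore come back to us, not to a stranger who
        # happens to register a bogus "donotreply" domain.
        msg["From"] = "no-reply@example.com"
        msg["Reply-To"] = "customer-care@example.com"  # a mailbox someone actually reads
        msg["To"] = recipient
        msg["Subject"] = "Flight delay notification"
        msg.set_content(body)
        return msg

    print(build_notification("passenger@example.org",
                             "Your flight is delayed by 45 minutes."))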

Which illustrates yet another part of the security mindset: Don’t rely too much on your own cleverness, because somebody out there is surely more clever and more motivated than you are.

Comments

  1. Maybe we should simply call this mindset something other than the “security mindset”. Possibly the “what can go wrong mindset” or the “how can I misuse this mindset”. I’m sure someone of a more creative mindset can come up with a better name than I can.

    Good engineers don’t have much trouble figuring out how to make something work. They have trouble figuring out how to make sure it works the same way every time. Average engineers struggle to make things work at all. Good engineers already have this mindset (or have been taught it) and, I’m sure, could apply it to security just as easily as they apply it to engineering.

    Nonetheless, it does help explain the funny looks I get sometimes when I explain a security or an engineering problem to someone who just doesn’t understand the mindset, whatever you want to call it.

  2. You haven’t convinced me. All of the things you mention as being particular to security experts seem to me to be just the same things that good technical people of all types will do – and bad ones of all types (including security people) will fail to do.

    Of course security problems are often interesting engineering problems, and so good engineers and scientists will be attracted to them (e.g. Feynman’s safecracking exploits), but it’s wrong to try to put them on a pedestal above other kinds of engineering.

    When I am trying to guarantee the integrity of a system, I will in fact model the failure modes as if they were deliberate attacks, even though there is no intelligent agency doing the attacking. That is simply the best way to do the job.

  3. As other commenters have mentioned, good engineers definitely think about how things can go wrong, and even about how “harmless failures” can add up to real problems. The big difference is that they think about failures by modeling them as *systems*, rather than as *engineers*.

    Consider hackers, for instance. They’re basically engineers – often very good ones – who simply design hacks on systems, rather than systems themselves. And they generally have strikingly little insight into how to design secure systems. After all, they think of their adversaries as systems – static, well-understood objects – and aren’t used to thinking of them as the products of ingenious engineering.

    The hard part of security thinking, then, seems to be the ability to flip back and forth smoothly between engineering systems and engineering attacks on them. The faster and more skillfully you can do that, the faster and more thoroughly you can identify the underlying security threats to a given system, and how best to protect against them.

    Good programmers will indeed seek to identify and eliminate possible failure modes, but in many cases it’s considered adequate to reduce the probability of failure such that it just plain isn’t going to happen. As a simple example, some embedded document formats begin and end each attachment with a random sequence of bytes. The program that parses the containing document locates the initial sequence and then finds where that sequence occurs again. If the program randomly generates a 32-byte sequence, the likelihood of that sequence appearing within a document is essentially nil. I wouldn’t be surprised if a number of such programs simply assume that the random sequence won’t appear in the file being embedded, without an actual check to ensure that’s the case.

    To a programmer, such a vulnerability is not a real concern. If a program generates a 32-byte string using a decent random generator, it’s unlikely that string will exist anywhere in the universe, much less within the document being encoded. On the other hand, suppose that the string is generated using some unique identifier for the machine processing it along with the time of day and a counter held by that machine. Someone with access to that information may be able to predict what ‘random’ strings will be generated, and thus able to create files that include the strings that will ‘enclose’ them. The probability of an unexpected match may be changed from 1:2^256 to 1:100 or even less (see the sketch at the end of this comment).

    Another factor that needs to be considered is that security experts can often snoop out ‘out of band’ methods of getting information they’re not supposed to access. For example, if an attempt to access a non-existent file reports failure more quickly than an attempt to access a file to which one doesn’t have access, one can use this information to discover what files exist even if their existence is supposed to be confidential. A typical programmer would hardly see as a problem the fact that a program sometimes runs faster and sometimes runs slower, but a security expert might exploit such things (see the timing sketch at the end of this comment).
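
    A minimal sketch of the random-boundary approach described above, assuming a made-up container format and Python’s secrets module; the cheap membership check is what removes the “it just won’t happen” assumption:

        import secrets

        def make_boundary(payload: bytes, nbytes: int = 32) -> bytes:
            """Pick an unpredictable delimiter and verify it is absent from the payload."""
            while True:
                boundary = secrets.token_hex(nbytes).encode("ascii")
                if boundary not in payload:   # the check the comment says is often skipped
                    return boundary

        def embed(payload: bytes) -> bytes:
            b = make_boundary(payload)
            return b"--" + b + b"\n" + payload + b"\n--" + b + b"--"

        print(embed(b"attachment contents, possibly attacker-supplied")[:60])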
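
    And a small measurement harness for the timing point above; whether any gap is actually observable depends on the operating system and filesystem, and the paths here are just examples:

        import time

        def avg_open_time(path: str, attempts: int = 200) -> float:
            """Average time for open() attempts that are expected to fail."""
            total = 0.0
            for _ in range(attempts):
                start = time.perf_counter()
                try:
                    open(path, "rb").close()
                except OSError:
                    pass
                total += time.perf_counter() - start
            return total / attempts

        # If "no such file" consistently returns faster than "permission denied",
        # response time alone reveals which confidential paths exist.
        print("missing :", avg_open_time("/no/such/path/at/all"))
        print("existing:", avg_open_time("/etc/shadow"))  # unreadable to ordinary users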

  5. Richard, the point he’s making is that programmers think about one way that things can go wrong, but security experts will look at the same failure modes in a very different way. Both groups of people will often identify the same potential failures. The difference is that the security group will very often be more creative in the specific ways they see the failures being exploited, both intentionally and unintentionally.

    You are correct to the extent that *good* programmers, engineers, etc., will often identify the same *set* of potential failures as a good security person. However, the security person will most often come up with ways of exploiting the failure, or ways that a single failure could mushroom, that most good programmers (engineers, etc.) would not think of. A good security person will also come up with more interesting consequences of the failures.

  6. Sorry – why exactly am I wrong?

    I can’t see it in the post anywhere.

    My point is that the difference between “security” and “engineering” is just that in security there is a human adversary trying to find the flaws in your system, whereas in engineering it is just the random behaviour of those parts of the system that you don’t control; however, the correct way to deal with those problems is the same in both cases.
    The fact that a lot of engineers don’t do it very well is not relevant.

  7. Richard, you’re wrong. Read the post again to understand.

  8. Actually the security mindset is just the same as a good maths/scientific/engineering/programming mindset. In particular anyone who deals with concurrent asynchronous systems knows that chance (given a few billion events) is more devious than any human adversary.

    The problem is that the worlds of business and politics don’t think that way.

  9. Sending ants is no big deal, no. However, there’s a similar story that shows the real danger. A few years ago, some infamous spammer told reporters “if you don’t want it, just delete it”. In response, people went to hundreds of web sites and signed him up to receive catalogs from companies; he was absolutely inundated with snail mail, just like most of us are inundated by spam. Granted the guy deserved it, but the same tactic, now exposed, can be used against the innocent…

  10. It struck me that the security mindset and the QA mindset have a great deal in common.

    While I’m a developer, I have enough of that mindset that I often say “paranoia is a virtue.”

  11. I think Bruce Schneier shows more of a programmers’ (or IT security experts’) mindset than the general “security mindset” in that special case: the possibility of ants going somewhere unexpected is an archetype of an “undefined state” – the real world’s equivalent of a software bug. The difference is that the real world is probably not going to crash from it.

  12. I think I have yet another mindset: the privacy mindset. The first thing that came to my mind when I read the ant story was, “Those dirty rascals at the ant farm company make me jump through hoops so that they can get my mailing address, which they undoubtedly will sell, and use to send me spam for decades to come.”

    In the case of ants, perhaps their methods are sound, because the ant farm product might sit on a store shelf for years, and any ants in it would die, so it’s better to mail the ants once a customer has bought the ant farm.

    Nevertheless, there are all sorts of schemes like this these days that try to get your personal information when they don’t really need it. Ever go to Radio Shack and have the clerk ask for your phone number? I just want to pay cash and have a nice anonymous transaction, thank you very much.

  13. I’m just surprised at how few people know about RFC 2606.

  14. Actually, I think the mindset of people who put “donotreply.com” in a “From” address is that people actually won’t reply to it, ergo, problem solved. It simply doesn’t occur to them that (a) people may reply even when you tell them not to, and (b) an automated system may use that address without consideration for the meta-level message contained within.

    To me, that represents a third level of trust that sits above the security mindset *and* the engineer mindset. Of course, the source of that mindset probably comes as much from a lack of education about how things work as from a lack of education about what could happen.