November 22, 2004

Microsoft: No Security Updates for Infringers

Microsoft, reversing a previous decision, says it will not provide security updates to unlicensed users of Windows XP. Microsoft is obviously entitled to do this if it wants, since it has no obligation to provide product support to people who didn’t buy the product in the first place. A more interesting question is whether this was the best decision from the standpoint of Microsoft and its existing customers. The answer is far from obvious.

Before I go further, let me make two assumptions clear. First, I’m assuming Microsoft has a reliable way to tell which copies of Windows are legitimate, so that it never mistakenly denies updates to legitimate customers. Second, I’m assuming Microsoft doesn’t care about the welfare of infringers and feels no obligation at all to help them.

Helping infringers could easily hurt Microsoft’s business, if doing so makes infringement a more attractive option. If patches are one of the benefits of buying the product, then people are more likely to buy; but if they can get patches even without buying, some will choose to infringe, thereby costing Microsoft sales.

On the other hand, if there is a sizable population of unpatched infringing copies out there, this hurts Microsoft’s legitimate customers, because an unpatched infringing machine might infect a legitimate customer’s machine. A large reservoir of unpatched (infringing) machines will aggravate an already serious malware problem, by making Windows an even more attractive target for malware authors, and by speeding the spread of new malware.

But wait, it gets even more complicated. If infringing copies are susceptible to existing malware, then some of the bad guys will be content to reuse old malware, since there is still a population of (infringing) machines it can attack. But if infringing copies are patched, then the bad guys may create more new malware, which existing patches don’t stop; and this new malware will affect legitimate and infringing copies alike. So refusing to update infringing copies may leave the infringers as decoys who draw fire away from legitimate customers.
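To make the reservoir effect concrete, here is a toy worm-spread simulation. It is a sketch only: every number is invented, it models just the reservoir mechanism from the paragraphs above (not the decoy effect), and the run function and all of its parameters are my own illustrative assumptions, not anything Microsoft measures.

    # Toy SIR-style worm simulation (all parameters invented).
    # Compares how many legitimate machines ever get infected when
    # infringing machines do, or do not, receive patches.

    def run(infringers_get_patches,
            pop_legit=1_000_000,    # licensed machines
            pop_infringe=300_000,   # unlicensed machines
            contact_rate=2.0,       # probes per infected machine per day
            patch_day=5,            # day the patch ships
            patch_rate=0.20,        # share of patchable machines patched per day
            days=60):
        """Return how many legitimate machines were ever infected."""
        s_l, i_l = pop_legit - 1.0, 1.0        # worm starts on one machine
        s_i, i_i = float(pop_infringe), 0.0
        total = pop_legit + pop_infringe
        ever_infected = 1.0
        for day in range(days):
            # chance that a given susceptible host gets probed today
            p_hit = min(1.0, contact_rate * (i_l + i_i) / total)
            new_l, new_i = s_l * p_hit, s_i * p_hit
            s_l, i_l = s_l - new_l, i_l + new_l
            s_i, i_i = s_i - new_i, i_i + new_i
            ever_infected += new_l
            if day >= patch_day:
                s_l *= 1 - patch_rate          # patched machines become immune
                i_l *= 1 - patch_rate          # ...and infected ones get cleaned
                if infringers_get_patches:
                    s_i *= 1 - patch_rate
                    i_i *= 1 - patch_rate
        return ever_infected

    for policy in (True, False):
        print(f"infringers patched={policy}: "
              f"{run(policy):,.0f} legitimate machines ever infected")

Under these made-up parameters, leaving the infringing machines unpatched creates a standing reservoir of infected hosts that keeps probing legitimate machines before they manage to patch, so more of them are ever infected. Change the parameters and the gap changes; the model says nothing about which effect, reservoir or decoy, dominates in reality.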

There are even more factors in play, but I’ve probably written too much about this already. The effect of all this on Microsoft’s reputation is particularly interesting. Ultimately, I have no idea whether Microsoft made the right choice. And I doubt that Microsoft knows either.

Regulating Stopgap Security

I wrote previously about stopgap security, a scenario in which there is no feasible long-term defense against a security threat, but instead one resorts to a sequence of measures that have only short-term efficacy. Today I want to close the loop on that topic, by discussing how government might regulate fields that rely on stopgap security. I’ll assume throughout that government has some reason (which may be wise or unwise) to regulate, and that the regulation is intended to support those deploying stopgap measures to defend their systems.

The first thing to note is that stopgap areas are inherently difficult to regulate, as stopgap security causes the technological landscape to change even faster than usual. The security strategy is to switch rapidly between short-term measures; and, because adversaries tend to defeat whole families of measures at once, the measures adopted tend to vary widely over time. It is very difficult for any regulatory scheme to keep up. In stopgap areas, regulation should be viewed with even more skepticism than usual.

If we must regulate stopgap areas, the regulation must strive to be technology-neutral. Regulation that mandates one technical approach, or even one family of approaches, is likely to block necessary adaptation. Even if no technology is mandated, regulations tend to encode technological assumptions, in their basic structure or in how they define terms; and these assumptions are likely to become invalid before long, making the regulatory scheme fit the defensive technology poorly.

One of the rules for stopgap security technology is to avoid approaches that impose a long-term cost in order to get a short-term benefit. The same is true for regulation. A regulatory approach should not impose long-term costs (such as compliance costs) in order to bolster a technical approach that offers only short-term benefits. Any regulation that requires all devices to do something, for the indefinite future, would therefore be suspect. Equally suspect would be any regulation that creates compatibility barriers between compliant and non-compliant devices, since the incompatibility would frustrate attempts to stop using the compliant technology once it becomes ineffective.

Finally, it is important not to shift the costs of a security strategy away from the people who decide whether to adopt that strategy. Stopgap measures carry an unusually high risk of having a disastrous cost-benefit ratio; in the worst case they impose significant long-term costs in exchange for limited, short-term benefit. If the party choosing which stopgap to use is also the party who has to absorb any long-term cost, then that party will be suitably cautious. But if regulation shifts the potential long-term cost onto somebody else, then the risk of disastrous technical choices gets much larger.

By this point, alert readers will be thinking “This sounds like an argument against the broadcast flag.” Indeed, the FCC’s broadcast flag violates most of these rules: it mandates one technical approach (providing flexibility only within that approach), it creates compatibility barriers between compliant and non-compliant devices, and it shifts the long-term cost of compliance onto technology makers. How can the FCC have made this mistake? My guess is that they didn’t, and still don’t, realize that the broadcast flag is only a short-term stopgap.

Stopgap Security

Another thing I learned at the Harvard Speedbumps conference (see here for a previous discussion) is that most people have poor intuition about how to use stopgap measures in security applications. By “stopgap measures” I mean measures that will fail in the long term, but might do some good in the short term while the adversary figures out how to work around them. For example, copyright owners use simple methods to identify the people who are offering files for upload on P2P networks. It’s only a matter of time before P2P designers deploy better ways of shielding their users’ identities, so that today’s identification methods no longer work.

Standard security doctrine says that stopgap measures are a bad idea – that the right approach is to look for a long-term solution that the bad guys can’t defeat simply by changing their tactics. Standard doctrine doesn’t demand an impregnable mechanism, but it does insist that a good mechanism must not become utterly useless once the adversary adapts to it.

Yet sometimes, as in copyright owners’ war on P2P infringement, there is no good solution, and stopgap measures are the only option you have. Typically you’ll have many stopgaps to choose from. How should you decide which ones to adopt? I have three rules of thumb to suggest.

First, you should look carefully at the lifetime cost of each stopgap measure, compared to the value it will provide you. Since a measure will have a limited – and possibly quite short – lifetime, any measure that is expensive or time-consuming to deploy will be a loser. Equally unwise is any measure that incurs a long-term cost, such as a measure that requires future devices to implement obsolete stopgaps in order to remain compatible. A good stopgap can be undeployed fully once it has become obsolete.
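As a crude way to apply this first rule, one can score each candidate by its expected net value: benefit per month, times expected lifetime in months, minus deployment cost and any residual cost that lingers after the measure goes obsolete. A minimal sketch, with every name and figure invented for illustration:

    # Back-of-the-envelope stopgap comparison (all figures invented).
    # net = benefit/month * expected lifetime - deployment cost
    #       - residual cost that remains after the measure is obsolete

    candidates = [
        # (name, benefit $/mo, lifetime mo, deploy $, residual $)
        ("cheap, fully removable measure",    10_000, 6,  20_000,       0),
        ("hardened but expensive measure",    12_000, 8, 150_000,       0),
        ("measure baked into future devices", 10_000, 6,  30_000, 200_000),
    ]

    for name, benefit, lifetime, deploy, residual in candidates:
        net = benefit * lifetime - deploy - residual
        print(f"{name}: net value ${net:,}")

With these numbers, only the cheap, removable measure pays for itself: the hardened one never earns back its deployment cost, and the baked-in one is sunk by the cost it imposes after it stops working. That is the rule in miniature: lifetime and exit cost dominate the comparison.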

Second, recognize that when the adversary adapts to one stopgap, he may thereby render a whole family of potential stopgaps useless. So don’t plan on rolling out an endless sequence of small variations on the same method. For example, if you encrypt data in transit, the adversary may shift to a strategy of observing your data at the destination, after the data has been decrypted. Once the adversary has done this, there is no point in changing cryptographic keys or shifting to different encryption methods. Plan to use different kinds of tactics, rather than variations on a single theme.

Third, remember that the adversary will rarely attack a stopgap head-on. Instead, he will probably work around it, by finding a tactic that makes it irrelevant. So don’t worry too much about how well your stopgap resists direct attack, and don’t choose a more expensive stopgap just because it stands up marginally better against direct attacks. If you’re throwing an oil slick onto the road in front of your adversary, you needn’t worry too much about the quality of the oil.

There are some hopeful signs that the big copyright owners are beginning to use stopgaps more effectively. But their policy prescriptions still reflect a poor understanding of stopgap strategy. In the third and final installment of my musings on speedbumps, I’ll talk about the public policy implications of the speedbump/stopgap approach to copyright enforcement.

Cyber-Security Research Undersupported

Improving cybersecurity is supposedly a national priority in the U.S., but after reading Peter Harsha’s report on a recent meeting of the President’s Information Technology Advisory Committee (PITAC), it’s clear that cybersecurity research is severely underfunded.

Here’s a summary: The National Science Foundation has very little security research money, enough to fund at most 40% of the research that NSF itself judges worthy of support. Security research at DARPA (the Defense Department’s research agency) is gradually being classified, locking out many of the best researchers and preventing the application of research results in the civilian infrastructure. The Homeland Security Department is focusing on very short-term deployment issues, to the near-exclusion of research. And corporate research labs, which have shrunk drastically in recent years, do mostly short-term work. There is very little money available to support research with a longer-term (say, five- to ten-year) payoff.

Witty Worm Analysis

Peter Harsha at CRA points to an interesting analysis, by Colleen Shannon and David Moore of CAIDA, of the recent Witty worm.