September 20, 2020

Tax Breaks for Security Tools

Congress may be considering offering tax breaks to companies that deploy cybersecurity tools, according to an Anne Broache story. This might be a good idea, depending on how it’s done.

I’ve written before about the economics of cybersecurity. A user’s investment in security protects the user himself; and he has an incentive to pay for the efficient level of protection for himself. But each user’s security choices also affect others. If Alice’s computer is compromised, it can be used as a springboard for attacking Bob’s computer, so Alice’s decisions affect Bob’s security. Alice has little or no incentive to invest in protecting Bob. This kind of externality is common and leads to underinvestment in security.
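The underinvestment argument can be made concrete with a toy model. All the numbers below are illustrative assumptions, not figures from the post: Alice picks a security level, pays a linear cost, and her protection benefits both herself and others. She only counts her own benefit, so she stops short of the level a social planner would pick.

```python
# Toy model of the security externality described above.
# All benefit and cost numbers are invented for illustration.

def private_benefit(s):
    # Diminishing returns to Alice from her own protection.
    return 10 * (1 - 0.7 ** s)

def external_benefit(s):
    # Benefit Alice's protection confers on others (Bob et al.).
    return 6 * (1 - 0.7 ** s)

def cost(s):
    # Linear cost of security effort.
    return 1.5 * s

levels = range(0, 11)

# Alice maximizes only her own net benefit...
private_opt = max(levels, key=lambda s: private_benefit(s) - cost(s))

# ...while a social planner also counts the benefit to others.
social_opt = max(levels, key=lambda s: private_benefit(s)
                 + external_benefit(s) - cost(s))

print(private_opt, social_opt)  # Alice stops short of the social optimum
```

With these particular numbers Alice settles at level 2 while the socially optimal level is 4; a well-designed tax break would close exactly that gap.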

Public policy can try to fix this by adjusting incentives in the right direction. A good policy will boost incentives to deploy the kinds of security measures that tend to protect others. Protecting oneself is good, but there is already an adequate incentive to do that; what we want is a bigger incentive to protect others. (To the extent that the same steps tend to protect both oneself and others, it makes sense to boost incentives for those steps too.)

A program along these lines would presumably give tax breaks to people and organizations that use networked computers in a properly secure way. In an ideal world, breaks would be given to those who do well in managing their systems to protect others. In practice, of course, we can’t afford to do a fancy security evaluation on each taxpayer to see whether he deserves a tax break, so we would instead give the break to those who meet some formalized criteria that serve as a proxy for good security. Designing these criteria so that they correlate well with the right kind of security, and so that they can’t be gamed, is the toughest part of designing the program. As Bruce Schneier says, the devil is in the details.

Another approach, which may be what Rep. Lundgren is trying to suggest in the original story, is to give tax breaks to companies that develop security technologies. A program like this might just be corporate welfare, or it might be designed to have a useful public purpose. To be useful, it would have to lead to lower prices for the right kinds of security products, or better performance at the same price. Whether it would succeed at this depends again on the details of how the program is designed.

If the goal is to foster more capable security products in the long run, there is of course another approach: government could invest in basic research in cybersecurity, or at least it could reverse the current disinvestment.


  1. Another thing the government might do to improve network security would be to remove any legal redress against perpetrators of computer trespass via the Internet.

    If your computer is connected to the internet, it would be best if the law recognised this as a deliberate act of publishing the contents of the computer and all resources and information it had access to.

    Any redress is then against those persons who caused this publication.

    What this ends up doing is persuading companies not to connect computers to the Internet unless they’re CERTAIN the respective information is either secure in the open (e.g., via PKI) or OK to publish.

  2. If your computer is connected to the internet, it would be best if the law recognised this as a deliberate act of publishing the contents of the computer and all resources and information it had access to.

    Any redress is then against those persons who caused this publication.

    So in the most typical case, who is the “person” connecting a computer to the internet:

    a) the authors of the computer’s operating system
    b) the person who clicks “OK” in response to a dialog box prompting for registration and a check for updates

  3. The person who connects a computer to the internet is the person who plugs an ethernet cable between the computer and a socket that may connect to the internet (or installs a WiFi card, etc.).

    I’m suggesting that the law refuse to recognise the existence of a technical protection measure that can prevent access to information on a connected computer, i.e. that the only effective measure is PHYSICAL disconnection (including wireless).

    In this way, the buck for negligence stops at the party who connected a computer to the Internet, i.e. they do not have an excuse of taking reasonable steps to secure the information. This means that anyone who buys a TPM will want some pretty hefty indemnity to back it up.

    Security improves because hackers then have free rein to harden TPMs without compunction/repercussion, which consequently improves these TPMs.

    So, yes, a computer that is secured in the eyes of the law is one that can conclusively be demonstrated to be fully isolated from potential connection to the Internet. In other words, not very many computers are secure, and this is a reality we ignore at our peril.

  4. Dennis Eichenlaub says:

    This is another terrible idea. To paraphrase, “There are no free tax breaks”. If the government gives tax incentives for security software, there will have to be complex definitions of what comprises security software. The definitions will have to be turned into still more complex regulations. People will have to be hired to administer and interpret the regulations. Other people will try to bend the rules, which will require lawyers on both sides as well as the rest of the legal system. In the end, most of the incentives will go to software that has little marginal impact on computer security. We will spend 10x the benefit. (A lot more if you’re not optimistic like me!)

  5. This is completely off topic, feel free to ignore.

    Just letting you know I modified your Wikipedia entry in line with the “neutral point of view” policy. As it stood the page only really told one side of some of those controversies. If the last paragraph under “The SDMI challenge” and the modified “lawsuit” section are grossly inaccurate or distortionary then feel free to mention so here or edit the page yourself.

  6. I would rather force liability on the software provider that is responsible for 99% of the compromised machines. Car makers have to pay damages when they sell unsafe cars, yet computer software manufacturers can get away with selling crap and denying any responsibility for errors.

  7. Steve Purpura says:

    In general terms, I am supportive of improving incentives to enhance computer security. But, as a participant in the Congressional discussion on this topic, I’m concerned about the deadweight loss associated with the complexity of enforcement.

    One of the problems with incentives to purchase technology is that the incentive to purchase may overwhelm the incentive to implement effectively. As a general principle, I think it is more useful to provide incentives that will encourage market participants to adopt competitively effective solutions and actually implement them in an effective manner.

    For this reason, I am intrigued by the proposals that I’ve seen for an ad valorem tax on Internet pollution. I find this type of tax frightening, yet I am curious whether it would result in a meaningful change in behavior. Such proposals set up relatively low-cost monitoring points to examine backscatter and other artifacts of pollution, and then charge based on the volume of pollution transiting the egress points.

  8. A tax does not sound optimal. Obviously investment in basic research is a public good. But much of what is already known, information with no open questions, goes unapplied.

    The idea of a tax on pollution is an interesting one. Of course, the original paper on the topic suggested a market where vulnerability “owners” would have to pay. At least then the price would be visible. Again, enforcement and pricing are problematic, and the cure may be worse than the problem from the point of view of the end user. (Since you cannot secure your machine, here is a whopping bill!) Of course there are a range of papers on markets for vulnerabilities, but these are mostly about attackers and defenders (e.g., incentive modeling).

    I like the idea of an ad valorem tax of at least three kinds: one for ISPs who fail to support users in securing their own machines, one for software producers, and one for corporations who do not fit under the ISP rubric yet fail to secure their networks. That way small struggling software producers and small ISPs would take small hits, and the big polluters would be hit hard. The ISPs could be judged on the basis of how many users and how much bandwidth, the code providers on installed base, and the others on a combination. Model practices already developed (e.g., by insurers) could be applied, and other model practices would be developed as standards. Those who produce free and open source code, or who donate to the code base without commercial returns, would not be hit; but those who take the open source and market it could be. All in all, a pretty cool idea. And not corporate welfare for the DRM market, as the current proposal risks becoming!
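The tiered scheme in this last comment can be sketched in a few lines. Every rate, category name, and input below is hypothetical, invented purely to illustrate how the assessment would combine a polluter's measured pollution with the size measure each class is judged on:

```python
# Hypothetical sketch of the three-tier pollution tax described above.
# The rate, the size measures, and all inputs are illustrative assumptions.

RATE = 0.05  # tax per unit of pollution per unit of size (arbitrary)

def assessed_base(kind, users=0, bandwidth_gb=0, installed_base=0):
    """Size measure each class of polluter is judged on."""
    if kind == "isp":
        return users + bandwidth_gb          # judged on users and bandwidth
    if kind == "software":
        return installed_base                # judged on installed base
    if kind == "corporate":
        return users + bandwidth_gb + installed_base  # a combination
    raise ValueError(f"unknown polluter class: {kind}")

def pollution_tax(kind, pollution_volume, **size):
    # Small players take small hits; big polluters with a big
    # footprint pay much more for the same measured pollution.
    # An open-source donor with no commercial base assesses to zero.
    return RATE * pollution_volume * assessed_base(kind, **size)

small_isp = pollution_tax("isp", pollution_volume=10,
                          users=1_000, bandwidth_gb=500)
big_isp = pollution_tax("isp", pollution_volume=10,
                        users=1_000_000, bandwidth_gb=500_000)
print(small_isp, big_isp)
```

The point of the structure, as the comment argues, is that the same measured pollution costs a large ISP far more than a small one, while a non-commercial open-source contributor assesses to nothing.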