
How Can Government Improve Cyber-Security?

Wednesday was the kickoff meeting of the Commission on Cyber Security for the 44th Presidency, of which I am a member. The commission has thirty-four members and four co-chairs: Congressmen Jim Langevin and Michael McCaul, Admiral Bobby Inman, and Scott Charney. It was organized by the Center for Strategic and International Studies, a national security think tank in Washington. Our goal is to advise the next presidential administration on cyber-security policy. Eventually we’ll produce a report with our findings and recommendations.

I won’t presume to speak for my fellow members, and it’s way too early to predict the contents of our final report. But the meeting got me thinking about what government can do to improve cyber-security. I’ll offer a few thoughts here.

One of the biggest challenges comes from the broad and porous border between government systems and private systems. Not only are government computers networked pervasively to privately owned computers, but government also relies heavily on off-the-shelf technologies whose characteristics are shaped by the market choices of private parties. While it’s important to better protect the more isolated, high-security government systems, real progress elsewhere will depend on ordinary technologies getting more secure.

Ordinary technologies are designed by the market, and the market is big and very hard to budge. I’ve written before about the market failures that cause security to be under-provided. The market, subject to these failures, controls what happens in private systems, and in practice also in ordinary government systems.

To put it another way, although our national cyber-security strategy might be announced in Washington, our national cyber-security practice will be defined in the average Silicon Valley cubicle. It’s hard to see what government can do to affect what happens in that cubicle. Indeed, I’d judge our policy as a success if we have any positive impact, no matter how small, in the cubicle.

I see three basic strategies for doing this. First, government can be a cheerleader, exhorting people to improve security, convening meetings to discuss and publicize best practices, and so on. This is cheap and easy, won’t do any harm, and might help a bit at the margin. Second, government can use its purchasing power. In practice this means deliberately overpaying for security, to boost demand for higher-security products. This might be expensive, and its effects will be limited because the majority of buyers will still be happy to pay less for less secure systems. Third, government can invest in human capital, trying to improve education in computer technology generally and computer security specifically, and supporting programs that train researchers and practitioners. This last strategy is slow, but I’m convinced it can be effective.

I’m looking forward to working through these problems with my fellow commission members. And I’m eager to hear what you all think.

Comments

  1. In response to the very first post: “What if the government passes a law which somehow shifts the liability for security holes from the software consumer to the producer…”

    We can tweak this great idea slightly to make it even more effective. How about a law that imposes writhing, torturous death on the individual and his/her entire family (children included) who writes a line of code that does not follow security best practices?

  2. While liability in practice sounds like an easy result, I think it will create far too many incentives to cease interoperability – if the vuln is caused by interaction between systems, then dominant market players will want less openness and interoperability.

    One thing government can do very well is set default requirements — defaults rule the world, as you note in the Verizon DNS post. Security must be set high, and then users will bring it down to the level they can tolerate. Having security as a default, in email, in browsing, in home-based wireless routers, and in the default start-up account in machines would make a very large difference.
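
    To make the point concrete, here is a rough sketch of what a secure default looks like in code. It is my own illustration, not taken from the post or the papers below, and the function and its parameters are invented:

        import ssl
        import urllib.request

        def fetch(url, verify_tls=True):
            # Certificate verification is the default; a caller has to opt out
            # explicitly, so the insecure choice is visible in the code that makes it.
            context = ssl.create_default_context()
            if not verify_tls:
                context.check_hostname = False
                context.verify_mode = ssl.CERT_NONE
            return urllib.request.urlopen(url, context=context).read()

    Users (and programmers) can still bring security down to the level they can tolerate, but they have to do it deliberately; the lazy path is the safe one.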

    Two articles on security economics:
    1. vulnerabilities are an externality, the market won’t solve the problem

    L. Jean Camp and Catherine Wolfram, “Pricing Security,” Proceedings of the CERT Information Survivability Workshop, Boston, MA, October 24-26, 2000, pp. 31-39, available online at papers.ssrn.com/sol3/papers.cfm?abstract_id=894966

    2. defaults rule the security world

    Matthew Hottell, Drew Carter, and Matthew Deniszczuk, “Predictors of Home-Based Wireless Security,” Fifth Workshop on the Economics of Information Security, Cambridge, UK, 2006, available online at http://weis2006.econinfosec.org/docs/51.pdf

    and the bibliography:
    http://infosecon.net/workshop/bibliography.php

    thanks,
    Jean

  3. Ed,

    Your recommendations may be correct–I haven’t figured it out for myself. However, this post is one of my least favorites of yours because it seems so self-serving.

    Suggestion #2 obviously would help the commercial interests represented on your panel, and suggestion #3 would help you and your fellow researchers personally. Ironic that the panel would suggest government handouts to the interests represented on the panel, isn’t it?

    I’ll need to see a more detailed account of what the actual problem is and why these suggestions will do anything to solve it before I accept your recommendations.

    I have an open mind – but I’d like to see a persuasive account!

  4. Sane Scientist says

    To Mad Scientist above, please remove yourself from your delusional state. Any complex OS has vulnerabilities. Want proof? Please see secunia.org. Oh, you’ll probably say that it has fewer vulns… well if that OS really mattered for anything, the vuln research community would definitely increase those #’s quickly.

  5. Anonymous Coward says

    What about another option:

    Any product that can be purchased by the government must be subject to a source-code-level security audit (by the government or its contractors). The results of the audit must be publicly disclosed. You would also require scientific functional audits (e.g., exposing one machine to the network unprotected and an identical machine protected by the product under test, then comparing the results).
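
    A crude sketch of the functional half of that idea (the hosts and port range are placeholders, and a real audit would probe far more than open TCP ports):

        import socket

        def open_ports(host, ports, timeout=0.5):
            # Attempt a TCP connection to each port; connect_ex returns 0 on success.
            found = set()
            for port in ports:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(timeout)
                    if s.connect_ex((host, port)) == 0:
                        found.add(port)
            return found

        # Two identically configured machines: one bare, one running the product.
        baseline = open_ports("192.0.2.10", range(1, 1025))
        protected = open_ports("192.0.2.11", range(1, 1025))
        print("still exposed with the product:", sorted(baseline & protected))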

    That keeps the vendors from hiding their flaws, and the public disclosure allows for some economic penalty dealt by end-users who avoid the flaws.

  6. Hi Ed,

    I’ve offered up some thoughts at “How government can improve cyber-security,” and it appears the trackback didn’t work.

    Adam

  7. “One of the biggest challenges comes from the broad and porous border between government systems and private systems. Not only are government computers networked pervasively to privately owned computers….”

    How is that any different than the US government’s connection to and reliance on, say, the postal service and UPS? Many of the threats to cyber-security are the same as we have always encountered, only moved to the electronic realm.

    One computer virus sent by a private party may be able to take down government systems. But that’s not any different than in the real world: remember the 2001 anthrax mail attacks?

    It was never publicized, but those forced the shutdown and decontamination of embassy and consulate facilities worldwide. All because one anthrax-bearing letter was misdirected to the State Department in Virginia. Not altogether different from a worm or a trojan with a malicious payload, eh?

  8. Ned Ulbricht says

    “Seems like the economic problem can be addressed via opening the door to negligence suits.”

    Eric,

    Just as a matter of political pragmatism, I usually try to avoid being pessimistic in my public comments: Pessimism doesn’t sell well to the American public.

    But, for four and a half years now, a large number of people have been watching the fraud that has, in its latest chapter, moved into the United States Bankruptcy Court for the District of Delaware. What do we see? Well, we see that when people try to protect themselves against the damage caused by baseless accusations, public threats and lawsuits, the court system works to continue and worsen the damage.

    I have no confidence left in the U.S. Court system.

    The American courts are broken: They’re a joke. A bad joke. A very bad joke.

    It’s not funny.

  9. Eric Johnson says

    As per “Foolish Jordan,” Bruce Schneier has frequently argued that economic incentives are the most powerful forces in security.

    Seems like the economic problem can be addressed via opening the door to negligence suits. If you offer paid support for a product, you’re responsible for best efforts at informing your customers about how to keep their systems secure. Since we cannot actually force people to upgrade their systems, apply patches, etc., this is a low threshold, but I think it is possibly the highest threshold you can apply without destroying much commercial and open source software. I also note that much enterprise software works with open source software, and for the most part, shifting liability to open source developers would likely lead to less open software, and probably less secure software. So we have to be very careful that open source software is not generally affected by shifting liabilities – especially when it is offered with no support!

    We clearly cannot cut off our nose to spite our face – but then open source software is generally free, and generally without formal support contracts. That is, unless you pay for a Linux distribution, for example. Those distributions *already* are very good at security notifications, and I don’t think they’d be affected by any new regulation – they probably have more to fear from a baseless patent suit.

  10. You forgot the fourth method: require higher security via regulation, and spread the cost out amongst the entire population of consumers, including businesses and individuals. I’m not saying it is a good idea, mind you, as it is difficult to produce a decently secured but usable system without designing to a risk analysis, and there are too many agendas out there to formulate a consensus. But it certainly is a fourth method.

    Historically, I think the various solutions work best in different problem domains. If the problem is pervasive and affects everyone significantly and more or less uniformly, then the fourth solution is best. If the problem is pervasive and affects everyone, but non-uniformly, then the second method is the best, as it simply introduces an economic driver to make things better without spreading the burden across everyone directly. If the problem is pervasive and requires a major shift in paradigm, then the third method is more or less required – the Manhattan Project approach. The first method is generally a net loss; even though it is cheap and it may produce good results, I think that it is much more likely to confuse the issue by presenting the appearance that something is being done without actually doing anything.

    As regards cyber-security, I think we’re in a problem domain now where we have a pervasive problem that is going to require not just one but several changes in paradigm, from how systems are designed and built to how they are deployed. It’s time to pour money into a think tank of smart people who love this problem domain and let them beat on it for a while.

  11. Here’s the contract for the USS Indianapolis. Could similar penalty and incentive clauses apply to IT products?

  12. When you say “deliberately overpaying for security”, do you mean paying more than they would pay if secure products were commodities like any other, or paying more than the risk-adjusted value of the products? If only the first, “overpaying” would be the right choice for the government even in the absence of massive adoption.
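
    To make the distinction concrete, here is a toy calculation (every number is invented):

        breach_probability = 0.05  # assumed annual chance of a serious compromise
        breach_cost = 2_000_000    # assumed loss per incident, in dollars
        risk_reduction = 0.40      # assumed share of that risk the product removes

        expected_savings = breach_probability * breach_cost * risk_reduction
        premium = 30_000           # extra price of the more secure product
        print(expected_savings)            # 40000.0
        print(premium < expected_savings)  # True: the premium pays for itself

    If “overpaying” only means paying more than the commodity price, but still less than the expected savings, then it is just rational purchasing, whether or not anyone else adopts.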

    (This gets back, of course, to the point you often make about incentives and who captures the gains and losses from the current state of security practice.)

  13. Foolish Jordan says

    What if the government passes a law which somehow shifts the liability for security holes from the software consumer to the producer? The model is the $50 limit on consumer liability for credit card fraud; because the bank is on the hook for nearly all fraudulent credit card use, the bank puts a lot more effort into making credit cards secure (but still easy to use!) than it otherwise would.

    There are obviously downsides (is there an action you can take without downsides?) and the devil is, as always, in the details. But we do SEEM to have solved the “make the people who can actually DO something about the risk have the incentive to do it” problem better than The Market would have (although I’m not really sure if we have a proper counterfactual).