December 5, 2024

What's the Cybersecurity Czar's Job?

The sudden resignation of Amit Yoran, the Department of Homeland Security’s “Cybersecurity Czar”, reportedly due to frustration at being bureaucratically marginalized, has led to calls to upgrade the position from the third- or fourth-level administrator billet that Yoran held to a place of real authority in the government. If you’re going to call someone a czar, you at least ought to give him some power.

But while we consider whether the position should be upgraded, we should also ask what the cybersecurity czar should be doing in the first place.

One uncontroversial aspect of the job is to oversee the security of the government’s own computer systems. Doing this will require the ability to knock heads, because departments and offices won’t want to change their practices and won’t want to spend their budgets on hiring and retaining top quality system administrators. That’s one good argument for upgrading the czar’s position, perhaps affiliating it with a government-wide Chief Information Officer (CIO) function.

A harder question is what the government or its czar can do about private-sector insecurity. The bully pulpit is fine but it only goes so far. What, if anything, should the government actually do to improve private-sector security?

Braden Cox at Technology Liberation Front argues that almost any government action will do more harm than good:

    In an article I wrote last year when Yoran was first appointed, I argued that the federal government has a role to play in cybersecurity, but that it should not be in the business of regulating private sector security. Mandated security audits, stringent liability rules, or minimum standards would not necessarily make software and networks more secure than would a more market-based approach, though it would surely help employ more security consultants and increase the bureaucracy and costs for industry.

Certainly, most of the things the government can do would be harmful. But I don’t see the evidence that the market is solving this problem. Despite the announcements that Microsoft and others are spending more on security, I see little if any actual improvement in security.

There’s also decent evidence of a market failure in cybersecurity. Suppose Alice buys her software from Max, and Max can provide different levels of security for different prices. If Alice’s machine is compromised, she suffers some level of harm, which she will take into account in negotiating with Max. But a break-in to Alice’s machine will turn that machine into a platform for attacking others. Alice has no incentive to address this harm to others, so she will buy less than the socially optimal level of security. This is not just a theoretical possibility – huge networks of compromised machines do exist and do sometimes cause serious trouble.
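
To make the under-provision concrete, here is a minimal numerical sketch of the argument. The names, cost curves, and numbers are hypothetical, chosen only to show how ignoring the harm to third parties leads Alice to buy less security than would be socially optimal.

```python
# Hypothetical illustration of the under-provision argument above.
# All numbers are made up; only the shape of the result matters.

security_levels = [0, 1, 2, 3, 4, 5]                          # levels Alice can buy from Max
price = {s: 10 * s for s in security_levels}                   # what Max charges for level s
harm_to_alice = {s: 100 / (1 + s) for s in security_levels}    # Alice's expected loss from break-ins
harm_to_others = {s: 300 / (1 + s) for s in security_levels}   # expected loss to third-party attack victims

def private_cost(s):
    """Cost Alice actually weighs when negotiating with Max."""
    return price[s] + harm_to_alice[s]

def social_cost(s):
    """Private cost plus the harm a compromised machine inflicts on others."""
    return private_cost(s) + harm_to_others[s]

alice_choice = min(security_levels, key=private_cost)
social_optimum = min(security_levels, key=social_cost)

print("Level Alice chooses:   ", alice_choice)    # 2 with these made-up curves
print("Socially optimal level:", social_optimum)  # 5 with these made-up curves
```

With these made-up curves Alice stops at level 2, while a buyer who also counted the harm to others would go all the way to level 5; the gap is the externality.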

Of course, the existence of a problem does not automatically imply that government action is required. Is there anything productive the government can do to address this market failure?

I can see two possibilities. The first approach is for the government to use its market power, as a buyer of technology, to try to nudge the market in the right direction. Essentially, the government would pay for compromise-resistance, beyond its market incentive to do so, in order to bolster the market for more compromise-resistant software. For example, it might, in deciding what to buy, try to take into account the full social cost of potential break-ins to its computers. Exactly how to make this happen, within a budget-conscious bureaucracy, is a challenge that I can’t hope to address here.

The second approach government might take is to impose some form of liability, on somebody, for the types of security breaches associated with this market failure. Liability could be placed on the user (Alice, in our example above) or on the technology vendor. There has been lots of talk about the possibility of liability rules, but no clear picture has emerged. I haven’t studied the issue enough to have a reliable opinion on whether liability changes are a good idea, but I do know that the idea should not be dismissed out of hand.

What’s clear, I think, is that none of these possibilities require a “czar” position of the sort that Yoran held. Steps to improve cybersecurity inside the government need muscle from a CIO type. Changes to liability rules should be studied, but if they are adopted they won’t require government staff to administer them. We don’t need a czar to oversee the private sector.

Comments

  1. Dan,

    Magicians believing in this complete and utter nonsense seem to be able to keep our global financial markets working from day to day.

    I refuse to answer your question because we can’t agree on principles. My principle is free and competitive markets with limited govt oversight to iron out incorrect incentives. Your principle seems to be waving your hands in the air and claiming nothing can be done.

  2. Whoops–that was me, above.

  3. Steve, you have no “solution”. You’ve as much as admitted that by repeatedly refusing to answer my question. To reiterate: the commercial server software market is as fiercely competitive as any I know of. Yet server software security is abysmal. Thus your repeated assertion that a more competitive software market in those sectors where there is a single dominant vendor will somehow magically make all commercial software more secure, is obviously complete and utter nonsense. Why are you having so much trouble just conceding this self-evident point, and being done with it?

  4. Dan,

    I must applaud your bravery for calling belief in competitive markets “an imaginary monster used to frighten children”. It takes a special type of self-confidence to make that statement and then accuse someone of ranting incoherently.

    We disagree on who is absorbing the costs of failures in endpoint security and its impact on the market. And, as far as solutions, we also seem to disagree.

    You seem to think we need to redesign the Internet to solve security problems. I think we need to assure competitive markets and address inappropriate market incentives. One example of a misguided incentive that I gave is that ISPs earn profit from DoS attacks, so they are not incented to work to stop them.

    My solution applies to all aspects of the software industry, and other industries as well. Does yours?

  5. Steve, I wish you’d at least try to address my points, rather than just ranting incoherently about your personal bugbear. To repeat: the commercial server software market is extremely competitive. Yet nobody would dare suggest (or would you?) that the server security situation is anything but dire. How do you account for this state of affairs? And whatever your explanation, why doesn’t it apply equally well to other sectors of the software industry?

  6. We have discussed many things over the course of this thread. One is my belief that customers themselves are being hurt by the insecurity of the software they buy and use, and are powerless to choose an alternative product because of one seller’s monopoly position.

    According to MSFT testimony to Congress, the overwhelming majority of the costs being incurred as a result of insecure commercial software are being borne by MSFT’s customers. I’ll dig this reference up when I have more time, but for the time being, let’s just assume that is true.

    MSFT’s customers are primarily locked into a security model that MSFT designs. The fact that you believe there is a thriving market in products that fit into this model, yet do not address the problem, is not evidence that the monopoly market solution is working. Nor is it evidence that the monopoly market is generating a solution as fast as a competitive market would.

    Furthermore, I’ll repeat that as a principle, we either believe or we don’t believe that more competitive markets will better address customer needs. Let’s have a competitive market for security models and see what happens. What do you have to lose?

  7. Steve, you’ve completely switched the argument from “market failure due to externalities” to “market failure due to monopoly”. That is, you’re arguing that customers themselves are being hurt by the insecurity of the software they buy and use, and are powerless to choose an alternative product because of one seller’s monopoly position.

    Now, I may be wrong about this, but my impression is that the overwhelming majority of the costs being incurred as a result of insecure commercial software are being borne by businesses whose servers are disrupted by various types of attack (DoS, worms, targeted break-ins, and others). Moreover, it should be obvious that the business server market is anything but a monopoly market. Hence your argument for “market failure due to monopoly” is completely inapplicable there.

    As for the desktop, I would guess that the main costs of insecure desktop software are due to phishing attacks, spam and viruses. In all of these cases, there is a thriving market in products (alternative browsers/email clients, spam blockers and anti-virus software) that might address the problem. In the case of phishing, I am unaware of any commercial browser or email client that does better than the dominant one at protecting customers. In the latter two cases, there is, again, obviously no monopoly to blame if they’re not doing their job.

    So if you believe that the problem of insecure software is due to a monopoly, then you also have to believe (1) that server software is much more secure than desktop software; (2) that desktop software is where the real security problem resides; and (3) that the security problems besetting desktop PCs are not solvable by third-party desktop security products, despite the thriving market in them. Do you really believe all these things?

  8. Dan, by definition, a monopoly market is not a competitive market. My point is that by creating incentives to address the lack of a competitive market, smart people will work at addressing customer needs. It is common for MSFT employees to make the claim that we are making progress, but making some progress is irrelevant. My local monopoly phone company also claims we’re making progress on the move to lower cost, higher bandwidth home internet connections. But does any person believe that the price/performance of DSL would have improved as fast if cable modems hadn’t had a chance to thrive? And if neighborhood wi-fi weren’t looming on the horizon?

    As a principle, we either believe or we don’t believe that more competitive markets will better address customer needs.

    Now, in answer to your question about lawsuits against customers for P2P. My answer is simple: fear of a lawsuit against an individual is not the right incentive to stop music theft. It isn’t working. It wasn’t a govt policy decision to introduce competition into a marketplace. It was an industry solution to create fear, uncertainty, and doubt among its customers with the capability to pay for music. Similarly, I would not be ‘for’ legislation that gives MSFT the right to sue customers for not using Windows Update.

  9. Steve, you keep harping on the assertion that today’s PC users should not be held liable for the damage their machines cause, because they’re either too ignorant or too powerless to prevent it. But that’s clearly a red herring, because even in the case of copyright-violating file-sharing–which is obviously quite knowing, voluntary and intentional–you admit that you “don’t think prosecuting [perpetrators] will have any effect”, and that you “view the exercise as a huge waste of time and energy.” Why, then, would it be any more effective to punish knowing, intentional DoS facilitators? And if it wouldn’t be, then why does the fact that today’s DoS facilitators may be unknowing or unintentional make one bit of difference?

    You may be right, for all I know, that the susceptibility of the Internet to DoS is not a matter of architecture, but merely a matter of altering the business relationships among ISPs. If so, then I am delighted to hear that the problem is easier than I thought, and it’d be great to see it quickly and easily fixed. But if you eliminate the DoS problem, then the whole supposed “market failure” caused by externalities associated with insecure commercial software disappears completely. That’s not to say that insecure software is no longer a problem, of course–only that the cost of the problem is borne entirely by the customers themselves, who are free to choose a preferable alternative if the market offers one.

    And customers are demanding more secure software, and the market is responding. Today’s software is more secure than it has been in the past, and it will likely get more secure in the future–although possibly not as secure as you would like, because customers have their own cost/convenience/functionality/security tradeoffs. I see absolutely no evidence of market failure, unless “market failure” is defined as not giving Steve what he wants, now, at the price he wants to pay for it.

  10. Dan, I’ll try to answer your questions, although I don’t know that they are all related to the point.

    1) Assuming the file-sharers are capable of understanding the law and they willfully violate the law, I don’t care if individual file-sharers are prosecuted for copyright violation. I don’t think prosecuting them will have any effect. And I view the exercise as a huge waste of time and energy. So I try not to spend much time thinking about it.

    2) I do not believe that people whose PCs are captured by botnets today should be held liable for the damage those PCs cause. Given my experience at MSFT, if I sat on a jury, I would *never* find a typical citizen responsible for failing to secure their PC connected to the Internet. IMHO, MSFT has misled customers about the safety of their systems for quite some time and it would be difficult for me to hold any citizen accountable for receiving bad information.

    3) I will not take the bait and pick a specific solution for securing computers attached to the Internet. If your point is that it is impossible to secure endpoints on the Internet, then why are we wasting time doing it? If you concede that it is possible to secure endpoints well enough that the costs of damage would significantly drop then let’s get there by providing market incentives to reach that end and let the market figure out which solution it wants.

    4) As for “changing the network”, one of the problems you mention (DoS attacks) is at least partially caused by a market failure. The failure is that Internet Service Providers are financially incented to transit big DoS attacks. If the incentive were taken away, I suspect we would see a significant reduction in the number and/or severity of these attacks.

    Again, I think the problem here is that the market is not being sufficiently monitored for the correct incentives. While I realize that acting quickly to correct market failures is not in the best interests of the market, I think 5-6 years or so is enough time to wait.

  11. Steve, I don’t understand your position. Are you arguing that individual file-sharers should be prosecuted for copyright violation? And if so, do you think that there is a realistic hope of (a) this happening, and (b) it having an appreciable effect on file-sharing?

    Or are you arguing that they shouldn’t be prosecuted for copyright violation? If not, then what makes future facilitators of DoS attacks any different?

    More to the point, do you believe that people whose machines are captured by botnets today ought to be held directly liable for the damage their PCs cause? If not, then what is it about the act of buying a PC, setting it up insecurely, connecting it to the Internet, and allowing it to be controlled by a DoS attacker that will somehow become radically different the day (certain specific) software products (though presumably not all of them) become (potentially, if properly configured, somewhat more) secure?

    And if so (and in the highly unlikely event that liability imposition on individual PC owners becomes a widely enforced, credible deterrent), then which reaction do you think is more likely–that users will stop using popular commercial software, or that they will start looking for some inexpensive software, hardware or ISP services that do (albeit clumsily, inconveniently, and only partially effectively) the kind of traffic control that a properly designed network would do for them?

    I’d love to see the security of commercial software improve. I believe that security holes in commercial software cause enormous damage to its users today, and I’m personally doing my part to reduce the number and severity of those holes. But I simply don’t buy the “externalities” argument. The DoS attacks that are being blamed on commercial software holes would be just as easy to mount if all commercial software were ironclad. (In fact, I suspect they’d be roughly as common as they are today, for reasons I omit here for brevity, but could elaborate on if asked.) On the other hand, these attacks would be impossible to mount if the Internet weren’t designed to make them ridiculously easy. Where to assign the blame is therefore obvious–to me, at least.

  12. Dan, you sound like a dog owner who claims they can’t control their dog. Should I believe it’s not your fault if the dog bites some guy at a party in your backyard?

    You are right about one thing — at the root of this discussion is a debate about property rights. While I am not sympathetic to the case of P2P music sharing that violates copyright law, I am sympathetic to the debate about what the copyright laws should be.

    In the case of PC owners, I am arguing that if market competition were encouraged to find a solution to this problem, it probably would. You seem to argue that a solution is impossible. I just don’t agree. I think we should give market incentives to smart people to find a way out of the mess that monopoly has given us.

  13. Steve, if users were “held responsible for the use of their machines”, then that wouldn’t be an example of “the market with correct incentives, [working] to correct this problem”. It would be an example of government regulation intervening to override market forces, exacting legal penalties on PC users who merely chose to run programs that (as a side effect) happened to allow others to run proxy denial-of-service attacks on, say, corporate Web sites.

    Now, it’s just a little bit odd to see, here on freedom-to-tinker.com, people so eager to hold individual PC users liable for using their personal computers in ways that deprive a few faraway businesses of some of their revenue. But even if the moral argument for punishing my hypothetical future porn viewers/DoS attackers weren’t so strikingly hypocritical when coming from RIAA opponents, the practical argument would still hold in both cases: prosecuting individual PC users for using freeware that just happens to facilitate proxy DoS attacks is about as feasible as prosecuting individual PC users for using freeware that just happens to make mincemeat out of the copyright laws protecting works of entertainment.

    The real difference between these two cases, I assert, is not that today’s users are not in control of their PCs–indeed, file-swappers are quite conscious and deliberate in their copyright violation. The difference is that file-swappers are engaging in voluntary communication amongst themselves, whereas DoS victims are being subjected to unsolicited communication that they don’t want and are unable to avoid–largely because of a network architecture that (arguably by its very design) makes such communication next to impossible to avoid. That’s a failure of network security, not a matter of legal liability. I simply don’t understand how anyone who considers ridding commercial software of security vulnerabilities so simple a task that it can be made legally compulsory for software vendors, can simultaneously believe that DoS-resilient networking is so infeasible that an onerous regime of legal liability rules for PC owners is a more realistic alternative. Have you never heard of the telephone network, for pity’s sake?

  14. If all commercial software were impregnably secure, users could be held responsible for the use of their machines. But the situation today is that no user could ever reasonably be held responsible for security problems.

    I’m not certain why anyone would think the market, with correct incentives, won’t work to correct this problem. However, an arguer steeped in economic theory can easily make the argument that a monopoly provider of computer security models has very limited incentive to work quickly to address the core problems.

  15. I suppose any security problem can be viewed as a “market failure”, in the sense that it allows people to do bad things and get away with them, and therefore fails to “disincentivize” them sufficiently. I’m not sure why it’s useful to use this terminology, though, except perhaps insofar as one would like to argue implicitly that the correct solution is legal (as with most conventional market failures) rather than technical (as with typical security holes).

    I’m afraid I don’t know what a better Internet architecture would look like, either–I’m no networking guru, after all. I’ve heard of various proposals, and I gather that serious networking people tend to laugh them off with roughly the same dismissive, that’s-not-how-we-do-real-networking-son tone with which serious networking people used to laugh off the Internet.

    Suppose, though, that the cybersecurity czar were to knock heads at the research funding agencies, to get them to re-allocate at least some of the money currently devoted to applying duct tape and baling wire to an irreparably insecure Internet, towards more farsighted, speculative research into new, more secure Internet architectures. Who knows what brilliant ideas they might come up with?

    Now that would be a welcome correction to a serious market failure.

  16. Dan,

    There is a market failure in the situation you describe. Even if users knowingly rent their machines to attackers, this represents a market failure, because a user’s decision to do so does not take into account the harm that decision inflicts on the victims of attacks.

    This problem could, in theory at least (although the theory-practice gap is probably large here), be addressed by imposing liability on end users for the harm caused by misuse of their machines. Doing so might drive some people off the Internet, but those are people who cause net harm by being there.

    Perhaps this market failure could have been prevented, or at least mitigated, by designing the Internet differently. You make an interesting argument on that point, but I’m not sure I see what a better Internet architecture would look like (bearing in mind that the goal is not to minimize losses but to maximize net benefit). That’s a conversation worth having.

  17. It’s not the market that’s failed, but rather the Internet itself. The fact that thousands of individual users’ computers can be harnessed to attack and disable a large computer system on the other side of the world, at essentially no cost to the individual users, is a fundamental flaw in the Internet’s security model (one of many) that has nothing whatsoever to do with the security (or lack thereof) of off-the-shelf commercial software. Indeed, this problem would exist even if all commercial software were impregnably secure–attackers could simply use freeware, or pornography, or some other very cheap, scalable incentive, to get users to allow their machines to be harnessed voluntarily.

    But the computer science community turns a blind eye to the fact that the Internet’s architecture is fundamentally broken from a security point of view, because they have a huge collective stake (intellectual, philosophical, and even economic) in seeing the Internet continue to limp along, with minimal serious discussion of its deep, fundamental shortcomings. Some in the community (or so I hear) even seem to think it more sensible to demand that all commercial software be essentially bug-free (an obviously ludicrous requirement) than to question whether the Internet should be built on design principles that practically guarantee that it will always be a cyber-attacker’s paradise.

    Here’s a job for the cybersecurity czar: gather together a group of smart, reasonable people who are not in the grip of the Internet religion, and commission them to come up with a global network architecture that preserves as many of the attractive features of the Internet as possible, but isn’t an utter, complete, hopeless, irreparable security disaster. Next, offer the design to the commercial world for free. Finally, figure out how to protect it from the inevitable hysterical attacks from the computer science community–who would of course be happy to see the cybersecurity czar wield his big stick to protect their cherished baby, the Internet, but will surely scream blue murder if he dares promote a safer, perhaps less technogeek-friendly alternative.

  18. Chris Tunnell says

    Interestingly enough, he resigned according to the BBC:

    http://news.bbc.co.uk/2/hi/technology/3714412.stm

  19. BTW, I’ve long argued that the lack of competing security models within desktop operating systems slows the pace of innovation. No matter how much money MSFT spends on security, there will be little real progress as long as other real competitors fail to emerge.

    Perhaps the most encouraging information that I’ve seen lately is the prediction that MSFT’s IE browser will drop below 75% market share by this time next year. Why? Because users want a different security model.

  20. The government has a set of rules that describe the types of applications that can be used in govt systems. The Security Czar should be the guy that the administration hires to be the “buck stops here” point in the arbitration of the rules related to security.

    The security czar should also be an opinion-influencer, giving speeches at Comdex and other major shows to tell people the criteria they should use to make purchasing decisions that will lead to better security.

    The job will suck. The person who holds the position will be under constant attack by every industry player. So it should be an old timer with ample budget and a significant staff of young, smart rug rats charged with the job of figuring out policy.

  21. There is a range of options for a cyberterrorism czar. Some that come to mind (from previous thinking I have done about the notion of a Chief Risk Officer of the US):

    * defines the government’s risk appetite
    * marshals the required risk management resources
    * defines and establishes an organization and the roles and responsibilities within it
    * defines how to measure and monitor risk
    * puts the systems into place to collect that data
    * creates effective reports to improve transparency to stakeholders (i.e., the President, Congress and US citizens)

    Your two suggestions are interesting policy changes, whereas I am suggesting various functions of a new bureaucracy.

    These bullet points are just a framework. There is a bit of work to translate each of them into something more specific to cyberterrorism.

    For example, consider the point “marshalling risk management resources.” I recall seeing somewhere, probably here or slashdot, that 90% of viruses last year were written by one single person. A task of the cyberterrorism czar would be to work with the FBI, CIA and other enforcement agencies to find the very few people who have the capability and intention of inflicting massive damage to us through the Internet.

    Anyway, I think there is a lot of real work that a cyberterrorism czar could do other than “watching over” the private sector.

  22. The government can and does allocate liability, provide security information, and even secure products. The security market will not work until the incentives and the risks are aligned, and the information is made available to individuals so they can manage their own risks.

    For economics of security info check out
    http://www.infosecon.net

    Here is my thought experiment, and the paper it is in describes the problems of economics of security in more detail than this post. (http://www.ljean.com/files/isw.pdf)
    “We argue that provision of computer security in a networked environment is an externality and subject to market failures. However, regulatory regimes or pricing schemes can cause parties to internalize the externalities and provide more security. The current mechanisms for dealing with security are security analysis firms; publications of vulnerabilities; the provision of emergency assistance through incident response teams; and the option of seeking civil redress through the courts. The overall effectiveness of these mechanisms is questionable. The foundation of environmental economics supports building a market as a solution to the problem of widespread vulnerabilities. In this work we propose a market for vulnerability credits.”