
Archives for February 2017

How the Politics of Encryption Affects Government Adoption

I wrote yesterday about reports that people in the White House are using encrypted communication apps more often, and why that might be. Today I want to follow up by talking about how the politics of encryption might affect government agencies’ choices about how to secure their information.  I’ll do this by telling the stories of the CIOs of three hypothetical Federal agencies.

Alice is CIO of Agency A. Her agency’s leader has said in speeches that encryption is a tool of criminals and terrorists, and that encryption is used mostly to hide bad or embarrassing acts. Alice knows that if she adopts encryption for the agency, her boss could face criticism for hypocrisy, for using the very technology that he criticizes. Even if there is evidence that encryption will make Agency A more secure, there is a natural tendency for Alice to look for other places to try to improve security instead.

Bob is CIO of Agency B. His agency’s leader has taken a more balanced view, painting encryption as a tool with broad value for honest people, and which happens to be used by bad people as well. Bob will be in a better position than Alice to adopt encryption if he thinks it will improve his agency’s security. But he might hesitate a bit to do so if Agencies A and B need to work together on other issues, or if the two agency heads are friends–especially if encryption seems more important to the head of Agency A than it does to the head of Bob’s own agency.

Charlie is CIO of Agency C. His agency’s leader hasn’t taken a public position on encryption, but the leader is known to be impulsive, thin-skinned, and resistant to advice from domain experts. Charlie worries that if he starts deploying encryption in his agency, and the leader then impulsively takes a strong position against encryption without consulting his team, the resulting accusations of hypocrisy could anger the leader. That might cost Charlie his job, or seriously undermine the authority he needs to properly manage agency IT. The safe thing for Charlie to do is to avoid deploying encryption–not only to preserve his job but also to protect the rest of the agency’s IT agenda. If Charlie doesn’t change the agency’s practice, then criticism of the practice can be deflected onto the previous leader (“and of course we’ll be upgrading to the better practice soon”). Here the uncertainty created by the leader’s management style deters Charlie from changing encryption practice.

Let’s recap. Alice, Bob, and Charlie are operating in different environments, but in all three cases, the politics of encryption are deterring them, at least a little, from deploying encryption. Their decision to deploy it or not will depend not only on their best judgment as to whether it will improve the agency’s security, but also on political factors that raise the cost of adopting encryption. And so their agencies may not make enough use of encryption.

This is yet another reason we need a serious and specific discussion about encryption policy.


On Encryption Apps in the White House

Politico ran a long story today pointing to an increase in the use of encrypted communication apps by people in DC, government, and the White House specifically.

Poisonous political divisions have spawned an encryption arms race across the Trump administration, as both the president’s advisers and career civil servants scramble to cover their digital tracks in a capital nervous about leaks.

The surge in the use of scrambled-communication technology — enabled by free smartphone apps such as WhatsApp and Signal — could skirt or violate laws that require government records to be preserved and the public’s business to be conducted in official channels, several ethics experts say. It may even cloud future generations’ knowledge of the full history of Donald Trump’s presidency.

The article seems to be well reported, and it raises some of the important issues around the trend toward encryption in DC. But I think it misses a few points, which I’d like to open up in this post.

The first point is that there is nothing wrong with government employees using encrypted apps for their personal communication. Indeed, doing so should be considered a best practice for people who might be targets for foreign intelligence services–such as people who work at the White House. Insecure practices in the personal lives of government officials create risk–and it seems ill-advised for White House officials to try to stop their employees from following security best practices on their personal phones.

The second issue is the relationship between encryption and record-keeping. Government employees are required to retain records of much of their official communication–which is one of the reasons why business and personal activities are conducted on separate systems, more so in the government than in other enterprises. (The other main reason is security. And of course classified information is handled on yet another separate array of systems.) Government systems are set up to collect the necessary records, whereas your personal systems probably don’t retain everything that you would need to keep if you were carrying out government business on them.

But notice that record-keeping does not depend on whether messages are encrypted as they traverse a network. It is perfectly feasible to transmit a message in encrypted form while archiving that message at one or both endpoints. If you’re using an untrusted network–and most of the networks you’ll encounter as you move through your life should be treated as untrusted–then it’s prudent to use encryption for data traversing those networks, and to meet any record-keeping requirement by logging messages at the endpoints. Some government-issued systems already work that way.
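To make this concrete, here is a minimal sketch in Python of an endpoint that archives a message in plaintext before encrypting it for the network. It uses the cryptography package’s Fernet recipe; the archive file name and the key handling are simplifications of my own for illustration, not a description of any real government system:

```python
# pip install cryptography
from datetime import datetime, timezone
from cryptography.fernet import Fernet

# In a real messaging app the key would be negotiated with the
# recipient; generating one locally is just for illustration.
key = Fernet.generate_key()
cipher = Fernet(key)

def send_message(plaintext: str) -> bytes:
    """Archive the plaintext locally, then encrypt it for the network."""
    # Record-keeping happens at the endpoint, before encryption...
    with open("archive.log", "a") as log:  # hypothetical archive file
        stamp = datetime.now(timezone.utc).isoformat()
        log.write(f"{stamp} {plaintext}\n")
    # ...so the message can still cross an untrusted network encrypted.
    return cipher.encrypt(plaintext.encode())

ciphertext = send_message("Meeting moved to 3pm.")
# Only a key holder can read what actually traversed the network.
assert cipher.decrypt(ciphertext).decode() == "Meeting moved to 3pm."
```

The encryption protects the message in transit; the archive satisfies record-keeping. The two concerns are handled at different points in the pipeline, which is why they need not conflict.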

But the reality for White House employees–based on my experience working there–is that they seem to have access to better encrypted communication tools on their private devices than they do on their government-issue devices. And that leads to a natural temptation to transact government business using secure apps on personal devices. One way to address that would be to improve the encrypted communication tools available on government-issued devices, while making sure to configure those tools to keep records and maintain accountability as legally required. That wouldn’t stop employees from using their personal devices because they want to avoid accountability–cheaters gonna cheat–but at least it would reduce the temptation to use personal devices to try to improve security.

Finally, one has to wonder how this discussion is affected by the politics of encryption. I’ll write about that in a future post.


RIP, SHA-1

Today’s cryptography news is that researchers have discovered a collision in the SHA-1 cryptographic hash function. Though long-expected, this is a notable milestone in the evolution of crypto standards.

Kudos to Marc Stevens, Elie Bursztein, Pierre Karpman, Ange Albertini, and Yarik Markov of CWI Amsterdam and Google Research for their result.

SHA-1 was standardized by NIST in 1995 for use as a cryptographic hash function (or simply “hash”).  Hashes have many uses, most notably as unique short “fingerprints” for data. A secure hash will be collision-resistant, which means it is infeasible to find two files that have the same hash.

One consequence of collision-resistance is that if you want to detect whether anyone has tampered with a file, you can just remember the hash of the file. This is nice because the hash will be small: a SHA-1 hash is only 20 bytes, and other popular hashes are typically 32 bytes. Later, if you want to verify that the file hasn’t changed, you can recompute the hash of the (possibly modified) file, and verify that the result is the same as the hash you remembered. If the hashes of two files are the same, then the files must be the same–otherwise the two files would constitute a collision.
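As a concrete illustration, here is the fingerprint-and-verify pattern in a small Python sketch. It uses SHA-256 rather than SHA-1, given today’s news, and the file name is a hypothetical example:

```python
import hashlib

def fingerprint(path: str) -> str:
    """Return the SHA-256 hash of a file as a hex string."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Remember only the 32-byte hash, not the whole file...
stored = fingerprint("report.pdf")  # hypothetical file

# ...then later, recompute and compare to detect tampering.
if fingerprint("report.pdf") != stored:
    print("File has been modified!")
```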

By 2011 it had become clear that SHA-1 was not as strong as expected. Any hash can be defeated by a brute-force search, so hashes are designed so that the cost of brute-force search is too high to be feasible. But methods had been discovered that reduced the cost of finding a collision by a factor of about 100,000 below the cost of a brute-force search.  All was not lost, because even with that cost reduction, defeating SHA-1 was still massively costly by 2011 standards. But the writing was on the wall, and NIST deprecated SHA-1 in 2011, which is standards-speak for advising people to stop using it as soon as practical.
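To put that factor in perspective, here is some back-of-the-envelope arithmetic (my framing, not the researchers’): a brute-force birthday search against SHA-1’s 160-bit output costs about 2^80 hash evaluations, so a 100,000-fold improvement brings the attack down to roughly 2^63:

```python
import math

brute_force = 2 ** 80   # birthday bound for a 160-bit hash
speedup = 100_000       # approximate improvement from known attacks

attack_cost = brute_force / speedup
print(f"~2^{math.log2(attack_cost):.1f} hash evaluations")  # ~2^63.4
```

Around 2^63 hash evaluations is enormous, but no longer unthinkable for a well-funded adversary–hence the 2011 deprecation.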

The new result demonstrates a collision in SHA-1. The researchers found two PDF files that have the same hash. This required a lot of computation: 6500 machine-years on standard computers (CPUs), plus 100 machine-years on slightly specialized computers (GPUs).
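Anyone can verify a collision like this: the two files hash identically under SHA-1, yet a stronger hash tells them apart. A sketch, where the file names stand in for the two published colliding PDFs:

```python
import hashlib

def digest(path: str, algo: str) -> str:
    with open(path, "rb") as f:
        return hashlib.new(algo, f.read()).hexdigest()

a, b = "shattered-1.pdf", "shattered-2.pdf"  # the colliding pair

print(digest(a, "sha1") == digest(b, "sha1"))      # True: a SHA-1 collision
print(digest(a, "sha256") == digest(b, "sha256"))  # False: the files differ
```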

Today, some systems in the field still rely on SHA-1, even though stronger hashes such as SHA-2 are seeing wider use, and the presumably stronger SHA-3 standard was issued in 2015. It is well past time to stop using SHA-1, but phasing it out has taken longer than expected.

There are two lessons here about crypto standards. First, it can take a long time to phase out a standard, even one that is deprecated and known to be vulnerable. Second, the work by NIST and the crypto community to plan ahead, deprecate SHA-1 early, and push forward successor standards will pay many dividends.

[Post updated (24 Feb) to improve terminology (collision-resistant, rather than collision-free), and to reflect the correct status of the SHA-3 standard.]

Smart Contracts: Neither Smart nor Contracts?

Karen Levy has an interesting new article critiquing blockchain-based “smart contracts.”  The first part of her title, “Book-Smart, not Street-Smart,” sums up her point. Here’s a snippet:

Though smart contracts do have some features that might serve the goals of social justice and fairness, I suggest that they are based on a thin conception of what law does, and how it does it. Smart contracts focus on the technical form of contract to the exclusion of the social contexts within which contracts operate, and the complex ways in which people use them. In the real world, contractual obligations are enforced through all kinds of social mechanisms other than formal adjudication—and contracts serve many functions that are not explicitly legal in nature, or even designed to be formally enforced.

To review, “smart contracts” are a feature of some blockchain-based systems, which allow an interaction between multiple parties to be encoded as a set of rules which will be executed automatically by the system, so that neither the parties nor anyone else can prevent those rules from being enforced. There are lots of variations on the basic idea, which differ in aspects such as exactly what kind of code is used to program the rules, what kinds of actions can be expressed in a ruleset, and so on.

A simple example is an escrow arrangement, where Alice puts some money into escrow, and the money is released to Bob later if an arbiter Charlie determines that Bob performed some required action; otherwise the money returns to Alice. An escrow mechanism can be encoded as a “smart contract” so that once put into escrow the funds can only be disbursed to Alice or Bob, and only as specified by Charlie. Additional features, such as (say) splitting the money 50/50 between Alice and Bob if Charlie fails to act, can be built in. Indeed, the whole idea is that complicated rules can be encoded and then automatically executed with no dispute or appeal possible.
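To make the escrow example concrete, here is a minimal sketch of such a mechanism as a Python state machine. This is an illustration of the idea only–real “smart contracts” are deployed as code on a blockchain–and every name and rule here is hypothetical:

```python
class Escrow:
    """A mindless mechanism: once funded, money can leave only
    along the paths coded below -- no appeal, no renegotiation."""

    def __init__(self, arbiter: str, deadline: int):
        self.arbiter = arbiter    # Charlie
        self.deadline = deadline  # e.g., a block height or timestamp
        self.balance = 0
        self.settled = False

    def deposit(self, amount: int) -> None:
        self.balance += amount    # Alice funds the escrow

    def rule(self, caller: str, bob_performed: bool) -> dict:
        """Only the arbiter can release the funds."""
        if caller != self.arbiter or self.settled:
            raise PermissionError("not authorized")
        self.settled = True
        return {("Bob" if bob_performed else "Alice"): self.balance}

    def timeout(self, now: int) -> dict:
        """If the arbiter never acts, split 50/50 after the deadline."""
        if self.settled or now < self.deadline:
            raise RuntimeError("too early")
        self.settled = True
        half = self.balance // 2
        return {"Alice": half, "Bob": self.balance - half}
```

Notice that every outcome has to be anticipated in the code; there is no clause for “whatever a reasonable judge would decide.” That observation is exactly where the trouble starts.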

Karen’s argument, that contracts serve functions that are not merely legal, is correct–and that is one reason why “smart contracts” may not be street-smart.  But in addition to failing to do the non-legal work that contracts do, “smart contracts” also fail to do much of the legal work that contracts do, because they don’t work in the same way as contracts.

To give just one example, a legal contract need not try to anticipate absolutely every relevant event that might occur. If some weird thing happens that is not envisioned in a regular legal contract, the parties can work out a modification to the contract that seems reasonable to them, and failing that, a judge might decide the outcome, subject to established legal principles.  Similarly, a single error or “bug” in writing a regular contract, causing its literal meaning to differ from what the parties intended, is unlikely to lead to extreme results because the legal system will often resolve such a problem by trying to be reasonable.

Contrast this with “smart contracts” where a bug in a “contract’s” code can lead to a perverse result that may allow one party to exploit the bug, extracting much of the value out of the arrangement with no recourse for the other parties. That’s what happened with the DAO in Ethereum, leading to a controversial attempt to unwind a legal-according-to-the-rules set of transactions, and dividing the Ethereum community.

So if “smart contracts” may not be smart, and may not be contracts, what are they? It’s best to think of them not as contracts but as mechanisms. A mechanism is a sort of virtual machine that will do exactly what it is designed to do. Like an industrial machine, which can cause terrible damage if it’s not designed very carefully for safety or if it is used thoughtlessly, a mechanism can cause harm unless designed and used with great care.  That said, in some circumstances a mechanism will be exactly what you need.

Discarding the term “smart contract”, which promises too much on both counts–sometimes not smart, sometimes unlike a contract–and thinking of these virtual objects instead as nothing more or less than mindless mechanisms is not only more accurate, but also more likely to lead to prudent application of this powerful idea.


Mitigating the Increasing Risks of an Insecure Internet of Things

The emergence and proliferation of Internet of Things (IoT) devices on industrial, enterprise, and home networks brings with it unprecedented risk. The potential magnitude of this risk was made concrete in October 2016, when insecure Internet-connected cameras launched a distributed denial of service (DDoS) attack on Dyn, a provider of DNS service for many large online service providers (e.g., Twitter, Reddit). Although this incident caused large-scale disruption, it is noteworthy that the attack involved only a few hundred thousand endpoints and a traffic rate of about 1.2 terabits per second. With predictions of upwards of a billion IoT devices within the next five to ten years, the risk of similar, yet much larger, attacks is imminent.

The Growing Risks of Insecure IoT Devices

One of the biggest contributors to the risk of future attack is the fact that many IoT devices have long-standing, widely known software vulnerabilities that leave them open to exploitation and control by remote attackers. Worse yet, the vendors of these IoT devices often have their roots in the hardware industry and may lack expertise or resources in software development and systems security. As a result, IoT device manufacturers may ship devices that are extremely difficult, if not practically impossible, to secure. The large number of insecure IoT devices connected to the Internet poses unprecedented risks to consumer privacy, as well as threats to the underlying physical infrastructure and the global Internet at large:

  • Data privacy risks. Internet-connected devices increasingly collect data about the physical world, including information about the functioning of infrastructure such as the power grid and transportation systems, as well as personal or private data on individual consumers. At present, many IoT devices either do not encrypt their communications or use a form of encrypted transport that is vulnerable to attack (a minimal transport-encryption sketch appears after this list). Many of these devices also store the data they collect in cloud-hosted services, which may be the target of data breaches or other attacks.
  • Risks to availability of critical infrastructure and the Internet at large. As the Mirai botnet attack of October 2016 demonstrated, Internet services often share dependencies on common underlying infrastructure: taking many websites offline did not require direct attacks on those services, but rather a targeted attack on the infrastructure on which many of them depend (i.e., the Domain Name System). More broadly, one might expect future attacks that target not just the Internet infrastructure but also physical infrastructure that is increasingly Internet-connected (e.g., power and water systems). The dependencies that are inherent in the current Internet architecture create immediate threats to resilience.

The large magnitude and broad scope of these risks compel us to seek solutions that will improve infrastructure resilience in the face of Internet-connected devices that are extremely difficult to secure. A central question in this problem area concerns the responsibility that each stakeholder in this ecosystem should bear, and the respective roles of technology and regulation (whether via industry self-regulation or otherwise) in securing both the Internet and associated physical infrastructure against these increased risks.
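On the transport-encryption point in the first bullet above, the baseline fix is technically simple. A minimal Python sketch of a device reporting a sensor reading over TLS; the endpoint URL and payload are hypothetical:

```python
# pip install requests
import requests

reading = {"device_id": "thermostat-42", "temp_c": 21.5}  # hypothetical device

# HTTPS encrypts the reading in transit and verifies the server's
# certificate by default -- two steps many IoT devices skip.
resp = requests.post(
    "https://telemetry.example.com/readings",  # hypothetical endpoint
    json=reading,
    timeout=5,
)
resp.raise_for_status()
```

That a few lines like these are still not universal in deployed devices says more about incentives than about technical difficulty, which is the subject of the next section.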

Risk Mitigation and Management

One possible lever for either government regulation or industry self-regulation is the IoT device manufacturer. One possibility, for example, might be a device certification program that could attest to a manufacturer’s adherence to best common practice for device and software security. A well-known (and oft-used) analogy is the UL certification process for electrical devices and appliances.

Despite its conceptual appeal, however, a certification approach poses several practical challenges. One challenge is outlining and prescribing best common practices in the first place, particularly given the rate at which technology (and attacks) progress. Any specific set of prescriptions runs the risk of falling out of date as technology advances; similarly, certification can readily devolve into a checklist of attributes that vendors satisfy at a point in time, without committing to an ongoing process for keeping those devices secure. As daunting as the challenges of specifying a certification program may seem, enforcing adherence to one may prove even more challenging. Specifically, consumers may not appreciate the value of certification, particularly if meeting the requirements of certification increases the cost of a device. This concern may be particularly acute for consumer IoT, where consumers may not bear the direct costs of connecting insecure devices to their home networks.

The consumer is another stakeholder who could be incentivized to improve the security of the devices that they connect to their networks (in addition to more effectively securing the networks to which they connect these devices). As the entity who purchases and ultimately connects IoT devices to the network, the consumer appears well-situated to ensure the security of the IoT devices on their network. Unfortunately, the picture is a bit more nuanced. First, consumers typically lack either the aptitude or the interest (or both!) to secure either their own networks or the devices that they connect to them. Home broadband Internet access users have generally proved to be poor at applying software updates in a timely fashion, for example, and have been equally delinquent in securing their home networks. Even skilled network administrators regularly face network misconfigurations, attacks, and data breaches. Second, in many cases, users may lack the incentives to ensure that their devices are secure. In the case of the Mirai botnet, for example, consumers did not directly face the brunt of the attack; rather, the ultimate victims were DNS service providers and, indirectly, online service providers such as Twitter. To a first approximation, consumers suffered little direct consequence from insecure devices on their networks.

Consumers’ misaligned incentives suggest several possible courses of action. One approach might involve placing some responsibility or liability on consumers for the devices that they connect to the network, in the same way that a citizen might be fined for other transgressions that have externalities (e.g., fines for noise or environmental pollution). Alternatively, Internet service providers (or another entity) might offer users a credit for purchasing and connecting only devices that pass certification; another variation of this approach might require users to purchase “Internet insurance” from their Internet service providers that could help offset the cost of future attacks. Consumers might receive credits or lower premiums based on the risk associated with their behavior (i.e., their software update practices, or the results of security audits of devices that they connect to the network).

A third stakeholder to consider is the Internet service provider (ISP), who provides Internet connectivity to the consumer. The ISP has considerable incentives to ensure that the devices its customers connect to the network are secure: insecure devices increase the volume of attack traffic and may ultimately degrade Internet service or performance for the rest of the ISP’s customers. From a technical perspective, the ISP is also in a uniquely effective position to detect and squelch attack traffic coming from IoT devices. Yet relying on the ISP alone to protect the network against insecure IoT devices is fraught with non-technical complications. First, while the ISP could technically defend against an attack by disconnecting or firewalling consumer devices that are launching attacks, such an approach would certainly result in increased complaints and technical support calls from customers, who connect devices to the network and simply expect them to work. Second, many of the technical capabilities that an ISP might have at its disposal (e.g., the ability to identify attack traffic coming from a specific device) introduce serious privacy concerns. For example, being able to alert a customer to (say) a compromised baby monitor requires the ISP to know (and document) that the customer has such a device in the first place.
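As a rough illustration of the detection half, a per-device rate threshold is about the simplest thing an ISP or home gateway might do. A Python sketch, with made-up traffic records and a made-up threshold:

```python
from collections import Counter

PACKETS_PER_SEC_LIMIT = 1000  # made-up threshold for a home device

def flag_suspects(packets, window_sec):
    """packets: (device_mac, timestamp) pairs observed in the window.
    Returns the devices sending at an anomalous rate."""
    counts = Counter(mac for mac, _ in packets)
    return {mac for mac, n in counts.items()
            if n / window_sec > PACKETS_PER_SEC_LIMIT}

# 120,000 packets in 60 seconds = 2,000 packets/sec from one device.
traffic = [("aa:bb:cc:dd:ee:ff", t * 0.0005) for t in range(120_000)]
print(flag_suspects(traffic, window_sec=60.0))  # {'aa:bb:cc:dd:ee:ff'}
```

Real deployments use far more sophisticated anomaly detection, but even this toy version shows why the ISP’s vantage point is powerful: it can observe per-device behavior across its entire customer base.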

Ultimately, managing the increased risks associated with insecure IoT devices may require action from all three stakeholders. Some of the salient questions will concern how the risks can be best balanced against the higher operational costs that will be associated with improving security, as well as who will ultimately bear these responsibilities and costs.

Improving Infrastructure Resilience

In addition to improving defenses against the insecure devices themselves, it is also critical to determine how to better build resilience into the underlying Internet infrastructure to cope with these attacks. If one views the occasional IoT-based attack as inevitable to some degree, one major concern is ensuring that the Internet infrastructure (and the associated cyberphysical infrastructure) remains both secure and available in the face of attack. In the case of the Mirai attack on Dyn, for example, the severity of the attack was exacerbated by the fact that many online services depended on the infrastructure that was attacked. Computer scientists and Internet engineers should be thinking about technologies that can both decouple these underlying dependencies and ensure that the infrastructure itself remains secure even when regulatory or legal levers fail to prevent every attack. One possibility that we are exploring, for example, is the role that an automated home network firewall could play in (1) helping users keep a better inventory of connected IoT devices, and (2) providing users both visibility into and control over the traffic flows that these devices send.
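A sketch of the inventory half of that idea, using the scapy packet-capture library; the network interface is hypothetical, capture generally requires elevated privileges, and a real system would need far more care:

```python
# pip install scapy
from collections import defaultdict
from scapy.all import Ether, sniff

inventory = defaultdict(int)  # MAC address -> packets observed

def record(pkt):
    if Ether in pkt:
        inventory[pkt[Ether].src] += 1

# Watch the home network briefly, then report which devices are
# present and how chatty each one is.
sniff(iface="eth0", prn=record, timeout=60)  # hypothetical interface

for mac, count in sorted(inventory.items(), key=lambda kv: -kv[1]):
    print(f"{mac}: {count} packets")
```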

Summary

Improving the resilience of the Internet and cyberphysical infrastructure in the face of insecure IoT devices will require a combination of technical and regulatory mechanisms. Engineers and regulators will need to work together to improve the security and privacy of the Internet of Things. Engineers must continue to advance the state of the art in technologies ranging from lightweight encryption to statistical network anomaly detection to help reduce risk; similarly, engineers must design the network to improve resilience in the face of the increased risk of attack. On the other hand, realizing these advances in deployment will require the appropriate alignment of incentives, so that the parties who introduce risks are better aligned with those who bear the costs of the resulting attacks.