August 30, 2016


Security against Election Hacking – Part 2: Cyberoffense is not the best cyberdefense!

State and county election officials across the country employ thousands of computers in election administration, most of which are connected (from time to time) to the internet (or exchange data cartridges with machines that are connected).  In my previous post I explained how we must audit elections independently of the computers, so we can trust the results even if the computers are hacked.

Still, if state and county election computers were hacked, it would be an enormous headache and it would certainly cast a shadow on the legitimacy of the election.  So, should the DHS designate election computers as “critical cyber infrastructure”?

This question betrays a fundamental misunderstanding of how computer security really works.  You as an individual buy your computers and operating systems from reputable vendors (Apple, Microsoft, IBM, Google/Samsung, HP, Dell, etc.).  Businesses and banks (and the Democratic National Committee, and the Republican National Committee) buy their computers and software from the same vendors.  Your security, and the security of all the businesses you deal with, is improved when these hardware and software vendors build products without security bugs in them.   Election administrators use computers that run Windows (or MacOS, or Linux) bought from the same vendors.

Parts of the U.S. government, particularly inside the NSA, have “cyberdefense” teams that analyze widely used software for security vulnerabilities.  The best thing they could do to enhance our security is notify the vendors immediately about vulnerabilities, so the vendors can fix the bugs (and learn their lessons).   Unfortunately, the NSA also has “cyberoffense” teams that like to save up these vulnerabilities, keep them secret, and use them as weak points to break into their adversaries’ computers.  They think they’re so smart that the Russkies, or the Chinese, will never be able to figure out the same vulnerabilities and use them to break into the computers of American businesses, individuals, the DNC or RNC, or American election administrators.  There’s even an acronym for this fallacy: NOBUS.  “NObody But US” will be able to figure out this attack.

Vulnerability lists accumulated by the NSA and DHS probably don’t include a lot of vote-counting software: those lists (probably) focus on widely used operating systems, office and word-processing software, network routers, phone apps, and so on.  But vote-counting software typically runs on widely used operating systems, uses PDF-handling software for ballot printing, and relies on network routers for vote aggregation.  Improvements in these components would improve election security.

So, the “cyberdefense” experts in the U.S. Government could improve everyone’s security, including election administrators, by promptly warning Microsoft, Apple, IBM, and so on about security bugs.  But their hands are often tied by the “cyberoffense” hackers who want to keep the bugs secret—and unfixed.  For years, independent cybersecurity experts have advocated that the NSA’s cyberdefense and cyberoffense teams be split up into two separate organizations, so that the offense hackers can’t deliberately keep us all insecure.   Unfortunately, in February 2016 the NSA did just the opposite: it merged its offense and defense teams together.

Some in the government talk as if “national cyberdefense” is some kind of “national guard” that they can send in to protect a selected set of computers.  But it doesn’t work that way.  Our computers are secure because of the software we purchase and install; we can choose vendors such as Apple, IBM, Microsoft, HP, or others based on their track record or based on their use of open-source software that we can inspect.  The DHS’s cybersecurity squad is not really part of that process, except insofar as it helps the vendors improve the security of their products.  (See also:  “The vulnerabilities equities process.”)

Yes, it’s certainly helpful that the Secretary of Homeland Security has offered “assistance in helping state officials manage risks to voting systems in each state’s jurisdiction.”  But it’s too close to the election to be fiddling with the election software—election officials (understandably) don’t want to break anything.

But really we should ask: Should the FBI and the NSA be hacking us or defending us?  To defend us, they must stop hoarding secret vulnerabilities, and instead get those bugs fixed by the vendors.


Election security as a national security issue

We recently learned that Russian state actors may have been responsible for the DNC emails leaked to WikiLeaks. Earlier this spring, once it became aware of the hack, the DNC hired CrowdStrike, an incident response firm. The New York Times reports:

Preliminary conclusions were discussed last week at a weekly cyberintelligence meeting for senior officials. The Crowdstrike report, supported by several other firms that have examined the same bits of code and telltale “metadata” left on documents that were released before WikiLeaks’ publication of the larger trove, concludes that the Federal Security Service, known as the F.S.B., entered the committee’s networks last summer.

President Obama added that “on a regular basis, [the Russians] try to influence elections in Europe.” For the sake of this blog piece, and it’s not really a stretch, let’s take it as a given that foreign nation-state actors including Russia have a large interest in the outcome of U.S. elections and are willing to take all sorts of unseemly steps to influence what happens here. Let’s take it as a given that this is undesirable and talk about how we might stop it.

It’s bad enough to see foreign actors leaking emails with partisan intent. To make matters worse, Bruce Schneier, in a Washington Post op-ed, and many other security experts have long worried about our voting systems themselves being hacked. How bad could this get? Several companies are now offering Internet-based voting systems alongside apparently unfounded claims as to their security. In one example, Washington, D.C. looked at using one such system for its local elections and ran a “pilot” in 2010, wherein the University of Michigan’s Alex Halderman and his students found and exploited significant security vulnerabilities. Had this system been used in a real election, any foreign nation-state actor could have done the same. Luckily, these systems aren’t widely used.

How vulnerable are our nation’s election systems, as they’ll be used this November 2016, to being manipulated by foreign nation-state actors? The answer depends on how close the election will be. Consider Bush v. Gore in 2000. If an attacker, knowing it would be a very close election, had found a way to specifically manipulate the outcome in Florida, then their attack could well have had a decisive impact. Of course, predicting election outcomes is as much an art as a science, so an attacker would need to hedge their bets and go after the voting systems in multiple “battleground” states. Conversely, there’s no point in going after highly polarized states, where small changes will have no decisive impact. As an attacker, you want to leave a minimal footprint.

How good are we at defending ourselves? Will cyber attacks on current voting systems leave evidence that can be detected prior to our elections? Let’s consider the possible attacks and how our defenses might respond.

Voter de-registration: The purpose of many attacks is simply to break things. Applied with partisan intent, you’d want to break things for one party more than the other. The easiest attack would be to hack a voter registration system, deleting voters who you believe are likely to support the candidate you don’t like. For voters who have registered with a political party, you know everything you need to know about whom to delete. For independent voters, you can probabilistically infer their political leanings from how their local precinct votes and from other demographic variables, as in the sketch below. (Political scientists do this sort of thing all the time.) Fortunately, selectively destroying voter registration databases is likely to be recoverable: affected voters could demand to vote “provisional ballots,” and those ballots would get counted as normal once the voter registration databases were restored.
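
To make the inference step concrete, here is a minimal sketch (my illustration, not the post’s; the base rate, adjustments, and field names are all hypothetical assumptions) of how public precinct returns and demographics could be turned into a party-support score:

```python
# Hypothetical sketch: score an independent voter's likely party support
# using only public data. The numbers are invented for illustration,
# not drawn from any real model.

def support_probability(precinct_dem_share, age, urban):
    """Crude estimate of P(voter supports the Democratic candidate)."""
    p = precinct_dem_share            # base rate: the precinct's two-party share
    if age < 30:
        p += 0.05                     # hypothetical demographic adjustments
    if urban:
        p += 0.05
    return min(max(p, 0.0), 1.0)

# A political scientist (or an attacker) would then act only where the
# estimate is confident:
print(support_probability(0.62, 24, True))   # -> 0.72
```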

Vote flipping: A nastier attack would require an attacker to access the computers inside DRE voting systems. (“Direct recording electronic” systems are typically touch-screen computers with no voter-verifiable paper trail. The only record of a voter’s ballot is stored electronically, inside the computer.) These voting systems are typically not connected to the Internet, although they do connect to election management computers, and those sometimes use modems to gather data from remote precincts. (Details vary from state to state and even county to county.) From the perspective of a nation-state cyber attacker, a modem might as well be a direct connection to the Internet. Once you can get malware into one of these election management computers, you can delete or flip votes. If you’re especially clever, you can use the occasional connections from these election management computers to the voting machines and corrupt the voting machines themselves. (We showed how to do these sorts of viral attacks as part of the California Top-to-Bottom Review in 2007.)

With paperless DRE systems, attacked by a competent nation-state actor, there will be no reason to believe any of the electronic records are intact, and a competent attacker would presumably also be good enough to clean up on their way out, so there wouldn’t necessarily even be any evidence of the attack.

The good news is that paperless DRE systems are losing market share and being replaced, slowly but surely, with several varieties of paper-ballot systems (some hand-marked and electronically scanned, others machine-marked). A foreign nation-state adversary can’t reach across the Internet and change what’s printed on a piece of paper, which means that a post-election auditing strategy comparing the electronic results to the paper results can efficiently detect (and thus deter) electronic tampering.
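
To see why paper-plus-audit is efficient, consider the arithmetic (a minimal sketch of my own, assuming ballots are sampled uniformly at random and every altered ballot produces a visible paper/electronic mismatch): the number of ballots you must compare stays small even for high confidence.

```python
import math

# Minimal sketch (my arithmetic, not a formal risk-limiting audit):
# how many randomly sampled paper ballots must be compared against their
# electronic records so that tampering with a fraction f of all ballots
# shows at least one mismatch with probability >= confidence.

def audit_sample_size(f, confidence=0.95):
    # P(no mismatch in n samples) = (1 - f)^n; require it <= 1 - confidence
    return math.ceil(math.log(1 - confidence) / math.log(1 - f))

print(audit_sample_size(0.01))   # ~299 ballots to catch 1% tampering
print(audit_sample_size(0.05))   # ~59 ballots to catch 5% tampering
```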

Where would an adversary attack? The most bang for the buck for a foreign nation-state bent on corrupting our election would be to tamper with paperless DRE voting systems in a battleground state. So where, then? Check out the NYT’s interactive “paths to the White House” page, wherein you can play “what-if” games on which states might have what impact in the Electoral College. The top battleground state is Florida, but thanks in part to the disastrous 2006 election in Florida’s 13th Congressional district, Florida dumped its DRE voting systems for optically scanned paper ballots; it would be much harder there for an adversarial cyber attack to go undetected. What about other battleground states? According to the data on the Verified Voting website, Pennsylvania continues to use paperless DREs, as does Georgia. Much of Ohio uses DRE systems with “toilet paper roll” printers, where voters are largely unable to detect if anything is printed incorrectly, so we’ll lump them in with the paperless states. North Carolina uses a mix of technologies, some of which are more vulnerable than others. So let’s say the Russians want to rig the election for Trump. If they could guarantee a Trump win in Pennsylvania, Georgia, Ohio, and North Carolina, then a Florida victory could put Trump over the top. Even without conspiracy theories, Florida will still be an intensely fought battleground state, but we don’t need a foreign government making it any worse.

So what should these sensitive states do in the short term? At this point, it’s far too late to require non-trivial changes in election technologies or even most procedures. They’re committed to what they’ve got and how they’ll use it. We could imagine requiring some essential improvements (security patches and updates installed, intrusion detection and monitoring equipment installed, etc.) and even some sophisticated analyses (e.g., pulling voting machines off the line and conducting detailed / destructive analyses of their internal state, going beyond the weak tamper-protection mechanisms presently in place). Despite all of this, we could well end up in a scenario where we conclude that we have unreliable or tampered election data and cannot use it to produce a meaningful vote tally.

Consider also that all an adversary needs to do is raise enough doubt that the loser has seemingly legitimate grounds to dispute the result. Trump is already suggesting that this November’s election might be rigged, without any particular evidence to support this conjecture. This makes it all the more essential that we have procedures that all parties can agree to for recounts, for audits, and for what to do when those indicate discrepancies.

In case of emergency, break glass. If we’re facing a situation where we see tampering on a massive scale, we could end up in a crisis far worse than Florida after the Bush/Gore election of 2000. If we do nothing until after we find problems, every proposed solution will be tinted with its partisan impact, making it difficult to reach any sort of procedural consensus. Nobody wants to imagine a case where our electronic voting systems have been utterly compromised, but if we establish processes and procedures, in advance, for dealing with these contingencies, such as commissioning paper ballots and rerunning the elections in impacted areas, we will disincentivize foreign election adversaries and preserve the integrity of our democracy.

(Addendum: contingency planning was exactly the topic of discussion after Hurricane Sandy disrupted elections across the Northeast in November 2012. It would be useful to revisit whatever changes were made then, in light of the new threat landscape we have today.)



Brexit Exposes Old and Deepening Data Divide between EU and UK

After the Brexit vote, politicians, businesses and citizens are all wondering what’s next. In general, legal uncertainty permeates Brexit, but in the world of bits and bytes, Brussels and London have in fact been on a collision course at least since the 1990s. The new British prime minister, Theresa May, has been personally responsible for a deepening divide across the North Sea on data and communications policy. Although EU citizens will see stronger privacy and cybersecurity protections through EU law post-Brexit, multinational companies should be particularly worried about how future regulation will treat the loads of data about customers, employees, and deals that they move between the EU and the UK.  [Read more…]


Who Will Secure the Internet of Things?

Over the past several months, CITP-affiliated Ph.D. student Sarthak Grover and fellow Roya Ensafi have been investigating various security and privacy vulnerabilities of Internet of Things (IoT) devices in the home network, to get a better sense of the current state of the smart devices that many consumers have begun to install in their homes.

To explore this question, we purchased a collection of popular IoT devices, connected them to a laboratory network at CITP, and monitored the traffic that these devices exchanged with the public Internet. The devices we explored included a Belkin WeMo Switch, the Nest Thermostat, an Ubi Smart Speaker, a Sharx security camera, a PixStar digital photo frame, and a SmartThings hub. We initially expected that end-to-end encryption might foil our attempts to monitor the traffic to and from these devices.

What We Found: Be Afraid!

Many devices fail to encrypt at least some of the traffic that they send and receive. Investigating the traffic to and from these devices turned out to be much easier than expected, as many of the devices exchanged personal or private information with servers on the Internet in the clear, completely unencrypted.

Last week we presented a summary of our findings to the Federal Trade Commission at PrivacyCon.  The video of Sarthak’s talk is available from the FTC website, as well as on YouTube.  Among the more striking findings:

  • The Nest thermostat was revealing location information of the home and weather station, including the user’s zip code, in the clear.  (Note: Nest promptly fixed this bug after we notified them.)
  • The Ubi uses unencrypted HTTP to communicate information to its portal, including voice chats and sensor readings (sound, temperature, light, humidity). It also communicates with the user via unencrypted email. Needless to say, much of this data, including the sensor readings, could reveal critical information, such as whether the user was home, or even movements within the house.
  • The Sharx security camera transmits video over unencrypted FTP; if the server for the video archive is outside of the home, this traffic could also be intercepted by an eavesdropper.
  • All traffic to and from the PixStar photoframe was sent unencrypted, revealing many user interactions with the device.

Traffic capture from Nest Thermostat in Fall 2015, showing user zip code and other information in cleartext.

Traffic capture from Ubi, which sends sensor values and states in clear text.

Some devices encrypt data traffic, but encryption may not be enough. A natural reaction to some of these findings might be that these devices should encrypt all traffic that they send and receive. Indeed, some devices we investigated (e.g., the SmartThings hub) already do so. Encryption may be a good starting point, but by itself, it appears to be insufficient for preserving user privacy.  For example, user interactions with these devices generate traffic signatures that reveal information, such as when power to an outlet has been switched on or off. It appears that simple traffic features such as traffic volume over time may be sufficient to reveal certain user activities, as the sketch below illustrates.
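
As a hedged illustration of how little is needed (my sketch, not the analysis from the talk; the interval and threshold are arbitrary assumptions), consider bucketing an encrypted flow’s bytes into time windows and flagging bursts:

```python
from collections import Counter

# Sketch: per-interval traffic volume as an activity signature.
# Input: (timestamp_seconds, packet_bytes) pairs parsed from a capture.

def activity_windows(packets, interval=1.0, threshold_bytes=5000):
    volume = Counter()
    for ts, nbytes in packets:
        volume[int(ts // interval)] += nbytes
    # Windows whose volume exceeds the threshold likely coincide with
    # user interactions, e.g. a smart outlet being switched on or off.
    return sorted(w for w, v in volume.items() if v > threshold_bytes)
```

Against an otherwise-quiet device, even this crude detector could time-stamp switch events without decrypting a single byte.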

In all cases, DNS queries from the devices clearly indicate the presence of these devices in a user’s home. Indeed, even when the data traffic itself is encrypted, other traffic sent in the clear, such as DNS lookups, may reveal not only the presence of certain devices in your home, but likely also information about both usage and activity patterns.
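
For example (a sketch under assumptions: scapy is installed, and the capture file name home.pcap is hypothetical), extracting the looked-up names from a home traffic capture takes only a few lines:

```python
from scapy.all import rdpcap, DNSQR

# Sketch: the DNS names a device queries identify the device (and hint
# at activity patterns) even when all of its data traffic is encrypted.

names = set()
for pkt in rdpcap("home.pcap"):          # hypothetical capture file
    if pkt.haslayer(DNSQR):
        names.add(pkt[DNSQR].qname.decode(errors="replace"))

for name in sorted(names):
    print(name)   # e.g. a thermostat vendor's cloud endpoint
```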

Of course, there is also the concern about how these companies may use and share the data that they collect, even if they manage to collect it securely. And, beyond the obvious and more conventional privacy and security risks, there are also potential physical risks to infrastructure that may result from these privacy and security problems.

Old problems, new constraints. Many of the security and privacy problems that we see with IoT devices sound familiar, but they arise in a new context that presents unique challenges:

  • Fundamentally insecure. Manufacturers of consumer products have little interest in releasing software patches and may even design the device without any interface for patching the software in the first place.  There are various examples of insecure devices that ordinary users may connect to the network without any attempt to secure them (or any means of doing so).  Occasionally, these insecure devices can serve as “stepping stones” into the home, allowing attackers to mount more extensive attacks. A recent study identified more than 500,000 insecure, publicly accessible embedded networked devices.
  • Diverse. Consumer IoT settings bring a diversity of devices, manufacturers, firmware versions, and so forth. This diversity can make it difficult for a consumer (or the consumer’s ISP) to answer even simple questions such as exhaustively identifying the set of devices that are connected to the network in the first place, let alone detecting behavior or network traffic that might reveal an anomaly, compromise, or attack.
  • Constrained. Many of the devices in an IoT network are severely resource-constrained: the devices may have limited processing or storage capabilities, or even limited battery life, and they often lack a screen or intuitive user interface. In some cases, a user may not even be able to log into the device.  

Complicating matters, a user has limited control over the IoT device, particularly as compared to a general-purpose computing platform. When we connect a general purpose device to a network, we typically have at least a modicum of choice and control about what software we run (e.g., browser, operating system), and perhaps some more visibility or control into how that device interacts with other devices on the network and on the public Internet. When we connect a camera, thermostat, or sensor to our network, the hardware and software are much more tightly integrated, and our visibility into and control over that device is much more limited. At this point, we have trouble, for example, even knowing that a device might be sending private data to the Internet, let alone being able to stop it.

Compounding all of these problems, of course, is the access a consumer gives an untrusted IoT device to other data or devices on the home network, simply by connecting it to the network—effectively placing it behind the firewall and giving it full network access, including in many cases the shared key for the Wi-Fi network.

A Way Forward

Ultimately, multiple stakeholders may be involved with ensuring the security of a networked IoT device, including consumers, manufacturers, and Internet service providers. Many questions remain unanswered concerning who is able to secure these devices (and who is responsible for doing so), but we should start the discussion about how to improve the security of networks with IoT devices.

This discussion will include both policy aspects (including who bears the ultimate responsibility for device insecurity, whether devices need to adopt standard practices or behaviors, and how long their manufacturers should continue to support them) and technical aspects (including how we design the network to better monitor and control the behavior of these often-insecure devices).

Devices should be more transparent. The first step towards improving security and privacy for IoT should be to work with manufacturers to improve the transparency of these IoT devices, so that consumers (and possibly ISPs) have more visibility into what software the devices are running, and what traffic they are sending and receiving. This, of course, is a Herculean effort, given the vast quantity and diversity of device manufacturers; an alternative would be to infer what devices are connected to the network based on their traffic behavior, but doing so in a way that is comprehensive, accurate, and reasonably informative seems extremely difficult.

Instead, some IoT device manufacturers might standardize on a manifest protocol that announces basic information, such as the device type, available sensors, firmware version, the set of destinations the device expects to communicate with (and whether the traffic is encrypted), and so forth. (Of course, such a manifest poses its own security risks.)
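
No such standard exists (the point here is to propose one), so every field below is a hypothetical assumption, but a manifest might look roughly like this:

```python
# Hypothetical device manifest; all field names and values are invented
# for illustration, since no such standard currently exists.

manifest = {
    "device_type": "thermostat",
    "firmware_version": "5.1.3",
    "sensors": ["temperature", "humidity", "motion"],
    "expected_destinations": [
        {"host": "api.vendor.example", "port": 443, "encrypted": True},
        {"host": "ntp.vendor.example", "port": 123, "encrypted": False},
    ],
}
```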

Network infrastructure can play a role. Given such basic information, anomalous behavior that is suggestive of a compromise or data leak would be more evident to network intrusion detection systems and firewalls—in other words, we could bring more of our standard network security tools to bear on these devices, once we have a way to identify what the devices are and what their expected behavior should be. Such a manifest might also serve as a concise (and machine readable!) privacy statement; a concise manifest might be one way for consumers to evaluate their comfort with a certain device, even though it may be far from a complete privacy statement.

Armed with such basic information about the devices on the network, smart network switches would have a much easier way to implement network security policies. For example, a user could specify that the smart camera should never be able to communicate with the public Internet, or that the thermostat should only be able to interact with the window locks if someone is present.
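
Continuing the hypothetical manifest sketched above, a smart switch could reduce such a user policy to a per-flow check; the flow representation and field names are again my assumptions:

```python
# Sketch: enforce "only talk to destinations the manifest declared,"
# plus an optional user policy pinning a device to the local network.

manifest = {"expected_destinations": [
    {"host": "api.vendor.example", "port": 443, "encrypted": True},
]}

def flow_allowed(manifest, dst_host, dst_port, local_only=False):
    if local_only:                     # e.g. "the camera never leaves home"
        return dst_host.endswith(".local")
    return any(d["host"] == dst_host and d["port"] == dst_port
               for d in manifest["expected_destinations"])

# Anything the manifest did not declare is suspicious: flag or block it.
assert flow_allowed(manifest, "api.vendor.example", 443)
assert not flow_allowed(manifest, "exfil.example", 80)
```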

Current network switches don’t provide easy mechanisms for consumers to either express or implement these types of policies. Advances in Software-Defined Networking (SDN), in software switches such as Open vSwitch, may make it possible to implement policies that resolve contention for shared resources and conflicts, or isolate devices on the network from one another. But even if that is a reasonable engineering direction, this technology will only take us part of the way: users will ultimately need far better interfaces to monitor network activity and to express policies about how these devices should behave and exchange traffic.

Update [20 Jan 2016]: After significant press coverage, Nest has contacted the media to clarify that the information being leaked in cleartext was not the zip code of the thermostat, but merely the zip code that the user enters when configuring the device. (Clarifying statement here.) Of course, when would a user ever enter a zip code other than that of their home, where the thermostat is located?


A clear line between offense and defense

The New York Times, in an editorial today entitled “Arms Control for a Cyberage”, writes,

The problem is that unlike conventional weapons, with cyberweapons “there’s no clear line between offense and defense,” as President Obama noted this month in an interview with Re/code, a technology news publication. Defense in cyberwarfare consists of pre-emptively locating the enemy’s weakness, which means getting into its networks.

This is simply wrong.
[Read more…]


Expert Panel Report: A New Governance Model for Communications Security?

Today, the vulnerable state of electronic communications security dominates headlines across the globe, while surveillance, money and power increasingly permeate the ‘cybersecurity’ policy arena. With the stakes so high, how should communications security be regulated? Deirdre Mulligan (UC Berkeley), Ashkan Soltani (independent, Washington Post), Ian Brown (Oxford) and Michel van Eeten (TU Delft) weighed in on this proposition at an expert panel on my doctoral project at the Amsterdam Information Influx conference. [Read more…]


“Loopholes for Circumventing the Constitution”, the NSA Statement, and Our Response

CBS News and a host of other outlets have covered my new paper with Sharon Goldberg, Loopholes for Circumventing the Constitution: Warrantless Bulk Surveillance on Americans by Collecting Network Traffic Abroad. We’ll present the paper on July 18 at HotPETS [slides, pdf], right after a keynote by Bill Binney (the NSA whistleblower), and at TPRC in September. Meanwhile, the NSA has responded to our paper in a clever way that avoids addressing what our paper is actually about. [Read more…]


Will Greenwald’s New Book Reveal How to Conduct Warrantless Bulk Surveillance on Americans from Abroad?

Tomorrow, Glenn Greenwald’s highly anticipated book ‘No Place to Hide’ goes on sale. Apart from personal accounts of working with whistleblower Edward Snowden in Hong Kong and elsewhere, Mr. Greenwald has announced that he will reveal new surveillance operations by Western intelligence agencies. In recent weeks, Sharon Goldberg and I have been finishing a paper on Executive Order 12333 (“EO 12333”). We argue that EO 12333 creates legal loopholes for U.S. authorities to circumvent the U.S. Constitution and conduct largely unchecked and unrestrained bulk surveillance of American communications from abroad. In addition, we present several known and new technical means to exploit those legal loopholes. Today, we publish a summary of our new paper in this post.

We stress that we’re not in a position to suggest that U.S. authorities are actually structurally circumventing the Constitution using the international loophole we discuss in the paper.  But, we’re wondering: will the gist of our analysis be part of Greenwald’s new revelations tomorrow? A first snippet of Greenwald’s new book in The Guardian, about hacking American routers destined for use overseas, seems to point in that direction. Here’s our summary. [Read more…]


Signing Mass Surveillance Declarations and Petitions: Should Academics Take a Stance?

Quite often, especially since the Snowden revelations began, tech policy academics are approached by NGOs and colleagues to sign petitions ‘to end mass surveillance’. It’s not always easy to decide whether you want to sign. If you’re an academic, you might want to consider co-signing one initiative launched today. [Read more…]


The Politics of the EU Court Data Retention Opinion: End to Mass Surveillance?

The Wall Street Journal headlines: “EU Court Opinion: Data Retention Directive Incompatible With Fundamental Rights”. The Opinion is strong, but in fact not yet an outright victory for privacy and civil liberties. The jury is still out: the Opinion is a non-binding but influential advice to the E.U. Court, which will deliver its final judgment next spring. Now is a perfect moment to analyze the Opinion, as well as the institutional politics of the E.U. Court, which are critical to understanding the two-tier approach to surveillance and fundamental rights in Europe. The two tiers will converge, after 60 years, when the E.U. accedes to the European Convention on Human Rights, which could happen anytime soon. Amidst the Snowden revelations, these are the fundamental legal developments that will ultimately answer the question whether European law can end mass surveillance.

[Read more…]