April 24, 2017

Pragmatic advice for buying “Internet of Things” devices

We’re hearing more and more about security flaws in “Internet of Things” devices, such as a “messaging” teddy bear with poor security or Samsung televisions that can be hacked to become snooping devices. How are you supposed to make purchasing decisions for all of these devices when you have no idea how they work or whether they’re flawed?

Threat modeling and understanding the vendor’s stance. If a device comes from a large company with a reputation for caring about security (e.g., Apple, Google, and yes, even Microsoft), that’s a positive factor. If the device comes from a fresh startup or from a no-name Chinese manufacturer, that’s a negative factor. One particular thing to look for is evidence that the device automatically updates itself without requiring any intervention on your part. You might also consider the vendor’s commitment to security features as part of the device’s design. And you should consider how badly things could go wrong if that device were compromised. Let’s go through some examples.

Your home’s border router. When we’re talking about your home’s firewall / NAT / router box, a compromise is quite significant, as it would allow an attacker to be a full-blown man-in-the-middle on all your traffic. Of course, with the increasing use of SSL-secured web sites, this is less devastating than you’d think. And when our ISPs might be given carte blanche to measure and sell any information about you, you already need to actively mistrust your Internet connection. Still, if you’ve got insecure devices on your home network, your border router matters a lot.

A few years ago, I bought a pricey Apple Airport Extreme, which Apple kept updated automatically. It was easy to configure and manage, and it faithfully served my home network. But then Apple reportedly decided to abandon the product. That was enough for me to start looking around for alternatives, and I settled on the new Google WiFi system, not only because it does a clever mesh-network thing for whole-home coverage, but because Google explicitly claims security features (automatic updates, trusted boot, etc.) as part of its design. If you decide you don’t trust Google, you should evaluate other vendors’ security claims carefully rather than just buying the cheapest device at the local electronics store.

Your front door / the outside of your house. Several vendors offer high-tech door locks that speak Bluetooth or can otherwise open themselves without requiring a mechanical key. Other vendors offer “video doorbells”. And a number of IoT vendors have replacements for traditional home security systems, using your Internet connection to reach a “monitoring” service (and, in some cases, using a cellular radio connection as a backup). For my own house, I decided that a Ring video doorbell was a valuable idea, based on its various advertised features, but also because, even if it’s compromised, nobody can see into my house. In the worst case, an attacker can learn the times that I arrive at and leave my house, which aren’t exactly a nation-state secret. Conversely, I stuck with our traditional mechanical door locks. Sure, they’re surprisingly easy to pick, but I might at least end up with a nice video of the thief. I’m assuming I have more to fear from “smash and grab” amateur thieves than from careful professionals. Ultimately, we do have insurance for these sorts of things. Speaking of which, Ring provides a free replacement if somebody steals your video doorbell. That’s as much of a threat as anything.

Your home interior. Unlike the outside, I decided against any sort of always-on audio or video devices inside my house. No NestCam. No “smart” televisions. No Alexa or Google Home. After all the hubbub about baby monitors being actively cataloged and compromised, I wouldn’t be willing to have any such devices on my network because the risks outweigh the benefits. On the flip side, I’ve got no problem with my Nest thermostats. They’re incredibly convenient, and the vendor seems to have kept up with software patches and feature upgrades, continuing to support my first-generation devices. If compromised, an attacker might be able to overheat my house or perhaps damage my air conditioner by power-cycling it too rapidly. Possible? Yes. Likely? I doubt it. As with the video doorbell, there’s also a risk that an attacker could profile when I leave in the morning and get home in the evening.

Your mobile phones. The only in-home always-on audio surveillance is the “Ok Google” functionality in our various Android devices, which leads to a broader consideration of mobile phone security. All of our phones are Google Nexus or Pixel devices, so they’re running the latest release builds from Google. I’d feel similarly comfortable with the latest Apple devices. Suffice it to say that mobile phone security is really an entirely separate topic from IoT security, but many of the same considerations apply: is the vendor supplying regular security patches, and so forth.

Your home theater. As mentioned above, I’m not interested in “smart” devices that can turn into surveillance devices. Our “smart TV” solution is a TiVo device: actively maintained by TiVo, and with no microphone or camera. If compromised, an attacker could learn what we’re watching, but again, there are no deep secrets that need to be protected. (Gratuitous plug: SyFy’s “The Expanse” is fantastic.) Our TV itself is not connected to anything other than the TiVo and a Chromecast device (which, again, has a remarkably limited attack surface; it’s basically just a web browser without the buttons around the edges).

I’m pondering a number of 4K televisions to replace our older TV, and they all come with “smarts” built in. For most TV vendors, I’d just treat them as “dumb” displays, but I might make an exception for an Android TV device. I’ll note that Google abandoned support for its earlier Google TV systems, including a Sony-branded Google TV Blu-ray player that I bought back in the day, so I currently use it as a dumb Blu-ray player rather than as a smart device for running apps and such. My TiVo and Chromecast provide the “smart” features we need, and both are actively supported. Suffice it to say that when you buy a big television, you should expect it to last a decade or more, so it’s good to have the “smart” parts in smaller, cheaper devices that you can replace or upgrade more frequently.

Other gadgets. In our home, we’ve got a variety of other “smart” devices on the network, including a Rachio sprinkler controller, a Samsung printer, an Obihai VoIP gateway, and a controller for our solar panel array (powerline networking to gather energy production data from each panel!). The Obihai and the Samsung don’t do automatic updates and are probably security disaster areas. The Obihai apparently encrypts only its control traffic with Internet VoIP providers, while the data traffic is unencrypted. So do I need to worry about them? Certainly, if an attacker could somehow break in from one device and move laterally to another, the printer and the VoIP devices would be the tastiest targets, as an attacker could see what I print (hint: nothing very exciting, unless you really love grocery shopping lists) or listen in on my phone calls (hint: if it’s important, I’d use Signal or I’d have an in-person conversation without electronics in earshot).

Some usability thoughts. After installing all these IoT devices, one of the common themes I’ve observed is that they all have radically different setup procedures. A Nest thermostat, for example, has you spin the dial to enter your WiFi password, but other devices don’t have dials. What should they do? Nest Protect smoke detectors have a QR code printed on them which drives your phone to connect to a local WiFi access point inside the smoke detector. This is used to communicate the password for your real WiFi network, after which the local WiFi is never used again. For contrast, the Rachio sprinkler system uses a light sensor on the device that reads color patterns from your smartphone screen, which again sends it the configuration information to connect to your real WiFi network. These setup processes, and others like them, are making a tradeoff across security, usability, and cost. I don’t have any magic thoughts on how best to solve the “IoT pairing problem”, but it’s clearly one of the places where IoT security matters.
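To make that tradeoff concrete, here’s a minimal sketch of the common “temporary access point” pairing pattern, in the spirit of the Nest Protect approach described above. Everything in it is hypothetical: the address, endpoint, and payload fields are invented for illustration and don’t correspond to any vendor’s actual API.

```python
import json
import urllib.request

# Typical address for a device running its own temporary "SoftAP";
# the /provision endpoint and JSON fields are invented for this sketch.
DEVICE_SETUP_URL = "http://192.168.4.1/provision"

def provision_device(home_ssid: str, home_password: str) -> bool:
    """Hand home WiFi credentials to a device in setup mode.

    Security caveat, and the crux of the pairing problem: on an open
    setup network, anyone in radio range could observe or spoof this
    exchange unless the device layers its own crypto on top.
    """
    payload = json.dumps({"ssid": home_ssid, "psk": home_password}).encode()
    request = urllib.request.Request(
        DEVICE_SETUP_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        # A well-behaved device joins your real network and shuts down
        # its temporary access point after acknowledging this request.
        return response.status == 200
```

The weak point is visible right in the sketch: the whole scheme rests on whatever protection the device adds over that open setup network, which is exactly where security, usability, and cost collide.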

Security for “Internet of Things” devices is a topic of growing importance, at home and in the office. These devices offer all kinds of great features, whether it’s a sprinkler controller paying attention to the weather forecast or a smoke alarm that alerts you on your phone. Because they deliver useful features, they’re going to grow in popularity. Unfortunately, it’s not possible for most consumers to make meaningful security judgments about these devices, and even web sites that specialize in gadget reviews don’t have security analysts on staff. Consequently, consumers are forced to make tradeoffs (e.g., no video cameras inside the house) or to use device brands as a proxy for the quality and security of these devices.

Engineering around social media border searches

The latest news is that the U.S. Department of Homeland Security is considering requiring prospective visitors, while passing through a border checkpoint, to submit their “online presence” for inspection. That means immigration officials would require travelers to divulge their passwords to Facebook and other such services, which the agent might then inspect, right there, at the border crossing. This raises a variety of concerns, from its chilling impact on freedom of speech to its being an unreasonable search or seizure, never mind whether an airport border agent has the necessary training to make such judgments, much less the time to do it while hundreds of people are waiting in line to get through.

Rather than conduct a serious legal analysis, however, I want to talk about technical countermeasures. What might Facebook or other such services do to help defend their users as they pass a border crossing?

Fake accounts. It’s certainly feasible today to create multiple accounts for yourself, giving up the password to a fake account rather than your real account. Most users would find this unnecessarily cumbersome, and the last thing Facebook or anybody else wants is to have a bunch of fake accounts running around. It’s already a concern when somebody tries to borrow a real person’s identity to create a fake account and “friend” their actual friends.

Duress passwords. Years ago, my home alarm system offered two separate PINs. One of them would disable the alarm as normal. The other would sound a silent alarm, summoning the police immediately while making it seem like I had disabled the alarm. Let’s say Facebook supported something similar. You’d enter the duress password, and Facebook would lock out your account or switch to your fake account, as above.
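Here’s a minimal sketch of how a service might implement this, assuming it stores two independently salted password hashes per account. The account layout and function names are my own invention, not anything Facebook actually does.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    # Standard salted password hashing; parameters are illustrative.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def check_login(account: dict, password: str) -> str:
    """Return 'normal', 'duress', or 'denied' for a login attempt."""
    real_attempt = hash_password(password, account["real_salt"])
    duress_attempt = hash_password(password, account["duress_salt"])
    if hmac.compare_digest(real_attempt, account["real_hash"]):
        return "normal"
    if hmac.compare_digest(duress_attempt, account["duress_hash"]):
        # Quietly flag the session; show a decoy account or lock out.
        return "duress"
    return "denied"

# Illustrative account setup with two independent salts and hashes.
salt_a, salt_b = os.urandom(16), os.urandom(16)
account = {
    "real_salt": salt_a,
    "real_hash": hash_password("hunter2", salt_a),
    "duress_salt": salt_b,
    "duress_hash": hash_password("mayday42", salt_b),
}
assert check_login(account, "hunter2") == "normal"
assert check_login(account, "mayday42") == "duress"
```

The crucial design property is that the duress path is indistinguishable from a normal login to anyone watching over your shoulder.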

Temporary lockouts. If you know you’re about to go through a border crossing, you could give a duress password, as above, or you could arrange an account lockout in advance. You might, for example, designate ten trusted friends, any five of whom must declare that the lockout is over. Absent those declarations, your account would remain locked, and no amount of coercion at the border could compel you to give access to your own account.
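The quorum check itself is simple; the hard part is everything around it. Here’s a toy sketch with invented names, ignoring how the friends’ declarations would be authenticated:

```python
# Ten designated trustees; any five can end the lockout. How the
# declarations are authenticated (signatures, 2FA) is elided here.
TRUSTEES = {"alice", "bob", "carol", "dave", "erin",
            "frank", "grace", "heidi", "ivan", "judy"}
THRESHOLD = 5

def lockout_is_over(declarations: set) -> bool:
    """Only distinct, designated trustees count toward the quorum."""
    return len(declarations & TRUSTEES) >= THRESHOLD

assert not lockout_is_over({"alice", "bob", "mallory"})
assert lockout_is_over({"alice", "bob", "carol", "dave", "erin"})
```

A more robust design would use threshold cryptography (e.g., Shamir secret sharing) so that not even the service itself, under legal compulsion, could unlock the account early.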

Temporary sanitization. Absent any action from Facebook, the best advice today for somebody about to go through a border crossing is to sanitize their account beforehand. That means attempting to second-guess what border agents are looking for and deleting it in advance. Facebook might assist this by providing search features that allow users to temporarily drop friends, temporarily hide comments or posts matching certain keywords, and so on. As with the temporary lockouts, temporary sanitization would need a restoration process that could be delegated to trusted friends. Once you give the all-clear, everything comes back again.
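A sketch of what the keyword part might look like, with an invented data model; a real feature would also need to handle photos, tags, and the social graph, which is where this gets genuinely hard:

```python
def sanitize(posts: list, keywords: set) -> tuple:
    """Split posts into (still_visible, hidden_archive) by keyword."""
    visible, archived = [], []
    for post in posts:
        text = post["text"].lower()
        if any(keyword in text for keyword in keywords):
            archived.append(post)  # hidden, not deleted
        else:
            visible.append(post)
    return visible, archived

posts = [{"text": "Cute kitten video!"}, {"text": "My protest photos"}]
visible, archived = sanitize(posts, {"protest"})
assert len(visible) == 1 and len(archived) == 1
# After the all-clear (ideally gated by the same trusted-friend quorum
# as the lockout scheme), the archive merges back into the timeline.
```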

User defense in bulk. Every time a user, going through a border crossing, exercises a duress password, that’s an unambiguous signal to Facebook. Even absent such signals, Facebook would observe highly unusual login behavior coming from those specific browsers and IP addresses. Facebook could simply deny access to its services from government IP address blocks. While it’s entirely possible for the government to circumvent this, whether using Tor or whatever else, there’s no reason that Facebook needs to be complicit in the process.
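Mechanically, this kind of blocking is trivial. Here’s a sketch using placeholder documentation CIDR ranges (RFC 5737) rather than any real government networks:

```python
import ipaddress

# Placeholder documentation ranges, standing in for agency networks.
BLOCKED_NETWORKS = [ipaddress.ip_network(cidr)
                    for cidr in ("192.0.2.0/24", "198.51.100.0/24")]

def login_allowed(client_ip: str) -> bool:
    address = ipaddress.ip_address(client_ip)
    return not any(address in network for network in BLOCKED_NETWORKS)

assert not login_allowed("192.0.2.17")   # inside a blocked range
assert login_allowed("203.0.113.5")      # outside the blocked ranges
```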

So is there a reasonable alternative?

While it’s technically feasible for the government to require that Facebook give it full “backdoor” access to each and every account so it can render threat judgments in advance, this would constitute the most unreasonable search and seizure in the history of that phrase. Furthermore, if and when it became common knowledge that such unreasonable seizures were commonplace, that would be the end of the company. Facebook users have an expectation of privacy and will switch to other services if Facebook cannot protect them.

Wouldn’t it be nice if there was some less invasive way to support the government’s desire for “extreme vetting”? Can we protect ordinary users’ privacy while still enabling the government to intercept people who intend harm to our country? We certainly must assume that an actual bona fide terrorist will have no trouble creating a completely clean online persona to use while crossing a border. They can invent wholesome friends with healthy children sharing silly videos of cute kittens. While we don’t know much about our existing vetting strategies for distinguishing tourists from terrorists, we have to assume that the process involves the accumulation of signals, human intelligence, and other painstaking efforts by professional investigators to protect our country from harm. It’s entirely possible that they’re already doing a good job.

A response to the National Association of Secretaries of State

Election administration in the United States is largely managed state-by-state, with a small amount of Federal involvement. This generally means that each state’s chief election official is that state’s Secretary of State. Their umbrella organization, the National Association of Secretaries of State, consequently has a lot of involvement in voting issues, and recently issued a press release concerning voting system security that was remarkably erroneous. What follows is a point-by-point commentary on their press release.

To date, there has been no indication from national security agencies to states that any specific or credible threat exists when it comes to cyber security and the November 2016 general election.

Unfortunately, we now know that Russia appears to have broken into the DNC’s computers and leaked emails with the clear intent of influencing the U.S. presidential election (see, e.g., the New York Times’s article of July 26: “Why Security Experts Think Russia was Behind the DNC Breach”). It’s entirely reasonable to extrapolate that Russia may be willing to conduct further operations with the same goals, which means it’s necessary to take appropriate steps to mitigate such attacks, regardless of how specific the available intel is.

However, as a routine part of any election cycle, Secretaries of State and their local government counterparts work with federal partners, such as the U.S. Election Assistance Commission (EAC) and the National Institute of Standards and Technology (NIST), to maintain rigorous testing and certification standards for voting systems. Risk management practices and controls, including the physical handling and storage of voting equipment, are important elements of this work.

Expert analyses of current election systems (largely conducted ten years ago in California, Ohio, and Florida) found a wide variety of security problems. While some states have responded to these issues by replacing the worst paperless electronic voting systems, other states, including several “battleground” states, continue to use unacceptably insecure systems.

State election offices also proactively utilize election IT professionals and security experts to regularly review, identify and address any vulnerabilities with systems, including voter registration databases and election night reporting systems (which display the unofficial tallies that are ultimately verified via statewide canvassing).

The implication here is that all state election officials have addressed known vulnerabilities. This is incorrect. While some states have been quite proactive, other states have done nothing of the sort.

A national hacking of the election is highly improbable due to our unique, decentralized process.

Security vulnerabilities have nothing to do with probabilities. They instead have to do with a cost/benefit analysis on the part of the attacker. An adversary doesn’t have to attack all 50 states. All they have to do is tamper with the “battleground” states where small shifts in the vote can change the outcome for the whole state.
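To make the cost/benefit point concrete, here’s a toy calculation with entirely invented numbers, showing how few votes an attacker must shift to flip a close state:

```python
# Entirely invented numbers. Flipping a two-way race means shifting
# just over half the winning margin from the winner to the runner-up.
hypothetical_states = {  # state: (winner_votes, runner_up_votes)
    "State A": (2_000_000, 1_990_000),
    "State B": (1_500_000, 1_498_000),
    "State C": (3_000_000, 2_996_000),
}
for state, (winner, runner_up) in hypothetical_states.items():
    flips_needed = (winner - runner_up) // 2 + 1
    total = winner + runner_up
    print(f"{state}: shift {flips_needed:,} of {total:,} votes "
          f"({100 * flips_needed / total:.3f}%)")
# A few thousand targeted changes, out of millions of votes cast, is
# the scale required -- not a "national" hack of all fifty states.
```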

Each state and locality conducts its own system of voting, complete with standards and security requirements for equipment and software. Most states publicly conduct logic and accuracy testing of their machines prior to the election to ensure that they are working and tabulating properly, then they are sealed until Election Day to prevent tampering.

So-called “logic and accuracy testing” varies from location to location, but it generally boils down to casting a small number of votes for each candidate, on a handful of machines, and making sure they all show up in a mock tally. Similarly, local election officials will have procedures in place to make sure machines are properly “zeroed”. Computer scientists refer to these as “sanity tests”: if a system fails one, then something is obviously broken. But if these tests pass, they say nothing about the sort of tampering that a sophisticated nation-state adversary might conduct.
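Here’s a minimal sketch, with invented ballots and candidate names, of why a passing L&A test proves so little:

```python
from collections import Counter

def run_la_test(tally_fn) -> bool:
    """Cast a small, known pattern of votes; check the tally matches."""
    test_ballots = ["Alice", "Bob", "Alice", "Carol", "Bob", "Alice"]
    expected = Counter(test_ballots)  # Alice: 3, Bob: 2, Carol: 1
    return tally_fn(test_ballots) == expected

def honest_tally(ballots):
    return Counter(ballots)

def cheating_tally(ballots):
    # Behaves honestly on test-sized inputs; only a real election's
    # vote volume (or the date, or timing) would trigger the fraud.
    if len(ballots) < 100:
        return Counter(ballots)
    ...  # tampered counting would go here

assert run_la_test(honest_tally)
assert run_la_test(cheating_tally)  # the cheat passes, too
```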

Some election officials conduct more sophisticated “parallel testing”, where some voting equipment is pulled out of general service and is instead set up in a mock precinct, on election day, where mock voters cast seemingly real ballots. These machines would have a harder time distinguishing whether they were in “test” versus “production” conditions. But what happens if the machines fail the parallel test? By then, the election is over, the voters are gone, and there’s potentially no way to reconstruct the intent of the voters.

Furthermore, electronic voting machines are not Internet-based and do not connect to each other online.

This is partially true. Electronic voting systems do connect to one another through in-precinct local networks or through the motion of memory cards of various sorts. They similarly connect to election management systems before the start of the election (when they’re loaded with ballot definitions) and after the end of the election (for backups, recounts, inventory control, and/or being cleared prior to subsequent elections). All of these “touch points” represent opportunities for malware to cross the “air gap” boundaries. We built attacks like these a decade ago as part of the California Top to Bottom Review, showing how malware could spread “virally” to an entire county’s fleet of voting equipment. Attacks like these require a non-trivial up-front engineering effort, plus additional effort for deployment, but these efforts are well within the capabilities of a nation-state adversary.

Following the election, state and local jurisdictions conduct a canvass to review vote counting, ultimately producing the election results that are officially certified. Post-election audits help to further guard against deliberate manipulation of the election, as well as unintentional software, hardware or programming problems.

Post-election audits aren’t conducted at all in some jurisdictions, and would likely be meaningless against the sort of adversary we’re talking about. If a paperless electronic voting system was hacked, there might well be forensic evidence that the attackers left behind, but such evidence would be a challenge to identify quickly, particularly in the charged atmosphere of a disputed election result.

We look forward to continued information-sharing with federal partners in order to evaluate cyber risks, and respond to them accordingly, as part of ongoing state election emergency preparedness planning for November.

“Emergency preparedness” is definitely the proper way to consider the problem. Just as we must have contingency plans for all sorts of natural phenomena, like hurricanes, we must also be prepared for man-made phenomena, where we might be unable to reconstruct an election tally that accurately represents the will of the people.

The correct time to make such plans is right now, before the election. Since it’s far too late to decommission and replace our insecure equipment, we must instead plan for rapid responses, such as quickly printing single-issue paper ballots, bringing voters back to the polls, and doing it all over again. If such plans are made now, their very existence changes the cost/benefit equation for our adversaries, and will hopefully dissuade these adversaries from acting.