June 23, 2017

Berkeley releases report on barriers to cybersecurity research

I’m pleased to share this report, as I helped organize this event.

Researchers associated with the UC Berkeley School of Information and School of Law, the Berkeley Center for Law and Technology, and the International Computer Science Institute (ICSI) released a workshop report detailing legal barriers and other disincentives to cybersecurity research, along with recommendations to address them. The workshop, held at Berkeley in April and supported by the National Science Foundation, brought together leading computer scientists and lawyers from academia, civil society, and industry to map out legal barriers to cybersecurity research and propose a set of concrete solutions.

The workshop report provides important background for the NTIA-convened multistakeholder process exploring security vulnerability disclosure, which launched today at Berkeley.  The report documents the importance of cybersecurity research, the chilling effect caused by current regulations, and the diversity of the vulnerability landscape, which counsels against any single, fixed set of practices around vulnerability disclosure.

Read the report here.

VW = Voting Wulnerability

On Friday, the US Environmental Protection Agency (EPA) “accused the German automaker of using software to detect when the car is undergoing its periodic state emissions testing. Only during such tests are the cars’ full emissions control systems turned on. During normal driving situations, the controls are turned off, allowing the cars to spew as much as 40 times as much pollution as allowed under the Clean Air Act, the E.P.A. said.”  (NY Times coverage) The motivation for the “defeat device” was improved performance, although I haven’t seen whether “performance” in this case means faster acceleration or better fuel mileage.

So what does this have to do with voting?

For as long as I’ve been involved in voting (about a decade), technologists have expressed concerns about “logic and accuracy” (L&A) testing, the technique election officials use to ensure that voting machines are working properly before election day.  In some states, such tests are written into law; in others, they are common practice.  But as is well understood by computer scientists (and doubtless scientists in other fields), testing can prove the presence of flaws, but not their absence.

In particular, computer scientists have noted that clever (that is, malicious) software in a voting machine could behave “correctly” when it detects that L&A testing is occurring, and revert to its improper behavior when L&A testing is complete.  Such software could be introduced anywhere along the supply chain – by the vendor of the voting system, by someone in an elections office, or by an intruder who installs malware in voting systems without the knowledge of the vendor or elections office.  It really doesn’t matter who installs it; what matters is that the capability exists.

It’s not all that hard to write software that detects whether a given use is L&A testing or a real election.  L&A testing frequently follows telltale patterns: it is run on dates other than the first Tuesday in November, and it often uses scripted sequences such as three Democratic votes, followed by two Republican votes, followed by one write-in vote, followed by closing the election.  And the malicious software doesn’t need to decide in advance whether a given series of votes is L&A or a real election – it can make the decision when the election is closed down, and erase any evidence of the real votes.  The sketch below shows how little code such a check requires.
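To make the concern concrete, here is a minimal sketch in Python of how such a retrospective check might look.  Everything in it – the function name, the thresholds, the heuristics – is invented for illustration and taken from no real system; the point is only how cheap the detection logic is.

    from datetime import date

    # Hypothetical sketch only: every heuristic and name below is
    # invented for illustration, not taken from any real voting system.

    def looks_like_la_test(election_date: date, ballots: list[str]) -> bool:
        """Guess, at election close, whether this run was L&A testing."""
        # Real general elections fall on the first Tuesday in November.
        real_election_day = (election_date.month == 11
                             and election_date.weekday() == 1   # Tuesday
                             and election_date.day <= 7)
        # L&A runs tend to be short, scripted sequences of test ballots.
        tiny_scripted_run = len(ballots) < 50
        return (not real_election_day) or tiny_scripted_run

    # Because this check runs only when the election is closed down, the
    # software can behave honestly during any L&A test and decide at the
    # very end whether to report genuine or tampered totals.

The particular heuristics don’t matter; any scripted, predictable test procedure leaves this kind of signature.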

Such concerns have generally been dismissed in the debate about voting system security.  But with all-electronic voting systems, especially Direct Recording Electronic (DRE) machines (such as the touch-screen machines common in many states), this threat has always been present.

And now, we have evidence “in the wild” that the threat is real.  In this case, the vendor (Volkswagen) deliberately introduced software that detected whether it was in test mode or operational mode, and adjusted behavior accordingly.  In fact, VW had the harder problem: its software had to decide prospectively, while the engine was running, whether it was under test.  A voting system can make the decision retrospectively, once the election is closed.

In the case of voting, the best solution today is optically scanned paper ballots.  That way, we have “ground truth” (the paper ballots) to compare to the reported totals.
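To illustrate what “ground truth” buys us, here is a minimal sketch of a spot-check audit.  The function, the sample size, and the fixed 5-point tolerance are all my inventions; a real risk-limiting audit uses a statistically rigorous test rather than this stand-in.

    import random
    from collections import Counter

    # Illustrative only: a real risk-limiting audit uses a rigorous
    # statistical test, not the fixed 5-point tolerance used here.

    def spot_check(paper_ballots: list[str], reported: Counter,
                   sample_size: int) -> bool:
        """Hand-count a random sample of paper ballots and compare the
        sample's vote shares to the shares the machines reported."""
        sample = random.sample(paper_ballots, sample_size)
        hand_count = Counter(sample)
        total_reported = sum(reported.values())
        for candidate, votes in reported.items():
            reported_share = votes / total_reported
            sampled_share = hand_count[candidate] / sample_size
            if abs(reported_share - sampled_share) > 0.05:
                return False  # discrepancy: escalate to a full hand count
        return True

The scanner’s software can misreport totals, but it cannot forge the paper sitting in the ballot box – which is what makes the comparison meaningful.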

The bottom line: it’s far too easy for software to detect its own usage, and change behavior accordingly.  When the result is increased pollution or a tampered election, we can’t take the risk.

Postscript: A colleague pointed out that malware has for years behaved differently when it “senses” that it’s being monitored – a largely similar behavior.  In the VW and voting cases, though, the software isn’t trying to avoid detection of itself; it changes the behavior of the system it controls when it detects that it’s being monitored.

How not to measure security

A recent paper published by Smartmatic, a vendor of voting systems, caught my attention.

The first issue is that it’s published by Springer, which typically publishes peer-reviewed articles – and this is not one. It is a marketing piece. It’s disturbing that a respected imprint like Springer would get into the business of publishing vendor white papers. There’s no disclaimer that it’s not a peer-reviewed piece, nor any other indication that it doesn’t follow Springer’s historical standards.

The second, and more important, issue is that the article could not possibly have passed peer review, given some of its claims. I won’t go into the controversies around voting systems (a nice summary of some of those issues can be found on the OSET blog), but will rather focus on some of the security metrics claims.

The article states, “Well-designed, special-purpose [voting] systems reduce the possibility of results tampering and eliminate fraud. Security is increased by 10-1,000 times, depending on the level of automation.”

That would be nice. However, we have no agreed-upon way of measuring the security of systems (other than cryptographic algorithms, within limits). So the only way this claim is meaningful is if it’s qualified and explained – which it isn’t. Other studies, such as one I participated in (Applying a Reusable Election Threat Model at the County Level), have tried to quantify the risk to voting systems – our study measured risk in terms of the number of people required to carry out the attack. So is Smartmatic claiming that it can make an attack require 10 to 1,000 times more people, 10 to 1,000 times more money, 10 to 1,000 times more expertise (however that would be measured!), or something entirely different?

But the most outrageous statement in the article is this:

The important thing is that, when all of these methods [for providing voting system security] are combined, it becomes possible to calculate with mathematical precision the probability of the system being hacked in the available time, because an election usually happens in a few hours or at the most over a few days. (For example, for one of our average customers, the probability was 1×10⁻¹⁹. That is a point followed by 19 [sic] zeros and then 1). The probability is lower than that of a meteor hitting the earth and wiping us all out in the next few years—approximately 1×10⁻⁷ (Chemical Industry Education Centre, Risk-Ed n.d.)—hence it seems reasonable to use the term ‘unhackable’, to the chagrin of the purists and to my pleasure.

As noted previously, we don’t know how to measure much of anything in security, and we’re even less capable of measuring the results of combining technologies (which sometimes makes things more secure, and other times less secure). The claim that putting multiple security measures together yields risk probabilities with “mathematical precision” is ludicrous. And calling any system “unhackable” is just ridiculous, as Oracle discovered some years ago when its marketing department claimed its products were “unhackable”. (For the record, my colleagues in engineering at Oracle said they were aghast at the slogan.)
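A toy calculation (all numbers made up) shows how a figure like 1×10⁻¹⁹ typically arises – and why it means little: multiplying per-layer probabilities is valid only if the layers fail independently, and a single common-mode flaw collapses the product.

    # Made-up numbers, for illustration only.
    layer_bypass_probs = [1e-4, 1e-5, 1e-5, 1e-5]

    # The naive calculation: assume the layers fail independently
    # and multiply their bypass probabilities.
    naive = 1.0
    for p in layer_bypass_probs:
        naive *= p
    print(f"assuming independence:   {naive:.0e}")        # 1e-19

    # But a single flaw shared by every layer (a common library, a
    # compromised build server, an insider) defeats them all at once,
    # and the combined probability collapses to the common-mode one.
    common_mode = 1e-4
    print(f"with a common-mode flaw: {common_mode:.0e}")  # 1e-04

Fifteen orders of magnitude disappear with one correlated failure – precisely the kind of dependency nobody knows how to measure.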

As Ron Rivest said at a CITP symposium, if voting vendors have “solved the Internet security and cybersecurity problem, what are they doing implementing voting systems? They should be working with the Department of Defense or financial industry. These are not solved problems there.” If Smartmatic has a method for obtaining and measuring security with “mathematical precision” at the level of 1×10⁻¹⁹, they should be selling trillions of dollars in technology or expertise to every company on the planet, and putting everyone else out of business.

I debated posting this blog entry, because it may bring more attention to a marketing piece that should be buried. But I hope that writing this will dissuade anyone who might be persuaded by Smartmatic’s unsupported claims that masquerade as science. And I hope that it may embarrass Springer into rethinking their policy of posting articles like this as if they were scientific.