Archives for August 2016

Security against Election Hacking – Part 2: Cyberoffense is not the best cyberdefense!

State and county election officials across the country employ thousands of computers in election administration, most of which are connected (from time to time) to the internet (or exchange data cartridges with machines that are connected).  In my previous post I explained how we must audit elections independently of the computers, so we can trust the results even if the computers are hacked.

Still, if state and county election computers were hacked, it would be an enormous headache and it would certainly cast a shadow on the legitimacy of the election.  So, should the DHS designate election computers as “critical cyber infrastructure?”

This question betrays a fundamental misunderstanding of how computer security really works.  You as an individual buy your computers and operating systems from reputable vendors (Apple, Microsoft, IBM, Google/Samsung, HP, Dell, etc.).  Businesses and banks (and the Democratic National Committee, and the Republican National Committee) buy their computers and software from the same vendors.  Your security, and the security of all the businesses you deal with, is improved when these hardware and software vendors build products without security bugs in them.   Election administrators use computers that run Windows (or MacOS, or Linux) bought from the same vendors.

Parts of the U.S. government, particularly inside the NSA, have “cyberdefense” teams that analyze widely used software for security vulnerabilities.  The best thing they could do to enhance our security is notify the vendors immediately about vulnerabilities, so the vendors can fix the bugs (and learn their lessons).   Unfortunately, the NSA also has “cyberoffense” teams that like to save up these vulnerabilities, keep them secret, and use them as weak points to break into their adversaries’ computers.  They think they’re so smart that the Russkies, or the Chinese, will never be able to figure out the same vulnerabilities and use them to break into the computers of American businesses, individuals, the DNC or RNC, or American election administrators.  There’s even an acronym for this fallacy: NOBUS.  “NObody But US” will be able to figure out this attack.

Vulnerability lists accumulated by the NSA and DHS probably don’t include a lot of vote-counting software: those lists (probably) focus on widely used operating systems, office and word-processing software, network routers, phone apps, and so on.  But vote-counting software typically runs on widely used operating systems, uses PDF-handling software for ballot printing, and relies on network routers for vote aggregation.  Improvements in the security of these components would therefore improve election security.

So, the “cyberdefense” experts in the U.S. Government could improve everyone’s security, including election administrators, by promptly warning Microsoft, Apple, IBM, and so on about security bugs.  But their hands are often tied by the “cyberoffense” hackers who want to keep the bugs secret—and unfixed.  For years, independent cybersecurity experts have advocated that the NSA’s cyberdefense and cyberoffense teams be split up into two separate organizations, so that the offense hackers can’t deliberately keep us all insecure.   Unfortunately, in February 2016 the NSA did just the opposite: it merged its offense and defense teams together.

Some in the government talk as if “national cyberdefense” is some kind of “national guard” that they can send in to protect a selected set of computers.  But it doesn’t work that way.  Our computers are secure because of the software we purchase and install; we can choose vendors such as Apple, IBM, Microsoft, HP, or others based on their track record or based on their use of open-source software that we can inspect.  The DHS’s cybersecurity squad is not really part of that process, except insofar as they help the vendors improve the security of their products.  (See also:  “The vulnerabilities equities process.”)

Yes, it’s certainly helpful that the Secretary of Homeland Security has offered “assistance in helping state officials manage risks to voting systems in each state’s jurisdiction.”  But it’s too close to the election to be fiddling with the election software—election officials (understandably) don’t want to break anything.

But really we should ask: Should the FBI and the NSA be hacking us or defending us?  To defend us, they must stop hoarding secret vulnerabilities, and instead get those bugs fixed by the vendors.

Security against Election Hacking – Part 1: Software Independence

There’s been a lot of discussion of whether the November 2016 U.S. election can be hacked.  Should the U.S. Government designate all the states’ and counties’ election computers as “critical cyber infrastructure” and prioritize the “cyberdefense” of these systems?  Will it make any difference to activate those buzzwords with less than 3 months until the election?

First, let me explain what can and can’t be hacked.  Election administrators use computers in (at least) three ways:

  1. To maintain voter registration databases and to prepare the “pollbooks” used at every polling place to list who’s a registered voter (for that precinct); and to prepare the “ballot definitions” telling the voting machines which candidates are running in each race.
  2. Inside the voting machines themselves, the optical-scan counters or touch-screen machines that the voter interacts with directly.
  3. When the polls close, the vote totals from all the different precincts are gathered (this is called “canvassing”) and aggregated together to make statewide totals for each candidate (or district-wide totals for congressional candidates); a toy sketch of this aggregation step appears just after this list.
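To make step 3 concrete, here is a minimal sketch (ours, not any real election system’s code) of what the canvassing arithmetic amounts to: summing each candidate’s per-precinct totals into a single statewide total. The data structures, names, and numbers below are invented for illustration.

```typescript
// Hypothetical illustration of the canvassing step described above: each
// precinct reports totals per candidate, and the canvass sums them into
// statewide totals. The data and names are invented for this example.
type Totals = Record<string, number>; // candidate name -> vote count

function canvass(precinctReports: Totals[]): Totals {
  const statewide: Totals = {};
  for (const report of precinctReports) {
    for (const [candidate, votes] of Object.entries(report)) {
      statewide[candidate] = (statewide[candidate] ?? 0) + votes;
    }
  }
  return statewide;
}

// Example: three precincts reporting totals for two candidates.
const reports: Totals[] = [
  { Alice: 1203, Bob: 987 },
  { Alice: 452, Bob: 611 },
  { Alice: 889, Bob: 1045 },
];

console.log(canvass(reports)); // { Alice: 2544, Bob: 2643 }
```

The arithmetic itself is simple; the point is that because it is simple, it can be re-checked independently of the computers that first performed it.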

Any of these computers could be hacked.  What defenses do we have?  Could we seal off the internet so the Russians can’t hack us?  Clearly not; and anyway, maybe the hacker isn’t the Russians—what if it’s someone in your opponent’s political party?  What if it’s a rogue election administrator?

The best defenses are ways to audit the election and count the votes outside of, and independently of, the hackable computers.  For example,

[Read more…]

Can Facebook really make ads unblockable?

[This is a joint post with Grant Storey, a Princeton undergraduate who is working with me on a tool to help users understand Facebook’s targeted advertising.]

Facebook announced two days ago that it would make its ads indistinguishable from regular posts, and hence impossible to block. But within hours, the developers of Adblock Plus released an update which enabled the tool to continue blocking Facebook ads. The ball is now back in Facebook’s court. So far, all it’s done is issue a rather petulant statement. The burning question is this: can Facebook really make ads indistinguishable from content? Who ultimately has the upper hand in the ad blocking wars?

There are two reasons — one technical, one legal — why we don’t think Facebook will succeed in making its ads unblockable, if a user really wants to block them.

The technical reason is that the web is an open platform. When you visit facebook.com, Facebook’s server sends your browser the page content along with instructions on how to render it on the screen, but it is entirely up to your browser to follow those instructions. The browser ultimately acts on behalf of the user, and gives you — through extensions — an extraordinary degree of control over its behavior, and in particular, over what gets displayed on the screen. This is what enables the ecosystem of ad-blocking and tracker-blocking extensions to exist, along with extensions for customizing web pages in various other interesting ways.

Indeed, the change that Adblock Plus made in order to block the new, supposedly unblockable ads is just a single line in the tool’s default blocklist:

facebook.com##div[id^="substream_"] div[id^="hyperfeed_story_id_"][data-xt]

What’s happening here is that Facebook’s HTML code for ads has slight differences from the code for regular posts, so that Facebook can keep things straight for its own internal purposes. But because of the open nature of the web, Facebook is forced to expose these differences to the browser and to extensions such as Adblock Plus. The line of code above allows Adblock Plus to distinguish the two categories by exploiting those differences.
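For readers unfamiliar with the filter syntax: everything after `facebook.com##` is an ordinary CSS selector that Adblock Plus applies, on facebook.com only, to hide matching elements. Here is a rough TypeScript sketch of the equivalent behavior (the selector is copied verbatim from the rule above; the rest is our own illustration, not Adblock Plus’s actual code):

```typescript
// Rough sketch of what the element-hiding rule does. The CSS selector is
// copied verbatim from the Adblock Plus filter quoted above; everything
// else here is our own illustration, not Adblock Plus's actual code.
const AD_SELECTOR =
  'div[id^="substream_"] div[id^="hyperfeed_story_id_"][data-xt]';

function hideMatchedAds(): void {
  // Find every element the selector matches and hide it.
  document.querySelectorAll<HTMLElement>(AD_SELECTOR).forEach((el) => {
    el.style.display = "none";
  });
}

hideMatchedAds();
```

A real filter engine re-applies such rules as the page changes, since Facebook keeps inserting new posts into the DOM while you scroll.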

Facebook engineers could try harder to obfuscate the differences. For example, they could use non-human-readable element IDs to make it harder to figure out what’s going on, or even randomize the IDs on every page load. We’re surprised they’re not already doing this, given the grandiose announcement of the company’s intent to bypass ad blockers. But there’s a limit to what Facebook can do. Ultimately, Facebook’s human users have to be able to tell ads apart, because failure to clearly distinguish ads from regular posts would run headlong into the Federal Trade Commission’s rules against misleading advertising — rules that the commission enforces vigorously. [1, 2] And that’s the second reason why we think Facebook is barking up the wrong tree.

Facebook does allow human users to easily recognize ads: currently, ads say “Sponsored” and have a drop-down with various ad-related functions, including a link to the Ad Preferences page. And that means someone could create an ad-blocking tool that looks at exactly the information that a human user would look at. Such a tool would be mostly immune to Facebook’s attempts to make the HTML code of ads and non-ads indistinguishable. Again, the open nature of the web means that blocking tools will always have the ability to scan posts for text suggestive of ads, links to Ad Preferences pages, and other markers.

But don’t take our word for it: take our code for it instead. We’ve created a prototype tool that detects Facebook ads without relying on hidden HTML code to distinguish them. [Update: the source code is here.] The extension examines each post in the user’s news feed and marks those with the “Sponsored” link as ads. This is a simple proof of concept, but the detection method could easily be made much more robust without incurring a performance penalty. Since our tool is for demonstration purposes, it doesn’t block ads but instead marks them as shown in the image below.  
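To give a flavor of the approach (this is an illustrative sketch, not the prototype’s actual source; the selector and styling are hypothetical placeholders rather than Facebook’s real markup), a content script along these lines scans each feed post for the visible “Sponsored” label:

```typescript
// Illustrative sketch only: detect ads by the same cue a human uses, the
// visible "Sponsored" label, rather than by hidden HTML differences.
// POST_SELECTOR and the styling are hypothetical placeholders; this is not
// the prototype's real code and not Facebook's real markup.
const POST_SELECTOR = 'div[role="article"]'; // assumption: one element per feed post
const SPONSORED_LABEL = /\bSponsored\b/;

function markSponsoredPosts(): void {
  document.querySelectorAll<HTMLElement>(POST_SELECTOR).forEach((post) => {
    // A real tool would also check where the label sits and what it links to,
    // to avoid flagging ordinary posts that merely contain the word.
    if (SPONSORED_LABEL.test(post.innerText)) {
      post.style.outline = "3px solid red"; // mark rather than block
      post.dataset.detectedAd = "true";
    }
  });
}

// New posts arrive as the user scrolls, so re-check whenever the DOM changes.
new MutationObserver(markSponsoredPosts).observe(document.body, {
  childList: true,
  subtree: true,
});
markSponsoredPosts();
```

A production version would need a more robust notion of where a post begins and ends, and would have to tolerate Facebook renaming its markup; but the crucial point stands: whatever a human user can see, an extension can see too.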

All of this must be utterly obvious to the smart engineers at Facebook, so the whole “unblockable ads” PR push seems likely to be a big bluff. But why? One possibility is that it’s part of a plan to make ad blockers look like the bad guys. Hand in hand with this, the company seems to be making a good-faith effort to make ads more relevant and give users more control over them. Facebook also points out, correctly, that its ads don’t contain active code and aren’t delivered from third-party servers, and therefore aren’t as susceptible to malware.

Facebook does deserve kudos for trying to clean up and improve the ad experience. If there is any hope for a peaceful resolution to the ad blocking wars, it is that ads won’t be so annoying as to push people to install ad blockers, and will be actually useful at least some of the time. If anyone can pull this off, it is Facebook, with the depth of data it has about its users. But is Facebook’s move too little, too late? On most of the rest of the web, ads continue to be creepy malware-ridden performance hogs, which means people will continue to install ad blockers, and as long as it is technically feasible for ad blockers to block Facebook ads, they’re going to continue to do so. Let’s hope there’s a way out of this spiral.

[1] Obligatory disclaimer: we’re not lawyers.

[2] Facebook claims that Adblock Plus’s updates “don’t just block ads but also posts from friends and Pages”. What they’re most likely referring to is that Adblock Plus blocks ads that are triggered by one of your friends Liking the advertiser’s page. But these are still ads: somebody paid for them to appear in your feed. Facebook is trying to blur the distinction in its press statement, but it can’t do that in its user interface, because that is exactly what the FTC prohibits.