
Archives for June 2018

Against privacy defeatism: why browsers can still stop fingerprinting

In this post I’ll discuss how a landmark piece of privacy research was widely misinterpreted, how this misinterpretation deterred the development of privacy technologies rather than spurring it, how a recent paper set the record straight, and what we can learn from all this.

The research in question is about browser fingerprinting. Because of differences in operating systems, browser versions, fonts, plugins, and at least a dozen other factors, different users’ web browsers tend to look different. This can be exploited by websites and third-party trackers to create so-called fingerprints. These fingerprints are much more effective than cookies for tracking users across websites: they leave no trace on the device and cannot easily be reset by the user.
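
To make the mechanism concrete, here is a minimal sketch (in TypeScript, for a browser page) of the kind of script a tracker might run: it reads a handful of attributes that any page can access and hashes them into a single identifier. The attribute list is illustrative, not any particular tracker's code; real fingerprinting scripts combine many more signals.

```ts
// Minimal fingerprinting sketch: collect a few browser attributes that any
// page can read, then hash them into one identifier. Illustrative only.
async function sketchFingerprint(): Promise<string> {
  const attributes = [
    navigator.userAgent,                                      // browser + OS version
    navigator.language,                                       // preferred language
    `${screen.width}x${screen.height}x${screen.colorDepth}`,  // display configuration
    String(new Date().getTimezoneOffset()),                   // time zone
    String(navigator.hardwareConcurrency),                    // CPU core count
  ].join("|");

  // Hash the concatenated attributes with SubtleCrypto (SHA-256).
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(attributes)
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

sketchFingerprint().then((id) => console.log("fingerprint:", id));
```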

The question is simply this: how effective is browser fingerprinting? That is, how unique is the typical user’s device fingerprint? The answer has big implications for online privacy. But studying this question scientifically is hard: while there are many tracking companies that have enormous databases of fingerprints, they don’t share them with researchers.

The first large-scale experiment on fingerprinting, called Panopticlick, was done by the Electronic Frontier Foundation starting in 2009. Hundreds of thousands of volunteers visited panopticlick.eff.org and agreed to have their browser fingerprinted for research. What the EFF found was remarkable at the time: 83% of participants had a fingerprint that was unique in the sample. Among those with Flash or Java enabled, fingerprints were even more likely to be unique: 94%. A project by researchers at INRIA in France with an even larger sample found broadly similar results. Meanwhile, researchers, including us, found that an ever larger number of browser features — Canvas, Battery, Audio, and WebRTC — were being abused by tracking companies for fingerprinting.
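
Canvas fingerprinting is a good example of how an innocuous-looking feature becomes a tracking vector. The sketch below is a simplified illustration, not code from any real tracker: it renders text and shapes to an off-screen canvas and serializes the pixels, and small differences in fonts, anti-aliasing, and graphics stacks make the output differ across devices.

```ts
// Simplified canvas fingerprinting sketch: draw text and shapes to a hidden
// canvas and read the rendered pixels back. Differences in fonts, emoji
// glyphs, GPU, and rendering stack change the resulting image.
function canvasFingerprint(): string {
  const canvas = document.createElement("canvas");
  canvas.width = 200;
  canvas.height = 50;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "no-canvas";

  ctx.textBaseline = "top";
  ctx.font = "14px 'Arial'";
  ctx.fillStyle = "#f60";
  ctx.fillRect(100, 1, 62, 20);
  ctx.fillStyle = "#069";
  ctx.fillText("Browser fingerprint \u{1F50E}", 2, 15);

  // toDataURL() serializes the rendered pixels; trackers typically hash it.
  return canvas.toDataURL();
}

console.log(canvasFingerprint().slice(0, 64), "...");
```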

The conclusion was clear: fingerprinting is devastatingly effective. It would be futile for web browsers to try to limit fingerprintability by exposing less information to scripts: there were too many leaks to plug; too many fingerprinting vectors. The implications were profound. Browser vendors concluded that they wouldn’t be able to stop third-party tracking, and so privacy protection was left up to extensions. [1] These extensions didn’t aim to limit fingerprintability either. Instead, most of them worked in a convoluted way: by manually compiling block lists of thousands of third-party tracking scripts, constantly playing catch up as new players entered the tracking game.

But here’s the twist: a team at INRIA (including some of the same researchers responsible for the earlier study) managed to partner with a major French website and test the website’s visitors for fingerprintability. The findings were published a few months ago, and this time the results were quite different: only a third of users had unique fingerprints (compared to 83% and 94% earlier), despite the researchers’ use of a comprehensive set of 17 fingerprinting attributes. For mobile users the number was even lower: less than a fifth. There were two reasons for the differences: the new study had a larger sample, and self-selection of participants appears to have introduced a bias into the earlier studies. There’s more: since the web is evolving away from plugins such as Flash and Java, we should expect fingerprintability to drop even further. A close look at the paper’s findings suggests that even simple interventions by browsers to limit the highest-entropy attributes would greatly improve users’ ability to hide in the crowd.
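
A back-of-envelope calculation shows why trimming high-entropy attributes helps so much. The numbers below are made up for illustration (they are not the paper's measurements), and the sketch assumes attributes are independent, which overstates the combined entropy; the point is only that the size of the "crowd" shrinks exponentially in fingerprint entropy.

```ts
// Back-of-envelope illustration (made-up numbers, not the paper's data).
// With N users and H bits of fingerprint entropy, the average anonymity set
// has roughly N / 2^H users, so dropping a high-entropy attribute (e.g. a
// detailed canvas hash) grows the crowd exponentially.
function entropyBits(frequencies: number[]): number {
  const total = frequencies.reduce((a, b) => a + b, 0);
  return -frequencies.reduce((h, f) => {
    const p = f / total;
    return h + (p > 0 ? p * Math.log2(p) : 0);
  }, 0);
}

const users = 2_000_000;
// Hypothetical per-attribute entropies in bits (assumed independent, which
// overstates the total).
const attributeBits = {
  userAgent: 6.0,
  timezone: 3.0,
  screen: 4.5,
  canvasHash: 8.5, // a high-entropy attribute a browser could coarsen or remove
};

const allBits = Object.values(attributeBits).reduce((a, b) => a + b, 0);
const withoutCanvas = allBits - attributeBits.canvasHash;

console.log("avg anonymity set, all attributes:", users / 2 ** allBits);      // < 1: most users unique
console.log("avg anonymity set, canvas removed:", users / 2 ** withoutCanvas); // ~170 users per fingerprint
console.log("example entropy calc:", entropyBits([50, 30, 20]).toFixed(2), "bits");
```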

Apple recently announced that Safari would try to limit fingerprinting, and it’s likely that the recent paper influenced this decision. Notably, a minority of web privacy experts never subscribed to the view that fingerprinting protection is futile, and the W3C, the main web standards body, has long provided guidance for developers of new standards on how to minimize fingerprintability. It’s still not too late. But if we’d known in 2009 what we know today, browsers would have had a big head start in developing and deploying fingerprinting defenses.

Why did the misinterpretation happen in the first place? One easy lesson is that statistics is hard, and non-representative samples can thoroughly skew research conclusions. But there’s another pill that’s harder to swallow: the recent study was able to test users in the wild only because the researchers didn’t ask or notify the users. [2] With Internet experiments, there is a tension between traditional informed consent and validity of findings, and we need new ethical norms to resolve this.

Another lesson is that privacy defenses don’t need to be perfect. Many researchers and engineers think about privacy in all-or-nothing terms: a single mistake can be devastating, and if a defense won’t be perfect, we shouldn’t deploy it at all. That might make sense for some applications such as the Tor browser, but for everyday users of mainstream browsers, the threat model is death by a thousand cuts, and privacy defenses succeed by interfering with the operation of the surveillance economy.

Finally, the fingerprinting-defense-is-futile argument is an example of privacy defeatism. Faced with an onslaught of bad news about privacy, we tend to acquire a form of learned helplessness, and reach the simplistic conclusion that privacy is dying and there’s nothing we can do about it. But this position is not supported by historical evidence: instead, we find that there is a constant re-negotiation of the privacy equilibrium, and while there are always privacy-infringing developments, they are offset from time to time by legal, technological, and social defenses.

Browser fingerprinting remains on the front lines of the privacy battle today. The GDPR is making things harder for fingerprinters. It’s time for browser vendors to also get serious about cracking down on this sneaky practice.

Thanks to Günes Acar and Steve Englehardt for comments on a draft.

[1] One notable exception is the Tor browser, but it comes at a serious cost to performance and breakage of features on websites. Another is Brave, which has a self-selected userbase presumably willing to accept some breakage in exchange for privacy.

[2] The researchers limited their experiment to users who had previously consented to the site’s generic cookie notice; they did not specifically inform users about their study.

Fast Web-based Attacks to Discover and Control IoT Devices

By Gunes Acar, Danny Y. Huang, Frank Li, Arvind Narayanan, and Nick Feamster

Two web-based attacks against IoT devices made the rounds this week. Researchers Craig Young and Brannon Dorsey showed that a well-known attack technique called “DNS rebinding” can be used to control your smart thermostat, detect your home address, or extract unique identifiers from your IoT devices.

For this type of attack to work, a user needs to visit a web page that contains a malicious script and remain on the page while the attack proceeds. The attack simply fails if the user navigates away before it completes. According to the demo videos, each of these attacks takes longer than a minute to finish, assuming the attacker already knows the IP address of the targeted IoT device.
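
For readers unfamiliar with DNS rebinding, here is a conceptual sketch of the in-page half of such an attack; the hostname, port, and marker string are hypothetical. The attacker’s DNS server first answers with its own public IP using a short TTL, then switches the answer to a private address such as the thermostat’s, so the page’s continued same-origin requests start reaching the local device. This is why the victim has to stay on the page while the attack runs.

```ts
// Conceptual sketch of the in-page polling loop in a DNS rebinding attack.
// The hostname, port, and "attacker-server" marker are hypothetical. The page
// keeps requesting the SAME origin; once the attacker's DNS record flips to a
// private IP, these "same-origin" requests reach the local device instead.
async function pollUntilRebound(): Promise<void> {
  const origin = "http://rebind.attacker-example.com:8080"; // hypothetical
  for (let attempt = 0; attempt < 120; attempt++) {
    try {
      const res = await fetch(`${origin}/status`, { cache: "no-store" });
      const body = await res.text();
      // Heuristic: the attacker's own server marks its responses, so a
      // response without the marker means rebinding has taken effect.
      if (!body.includes("attacker-server")) {
        console.log("rebound: now talking to the local device");
        return;
      }
    } catch {
      // Ignore transient failures while the DNS answer is changing.
    }
    await new Promise((r) => setTimeout(r, 1000)); // wait and retry
  }
}
```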

According to a study by Chartbeat, however, 55% of typical web users spend fewer than 15 seconds actively on a page. Does this mean that most web users are immune to these attacks?

In a paper to be presented at ACM SIGCOMM 2018 Workshop on IoT Security and Privacy, we developed a much faster version of this attack that takes only around ten seconds to discover and attack local IoT devices. Furthermore, our version assumes that the attacker has no prior knowledge of the targeted IoT device’s IP address. Check out our demo video below.
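
As a generic illustration of how an in-page script can enumerate candidate local addresses without prior knowledge of the device’s IP, consider probing a private address range and treating quick connection errors differently from slow timeouts. This is a hedged sketch under those assumptions, not necessarily the technique used in the paper.

```ts
// Hedged sketch of generic local-network probing from a web page (not
// necessarily the paper's technique): fire requests at private addresses and
// treat a quick error or an answer as a sign of a live host, and a timeout as
// a sign that nothing is there.
async function probe(ip: string, timeoutMs = 300): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // 'no-cors' lets the request be sent even though the response is opaque.
    await fetch(`http://${ip}/`, { mode: "no-cors", signal: controller.signal });
    return true; // something at this address answered with an HTTP response
  } catch (err) {
    // An abort means no timely answer; other quick errors (e.g. a refused
    // connection) still suggest a live host at that address.
    return !(err instanceof DOMException && err.name === "AbortError");
  } finally {
    clearTimeout(timer);
  }
}

async function scanSubnet(prefix = "192.168.1"): Promise<string[]> {
  const candidates = Array.from({ length: 254 }, (_, i) => `${prefix}.${i + 1}`);
  const results = await Promise.all(candidates.map((ip) => probe(ip)));
  return candidates.filter((_, i) => results[i]);
}

scanSubnet().then((live) => console.log("candidate devices:", live));
```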


Exfiltrating data from the browser using battery discharge information

Modern batteries are powerful – indeed, they are smart, and their privileged position lets them sense device utilization patterns. A recent research paper identifies a potential threat: the researchers (from the Technion, the University of Texas at Austin, and the Hebrew University) devise a scenario in which malicious batteries are supplied to user devices (e.g. via compromised supply chains):

An attacker with brief physical access to the mobile device – at the supply chain, repair shops, workplaces, etc. – can replace the device’s battery. This is an example of an interdiction attack. Interdiction attacks using malicious VGA cables have been used to snoop on targeted users. Malicious battery introduces a new attack vector into a mobile device.

Poisoned batteries are thus capable of monitoring the victim’s system use, leading to the recovery of sensitive user information such as visited websites (with around 65% precision, better than a random guess), typed characters (again with accuracy better than a random guess), when a camera shot is taken, and incoming calls. Recovering this sensitive user data is an example of power analysis, exploiting a side-channel information leak. Finally, the battery is also used to exfiltrate information to the attackers.

The whole attack is rather technically complex, and it is open to debate how practical it would be for real-world attackers at this moment. But it is nonetheless very interesting, as it highlights how complex our computing devices really are, and how much we inherently need to trust the components of our devices.

I’d like to call special attention to the exfiltration channel using the battery. It is a very interesting covert channel.

The W3C Battery Status API, implemented by major web browsers, notably Chrome, allows websites to query the current battery level as well as the rate of charge/discharge. The paper describes how the Battery Status API can be exploited to remotely exfiltrate the acquired data. All the victim has to do is visit a sink website that reads the data. Malicious batteries can detect when the browser visits this special website and switch into exfiltration mode.
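
For concreteness, here is roughly what a page can observe through this API where it is supported. The type declarations below are included only because the API is not part of the standard TypeScript DOM typings; the calls themselves (navigator.getBattery() and the chargingchange event) are the real API surface.

```ts
// What a sink page can see through the Battery Status API, where supported.
interface BatteryManagerLike extends EventTarget {
  charging: boolean;        // true while the battery reports "charging"
  level: number;            // 0.0 - 1.0
  chargingTime: number;     // seconds, or Infinity
  dischargingTime: number;  // seconds, or Infinity
}

async function watchBattery(): Promise<void> {
  const nav = navigator as Navigator & {
    getBattery?: () => Promise<BatteryManagerLike>;
  };
  if (!nav.getBattery) {
    console.log("Battery Status API not available in this browser");
    return;
  }
  const battery = await nav.getBattery();
  console.log("charging:", battery.charging, "level:", battery.level);

  // The exfiltration channel described in the paper hinges on these events.
  battery.addEventListener("chargingchange", () => {
    console.log("charging state changed:", battery.charging, Date.now());
  });
}

watchBattery();
```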

And how is the exfiltration done? It works by manipulating the “charging” state – the 0/1 signal that tells a website whether the battery is charging or discharging. But how does one induce a steady stream of charging-state changes in a way that encodes information? The technique employed is very interesting: it uses wireless charging, i.e. a resonant inductive charger built into the battery chip; all that is needed is a charging coil placed close to the battery hardware.

Does this sound complicated? It need not be, since we already assume that the attacker is able to deliver a malicious battery in the first place. Then all the user has to do is visit a website that reads the information using the standard W3C Battery Status API, where the web browser supports it (e.g. Chrome is vulnerable, but Firefox is immune). In principle, everything happens without any interaction with the operating system – the attack is oblivious to the OS.
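
To illustrate the receiving end, here is a toy decoder for the sink page. The paper’s actual encoding is more involved; this sketch simply assumes the malicious battery holds the charging state high or low for a fixed symbol period to signal one bit at a time, and the symbol period used below is an assumption.

```ts
// Toy on/off-keying decoder for the sink page (illustration only; the paper's
// encoding is more involved). Assume the malicious battery signals one bit per
// fixed symbol period by holding the "charging" state high (1) or low (0);
// the page samples that state on a timer.
const SYMBOL_PERIOD_MS = 5000; // assumed; must exceed the ~3.9 s detection lag

function decodeBits(
  battery: { charging: boolean }, // e.g. the object returned by navigator.getBattery()
  bitCount: number
): Promise<number[]> {
  return new Promise((resolve) => {
    const bits: number[] = [];
    const timer = setInterval(() => {
      bits.push(battery.charging ? 1 : 0); // sample the current state as one bit
      if (bits.length === bitCount) {
        clearInterval(timer);
        resolve(bits);
      }
    }, SYMBOL_PERIOD_MS);
  });
}

// Usage sketch: decodeBits(batteryManager, 16).then((bits) => console.log(bits));
```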

There is also this interesting observation:

Since the browser does not seem to limit the update rate, the state change depends entirely on the phone’s software and the charging pad state transition rate. We find that the time to detect the transition from not charging to charging (Tdc) is 3.9 seconds.

This gives the attacker a covert channel with a bandwidth of 0.1-0.5 bits/second. Of course, there is no reason for browsers to allow frequent switches between charging and discharging events. So the Privacy by Design approach here would be to cap the rate at which such switches are reported to pages.
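
As an illustration of what such a cap might look like, here is a sketch from the browser’s point of view. The function and variable names are hypothetical; a real fix would live inside the browser’s Battery Status API implementation.

```ts
// Sketch of the suggested mitigation: instead of forwarding every hardware
// charging-state flip to web content, report the state at most once per fixed
// interval. Names here (onHardwareChargingChange, reportToPage) are
// hypothetical stand-ins for the browser's internal plumbing.
const REPORT_INTERVAL_MS = 60_000; // e.g. at most one update per minute

let latestHardwareState = false;   // updated freely as the battery reports changes
let lastStateShownToPage = false;  // what web content last saw

function onHardwareChargingChange(charging: boolean): void {
  latestHardwareState = charging; // record, but do not notify the page yet
}

setInterval(() => {
  // Only fire the page-visible 'chargingchange' when the coarsened state
  // actually differs from what the page last saw.
  if (latestHardwareState !== lastStateShownToPage) {
    lastStateShownToPage = latestHardwareState;
    reportToPage(latestHardwareState);
  }
}, REPORT_INTERVAL_MS);

function reportToPage(charging: boolean): void {
  console.log("page sees chargingchange:", charging); // stand-in for the real event
}
```

With a one-minute cap like this, the channel drops from roughly 0.1-0.5 bits/second to well under 0.02 bits/second, without affecting legitimate uses of the API.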

The attack may seem like a stretch (it requires physically replacing the battery, or poisoning hardware at the factory), and at this moment one can imagine multiple simpler attack methods. Nonetheless, it is an important study. Is the sky falling? No. Is the work significant? Yes.

For more information, please see the paper here.

For more information about the privacy by design case study of the Battery Status API, see here.