August 20, 2019

Should we regulate the makers or users of insecure IoTs?

By Matheus V. X. Ferreira, Danny Yuxing Huang, Tithi Chattopadhyay, Nick Feamster, and S. Matthew Weinberg

Recent years have seen a proliferation of “smart-home” or IoT devices, many of which are known to contain security vulnerabilities that have been exploited to launch high-profile attacks and disrupt Internet-wide services such as Twitter and Reddit.

The sellers (e.g., manufacturers) and buyers (e.g., consumers) of such devices could improve their security, but there is little incentive to do so. For the sellers, implementing security features on IoT devices, such as using encryption or having no default passwords, could introduce extra engineering cost. Similarly, security practices, such as regularly updating the firmware or using complex and difficult-to-remember passwords, may be a costly endeavor for many buyers.

As a result, sellers’ and buyers’ security practices are less than optimal, which increases vulnerabilities that ultimately affect other buyers. In other words, their actions impose a negative externality on other buyers. This scenario, in which individuals acting in their own self-interest harm the common good, is known as the tragedy of the commons.

One approach to incentivize agents to adopt optimal practices is through external regulations. In this blog post, we discuss two potential approaches that a regulator may adopt in the case of IoT security:

  • Regulating the seller – requiring sellers of IoT devices to meet minimum security standards;
  • Regulating the buyer – encouraging buyers of IoT devices to adopt security practices through rewards (e.g., ISP discounts for buyers without signs of malicious network traffic) or penalties (e.g., fines for buyers whose devices have engaged in DDoS attacks).

The goal of this hypothetical regulator is to minimize the negative externality due to compromised devices while maximizing the profitability of device manufacturers. We show that, in some cases, when buyers are rewarded for security practices (or penalized for the lack thereof), sellers can earn higher profits by implementing extra security features on their devices.

Challenges in regulation

The hypothetical regulator’s ability to achieve the goal of minimizing negative externality depends on whether buyers can secure their devices more efficiently than sellers. 

If, for instance, buyers regularly update their devices’ firmware or set strong passwords, then regulating the sellers alone can be costly — i.e., inefficient. On the other hand, rewarding buyers for security practices (or penalizing them for the lack thereof) can still be inefficient if there is little buyers can do to improve security, or if they cannot distinguish good vs bad security practices. 

These challenges lead us to explore how the efficiency of buyers at improving their own security affects regulatory effectiveness.

Modeling the efficiency of buyers’ security practices

We use a stochastic model to capture the uncertainty in how efficient a buyer’s security practices are when incentivized through regulation. A buyer has low efficiency when additional effort towards security reduces security risks less than the same effort would if it came from the seller. Conversely, a buyer has high efficiency when additional effort towards security translates into large improvements in security.

As an example, consider the buyer’s efficiency in a system where users (i.e., buyers) log into a website using passwords. Let’s first make two assumptions:

  1. We first assume that the website itself is secure. The probability of a user’s account being compromised then depends on, for instance, how strong the password is: a weak or reused password is likely correlated with a high chance of the account being stolen, while a strong, random password is correlated with the opposite. In this case, we say that the users/buyers are highly efficient at providing security relative to the website operator (i.e., the seller), so efficiency > 1. Figure 1-a shows an example of the distribution of the buyers’ efficiency.
  2. We next assume that the website is not secure (e.g., it runs outdated server software). The probability of a buyer’s account being compromised then depends less on password strength and more on how insecure the website’s server is; in this case, efficiency < 1. Figure 1-b shows an example of the distribution of the buyers’ efficiency.

In reality, Assumptions (1) and (2) rarely hold in isolation; rather, they coexist to varying degrees. Figure 1-c shows an example of such a mixture, and the sketch after Figure 1 illustrates all three cases.

Figure 1
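
To make the idea of an efficiency distribution concrete, the following minimal Python sketch draws buyer efficiencies for the three scenarios above. The specific uniform ranges and the 50/50 mixture weight are illustrative assumptions, not the distributions actually shown in Figure 1 or used in the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000  # number of simulated buyers

    # Illustrative efficiency distributions (assumed for this sketch):
    # Figure 1-a: buyers are mostly more efficient than the seller (efficiency > 1).
    eff_a = rng.uniform(1.0, 3.0, N)
    # Figure 1-b: buyers are mostly less efficient than the seller (efficiency < 1).
    eff_b = rng.uniform(0.0, 1.0, N)
    # Figure 1-c: a 50/50 mixture of the two populations.
    eff_c = np.where(rng.random(N) < 0.5,
                     rng.uniform(1.0, 3.0, N),
                     rng.uniform(0.0, 1.0, N))

    for name, eff in [("1-a", eff_a), ("1-b", eff_b), ("1-c", eff_c)]:
        print(f"Figure {name}: mean efficiency = {eff.mean():.2f}, "
              f"share with efficiency > 1 = {np.mean(eff > 1.0):.0%}")

Intuitively, the share of buyers with efficiency above or below 1 indicates which side of the market can improve security more cheaply, which is exactly what a regulator would want to know.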

The same model can be used to study scenarios where the actions of different agents impose externalities on common goods such as clean air.

Regulatory policies in a market of polluting cars often focus on regulating the production of vehicles (i.e., sellers). Once a car is purchased, there is little the buyer can do to lower pollution besides regular maintenance of the vehicle. In this case, the buyer’s efficiency would resemble Figure 1-b.

At the other extreme, when governments (i.e., sellers) auction oil exploration licenses, firms (i.e., buyers) are regulated and fined for potential environmental impacts. Compared with the government, the firms are in a better position to adopt good practices and control the environmental impact of their activities. The firms’ efficiency would resemble Figure 1-a.

Regulatory Impact on Manufacturer Profit

Another consideration for any regulator is the impact these regulations have on the profitability of a manufacturer.

Any regulation will affect the sellers’ profit either directly (through higher production costs) or indirectly. By creating economic incentives for buyers to adopt better practices through fines (or taxes), we indirectly affect the price a buyer is willing to pay for a product.

In Figure 2, we plot the maximum profit a seller can earn in expectation from a population of buyers in which:

  • each buyer’s value for the IoT device is drawn uniformly between $0 and $20, and efficiency is drawn uniformly from [0,1], [0,3], or [2,3];
  • a regulator imposes on buyers a fine ranging from $0 to $10 and/or imposes on sellers a minimum production cost ranging from $0 to $5 (e.g., for investing in extra security/safety features); a simplified sketch of this setup follows the list.
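
As a rough illustration of this setup (and not the exact model analyzed in the paper), the following Python sketch estimates the seller’s maximum expected profit under an assumed toy model: the probability of compromise falls with both the mandated security investment and the buyer’s efficiency, a buyer purchases only if their value covers the price plus the expected fine, and the seller picks the profit-maximizing price. All functional forms and parameters here are assumptions made for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    N = 200_000  # simulated buyers

    def max_expected_profit(fine, min_cost, eff_lo, eff_hi,
                            price_grid=np.linspace(0.0, 20.0, 201)):
        """Seller's maximum expected profit per buyer under a toy model.

        Assumptions (illustrative, not the paper's model):
          * buyer value v ~ U(0, 20); buyer efficiency e ~ U(eff_lo, eff_hi)
          * probability of compromise p = 1 / (1 + min_cost + e), falling with
            both the seller's mandated investment and the buyer's efficiency
          * a buyer purchases iff v >= price + fine * p, i.e., value must cover
            the price plus the expected fine; profit per sale is price - min_cost
        """
        v = rng.uniform(0.0, 20.0, N)
        e = rng.uniform(eff_lo, eff_hi, N)
        p_compromise = 1.0 / (1.0 + min_cost + e)
        profits = [(price - min_cost) * np.mean(v >= price + fine * p_compromise)
                   for price in price_grid]
        return max(profits)

    # Low-efficiency buyers (cf. Figure 2-a), with and without seller regulation.
    for min_cost in (0.0, 5.0):
        print(min_cost, round(max_expected_profit(fine=10.0, min_cost=min_cost,
                                                  eff_lo=0.0, eff_hi=1.0), 2))

Under these assumed functional forms, mandating a higher security investment lowers the expected fine a buyer anticipates and therefore raises the price they are willing to pay; whether that gain outweighs the extra production cost depends on the buyers’ efficiency, mirroring the qualitative effect discussed below.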

If buyers have low efficiency (Figure 2-a) and are liable for the externalities caused by their devices, regulating the sellers can, in fact, increase the manufacturer’s profit, since the regulation reduces the chance that buyers are compromised. As buyers become more efficient (Figure 2-b and then 2-c), regulating the sellers can only lower profit, since buyers prefer to provide security themselves.

Figure 2

Selecting the Best Regulation

To select the optimal regulatory strategy subject to a constraint on sellers’ profit, we must understand the distribution of buyers’ efficiency.

We show that in homogeneous markets, where buyers’ ability to follow security practices is always high or always low (Figures 1-a and 1-b), the optimal regulatory policy is to regulate only the buyers or only the sellers, respectively.

In arbitrary markets, where buyers’ ability to follow security practices can have high variance (Figure 1-c), we show that while the optimal policy may require regulating both buyers and sellers, there is always an approximately optimal policy that regulates just one. In other words, although the most efficient regulation might intervene on both buyers and sellers, a policy that only creates incentives for buyers, or only regulates sellers, can approximate it.

In practice, it is challenging to infer all the features that affect buyers’ efficiency, that is, to precisely measure efficiency distributions such as those in Figures 1-a to 1-c. Our theoretical results give security researchers a tool for deriving an approximately optimal regulation from an inaccurate model of the efficiency distribution.

If we estimate that most buyers in a market are highly efficient, we show that regulating only the buyers is approximately optimal. Conversely, if we estimate that most buyers are highly inefficient, regulating only the sellers approximates the optimal regulation.

At the end of the day, a better understanding of the efficiency of buyers’ security practices will put us in a better position to choose regulatory strategies for information technology markets, such as the market for IoT devices, without resorting to complex regulation.

For more details, see the full paper, which was presented at The Web Conference 2019: https://arxiv.org/abs/1902.10008.

Is This An Ad? Help Us Identify Misleading Content On YouTube

by Michael Swart, Arunesh Mathur, and Marshini Chetty

Ever watched a video on YouTube and wondered if the YouTuber was paid for endorsing a product? You are not alone. In fact, Senator Blumenthal of Connecticut recently called for the Federal Trade Commission (FTC) to look into deceptive practices where YouTubers do not disclose that they are being paid to market detoxifying teas. Under current regulations, anytime a social media influencer is paid by a company to endorse its product, the FTC requires that the influencer explicitly disclose the partnership to their followers. In practice, however, influencers often fail to include such a disclosure. As we describe in a previous post, only about 1 out of every 10 YouTube videos that contain a type of endorsement called affiliate marketing (usually including marketing links to products in the video description) actually discloses that a relationship exists between the content creator and a brand. This is problematic because, in videos without disclosures, users do not know that the influencer’s endorsement of the product is inauthentic and that the influencer was incentivized to give a positive review.

To address this issue, we built a Google Chrome Extension called AdIntuition that combats these deceptive marketing practices. The extension automatically detects and discloses whether a YouTube video contains affiliate marketing links in the video description. Our goal is to help inform users of a relationship between an influencer and a brand on YouTube.

What can you do to help?
In order to further improve the extension, we need data on how users make use of it in their everyday lives. You can help us achieve this goal by downloading the extension here and reading about our study here. We have a version for Firefox and Chrome. Then, as you watch YouTube videos, you will be notified of affiliate marketing content. For research purposes, such as improving the tool’s design and our detection algorithms and determining the best way to help people identify ads online, the tool will collect data about how often you encounter affiliate marketing content. (Full details on data collection here.) This will help us further our understanding of how to create tools that keep users informed online! You could also consider participating in a more in-depth study – details here.

How we built AdIntuition:
Building on our previous work, we look for affiliate marketing links at any level of the redirect chain that may be present in a YouTube video description. We also highlight Urchin Tracking Module (UTM) parameters, which are correlated with tracking links. Finally, we built a classifier that identifies coupon codes in YouTube descriptions, which are used to track users in online shops. A simplified sketch of the link-detection step appears below.
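
As a rough illustration of this kind of detection (not AdIntuition’s actual code, which runs as a browser extension), the following Python sketch flags URLs in a video description that carry UTM parameters or that match a small list of affiliate-style domains and query parameters. The domain and parameter lists are assumed examples, not the extension’s real rule set, and the sketch does not follow redirect chains.

    import re
    from urllib.parse import urlparse, parse_qs

    # Assumed examples only; the real extension uses different, larger rule sets.
    AFFILIATE_DOMAINS = {"amzn.to", "bit.ly", "rstyle.me", "shareasale.com"}
    AFFILIATE_PARAMS = {"tag", "ref", "aff_id", "afftrack"}
    URL_RE = re.compile(r"https?://\S+")

    def flag_links(description: str) -> list[dict]:
        """Report, for each URL in a description, the signals it exhibits."""
        report = []
        for url in URL_RE.findall(description):
            parsed = urlparse(url)
            params = parse_qs(parsed.query)
            host = parsed.netloc.lower().removeprefix("www.")
            report.append({
                "url": url,
                "has_utm": any(p.startswith("utm_") for p in params),
                "affiliate_domain": host in AFFILIATE_DOMAINS,
                "affiliate_param": bool(AFFILIATE_PARAMS & params.keys()),
            })
        return report

    description = "Get 10% off! https://example-shop.com/item?utm_source=yt&tag=creator-20"
    for entry in flag_links(description):
        print(entry)

Since AdIntuition also inspects redirect chains, in practice checks like these would be applied to every URL along the chain (for example, after resolving a link shortener), not just the link that appears in the description.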

Conference on Social Protection by Artificial Intelligence: Decoding Human Rights in a Digital Age

Christiaan van Veen[1] and Ben Zevenbergen[2]

Governments around the world are increasingly using Artificial Intelligence and other digital technologies to streamline and transform their social protection and welfare systems. This move is usually presented as a way to improve the system and to assist individuals in a more targeted and efficient manner. But because social protection budgets represent such a significant part of State expenditure in most countries, and because austerity and tax cuts continue to drive policy, the driving force is usually the prospect of major budgetary savings and a greatly slimmed-down system of benefits. It is becoming increasingly apparent, however, that the impact of these new technologies on the nature of social protection systems themselves, and on the lives of the many individuals who rely upon them, can be far-reaching and very often problematic. There are many examples of systems that are being challenged, ranging from the disastrous ‘robo-debt’ saga in Australia to the litigation and protest against the massive biometric identification system, Aadhaar, in India. Yet the push for digital innovation in this area of government is certain to continue.

These developments have significant implications for the human rights of roughly half of the world’s population who are covered by social protection measures, as well as those who are not yet covered. Social protection itself is a human right[3] with a long and rich history, dating back to the creation of the International Labour Organization by the 1919 Treaty of Versailles. The introduction of digital technologies in social protection systems risks creating barriers to access to this right, although one can also imagine ways in which technology can facilitate access to social protection. A range of other human rights are implicated with the introduction of these new technologies in social protection systems, ranging from the right to a remedy to the right to privacy.

Despite the significant risks and opportunities involved with the introduction of digital technologies, there has been only limited research and analysis undertaken to better understand the implications for the protection of human rights, especially in the area of social protection/welfare. The poorest and most vulnerable individuals, both in the Global North and Global South, are inevitably the ones who will be most affected by these developments.

To highlight these issues, the Center for Information Technology Policy and the United Nations Special Rapporteur on extreme poverty and human rights organized a conference at Princeton University on April 12, 2019. The conference brought together leading experts from academia, NGOs, international organizations and the private sector to further explore the implications of digital technologies in social protection systems. The conference was also part of a consultation for a report that the UN Special Rapporteur is preparing and will present to the United Nations General Assembly in October of this year.

Below, a few of the experts who spoke at the conference present some of their key issues and concerns when it comes to the human rights implications of digital technologies in welfare systems.

Cary Coglianese, Edward B. Shils Professor of Law at the University of Pennsylvania Law School

Government has an important responsibility to help provide social services and financial support to those in need. Let us imagine a future where, seeking to fulfill this responsibility, government develops a sophisticated system to help it identify those applicants who qualify for support. But imagine further that, in the end, this identification system turns out to award benefits arbitrarily and to prefer white applicants over applicants of color. Such a system would be properly condemned as unfair. And this is exactly what worries critics who oppose the use of artificial intelligence in administering social programs.

Yet the future imagined above actually appears to have arrived long ago. By many accounts, the scenario I have painted describes the system already in place in the United States and presumably other countries. It is just that the “technology” underlying the current identification system is not artificial intelligence but human decision-making. The U.S. Social Security Administration’s (SSA) disability system, for example, relies on more than a thousand human adjudicators. Although most of these officials are no doubt well-trained and dedicated, they also work under heavy caseloads. And for decades, studies have suggested that racial disparities exist in SSA disability awards, with certain African-American applicants tending to receive less favorable outcomes compared with white applicants.

Any system that relies on thousands of human decision-makers working at high capacity will surely yield variable outcomes. A 2011 report issued by independent researchers offers a stark illustration of the potential for variability across humans: among the fifteen most active administrative judges in a Dallas SSA office, “the judge grant rates in this single location ranged … from less than 10 percent being granted to over 90 percent.” The researchers reported that three judges in this office awarded benefits to no more than 30 percent of their applicants, while three judges awarded to more than 70 percent.

In light of reasonable concerns about arbitrariness and bias in human decisions, the relevant question to ask about artificial intelligence is not whether it will be free of any bias or unexplainable variation. Rather, the question should be whether artificial intelligence can perform better than the current human-based system. Anyone concerned about fairness in government decision-making should entertain the possibility that digital algorithms might sometimes prove to be fairer and more consistent than humans. At the very least, it might turn out to be easier to remedy biased algorithms than to remove deeply ingrained implicit biases from human decision-making.

Jonathan McCully and Nani Jansen Reventlow, Digital Freedom Fund

International law obliges states to provide an effective remedy to victims of human rights violations, but how can this obligation be met in the age of AI? At the conference, a number of points were raised in relation to this question.

For systems of redress or reparation to work, there needs to be a traceable line of responsibility. This is muddied in the AI context as public and private entities claim that certain decisions are reached by machine learning algorithms that lack human intervention. Human rights are devoid of content if victims cannot hold a natural or legal person to account for decisions violating their rights. Therefore, liability regimes should not allow individuals, private entities or public authorities to hide behind their AI. 

For individuals to effectively pursue remedies for AI-related human rights violations, there needs to be an equality of arms. This is also made difficult by AI, where the “allure of objectivity” presented by algorithms can mean that victims are held to a higher standard of evidence compared to those deploying an algorithm. This needs to be corrected.

Finally, like surveillance, AI-related human rights violations can often be hidden from victims. Those who have been subject to an AI-based decision do not necessarily know about it and, even before a decision has been reached against an individual, the models generating these decisions are often trained on datasets that have been processed without the knowledge or consent of those to whom the data relates. Transparency is, therefore, vital to an individual’s ability to pursue remedies in the AI context.  

Jennifer Raso, Assistant Professor, University of Alberta Faculty of Law

Current discussions about algorithmic systems and social protection tend to overlook two key issues. First, the “new” technologies in today’s welfare programs are evolutionary rather than revolutionary. For decades, social assistance offices have been the first sites in which governments introduce new tools to streamline bureaucratic decisions in a context of perpetuated (and seemingly perpetual) resource scarcity. These tools (new and old) are laborious for all who interact with them. They regularly malfunction and require intrusive data about benefits recipients. Such tools perform a dual deterrence: they discourage people from seeking state-funded assistance; and they prevent front-line workers from providing vulnerable individuals access to last-resort assistance.

Second, by centring our debates on privacy and transparency, we fail to address all that is at stake. Focusing on data protection ignores that data intensity is a long-standing feature of social assistance programs. What does privacy mean to someone who must report intimate personal details to remain eligible for welfare benefits? Likewise, transparency conversations overlook the importance of substantive outcomes. How would a transparent decision-making process address the fact that, in many places, welfare rates fall far short of covering one’s basic needs? Instead, we should be attending to the needs and interests of people who require assistance.

Going forward, we must centre the experiences of those most deeply affected by algorithmic systems. To fully comprehend the impact of these tools in social protection programs, and their potential human rights implications, it is crucial that we attend to the people and communities most targeted by algorithmic systems, and to the front-line workers responsible for maintaining and working with these tools.

Please find here the video of the opening and first panel of the conference, and here the video of the second panel.


[1] Director of the Digital Welfare State and Human Rights Project at the Center for Human Rights and Global Justice at New York University School of Law and Special Advisor on new technologies and human rights to the UN Special Rapporteur on extreme poverty and human rights: https://chrgj.org/people/christiaan-van-veen/
[2] Professional Specialist at CITP, Princeton University.
[3] See, e.g., article 9 of the International Covenant on Economic, Social and Cultural Rights, ratified by 169 States.