July 18, 2018

Internet of Things in Context: Discovering Privacy Norms with Scalable Surveys

by Noah Apthorpe, Yan Shvartzshnaider, Arunesh Mathur, Nick Feamster

Privacy concerns surrounding disruptive technologies such as the Internet of Things (and, in particular, connected smart home devices) are prominent in public discourse, and privacy violations involving these devices occur frequently. As these new technologies challenge existing societal norms, determining the bounds of “acceptable” information handling practices requires rigorous study of user privacy expectations and normative opinions about information transfer.

To better understand user attitudes and societal norms concerning data collection, we have developed a scalable survey method for empirically studying privacy in context. This survey method uses (1) a formal theory of privacy called contextual integrity and (2) combinatorial testing at scale to discover privacy norms. In our work, we have applied the method to better understand norms concerning data collection in smart homes. The general method, however, can be adapted to arbitrary contexts with varying actors, information types, and communication conditions, paving the way for future studies informing the design of emerging technologies. The technique can provide meaningful insights about privacy norms for manufacturers, regulators, researchers, and other stakeholders. Our paper describing this research appears in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

Scalable CI Survey Method

Contextual integrity. The survey method applies the theory of contextual integrity (CI), which frames privacy in terms of the appropriateness of information flows in defined contexts. CI offers a framework to describe flows of information (attributes) about a subject from a sender to a receiver, under specific conditions (transmission principles).  Changing any of these parameters of an information flow could result in a violation of privacy.  For example, a flow of information about your web searches from your browser to Google may be appropriate, while the same information flowing from your browser to your ISP might be inappropriate.
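
To make the structure concrete, a CI information flow can be written as a small five-field record. The Python sketch below is illustrative only; the field names and example values are not drawn from the actual survey instrument:

    # Illustrative only: a CI information flow as a five-parameter record.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class InformationFlow:
        subject: str                 # whom the information is about
        sender: str                  # who transmits the information
        attribute: str               # what type of information flows
        recipient: str               # who receives the information
        transmission_principle: str  # the condition under which it flows

    # Two flows that differ only in the recipient: the first may conform to
    # user privacy norms, while the second may violate them.
    search_to_google = InformationFlow(
        subject="the user", sender="the user's browser",
        attribute="web searches", recipient="Google",
        transmission_principle="null (no specific condition)")
    search_to_isp = InformationFlow(
        subject="the user", sender="the user's browser",
        attribute="web searches", recipient="the user's ISP",
        transmission_principle="null (no specific condition)")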

Combinatorial construction of CI information flows. The survey method discovers privacy norms by asking users about the acceptability of a large number of information flows that we automatically construct using the CI framework. Because the CI framework effectively defines an information flow as a tuple (attributes, subject, sender, receiver, and transmission principle), we can automate the process of constructing information flows by defining a range of parameter values for each tuple and generating a large number of flows from combinations of parameter values.
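
As a rough sketch of this generation step, the following Python snippet enumerates flows from a few assumed parameter lists. These lists are hypothetical stand-ins; the actual survey used far larger lists and excluded less intuitive combinations:

    # Minimal sketch of combinatorial flow generation; the parameter values
    # below are hypothetical, not the exact lists used in the survey.
    from itertools import product

    senders = ["a sleep monitor", "a smart thermostat"]
    attributes = ["its owner's location", "its owner's usage patterns"]
    recipients = ["its manufacturer", "its owner's ISP"]
    principles = ["if its owner has given consent", "for advertising", None]  # None = control

    flows = [
        {"sender": s, "attribute": a, "recipient": r, "transmission_principle": p}
        for s, a, r, p in product(senders, attributes, recipients, principles)
    ]
    print(len(flows))  # 2 * 2 * 2 * 3 = 24 candidate survey questions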

Applying the Survey Method to Discover Smart Home Privacy Norms

We applied the survey method to 3,840 IoT-specific information flows involving a range of device types (e.g., thermostats, sleep monitors), information types (e.g., location, usage patterns), recipients (e.g., device manufacturers, ISPs) and transmission principles (e.g., for advertising, with consent). 1,731 Amazon Mechanical Turk workers rated the acceptability of these information flows on a 5-point scale from “completely unacceptable” to “completely acceptable”.

Trends in acceptability ratings across information flows indicate which context parameters are particularly relevant to privacy norms. For example, the following heatmap shows the average acceptability ratings of all information flows with pairwise combinations of recipients and transmission principles.

Average acceptability scores of information flows with given recipient/transmission principle pairs. For example, the top left box shows the average acceptability score of all information flows with the recipient “its owner’s immediate family” and the transmission principle “if its owner has given consent.” Higher (more blue) scores indicate that flows with the corresponding parameters are more acceptable, while lower (more red) scores indicate that the flows are less acceptable. Flows with the null transmission principle are controls with no specific condition on their occurrence. Empty locations correspond to less intuitive information flows that were excluded from the survey. Parameters are sorted by descending average acceptability score for all information flows containing that parameter.
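
As a rough illustration of how such a heatmap can be computed from the raw responses, the following sketch uses pandas. The column names, the numeric coding of the 5-point scale, and the example rows are assumptions for illustration, not the paper's actual data format:

    # Hypothetical sketch: averaging acceptability ratings by recipient and
    # transmission principle. Column names and the -2..+2 coding are assumed.
    import pandas as pd

    ratings = pd.DataFrame([
        {"recipient": "its owner's immediate family",
         "transmission_principle": "if its owner has given consent", "score": 2},
        {"recipient": "its owner's ISP",
         "transmission_principle": "for advertising", "score": -2},
        # ... one row per (respondent, information flow) rating
    ])

    heatmap = ratings.pivot_table(index="transmission_principle",
                                  columns="recipient",
                                  values="score", aggfunc="mean")
    print(heatmap)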

These results provide several insights about IoT privacy, including the following:

  • Advertising and Indefinite Data Storage Generally Violate Privacy Norms. Respondents viewed information flows from IoT devices for advertising or for indefinite storage as especially unacceptable. Unfortunately, advertising and indefinite storage remain standard practice for many IoT devices and cloud services.
  • Transitive Flows May Violate Privacy Norms. Consider a device that sends its owner’s location to a smartphone, and the smartphone then sends the location on to a manufacturer’s cloud server. This device initiates two information flows: (1) from the device to the smartphone and (2) from the smartphone to the manufacturer’s cloud server. Although flow #1 may conform to user privacy norms, flow #2 may violate them. Manufacturers of devices that connect to IoT hubs (often made by different companies), rather than directly to cloud services, should avoid having these devices send potentially sensitive information with greater frequency or precision than necessary.

Our paper expands on these findings, including more details on the survey method, additional results, analyses, and recommendations for manufacturers, researchers, and regulators.

We believe that the survey method we have developed is broadly applicable to studying societal privacy norms at scale and can thus better inform privacy-conscious design across a range of domains and technologies.

Announcing IoT Inspector: Studying Smart Home IoT Device Behavior

By Noah Apthorpe, Danny Y. Huang, Gunes Acar, Frank Li, Arvind Narayanan, Nick Feamster

An increasing number of home devices, from thermostats to light bulbs to garage door openers, are now Internet-connected. This “Internet of Things” (IoT) promises reduced energy consumption, more effective health management, and living spaces that react adaptively to users’ lifestyles. Unfortunately, recent IoT device hacks and personal data breaches have made security and privacy a focal point for IoT consumers, developers, and regulators.

Many IoT vulnerabilities sound like the plot of a science fiction dystopia. Internet-connected dolls allow strangers to spy on children remotely. Botnets of millions of security cameras and DVRs take down a global DNS service provider. Surgically implanted pacemakers are susceptible to remote takeover.

These security vulnerabilities, combined with the rapid evolution of IoT products, can leave consumers at risk, and in the dark about the risks they face when using these devices. For example, consumers may be unsure which companies receive personal information from IoT appliances, whether an IoT device has been hacked, or whether devices with always-on microphones listen to private conversations.

To shed light on the behavior of smart home IoT devices that consumers buy and install in their homes, we are announcing the IoT Inspector project.

Announcing IoT Inspector: Studying IoT Security and Privacy in Smart Homes

Today, at the Center for Information Technology Policy at Princeton, we are launching an ongoing initiative to study consumer IoT security and privacy, in an effort to understand the current state of the smart home in ways that ultimately help inform both technology and policy.

We have begun this effort by analyzing more than 50 home IoT devices ourselves. We are working on methods to help scale this analysis to more devices. If you have a particular device or type of device that you are concerned about, let us know. To learn more, visit the IoT Inspector website.

Our initial analyses have revealed several findings about home IoT security and privacy.


Is It Time for a Data Sharing Clearinghouse for Internet Researchers?

Today’s Senate hearing with Facebook’s Mark Zuckerberg will start a long discussion on data collection and privacy at Internet companies. Although the spotlight is currently on Facebook, we shouldn’t forget that the picture is broader: companies from device manufacturers to ISPs collect network traffic and use it for a variety of purposes.

The uses that we will hear about today largely involve the widespread collection of data about Internet users for targeted content delivery and advertising. Meanwhile, yesterday Facebook announced an initiative to share data with independent researchers to study social media’s impact on elections. At the same time Facebook is being raked over the coals for sharing their data with “researchers” (Cambridge Analytica), they’ve announced a program to share their data with (presumably more “legitimate”) researchers.

Internet researchers depend on data. Sometimes, we can gather the data ourselves, using measurement tools deployed at the edge of the Internet (e.g., in home networks, on phones). In other cases, we need data from the companies that operate parts of the Internet, such as an Internet service provider (ISP), an Internet registrar, or an application provider (e.g., Facebook).

  • If incentives align, data flows to the researcher. Interacting with a company can work very well when goals are aligned. I’ve worked well with companies to develop new spam filtering and botnet detection algorithms, and to highlight empirical results that have informed policy debates.
  • If incentives do not align, then the researcher probably won’t get the data.  When research is purely technical, incentives often align. When the technical work crosses over into policy (as it does in areas like net neutrality, and as we are seeing with Facebook), there can be (insurmountable) hurdles to data access.

How an Internet Researcher Gets Data Today

How do Internet researchers get data from companies today? An Internet operator I know aptly characterizes the status quo:

“Show Internet operators you can do something useful, and they’ll give you data.”

Researchers get access to Internet data from companies in two ways: (1) working for the company (as an employee), or (2) working with the company (as an “independent” researcher).

Option #1: Work for a Company.

Working for a company offers privileged access to data, which can be used to mint impressive papers (irreproducibility aside) simply because nobody else has the same data. I have taken this approach myself on a number of occasions, having worked for an ISP (AT&T), a DNS company (Verisign), and an Internet security service provider (Damballa).

How this approach works. In the 2000s, research labs at AT&T and Sprint had privileged access to data, which gave rise to a proliferation of papers on “Everything You Wanted to Know About the Internet Backbone But Were Afraid to Ask”.  Today, the story repeats itself, except that the players are Google and Facebook, and the topic du jour is data-center networks.

Shortcomings of This Approach. Much research—from projects with a longer arc to certain policy-oriented questions—would never come to light if we relied only on company employees to do it. By the nature of their work, however, company employees lack independence: they lack both the autonomy to select problems and the ability to take positions or publish results that run counter to the company’s goals or priorities. This shortcoming may not matter if what the researcher wants to work on and what the company wants to accomplish are the same. For many technical problems, this is the case (although there is still a tendency for the technical community to develop tunnel vision around areas where there is an abundance of data, while neglecting other areas). But for many problems—ranging from those with a longer arc to deployment to those that may run counter to company priorities—we can’t rely on industry to do the work.

Option #2: Work with a Company.

How this approach works. A researcher may instead work with a company, typically gaining privileged access to data for a particular project. Sometimes, we demonstrate the promise of a technique with some data that we can gather or bootstrap without any help and use that initial study to pique the interest of a company, which may then share data with us to further develop the idea.

Shortcomings of this approach. Research done in collaboration with a company often has shortcomings similar to those of research done within a company’s walls. If the results of the research align with the company’s perspectives and viewpoints, then data sharing is copacetic. Even these cooperative settings pose some risks to researchers, who may create the perception that they are not independent merely by their association with the company. With purely technical research, the risks are lower, though still non-zero: for example, because the work depends on privileged data access, the researcher may still face challenges in presenting the research in a way that could help others reproduce it in the future.

With technical work that can inform or speak to policy questions, there are further concerns. First, certain types of research or results may never come to light—if a company doesn’t like the result that the data analysis might produce, it may simply not share the data, or it may ask for “pre-publication review” of results based on that data (a practice that is common for research conducted within companies as well). There is also a second, more subtle concern. Even when the work is technically watertight, a researcher can still face questions—fair or unfair—about the soundness of the work due to the perceived motivations or agendas of the cooperating parties.

Current Data Sharing Approaches Are Good, But They Are Not Sufficient

The above methods for data sharing can work well for certain types of research. In my career, I have made hay playing by these rules—often by first demonstrating the viability of an idea with a smaller dataset that we gather ourselves and then “pitching” the idea to a company.

Yet, in my experience these approaches have two shortcomings. The first relates to incentives. The second relates to privacy.

Problem #1: Incentives.

Certain types of work depend on access to Internet data, but the company that holds the data may not have a direct incentive to facilitate the research. Possible studies of Facebook’s effect on elections certainly fall into this category: the company simply may not like the results of the research.

But, there are plenty of other lines of research that fall into the category where incentives may not align. Other examples range from measurements of Internet capacity and performance as they relate to broadband regulation (e.g., net neutrality) to evaluation of an online platform’s content moderation algorithms and techniques. Lots of other work relating to consumer protection falls into this category as well. We have to rely on users and researchers measuring things at the edge of the network to figure out what’s going on; from this vantage point, certain activities may naturally slip under the radar more easily.

The current Internet data sharing roadmap doesn’t paint a rosy picture for research where incentives don’t align. Even when incentives do align, there can be perceptions of “capture”—effectively shilling an intellectual or technical finding in exchange for data access.

It is in the interests of everyone—the academics and their industry partners alike—to establish more formal modes of data exchange when either (1) there is a determination that the problem is important to study for the health of the Internet or for the benefit of consumers, or (2) there is the potential that the research will be perceived as not objective because of the nature of the data sharing agreement.

Problem #2: Privacy.

Sharing Internet data with researchers can introduce substantial privacy risks, and any arrangement to share a company’s data with a researcher should be evaluated carefully, ideally by an independent third party.

When helping develop the researcher exception to the FCC’s broadband privacy rules, I submitted a comment that proposed the following criteria for sharing ISP data with researchers:

  1. Purpose of research. The data supports research that aims to promote the security, stability, and reliability of networks. The research should have clear benefits for Internet innovation, operations, or security.
  2. Research goals do not violate privacy. The goals of the research do not include compromising consumer privacy.
  3. Privacy risks of data sharing are offset by benefits of the research. The risks of the data exchange are offset by the benefits of the research.
  4. Privacy risks of the data sharing are mitigated. Researchers should strive to use de-identified data wherever possible.
  5. The data adds value to the research. The research is enhanced by access to the data.

Yet, outlining the criteria is one thing. The thornier question (which we did not address!) is: Who gets to decide the answers?

Universities have institutional review boards that can help evaluate the merits of such a data sharing agreement. But, Cambridge Analytica might have the veneer of “research”, and a company may have no internal incentive to independently evaluate the data sharing agreement on its merits. In light of recent events, we may be headed towards the conclusion that such data-sharing agreements should always be vetted by independent third-party review. If the research doesn’t involve a university, however, the natural question is: Who is that third party?

Looking Ahead: Data Clearinghouses for Internet Data?

Certain types of Internet research—particularly those that involve thorny regulatory or policy questions—could benefit from an independent clearinghouse, where researchers could propose studies and experiments and have them evaluated and selected by an independent third party, based on their benefits and risks. Facebook is exploring this avenue in the limited setting of election integrity. This is an exciting step.

Moving forward, it will be interesting to see how Facebook’s meta-experiment on data sharing plays out, and whether it—or some variant—can serve as a model for Internet data sharing for other types of work writ large. In purely technical areas, such a clearinghouse could allow a broader range of researchers to explore, evaluate, reproduce, and extend the kinds of work that for now remain largely irreproducible because the data is under lock and key. For these questions, there could be significant benefit to the scientific community. In areas where the technical work or data analysis informs policy questions, the benefits to consumers could be even greater.