Ethics Education in Data Science: Classroom Topics and Assignments

[This blog post is a continuation of a recap of a recent workshop on data science ethics education.]

The creation of ethics modules that can be inserted into a variety of classes may help ensure that ethics as a subject is not marginalized and enable professors with little experience in philosophy or with fewer resources to incorporate ethics into their more technical classes. This post will outline some of the topics that professors have decided to cover in this field, as well as suggestions for types of assignments that may be useful. We hope that readers will consider ways to add these into their classes, and we welcome comments with further suggestions of topics or assignments.

With regard to ethics, some of the key topics that professors have taught include: deontology, consequentialism, utilitarianism, virtue ethics, moral responsibility, cultural relativism, social contract theory, feminist ethics, justice, the distinction between ethics and law, and the relationship between principles, standards, and rules.

Using these frameworks, professors can discuss a variety of topics, including: privacy, algorithmic bias, misinformation, intellectual property, surveillance, inequality, data collection, AI governance, free speech, transparency, security, anonymity, systemic risk, labor, net neutrality, accessibility, value-sensitive design, codes of ethics, predictive policing, virtual reality, ethics in industry, machine learning, clinical versus actuarial reasoning, issue spotting, and basic social science concepts.

In determining the most effective types of assignments to use, a common thread was the use of real-world data sets or examples to engage students. Some effective assignment methods include:

Debates: Students split into groups, each representing a different interest group or stakeholder, and argue for that entity’s stance. This could entail asking students to justify how groups or people actually acted in the past, or having students act as decision makers and decide how they would act or react in a given situation.

Critique Existing Policies: Ask students to choose a particular company’s data policy, a data collection method at their university, a recent FCC policy, or an organization’s code of ethics and critique it. This gives students experience in understanding the specific, concrete details of a policy and how it affects real people. By the end of the assignment, students may even be able to suggest changes to a company or university policy, providing impact beyond the classroom. This assignment can be framed to focus on either policy or ethics, depending on the goal of the project.

Adversarial Mindset: Assignments can provide insight by placing students in the mind of an adversary, such as having them design a fake news campaign or attempt to dox their professor. Understanding how malicious actors think can enhance students’ ability to counter such attacks, or even to counter the mindset itself. However, these assignments should be framed very carefully: students may enjoy the thrill and find the work intellectually exciting while overlooking the elements that are ethically problematic.

Peer Audit: Asking students to review the ethics of a given project can be a useful exercise, and it may be even more interesting to students if they are able to review the work of their peers. Peer audits can pair nicely with more technical assignments from the same class – for example, if students are asked to capture and inspect network traffic in one assignment, the next assignment may entail reviewing other students’ methods for doing so and analyzing any questionable practices. Graduate students can also be asked to audit their peers’ research.
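
As a concrete illustration, here is a minimal sketch in Python of the traffic-capture step such an exercise might start from (assuming the scapy library; the function name and packet count are our own choices, not part of any prescribed assignment). A peer auditor could then check, for instance, whether the capture was limited to the student’s own traffic.

```python
# Hypothetical starting point for a traffic-capture assignment: sniff a
# small number of packets and summarize which remote hosts the machine
# contacts. Requires scapy ("pip install scapy") and root privileges.
from collections import Counter

from scapy.all import IP, sniff

def summarize_destinations(packet_count: int = 100) -> Counter:
    """Capture packets and count destination IP addresses."""
    packets = sniff(count=packet_count)
    return Counter(pkt[IP].dst for pkt in packets if IP in pkt)

if __name__ == "__main__":
    for dst, n in summarize_destinations().most_common(10):
        print(f"{dst}\t{n} packets")
```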

Some recent case studies that may be interesting for students include: Cambridge Analytica’s use of Facebook data, the fatal crash of a self-driving Uber car, Facebook’s emotional contagion study, the Encore censorship measurement research, facial recognition tracking of criminal suspects in China, Uber’s tracking of one-night stands, Stanford’s “gaydar” research, Black Mirror episodes, Latanya Sweeney’s de-anonymization work, NYPD stop-and-frisk data, predictive policing, and the COMPAS recidivism risk assessment tool.

A critical aspect of data science ethics education is ensuring that the field is well respected, so that students, universities, research communities, and industry engage seriously in efforts on this front. This may require a research-focused element, but effort should also be dedicated to ensuring that students understand concretely how ethics applies to their lives and the lives of others. The field must encourage people to think beyond IRB approval and legal compliance, and to consider the impact of research or products even when they do not fall under the conventional conception of “human subjects research.” It will also be critical to engage industry in this field – largely because private companies affect our lives on a daily basis, but also because industry devotion to ethics can signal to students that considering ethics is a worthwhile endeavor.

Although some have considered writing a textbook on this subject, technical capabilities and real-world examples change so rapidly that a textbook may be obsolete before it is even published. We encourage people to use other methods to share ideas on data science ethics education, such as blog posts, papers, or shared repositories with assignments and teaching tools that have been successful.

Announcing IoT Inspector: Studying Smart Home IoT Device Behavior

By Noah Apthorpe, Danny Y. Huang, Gunes Acar, Frank Li, Arvind Narayanan, Nick Feamster

An increasing number of home devices, from thermostats to light bulbs to garage door openers, are now Internet-connected. This “Internet of Things” (IoT) promises reduced energy consumption, more effective health management, and living spaces that react adaptively to users’ lifestyles. Unfortunately, recent IoT device hacks and personal data breaches have made security and privacy a focal point for IoT consumers, developers, and regulators.

Many IoT vulnerabilities sound like the plot of a science fiction dystopia. Internet-connected dolls allow strangers to spy on children remotely. Botnets of millions of security cameras and DVRs take down a global DNS service provider. Surgically implanted pacemakers are susceptible to remote takeover.

These security vulnerabilities, combined with the rapid evolution of IoT products, can leave consumers at risk and in the dark about the threats they face when using these devices. For example, consumers may be unsure which companies receive personal information from IoT appliances, whether an IoT device has been hacked, or whether devices with always-on microphones listen to private conversations.

To shed light on the behavior of smart home IoT devices that consumers buy and install in their homes, we are announcing the IoT Inspector project.

Announcing IoT Inspector: Studying IoT Security and Privacy in Smart Homes

Today, at the Center for Information Technology Policy at Princeton, we are launching an ongoing initiative to study consumer IoT security and privacy, with the goal of understanding the current state of the smart home ecosystem in ways that ultimately help inform both technology and policy.

We have begun this effort by analyzing more than 50 home IoT devices ourselves. We are working on methods to help scale this analysis to more devices. If you have a particular device or type of device that you are concerned about, let us know. To learn more, visit the IoT Inspector website.
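
For readers curious what a first pass at this kind of analysis can look like, here is a minimal sketch in Python. It is not IoT Inspector’s actual methodology: it assumes the scapy library and a hypothetical packet capture recorded from a single smart home device, and it lists the DNS names the device looked up as a rough proxy for which companies the device contacts.

```python
# Minimal sketch: enumerate DNS lookups in a packet capture from one
# IoT device. The capture file name is hypothetical.
from collections import Counter

from scapy.all import DNSQR, rdpcap

packets = rdpcap("smart_plug.pcap")  # capture recorded on the home network
queries = Counter(
    pkt[DNSQR].qname.decode().rstrip(".")
    for pkt in packets
    if DNSQR in pkt
)
for name, count in queries.most_common():
    print(f"{count:4d}  {name}")
```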

Our initial analyses have revealed several findings about home IoT security and privacy.


No boundaries for Facebook data: third-party trackers abuse Facebook Login

By Steven Englehardt [0], Gunes Acar, and Arvind Narayanan

So far in the No boundaries series, we’ve uncovered how web trackers exfiltrate identifying information from web pages, browser password managers, and form inputs.

Today we report yet another type of surreptitious data collection by third-party scripts that we discovered: the exfiltration of personal identifiers from websites through “login with Facebook” and other such social login APIs. Specifically, we found two types of vulnerabilities [1]:

  • Seven third parties abuse websites’ access to Facebook user data.
  • One third party uses its own Facebook “application” to track users around the web.

Vulnerability 1: Third parties piggyback on Facebook access granted to websites

[Figure: a third-party script (tracker.com) embedded in a website accessing the Facebook API through the site’s “Login with Facebook” integration]

When a user clicks “Login with Facebook”, they will be prompted to allow the website they’re visiting to access some of their Facebook profile information [2]. Even after Facebook’s recent moves to lock down the feature, websites can request the user’s email address and “public profile” (name, age range, gender, locale, and profile photo) without triggering a manual review by Facebook. Once the user allows access, any third-party JavaScript embedded in the page, such as tracker.com in the figure above, can also retrieve the user’s Facebook information as if it were the first party [3].
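
To make the stakes concrete, here is a minimal sketch in Python of what a third party can do once it holds the page’s Facebook access token. In the browser, the embedded script can simply call the Facebook JavaScript SDK directly; this sketch instead shows the equivalent Graph API request (with a placeholder API version and a hypothetical token) to illustrate that the token alone yields the same profile fields the website was granted.

```python
# Minimal sketch: retrieving a user's profile from the Facebook Graph
# API with an access token read out of the embedding page. The API
# version and token below are placeholders, not real values.
import requests

def fetch_profile(access_token: str) -> dict:
    """Request the profile fields the user granted to the website."""
    resp = requests.get(
        "https://graph.facebook.com/v2.12/me",
        params={"fields": "id,name,email", "access_token": access_token},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# e.g., fetch_profile("EAAB...")  # token exfiltrated by an embedded script
```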
