
Archives for 2014

CITP Call for Fellows, Postdocs, and Visiting Professor for 2015-16

The Center for Information Technology Policy is an interdisciplinary research center at Princeton that sits at the crossroads of engineering, the social sciences, law, and policy.

CITP seeks Visiting Fellows and Postdoctoral Research Associates for the 2015-2016 year who work at the intersection of digital technology and public life, with backgrounds in fields including computer science, sociology, public policy, engineering, economics, law, and civil service. Visiting Fellow appointments are typically for nine months, commencing on September 1; postdoctoral appointments are for one to two years, normally commencing on July 1. Applicants may be appointed as a Visiting Fellow, Visiting Researcher, or Postdoctoral Research Associate.

CITP also seeks candidates for our Microsoft Visiting Professor of Information and Technology Policy position. Applicants must be currently appointed faculty members at an academic institution and must be on leave from such an appointment during their time at CITP. The successful applicant is expected to be appointed to a term of between ten months and two years, depending on their individual circumstances.

For full consideration, applications should be submitted by February 1, 2015 through jobs.princeton.edu.


"Information Sharing" Should Include the Public

The FBI recently issued a warning to U.S. businesses about the possibility of foreign-based malware attacks. According to a Reuters story by Jim Finkle:

The five-page, confidential “flash” FBI warning issued to businesses late on Monday provided some technical details about the malicious software used in the attack. It provided advice on how to respond to the malware and asked businesses to contact the FBI if they identified similar malware.

The report said the malware overrides all data on hard drives of computers, including the master boot record, which prevents them from booting up.

“The overwriting of the data files will make it extremely difficult and costly, if not impossible, to recover the data using standard forensic methods,” the report said.

The document was sent to security staff at some U.S. companies in an email that asked them not to share the information.

The information found its way to the press, as one would expect of widely-shared information that is of public interest.

My question is this: Why didn’t they inform the public?

How do we decide how much to reveal? (Hint: Our privacy behavior might be socially constructed.)

[Let’s welcome Aylin Caliskan-Islam, a graduate student at Drexel. In this post she discusses new work that applies machine learning and natural-language processing to questions of privacy and social behavior. — Arvind Narayanan.]

How do we decide how much to share online, given that information can spread to millions of people in large social networks? Is it always our own decision, or are we influenced by our friends? Let’s isolate this problem to one variable: private information. How much private information are we sharing in our posts, and are we the only authority controlling how much private information to divulge in our textual messages? Understanding how privacy behavior is formed could give us key insights for choosing our privacy settings, our friend circles, and how much privacy to sacrifice in social networks. Christakis and Fowler’s network analytics study showed that obesity spreads through social ties. In another study, they explain that smoking cessation is a collective behavior. Our intuition before analyzing end users’ privacy behavior was that privacy behavior might also be subject to such network effects.

In a recent paper that appeared at the 2014 Workshop on Privacy in the Electronic Society, we present a novel method for quantifying privacy behavior of users by using machine learning classifiers and natural-language processing techniques including topic categorization, named entity recognition, and semantic classification. Following the intuition that some textual data is more private than others, we had Amazon Mechanical Turk workers label tweets of hundreds of users as private or not based on nine privacy categories that were influenced by Wang et al.’s Facebook regrets categories and Sleeper et al.’s Twitter regrets categories. These labels were used to associate a privacy score with each user to reflect the amount of private information they reveal. We trained a machine learning classifier based on the calculated privacy scores to predict the privacy scores of 2,000 Twitter users whose data were collected through the Twitter API.
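The pipeline described above — crowdsourced private/not-private labels aggregated into a per-user privacy score, then a classifier trained to predict that score for unlabeled users — can be sketched roughly as follows. This is a minimal illustrative stand-in, not the paper's actual method: the data, function names, and the simple word-weight model here are all hypothetical, whereas the paper uses richer NLP features (topic categorization, named entity recognition, semantic classification).

```python
# Hypothetical sketch: aggregate annotator labels into per-user privacy
# scores, then predict scores for new users from a toy bag-of-words model.
# All names and data are illustrative, not from the paper.
from collections import Counter

def privacy_score(labels):
    """Fraction of a user's tweets that annotators labeled private (1)."""
    return sum(labels) / len(labels)

def word_weights(users):
    """Average privacy score of the users whose tweets contain each word."""
    totals, counts = Counter(), Counter()
    for tweets, labels in users:
        score = privacy_score(labels)
        for word in set(" ".join(tweets).lower().split()):
            totals[word] += score
            counts[word] += 1
    return {w: totals[w] / counts[w] for w in totals}

def predict_score(tweets, weights):
    """Predict a new user's privacy score by averaging known word weights."""
    words = [w for w in " ".join(tweets).lower().split() if w in weights]
    return sum(weights[w] for w in words) / len(words) if words else 0.0

# Toy training data: (tweets, per-tweet private labels) for two users.
train = [
    (["my home address is 12 elm st", "flu kept me in bed"], [1, 1]),
    (["great game tonight", "loved the new movie"], [0, 0]),
]
weights = word_weights(train)
print(predict_score(["my flu is back"], weights))  # → 1.0 (private-leaning)
```

In the paper, the equivalent of `weights` is a trained machine learning classifier over NLP-derived features, applied to the 2,000 Twitter users collected through the Twitter API.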