Archives for October 2018

The Third Workshop on Technology and Consumer Protection

Arvind Narayanan and I are pleased to announce that the Workshop on Technology and Consumer Protection (ConPro ’19) will return for a third year! The workshop will once again be co-located with the IEEE Symposium on Security and Privacy, occurring in May 2019.

ConPro is a forum for a diverse range of computer science research with consumer protection implications. Last year, papers covered topics ranging from online dating fraud to the readability of security guidance. Panelists and invited speakers explored topics from preventing caller-ID spoofing to protecting unique communities.

We see ConPro as a workshop in the classic sense, providing substantive feedback and new ideas. Presentations have sparked suggestions for follow-up work and collaboration opportunities. Attendees represent a wide range of research areas, spurring creative ideas and interesting conversation. For example, comments about crowdworker concerns this year led to discussion of best practices for research making use of those workers.

Although our community has grown, we aim to keep discussion and feedback a central part of the workshop. Our friends in the legal community have had some success with larger events focused on feedback and discussion, such as PLSC. We plan to take lessons from those cases.

The success of ConPro in past years—amazing research, attendees, discussion, and PCs—makes us excited for next year. The call for papers lists some relevant topics, but if you do computer science research with consumer protection implications, it’s relevant (but be sure those implications are clear). The submission deadline is January 23, 2019. We hope you’ll submit a paper and join us in San Francisco!

Ten ways to make voting machines cheat with plausible deniability

Summary: Voting machines can be hacked; risk-limiting audits of paper ballots can detect incorrect outcomes, whether from hacked voting machines or programming inaccuracies; recounts of paper ballots can correct those outcomes; but some methods for producing paper ballots are more auditable and recountable than others.

A now-standard principle of computer-counted public elections is: use a voter-verified paper ballot, so that if the voting machine cheats in counting the votes, the human doing an audit or recount can see the paper that the voter marked.  Why would the voting machine cheat?  Well, voting machines are computers, and any computer may have security vulnerabilities that permit an attacker to modify or replace its software.  We must presume that any voting machine might, at any time, be under the complete control of an attacker, an election thief.
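The summary above mentions risk-limiting audits of paper ballots. As a rough illustration of how such an audit works, here is a minimal sketch of one standard method, the BRAVO ballot-polling test; this is a general technique from the auditing literature, not a procedure described in this post, and the function name and parameters are my own.

```python
# Sketch of a BRAVO-style ballot-polling risk-limiting audit for a
# two-candidate contest. The audit sequentially examines randomly
# sampled paper ballots and stops once the evidence for the reported
# winner is strong enough to meet the risk limit.

def bravo_audit(sampled_ballots, winner_share, risk_limit):
    """sampled_ballots: iterable of 'W' (vote for the reported winner)
    or 'L' (vote for the loser), in random order.
    winner_share: the winner's reported fraction of the vote (> 0.5).
    Returns the number of ballots examined before the audit confirms
    the outcome, or None if the sample runs out first (escalate,
    e.g. to a full hand recount)."""
    assert 0.5 < winner_share < 1.0
    t = 1.0  # sequential likelihood-ratio test statistic
    for i, ballot in enumerate(sampled_ballots, start=1):
        if ballot == 'W':
            t *= 2 * winner_share          # evidence for the reported winner
        else:
            t *= 2 * (1 - winner_share)    # evidence against it
        if t >= 1 / risk_limit:            # risk limit satisfied: confirm
            return i
    return None
```

With a reported 60% winner and a 5% risk limit, a streak of ballots for the winner lets the audit stop after only a few dozen ballots; a closer margin requires a much larger sample, which is why the choice of paper-ballot technology (and how trustworthy the paper is) matters so much for auditing.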

There are several ways that voter-verified paper ballots can be implemented:

  1. Voter marks an optical-scan ballot with a pen, deposits into optical-scan voting machine for counting (and for saving in sealed ballot box).
  2. Voter uses a ballot-marking device (BMD), a computer with touchscreen/audio/sip-and-puff interfaces, which prints an optical-scan ballot, deposits into optical-scan voting machine for counting (and saving).
  3. Voter uses a DRE+VVPAT voting machine, that is, a Direct-Recording Electronic  “touchscreen” machine with a Voter-Verified Paper Audit Trail, which saves the VVPAT printouts in a ballot box.
  4. Voter uses an “all-in-one” voting machine: inserts blank paper into slot, voter uses touchscreen interface to mark ballot, machine ejects ballot from slot, voter  inspects printed ballot, voter reinserts printed ballot into same slot, where it is scanned (or is it?) and deposited into ballot box.

There’s also 1a (hand-marked optical-scan ballots, dropped into a precinct ballot box to be centrally counted instead of counted immediately by a precinct-located scanner), 1b (hand-marked optical-scan ballots, sent by mail) and 2a (BMD-marked optical-scan ballots, centrally counted).

In this article I will put on my “adversarial thinking” hat and try to design ways that the attacker might cheat (and get away with it).  You might think that the voter-verified paper ballot will detect cheating, and therefore deter cheating or correct the result, but maybe that depends on which kind of technology is used!

User Perceptions of Smart Home Internet of Things (IoT) Privacy

by Noah Apthorpe

This post summarizes a research paper, authored by Serena Zheng, Noah Apthorpe, Marshini Chetty, and Nick Feamster from Princeton University, which is available here. The paper will be presented at the ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW) on November 6, 2018.

Smart home Internet of Things (IoT) devices have a growing presence in consumer households. Learning thermostats, energy tracking switches, video doorbells, smart baby monitors, and app- and voice-controlled lights, speakers, and other devices are all increasingly available and affordable. Many of these smart home devices continuously monitor user activity, raising privacy concerns that may pose a barrier to adoption.

In this study, we conducted interviews with 11 early adopters of smart home technology in the United States, investigating their reasons for purchasing smart home IoT devices, their perceptions of smart home privacy risks, and the actions they have taken to protect their privacy from entities external to the home who create, manage, track, or regulate IoT devices and their data.

We recruited participants by posting flyers in the local area, emailing listservs, and spreading the study by word of mouth. Our recruiting yielded six female and five male interviewees, ranging in age from 23 to 45. Most participants were from the Seattle metropolitan area, with others from New Jersey, Colorado, and Texas. The participants came from a variety of living arrangements, including families, couples, and roommates. All participants were fairly affluent, technically skilled, and highly interested in new technology, fitting the profile of “early adopters.” Each interview began with a tour of the participant’s smart home, followed by a semi-structured conversation with specific questions from an interview guide and open-ended follow-up discussions on topics of interest to each participant.

The participants owned a wide variety of smart home devices and shared a broad range of experiences about how these devices have impacted their lives. They also expressed a range of privacy concerns, including intentional purchasing and device interaction decisions made based on privacy considerations. We performed open coding on transcripts of the interviews and identified four common themes:

  1. Convenience and connectedness are priorities for smart home device users. These values dictate privacy opinions and behaviors. Most participants cited the ability to stay connected to their homes, families, or pets as primary reasons for purchasing and using smart home devices. Values of convenience and connectedness outweighed other concerns, including obsolescence, security, and privacy. For example, one participant commented, “I would be willing to give up a bit of privacy to create a seamless experience, because it makes life easier.”
  2. User opinions about who should have access to their smart home data depend on perceived benefit from entities external to the home, such as device manufacturers, advertisers, Internet service providers, and the government. For example, participants felt more comfortable sharing their smart home data with advertisers if they believed that they would receive improved targeted advertising experiences.
  3. User assumptions about privacy protections are contingent on their trust of IoT device manufacturers. Participants tended to trust large technology companies, such as Google and Amazon, to have the technical means to protect their data, although they could not confirm if these companies actually performed encryption or anonymization. Participants also trusted home appliance and electronics brands, such as Philips and Belkin, although these companies have limited experience making Internet-connected appliances. Participants generally rationalized their reluctance to take extra steps to protect their privacy by referring to their trust in IoT device manufacturers to not do anything malicious with their data.
  4. Users are less concerned about privacy risks from devices that do not record audio or video. However, researchers have demonstrated that metadata from non-A/V smart home devices, such as lightbulbs and thermostats, can provide enough information to infer user activities, such as home occupancy, work routines, and sleeping patterns. Additional outreach is needed to inform consumers about non-A/V privacy risks.
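To make the fourth theme concrete, here is a toy sketch of the kind of inference researchers have demonstrated from non-A/V device metadata. The data and function are hypothetical illustrations of mine, not from the paper: even a bedroom lightbulb's on/off event log, with no audio or video at all, can reveal a household's sleep routine.

```python
from datetime import datetime

def estimate_bedtimes(events):
    """events: list of (datetime, 'on' | 'off') tuples from a bedroom bulb.
    Returns {date: time of the last 'off' event that day}, a crude proxy
    for when the occupant went to sleep."""
    bedtimes = {}
    for ts, state in sorted(events):
        if state == 'off':
            bedtimes[ts.date()] = ts.time()  # keep the latest 'off' per day
    return bedtimes

# Hypothetical two-day event log from a single smart bulb.
log = [
    (datetime(2018, 10, 1, 6, 45), 'on'),
    (datetime(2018, 10, 1, 23, 10), 'off'),
    (datetime(2018, 10, 2, 6, 50), 'on'),
    (datetime(2018, 10, 2, 23, 25), 'off'),
]
```

Running `estimate_bedtimes(log)` maps each date to a roughly 11 p.m. lights-out time, which is exactly the kind of occupancy and sleep-pattern signal that participants did not associate with "harmless" non-A/V devices.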

Recommendations. These themes motivate recommendations for smart home device designers, researchers, regulators, and industry standards bodies. Participants’ desires for convenience and trust in IoT device manufacturers limit their willingness to take action to verify or enforce smart home data privacy. This means that privacy notifications and settings must be exceptionally clear and convenient, especially for smart home devices without screens. Improved cybersecurity and privacy regulation, combined with industry standards outlining best privacy practices, would also reduce the burden on users to manage their own privacy. We encourage follow-up studies examining the effects of smart home devices on privacy between individuals within a household and comparing perceptions of smart home privacy in different countries.

For more details about our interview findings and corresponding recommendations, please read our paper or see our presentation at CSCW 2018.

Full citation: Serena Zheng, Noah Apthorpe, Marshini Chetty, and Nick Feamster. 2018. User Perceptions of Smart Home IoT Privacy. In Proceedings of the ACM on Human-Computer Interaction, Vol. 2, CSCW, Article 200 (November 2018), 20 pages. https://doi.org/10.1145/3274469