December 19, 2018

The Rise of Artificial Intelligence: Brad Smith at Princeton University

What will artificial intelligence mean for society, jobs, and the economy?

Speaking today at Princeton University is Brad Smith, President and Chief Legal Officer of Microsoft. I was in the audience and live-blogged Brad’s talk.

CITP director Ed Felten introduces Brad’s lecture by saying that the tech industry is at a crossroads. With the rise of AI and big data, people have realized that the internet and technology are having a big, long-term effect on many people’s lives. At the same time, we’ve seen increased skepticism about technology and the role of the tech industry in society.

The good news, says Ed, is that plenty of people in the industry are up to the task of explaining, in a productive way, what the industry is doing to cope with these problems. What the industry needs now, says Ed, is what Brad offers: a thoughtful approach to the challenges our society faces, one that acknowledges the role of tech companies, seeks constructive solutions, and takes responsibility in a way that works across society. If there’s one thing we could do to help the tech industry cope with these questions, says Ed, it would be to clone Brad.

Imagining Artificial Intelligence in Thirty Years

Brad opens by mentioning the new book by his team: The Future Computed: Artificial Intelligence and its Role in Society. While writing the book, they realized that it’s not helpful to think about change over just the next year or two. Instead, we should be thinking in periods of ten to thirty years.

What was life like twenty years ago? In 1998, people often began their day without anything digital. They would turn on a television, listen to the radio, and pull out a paper calendar. If you needed to call someone, you would use a landline phone to reach them. At that time, the common joke was about whether anyone could program their VCR.

In 2018, the first thing that many people reach for is their phone. Even if you manage to keep your phone in another room overnight, you’ll soon find yourself reaching for it or sitting down in front of your laptop. You now use those devices to find out what happened in the world and with your friends.

What will the world look like in 2038? By that time, Brad argues, we’ll be living with artificial intelligence. Digital assistants are already part of our lives, but by then they’ll be far more common. Rather than looking through lots of apps, we’ll have a digital assistant that talks to us and tells us what the traffic will be like. Twenty years from now, you’ll probably have your digital assistant talking to you as you shave or put on your makeup in the morning.

What is Artificial Intelligence?

To understand what that means for our lives, we need to understand what artificial intelligence really is. Even today, computers can recognize people, and they can do more: they can make sense of someone’s emotions from their face. We’ve seen the same with the ability of computers to understand language, Brad says. Not only can computers recognize speech, they can also sift through knowledge, make sense of it, and reach conclusions.

In the world today, we read about AI and expect it all to arrive one day, says Brad. That’s not how it’s going to work: AI will become part of our lives piece by piece. He tells us about the BMW pedestrian alert, which lets a car detect pedestrians, beep, signal to the driver, and apply the brakes. Brad also tells us about the Steno app, which records and transcribes audio. Microsoft now has a version of Skype that detects and auto-translates the conversation, something they’ve also integrated with PowerPoint. Spotify, Netflix, and iTunes all use artificial intelligence to suggest what to listen to or watch next. None of these systems works with 100% accuracy, but neither do human beings. The question to ask about an AI system is not whether it is perfect, but when it will become as good as a human being.

What advances make AI real? Microsoft, Amazon, Google, and others build data centers that span many football fields. This gives companies huge computational power and vast amounts of data. Because algorithms get better with more data, companies have an insatiable appetite for data.

The Challenges of Imagining the Future

All of this is exciting, says Brad, and could deliver huge promise for the world. But we can’t afford to look at this future with uncritical eyes. The world needs to make sense of the risks. As computers behave more like humans, what will that mean for real people? People like Stephen Hawking and Elon Musk are warning us about that future. But there is no crystal ball. Brad says he has long admired futurists, but if a futurist gets something wrong, probably nobody remembers it. We may be able to discern patterns, but nobody can predict the future with certainty.

Learning from The History of the Automobile

How can we think about what may be coming? The first option is to learn from history, not because it repeats itself but because it provides insights. To illustrate this, Brad starts with the transition from horses to automobiles. He shows us a photo of Bertha Benz, whose dowry paid for her husband Karl’s new business. One morning in 1888, she got up and left her husband a note saying that she was taking the car and driving the kids 70 kilometers to visit her mother. She had to repair the car along the way, but by the end of the day they had reached her mother’s house. This stunt convinced the world that the automobile would be important to the future.

Next, Brad shows us a photo of New York City in 1905, with streets full of horses and hardly any cars. Twenty years later, there were no horses on the streets. The horse population declined, and the jobs involved in supporting them disappeared. But these direct economic effects weren’t as important as the indirect effects. Consumer credit wasn’t obviously connected to the automobile, but it was an indirect outcome: once people wanted to buy cars, they needed a way to finance them. Advertising also changed: when people were driving past billboards at speed, advertisers invented logos to make their companies more recognizable.

How Institutions Evolve to Meet Technology & Economic Changes

The effects of the automobile weren’t all good. As the horse population declined, farmers grew less hay and shifted their acreage to wheat and corn, and the prices of those crops plummeted. Once prices plummeted, farmers’ incomes plummeted. As farmers fell behind on their loans, rural banks tried to foreclose on them, leading to a broad financial collapse. Many of the things we take for granted today come from that experience: the FDIC and insurance regulation, farm subsidies, and many other parts of our infrastructure. With AI, we need to be prepared for changes just as substantial.

Understanding the Impact of AI on the Economy

Brad tells us another story, about how offices have changed. In the 1980s, you handed someone a hand-written document and they would type it for you. Between the 1980s and today, two big changes happened. First, secretarial staff declined and the professional IT staff was born. Second, people realized that everyone needed to understand how to use computers.

As we think about how work will change, we need to ask which jobs AI will replace. To answer this question, think about what computers do well: vision, speech, language, and knowledge. Jobs involving decision-making are already being done by computers (radiology, call centers, fast food orders, driving). Jobs involving translation and learning will also become automated, including machinery inspection and the work of paralegals. At Microsoft, the company used to have multiple people whose job was to inspect fire extinguishers. Now the company has devices that automatically record data on their status, reducing the work involved in maintaining them.

Some jobs are less likely to be replaced by AI, says Brad: anything that requires human understanding and empathy. Nurses, social workers, therapists, and teachers are more likely to use AI than to be replaced by it. This may lead people to take on jobs they find more satisfying.

Some of the most exciting developments for AI in the next five years will be in the area of disability. Brad shows us a project called “Seeing AI,” an app that describes a person’s surroundings using a phone camera. The app can read barcodes and identify food, identify currency bills, describe a scene, and read text in one’s surroundings. What’s exciting is what it can do for people: the project has already carried out 3 million tasks, and it’s getting better and smarter as it goes. This system could be a game changer for people who are blind, says Brad.

Why Ethics Will Be a Growth Area for AI

What jobs will AI create? It’s easier to think about the jobs it will replace than the ones it will create. When young people in kindergarten today enter the workplace, he says, the majority of jobs will be ones that don’t yet exist. Some of the new jobs will be ones that support AI itself: computer science, data science, and ethics. “Ultimately, the question is not only what computers *can* do,” says Brad, “it’s what computers *should* do.” Within the ethics of AI, the fields of reliability/safety and privacy/security are well developed. Less well developed is research on fairness and inclusiveness. Two issues underlie all the rest: transparency and accountability. Transparency is important because people need to understand how these systems work.

AI Accountability and Transparency

Finally, one of the most important questions of our time is how we ensure the accountability of machines: will machines be accountable to people, and will those people be accountable to other people? Only with that accountability, Brad suggests, will we be able to trust machines to take on a larger role in our lives.

What would it mean to create a Hippocratic oath for AI developers? Brad asks: what does it take to train a new generation of people to work on AI with that kind of commitment and principle in mind? These aren’t just questions for people at big tech companies. As companies, governments, universities, and individuals take the building blocks of AI and use them, AI ethics are becoming important to every part of society.

Artificial Intelligence Policy

If we are to stay true to timeless values, says Brad, we need to ask whether we want only ethical people to behave ethically, or everyone to behave ethically. That’s what law is for; AI will create new questions for public policy and the evolution of the law. That’s why skilling up for the future isn’t just about science, technology, engineering, and math: as computers behave more like humans, the social sciences and humanities will become even more important. That’s why diversity in the tech industry is also important, says Brad.

How AI is Transforming the Liberal Arts, Engineering, and Agriculture

Brad encourages us to think about disciplines that AI can make more impactful: AI is changing healthcare (cures for cancer), agriculture (precision farming), accessibility, and our environment. He concludes with two examples. First, Brad talks about the Princeton Geniza Lab, led by Marina Rustow, which is using AI to analyze document fragments that have been scattered all around the world. Using AI, researchers are joining these digitized fragments back together. Engineering isn’t only for the engineers: everybody in the liberal arts can benefit from learning a little computer science and data science, and every engineer is going to need more liberal arts in their future. Brad also tells us about the AI for Earth project, which provides seed funding to researchers working on the future of the planet. Projects include smart grids in Norway that make energy usage more efficient, a project by the Singaporean government to do smart climate control in buildings, and a project in Tasmania that supports precision farming, saving 30% on irrigation costs.

These examples give us a glimpse of what it means to prepare for an AI-powered future, says Brad. We’re also going to need to do more: we may need a new social contract, because people will need to learn new skills and find new career pathways, and we will need new labor rules and protections and a rethought social safety net as these changes ripple through the economy.

Creating the Future of Artificial Intelligence

Where will AI take us? Brad encourages students to think about the needs of the world and what AI has to offer. It’s going to take a whole generation to think that through and create that future, and he encourages today’s students to seize that challenge.

How Tech is Failing Victims of Intimate Partner Violence: Thomas Ristenpart at CITP

What technology risks are faced by people who experience intimate partner violence? How is the security community failing them, and what questions might we need to ask to make progress on social and technical interventions?

Speaking Tuesday at CITP was Thomas Ristenpart (@TomRistenpart), an associate professor at Cornell Tech and a member of the Department of Computer Science at Cornell University. Before joining Cornell Tech in 2015, Thomas was an assistant professor at the University of Wisconsin-Madison. His research spans a wide range of computer security topics, including digital privacy and safety in intimate partner violence, alongside work on cloud computing security, confidentiality and privacy in machine learning, and topics in applied and theoretical cryptography.

Throughout this talk, I found myself overwhelmed by the scope of the challenges faced by so many people– and inspired by the way that Thomas and his collaborators have taken thorough, meaningful steps on this vital issue.

Understanding Intimate Partner Violence

Intimate partner violence (IPV) is a huge problem, says Thomas. 25% of women and 11% of men will experience rape, physical violence, and/or stalking by an intimate partner, according to the National Intimate Partner and Sexual Violence Survey. To put this in context for tech companies, that means roughly 360 million Facebook users and 252 million Android users will experience this kind of violence.

Prior research over the years has shown that abusers take advantage of technology to harm victims in a wide range of ways, including spyware, harassment, and non-consensual photography. In a team with Nicki Dell, Diana Freed, Karen Levy, Damon McCoy, Rahul Chatterjee, Peri Doerfler, and Sam Havron, Thomas and his collaborators have been working with the New York City Mayor’s Office to Combat Domestic Violence (NYC CDV).

To start, the researchers spent a year doing qualitative research with people who experience domestic violence. The research that Thomas is sharing today draws from that work.

The research team worked with the New York City Family Justice Centers, which offer a range of services for victims of domestic violence, sex trafficking, and elder abuse, from civil and legal services to access to shelters, counseling, and support from nonprofits. The centers were a crucial resource for the researchers, since they connect nonprofits, government actors, and survivors and victims. Over a series of year-long qualitative studies (see also this paper), the researchers held 11 focus groups with 39 English- and Spanish-speaking women aged 18 to 65. Most of them are no longer living with the abusive partner. They also held semi-structured interviews with 50 professionals working on IPV: case managers, social workers, attorneys/paralegals, and police officers. Together, this research represents the largest and most demographically diverse study to date on technology and IPV.

Common Technology Attacks in Intimate Partner Violence Situations

The researchers spotted a range of common themes across clients of the NYC CDV. Clients talked about stalkers who accessed their phones and social media, installed spyware, took compromising images through the spyware, and then impersonated them, using their accounts to send those compromising, intimate images to employers, family, and friends. Abusers take advantage of every available technology and attack through many channels. Overall, the researchers identified four kinds of common attacks:

  • In ownership-based attacks, the abuser owns the account or device that the victim is using, which gives the abuser immediate control over it. Often an abuser will buy a device for someone else to gain a foothold in that person’s life and home.
  • In account/device compromise, the abuser compels, guesses, or otherwise compromises passwords.
  • Harmful messages or posts involve calling, texting, or messaging the victim. This can also involve harassing a victim’s friends and family, and sometimes encouraging other people to harass that person by proxy.
  • Abusers also expose private information: blackmailing someone by threat of exposure, sharing non-consensual intimate images, and creating fake profiles or advertisements for that person on other sites.

In many of these cases, abusers re-purpose ordinary software for harmful ends. For example, abusers enable two-factor authentication to lock victims out of their own accounts and prevent them from recovering access.

Non-Technical Infrastructures Aren’t Helping Victims & Professionals with Technical Issues

Thomas tells us that despite these risks, they didn’t find a single technologist in the network of support for people facing intimate partner violence. So it’s not surprising that these services don’t have any best practices for evaluating technology risks. On top of that, victims overwhelmingly report having insufficient technology understanding to deal with tech abuse.

Abusers are typically considered to be “more tech-savvy” than victims, and professionals overwhelmingly report having insufficient technology understanding to help with tech abuse. Many of them just google as they go.

Thomas also points out that the intersection of technology and intimate partner violence raises important legal and policy issues. First, digital abuse is usually not recognized as a form of abuse that warrants a protection order. When someone goes to a family court, they have to convince a judge to grant a protection order, and judges aren’t convinced by digital harassment, even though a protection order can legally restrict an abuser from sending messages. Second, when an abuser creates a fake account on a site like Tinder and posts “come rape me” style ads, the abuser is technically the legal owner of the account, so it can be difficult to take down the ads, especially on smaller websites that don’t respond to copyright takedown requests.

Technical Mechanisms are Failing Too: Context Undermines Existing Security Systems

Abusers aren’t the sophisticated cyber-operatives that people sometimes talk about at security conferences. Instead, the researchers saw two classes of attacks: (a) UI-bound adversaries: an adversarial but authenticated user who interacts with the system via the normal user interface, and (b) spyware adversaries, who install or repurpose commodity software for surveillance of the victim. Neither requires technical sophistication.

Why are these attacks so effective? Thomas says the reason is that the threat models and assumptions in the security world don’t match these threats. Many systems are designed to protect against a stranger on the internet who doesn’t know the victim personally and connects from elsewhere. In intimate partner violence, the attacker knows the victim personally, can guess or compel disclosure of passwords, may connect from the victim’s computer or from the same home, and may own the account or device that’s being used. The abuser is often the household earner who pays for the accounts and devices.

The same problems apply to fake accounts and the detection of abusive content. Many fake social media profiles obviously belong to the abuser, but survivors are rarely able to prove it. When abusers send hurtful, abusive messages, someone who lacks the context may not be able to detect the abuse. Outside the context of IPV, a picture of a gun might be just a picture of a gun, but in context, it can be very threatening.

Common Advice Also Fails Victims

Much of the common advice just won’t work. Sometimes people are urged to delete their accounts, but you can’t just shut off contact with an abuser: you might be legally obligated to communicate (shared custody of children, for example). You can’t get new devices when the abuser pays for the phones, the family plan, and/or the children’s devices (which are themselves a vector of surveillance). People can’t necessarily get off social media, because they need it to stay connected to friends and family. On top of that, any of these actions could escalate abuse; victims are reluctant to cut off access or uninstall spyware because they fear further violence from the abuser.

Many Makers of Spyware Promote their Software for Intimate Partner Surveillance

Next, Thomas tells us about intimate partner surveillance (IPS), drawing on a new paper led by Diana Freed on How Intimate Partner Abusers Exploit Technology. Shelters and family justice centers have had cases where someone shows up with software on their phone that allows the abuser to track them, kick down a door, and endanger the victim. No one could name a single product used by abusers, partly because our ability to diagnose spyware from a technical perspective is limited. On the other hand, if you google “track my girlfriend,” you will find a host of companies peddling spyware.

To study the range of spyware systems, Thomas and his colleagues used “snowball” searching, using search auto-complete to find related queries that other people were searching for. From a set of roughly 27k URLs, they investigated 100 randomly sampled URLs. They found that 60% were related to intimate partner surveillance: how-to blogs, Q&A forums, news articles, app websites, and links to apps on the Google Play Store and the Apple App Store. Many professional-grade spyware providers offer apps directly through the app stores, as well as “off-store” apps. The researchers labeled a thousand of the apps they found and discovered that about 28% of them were potential IPS tools.
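
To make the snowball idea concrete, here is a minimal sketch of my own (not the researchers’ actual pipeline): the expansion can be thought of as a breadth-first search over queries, where each query is expanded through auto-complete suggestions until a budget is reached. The autocomplete function below is a hypothetical stand-in for whatever suggestion API one uses, with made-up canned suggestions so the sketch runs on its own.

    from collections import deque

    def autocomplete(query):
        """Hypothetical stand-in for a search-engine suggestion API.

        The real study relied on auto-complete suggestions from web search;
        this stub returns canned examples so the sketch is self-contained.
        """
        canned = {
            "track my girlfriend": ["track my girlfriend's phone",
                                    "track my girlfriend location free"],
            "track my girlfriend's phone": ["app to track my girlfriend's phone"],
        }
        return canned.get(query, [])

    def snowball(seed_queries, max_queries=1000):
        """Breadth-first expansion of seed queries via auto-complete suggestions."""
        seen = set(seed_queries)
        frontier = deque(seed_queries)
        while frontier and len(seen) < max_queries:
            query = frontier.popleft()
            for suggestion in autocomplete(query):
                if len(seen) >= max_queries:
                    break
                if suggestion not in seen:
                    seen.add(suggestion)
                    frontier.append(suggestion)
        return seen

    print(snowball(["track my girlfriend"]))

In the study itself, the resulting query set fed the crawl of roughly 27k URLs described above.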

The researchers found overt tools for intimate partner surveillance, as well as systems for personal safety, theft tracking, child tracking, and employee tracking that were repurposed for abuse. In many cases, it’s hard to point to a single piece of software and say that it’s bad. While apps sometimes purport to help parents track children, searches related to intimate partner surveillance also surface paid ads for products that don’t directly claim to be for spying on partners. Ever since a ruling from the FTC, companies have worked to preserve plausible deniability.

In an audit study, the researchers emailed customer support for 11 apps (on-store and off-store) posing as an abuser. They received nine responses. Eight of them condoned using the app for intimate partner surveillance and gave advice on making the app hard to find. Only one indicated that this use could be illegal.

Many of these systems have rich capabilities: location tracking, texts, call recordings, media contents, app usage, internet activity logs, and keylogging. All of the off-store systems have covert features to hide the fact that the app is installed, and even some of the Google Play Store apps have features to make them covert.

Early Steps for Supporting Victims: Detecting Spyware

What’s the current state of the art? Right now, practitioners tell people that if their battery runs unusually low, they may be a victim of spyware, which isn’t very effective advice. Do spyware removal tools work? They had high but not perfect detection rates for off-store intimate partner surveillance systems. However, they did a poor job of detecting on-store spyware tools.


Thomas recaps what they learned from this study: there’s a large ecosystem of spyware apps, the dual use of these apps creates a significant challenge, many developers condone intimate partner surveillance, and existing anti-spyware technologies are insufficient at detecting these tools.

Based on this work, Thomas and his collaborators are working with the NYC Mayor’s office and the National Network to End Domestic Violence to develop ways to detect spyware, new surveys of technology risks, and new kinds of interventions.

Thomas concludes with an appeal to companies and computer scientists that we pay more attention to the needs of the most vulnerable people affected by our work, volunteer for organizations that support victims, and develop new approaches to protect people in these all-too-common situations.

Workshop on Technical Applications of Contextual Integrity

The theory of contextual integrity (CI) has inspired work across the legal, privacy, computer science, and HCI research communities. Recognizing common interests and common challenges, the time seemed ripe for a meeting to discuss what we have learned from projects using CI and how to move forward in leveraging CI to enhance privacy-preserving systems and policies. On December 11, 2017, the Center for Information Technology Policy hosted an inaugural workshop on Technical Applications of Contextual Integrity. The workshop gathered over twenty researchers from Princeton University, New York University, Cornell Tech, University of Maryland, Data & Society, and AI Now to present their ongoing and completed projects, discuss and share ideas, and explore successes and challenges when using the CI framework. The meeting, which included faculty, postdocs, and graduate students, was kicked off with a welcome and introduction by Ed Felten, CITP Director.

The agenda comprised two main parts. In the first half of the workshop, representatives of various projects gave short presentations on the status of their work, the challenges they encountered, and the lessons learned in the process. The second half included a planning session for a full-day event to take place in the spring to allow for a bigger discussion and exchange of ideas.

The workshop presentations touched on a wide variety of topics, including: ways of operationalizing CI, discovering contextual norms behind children’s online activities, capturing users’ expectations of smart toys and smart-home devices, demonstrating how CI can be used to analyze legislation, applying CI to establish research ethics guidelines, and conceptualizing privacy within commons governance arrangements.

More specifically:

Yan Shvartzshnaider discussed the Verifiable and ACtionable Contextual Integrity Norms Engine (VACCINE), a framework for building adaptable and modular Data Leakage Prevention (DLP) systems (see the sketch of a CI norm check after this list of presentations).

Darakshan Mir discussed a community-based participatory framework for the discovery of contextual informational norms in small and vulnerable communities.

Sebastian Benthall shared the key takeaways from a survey of existing computer science literature that uses Contextual Integrity.

Paula Kift discussed how the theory of contextual integrity can be used to analyze the recently passed Cybersecurity Information Sharing Act (CISA), revealing some fundamental gaps in the way it conceptualizes privacy.

Ben Zevenbergen talked about his work on applying the theory of contextual integrity to help establish guidelines for Research Ethics.

Madelyn Sanfilippo discussed conceptualizing privacy within a commons governance arrangement using the Governing Knowledge Commons (GKC) framework.

Priya Kumar presented recent work on using Contextual Integrity to identify gaps in children’s online privacy knowledge.

Sarah Varghese and Noah Apthorpe discussed their work on discovering privacy norms for IoT devices using Contextual Integrity.
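
Since several of these projects operationalize CI norms in software, here is a minimal sketch of my own (not the VACCINE implementation) of how a CI norm, the five-parameter tuple of sender, subject, recipient, information type, and transmission principle, might be checked against a proposed information flow. The wildcard matching and the allow-list semantics are simplifying assumptions for illustration only.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        """A concrete information flow, described by CI's five parameters."""
        sender: str
        subject: str
        recipient: str
        attribute: str              # the information type
        transmission_principle: str

    @dataclass(frozen=True)
    class Norm:
        """An entrenched informational norm; '*' matches any value."""
        sender: str
        subject: str
        recipient: str
        attribute: str
        transmission_principle: str

        def allows(self, flow: Flow) -> bool:
            pairs = [
                (self.sender, flow.sender),
                (self.subject, flow.subject),
                (self.recipient, flow.recipient),
                (self.attribute, flow.attribute),
                (self.transmission_principle, flow.transmission_principle),
            ]
            return all(pattern in ("*", value) for pattern, value in pairs)

    def conforms(flow: Flow, norms: list) -> bool:
        """A flow conforms to contextual integrity if some norm allows it."""
        return any(norm.allows(flow) for norm in norms)

    norms = [Norm("physician", "patient", "specialist", "medical record",
                  "with patient consent")]
    flow = Flow("physician", "patient", "advertiser", "medical record",
                "for payment")
    print(conforms(flow, norms))  # False: no entrenched norm permits this flow

A real system would of course need richer matching and a way to elicit the norms themselves, which is exactly what several of the projects above investigate.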

The roundtable discussion covered a wide range of open questions, such as the limitations of CI as a theory, possible extensions, integration with other frameworks, conflicting interpretations of the CI parameters, possible research directions, and ideas for collaboration.

This was a first attempt to see how much interest there is from the wider research community in a CI-focused event, and we were overwhelmed by the incredible response! The participants expressed strong interest in the bigger event in Spring 2018 and put forward a number of suggestions for its format. One idea is to organize the bigger workshop as a joint event with an established conference; another suggestion was to run it as a hands-on workshop that brings together industry and academia. We are excited about an event that will bring together a broad sample of CI-related research, both academically and geographically, and allow a much wider discussion.

The ultimate goal of this and other future initiatives is to foster communication between the various communities of researchers and practitioners using the theory of CI as a framework to reason about privacy and a language for sharing of ideas.

In the meantime, please check out the http://privaci.info website, which will serve as a central repository of news and up-to-date related work for the community. We will be updating it in the coming months.

We look forward to your feedback and suggestions. If you’re interested in hearing about the Spring workshop or presenting your work, want to help, or have any suggestions, please get in touch!

Twitter: @privaci_way

Email: