March 26, 2019

Bridging Tech-Military AI Divides in an Era of Tech Ethics: Sharif Calfee at CITP

In a time when U.S. tech employees are organizing against corporate-military collaborations on AI, how can the ethics and incentives of military, corporate, and academic research be more closely aligned on AI and lethal autonomous weapons?

Speaking today at CITP was Captain Sharif Calfee, a U.S. Naval Officer who serves as a surface warfare officer. He is a graduate of the U.S. Naval Academy and U.S. Naval Postgraduate School and a current MPP student at the Woodrow Wilson School.

Afloat, Sharif most recently served as commanding officer of USS McCAMPBELL (DDG 85), an Aegis guided missile destroyer. Ashore, he was most recently selected for the Federal Executive Fellowship program and served as the U.S. Navy fellow to the Center for Strategic & Budgetary Assessments (CSBA), a non-partisan national security policy analysis think tank in Washington, D.C.

Sharif spoke to CITP today with some of his own views (not speaking for the U.S. government) about how research and defense can more closely collaborate on AI.

Over the last two years, Sharif has been working on ways for the Navy to accelerate AI and adopt commercial systems to get more unmanned systems into the fleet. Toward this goal, he recently interviewed 160 people at 50 organizations. His talk today is based on that research.

Sharif next tells us about a rift between the U.S. government and companies/academia on AI. This rift, he says, is a symptom of a growing “civil-military divide” in the U.S. In previous generations, big tech companies worked closely with the U.S. military, and a majority of elected representatives in Congress had prior military experience. That’s no longer true: there is now a bifurcation between the experiences of Americans who serve in the military and those who haven’t. This lack of familiarity, he says, complicates moments when companies and academics discuss the potential of working with and for the U.S. military.

Next, Sharif says that conversations about tech ethics in the technology industry are creating a conflict that makes it difficult for the U.S. military to work with tech companies. He tells us about Project Maven, a project that Google and the Department of Defense worked on together to analyze drone footage using AI. Its purpose was to reduce casualties among civilians who are not battlefield combatants. The project, which wasn’t secret, burst into public awareness after a New York Times article and a letter from over three thousand Google employees. Google declined to renew the DOD contract and updated its motto.

U.S. Predator Drone (via Wikimedia Commons)

On the heels of the Project Maven decision, Google also faced criticism for working with the Chinese government to provide services in China in ways that enabled certain kinds of censorship. Suddenly, Google found itself answering questions about why it was collaborating with China on AI but not with the U.S. military.

How do we resolve this impasse in collaboration?

  • The defense acquisition process is hard for small, nimble companies to engage in
  • Defense contracts are too slow, too expensive, too bureaucratic, and not profitable
  • Companies aren’t necessarily interested in the same types of R&D products that the DOD wants
  • National security partnerships with gov’t might affect opportunities in other international markets.
  • The Cold War is “ancient history” for the current generation
  • Global, international corporations don’t want to take sides on conflicts
  • Companies and employees seek to create good. Government R&D may conflict with that ethos

Academics also have reasons not to work for the government:

  • Worried about how their R&D will be utilized
  • Schools or faculty may philosophically disagree with the government
  • Universities are incubators of international talent, and government R&D could be divisive, not inclusive
  • Government R&D is sometimes kept secret, which hurts academic careers

Faced with this, according to Sharif, the U.S. government is sometimes baffled by people’s ideological concerns. Many in government remember the Cold War and knew people who lived through and fought in World War Two. They can be resentful about a cold shoulder from academics and companies, especially since the military funded foundational work in computer science and AI.

Sharif tells us that R&D reached an inflection point in the 1990s. During the Cold War, new technologies (the internet, GPS, nuclear technology) were developed through defense funding and then reached industry. Now the reverse happens: technologies like AI are developed in the commercial sector and then reach government. That flow is not very nimble. DOD acquisition systems are designed for projects that take 91 months to complete (like a new airplane), while companies adopt AI technologies in 6-9 months (see this report by the Congressional Research Service).

Conversations about policy and law also constrain the U.S. government from developing and adopting lethal autonomous weapons systems, says Sharif. Even as we grapple with important questions about the ethical risks of AI, Sharif tells us that other governments don’t face the same restrictions. He asks us to imagine what would have happened if nuclear weapons hadn’t been developed first by the U.S.

How can divides between the U.S. government and companies/academia be bridged? Sharif suggests:

  • The U.S. government must substantially increase R&D funding to help regain influence
  • Establish a prestigious one-year DOD/government R&D fellowship program for top-notch STEM grads before they join the commercial sector
  • Expand on the Defense Innovation Unit
  • Elevate the Defense Innovation Board in prominence and expand the project to create conversations that bridge ideological divides, organizing discussions at both senior and middle-management levels to accelerate this familiarization
  • Increase DARPA and other collaborations with commercial and academic sectors
  • Establish joint DOD and Commercial Sector exchange programs
  • Expand the number of DOD research fellows and scientists present on university campuses in fellowship programs
  • Continue to reform DOD acquisition processes to streamline for sectors like AI

Sharif has also recommended to the U.S. Navy that they create an Autonomy Project Office to enable the Navy to better leverage R&D. The U.S. Navy has used structures like this for previous technology transformations on nuclear propulsion, the Polaris submarine missiles, naval aviation, and the Aegis combat system.

At the end of the day, says Sharif, what happens in a conflict where the U.S. no longer has the technological overmatch and is instead overmatched by someone else? What are the real-life consequences? That’s what’s at stake in collaborations between researchers, companies, and the U.S. Department of Defense.

Princeton Dialogues on AI and Ethics: Launching case studies

Summary: We are releasing four case studies on AI and ethics, as part of the Princeton Dialogues on AI and Ethics.

The impacts of rapid developments in artificial intelligence (“AI”) on society—both real and not yet realized—raise deep and pressing questions about our philosophical ideals and institutional arrangements. AI is currently applied in a wide range of fields—such as medical diagnosis, criminal sentencing, online content moderation, and public resource management—but it is only just beginning to realize its potential to influence practically all areas of human life, including geopolitical power balances. As these technologies advance and increasingly come to mediate our everyday lives, it becomes necessary to consider how they may reflect prevailing philosophical perspectives and preferences. We must also assess how the architectural design of AI technologies today might influence human values in the future. This step is essential in order to identify the positive opportunities presented by AI and unleash these technologies’ capabilities in the most socially advantageous way possible while being mindful of potential harms. Critics question the extent to which individual engineers and proprietors of AI should take responsibility for the direction of these developments, or whether centralized policies are needed to steer growth and incentives in the right direction. What even is the right direction? How can it be best achieved?

Princeton’s University Center for Human Values (UCHV) and the Center for Information Technology Policy (CITP) are excited to announce a joint research project, “The Princeton Dialogues on AI and Ethics,” in the emerging field of artificial intelligence (broadly defined) and its interaction with ethics and political theory. The aim of this project is to develop a set of intellectual reasoning tools to guide practitioners and policy makers, both current and future, in developing the ethical frameworks that will ultimately underpin their technical and legislative decisions. More than ever before, individual-level engineering choices are poised to impact the course of our societies and human values. And yet there have been limited opportunities for AI technology actors, academics, and policy makers to come together to discuss these outcomes and their broader social implications in a systematic fashion. This project aims to provide such opportunities for interdisciplinary discussion, as well as in-depth reflection.

We convened two invitation-only workshops in October 2017 and March 2018, in which philosophers, political theorists, and machine learning experts met to assess several real-world case studies that elucidate common ethical dilemmas in the field of AI. The aim of these workshops was to facilitate a collaborative learning experience which enabled participants to dive deeply into the ethical considerations that ought to guide decision-making at the engineering level and highlight the social shifts they may be affecting. The first outcomes of these deliberations have now been published in the form of case studies. To access these educational materials, please see our dedicated website https://aiethics.princeton.edu. These cases are intended for use across university departments and in corporate training in order to equip the next generation of engineers, managers, lawyers, and policy makers with a common set of reasoning tools for working on AI governance and development.

In March 2018, we also hosted a public conference, titled “AI & Ethics,” where interested academics, policy makers, civil society advocates, and private sector representatives from diverse fields came to Princeton to discuss topics related to the development and governance of AI: “International Dimensions of AI” and “AI and Its Democratic Frontiers”. This conference sought to use the ethics and engineering knowledge foundations developed through the initial case studies to inspire discussion on AI technology’s wider social effects.

This project is part of a wider effort at Princeton University to investigate the intersection between AI technology, politics, and philosophy. There is a particular emphasis on the ways in which the interconnected forces of technology and its governance simultaneously influence and are influenced by the broader social structures in which they are situated. The Princeton Dialogues on AI and Ethics makes use of the university’s exceptional strengths in computer science, public policy, and philosophy. The project also seeks opportunities for cooperation with existing projects in and outside of academia.

The Rise of Artificial Intelligence: Brad Smith at Princeton University

What will artificial intelligence mean for society, jobs, and the economy?

Speaking today at Princeton University is Brad Smith, President and Chief Legal Officer of Microsoft. I was in the audience and live-blogged Brad’s talk.

CITP director Ed Felten introduces Brad’s lecture by saying that the tech industry is at a crossroads. With the rise of AI and big data, people have realized that the internet and technology are having a big, long-term effect on many people’s lives. At the same time, we’ve seen increased skepticism about technology and the role of the tech industry in society.

The good news, says Ed, is that plenty of people in the industry are up to the task of explaining what the industry does to cope with these problems in a productive way. What the industry needs now, says Ed, is what Brad offers: a thoughtful approach that acknowledges the challenges our society faces and the role of tech companies, seeks constructive solutions, and takes responsibility in ways that work across society. If there’s one thing we could do to help the tech industry cope with these questions, says Ed, it would be to clone Brad.

Imagining Artificial Intelligence in Thirty Years

Brad opens by mentioning his team’s new book, The Future Computed: Artificial Intelligence and Its Role in Society. While writing the book, they realized that it’s not helpful to think about change in the next year or two. Instead, we should be thinking in periods of ten to thirty years.

What was life like twenty years ago? In 1998, people often began their day without anything digital. They would turn on a television, listen to the radio, and pull out a paper calendar. If you needed to call someone, you used a landline phone to reach them. At the time, a common joke was about whether anyone could program their VCR.

In 2018, the first thing that many people reach for is their phone. Even if you manage to keep your phone in another room, you’ll find yourself reaching for your phone or sitting down in front of your laptop. You now use those devices to find out what happened in the world and with your friends.

What will the world look like in 2038? By that time, Brad argues that we’ll be living with artificial intelligence. Digital assistants are already part of our lives, but they’ll be more common at that time. Rather than looking at lots of apps, we’ll have a digital assistant that will talk to us and tell us what the traffic will be like for us. Twenty years from now, you’ll probably have your digital assistant talking to you as you shave or put on your makeup in the morning.

What is Artificial Intelligence?

To understand what that means for our lives, we need to understand what artificial intelligence really is. Even today, computers can recognize people, and they can do more: they can make sense of someone’s emotions from their face. We’ve seen the same with computers’ ability to understand language, Brad says. Not only can computers recognize speech, they can also sift through knowledge, make sense of it, and reach conclusions.

In the world today, we read about AI and expect it all to arrive one day, says Brad. That’s not how it’s going to work: AI will become part of our lives piece by piece. He tells us about the BMW pedestrian alert, which allows cars to detect pedestrians, beep, signal the driver, and apply the brakes. Brad also tells us about the Steno app, which records and transcribes audio. Microsoft now has a version of Skype that detects and auto-translates conversations, something they’ve now integrated with PowerPoint. Spotify, Netflix, and iTunes all use artificial intelligence to suggest what to watch or listen to next. None of these systems works with 100% perfection, but neither do human beings. When asking about an AI system, we need to ask when computers will become as good as a human being.

What advances make AI real? Microsoft, Amazon, Google, and others build data centers spanning many football fields. This enables companies to amass huge computational power and vast amounts of data. Because algorithms get better with more data, companies have an insatiable appetite for data.

The Challenges of Imagining the Future

All of this is exciting, says Brad, and could deliver huge promise for the world. But we can’t afford to look at this future with uncritical eyes; the world needs to make sense of the risks. As computers behave more like humans, what will that mean for real people? People like Stephen Hawking and Elon Musk are warning us about that future. But there is no crystal ball. Brad says he has long admired futurists, but if a futurist gets something wrong, probably nobody remembers it. We may be able to discern patterns, but nobody has a crystal ball.

Learning from The History of the Automobile

How can we think about what may be coming? The first option is to learn from history: not because it repeats itself, but because it provides insights. To illustrate this, Brad starts with the transition from horses to automobiles. He shows us a photo of Bertha Benz, whose dowry paid for her husband Karl’s new business. One morning in 1888, she got up and left her husband a note saying that she was taking the car and driving the kids 70 kilometers to visit her mother. She had to repair the car along the way, but by the end of the day, they had reached her mother’s house. This stunt helped convince the world that the automobile would be important to the future.

Next, Brad shows us a photo of New York City in 1905, with streets full of horses and hardly any cars. Twenty years later, there were no horses on the streets. The horse population declined, and the jobs that supported them disappeared. Yet these direct economic effects weren’t as important as the indirect effects. Consumer credit wasn’t necessarily connected to the automobile, but it was an indirect outcome: once people wanted to buy cars, they needed a way to finance them. Advertising also changed: when people were driving past billboards at speed, advertisers invented logos to make their companies more recognizable.

How Institutions Evolve to Meet Technology & Economic Changes

The effects of the automobile weren’t all good. As the horse population declined, farmers grew less hay, shifting their acreage to wheat and corn, and prices plummeted. As prices plummeted, so did farmers’ incomes. When farmers fell behind on their loans, rural banks tried to foreclose on them, leading to broad financial collapse. Many of the things we take for granted today come from that experience: the FDIC and insurance regulation, farm subsidies, and many other parts of our infrastructure. With AI, we need to be prepared for changes just as substantial.

Understanding the Impact of AI on the Economy

Brad tells us another story, about how offices worked. In the 1980s, you handed someone a hand-written document and they typed it for you. Between the 1980s and today, two big changes happened. First, secretarial staff declined and the professional IT staff was born. Second, people realized that everyone needed to understand how to use computers.

As we think about how work will change, we need to ask what jobs AI will replace. To answer this question, consider what computers do well: vision, speech, and language knowledge. Jobs involving decision-making are already being done by computers (radiology, call centers, fast food orders, driving). Jobs involving translation and learning will also become automated, including machinery inspection and the work of paralegals. At Microsoft, the company used to have multiple people whose job was to inspect fire extinguishers. Now the company has devices that automatically record data on the extinguishers’ status, reducing the work involved in maintaining them.

Some jobs are less likely to be replaced by AI, says Brad: anything that requires human understanding and empathy. Nurses, social workers, therapists, and teachers are more likely to be people who will use AI than be replaced by it. This may lead people toward jobs they find more satisfying.

Some of the most exciting developments for AI in the next five years will be in the area of disability. Brad shows us a project called Seeing AI, an app that describes a person’s surroundings using a phone camera. The app can read barcodes and identify food, identify currency bills, describe a scene, and read text in one’s surroundings. What’s exciting is what it can do for people: the project has already carried out 3 million tasks, and it’s getting better and smarter as it goes. This system could be a game changer for people who are blind, says Brad.

Why Ethics Will Be a Growth Area for AI

What jobs will AI create? It’s easier to think about the jobs it will replace than the jobs it will create. When young people in kindergarten today enter the workplace, he says, the majority of jobs will be ones that don’t yet exist. Some of the new jobs will be ones that support AI itself: computer science, data science, and ethics. “Ultimately, the question is not only what computers *can* do,” says Brad, “it’s what computers *should* do.” Within the ethics of AI, the fields of reliability/safety and privacy/security are well developed. Other important areas, such as fairness and inclusiveness, are less well developed. Two issues underlie all the rest. Transparency is important because people need to understand how these systems work.

AI Accountability and Transparency

Finally, one of the most important questions of our time is how to ensure the accountability of machines: will machines be accountable to people, and will those people be accountable to other people? Only with accountability will we be able to trust these systems.

What would it mean to create a Hippocratic oath for AI developers? Brad asks: what does it take to train a new generation of people to work on AI with that kind of commitment and principle in mind? These aren’t just questions for people at big tech companies. As companies, governments, universities, and individuals take the building blocks of AI and use them, AI ethics is becoming important to every part of society.

Artificial Intelligence Policy

If we are to stay true to timeless values, says Brad, we need to ask whether we want only ethical people to behave ethically, or everyone. That’s what law does, and AI will create new questions for public policy and the evolution of the law. That’s why skilling up for the future isn’t just about science, technology, engineering, and math: as computers behave more like humans, the social sciences and humanities will become even more important. That’s why diversity in the tech industry also matters, says Brad.

How AI is Transforming the Liberal Arts, Engineering, and Agriculture

Brad encourages us to think about disciplines that AI can make more impactful: AI is changing healthcare (cures for cancer), agriculture (precision farming), accessibility, and our environment. He concludes with two examples. First, Brad talks about the Princeton Geniza Lab, led by Marina Rustow, which is using AI to analyze documents that have been scattered all around the world; using AI, researchers are joining these digitized fragments. Engineering isn’t only for the engineers: everybody in the liberal arts can benefit from learning a little computer science and data science, and every engineer is going to need more liberal arts in their future. Brad also tells us about the AI for Earth project, which provides seed funding to researchers who work on the future of the planet. Projects include smart grids in Norway that make energy usage more efficient, a Singaporean government project to do smart climate control in buildings, and a project in Tasmania that supports precision farming, saving 30% on irrigation costs.

These examples give us a glimpse of what it means to prepare for an AI-powered future, says Brad. We’re also going to need to do more work: we may need a new social contract, because people will need to learn new skills and find new career pathways, and we will need to create new labor rules and protections and rethink the social safety net as these changes ripple through the economy.

Creating the Future of Artificial Intelligence

Where will AI take us? Brad encourages students to think about the needs of the world and what AI has to offer. It’s going to take a whole generation to think through what AI has to offer and to create that future, and he encourages today’s students to seize that challenge.