Hi, I’m Joanna Bryson, and I’m just starting as a fellow at CITP, on sabbatical from the University of Bath. I’ve been blogging about natural and artificial intelligence since 2007, increasingly with attention to public policy. I’ve been writing about AI ethics since 1998. This is my first blog post for Freedom to Tinker.
Will robots take our jobs? Will they kill us in war? The answer to these questions depends not (just) on technological advances – for example in the area of my own expertise, AI – but on how we as a society choose to view what it means to be a moral agent. This may sound esoteric, and indeed the term moral agent comes from philosophy. An agent is something that changes its environment (so chemical agents cause reactions). A moral agent is something society holds responsible for the changes it effects.
Should society hold robots responsible for taking jobs or killing people? My argument is “no”. The fact that humans have full authorship over robots’ capacities, including their goals and motivations, means that transferring responsibility to them would require abandoning, ignoring or just obscuring the obligations of the humans and human institutions that create the robots. Using language like “killer robots” can confuse a tax-paying public already easily led by science fiction and runaway agency detection into believing that robots are sentient competitors. This belief ironically serves to protect the people and organisations that are actually the moral actors.
So robots don’t kill or replace people; people use robots to kill or replace each other. Does that mean there’s no problem with robots? Of course not. Asking whether robots (or any other tools) should be subject to policy and regulation is a very sensible question.
In my first paper about robot ethics (you probably want to read the 2011 update for IJCAI, Just an Artifact: Why Machines are Perceived as Moral Agents), Phil Kime and I argued that as we gain greater experience of robots, we will stop reasoning about them so naïvely, and stop ascribing moral agency (and patiency [PDF, draft]) to them. Whether or not we were right is an empirical question I think would be worth exploring – I’m increasingly doubting whether we were. Emotional engagement with something that seems humanoid may be inevitable. This is why one of the five Principles of Robotics (a UK policy document I coauthored, sponsored by the British engineering and humanities research councils) says “Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.” Or in ordinary language, “Robots are artifacts; they should not be designed to exploit vulnerable users by evoking an emotional response or dependency. It should always be possible to tell a robot from a human.”
Nevertheless, I hope that by continuing to educate the public, we can at least help people make sensible conscious decisions about allocating their resources (such as time or attention) between real humans versus machines. This is why I object to language like “killer robots.” And this is part of the reason why my research group works on increasing the transparency of artificial intelligence.
However, maybe the emotional response we have to the apparently human-like threat of robots will also serve some useful purposes. I did sign the “killer robot” letter, because although I dislike the headlines associated with it, the actual letter (titled “Autonomous Weapons: an Open Letter from AI & Robotics Researchers”) makes clear the nature of the threat of taking humans out of the loop on real-time kill decisions. Similarly, I am currently interested in understanding the extent to which information technology, including AI, is responsible for the levelling off of wages since 1978. I am still reading and learning about this; I think it’s quite possible that the problem is not information technology per se, but rather culture, politics and policy more generally. However, 1978 was a long time ago. If more pictures of the Terminator get more people attending to questions of income inequality and the future of labour, maybe that’s not a bad thing.
Just came upon this blog and welcome the insightful content for use with my 9th grade Exploring Computer Science class. Thanks
I’m tempted to leave it at Oh Look’s excellent point, but to quickly clarify and address a couple of RJD’s points:
Nowhere in my post or writings do I comment on Asimov’s Laws; it’s all been said before, e.g. http://www.economist.com/blogs/babbage/2014/09/babbage-september-23rd-2014 (the first 3.5 minutes are about space robots, the last 4 about robot law).
The NRA reference was a joke; the fact that I recommended regulating all dangerous artefacts (which would include guns) is the giveaway that I’m not really taking their line.
You are right if by “not being in charge” you mean we have never had full control over or understanding of how our intelligence, culture, and biology run. In fact, one of the dangers of AI and big data is that we may be getting so much new information about human behaviour that those with access to that information may have more control over society and/or individuals than our laws or ethical norms are currently able to moderate. That’s another point taken up in the short Economist audio linked above: our legislatures are already coming up with solutions for more agile regulation of developing technology.
But I think you are wrong if you are saying that because of our ignorance we are therefore not responsible. The thing about governance and policy is that individuals and organisations always need to act. Even withholding action deliberately at a recognised opportunity is a form of action – a deliberate failure to intervene. Given that we now have the capacity to choose our priorities, ignoring that option seems like an abrogation of responsibility. Notice, though, that I’m qualifying my claims with “I think” and “it seems”. Now I am talking about normative recommendations, and these cannot be proved with certainty. But this is what policy is all about. We can examine our possible actions or inactions and state their likely outcomes. But choosing which policy then to deploy is governance.
You say my distinctions don’t hold up to scrutiny, but in fact there is only one. The key distinction I make is between what has been evolved, largely without even cognisant observation, and what we have built, deliberately, in the framework of the civilisations that define terms like “just” and “moral.” I agree with you that there’s a continuum, that culture itself often evolves without intent. But since words like “ought” have meaning in our society, we should apply them to problems like these.
There’s no need to be insulting, RJD. If grown-up talk about technology and ethics interferes with your enjoyment of the posthuman fantasy world in your head, then just read something else.
It is hard to know whether to laugh or cry at such rubbish. Really, trying to get The 3 Laws of Robotics taken seriously in law as anything other than a sci-fi thematic device? We will leave the absurdity of that to others.
The rest of the article is more insidious, the ethics of the NRA writ large: robots don’t kill people, people kill people. To kill has several meanings, including the physical process of killing (falling rocks kill people) and the motive for killing (John Wilkes Booth killed Lincoln over Southern sympathies). Confusing the two meanings produces the NRA pseudo-ethics also used in this article.
“Robot” is such a 20th century term, laden with much mythos, so let’s use a more accurate one – autonomous entity. Just to avoid the NRA dodge, we will remove the superfluous word “agent” and replace it with “entity”. With advances in AI seen in my lifetime that I frankly thought would not happen, the real question becomes: what is the difference, if any, between an autonomous entity and a sentient being?
The emergence of AI-based systems is not a question of how much they resemble old-fashioned Cartesian “automatons” built on gear-based mechanical technologies, where “function” and “purpose” were clearly separated by the intent of an intelligent entity (and hence the whole question of ethics arises). Rather, the question becomes: what is the limit of information processing, to the point where it becomes intelligent, whether cellular or alternately based?
My genes created “me” as the self-image of a neural network generated by genetic code. I have some limited kind of free will, or at least the illusion of it. Nonetheless I have been driven to pass my genes on to the next generation, thus fulfilling the basic function set by the forces that created me. I will die to get out of the way, as just one more step in the greatest genetic algorithm ever run – organic chemical life on this particular planet.
I see no fundamental break in this process when “we”, the generated gene-machines of evolutionary history, use our neural networks in turn to create something with sensory capability, possessing a self-image and the ability to interact with the world in order to achieve and, more importantly, further create its own goals based upon its own experience. We are not gods, just players doing our part on a stage that appears to be moving to another level barely conceivable on this one.
“We” are not in charge of the process, never were, and certainly won’t be. The distinctions proposed in this article do not hold up to scrutiny and offer no insight or protection as we navigate what is likely a transition zone. Unless we blow ourselves up, taking the chance of such a future with us. A solution that I do not propose.