Stanford Law School held a panel Thursday on “Legal Challenges in an Age of Robotics”. I happened to be in town so I dropped by and heard an interesting discussion.
Here’s the official announcement:
Once relegated to factories and fiction, robots are rapidly entering the mainstream. Advances in artificial intelligence translate into ever-broadening functionality and autonomy. Recent years have seen an explosion in the use of robotics in warfare, medicine, and exploration. Industry analysts and UN statistics predict equally significant growth in the market for personal or service robotics over the next few years. What unique legal challenges will the widespread availability of sophisticated robots pose? Three panelists with deep and varied expertise discuss the present, near future, and far future of robotics and the law.
The key questions are how robots differ from past technologies, and how those differences change the law and policy issues we face.
Three aspects of robots seemed to recur in the discussion: robots take action that is important in the world; robots act autonomously; and we tend to see robots as beings and not just machines.
The last issue — robots as beings — is mostly a red herring for our purposes, notwithstanding its appeal as a conversational topic. Robots are nowhere near having the rights of a person or even of a sentient animal, and I suspect that we can’t really imagine what it would be like to interact with a robot that qualified as a conscious being. Our brains seem to be wired to treat self-propelled objects as beings — witness the surprising acceptance of robot “dogs” that aren’t much like real dogs — but that doesn’t mean we should grant robots personhood.
So let’s set aside the consciousness issue and focus on the other two: acting in the world, and autonomy. These attributes are already present in many technologies today, even in the purely electronic realm. Consider, for example, the complex of computers, network equipment, and software that makes up Google’s data centers. Its actions have significant implications in the real world, and it is autonomous, at least in the sense that the panelists seemed to be using the term “autonomous” — it exhibits complex behavior without direct, immediate human instruction, and its behavior is often unpredictable even to its makers.
In the end, it seemed to me that the legal and policy issues raised by future robots will not be new in kind, but will just be extrapolations of the issues we’re already facing with today’s complex technologies — and not a far extrapolation but more of a smooth progression from where we are now. These issues are important, to be sure, and I was glad to hear smart panelists debating them, but I’m not convinced yet that we need a law of the robot. When it comes to the legal challenges of technology, the future will be like the past, only more so.
Still, if talking about robots will get policymakers to pay more attention to important issues in technology policy, then by all means, let’s talk about robots.