Stanford Law School held a panel Thursday on “Legal Challenges in an Age of Robotics”. I happened to be in town, so I dropped by and heard an interesting discussion.
Here’s the official announcement:
Once relegated to factories and fiction, robots are rapidly entering the mainstream. Advances in artificial intelligence translate into ever-broadening functionality and autonomy. Recent years have seen an explosion in the use of robotics in warfare, medicine, and exploration. Industry analysts and UN statistics predict equally significant growth in the market for personal or service robotics over the next few years. What unique legal challenges will the widespread availability of sophisticated robots pose? Three panelists with deep and varied expertise discuss the present, near future, and far future of robotics and the law.
The key questions are how robots differ from past technologies, and how those differences change the law and policy issues we face.
Three aspects of robots recurred in the discussion: robots take actions that matter in the world; robots act autonomously; and we tend to see robots as beings rather than just machines.
The last issue — robots as beings — is mostly a red herring for our purposes, notwithstanding its appeal as a conversational topic. Robots are nowhere near having the rights of a person or even of a sentient animal, and I suspect that we can’t really imagine what it would be like to interact with a robot that qualified as a conscious being. Our brains seem to be wired to treat self-propelled objects as beings — witness the surprising acceptance of robot “dogs” that aren’t much like real dogs — but that doesn’t mean we should grant robots personhood.
So let’s set aside the consciousness issue and focus on the other two: acting in the world, and autonomy. These attributes are already present in many technologies today, even in the purely electronic realm. Consider, for example, the complex of computers, network equipment, and software that makes up Google’s data centers. Its actions have significant implications in the real world, and it is autonomous, at least in the sense in which the panelists seemed to be using the term: it exhibits complex behavior without direct, immediate human instruction, and its behavior is often unpredictable even to its makers.
In the end, it seemed to me that the legal and policy issues raised by future robots will not be new in kind, but will just be extrapolations of the issues we’re already facing with today’s complex technologies — and not a far extrapolation but more of a smooth progression from where we are now. These issues are important, to be sure, and I was glad to hear smart panelists debating them, but I’m not convinced yet that we need a law of the robot. When it comes to the legal challenges of technology, the future will be like the past, only more so.
Still, if talking about robots will get policymakers to pay more attention to important issues in technology policy, then by all means, let’s talk about robots.
I believe there are ways you could actually develop an algorithm that assigns specific weights to certain facts in order to come up with a final conclusion. While it’s not that straightforward, using robots this way is possible, and the question is how we want to utilize them. If search engines can rely on robots, why not the law, since it’s basically all facts and figures involved? And why do we need to treat them as conscious beings? No emotional factor should set in and cloud the judgment.
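To make the weighting idea concrete, here is a minimal sketch in Python of the kind of weighted-fact scoring this comment describes. The fact names, weights, and threshold are all hypothetical illustrations, not anything proposed by the panel or the commenter.

```python
# Minimal sketch of scoring a case by summing weights of established facts.
# All facts, weights, and the threshold below are hypothetical examples.

def score_case(facts, weights):
    """Sum the weights of the facts found present in a case."""
    return sum(weights[f] for f, present in facts.items() if present)

# Hypothetical weights for facts bearing on a liability question.
weights = {
    "design_defect": 0.6,
    "owner_negligence": 0.3,
    "third_party_tampering": -0.4,  # cuts against manufacturer liability
}

# A hypothetical case: which facts were established.
case = {
    "design_defect": True,
    "owner_negligence": False,
    "third_party_tampering": True,
}

THRESHOLD = 0.5  # hypothetical cutoff for a finding of liability

score = score_case(case, weights)
print("score:", score)
print("liable:", score >= THRESHOLD)
```

Of course, as the discussion below suggests, the hard part is not the arithmetic but deciding which facts count and who sets the weights.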
Thanks for mentioning our panel, Ed. It was great to have you there.
I would draw a distinction between “robots as beings,” which I agree is a red herring, and robots being perceived as beings. That we are hardwired to perceive robots (and other anthropomorphic technology) as roughly human is socially and perhaps legally relevant. For instance, if we think of a robot as a person capable of evaluating us and bring it into our home, that can have repercussions for privacy, in the sense of opportunities for solitude. (I write about this in an upcoming article in Penn State Law Review.) Relatedly, there is credible evidence that soldiers are risking their lives to rescue robots under fire from the enemy in Afghanistan.
In short, I wouldn’t dismiss the importance of our tendency to form attachments to machines that resemble us, even if you’re inclined, as I am, to put off thinking about robots as rights-bearing entities. Best,
Ryan
I think the issue may well become more complex than that. The question of liability when robots damage property or injure humans is bound to get harder as the technology evolves. For simple robots, one can assign liability to the owner, operator, or manufacturer in some proportion. But the autonomous robots of the future will inevitably have to learn vast amounts of knowledge to function well. They will have to learn how to use their sense organs to “perceive” their environment, how to use their sensors and effectors in combination to do complex coordinated tasks, and how to carry out background tasks like mapping and inventorying their environments; they will have to learn the jobs they are assigned, learn to recognize individual humans by voice and sight, cooperate with other robots, take commands from humans, and obey complex safety constraints that are either built in or learned. When robots have those capabilities and so much of their behavior is determined or strongly influenced by learning, who is to say where the liability lies for an accident? The manufacturer shouldn’t be responsible unless there is a defect of design, manufacture, or initial programming. The operator is not at fault, because there is none. Perhaps the trainers or owners share responsibility. But what if the robot was attacked by malicious code of unknown origin, or at least that possibility cannot be ruled out? How do we assign liability then? I am not saying we will never systematize the law in this area, but I can see it getting very complex.
Sometimes “well, the decision was made by an algorithm; there was no malice involved” gets you off a legal hook; sometimes (e.g., with medical devices) it gets you onto one. Sometimes the tech is a blameless tool (which somehow also manages to deflect blame from its users and creators); sometimes it’s an agent for whose actions the users or creators are strictly responsible. It depends on the particular venue of action. Will talking about robotics help policymakers reconcile these two apparently opposing viewpoints?