April 19, 2024

Innovation vs. Safety in Self-driving Technologies

Over at Ars Technica, the final installment of my series on self-driving cars is up. In this installment I focus on the policy implications of self-driving technologies, asking about regulation, liability, and civil liberties.

Regulators will face a difficult trade-off between safety and innovation. One of the most important reasons for the IT industry’s impressive record of innovation is that the industry is lightly regulated and the basic inputs are cheap enough that almost anyone can enter the market with new products. The story of the innovative company founded in someone’s garage has become a cliche, but it also captures an important part of what makes Silicon Valley such a remarkable place. If new IT products were only being produced by large companies like Microsoft and Cisco, we’d be missing out on a lot of important innovation.

In contrast, the automobile industry is heavily regulated. Car manufacturers are required to jump through a variety of hoops to prove to the government that new cars are safe, have acceptable emissions, get sufficient gas mileage, and so forth. There are a variety of arguments for doing things this way, but one important consequence is that it makes it harder for a new firm to enter the market.

These two very different regulatory philosophies will collide if and when self-driving technologies mature. This software, unlike most other software, will kill people if it malfunctions. And so people will be understandably worried about the possibility that just anyone can write software and install it in their cars. Indeed, regulators are likely to want to apply the same kind of elaborate testing regime to car software that now applies to the rest of the car.

On the other hand, self-driving software is in principle no different from any other software. It’s quite possible that a brilliant teenager could produce dramatically improved self-driving software from her parents’ basement. If we limit car hacking to those engineers who happen to work for a handful of large car companies, we may be forgoing a lot of beneficial progress. And in the long run, that may actually cost lives by depriving society of potentially lifesaving advances in self-driving technology.

So how should the balance be struck? In the article, I suggest that a big part of the solution will be a layered architecture. I had previously made the prediction that self-driving technologies will be introduced first as safety technologies. That is, cars will have increasingly sophisticated collision-avoidance technologies. Once car companies have figured out how to make a virtually uncrashable car, it will be a relatively simple (and safe) step to turn it into a fully self-driving one.

My guess is that the collision-avoidance software will be kept around and serve as the lowest layer of a self-driving car’s software stack. Like the kernels in modern operating systems, the collision-avoidance layer of a self-driving car’s software will focus on preventing higher-level software from doing damage, while actual navigational functionality is implemented at a higher level.

One beneficial consequence is that it may be possible to leave the higher levels of the software stack relatively unregulated. If you had software that made it virtually impossible for a human being to crash, then it would be relatively safe to run more experimental navigation software on top of it. If the higher-level software screwed up, the low-level software should detect the mistake and override its instructions.
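To make the layering concrete, here is a minimal sketch of how such an override might work. This is purely illustrative Python: every name in it (SafetyKernel, Command, the sensor and actuator interfaces) is invented for this post, not drawn from any real vehicle’s software.

```python
# Illustrative sketch of a layered driving stack: a trusted collision-avoidance
# "kernel" vets every command issued by higher-level navigation software.
# All names and numbers are invented for illustration.

from dataclasses import dataclass

MIN_CLEARANCE_METERS = 2.0  # illustrative safety margin


@dataclass
class Command:
    steering_degrees: float  # requested steering change
    accel_mps2: float        # requested acceleration (negative = braking)


class SafetyKernel:
    """Lowest layer: the only code allowed to touch the actuators."""

    def __init__(self, sensors, actuators):
        self.sensors = sensors
        self.actuators = actuators

    def execute(self, command: Command) -> bool:
        """Apply a higher-layer command, overriding it if it looks unsafe."""
        if self._would_collide(command):
            # The higher layer screwed up: ignore its command and fall back
            # to a conservative maneuver (here, braking in lane).
            self.actuators.apply(Command(steering_degrees=0.0, accel_mps2=-6.0))
            return False  # report the rejection to the navigation layer
        self.actuators.apply(command)
        return True

    def _would_collide(self, command: Command) -> bool:
        # Project the vehicle's trajectory under this command against tracked
        # obstacles; a real system would do far more than this stub suggests.
        return self.sensors.predicted_clearance(command) < MIN_CLEARANCE_METERS
```

The key property is that the navigation layer can only propose; the kernel alone decides what reaches the wheels.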

And that, in turn, leaves some hope that the self-driving cars of the future could be a hospitable place for the kind of decentralized experimentation that has made the IT industry so innovative. There are likely to be strict limits on screwing around with the lowest layer of your car’s software stack. But if that layer is doing its job, then it should be possible to allow more experimentation at higher layers without endangering people’s lives.

If you’re interested in more on self-driving cars, Josephine Wolff at the Daily Princetonian has an article on the subject. And next Thursday I’ll be giving a talk on the future of driving here at Princeton.

Comments

  1. All of this is a very good argument for why Open Source is a good idea for such software. No, it does not solve all of the problems; it will still contain bugs after years, and it doesn’t help in getting real-world experience, but it is one step up from proprietary software.

    However, self-driving software can at first be implemented as a background driver (i.e., a safety device, as you suggested), but as it becomes more of the foreground driver, human drivers could be required to ‘watch the wheel’ and nanny the software.

    Of course, automakers could place black boxes into cars and record everything (ignoring data storage issues for a moment), but that would obviously be a potential privacy issue.

    Currently the technology exists (cheaply, even) to have vehicles drive themselves. All it really requires is embedding a guide wire in the roads; forklifts have used the technique for years. Safety devices such as proximity sensors, radar, etc. can make the car fully autonomous, but as you mentioned, if the software crashes, that issue remains.

    But again, Open Source is the best solution, both immediate and long term, to the problem.

  2. Certain existing technologies already present some aspects of self-driving. Cruise control has been around for decades. Were there any special regulatory requirements that had to be met before it was allowed? What if your cruise control screwed up, started speeding up the car and wouldn’t disengage? (Yes, your brakes would be stronger, but people might be too frightened and confused to respond appropriately.) How has this technology been regulated in the past?

    And what about the newer innovations, such as radar-based cruise control that keeps you at a constant distance from the car in front of you? Did this software require a special review process due to its potential impact on human life? Self-parking cars offer even more complex functionality. What if your self-parking car runs over a child that a human driver would have seen and avoided? Was that something that had to be addressed before this feature could be shipped?

  3. Depending on how the layers interact, and how hard it is to change the “kernel”, I can picture some interesting “hacks”. How about software that takes advantage of other cars’ collision-avoidance software to let you drive faster by weaving through traffic? Or, if there is software that makes cars pull over or perform other maneuvers to make way for emergency traffic, drivers could make their cars look like emergency traffic. New crimes to deal with.

    Don’t get me wrong, I like the idea of self-guided cars – it would be nice to be able to sit back and relax, or even sleep, on long trips. It would also be nice during traffic jams…

  4. The analogy between layers of software in a self-driving car and layers of software in an operating system is attractive, but I think it may be misleading. The capability of current OS kernels runs more to making sure that the file system is at all times in a consistent state after you type “rm * -rf” than to preventing you from doing it. (Yes, users with limited capabilities, other operating systems with stronger sandboxes, blah blah blah, but that’s not about the concept of layered software per se.)

    What particularly concerns me about the collision-avoidance idea is that most of the work I’ve seen thus far involves tactical considerations (don’t travel at rates and directions that lead to intersection with other stuff) whereas the heart of collision avoidance in human drivers is the strategic notion of not getting into situations where a collision might occur. The strategic level requires a lot more information and information-processing about objects on or near the road, and a lot more modeling of the intentions of other drivers, including ones whose existence is only conjectural. All that work can be done, but it’s hard to see how other tactical and strategic driving software will sit entirely “above” it in a layered architecture, rather than in some way next to it, making for unpredictable interactions.

    One thing I do see as a positive step is the potential availability of huge amounts of traffic data gathered by camera and GPS; it should be feasible to set up a very large virtual spacetime of moving vehicles for car hackers to test their ideas against.

    • This is a great point. Obviously, if I knew how this problem was going to be solved, I’d be busy implementing it, not blogging about it. But my wild guess is that the lower layer will be significantly “thicker” than an operating system kernel. That is, the lower-level software will be constantly evaluating the current state of the vehicle and the stream of incoming instructions to see if it would be able to safely maneuver out of a collision in an emergency. If it determines that a given instruction would put it into a situation from which it can’t extricate itself, it will return an error to the higher-level software indicating that the requested instruction is unsafe.

      Another way to do things would be to design the API between the layers in such a way that all of the options were safe. For example, rather than a “turn the steering wheel by X degrees for Y milliseconds” command, there might be a “turn right at the intersection once the coast is clear” instruction. Then the lower level would be responsible for deciding when it’s safe to do that. Obviously there’s a trade-off here. A more complex API means a more complicated lower layer, which means a greater chance of bugginess in the lower layer. It would also mean that there’s less opportunity for interesting hacking at the higher level. The right way to strike the balance will presumably require a lot of experimentation.
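      To illustrate the trade-off, here is a rough sketch of the two API styles side by side. This is only a sketch; all of the names (steer, request_maneuver, UnsafeInstructionError) are hypothetical, invented for this comment.

      ```python
      # Two hypothetical styles for the API between the navigation layer and
      # the safety layer, as discussed above. All names are invented.
      from enum import Enum, auto


      class UnsafeInstructionError(Exception):
          """Raised by the low-level API when a command fails the safety check."""


      class Maneuver(Enum):
          TURN_RIGHT_AT_INTERSECTION = auto()
          CHANGE_LANE_LEFT = auto()
          PULL_OVER = auto()


      # Style 1: low-level commands. The safety layer must vet each instruction
      # and reject any that would put the car in an unrecoverable situation.
      def steer(degrees: float, duration_ms: int) -> None:
          """May raise UnsafeInstructionError; the caller must handle rejection."""
          raise NotImplementedError


      # Style 2: high-level intents. Every request is safe by construction;
      # the lower layer decides when (and whether) it can be carried out.
      def request_maneuver(maneuver: Maneuver) -> None:
          """Queues the maneuver; the safety layer acts once the coast is clear."""
          raise NotImplementedError
      ```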

  5. Actually, the auto industry wasn’t always so heavily regulated.

    In the 1950s, successful garage start-ups such as Lotus pioneered much new technology.

    Often this was first tested on the racetrack rather than on the road.

  6. I enjoyed the series of articles you put out on Ars; it crystallized a lot of thoughts I had on the matter. I began to wonder if emulation is the key. I’m imagining a kind of Netflix Prize-style emulation challenge. Somebody, I don’t know who (where did those X Prize people get their money?), puts together a host of emulation challenges that people can compete on, with the prize obviously going to the person or team with the highest score. While it is not the final solution, it would allow the random genius to apply their knowledge, using only a computer, without the need for a huge investment in vehicles (see the sketch below).

    Then, as in the Linux world (or, heaven forbid, the SDMI challenge), the work can be looked at, reviewed, and either incorporated or built upon. Anyway, it may be unworkable, but to me it seems a good stepping stone from what happens in the world of software deregulation to the stuffy world of automobile over-regulation.
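    A minimal sketch of what the scoring side of such a challenge might look like, assuming recorded traffic scenarios that can be replayed in simulation; the scenario interface and the scoring rule are invented for illustration.

    ```python
    # Toy scoring harness for a hypothetical self-driving emulation challenge.
    # A "scenario" is a recorded traffic situation replayed in simulation; a
    # "policy" is a contestant's driving function. All names are invented.

    def score_policy(policy, scenarios):
        """Replay each scenario through the policy and average a safety score."""
        total = 0.0
        for scenario in scenarios:
            sim = scenario.reset()                  # fresh copy of the recording
            while not sim.done():
                action = policy(sim.observation())  # contestant's code runs here
                sim.step(action)
            total += sim.safety_score()             # e.g. collisions, near misses
        return total / len(scenarios)

    # Ranking entries would then be straightforward:
    # leaderboard = sorted(entries, key=lambda e: -score_policy(e.policy, scenarios))
    ```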

  7. Tim, you say:
    If self-driving software can be shown to be at least as safe as the average human driver, it should be allowed on the road.

    Why, pray tell, do you set your standards so low? Driving is one of the most dangerous things people commonly do; we should make it safer. If we can achieve more safety, at a reasonable expense, why not require that?

    The rational approach would be to maximize the good that accrues to society from this new technology by requiring that it be as safe as it can possibly be, while still being affordable.

    If the attitude that “today’s safe is safe enough” prevailed, hospitals of today would be just as unsafe as they were in Victorian London. Buildings of today would be as dangerous as they were one hundred years ago. But they aren’t; they are in fact much, much safer. Why should we be any less demanding in the case of the automobile?

    I’d strongly recommend you read some back issues of the NFPA Journal. The evolution of building codes has driven constant progress toward safer buildings. Many incidents (such as the Triangle Shirtwaist fire or the Collinwood school fire) became media events because of the large numbers of fatalities involved in very dramatic accidents, creating the sustained political pressure that resulted in legislation which, the record shows, has made buildings much safer. It’s notable how closely the rise of building codes tracked the rise of so-called ‘mass media.’

    Note especially for those who have a distaste for government action that the history of building code development shows only one credible path to improved building safety: consistently enforced laws. There exists no credible so-called ‘free market’ route to building safety.

    Progress is what happens when people demand it; to accept the status quo just because it was ‘good enough’ in the past is a fundamental pessimism.

    Why eschew regulation when it has been shown to work?

    • Enigma, I reckon you have misunderstood Tim’s point.

      You can’t regulate to a safety standard better than a human driver until you actually have the capability to achieve it.

      Tim’s point was that IN THE FIRST INSTANCE to allow a computer driver on the roads it would need to at least match human performance in every aspect (and presumably it would be required to match the performance of a good human driver in a good physical/psychological state – not drunk, tired or angry so it would already be better than average drivers).

      From my knowledge of the technology involved I would say that we are probably some way from achieving even that baseline in all circumstances.

      To require the technology to achieve an even higher standard before it is permitted on the road would kill it (or at least seriously delay it), since then there would be no profits from sales to feed back into research funding.

      Once that baseline is established, then OF COURSE there will be some computer drivers that exceed the standard, and THEN it will be possible to use regulation to enforce “good practice” on the rest. This will be a progressive process resulting in steadily increasing safety standards, just like your description of the building industry, but it can’t happen overnight. After all, if the first person to build a mud hut in 10000 BC had been forced to obey modern building regulations, we would still be living in caves!

      • Richard:

        I should have been clearer; I was responding to Tim Lee’s article, the entire contents of which were not posted on Freedom to Tinker. The quote I am referring to is:

        “Three principles should govern the regulation of self-driving cars. First, it’s important to ensure that regulation be a complement to, rather than a substitute for, liability for accidents. Private firms will always have more information than government regulators about the safety of their products, and so the primary mechanism for ensuring car safety will always be manufacturers’ desires to avoid liability. Tort law gives carmakers an important, independent incentive to make safer cars. So while there may be good arguments for limiting liability, it would be a mistake to excuse regulated auto manufacturers from tort liability entirely.

        Second, regulators should let industry take the lead in developing the basic software architecture of self-driving technologies. The last couple of decades have given us many examples of high-tech industries converging on well-designed technical standards. It should be sufficient for regulators to examine these standards after they have been developed, rather than trying to impose government-designed standards on the industry.

        Finally, regulators need to bear in mind that too much regulation can be just as dangerous as too little. If self-driving cars will save lives, then delaying their introduction can kill just as many people as approving a dangerous car can. Therefore, it’s important that regulators focus narrowly on safety and that they don’t impose unrealistically high standards. If self-driving software can be shown to be at least as safe as the average human driver, it should be allowed on the road.”

        I have posted here in addition to the Technology Liberation Front website, as many of the commentators at TLF have the rather obnoxious habit of deleting my comments in a very rude way (yes, Jerry Brito, it is you I am referring to…)

      • Tim’s point was that IN THE FIRST INSTANCE to allow a computer driver on the roads it would need to at least match human performance in every aspect (and presumably it would be required to match the performance of a good human driver in a good physical/psychological state – not drunk, tired or angry so it would already be better than average drivers)

        But that’s not what he actually said, just that it would be as safe as the average human driver.

        It should be required to be much, much safer, because such safety can very probably be achieved at a small increase in the cost of the technology. We should remember how very, very dangerous driving is: the average number of annual US traffic fatalities during the 2001 to 2005 period was 42,873.

        Or, put more dramatically:

        If flying were like driving: given that a medium-capacity jetliner accommodates 200 people, this represents roughly 214 airplanes crashing each year. This means that each week, approximately four airplanes would crash.
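        For what it’s worth, that arithmetic checks out as a back-of-the-envelope calculation (taking the 42,873 figure and 200 seats per plane as given):

        ```python
        # Back-of-the-envelope check of the "if flying were like driving" comparison.
        annual_fatalities = 42_873  # average annual US traffic deaths, 2001-2005 (cited above)
        seats_per_plane = 200       # assumed capacity of a medium jetliner

        crashes_per_year = annual_fatalities / seats_per_plane  # ~214
        crashes_per_week = crashes_per_year / 52                # ~4.1

        print(f"{crashes_per_year:.0f} full-plane crashes per year")
        print(f"{crashes_per_week:.1f} crashes per week")
        ```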