November 26, 2024

Innovation vs. Safety in Self-driving Technologies

Over at Ars Technica, the final installment of my series on self-driving cars is up. This time I focus on the policy implications of self-driving technologies, asking about regulation, liability, and civil liberties.

Regulators will face a difficult trade-off between safety and innovation. One of the most important reasons for the IT industry’s impressive record of innovation is that the industry is lightly regulated and the basic inputs are cheap enough that almost anyone can enter the market with new products. The story of the innovative company founded in someone’s garage has become a cliché, but it also captures an important part of what makes Silicon Valley such a remarkable place. If new IT products were produced only by large companies like Microsoft and Cisco, we’d be missing out on a lot of important innovation.

In contrast, the automobile industry is heavily regulated. Car manufacturers are required to jump through a variety of hoops to prove to the government that new cars are safe, have acceptable emissions, get sufficient gas mileage, and so forth. There are a variety of arguments for doing things this way, but one important consequence is that it makes it harder for a new firm to enter the market.

These two very different regulatory philosophies will collide if and when self-driving technologies mature. This software, unlike most other software, will kill people if it malfunctions. And so people will be understandably worried about the possibility that just anyone can write software and install it in their cars. Indeed, regulators are likely to want to apply the same kind of elaborate testing regime to car software that now applies to the rest of the car.

On the other hand, self-driving software is in principle no different from any other software. It’s quite possible that a brilliant teenager could produce dramatically improved self-driving software from her parents’ basement. If we limit car hacking to those engineers who happen to work for a handful of large car companies, we may be forgoing a lot of beneficial progress. And in the long run, that may actually cost lives by depriving society of potentially lifesaving advances in self-driving technology.

So how should the balance be struck? In the article, I suggest that a big part of the solution will be a layered architecture. I had previously made the prediction that self-driving technologies will be introduced first as safety technologies. That is, cars will have increasingly sophisticated collision-avoidance technologies. Once car companies have figured out how to make a virtually uncrashable car, it will be a relatively simple (and safe) step to turn it into a fully self-driving one.

My guess is that the collision-avoidance software will be kept around and serve as the lowest layer of a self-driving car’s software stack. Like the kernels in modern operating systems, the collision-avoidance layer of a self-driving car’s software will focus on preventing higher-level software from doing damage, while actual navigational functionality is implemented at a higher level.

One beneficial consequence is that it may be possible to leave the higher levels of the software stack relatively unregulated. If you had software that made it virtually impossible for a human being to crash, then it would be relatively safe to run more experimental navigation software on top of it. If the higher-level software screwed up, the low-level software should detect the mistake and override its instructions.
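To make the layered idea concrete, here is a minimal sketch in Python of how a collision-avoidance layer might vet the commands proposed by higher-level navigation software. Everything here is invented for illustration: the class names, the sensor fields, and the five-meter threshold do not correspond to any real vehicle system.

```python
# Hypothetical sketch of the layered architecture described above.
# All names and numbers are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Command:
    steering_angle: float  # degrees, positive = right
    throttle: float        # 0.0 to 1.0
    brake: float           # 0.0 to 1.0

class NavigationModule:
    """Higher-level, possibly experimental, navigation software."""
    def propose_command(self, sensors: dict) -> Command:
        # Route planning and lane keeping would go here.
        return Command(steering_angle=2.0, throttle=0.3, brake=0.0)

class CollisionAvoidanceLayer:
    """Lowest layer: vets every command before it reaches the actuators."""
    SAFE_DISTANCE_M = 5.0  # arbitrary threshold for the sketch

    def vet(self, command: Command, sensors: dict) -> Command:
        if sensors.get("obstacle_distance_m", float("inf")) < self.SAFE_DISTANCE_M:
            # Override the higher layer: brake instead of following its plan.
            return Command(steering_angle=0.0, throttle=0.0, brake=1.0)
        return command

# Control loop: the navigation layer proposes, the safety layer disposes.
nav = NavigationModule()
safety = CollisionAvoidanceLayer()
sensors = {"obstacle_distance_m": 3.0}
final_command = safety.vet(nav.propose_command(sensors), sensors)
print(final_command)  # a full-brake command, since the obstacle is too close
```

The design point is the same one the kernel analogy suggests: the navigation layer never talks to the actuators directly, so experimental code above it can fail without the failure reaching the road.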

And that, in turn, leaves some hope that the self-driving cars of the future could be a hospitable place for the kind of decentralized experimentation that has made the IT industry so innovative. There are likely to be strict limits on screwing around with the lowest layer of your car’s software stack. But if that layer is doing its job, then it should be possible to allow more experimentation at higher layers without endangering people’s lives.

If you’re interested in more on self-driving cars, Josephine Wolff at the Daily Princetonian has an article on the subject. And next Thursday I’ll be giving a talk on the future of driving here at Princeton.

Bandwidth Needs and Engineering Tradeoffs

Tom Lee wonders about a question that Ed has pondered in the past: how much bandwidth does one human being need?

I’m suspicious of estimates of exploding per capita bandwidth consumption. Yes, our bandwidth needs will continue to increase. But the human nervous system has its own bandwidth limits, too. Maybe there’ll be one more video resolution revolution — HDTV2, let’s say (pending the invention of a more confusing acronym). But to go beyond that will require video walls — they look cool in Total Recall, but why would you pay for something larger than your field of view? — or three-dimensional holo-whatnots. I’m sure the latter will be popularized eventually, but I’ll probably be pretty old and confused by then.

The human fovea has a finite number of neurons, and we’re already pretty good at keeping them busy. Personally, I think that household bandwidth use is likely to level off sometime in the next decade or two — there’s only so much data that a human body can use. Our bandwidth expenses as a percentage of income will then start to fall, both because the growth in demand will have slowed while incomes continue to rise, and because the resource itself will continue to get cheaper as technology improves.

When thinking about this question, I think it’s important to remember that engineering is all about trade-offs. It’s often possible to substitute one kind of computing resource for another. For example, compression replaces bandwidth or storage with increased computation. Similarly, caching substitutes storage for bandwidth. We recently had a talk by Vivek Pai, a researcher here at Princeton who has been using aggressive caching algorithms to improve the quality of Internet access in parts of Africa where bandwidth is scarce.
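As a small illustration of that substitution, the snippet below uses Python’s standard zlib module to trade CPU time for bytes on the wire. The payload is just a stand-in for whatever would actually be transmitted; the point is only that spending computation on compression (and again on decompression at the other end) shrinks what has to cross the network.

```python
# Trading computation for bandwidth with compression (illustrative only).

import zlib

payload = ("Lorem ipsum dolor sit amet. " * 200).encode("utf-8")

compressed = zlib.compress(payload, 9)  # spend CPU cycles here...
print(len(payload), "bytes uncompressed")
print(len(compressed), "bytes to send over the network")  # ...to send fewer bytes

# The receiver spends computation again to recover the original.
assert zlib.decompress(compressed) == payload
```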

So even if we reach the point where our broadband connections are fat enough to bring in as much information as the human nervous system can process, that doesn’t mean that more bandwidth wouldn’t continue to be valuable. Higher bandwidth means more flexibility in the design of online applications. In some cases, it might make more sense to bring raw data into the home and do calculations locally. In other cases, it might make more sense to pre-render data on a server farm and bring the finished image into the home.

One key issue is latency. People with cable or satellite TV service are used to near-instantaneous, flawless video content, which is difficult to stream reliably over a packet-switched network. So the television of the future is likely to be a peer-to-peer client that downloads anything it thinks its owner might want to see and caches it for later viewing. This isn’t strictly necessary, but it would improve the user experience. Likewise, there may be circumstances where users want to quickly load up their portable devices with several gigabytes of data for later offline viewing.
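Here is a hedged sketch of that prefetch-and-cache idea: a client that, during idle time, downloads the shows it guesses the viewer will want, so that playback later comes from local storage instead of the network. The fetch_from_network function and the show titles are placeholders, not a real API; a real client would be pulling gigabytes over the wire and caching them on disk.

```python
# Sketch of a prefetch-and-cache client: spend storage and off-peak bandwidth
# now so that playback later is instant. Names and data are placeholders.

cache: dict[str, bytes] = {}  # a real client would use on-disk storage

def fetch_from_network(title: str) -> bytes:
    # Placeholder for a slow network download (unicast stream, P2P swarm, etc.).
    return f"<video data for {title}>".encode("utf-8")

def prefetch(predicted_titles: list[str]) -> None:
    """Run during idle time: download what the owner will probably want."""
    for title in predicted_titles:
        if title not in cache:
            cache[title] = fetch_from_network(title)

def play(title: str) -> bytes:
    """At viewing time, serve from local storage if we guessed right."""
    if title in cache:
        return cache[title]            # instant: no network latency
    return fetch_from_network(title)   # cache miss: fall back to streaming

prefetch(["evening news", "favorite sitcom"])
print(play("favorite sitcom")[:20])  # served from the local cache
```

The tradeoff is the one described above: local storage and anticipatory downloads are cheap, while low on-demand latency over a packet-switched network is not.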

Finally, and probably most importantly, higher bandwidth allows us to economize on the time of the engineers building online applications. One of the consistent trends in the computer industry has been towards greater abstraction. There was a time when everyone wrote software in machine language. Now, a lot of software is written in high-level languages like Java, Perl, or Python that run slower but make life a lot easier for programmers. A decade ago, people trying to build rich web applications had to waste a lot of time optimizing them to achieve acceptable performance on the slow hardware of the day. Today, computers are fast enough that developers can use high-level frameworks that are much more powerful but consume a lot more resources. Developers spend more time adding new features and less time trying to squeeze better performance out of the features they already have, which means users get more and better applications.

The same principle is likely to apply to increased bandwidth, even beyond the point where we all have enough bandwidth to stream high-def video. Right now, web developers need to pay a fair amount of attention to whether data is stored on the client or the server and how to efficiently transmit it from one place to another. A world of abundant bandwidth will allow developers to do whatever makes the most sense computationally without worrying about the bandwidth constraints. Of course, I don’t know exactly what those frameworks will look like or what applications they will enable, but I don’t think it’s too much of a stretch to think that we’ll be able to continue finding uses for higher bandwidth for a long time.

Louisiana Re-enfranchises Independent Voters

Two weeks ago I wrote that independent voters were disenfranchised in the Louisiana Congressional primaries: unclear or incorrect instructions by the Secretary of State to the pollworkers caused thousands of independent voters to be incorrectly precluded from voting in the open Democratic primary on October 4th.

Today I am told that Secretary of State Jay Dardenne has corrected the problem. Earl Schmitt, a “Commissioner in Charge” (head precinct pollworker) in the 15th ward of New Orleans, reports that all pollworkers were recently brought in for a two-hour training meeting. They were given clear instructions that independent voters are to be given a ticket marked “Democrat” that permits them to vote in today’s Democratic runoff primary election. (Because of a hurricane, the original September 6th primary was postponed to October 4th, and both parties’ runoff primaries are being held today, along with the Obama vs. McCain presidential election. The Democratic Party is permitting independents to vote in their primary; the Republican Party is not. The general election for congressional seats in Louisiana will be December 6th.)

I am happy that the Secretary of State moved quickly to retrain pollworkers. It’s not that no harm was done; after all, those independent voters might have made a difference in which candidates advanced to the runoff. But when it comes to improving the administration of our elections, better late than never.

Clarification: Only 2 of Louisiana’s 7 congressional districts required a runoff primary; the other 5 held their congressional general election on Nov. 4th.

Election 2008: What Might Go Wrong

Tomorrow, as everyone knows, is Election Day in the U.S. With all the controversy over electronic voting, and the anticipated high turnout, what can we expect to see? What problems might be looming? Here are my predictions.

Long lines to vote: Polling places will be strained by the number of voters. In some places the wait will be long – especially where voting requires the use of machines. Many voters will be willing and able to wait, but some will have to leave without casting votes. Polls will be kept open late, and results will be reported later than expected, because of long lines.

Registration problems: Quite a few voters will arrive at the polling place to find that they are not on the voter rolls, because of official error, or problems with voter registration databases, or simply because the voter went to the wrong polling place. New voters will be especially likely to have such problems. Voters who think they should be on the rolls in a polling place can file provisional ballots there. Afterward, officials must judge whether each provisional voter was in fact eligible, a time-consuming process which, given the relative flood of provisional ballots, will strain official resources.

Voting machine problems: Electronic voting machines will fail somewhere. This is virtually inevitable, given the sheer number of machines and polling places, the variety of voting machines, and the often poor reliability and security engineering of the machines. If we’re lucky, the problems can be addressed using a paper trail or other records. If not, we’ll have a mess on our hands.

How serious the mess might be depends on how close the election is. If the margin of victory is large, as some polls suggest it may be, then it will be easy to write off problems as “minor” and move on to the next stage in our collective political life. If the election is close, we could see a big fight. The worst case would be an ultra-close election like the one in 2000, with long lines, provisional ballots, or voting machine failures putting the outcome in doubt.

Regardless of what happens on Election Day, the next day — Wednesday, November 5 — will be a good time to get started on improving the next election. We have made some progress since 2004 and 2006. If we keep working, our future elections can be better and safer than this one.

Federal Circuit Reins in Business Method Patents

This has been a big year for patent law in the technology industry. A few weeks ago I wrote about the Supreme Court’s Quanta v. LG decision. Now the United States Court of Appeals for the Federal Circuit, which has jurisdiction over all patent appeals, has handed down a landmark ruling in the case of In re Bilski. The case dealt with the validity of patents on business methods, and a number of public interest organizations had filed amicus briefs. I offer my take on the decision in a story for Ars Technica. In a nutshell, the Federal Circuit rejected the patent application at issue in the case and signaled a newfound skepticism of “business method” patents.

The decision is surprising because the Federal Circuit has until recently been strongly in favor of expanding patent rights. During the 1990s, it handed down its Alappat and State Street decisions, which gave a green light to patents on software and business methods, two categories of innovation that had traditionally been regarded as ineligible for patent protection. Even as the evidence mounted earlier this decade that these patents were hindering, rather than promoting, technological innovation, the Federal Circuit showed no sign of backing down.

Now, however, the Federal Circuit’s attitude seems to have changed. The biggest factor, I suspect, is that after a quarter century of ignoring patent law, the Supreme Court has handed down a series of unanimous decisions overturning Federal Circuit precedents and harshly criticizing the court’s permissive patent jurisprudence. That, combined with the avalanche of bad press, seems to have convinced the Federal Circuit that the standards for patenting needed to be tightened up.

However, as Ben Klemens writes, Bilski is the start of an argument about the patentability of abstract inventions, not its end. The Federal Circuit formally abandoned the extremely permissive standard it established in State Street, reverting to the Supreme Court’s rule that an invention must be tied to a specific machine or a transformation of matter. But it deferred until future decisions the precise details of how closely an idea has to be tied to a specific machine in order to be patentable. We know, for example, that a software algorithm (which is ultimately just a string of 1s and 0s) cannot be patented. But what if I take that string of 1s and 0s and write it onto a hard drive, which certainly is a machine? Does this idea-machine hybrid become a patentable invention? As Ben points out, we don’t know, because the Federal Circuit explicitly deferred this question to future cases.

Still, there are a lot of hopeful signs here for those of us who would like to see an end to patents on software and business methods. The decision looks in some detail at the Supreme Court’s trio of software patent cases from the late 1970s and early 1980s, and seems conscious of the disconnect between those decisions and the Federal Circuit’s more recent precedents. Software and business method patents have developed a lot of institutional inertia over the last 15 years, so we’re unlikely to see a return to the rule that software and business methods are never patentable. But it’s safe to say that it’s going to start getting a lot harder to obtain patents on software and business methods.