December 22, 2008

Does Your House Need a Tail?

Thus far, the debate over broadband deployment has generally been between those who believe that private telecom incumbents should be in charge of planning, financing, and building next-generation broadband infrastructure, and those who advocate a larger role for government in its deployment. Proposals in the latter camp include municipally owned networks and a variety of federal subsidies and mandates designed to push incumbents to deploy faster broadband.

Tim Wu and Derek Slater have a great new paper out that approaches the problem from a different perspective: broadband deployments could be planned and financed not by government or private industry, but by consumers themselves. That might sound like a crazy idea at first blush, but Wu and Slater do a great job of explaining how it might work. The key idea is “condominium fiber,” an arrangement in which a number of neighboring households pool their resources to install fiber to all the homes in their neighborhood. Once the fiber is in place, each home would own its own strand, while the shared costs of maintaining the “trunk” cable running from the individual homes to a central switching location would be managed the same way condominium and homeowners’ associations currently manage the shared areas of condos and gated communities. Indeed, in many cases the developer of a new condominium tower or planned community could lay fiber along with the water and power lines, and the fiber would simply be one more shared resource managed collectively by the homeowners.

If that sounds strange, it’s worth remembering that there are plenty of examples of things that were once rented becoming things people own. Fifty years ago in the United States, for example, no one owned a telephone. The phone was owned by Ma Bell, and if yours broke they’d come and install a new one. That changed, and today people own their phones and the wiring inside their homes, while the phone company owns the cable outside. One way to think about Slater and Wu’s “homes with tails” concept is that it simply shifts that line of demarcation again: under their proposal, you’d own not only the wiring inside your home but also the line running from your home to your broadband provider.

Why would someone want to do such a thing? The biggest advantage, from my perspective, is that it could solve the thorny problem of limited competition in the “last mile” of broadband deployment. Right now, most customers have two options for high-speed Internet access. Getting more options using the traditional, centralized investment model is going to be extremely difficult because it costs a lot to deploy new infrastructure all the way to customers’ homes. But if customers “brought their own” fiber, then the barrier to entry would be much lower. New providers would simply need to bring a single strand of fiber to a neighborhood’s centralized point of presence in order to offer service to all customers in that neighborhood. So it would be much easier to imagine a world in which customers had numerous options to choose from.

The challenge is solving the chicken-and-egg problem: customer-owned fiber won’t be attractive until there are several providers to choose from, but it doesn’t make sense for new firms to enter the market until a significant number of neighborhoods have customer-owned fiber. Wu and Slater suggest several ways this problem might be overcome, but I think it will remain a formidable challenge. My guess is that, at least at the outset, the customer-owned model will work best in new residential construction, where the cost of deploying fiber is very low (because builders will already be digging trenches for power and water).

But the beauty of their model is that unlike a lot of other plans to encourage broadband deployment, this isn’t an all-or-nothing choice. We don’t have to convince an entire nation, state, or even city to sign onto a concept like this. All you need is a neighborhood with a few dozen early-adopting consumers and an ISP willing to serve them. Virtually every cutting-edge technology is taken up by a small number of early adopters (who pay high prices for the privilege of being the first with a new technology) before it spreads to the general public, and the same model is likely to apply to customer-owned fiber. If the concept is viable, someone will figure out how to make it work, and their example will be duplicated elsewhere. So I don’t know if customer-owned fiber is the wave of the future, but I do hope that people start experimenting with it.

You can check out their paper here. You can also check out an article I wrote for Ars Technica this summer that is based on conversations with Slater, Wu, and other pioneers in this area.

How Fragile Is the Internet?

With Barack Obama’s election, we’re likely to see a revival of the network neutrality debate. Thus far the popular debate over the issue has produced more heat than light. On one side have been people who scoff at the very idea of network neutrality, arguing either that network neutrality is a myth or that we’d be better off without it. On the other are people who believe the open Internet is hanging on by its fingernails. These advocates believe that unless Congress passes new regulations quickly, major network providers will transform the Internet into a closed network where only their preferred content and applications are available.

One assumption that seems to be shared by both sides in the debate is that the Internet’s end-to-end architecture is fragile. At times, advocates on both sides of the debate seem to think that AT&T, Verizon, and Comcast have big levers in their network closets labeled “network neutrality” that they will set to “off” if Congress doesn’t stop them. In a new study for the Cato Institute, I argue that this assumption is unrealistic. The Internet has the open architecture it has for good technical reasons. The end-to-end principle is deeply embedded in the Internet’s architecture, and there’s no straightforward way to change it without breaking existing Internet applications.

One reason is technical. Advocates of regulation point to a technology called deep packet inspection as a major threat to the Internet’s open architecture. DPI allows network owners to look “inside” Internet packets, reconstructing the web page, email, or other information as it comes across the wire. This is an impressive technology, but it’s also important to remember its limitations. DPI is inherently reactive and brittle. It requires human engineers to precisely describe each type of traffic that is to be blocked. That means that as the Internet grows ever more complex, more and more effort would be required to keep DPI’s filters up to date. It also means that configuration problems will lead to the accidental blocking of unrelated traffic.
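
To make that brittleness concrete, here is a minimal sketch of signature-based traffic classification in Python. It is not taken from any real DPI product; the byte patterns and protocol labels are illustrative stand-ins. The point is simply that every protocol an operator wants to block needs a hand-maintained rule, anything the rules don’t recognize falls through, and an overbroad pattern can sweep in unrelated traffic.

```python
# Illustrative sketch of signature-based packet classification.
# The signatures below are simplified stand-ins, not a real DPI rule set.

SIGNATURES = {
    # Each rule is a hand-written byte pattern an engineer must keep current.
    "bittorrent-handshake": b"\x13BitTorrent protocol",
    "http-request": b"GET ",
    "smtp-greeting": b"220 ",
}

def classify(payload: bytes) -> str:
    """Return the first matching protocol label, or 'unknown'."""
    for label, pattern in SIGNATURES.items():
        if pattern in payload:
            return label
    return "unknown"

def should_block(payload: bytes, blocked_labels) -> bool:
    # Brittleness in one line: novel or encrypted traffic classifies as
    # "unknown", while an overbroad pattern (e.g. b"GET ") can match
    # unrelated applications that happen to carry the same bytes.
    return classify(payload) in blocked_labels

if __name__ == "__main__":
    print(should_block(b"\x13BitTorrent protocol...", {"bittorrent-handshake"}))    # True
    print(should_block(b"\x16\x03\x01 (encrypted bytes)", {"bittorrent-handshake"}))  # False
```

Real DPI gear is far more sophisticated than this, but it faces the same basic maintenance problem: someone has to keep the rule set current as the traffic mix changes.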

The more fundamental reason is economic. The Internet works as well as it does precisely because it is decentralized. No organization on Earth has the manpower that would have been required to directly manage all of the content and applications on the Internet. Networks like AOL and CompuServe that were managed that way got bogged down in bureaucracy while they were still a small fraction of the Internet’s current size. It is not plausible that bureaucracies at Comcast, AT&T, or Verizon could manage their TCP/IP networks the way AOL ran its network a decade ago.

Of course what advocates of regulation fear is precisely that these companies will try to manage their networks this way, fail, and screw the Internet up in the process. But I think this underestimates the magnitude of the disaster that would befall any network provider that tried to convert their Internet service into a proprietary network. People pay for Internet access because they find it useful. A proprietary Internet would be dramatically less useful than an open one because network providers would inevitably block an enormous number of useful applications and websites. A network provider that deliberately broke a significant fraction of the content or applications on its network would find many fewer customers willing to pay for it. Customers that could switch to a competitor would. Some others would simply cancel their home Internet service and rely instead on Internet access at work, school, libraries, etc. And many customers that had previously taken higher-speed Internet service would downgrade to basic service. In short, even in an environment of limited competition, reducing the value of one’s product is rarely a good business strategy.

This isn’t to say that ISPs will never violate network neutrality. A few have done so already. The most significant was Comcast’s interference with the BitTorrent protocol last year. I think there’s plenty to criticize about what Comcast did. But there’s a big difference between interfering with one networking protocol and the kind of comprehensive filtering that network neutrality advocates fear. And it’s worth noting that even Comcast’s modest interference with network neutrality provoked a ferocious response from customers, the press, and the political process. The Comcast/BitTorrent story certainly isn’t going to make other ISPs think that more aggressive violations of network neutrality would be a good business strategy.

So it seems to me that new regulations are unnecessary to protect network neutrality. They are likely to be counterproductive as well. As Ed has argued, defining network neutrality precisely is surprisingly difficult, and enacting a ban without a clear definition is a recipe for problems. In addition, there’s a real danger of what economists call regulatory capture—that industry incumbents will find ways to turn regulatory authority to their advantage. As I document in my study, this is what happened with 20th-century regulation of the railroad, airline, and telephone industries. Congress should proceed carefully, lest regulations designed to protect consumers from telecom industry incumbents wind up protecting incumbents from competition instead.

Innovation vs. Safety in Self-driving Technologies

Over at Ars Technica, the final installment of my series on self-driving cars is up. In this installment I focus on the policy implications of self-driving technologies, asking about regulation, liability, and civil liberties.

Regulators will face a difficult trade-off between safety and innovation. One of the most important reasons for the IT industry’s impressive record of innovation is that the industry is lightly regulated and the basic inputs are cheap enough that almost anyone can enter the market with new products. The story of the innovative company founded in someone’s garage has become a cliche, but it also captures an important part of what makes Silicon Valley such a remarkable place. If new IT products were only being produced by large companies like Microsoft and Cisco, we’d be missing out on a lot of important innovation.

In contrast, the automobile industry is heavily regulated. Car manufacturers are required to jump through a variety of hoops to prove to the government that new cars are safe, have acceptable emissions, get sufficient gas mileage, and so forth. There are a variety of arguments for doing things this way, but one important consequence is that it makes it harder for a new firm to enter the market.

These two very different regulatory philosophies will collide if and when self-driving technologies mature. This software, unlike most other software, will kill people if it malfunctions. And so people will be understandably worried about the possibility that just anyone can write software and install it in their cars. Indeed, regulators are likely to want to apply the same kind of elaborate testing regime to car software that now applies to the rest of the car.

On the other hand, self-driving software is in principle no different from any other software. It’s quite possible that a brilliant teenager could produce dramatically improved self-driving software from her parents’ basement. If we limit car hacking to those engineers who happen to work for a handful of large car companies, we may be forgoing a lot of beneficial progress. And in the long run, that may actually cost lives by depriving society of potentially lifesaving advances in self-driving technology.

So how should the balance be struck? In the article, I suggest that a big part of the solution will be a layered architecture. I had previously made the prediction that self-driving technologies will be introduced first as safety technologies. That is, cars will have increasingly sophisticated collision-avoidance technologies. Once car companies have figured out how to make a virtually uncrashable car, it will be a relatively simple (and safe) step to turn it into a fully self-driving one.

My guess is that the collision-avoidance software will be kept around and serve as the lowest layer of a self-driving car’s software stack. Like the kernels in modern operating systems, the collision-avoidance layer of a self-driving car’s software will focus on preventing higher-level software from doing damage, while actual navigational functionality is implemented at a higher level.

One beneficial consequence is that it may be possible to leave the higher levels of the software stack relatively unregulated. If you had software that made it virtually impossible for a human being to crash, then it would be relatively safe to run more experimental navigation software on top of it. If the higher-level software screwed up, the low-level software should detect the mistake and override its instructions.
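
Here is a toy sketch of the layered arrangement I have in mind. The class names, units, and thresholds are all invented for illustration; the point is only that the trusted collision-avoidance layer gets the final say over whatever an experimental navigation layer proposes.

```python
# Toy sketch of a layered control loop: an experimental navigation layer
# proposes commands, and a conservative collision-avoidance layer vets them.
# Class names, units, and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Command:
    throttle: float  # 0.0 to 1.0
    brake: float     # 0.0 to 1.0
    steering: float  # radians; negative steers left

class CollisionAvoidanceLayer:
    """The heavily tested 'kernel' at the bottom of the stack."""

    SAFE_GAP_SECONDS = 2.0

    def vet(self, proposed: Command, time_to_obstacle: float) -> Command:
        # Override the higher layer whenever its command would close the gap
        # to an obstacle too quickly; otherwise pass the command through.
        if time_to_obstacle < self.SAFE_GAP_SECONDS and proposed.brake < 1.0:
            return Command(throttle=0.0, brake=1.0, steering=proposed.steering)
        return proposed

class ExperimentalNavigator:
    """Higher-level, more lightly regulated navigation logic."""

    def propose(self, sensor_snapshot: dict) -> Command:
        # Placeholder policy: cruise straight ahead regardless of sensors.
        return Command(throttle=0.4, brake=0.0, steering=0.0)

# One iteration of the control loop: the navigator proposes, the safety layer disposes.
navigator = ExperimentalNavigator()
safety = CollisionAvoidanceLayer()
proposed = navigator.propose({"speed_mps": 20.0})
final = safety.vet(proposed, time_to_obstacle=1.2)
print(final)  # Command(throttle=0.0, brake=1.0, steering=0.0)
```

The analogy to an operating system should be clear enough: the navigation layer runs in something like user space, while the collision-avoidance layer enforces the invariants that keep it from doing damage.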

And that, in turn, leaves some hope that the self-driving cars of the future could be a hospitable place for the kind of decentralized experimentation that has made the IT industry so innovative. There are likely to be strict limits on screwing around with the lowest layer of your car’s software stack. But if that layer is doing its job, then it should be possible to allow more experimentation at higher layers without endangering people’s lives.

If you’re interested in more on self-driving cars, Josephine Wolff at the Daily Princetonian has an article on the subject. And next Thursday I’ll be giving a talk on the future of driving here at Princeton.

Bandwidth Needs and Engineering Tradeoffs

Tom Lee wonders about a question that Ed has pondered in the past: how much bandwidth does one human being need?

“I’m suspicious of estimates of exploding per capita bandwidth consumption. Yes, our bandwidth needs will continue to increase. But the human nervous system has its own bandwidth limits, too. Maybe there’ll be one more video resolution revolution — HDTV2, let’s say (pending the invention of a more confusing acronym). But to go beyond that will require video walls — they look cool in Total Recall, but why would you pay for something larger than your field of view? — or three-dimensional holo-whatnots. I’m sure the latter will be popularized eventually, but I’ll probably be pretty old and confused by then.

“The human fovea has a finite number of neurons, and we’re already pretty good at keeping them busy. Personally, I think that household bandwidth use is likely to level off sometime in the next decade or two — there’s only so much data that a human body can use. Our bandwidth expenses as a percentage of income will then start to fall, both because the growth in demand has slowed and because income continues to rise, but also because the resource itself will continue to get cheaper as technology improves.”

When thinking about this question, I think it’s important to remember that engineering is all about trade-offs. It’s often possible to substitute one kind of computing resource for another. For example, compression replaces bandwidth or storage with increased computation. Similarly, caching substitutes storage for bandwidth. We recently had a talk by Vivek Pai, a researcher here at Princeton who has been using aggressive caching algorithms to improve the quality of Internet access in parts of Africa where bandwidth is scarce.
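
As a concrete illustration of those substitutions, here is a minimal Python sketch in which a local cache spends storage to avoid repeated transfers and compression spends CPU cycles to shrink what has to be stored or sent. The fetch_from_network function and the cache location are hypothetical stand-ins, not any particular system’s API.

```python
# Minimal sketch of two classic substitutions: storage for bandwidth (caching)
# and computation for storage/bandwidth (compression). fetch_from_network()
# and the cache directory are hypothetical placeholders.

import hashlib
import zlib
from pathlib import Path

CACHE_DIR = Path("/tmp/demo-cache")  # storage spent so the network is used less

def fetch_from_network(url: str) -> bytes:
    # Stand-in for a real HTTP fetch over a slow or expensive link.
    return b"<html>pretend this came over the wire</html>"

def cached_fetch(url: str) -> bytes:
    """Fetch each URL over the network at most once; serve repeats locally."""
    CACHE_DIR.mkdir(parents=True, exist_ok=True)
    path = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
    if path.exists():
        return zlib.decompress(path.read_bytes())  # CPU spent instead of bandwidth
    body = fetch_from_network(url)
    path.write_bytes(zlib.compress(body, 9))       # CPU spent instead of storage
    return body

if __name__ == "__main__":
    first = cached_fetch("http://example.org/page")   # goes "over the network"
    second = cached_fetch("http://example.org/page")  # served from the local cache
    assert first == second
```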

So even if we reach the point where our broadband connections are fat enough to bring in as much information as the human nervous system can process, that doesn’t mean that more bandwidth wouldn’t continue to be valuable. Higher bandwidth means more flexibility in the design of online applications. In some cases, it might make more sense to bring raw data into the home and do calculations locally. In other cases, it might make more sense to pre-render data on a server farm and bring the finished image into the home.

One key issue is latency. People with cable or satellite TV service are used to near-instantaneous, flawless video content, which is difficult to stream reliably over a packet-switched network. So the television of the future is likely to be a peer-to-peer client that downloads anything it thinks its owner might want to see and caches it for later viewing. This isn’t strictly necessary, but it would improve the user experience. Likewise, there may be circumstances where users want to quickly load up their portable devices with several gigabytes of data for later offline viewing.
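
Here is a toy sketch of that prefetch-and-cache pattern: a background worker downloads programs the device guesses its owner will want, so that pressing play later reads from local storage rather than waiting on a live stream. The predictor and download function are hypothetical placeholders.

```python
# Toy sketch of prefetching: do bulk downloads off the critical path so that
# playback is local and latency-free. predicted_programs() and download()
# are hypothetical stand-ins.

import queue
import threading

def predicted_programs(viewing_history):
    # Stand-in for a recommendation step: guess what the owner will want next.
    return ["episode-101", "episode-102"]

def download(program_id: str) -> bytes:
    # Stand-in for a bulk (possibly peer-to-peer) transfer where latency is irrelevant.
    return f"video bytes for {program_id}".encode()

local_cache = {}
work = queue.Queue()

def prefetcher():
    while True:
        program_id = work.get()
        local_cache[program_id] = download(program_id)  # fill the cache in the background
        work.task_done()

threading.Thread(target=prefetcher, daemon=True).start()

for pid in predicted_programs(viewing_history=[]):
    work.put(pid)
work.join()  # a real client would keep running; this just waits for the demo downloads

# Later, "pressing play" reads from local storage instead of a live stream.
print(local_cache["episode-101"][:20])
```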

Finally, and probably most importantly, higher bandwidth allows us to economize on the time of the engineers building online applications. One of the consistent trends in the computer industry has been towards greater abstraction. There was a time when everyone wrote software in machine language. Now, a lot of software is written in high-level languages like Java, Perl, or Python that run slower but make life a lot easier for programmers. A decade ago, people trying to build rich web applications had to waste a lot of time optimizing their code to achieve acceptable performance on the slow hardware of the day. Today, computers are fast enough that developers can use high-level frameworks that are much more powerful but consume a lot more resources. Developers spend more time adding new features and less time trying to squeeze better performance out of the features they already have. Which means users get more and better applications.

The same principle is likely to apply to increased bandwidth, even beyond the point where we all have enough bandwidth to stream high-def video. Right now, web developers need to pay a fair amount of attention to whether data is stored on the client or the server and how to efficiently transmit it from one place to another. A world of abundant bandwidth will allow developers to do whatever makes the most sense computationally without worrying about the bandwidth constraints. Of course, I don’t know exactly what those frameworks will look like or what applications they will enable, but I don’t think it’s too much of a stretch to think that we’ll be able to continue finding uses for higher bandwidth for a long time.

Federal Circuit Reins in Business Method Patents

This has been a big year for patent law in the technology industry. A few weeks ago I wrote about the Supreme Court’s Quanta v. LG decision. Now the United States Court of Appeals for the Federal Circuit, which has jurisdiction over all patent appeals, has handed down a landmark ruling in the case of In re Bilski. The case dealt with the validity of patents on business methods, and a number of public interest organizations had filed amicus briefs. I offer my take on the decision in a story for Ars Technica. In a nutshell, the Federal Circuit rejected the patent application at issue in the case and signaled a newfound skepticism of “business method” patents.

The decision is surprising because the Federal Circuit has until recently been strongly in favor of expanding patent rights. During the 1990s, it handed down its Alappat and State Street decisions, which gave a green light to patents on software and business methods, two categories of innovation that had traditionally been regarded as ineligible for patent protection. Even as the evidence mounted earlier this decade that these patents were hindering, rather than promoting, technological innovation, the Federal Circuit showed no sign of backing down.

Now, however, the Federal Circuit’s attitude seems to have changed. The biggest factor, I suspect, is that after a quarter century of ignoring patent law, the Supreme Court has handed down a series of unanimous decisions overturning Federal Circuit precedents and harshly criticizing the court’s permissive patent jurisprudence. That, combined with the avalanche of bad press, seems to have convinced the Federal Circuit that the standards for patenting needed to be tightened up.

However, as Ben Klemens writes, Bilski is the start of an argument about the patentability of abstract inventions, not its end. The Federal Circuit formally abandoned the extremely permissive standard it established in State Street, reverting to the Supreme Court’s rule that an invention must be tied to a specific machine or a transformation of matter. But it deferred until future decisions the precise details of how closely an idea has to be tied to a specific machine in order to be eligible for patent protection. We know, for example, that a software algorithm (which is ultimately just a string of 1s and 0s) cannot be patented. But what if I take that string of 1s and 0s and write it onto a hard drive, which certainly is a machine? Does the resulting idea-machine hybrid become a patentable invention? As Ben points out, we don’t know, because the Federal Circuit explicitly deferred this question to future cases.

Still, there are a lot of hopeful signs here for those of us who would like to see an end to patents on software and business methods. The decision looks in some detail at the Supreme Court’s trio of software patent cases from the late 1970s and early 1980s, and seems conscious of the disconnect between those decisions and the Federal Circuit’s more recent precedents. Software and business method patents have developed a lot of institutional inertia over the last 15 years, so we’re unlikely to see a return to the rule that software and business methods are never patentable. But it’s safe to say that it’s going to start getting a lot harder to obtain patents on software and business methods.