April 24, 2014


Inject New Energy into Problem Solving – Principle #8 for Fostering Civic Engagement Through Digital Technologies

In response to my recent post arguing that the Federal government needs to use the social web more effectively to improve information sharing with the public, Michael Herz of the Benjamin N. Cardozo School of Law reached out and directed me to a comprehensive report he recently authored for the Administrative Conference of the United States, entitled “Using Social Media in Rulemaking: Possibilities and Barriers.” One of Mr. Herz’s colleagues described the report’s tone as one of “skeptical optimism.” Mr. Herz asked me specifically about the role of social media in the Federal agency rulemaking process. In short, I generally agree with his statement that “social media culture is at odds with the fundamental characteristics of notice-and-comment rulemaking” because filing insightful comments requires “time, thought, study of the agency proposal and rationale, articulating reasons rather than…off-the-top-of-one’s-head assertions of a bottom line.” We both agree, however, that social media is a valuable tool for Federal agencies to inform the public, particularly those people or groups whom an agency believes may have a vested interest in ongoing rulemakings.

Our e-mail exchange now has me thinking about why many local governments and residents are embracing technology-based solutions to urban problems, whereas the Federal government, as exemplified by the troubled implementation of the Affordable Care Act, has not been as effective in using the Internet, wireless technology, and social media to deliver services to the public. Today, I will discuss three reasons why it is easier to inject new energy into technology-based problem solving in local communities.


The New Ambiguity of "Open Government"

David Robinson and I have just released a draft paper—The New Ambiguity of “Open Government”—that describes, and tries to help solve, a key problem in recent discussions around online transparency. As the paper explains, the phrase “open government” has become ambiguous in a way that makes life harder for both advocates and policymakers, by combining the politics of transparency with the technologies of open data. We propose new, politically neutral terminology: the word “adaptable” to describe desirable features of data (and the word “inert” to describe their absence), used separately from descriptions of the governments that use these technologies.

Clearer language will serve everyone well, and we hope this paper will spark a conversation among those who focus on civic transparency and innovation. Thanks to Justin Grimes and Josh Tauberer for their helpful insights and discussions as we drafted this paper.

Download the full paper here.

Abstract:

“Open government” used to carry a hard political edge: it referred to politically sensitive disclosures of government information. The phrase was first used in the 1950s, in the debates leading up to passage of the Freedom of Information Act. But over the last few years, that traditional meaning has blurred, and has shifted toward technology.

Open technologies involve sharing data over the Internet, and all kinds of governments can use them, for all kinds of reasons. Recent public policies have stretched the label “open government” to reach any public sector use of these technologies. Thus, “open government data” might refer to data that makes the government as a whole more open (that is, more transparent), but might equally well refer to politically neutral public sector disclosures that are easy to reuse, but that may have nothing to do with public accountability. Today a regime can call itself “open” if it builds the right kind of web site—even if it does not become more accountable or transparent. This shift in vocabulary makes it harder for policymakers and activists to articulate clear priorities and make cogent demands.

This essay proposes a more useful way for participants on all sides to frame the debate: We separate the politics of open government from the technologies of open data. Technology can make public information more adaptable, empowering third parties to contribute in exciting new ways across many aspects of civic life. But technological enhancements will not resolve debates about the best priorities for civic life, and enhancements to government services are no substitute for public accountability.


Bilski and the Value of Experimentation

The Supreme Court’s long-awaited decision in Bilski v. Kappos brought closure to this particular patent prosecution, but not much clarity to the questions surrounding business method patents. The Court upheld the Federal Circuit’s conclusion that the claimed “procedure for instructing buyers and sellers how to protect against the risk of price fluctuations in a discrete section of the economy” was unpatentable, but threw out the “machine-or-transformation” test the lower court had used. In its place, the Court’s majority gave us a set of “clues” which future applicants, Sherlock Holmes-like, must use to discern the boundaries separating patentable processes from unpatentable “abstract ideas.”

The Court missed an opportunity to throw out “business method” patents, where a great many of these abstract ideas are currently claimed, and failed to address the abstraction of many software patents. Instead, Justice Kennedy’s majority opinion seemed to go out of its way to avoid deciding even the questions presented, simultaneously appealing to the new technological demands of the “Information Age”:

As numerous amicus briefs argue, the machine-or-transformation test would create uncertainty as to the patentability of software, advanced diagnostic medicine techniques, and inventions based on linear programming, data compression, and the manipulation of digital signals.

and yet re-upping the uncertainty on the same page:

It is important to emphasize that the Court today is not commenting on the patentability of any particular invention, let alone holding that any of the above-mentioned technologies from the Information Age should or should not receive patent protection.

The Court’s opinion dismisses the Federal Circuit’s brighter-line “machine-or-transformation” test in favor of hand-waving standards: a series of “clues,” “tools,” and “guideposts” pointing toward the unpatentable “abstract ideas.” While Kennedy notes that “This Age puts the possibility of innovation in the hands of more people,” his opinion leaves all of those people with new burdens of uncertainty, whether they seek patents or forgo patents’ exclusivity and risk running into the patents of others. No wonder Justice Stevens, who concurs in the rejection of Bilski’s application but would have thrown business method patents out with it, calls the whole thing “less than pellucid.”

The one thing the meandering makes clear is that while the Supreme Court doesn’t like the Federal Circuit’s test (despite the Federal Circuit’s attempt to derive it from prior Supreme Court precedents), neither do the Supremes want to propose a new test of their own. The decision, like prior patent cases to reach the Supreme Court, points to a larger structural problem: the lack of a diverse proving ground for patent cases.

Since 1982, patent cases, unlike most other cases in our federal system, have all been appealed to a single court, the United States Court of Appeals for the Federal Circuit. Thus while copyright appeals, for example, are heard in the regional circuit covering the district in which they originate (one of twelve regional circuits), all patent appeals are funneled to the Federal Circuit. And while its judges may be persuaded by other circuits’ opinions, no circuit is bound to follow its fellows, and circuits may “split” on legal questions. Consolidation in the Federal Circuit deprives the Supreme Court of such “circuit splits” in patent law. At most, the Court may have dissents from the Federal Circuit’s panel or en banc decision to draw on. If it doesn’t like the Federal Circuit’s test, the Supreme Court has no other appellate court to turn to.

Circuit splits are good for judicial decisionmaking. They permit experimentation and dialogue around difficult points of law. (The Supreme Court hears fewer than 5% of the cases appealed to it, but is twice as likely to take cases presenting inter-circuit splits.) Like the states in the federal system, multiple circuits provide a “laboratory [to] try novel social and economic experiments.” Diverse judges examining the same law, as presented in differing circumstances, can analyze it from different angles (and differing policy perspectives). A Supreme Court considering an issue ripened by the analysis of several courts is more likely to find a test it can support, and less likely to have to craft one from scratch or abjure the task. At the cost of temporary non-uniformity, we may get empirical evidence toward better interpretation.

At a time when “harmonization” is pushed as justification for treaties (and a uniform ratcheting-up of intellectual property regimes), the Bilski opinion suggests again that uniformity is overrated, especially if it’s uniform murk.


iPad: The Disneyland of Computers

Tech commentators have a love/hate relationship with Apple’s new iPad. Those who try it tend to like it, but many dislike its locked-down App Store, which allows only Apple-approved apps. Some people even see the iPad as the dawn of a new relationship between people and computers.

To me, the iPad is Disneyland.

I like Disneyland. It’s clean, safe, and efficient. There are lots of entertaining things to do. Kids can drive cars; adults can wear goofy hats with impunity. There’s a parade every afternoon, and an underground medical center in case you get sick.

All of this is possible because of central planning. Every restaurant and store on Disneyland’s Main Street is approved in advance by Disney. Every employee is vetted by Disney. Disneyland wouldn’t be Disneyland without central planning.

I like to visit Disneyland, but I wouldn’t want to live there.

There’s a reason the restaurants in Disneyland are bland and stodgy. It’s not just that centralized decision processes like Disney’s have trouble coping with creative, nimble, and edgy ideas. It’s also that customers know who’s in charge, so any bad dining experience will be blamed on Disney, making Disney wary of culinary innovation. In Disneyland the trains run on time, but they take you to a station just like the one you left.

I like living in a place where anybody can open a restaurant or store. I like living in a place where anybody can open a bookstore and sell whatever books they want. Here in New Jersey, the trains don’t always run on time, but they take you to lots of interesting places.

The richness of our cultural opportunities, and the creative dynamism of our economy, are only possible because of a lack of central planning. Even the best central planning process couldn’t hope to keep up with the flow of new ideas.

The same is true of Apple’s app store bureaucracy: there’s no way it can keep up with the flow of new ideas — no way it can offer the scope and variety of apps that a less controlled environment can provide. And like the restaurants of Disneyland, the apps in Apple’s store will be blander, because customers will blame the central planner for anything offensive an app might say.

But there’s a bigger problem with the argument offered by central planning fanboys. To see what it is, we need to look more carefully at why Disneyland succeeded when so many centrally planned economies failed so dismally.

What makes Disneyland different is that it is an island of central planning, embedded in a free society. This means that Disneyland can seek its suppliers, employees, and customers in a free economy, even while it centrally plans its internal operations. This can work well, as long as Disneyland doesn’t get too big — as long as it doesn’t try to absorb the free society around it.

The same is true of Apple and the iPad. The whole iPad ecosystem, from the hardware to Apple’s software to the third-party app software, is only possible because of the robust free-market structures that create and organize knowledge, and mobilize workers, in the technology industry. If Apple somehow managed to absorb the tech industry into its centrally planned model, the result would be akin to Disneyland absorbing all of America. That would be enough to frighten even the most rabid fanboy, but fortunately it’s not at all likely. The iPad, like Disneyland, will continue to be an island of central planning in a sea of decentralized innovation.

So, iPad users, enjoy your trip to Disneyland. I understand why you’re going there, and I might go there one day myself. But don’t forget: there’s a big exciting world outside, and you don’t want to miss it.


Information Technology Policy in the Obama Administration, One Year In

[Last year, I wrote an essay for Princeton's Woodrow Wilson School, summarizing the technology policy challenges facing the incoming Obama Administration. This week they published my follow-up essay, looking back on the Administration's first year. Here it is.]

Last year I identified four information technology policy challenges facing the incoming Obama Administration: improving cybersecurity, making government more transparent, bringing the benefits of technology to all, and bridging the culture gap between techies and policymakers. On these issues, the Administration’s first-year record has been mixed. Hopes were high that the most tech-savvy presidential campaign in history would lead to an equally transformational approach to governing, but bold plans were ground down by the friction of Washington.

Cybersecurity: The Administration created a new national cybersecurity coordinator (or “czar”) position but then struggled to fill it. Infighting over the job description — reflecting differences over how to reconcile security with other economic goals — left the czar relatively powerless. Cyberattacks on U.S. interests increased as the Administration struggled to get its policy off the ground.

Government transparency: This has been a bright spot. The White House pushed executive branch agencies to publish more data about their operations, and created rules for detailed public reporting of stimulus spending. Progress has been slow — transparency requires not just technology but also cultural changes within government — but the ship of state is moving in the right direction, as the public gets more and better data about government, and finds new ways to use that data to improve public life.

Bringing technology to all: On the goal of universal access to technology, it’s too early to tell. The FCC is developing a national broadband plan, in hopes of bringing high-speed Internet to more Americans, but this has proven to be a long and politically difficult process. Obama’s hand-picked FCC chair, Julius Genachowski, inherited a troubled organization but has done much to stabilize it. The broadband plan will be his greatest challenge, with lobbyists on all sides angling for advantage as our national network expands.

Closing the culture gap: The culture gap between techies and policymakers persists. In economic policy debates, health care and the economic crisis have understandably taken center stage, but there seems to be little room even at the periphery for the innovation agenda that many techies had hoped for. The tech policy discussion seems to be dominated by lawyers and management consultants, as in past Administrations. Too often, policymakers still see techies as irrelevant, and techies still see policymakers as clueless.

In recent days, creative thinking on technology has emerged from an unlikely source: the State Department. On the heels of Google’s surprising decision to back away from the Chinese market, Secretary of State Clinton made a rousing speech declaring Internet freedom and universal access to information to be important goals of U.S. foreign policy. This will lead to friction with the Chinese and other authoritarian governments, but our principles are worth defending. The Internet can be a powerful force for transparency and democratization, around the world and at home.


Search Neutrality ≠ Net Neutrality

Sunday’s New York Times featured a provocative op-ed arguing that, in addition to regulating “net neutrality,” the FCC should also effectuate “search neutrality” – requiring search providers to rank results without consideration of business entities. The author heaps particular scorn upon Google for promoting its own context-relevant services (e.g., maps and weather) at the fore of search results. Others have already reviewed the proposal, leveled implementation critiques, and criticized the author’s gripes with his own site. My aim here is to rebut the piece’s core argument: the analogy of search neutrality to net neutrality. Clearly both are debates about the promotion of innovation and competition through a level playing field. But beyond this commonality, the parallel breaks down.

Net neutrality advocates call for regulation because ISP discrimination could render innovative services either impossible to implement owing to traffic restrictions or too expensive to deploy owing to traffic pricing. Consumers cannot “vote with their dollars” for a nondiscriminatory ISP since most locales have few providers and the market is hard to break into. Violations of net neutrality, the argument goes, threaten to nip entire industries in the bud and rob the economy of growth.

Violations of search neutrality, on the other hand, at most increase marketing costs for an innovative or competitive offering. Consumers are more than clever enough to seek and use an alternative to a weaker Google offering (Yelp vs. Google restaurant reviews, anyone?). The author of the op-ed cites Google Maps’ dethroning of MapQuest as evidence of the power of search non-neutrality; on the contrary, I would contend users flocked to Google’s service because it was, well, better. If Google Maps featured MapQuest’s clunky interface and vice versa, would you use it? A glance at historical map site statistics empirically rebuts the author’s claim. The mid-May 2007 introduction of Google’s context-relevant (“universal”) search does not appear correlated with any irregular shift in map site traffic.

Moreover, unlike with net neutrality, search consumers stand ready to “vote with their [ad] dollars.” Should Google consistently favor its own services to the detriment of search result quality, consumers can effortlessly shift to any of its numerous competitors. It is no coincidence that Google sinks enormous manpower into improving result quality.

There may also be a benefit to the increase in marketing costs from existing violations of search neutrality, like Google’s map and weather offerings. If a service would have to be extensively marketed to compete with Google’s promoted offering – say, a current weather site vs. searching for “Stanford weather” – the market is sending a signal that consumers don’t care about the marginal quality of the product, and the non-Google provider should quit the market.

There is merit to the observation that violations of search neutrality are, on the margin, slightly anti-competitive. But this issue is dwarfed by the potential economy-scale implications of net neutrality. The FCC should not deviate in its rulemaking.


DARPA Pays MIT to Pay Someone Who Recruited Someone Who Recruited Someone Who Recruited Someone Who Found a Red Balloon

DARPA, the Defense Department’s research arm, recently sponsored a “Network Challenge” in which groups competed to find ten big red weather balloons that were positioned in public places around the U.S. The first team to discover where all the balloons were would win $40,000.

A team from MIT won, using a clever method of sharing the cash with volunteers. MIT let anyone join their team, and they paid money to the members who found balloons, as well as the people who recruited the balloon-finders, and the people who recruited the balloon-finder-finders. For example, if Alice recruited Bob, and Bob recruited Charlie, and Charlie recruited Diane, and Diane found a balloon, then Alice would get $250, Bob would get $500, Charlie would get $1000, and Diane would get $2000. Multi-level marketing meets treasure hunting! It’s the Amway of balloon-hunting!
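
To make the arithmetic concrete, here is a minimal sketch of the halving scheme described above (the function name and the flat $2,000 finder’s reward are my own illustrative choices, not MIT’s actual code):

```python
def balloon_payouts(chain, finder_reward=2000.0):
    """Compute rewards along a recruitment chain.

    `chain` lists people in recruitment order, ending with the person
    who found the balloon, e.g. ["Alice", "Bob", "Charlie", "Diane"].
    The finder gets `finder_reward`; each recruiter further up the
    chain gets half of what the person they recruited received.
    """
    payouts = {}
    reward = finder_reward
    for person in reversed(chain):  # walk from the finder back up the chain
        payouts[person] = reward
        reward /= 2                 # each level up earns half as much
    return payouts


print(balloon_payouts(["Alice", "Bob", "Charlie", "Diane"]))
# {'Diane': 2000.0, 'Charlie': 1000.0, 'Bob': 500.0, 'Alice': 250.0}
```

Because each level earns half as much as the one below it, the total paid out per balloon is a geometric series that never exceeds twice the finder’s reward, which presumably is what allowed MIT to promise payouts along arbitrarily long recruitment chains without exceeding the prize money.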

On DARPA’s side, this was inspired by the famous Grand Challenge and Urban Challenge, in which teams built autonomous cars that had to drive themselves safely through a desert landscape and then a city.

The autonomous-car challenges have obvious value, both for the military and in ordinary civilian life. But it’s hard to say the same for the balloon-hunting challenge. Granted, the balloon-hunting prize was much smaller, but it’s still hard to avoid the impression that the balloon hunt was more of a publicity stunt than a spur to research. We already knew that the Internet lets people organize themselves into effective groups at a distance. We already knew that people like a scavenger hunt, especially if you offer significant cash prizes. And we already knew that you can pay Internet strangers to do jobs for you. But how are we going to apply what we learned in the balloon hunt?

The autonomous-car challenge has value because it asks the teams to build something that will eventually have practical use. Someday we will all have autonomous cars, and they will have major implications for our transportation infrastructure. The autonomous-car challenge helped to bring that day closer. But will the day ever come when all, or even many, of us will want to pay large teams of people to find things for us?

(There’s more to be said about the general approach of offering challenge prizes as an alternative to traditional research funding, but that’s a topic for another day.)


Wu on Zittrain's Future of the Internet

Related to my previous post about the future of open technologies, Tim Wu has a great review of Jonathan Zittrain’s book. Wu reviews the origins of the 20th century’s great media empires, which steadily consolidated once-fractious markets. He suggests that the Internet likely won’t meet the same fate. My favorite part:

In the 2000s, AOL and Time Warner took the biggest and most notorious run at trying to make the Internet more like traditional media. The merger was a bet that unifying content and distribution might yield the kind of power that Paramount and NBC gained in the 1920s. They were not alone: Microsoft in the 1990s thought that, by owning a browser (Explorer), dial-in service (MSN), and some content (Slate), it could emerge as the NBC of the Internet era. Lastly, AT&T, the same firm that built the first radio network, keeps signaling plans to assert more control over “its pipes,” or even create its own competitor to the Internet. In 2000, when AT&T first announced its plans to enter the media market, a spokesman said: “We believe it’s very important to have control of the underlying network.”

Yet so far these would-be Zukors and NBCs have crashed and burned. Unlike radio or film, the structure of the Internet stoutly resists integration. AOL tried, in the 1990s, to keep its users in a “walled garden” of AOL content, but its users wanted the whole Internet, and finally AOL gave in. To make it after the merger, AOL-Time Warner needed to build a new garden with even higher walls–some way for AOL to discriminate in favor of Time Warner content. But AOL had no real power over its users, and pretty soon it did not have many of them left.

I think the monolithic media firms of the 20th century ultimately owed their size and success to economies of scale in the communication technologies of their day. For example, a single newspaper with a million readers is a lot cheaper to produce and distribute than ten newspapers with 100,000 readers each. And so the larger film studios, newspapers, broadcast networks, and so on were able to squeeze out smaller players. Once one newspaper in a given area began reaping the benefits of scale, it became difficult for its competitors to turn a profit, and a lot of them went out of business or were acquired at fire-sale prices.

On the Internet, distributing content is so cheap that economies of scale in distribution just don’t matter. On a per-reader basis, my personal blog certainly costs more to operate than CNN. But the cost is so small that it’s simply not a significant factor in deciding whether to continue publishing it. Even if the larger sites capture the bulk of the readership and advertising revenue, that doesn’t preclude a “long tail” of small, often amateur sites with a wide variety of different content.
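
A quick back-of-the-envelope sketch of the economies-of-scale point, with entirely made-up numbers: average cost per reader is fixed costs spread over the audience plus the marginal cost of one more copy, so print costs fall sharply with scale while web costs are negligible at any scale.

```python
def per_reader_cost(fixed_cost, marginal_cost, readers):
    """Average cost per reader: fixed costs spread over the whole
    audience, plus the marginal cost of serving one more reader."""
    return fixed_cost / readers + marginal_cost


# Made-up print-newspaper numbers: presses, staff, and trucks are fixed,
# so one paper with a million readers beats ten papers with 100,000 each.
print(per_reader_cost(5_000_000, 0.50, 1_000_000))  # 5.5 per reader
print(per_reader_cost(5_000_000, 0.50, 100_000))    # 50.5 per reader

# Made-up web numbers: a small blog has tiny fixed and marginal costs,
# so scale barely matters to whether it is worth publishing.
print(per_reader_cost(10.0, 0.0001, 1_000))         # ~0.01 per reader
```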


Economic Growth, Censorship, and Search Engines

Economic growth depends on an ability to access relevant information. Although censorship prevents access to certain information, the direct consequences of censorship are well-known and somewhat predictable. For example, blocking access to Falun Gong literature is unlikely to harm a country’s consumer electronics industry. On the web, however, information of all types is interconnected. Blocking a web page might have an indirect impact reaching well beyond that page’s contents. To understand this impact, let’s consider how search results are affected by censorship.

Search engines keep track of what’s available on the web and suggest useful pages to users. No comprehensive list of web pages exists, so search providers check known pages for links to unknown neighbors. If a government blocks a page, all links from the page to its neighbors are lost. Unless detours exist to the page’s unknown neighbors, those neighbors become unreachable and remain unknown. These unknown pages can’t appear in search results — even if their contents are uncontroversial.
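
Here is a minimal sketch of that reachability effect, using a toy link graph rather than a real crawler (the page names are hypothetical). Blocking a single page hides its neighbors from the crawl even though they were never censored themselves:

```python
from collections import deque


def crawlable(links, seeds, blocked):
    """Return the set of pages a crawler can discover.

    `links` maps each page to the pages it links to, `seeds` are the
    pages the crawler already knows about, and `blocked` are pages the
    censor has made unreachable (so their outgoing links are lost too).
    """
    seen = set()
    queue = deque(p for p in seeds if p not in blocked)
    while queue:
        page = queue.popleft()
        if page in seen:
            continue
        seen.add(page)
        for neighbor in links.get(page, []):
            if neighbor not in blocked and neighbor not in seen:
                queue.append(neighbor)
    return seen


# Toy web: a harmless electronics-review page is linked only from a page
# the censor blocks, so it silently drops out of the index.
links = {
    "portal": ["news", "forum"],
    "forum": ["electronics-reviews"],  # the only path to the reviews
    "news": [],
    "electronics-reviews": [],
}
print(crawlable(links, seeds=["portal"], blocked=set()))
# {'portal', 'news', 'forum', 'electronics-reviews'}
print(crawlable(links, seeds=["portal"], blocked={"forum"}))
# {'portal', 'news'}  (the uncontroversial reviews page vanishes too)
```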

When presented with a query, search engines respond with relevant known pages sorted by expected usefulness. Censorship also affects this sorting process. In predicting usefulness, search engines consider both the contents of pages and the links between pages. Links here are like friendships in a stereotypical high school popularity contest: the more popular friends you have, the more popular you become. If your friend moves away, you become less popular, which makes your friends less popular by association, and so on. Even people you’ve never met might be affected.

“Popular” web pages tend to appear higher in search results. Censoring a page distorts this popularity contest and can change the order of even unrelated results. As more pages are blocked, the censored view of the web becomes increasingly distorted. As an aside, Ed notes that blocking a page removes more than just the offending material. If censors block Ed’s site due to an off-hand comment on Falun Gong, he also loses any influence he has on information security.
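
The “popularity contest” here is essentially link analysis of the PageRank variety. The toy power-iteration sketch below is not any search engine’s actual ranking code, and the four-page graph is invented, but it shows how removing one page shifts the scores of pages that never linked to it or were linked from it:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy PageRank computed by power iteration over an adjacency dict."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            targets = outs or pages  # a dangling page spreads its rank evenly
            share = damping * rank[p] / len(targets)
            for q in targets:
                new[q] += share
        rank = new
    return rank


def censor(links, blocked):
    """Drop blocked pages and every link pointing to them."""
    return {p: [q for q in outs if q not in blocked]
            for p, outs in links.items() if p not in blocked}


links = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["b"]}
before = pagerank(links)
after = pagerank(censor(links, {"d"}))

# "a" and "c" neither link to "d" nor receive links from it, yet their
# scores shift when "d" is blocked, which can reorder unrelated results.
print({p: round(r, 3) for p, r in before.items()})
print({p: round(r, 3) for p, r in after.items()})
```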

These effects would typically be rare and have a disproportionately small impact on popular pages. Google’s emphasis on the long tail, however, suggests that considerable value lies in providing high-quality results covering even less-popular pages. To avoid these issues, a government could grant a limited set of individuals full web access so that they could develop tools like search engines, but this approach seems likely to stifle competition and innovation.

Countries with greater censorship might produce lower-quality search engines, but Google, Yahoo, Microsoft, and others can provide high-quality search results in those countries. These companies can access uncensored data, mitigating the indirect effects of censorship. This underscores the significance of measures like the Global Network Initiative, whose participants include Google, Yahoo, and Microsoft. Among other things, the initiative provides guidelines for participants regarding when and how information access may be restricted. The effectiveness of this particular initiative remains to be seen, but such measures may give leading search engines greater leverage to resist arbitrary censorship.

Search engines are unlikely to be the only tools adversely impacted by the indirect effects of censorship. Any tool that relies on links between information (think social networks) might be affected, and repressive states place themselves at a competitive disadvantage in developing these tools. Future developments might make these points moot: in a recent talk at the Center, Ethan Zuckerman mentioned tricks and trends that might make censorship more difficult. In the meantime, however, governments that censor information may increasingly find that they do so at their own expense.


How Fragile Is the Internet?

With Barack Obama’s election, we’re likely to see a revival of the network neutrality debate. Thus far the popular debate over the issue has produced more heat than light. On one side have been people who scoff at the very idea of network neutrality, arguing either that network neutrality is a myth or that we’d be better off without it. On the other are people who believe the open Internet is hanging on by its fingernails. These advocates believe that unless Congress passes new regulations quickly, major network providers will transform the Internet into a closed network where only their preferred content and applications are available.

One assumption that seems to be shared by both sides in the debate is that the Internet’s end-to-end architecture is fragile. At times, advocates on both sides of the debate seem to think that AT&T, Verizon, and Comcast have big levers in their network closets labeled “network neutrality” that they will set to “off” if Congress doesn’t stop them. In a new study for the Cato Institute, I argue that this assumption is unrealistic. The Internet has the open architecture it has for good technical reasons. The end-to-end principle is deeply embedded in the Internet’s architecture, and there’s no straightforward way to change it without breaking existing Internet applications.

One reason is technical. Advocates of regulation point to a technology called deep packet inspection as a major threat to the Internet’s open architecture. DPI allows network owners to look “inside” Internet packets, reconstructing the web page, email, or other information as it comes across the wire. This is an impressive technology, but it’s also important to remember its limitations. DPI is inherently reactive and brittle. It requires human engineers to precisely describe each type of traffic that is to be blocked. That means that as the Internet grows ever more complex, more and more effort would be required to keep DPI’s filters up to date. It also means that configuration problems will lead to the accidental blocking of unrelated traffic.
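
As a purely hypothetical illustration of that reactivity and brittleness (the signature list and example payloads below are invented, not any vendor’s actual rules), consider signature-based payload matching in the spirit of DPI: traffic that no engineer has written a pattern for slips through, while broad patterns misfire on innocent traffic.

```python
import re

# Hand-maintained signatures: every new protocol, version, or encoding
# needs its own pattern, and broad patterns invite false positives.
SIGNATURES = {
    "bittorrent-handshake": re.compile(rb"\x13BitTorrent protocol"),
    "http-video": re.compile(rb"Content-Type: video/", re.IGNORECASE),
}


def classify(payload: bytes) -> str:
    """Return the name of the first matching signature, or 'unknown'."""
    for name, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return name
    return "unknown"


# A plain-text email that merely mentions a video MIME type is misclassified,
# while a protocol nobody has written a rule for sails through as unknown.
print(classify(b"FYI, the server replied with Content-Type: video/mp4 again"))  # http-video
print(classify(b"\x01new-p2p-protocol-v2\x00payload"))                          # unknown
```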

The more fundamental reason is economic. The Internet works as well as it does precisely because it is decentralized. No organization on Earth has the manpower that would be required to directly manage all of the content and applications on the Internet. Networks like AOL and CompuServe that were managed that way got bogged down in bureaucracy while they were still a small fraction of the Internet’s current size. It is not plausible that bureaucracies at Comcast, AT&T, or Verizon could manage their TCP/IP networks the way AOL ran its network a decade ago.

Of course what advocates of regulation fear is precisely that these companies will try to manage their networks this way, fail, and screw the Internet up in the process. But I think this underestimates the magnitude of the disaster that would befall any network provider that tried to convert their Internet service into a proprietary network. People pay for Internet access because they find it useful. A proprietary Internet would be dramatically less useful than an open one because network providers would inevitably block an enormous number of useful applications and websites. A network provider that deliberately broke a significant fraction of the content or applications on its network would find many fewer customers willing to pay for it. Customers that could switch to a competitor would. Some others would simply cancel their home Internet service and rely instead on Internet access at work, school, libraries, etc. And many customers that had previously taken higher-speed Internet service would downgrade to basic service. In short, even in an environment of limited competition, reducing the value of one’s product is rarely a good business strategy.

This isn’t to say that ISPs will never violate network neutrality. A few have done so already. The most significant was Comcast’s interference with the BitTorrent protocol last year. I think there’s plenty to criticize about what Comcast did. But there’s a big difference between interfering with one networking protocol and the kind of comprehensive filtering that network neutrality advocates fear. And it’s worth noting that even Comcast’s modest interference with network neutrality provoked a ferocious response from customers, the press, and the political process. The Comcast/BitTorrent story certainly isn’t going to make other ISPs think that more aggressive violations of network neutrality would be a good business strategy.

So it seems to me that new regulations are unnecessary to protect network neutrality. They are likely to be counterproductive as well. As Ed has argued, defining network neutrality precisely is surprisingly difficult, and enacting a ban without a clear definition is a recipe for problems. In addition, there’s a real danger of what economists call regulatory capture—that industry incumbents will find ways to turn regulatory authority to their advantage. As I document in my study, this is what happened with 20th-century regulation of the railroad, airline, and telephone industries. Congress should proceed carefully, lest regulations designed to protect consumers from telecom industry incumbents wind up protecting incumbents from competition instead.