April 18, 2014


The Next Step towards an Open Internet

Now that the FCC has finally acted to safeguard network neutrality, the time has come to take the next step toward creating a level playing field on the rest of the Information Superhighway. Network neutrality rules are designed to ensure that large telecommunications companies do not squelch free speech and online innovation. However, it is increasingly evident that broadband companies are not the only threat to the open Internet. In short, federal regulators need to act now to safeguard social network neutrality.

There could be no better time to examine this issue. Facebook is the dominant social network in countries other than Brazil, where everybody uses Friendster or something. Having achieved near-monopoly status in the social networking market, Facebook now dominates the web, permeating all aspects of the information landscape. More than 2.5 million websites have integrated with Facebook. Indeed, there is evidence that people are turning to social networks instead of faceless search engines for many types of queries.

Social networks will soon be the primary gatekeepers standing between average Internet users and the web’s promise of information utopia. But can we trust them with this new-found power? Friends are unlikely to be an unbiased or complete source of information on most topics, creating silos of ignorance among the disparate components of the social graph. Meanwhile, social networks will have the power to make or break Internet businesses built atop the enormous quantity of referral traffic they will be able to generate. What will become of these businesses when friendships and tastes change? For example, there is recent evidence that social networks are hastening the decline of the music industry by promoting unknown artists who provide their music and streaming videos for free.

Social network usage patterns reflect deep divisions of race and class. Unregulated social networks could rapidly become virtual gated communities, with users cut off from others who could provide them with a diversity of perspectives. Right now, there’s no regulation of the immense decision-influencing power that friends have, and there are no measures in place to ensure that friends provide a neutral and balanced set of viewpoints. Fortunately, policy-makers have a rare opportunity to preempt the dangerous consequences of leaving this new technology to develop unchecked.

The time has come to create a Federal Friendship Commission to ensure that the immense power of social networks is not abused. For example, social network users who have their friend requests denied currently have no legal recourse. Users should have the option to appeal friend rejections to the FFC to verify that they don’t violate social network neutrality. Unregulated social networks will give many users a distorted view of the world dominated by the partisan, religious, and cultural prejudices of their immediate neighbors in the social graph. The FFC can correct this by requiring social networks to give equal time to any biased wall post.

However, others have suggested lighter-touch regulation, simply requiring each person to have friends of many races, religions, and political persuasions. Still others have suggested allowing information harms to be remedied through direct litigation—perhaps via tort reform that recognizes a new private right of action against violations of the “duty to friend.” As social networking software will soon be found throughout all aspects of society, urgent intervention is needed to forestall “The Tyranny of The Farmville.”

Of course, social network neutrality is just one of the policy tools regulators should use to ensure a level playing field. For example, the Department of Justice may need to more aggressively employ its antitrust powers to combat the recent dangerous concentration of social networking market share on popular micro-blogging services. But enacting formal social network neutrality rules is an important first step towards a more open web.


iPad: The Disneyland of Computers

Tech commentators have a love/hate relationship with Apple’s new iPad. Those who try it tend to like it, but many dislike its locked-down App Store which only allows Apple-approved apps. Some people even see the iPad as the dawn of a new relationship between people and computers.

To me, the iPad is Disneyland.

I like Disneyland. It’s clean, safe, and efficient. There are lots of entertaining things to do. Kids can drive cars; adults can wear goofy hats with impunity. There’s a parade every afternoon, and an underground medical center in case you get sick.

All of this is possible because of central planning. Every restaurant and store on Disneyland’s Main Street is approved in advance by Disney. Every employee is vetted by Disney. Disneyland wouldn’t be Disneyland without central planning.

I like to visit Disneyland, but I wouldn’t want to live there.

There’s a reason the restaurants in Disneyland are bland and stodgy. It’s not just that centralized decision processes like Disney’s have trouble coping with creative, nimble, and edgy ideas. It’s also that customers know who’s in charge, so any bad dining experience will be blamed on Disney, making Disney wary of culinary innovation. In Disneyland the trains run on time, but they take you to a station just like the one you left.

I like living in a place where anybody can open a restaurant or store. I like living in a place where anybody can open a bookstore and sell whatever books they want. Here in New Jersey, the trains don’t always run on time, but they take you to lots of interesting places.

The richness of our cultural opportunities, and the creative dynamism of our economy, are only possible because of a lack of central planning. Even the best central planning process couldn’t hope to keep up with the flow of new ideas.

The same is true of Apple’s app store bureaucracy: there’s no way it can keep up with the flow of new ideas — no way it can offer the scope and variety of apps that a less controlled environment can provide. And like the restaurants of Disneyland, the apps in Apple’s store will be blander because customers will blame the central planner for anything offensive they might say.

But there’s a bigger problem with the argument offered by central planning fanboys. To see what it is, we need to look more carefully at why Disneyland succeeded when so many centrally planned economies failed so dismally.

What makes Disneyland different is that it is an island of central planning, embedded in a free society. This means that Disneyland can seek its suppliers, employees, and customers in a free economy, even while it centrally plans its internal operations. This can work well, as long as Disneyland doesn’t get too big — as long as it doesn’t try to absorb the free society around it.

The same is true of Apple and the iPad. The whole iPad ecosystem, from the hardware to Apple’s software to the third-party app software, is only possible because of the robust free-market structures that create and organize knowledge, and mobilize workers, in the technology industry. If Apple somehow managed to absorb the tech industry into its centrally planned model, the result would be akin to Disneyland absorbing all of America. That would be enough to frighten even the most rabid fanboy, but fortunately it’s not at all likely. The iPad, like Disneyland, will continue to be an island of central planning in a sea of decentralized innovation.

So, iPad users, enjoy your trip to Disneyland. I understand why you’re going there, and I might go there one day myself. But don’t forget: there’s a big exciting world outside, and you don’t want to miss it.


TV Everywhere: Collusion Anywhere?

FreePress and the National Cable & Telecommunications Association (NCTA) are talking past each other about TV Everywhere, a new initiative from the cable TV industry. FreePress says TV Everywhere is the cable industry’s collusive attempt to limit competition; the NCTA says it’s an exciting new product opportunity for consumers. Let’s unpack this issue and see who might have a point, and who is blowing smoke.

We’re at a critical point in the history of television. In recent years, most people have gotten TV shows from a traditional cable or satellite service. Now more and more people are getting shows on the Internet. Cable companies need to adapt, somehow, or become dinosaurs.

Which brings us to TV Everywhere. The idea, according to the NCTA, is for cable companies to offer their residential subscribers online access to the same shows they get at home. Existing consumers get more, at no extra charge — who would complain about that? — but only if they keep buying traditional cable service.

FreePress tells a different story, in which cable industry companies have agreed among themselves that this is their sole Internet distribution strategy. If such an agreement exists, it is problematic — it looks like a classic market division agreement, which is bad for consumers and (as I understand it) presumptively illegal.

To understand why this would be bad, consider an analogy. Suppose there are only two pizza restaurants in Princeton, Alice’s Pizza and Bob’s Pizza, and neither one offers home delivery. Customers want delivery, so both restaurants are considering how to provide it. Alice and Bob meet, and they agree that Alice’s will only deliver to customers east of Nassau Street, and Bob’s will only deliver to customers west of Nassau Street. Alice and Bob have divided the market. Customers suffer because of the lack of competition.

Now obviously Alice and Bob are free to set reasonable limits on where they will deliver. Some customers may be too far away, or too difficult to deliver to for some reason. But customers would rightly complain if Alice and Bob agreed to divide the market. Even if we didn’t have smoking-gun evidence of an agreement, there might be very strong circumstantial evidence, for example if Alice offered to deliver to places five miles away while refusing to deliver to homes directly across the street from her Nassau Street restaurant, or if Alice and Bob’s restaurants were right next to each other but had totally disjoint delivery areas.

Notice too that Alice and Bob can’t get off the hook by pointing out that they are offering a new service — delivery — that they had never offered before. The problem is not that they are offering a new service, but that they have agreed not to offer certain other services.

How does this analogy apply to cable TV? Alice and Bob are like the cable companies, which are considering expanding beyond their traditional service. Home delivery of pizza is like Internet delivery of TV shows. As the cable industry expands to offer TV shows on the Internet, are they open to competing against each other, or have they agreed not to do so? If the cable companies have made an agreement to offer online TV shows only to their own residential customers, that looks like an agreement to divide the market — each company will be offering its product only in the limited geographic areas where it has a cable TV license.

So the key question — really the only one that matters, as far as I can see — is whether the cable companies have agreed not to compete. FreePress says, or strongly implies, that there is such an agreement. NCTA says there is not.

Who is right? Unfortunately the publicly available facts are consistent with either theory. Maybe TV Everywhere is just the first step and the cable companies will soon enough be competing with each other to distribute shows to Internet customers wherever they may be. Or maybe the companies have decided as a group to restrict themselves to TV Everywhere style services within geographic limits (or to otherwise restrict business models or prices).

At this point we can’t tell who is right. FreePress offers indirect but suggestive circumstantial evidence that questionable discussions might have occurred within the cable industry. The NCTA mostly just changes the subject, talking about the complexity of their industry and praising cable companies for offering shows on the Internet at all.

Unfortunately, public discourse about industry structure often confuses issues like this. We often say things like “the cable industry is worried about X” or “the cable industry wants Y”. That could be a kind of shorthand, meaning that the individual companies in the industry, facing competitive pressures, generally tend to worry about X or to want Y — perfectly reasonable market behavior. Or it could reflect an assumption that the industry acts as a unit, which of course is problematic. This ambiguity is especially common in political/policy debates, to our detriment. We’d be better off saying things like “cable companies worry about X” or “cable companies want Y”, just to remind ourselves that these are supposed to be independent actors who decide independently what they want.

For now, I’d say the cable companies bear watching. As the companies lay out their Internet strategies and products, I hope the antitrust authorities are watching closely. If the cable companies are really acting as competing companies, this will be obvious from their actions.


New York AG Files Antitrust Suit Against Intel

Yesterday, New York’s state Attorney General filed what could turn out to be a major antitrust suit against Intel. The suit accuses Intel of taking illegal steps to exclude a competitor, AMD, from the market.

All we have so far is the NYAG’s complaint, which tells one side of the case. Intel will have ample opportunity to respond, and the NYAG will ultimately have the burden of backing up its allegations with proof — so caution is in order at this point. Still, the complaint lays out the shape of the NYAG’s case.

The case concerns the market for x86-compatible microprocessors, which are the “brains” of most personal computers. Intel dominates this market but a rival company, AMD, has long been trying to build market share. The complaint offers a long narrative of Intel’s (and AMD’s) relationships with major PC makers (“OEMs”, in the jargon) such as Dell, HP, and IBM — the customers who buy x86 processors from Intel and AMD.

The crux of the case is the allegation that Intel paid OEMs to not buy from AMD. This is reminiscent of one aspect of the big Microsoft antitrust case of 1998, in which one of the DOJ’s claims was that Microsoft had paid people not to do business with Netscape.

I’ll leave it to the experts to debate the economic niceties, but as I understand it there is a distinction between paying someone to buy more of your product (e.g. giving a volume discount) as opposed to paying someone to buy less of your rival’s product. The former is generally fine, but if you have monopoly power the latter is suspect.
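The distinction is easier to see with numbers. Here is a minimal sketch, with entirely hypothetical figures (none of them from the complaint), contrasting an ordinary volume discount with a payment tied to how little the buyer takes from a rival:

```python
# Hypothetical numbers, purely to illustrate the distinction;
# nothing here comes from the NYAG's complaint.

def volume_discount_price(units, list_price=100.0):
    """A typical volume discount: the per-unit price falls as the buyer
    purchases more of the seller's own product. Generally lawful."""
    if units >= 10_000:
        return list_price * 0.85
    if units >= 1_000:
        return list_price * 0.95
    return list_price

def exclusion_payment(rival_units, baseline=1_000, per_unit=5.0):
    """The suspect pattern: the seller pays the buyer more as the buyer
    purchases *less* of the rival's product, relative to some baseline."""
    avoided = max(0, baseline - rival_units)
    return avoided * per_unit
```

The first function rewards buying more from the seller; the second rewards buying less from the rival, which is where the antitrust concern arises for a firm with monopoly power.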

As the NYAG tells it, Intel tried to pretend the payments were for something else, but the participants knew what was really going on: that the payments would stop if an OEM started buying more from AMD. The evidence on this point could turn out to be important. Does the NYAG have “smoking gun” emails in which Intel made this explicit? Does the evidence show that OEMs understood the arrangement as the NYAG claims? I assume there’s a huge trove of email evidence that both sides will be digesting.

It will be interesting to watch this case develop. Thanks to tools like RECAP, many of the case documents will be available to the public. Stay tuned for more improvements to RECAP that will provide even better access.


iPhone Apps: Apple Picks a Little, Talks a Little

Last week Apple, in an incident destined for the textbooks, rejected an iPhone app called Eucalyptus, which lets you download and read classic public-domain books from Project Gutenberg. The rejection meant that nobody could download or use the app (without jailbreaking their phone). Apple’s rationale? Some of the books, in Apple’s view, were inappropriate.

Apple’s behavior put me in mind of the Pick-a-Little Ladies from the classic musical The Music Man. These women, named for their signature song “Pick a Little, Talk a Little,” condemn Marian the Librarian for having inappropriate books in her library:

Maud: Professor, her kind of woman doesn’t belong on any committee. Of course, I shouldn’t tell you this but she advocates dirty books.

Harold: Dirty books?!

Alma: Chaucer!

Ethel: Rabelais!

Eulalie: Balzac!

This is pretty much the scene we saw last week, with the Eucalyptus app in the role of Marian — providing works by Chaucer, Rabelais, and Balzac — and Apple in the role of the Pick-a-Little Ladies. Visualize Steve Jobs, in his black turtleneck and jeans, transported back to 1912 Iowa and singing along with these frumpy busybodies.

Later in The Music Man, the Pick-a-Little Ladies decide that Marian is all right after all, and they praise her for offering great literature. (“The Professor told us to read those books, and we simply adored them all!”) In the same way, Apple, after the outcry over its muzzling of Eucalyptus, reversed course and un-rejected Eucalyptus. Now we can all get Chaucer! Rabelais! Balzac! on our iPhones.

But there is one important difference between Apple and the Pick-a-Little Ladies. Apple had the power to veto Eucalyptus, but the Ladies couldn’t stop Marian from offering dirty books. The Ladies were powerless because Old Man Mason had cleverly bequeathed the library building to the town but the books to Marian. In today’s terms, Mason had jailbroken the library.

All of this highlights the downside of Apple’s controlling strategy. It’s one thing to block apps that are fraudulent or malicious, but Apple has gone beyond this to set itself up as the arbiter of good taste in iPhone apps. If you were Apple, would you rather be the Pick-a-Little Ladies, presuming to sit in judgment over the town, or Old Man Mason, letting people make their own choices?


European Antitrust Fines Against Intel: Possibly Justified

Last week the European Commission competition authorities charged Intel with anticompetitive behavior in the market for microprocessor chips, and levied a €1.06 billion ($1.45 billion) fine on the company. Some commentators attacked the ruling as ridiculous on its face. I disagree. Let me explain why the European action, though not conclusively justified at this point, is at least plausible.

The starting point of any competition analysis is to recall the purpose of competition law: not to protect rival firms (such as AMD in this case), but to protect competition for the benefit of consumers. The key is to understand what is fair competition and what is not. If a firm dominates a market, and even drives other firms out, but does so by producing better products at better prices, they deserve applause. If a dominant firm takes steps that are aimed more at undermining competition than at serving customers, then they may be crossing the line into anticompetitive behavior.

To do even a superficial analysis in a single blog post, we’re going to have to make some assumptions. First, for the sake of this post let’s accept as true the EC’s claims about Intel’s specific actions. Second, let’s set aside the details of European law and instead ask whether Intel’s actions were fair and justified. Third, let’s assume that there is a single market for processor chips, in the sense that any processor chip can be used in any system. A serious analysis would have to consider carefully all of these factors, but these assumptions will help us get started.

With all that in mind, does the EC have a plausible case against Intel?

First we have to ask whether Intel has monopoly power. Economists define monopoly power as the ability to raise prices above the competitive level without losing money as a result. We know that Intel has high market share, but that by itself does not imply monopoly power. Presumably the EC will argue that there is a significant barrier to entry which keeps new firms out of the microprocessor market, and that this barrier to entry plus Intel’s high market share adds up to monopoly power. This is at least plausible, and there isn’t space here to dissect that argument in detail, so let’s accept it for the sake of our analysis.

Now: having monopoly power, did Intel abuse that power by acting anticompetitively?

The EC accused Intel of two anticompetitive strategies. First, the EC says that Intel gave PC makers discounts if they agreed to ship Intel chips in 100%, or in some cases 80%, of their systems. Is this anticompetitive? It’s hard to say. Volume discounts are common in many industries, but this is not a typical volume discount. The price goes down when the customer buys more Intel chips — that’s a typical volume discount — but the price of Intel chips also goes up when the customer buys more competing chips — which is unusual and might have anticompetitive effects. Whether Intel has a competitive justification for this remains to be seen.
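A toy model may make that structure clearer. The thresholds and prices below are invented, not Intel’s actual terms; the point is only that under this kind of conditional rebate, buying more of a competitor’s chips can raise the effective price of every chip bought from the dominant firm:

```python
# Invented thresholds and prices, only to show the shape of the alleged
# rebate; these are not Intel's actual terms.

def effective_intel_price(intel_units, rival_units,
                          list_price=100.0, rebate=10.0, share_floor=0.8):
    """The rebate applies only while Intel's share of the buyer's total
    purchases stays at or above the floor (80% here), so taking even a few
    more rival chips can forfeit the rebate on every Intel chip."""
    total = intel_units + rival_units
    intel_share = intel_units / total if total else 1.0
    return list_price - rebate if intel_share >= share_floor else list_price
```

In this sketch, a buyer at exactly the 80% threshold pays the discounted price, while shifting one more percentage point of purchases to the rival raises the price of all the Intel chips it still buys.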

Second, and more troubling, the EC says that “Intel awarded computer manufacturers payments – unrelated to any particular purchases from Intel – on condition that these computer manufacturers postponed or cancelled the launch of specific AMD-based products and/or put restrictions on the distribution of specific AMD-based products.” This one seems hard for Intel to justify. A firm with monopoly power, spending money to block a competitor’s distribution channels, is a classic anticompetitive strategy.

None of this establishes conclusively that Intel broke the law, or that the EC’s fine is justified. We made a lot of assumptions along the way, and we would have to reconsider each of them carefully, before we could conclude that the EC’s argument is correct. We would also need to give Intel a chance to offer pro-competitive justifications for their behavior. But despite all of these caveats, I think we can conclude that although it is far from proven at this point, the EC’s case should be taken seriously.


Cheap CAPTCHA Solving Changes the Security Game

ZDNet’s “Zero Day” blog has an interesting post on the gray-market economy in solving CAPTCHAs.

CAPTCHAs are those online tests that ask you to type in a sequence of characters from a hard-to-read image. By doing this, you prove that you’re a real person and not an automated bot – the assumption being that bots cannot decipher the CAPTCHA images reliably. The goal of CAPTCHAs is to raise the price of access to a resource, by requiring a small quantum of human attention, in the hope that legitimate human users will be willing to expend a little attention but spammers, password guessers, and other unwanted users will not.

It’s no surprise, then, that a gray market in CAPTCHA-solving has developed, and that that market uses technology to deliver CAPTCHAs efficiently to low-wage workers who solve many CAPTCHAs per hour. It’s no surprise, either, that there is vigorous competition between CAPTCHA-solving firms in India and elsewhere. The going rate, for high-volume buyers, seems to be about $0.002 per CAPTCHA solved.

I would happily pay that rate to have somebody else solve the CAPTCHAs I encounter. I see two or three CAPTCHAs a week, so this would cost me about twenty-five cents a year. I assume most of you, and most people in the developed world, would happily pay that much to never see CAPTCHAs. There’s an obvious business opportunity here, to provide a browser plugin that recognizes CAPTCHAs and outsources them to low-wage solvers – if some entrepreneur can overcome transaction costs and any legal issues.

Of course, the fact that CAPTCHAs can be solved for a small fee, and even that most users are willing to pay that fee, does not make CAPTCHAs useless. They still do raise the cost of spamming and other undesired behavior. The key question is whether imposing a $0.002 fee on certain kinds of accesses deters enough bad behavior. That’s an empirical question that is answerable in principle. We might not have the data to answer it in practice, at least not yet.
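The arithmetic on both sides of that question is easy to sketch. The $0.002 rate comes from the discussion above; the spam-campaign figures are invented for illustration:

```python
# Back-of-envelope arithmetic. The solving rate is the going bulk rate
# quoted above; the spam-campaign figures are invented for illustration.

SOLVE_COST = 0.002  # dollars per CAPTCHA solved

# A legitimate user's annual cost of outsourcing every CAPTCHA:
captchas_per_week = 2.5
user_annual_cost = captchas_per_week * 52 * SOLVE_COST  # roughly a quarter

# A spammer's side of the ledger: the fee deters spam only if it pushes
# the cost per account above the expected revenue per account.
accounts = 100_000
campaign_cost = accounts * SOLVE_COST  # $200 to register 100,000 accounts
```

Whether $200 per hundred thousand accounts is a meaningful deterrent depends on the spammer’s expected revenue, which is exactly the empirical question raised above.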

Another interesting question is whether it’s good public policy to try to stop CAPTCHA-solving services. It’s not clear whether governments can actually hinder CAPTCHA-solving services enough to raise the price (or risk) of using them. But even assuming that governments can raise the price of CAPTCHA-solving, the price increase will deter some bad behavior but will also prevent some beneficial transactions such as outsourcing by legitimate customers. Whether the bad behavior deterred outweighs the good behavior deterred is another empirical question we probably can’t answer yet.

On the first question – the impact of cheap CAPTCHA-solving – we’re starting a real-world experiment, like it or not.


iPhone Apps Show Industry the Benefits of Openness

Today’s New York Times reports on the impact of Apple’s decision to allow third-party application software on the iPhone:

In the first 10 days after Apple opened its App Store for the iPhone, consumers downloaded more than 25 million applications, ranging from games like Super Monkey Ball to tools like New York City subway maps. It was nothing short of revolutionary, not only because the number was so high but also because iPhone users could do it at all.

Consumers have long been frustrated with how much control carriers — AT&T, Verizon Wireless, Sprint and the like — have exerted over what they could download to their mobile phones. But in the last nine months, carriers, software developers and cellphone makers have embraced a new attitude of openness toward consumers.

The App Store makes a big difference to me as a new iPhone user – the device would be much less useful without third-party applications. The value of third-party applications and the platforms that enable them is a commonplace outside the mobile phone world. It’s good to see it finally seeping into what Walt Mossberg famously calls “the Soviet Ministries”.

But before we declare victory in the fight for open mobile devices, let’s remember how far the iPhone still has to go. Although a broad range of applications is available in the App Store, the Store is still under Apple’s control and no app can appear there without Apple’s blessing. Apple has been fairly permissive so far, but that could change, and in any case there will inevitably be conflicts between what users and developers want and what Apple wants.

One of Apple’s reasons for opening the App Store must have been the popularity of unauthorized (by Apple) iPhone apps, and the phenomenon of iPhone jailbreaking to enable those apps. Apple’s previous attempt to limit iPhone apps just didn’t work. Faced with the possibility that jailbreaking would become the norm, Apple had little choice but to offer an authorized distribution path for third-party apps.

It’s interesting to note that this consumer push for openness came on the iPhone, which was already the most open of the market-leading mobile phones because it had an up-to-date Web browser. You might have expected less open phones to be jailbroken first, as their users had the most to gain from new applications.

Why was the iPhone the focus of openness efforts? For several reasons, I think. First, iPhone users were already more attuned to the advantages of good application software on mobile phones – that’s one of the reasons they bought iPhones in the first place. Second, Apple’s reputation for focusing on improving customer experience led people to expect more and better applications as the product matured. Third, the iPhone came with an all-you-can-eat Internet access plan, so users didn’t have to worry that new apps would run up their bandwidth bill. And finally, the fact that the iPhone was nearer to being open, having a more sophisticated operating system and browser, made it easier to jailbreak.

This last is an important point, and it argues against claims by people like Jonathan Zittrain that almost-open “appliances” will take the place of today’s open computers. Generally, the closer a system is to being open, the more practical autonomy end users will have to control it, and the more easily unauthorized third-party apps can be built for it. An almost-open system must necessarily be built by starting with an open technical infrastructure and then trying to lock it down; but given the limits of real-world lockdown technologies, this means that customers will be able to jailbreak the system.

In short, nature abhors a functionality vacuum. Design your system to remove functionality, and users will find a way to restore that functionality. Like Apple, appliance vendors are better off leading this parade than trying to stop it.


Government Data and the Invisible Hand

David Robinson, Harlan Yu, Bill Zeller, and I have a new paper about how to use infotech to make government more transparent. We make specific suggestions, some of them counter-intuitive, about how to make this happen. The final version of our paper will appear in the Fall issue of the Yale Journal of Law and Technology. The best way to summarize it is to quote the introduction:

If the next Presidential administration really wants to embrace the potential of Internet-enabled government transparency, it should follow a counter-intuitive but ultimately compelling strategy: reduce the federal role in presenting important government information to citizens. Today, government bodies consider their own websites to be a higher priority than technical infrastructures that open up their data for others to use. We argue that this understanding is a mistake. It would be preferable for government to understand providing reusable data, rather than providing websites, as the core of its online publishing responsibility.

In the current Presidential cycle, all three candidates have indicated that they think the federal government could make better use of the Internet. Barack Obama’s platform explicitly endorses “making government data available online in universally accessible formats.” Hillary Clinton, meanwhile, remarked that she wants to see much more government information online. John McCain, although expressing excitement about the Internet, has allowed that he would like to delegate the issue, possibly to a vice-president.

But the situation to which these candidates are responding – the wide gap between the exciting uses of Internet technology by private parties, on the one hand, and the government’s lagging technical infrastructure on the other – is not new. The federal government has shown itself consistently unable to keep pace with the fast-evolving power of the Internet.

In order for public data to benefit from the same innovation and dynamism that characterize private parties’ use of the Internet, the federal government must reimagine its role as an information provider. Rather than struggling, as it currently does, to design sites that meet each end-user need, it should focus on creating a simple, reliable and publicly accessible infrastructure that “exposes” the underlying data. Private actors, either nonprofit or commercial, are better suited to deliver government information to citizens and can constantly create and reshape the tools individuals use to find and leverage public data. The best way to ensure that the government allows private parties to compete on equal terms in the provision of government data is to require that federal websites themselves use the same open systems for accessing the underlying data as they make available to the public at large.

Our approach follows the engineering principle of separating data from interaction, which is commonly used in constructing websites. Government must provide data, but we argue that websites that provide interactive access for the public can best be built by private parties. This approach is especially important given recent advances in interaction, which go far beyond merely offering data for viewing, to offer services such as advanced search, automated content analysis, cross-indexing with other data sources, and data visualization tools. These tools are promising but it is far from obvious how best to combine them to maximize the public value of government data. Given this uncertainty, the best policy is not to hope government will choose the one best way, but to rely on private parties with their vibrant marketplace of engineering ideas to discover what works.
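As a minimal sketch of the separation we have in mind, imagine a single machine-readable feed that both the government’s own site and any third-party site consume. The records and field names below are hypothetical, chosen only to illustrate the principle:

```python
import json

# Hypothetical records standing in for a government data set; the field
# names are invented for illustration.
RECORDS = [
    {"bill": "S.1", "title": "Example Act", "status": "introduced"},
    {"bill": "H.R.2", "title": "Sample Act", "status": "passed"},
]

def publish_data():
    """The government's core responsibility under this proposal: expose
    the raw records in an open, machine-readable format."""
    return json.dumps(RECORDS)

def render_site(feed):
    """Any interactive site, government or private, builds on the same
    public feed, so private parties compete on equal terms."""
    return "\n".join(
        f"{r['bill']}: {r['title']} ({r['status']})" for r in json.loads(feed)
    )
```

Because `render_site` takes only the public feed as input, a private party could build a competing (and perhaps better) interface from exactly the same data the government site uses.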

To read more, see our preprint on SSRN.


The Microsoft Case: The Second Browser War

Today I’ll wrap up my series of posts on the Microsoft Case by examining the Second Browser War that is now heating up.

The First Browser War, of course, started in the mid-1990s with the rise of Netscape and its Navigator browser. Microsoft was slow to spot the importance of the Web and raced to catch up. With version 3 of its Internet Explorer browser, released in 1996, Microsoft reached technical parity with Netscape. This was not enough to capture market share – most users stuck with the familiar Navigator – and Microsoft responded by adopting the tactics that provoked the antitrust case. With the help of these tactics, Microsoft won the first browser war, capturing the lion’s share of the browser market as Navigator was sold to AOL and then faded into obscurity.

On its way over the cliff, Netscape spun off an open source version of its browser, dubbing it Mozilla, after the original code name for Netscape’s browser. Over time, the Mozilla project released other software and renamed its browser Mozilla Firefox. Microsoft, basking in its browser-war victory and high market share, moved its attention elsewhere as Firefox improved steadily. Now Firefox market share is around 15% and growing, and many commentators see Firefox as technically superior to current versions of Internet Explorer. Lately, Microsoft has been paying renewed attention to Internet Explorer and the browser market. This may be the start of a Second Browser War.

It’s interesting to contrast the Second Browser War with the First. I see four main differences.

First, Firefox is an open-source project, whereas Navigator was not. The impact of open source here is not in its zero price – in the First Browser War, both browsers had zero price – but in its organization. Firefox is developed and maintained by a loosely organized coalition of programmers, many of whom work for for-profit companies. There is also a central Mozilla organization, which has its own revenue stream (coming mostly from Google in exchange for Firefox driving search traffic to Google), but the central organization plays a much smaller role in browser development than Netscape did. Mozilla, not needing to pay all of its developers from browser revenue, has a much lower “burn rate” than Netscape did and is therefore much more resistant to attacks on its revenue stream. Indeed, the Firefox technology will survive, and maybe even prosper, even if the central organization is destroyed. In short, an open source competitor is much harder to kill.

The second difference is that this time Microsoft starts with most of the market share, whereas before it had very little. Market share tends to be stable – customers stick with the familiar, unless they have a good reason to switch – so the initial leader has a significant advantage. Microsoft might be able to win the Second Browser War, at least in a market-share sense, just by maintaining technical parity.

The third difference is that technology has advanced a lot in the intervening decade. One implication is that web-based applications are more widespread and practical than before. (But note that participants in the First Browser War probably overestimated the practicality of web-based apps.) This has to be a big issue for Microsoft – the rise of web-based apps reduces its Windows monopoly power – so if anything Microsoft has a stronger incentive to fight hard in the new browser war.

The final difference is that the Second Browser War will be fought in the shadow of the antitrust case. Microsoft will not use all the tactics it used last time but will probably focus more on technical innovation to produce a browser that is at least good enough that customers won’t switch to Firefox. If Firefox responds by innovating more itself, the result will be an innovation race that will benefit consumers.

The First Browser War brought a flood of innovation, along with some unsavory tactics. If the Second Browser War brings us the same kind of innovation, in a fair fight, we’ll all be better off, and the browsers of 2018 will be better than we expected.