
European Antitrust Fines Against Intel: Possibly Justified

Last week the European Commission competition authorities charged Intel with anticompetitive behavior in the market for microprocessor chips, and levied a €1.06 billion ($1.45 billion) fine on the company. Some commentators attacked the ruling as ridiculous on its face. I disagree. Let me explain why the European action, though not conclusively justified at this point, is at least plausible.

The starting point of any competition analysis is to recall the purpose of competition law: not to protect rival firms (such as AMD in this case), but to protect competition for the benefit of consumers. The key is to understand what is fair competition and what is not. If a firm dominates a market, and even drives other firms out, but does so by producing better products at better prices, it deserves applause. If a dominant firm takes steps that are aimed more at undermining competition than at serving customers, then it may be crossing the line into anticompetitive behavior.

To do even a superficial analysis in a single blog post, we’re going to have to make some assumptions. First, for the sake of this post let’s accept as true the EC’s claims about Intel’s specific actions. Second, let’s set aside the details of European law and instead ask whether Intel’s actions were fair and justified. Third, let’s assume that there is a single market for processor chips, in the sense that any processor chip can be used in any system. A serious analysis would have to examine each of these assumptions carefully, but they will help us get started.

With all that in mind, does the EC have a plausible case against Intel?

First we have to ask whether Intel has monopoly power. Economists define monopoly power as the ability to raise prices above the competitive level without losing so many sales that the price increase becomes unprofitable. We know that Intel has high market share, but that by itself does not imply monopoly power. Presumably the EC will argue that there is a significant barrier to entry which keeps new firms out of the microprocessor market, and that this barrier to entry plus Intel’s high market share adds up to monopoly power. This is at least plausible, and there isn’t space here to dissect that argument in detail, so let’s accept it for the sake of our analysis.

Now: having monopoly power, did Intel abuse that power by acting anticompetitively?

The EC accused Intel of two anticompetitive strategies. First, the EC says that Intel gave PC makers discounts if they agreed to ship Intel chips in all of their systems, or in at least 80% of them. Is this anticompetitive? It’s hard to say. Volume discounts are common in many industries, but this is not a typical volume discount. The price goes down when the customer buys more Intel chips, which is what a typical volume discount does; but the effective price of Intel chips also goes up when the customer buys more competing chips, which is unusual and might have anticompetitive effects. Whether Intel has a competitive justification for this remains to be seen.
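To see why this differs from an ordinary volume discount, here is a toy sketch of the structure of the alleged scheme. Every number and the rebate rule itself are hypothetical, invented purely for illustration; they are not Intel’s actual prices or terms.

```python
# Toy model of a loyalty rebate (all numbers are hypothetical).
# An ordinary volume discount depends only on how many Intel chips a customer buys.
# The alleged scheme also depends on how many competing chips the customer buys:
# the rebate vanishes if Intel's share of the customer's purchases falls below a threshold.

LIST_PRICE = 100.0      # hypothetical list price per Intel chip
REBATE = 20.0           # hypothetical per-chip rebate for "loyal" buyers
SHARE_THRESHOLD = 0.80  # rebate paid only if at least 80% of chips are Intel's

def effective_intel_price(intel_chips: int, rival_chips: int) -> float:
    """Average price paid per Intel chip under the hypothetical rebate rule."""
    total = intel_chips + rival_chips
    intel_share = intel_chips / total if total else 0.0
    rebate = REBATE if intel_share >= SHARE_THRESHOLD else 0.0
    return LIST_PRICE - rebate

# A PC maker buying 1,000 chips in total:
print(effective_intel_price(1000, 0))    # 80.0  -- all Intel, rebate applies
print(effective_intel_price(850, 150))   # 80.0  -- 85% Intel, still above the threshold
print(effective_intel_price(750, 250))   # 100.0 -- buying more rival chips raises the
                                         #          price of every Intel chip purchased
```

The point of the sketch is only that, under such a rule, the cost of buying a rival’s chip includes the forfeited rebate on every Intel chip, which is what makes the arrangement different from a simple quantity discount.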

Second, and more troubling, the EC says that “Intel awarded computer manufacturers payments – unrelated to any particular purchases from Intel – on condition that these computer manufacturers postponed or cancelled the launch of specific AMD-based products and/or put restrictions on the distribution of specific AMD-based products.” This one seems hard for Intel to justify. For a firm with monopoly power, spending money to block a competitor’s distribution channels is a classic anticompetitive strategy.

None of this establishes conclusively that Intel broke the law, or that the EC’s fine is justified. We made a lot of assumptions along the way, and we would have to reconsider each of them carefully before we could conclude that the EC’s argument is correct. We would also need to give Intel a chance to offer pro-competitive justifications for its behavior. But despite all of these caveats, I think we can conclude that although it is far from proven at this point, the EC’s case should be taken seriously.

Cheap CAPTCHA Solving Changes the Security Game

ZDNet’s “Zero Day” blog has an interesting post on the gray-market economy in solving CAPTCHAs.

CAPTCHAs are those online tests that ask you to type in a sequence of characters from a hard-to-read image. By doing this, you prove that you’re a real person and not an automated bot – the assumption being that bots cannot decipher the CAPTCHA images reliably. The goal of CAPTCHAs is to raise the price of access to a resource, by requiring a small quantum of human attention, in the hope that legitimate human users will be willing to expend a little attention but spammers, password guessers, and other unwanted users will not.

It’s no surprise, then, that a gray market in CAPTCHA-solving has developed, and that this market uses technology to deliver CAPTCHAs efficiently to low-wage workers who solve many CAPTCHAs per hour. It’s no surprise, either, that there is vigorous competition between CAPTCHA-solving firms in India and elsewhere. The going rate, for high-volume buyers, seems to be about $0.002 per CAPTCHA solved.

I would happily pay that rate to have somebody else solve the CAPTCHAs I encounter. I see two or three CAPTCHAs a week, so this would cost me about twenty-five cents a year. I assume most of you, and most people in the developed world, would happily pay that much to never see CAPTCHAs. There’s an obvious business opportunity here, to provide a browser plugin that recognizes CAPTCHAs and outsources them to low-wage solvers – if some entrepreneur can overcome transaction costs and any legal issues.
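The back-of-the-envelope arithmetic is easy to check; the figures below simply restate the estimate in the paragraph above.

```python
# Rough annual cost of outsourcing my own CAPTCHAs (estimates from the text above).
captchas_per_week = 2.5      # "two or three CAPTCHAs a week"
price_per_captcha = 0.002    # going rate for high-volume buyers, in dollars

annual_cost = captchas_per_week * 52 * price_per_captcha
print(f"${annual_cost:.2f} per year")   # about $0.26, i.e. roughly a quarter
```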

Of course, the fact that CAPTCHAs can be solved for a small fee, and even that most users are willing to pay that fee, does not make CAPTCHAs useless. They still do raise the cost of spamming and other undesired behavior. The key question is whether imposing a $0.002 fee on certain kinds of accesses deters enough bad behavior. That’s an empirical question that is answerable in principle. We might not have the data to answer it in practice, at least not yet.
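As a rough illustration of that empirical question, here is a hypothetical break-even sketch. Every figure in it is invented for illustration; the real values are precisely the unknowns we lack data for.

```python
# Hypothetical break-even sketch: does a $0.002-per-CAPTCHA fee deter a spammer?
# All figures are invented for illustration; the true values are the empirical unknowns.

fee_per_solve = 0.002        # cost paid to a CAPTCHA-solving service per solve
revenue_per_account = 0.01   # hypothetical expected revenue per spam account created
accounts_per_captcha = 1     # assume one solved CAPTCHA yields one account

profit_per_account = revenue_per_account - fee_per_solve / accounts_per_captcha
print(f"profit per account: ${profit_per_account:.4f}")
# With these made-up numbers the fee is merely a cost, not a deterrent: the spammer
# still nets $0.008 per account. The fee only deters behavior whose expected value
# is below $0.002 per access, and how much behavior falls below that line is
# exactly the empirical question.
```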

Another interesting question is whether it’s good public policy to try to stop CAPTCHA-solving services. It’s not clear whether governments can actually hinder CAPTCHA-solving services enough to raise the price (or risk) of using them. But even assuming that governments can raise the price of CAPTCHA-solving, the price increase will deter some bad behavior but will also prevent some beneficial transactions such as outsourcing by legitimate customers. Whether the bad behavior deterred outweighs the good behavior deterred is another empirical question we probably can’t answer yet.

On the first question – the impact of cheap CAPTCHA-solving – we’re starting a real-world experiment, like it or not.

iPhone Apps Show Industry the Benefits of Openness

Today’s New York Times reports on the impact of Apple’s decision to allow third-party application software on the iPhone:

In the first 10 days after Apple opened its App Store for the iPhone, consumers downloaded more than 25 million applications, ranging from games like Super Monkey Ball to tools like New York City subway maps. It was nothing short of revolutionary, not only because the number was so high but also because iPhone users could do it at all.

Consumers have long been frustrated with how much control carriers — AT&T, Verizon Wireless, Sprint and the like — have exerted over what they could download to their mobile phones. But in the last nine months, carriers, software developers and cellphone makers have embraced a new attitude of openness toward consumers.

The App Store makes a big difference to me as a new iPhone user – the device would be much less useful without third-party applications. The value of third-party applications and the platforms that enable them is a commonplace outside the mobile phone world. It’s good to see it finally seeping into what Walt Mossberg famously calls “the Soviet Ministries”.

But before we declare victory in the fight for open mobile devices, let’s remember how far the iPhone still has to go. Although a broad range of applications is available in the App Store, the Store is still under Apple’s control and no app can appear there without Apple’s blessing. Apple has been fairly permissive so far, but that could change, and in any case there will inevitably be conflicts between what users and developers want and what Apple wants.

One of Apple’s reasons for opening the App Store must have been the popularity of unauthorized (by Apple) iPhone apps, and the phenomenon of iPhone jailbreaking to enable those apps. Apple’s previous attempt to limit iPhone apps just didn’t work. Faced with the possibility that jailbreaking would become the norm, Apple had little choice but to offer an authorized distribution path for third-party apps.

It’s interesting to note that this consumer push for openness came on the iPhone, which was already the most open of the market-leading mobile phones because it had an up-to-date Web browser. You might have expected less open phones to be jailbroken first, as their users had the most to gain from new applications.

Why was the iPhone the focus of openness efforts? For several reasons, I think. First, iPhone users were already more attuned to the advantages of good application software on mobile phones – that’s one of the reasons they bought iPhones in the first place. Second, Apple’s reputation for focusing on improving customer experience led people to expect more and better applications as the product matured. Third, the iPhone came with an all-you-can-eat Internet access plan, so users didn’t have to worry that new apps would run up their bandwidth bill. And finally, the fact that the iPhone was nearer to being open, having a more sophisticated operating system and browser, made it easier to jailbreak.

This last is an important point, and it argues against claims by people like Jonathan Zittrain that almost-open “appliances” will take the place of today’s open computers. Generally, the closer a system is to being open, the more practical autonomy end users will have to control it, and the more easily unauthorized third-party apps can be built for it. An almost-open system must necessarily be built by starting with an open technical infrastructure and then trying to lock it down; but given the limits of real-world lockdown technologies, this means that customers will be able to jailbreak the system.

In short, nature abhors a functionality vacuum. Design your system to remove functionality, and users will find a way to restore that functionality. Like Apple, appliance vendors are better off leading this parade than trying to stop it.

Government Data and the Invisible Hand

David Robinson, Harlan Yu, Bill Zeller, and I have a new paper about how to use infotech to make government more transparent. We make specific suggestions, some of them counter-intuitive, about how to make this happen. The final version of our paper will appear in the Fall issue of the Yale Journal of Law and Technology. The best way to summarize it is to quote the introduction:

If the next Presidential administration really wants to embrace the potential of Internet-enabled government transparency, it should follow a counter-intuitive but ultimately compelling strategy: reduce the federal role in presenting important government information to citizens. Today, government bodies consider their own websites to be a higher priority than technical infrastructures that open up their data for others to use. We argue that this understanding is a mistake. It would be preferable for government to understand providing reusable data, rather than providing websites, as the core of its online publishing responsibility.

In the current Presidential cycle, all three candidates have indicated that they think the federal government could make better use of the Internet. Barack Obama’s platform explicitly endorses “making government data available online in universally accessible formats.” Hillary Clinton, meanwhile, remarked that she wants to see much more government information online. John McCain, although expressing excitement about the Internet, has allowed that he would like to delegate the issue, possibly to a vice-president.

But the situation to which these candidates are responding – the wide gap between the exciting uses of Internet technology by private parties, on the one hand, and the government’s lagging technical infrastructure on the other – is not new. The federal government has shown itself consistently unable to keep pace with the fast-evolving power of the Internet.

In order for public data to benefit from the same innovation and dynamism that characterize private parties’ use of the Internet, the federal government must reimagine its role as an information provider. Rather than struggling, as it currently does, to design sites that meet each end-user need, it should focus on creating a simple, reliable and publicly accessible infrastructure that “exposes” the underlying data. Private actors, either nonprofit or commercial, are better suited to deliver government information to citizens and can constantly create and reshape the tools individuals use to find and leverage public data. The best way to ensure that the government allows private parties to compete on equal terms in the provision of government data is to require that federal websites themselves use the same open systems for accessing the underlying data as they make available to the public at large.

Our approach follows the engineering principle of separating data from interaction, which is commonly used in constructing websites. Government must provide data, but we argue that websites that provide interactive access for the public can best be built by private parties. This approach is especially important given recent advances in interaction, which go far beyond merely offering data for viewing, to offer services such as advanced search, automated content analysis, cross-indexing with other data sources, and data visualization tools. These tools are promising but it is far from obvious how best to combine them to maximize the public value of government data. Given this uncertainty, the best policy is not to hope government will choose the one best way, but to rely on private parties with their vibrant marketplace of engineering ideas to discover what works.
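To make the “separate data from interaction” principle concrete, here is a minimal sketch of the division of labor the paper argues for. The endpoint URL, record fields, and analysis are hypothetical, invented only to illustrate the pattern, not drawn from any real government feed.

```python
# Minimal sketch of "government exposes data, private parties build the interaction".
# The URL and record format are hypothetical; only the division of labor matters.
import json
import urllib.request

DATA_ENDPOINT = "https://data.example.gov/spending/2008.json"  # hypothetical bulk-data feed

def fetch_records(url: str) -> list[dict]:
    """The government's job ends here: publish raw, structured, reusable records."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())

def top_recipients(records: list[dict], n: int = 10) -> list[tuple[str, float]]:
    """A private party's job: search, analysis, visualization -- here, a simple ranking."""
    totals: dict[str, float] = {}
    for record in records:
        totals[record["recipient"]] = totals.get(record["recipient"], 0.0) + record["amount"]
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)[:n]

if __name__ == "__main__":
    records = fetch_records(DATA_ENDPOINT)
    for name, total in top_recipients(records):
        print(f"{name}: ${total:,.0f}")
```

In this division of labor, the government’s site could consume the very same feed it publishes, which is the “eat your own dog food” requirement the paper proposes for keeping the public data channel first-class.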

To read more, see our preprint on SSRN.

The Microsoft Case: The Second Browser War

Today I’ll wrap up my series of posts looking back at the Microsoft Case, by looking at the Second Browser War that is now heating up.

The First Browser War, of course, started in the mid-1990s with the rise of Netscape and its Navigator browser. Microsoft was slow to spot the importance of the Web and raced to catch up. With version 3 of its Internet Explorer browser, released in 1996, Microsoft reached technical parity with Netscape. This was not enough to capture market share – most users stuck with the familiar Navigator – and Microsoft responded by adopting the tactics that provoked the antitrust case. With the help of these tactics, Microsoft won the first browser war, capturing the lion’s share of the browser market as Netscape was sold to AOL and Navigator faded into obscurity.

On its way over the cliff, Netscape spun off an open source version of its browser, dubbing it Mozilla, after the original code name for Netscape’s browser. Over time, the Mozilla project released other software and renamed its browser Mozilla Firefox. Microsoft, basking in its browser-war victory and high market share, moved its attention elsewhere as Firefox improved steadily. Now Firefox market share is around 15% and growing, and many commentators see Firefox as technically superior to current versions of Internet Explorer. Lately, Microsoft has been paying renewed attention to Internet Explorer and the browser market. This may be the start of a Second Browser War.

It’s interesting to contrast the Second Browser War with the First. I see four main differences.

First, Firefox is an open-source project, whereas Navigator was not. The impact of open source here is not in its zero price – in the First Browser War, both browsers had zero price – but in its organization. Firefox is developed and maintained by a loosely organized coalition of programmers, many of whom work for for-profit companies. There is also a central Mozilla organization, which has its own revenue stream (coming mostly from Google in exchange for Firefox driving search traffic to Google), but the central organization plays a much smaller role in browser development than Netscape did. Mozilla, not needing to pay all of its developers from browser revenue, has a much lower “burn rate” than Netscape did and is therefore much more resistant to attacks on its revenue stream. Indeed, the Firefox technology will survive, and maybe even prosper, even if the central organization is destroyed. In short, an open source competitor is much harder to kill.

The second difference is that this time Microsoft starts with most of the market share, whereas before it had very little. Market share tends to be stable – customers stick with the familiar, unless they have a good reason to switch – so the initial leader has a significant advantage. Microsoft might be able to win the Second Browser War, at least in a market-share sense, just by maintaining technical parity.

The third difference is that technology has advanced a lot in the intervening decade. One implication is that web-based applications are more widespread and practical than before. (But note that participants in the First Browser War probably overestimated the practicality of web-based apps.) This has to be a big issue for Microsoft – the rise of web-based apps reduces its Windows monopoly power – so if anything Microsoft has a stronger incentive to fight hard in the new browser war.

The final difference is that the Second Browser War will be fought in the shadow of the antitrust case. Microsoft will not use all the tactics it used last time but will probably focus more on technical innovation to produce a browser that is at least good enough that customers won’t switch to Firefox. If Firefox responds by innovating more itself, the result will be an innovation race that will benefit consumers.

The First Browser War brought a flood of innovation, along with some unsavory tactics. If the Second Browser War brings us the same kind of innovation, in a fair fight, we’ll all be better off, and the browsers of 2018 will be better than we expected.