November 28, 2024

Bill Gates: Is he an IP Maximalist, or an Open Access Advocate?

Maybe both. On July 20, the Wall Street Journal reported:

Frustrated that over two decades of research have failed to produce an AIDS vaccine, Microsoft Corp. Chairman Bill Gates is tying his foundation’s latest, biggest AIDS-vaccine grants to a radical concept: Those who get the money must first agree to share the results of their work in short order.

I can’t link to the full article because the Wall Street Journal – the only major American newspaper whose online operation is in the black – puts nearly all of its online content behind a paywall. But as it happens, there isn’t a great deal more to say on this topic because the Gates foundation has declined to specify the legal details of the sharing arrangement it will mandate.

Grant recipients and outside observers were unsure whether data-sharing requirements of the grants could pose potential legal or patent conflicts with Mr. Gates’s vow to respect intellectual property. Foundation officials said this week researchers would still be free to commercialize their discoveries, but they must develop access plans for people in the developing world.

The foundation declined to make its attorney available to address these concerns.

As David Bollier noted, the lack of detail from the Gates Foundation makes it difficult to know how the tradeoffs between sharing discoveries, on the one hand, and using IP to harness their value, on the other, will actually be made. But be that as it may, there seems to be a general question here about Mr. Gates’s views on intellectual property. As Mr. Bollier put it, it may appear that hell has frozen over: that Mr. Gates, whose business model depends on the IP regime he frequently and vigorously defends, is retreating from his support of extremely strong intellectual property rights.

But hell has (as usual) probably not frozen over. The appearance of an inherent conflict between support for strong intellectual property rights and support for open access is, in general, illusory. Why? Because the decision to be carefully selective in the exercise of one’s intellectual property rights is independent of the policy questions about exactly how far those rights should extend. If anything, the expansion of IP rights actually strengthens arguments for open access, creative commons licenses, and other approaches that carefully exercise a subset of the legally available rights.

If copyright, say, only extends to a specified handful of covered uses for the protected work, then an author or publisher may be well advised to reserve full control over all of those uses with an “all rights reserved” notice. But as the space of “reservable” rights, if you will, expands, the argument for reserving all of them necessarily weakens, since it depends on the case for reserving whichever right one happens to have the least reason to reserve.

And just as it is the case that stronger IP regimes strengthen the case for various forms of creative commons, open access and the like, the reverse is also true: The availability of these infrastructures and social norms for partial, selective “copyleft” strengthens the case for expansive IP regimes by reducing the frequency with which the inefficient reservations of rights made legally possible by such regimes will actually take place.

That, I think, may be Mr. Gates’s genius. By supporting open access (of some kind), he can show the way to a world in which stronger IP rights do not imply a horrifyingly inefficient “lockdown” of creativity and innovation.

The New Yorker Covers Wikipedia

Writing in this week’s New Yorker, Stacy Schiff takes a look at the Wikipedia phenomenon. One sign that she did well: The inevitable response page at Wikipedia is almost entirely positive. Schiff’s writing is typical of what makes the New Yorker great. It has rich historical context, apt portrayals of the key characters involved in the story, and a liberal sprinkling of humor, bons mots and surprising factual nuggets. It is also, as all New Yorker pieces are, rigorously fact-checked and ably edited.

Normally, I wouldn’t use FTT as a forum to “talk shop” about a piece of journalism. But in this case, the medium really is the message – the New Yorker’s coverage of Wikipedia is itself a showcase for some of the things old-line publications still do best. As soon as I saw Schiff’s article in my New Yorker table of contents (yes, I still read it in hard copy, and yes, I splurge on getting it mailed abroad to Oxford) I knew it would be a great test case. On the one hand, Wikipedia is the preeminent example of community-driven, user-generated content. Any coverage of Wikipedia, particularly any critical coverage, is guaranteed to be the target of harsh, well-informed scrutiny by the proud community of Wikipedians. On the other, The New Yorker’s writing is, indisputably, among the best out there, and its fact checking department is widely thought to be the strongest in the business.

When reading Wikipedia, one has to react to surprising claims by entertaining the possibility that they might not be true. The less plausible a claim sounds, the more skepticism one must bring to it. In some cases, a glance at the relevant Talk page helps, since this can at least indicate whether or not the claim has been vetted by other Wikipedians. But not every surprising claim has backstory available on the relevant Talk page, and not every reader has the time or inclination to go to that level of trouble for every dubious claim she encounters in Wikipedia. The upshot is twofold: implausible or surprising claims in Wikipedia often get taken with a grain or more of salt and go unbelieved even when true, while plausible-sounding falsehoods are, precisely because of their seeming plausibility, less likely to be detected.

On the other hand, rigorous fact-checking (at least in the magazine context where I have done it and seen it) does not simply mean that someone is trying hard to get things right: It means that someone’s job depends on their being right, and it means that the particularly surprising claims in fact-checked content can be counted on to be well documented by the intense, aspiring, nervous young person at the fact-checker’s desk. At TIME, for example, every single word that goes into the magazine physically gets a check mark, on the fact-checkers’ copy, once its factual content has been verified, with the documentation of the fact’s truth filed away in an appropriate folder (the folders, in a holdover from an earlier era, are still called “carbons”). It is every bit as grueling as it sounds, and entirely worthwhile. The same system is in use across most of the Time Inc. magazine publishing empire, which includes People, Fortune, and Sports Illustrated and represents a quarter of the U.S. consumer magazine market. It’s not perfect, of course – reports of what someone said in a one-on-one interview, for example, can only ever be as good as the reporter’s notes or tape recording. But it is very, very good. In my own case, knowing what goes into the fact-checking process at places like TIME and The New Yorker gives me a much higher level of confidence in their accuracy than I have when, as I often do, I learn something new from Wikipedia.

The guarantee of truth that backs up New Yorker copy gives its content a much deeper impact. Consider these four paragraphs from Schiff’s story:

The encyclopedic impulse dates back more than two thousand years and has rarely balked at national borders. Among the first general reference works was Emperor’s Mirror, commissioned in 220 A.D. by a Chinese emperor, for use by civil servants. The quest to catalogue all human knowledge accelerated in the eighteenth century. In the seventeen-seventies, the Germans, champions of thoroughness, began assembling a two-hundred-and-forty-two-volume masterwork. A few decades earlier, Johann Heinrich Zedler, a Leipzig bookseller, had alarmed local competitors when he solicited articles for his Universal-Lexicon. His rivals, fearing that the work would put them out of business by rendering all other books obsolete, tried unsuccessfully to sabotage the project.

It took a devious Frenchman, Pierre Bayle, to conceive of an encyclopedia composed solely of errors. After the idea failed to generate much enthusiasm among potential readers, he instead compiled a “Dictionnaire Historique et Critique,” which consisted almost entirely of footnotes, many highlighting flaws of earlier scholarship. Bayle taught readers to doubt, a lesson in subversion that Diderot and d’Alembert, the authors of the Encyclopédie (1751-80), learned well. Their thirty-five-volume work preached rationalism at the expense of church and state. The more stolid Britannica was born of cross-channel rivalry and an Anglo-Saxon passion for utility.

Wales’s first encyclopedia was the World Book, which his parents acquired after dinner one evening in 1969, from a door-to-door salesman. Wales—who resembles a young Billy Crystal with the neuroses neatly tucked in—recalls the enchantment of pasting in update stickers that cross-referenced older entries to the annual supplements. Wales’s mother and grandmother ran a private school in Huntsville, Alabama, which he attended from the age of three. He graduated from Auburn University with a degree in finance and began a Ph.D. in the subject, enrolling first at the University of Alabama and later at Indiana University. In 1994, he decided to take a job trading options in Chicago rather than write his dissertation. Four years later, he moved to San Diego, where he used his savings to found an Internet portal. Its audience was mostly men; pornography—videos and blogs—accounted for about a tenth of its revenues. Meanwhile, Wales was cogitating. In his view, misinformation, propaganda, and ignorance are responsible for many of the world’s ills. “I’m very much an Enlightenment kind of guy,” Wales told me. The promise of the Internet is free knowledge for everyone, he recalls thinking. How do we make that happen?

As an undergraduate, he had read Friedrich Hayek’s 1945 free-market manifesto, “The Use of Knowledge in Society,” which argues that a person’s knowledge is by definition partial, and that truth is established only when people pool their wisdom. Wales thought of the essay again in the nineteen-nineties, when he began reading about the open-source movement, a group of programmers who believed that software should be free and distributed in such a way that anyone could modify the code. He was particularly impressed by “The Cathedral and the Bazaar,” an essay, later expanded into a book, by Eric Raymond, one of the movement’s founders. “It opened my eyes to the possibility of mass collaboration,” Wales said.

After reading this copy, and knowing how The New Yorker works, one can be confident that a devious Frenchman named Pierre Bayle once really did propose an encyclopedia composed entirely of errors. The narrative is put together well: it will keep people reading without causing confusion. Interested readers can follow up on a nugget like Wales’s exposure to the Hayek essay by reading it themselves (it’s online here).

I am not a Wikipedia denialist. It is, and will continue to be, an important and valuable resource. But the expensive, arguably old-fashioned approach of The New Yorker and other magazines still delivers a level of quality I haven’t found, and do not expect to find, in the world of community-created content.

More on Meta-Freedom

Tim Lee comments:

The fact that you can waive your free speech rights via contract doesn’t mean that it would be a good idea to enact special laws backing up those contracts with criminal penalties. I think you’re missing an important middle ground here. The choice isn’t between no tinkering rights and constitutionally mandated tinkering rights. There’s a third option: the law should neither restrict nor guarantee tinkering rights. You’re welcome to tinker, but you’re also welcome to contract away your freedom to tinker.

The DMCA sticks its thumbs on the “no tinkering” side of the scale by giving DRM creators rights beyond those available to parties in ordinary contract disputes, and by roping third parties into the DRM “contract” whether they’ve agreed to it or not. If I sign a NDA, and then I break it, the company can sue me. But they can’t have me thrown in jail. And they can’t necessarily sue the journalist to whom I divulged the NDA’d information.

But repealing the DMCA would not create an inalienable right to tinker. It would simply put the freedom to tinker on the same plane as all our other rights: you’d have the right to sign them away by contract, but in the absence of a contract you would retain them.

He is right. There is an important middle-ground position that calls for DMCA repeal without calling for contracts that restrict tinkering rights to be unenforceable. There is certainly a great deal to be said in favor of such a position. I would still say that the mixture-of-motives issue applies: when people are allowed to sign away their tinkering rights, many of them will, and that outcome will be particularly unwelcome among power users and technology policy activists.

The Freedom to Tinker with Freedom?

Doug Lay, commenting on my last post, pointed out that the Zune buyout would help make a world of DRM-enabled music services more attractive. “Where,” he asked, “does this leave the freedom to tinker?”

Anti-DMCA activism has tended to focus on worst-case, scary scenarios that can spur people to action. It’s a standard move in politics of all kinds, aptly captured in the title of a 2005 BBC documentary about Bush and Blair, The Power of Nightmares. In the context of a world of DRM gone mad, it’s obvious why we need the freedom to tinker. We need it because (in that world) opaque, tinker-proof devices protected by restrictive laws would be extremely harmful to consumers. The only way to make sure that the experience of the average media viewer or software user doesn’t go down the tubes, in this scenario, is to make sure that consumers, either legislatively or through individual choice, never let DRM get off the ground.

But consider an alternative possibility. The Darknet is a permanent backdrop for any real-world system. The major players know this – after all, it was a team at Microsoft Research that helped to launch the Darknet idea. The big players will, in the long run, be smart enough not to drive users into the arms of the Darknet. They will compete with the Darknet, and with each other, and will end up producing systems that most consumers think are fine. Yes, consumers will (still) chafe at the restrictions on DRM-protected systems ten or twenty years from now. But on the whole, they will find that these systems are attractive, and worth investing in.

Who loses in this scenario? Ed and others have argued that all consumers will suffer to some degree, because we all enjoy the benefits that come from a few intrepid power users exercising the freedom to tinker. There are educational benefits that come from tinkering and, perhaps most importantly, the freedom to tinker keeps technologies flexible and leaves room for them to interoperate in surprising ways not initially envisioned by their creators. And, as Alex has pointed out to me, the social costs of tinker-proofing are cumulative in a way that may create a collective action problem: the freedom to tinker may not matter very much to most individuals, but we’d all be better off if, collectively, we assigned a higher value to it than any of us actually feels individually.

These arguments certainly have significant merit. Together, they (and others like them) might be enough to justify creating legal protection for the freedom to tinker, or at least building a social consensus around the importance of tinkering.

But I think the people who lose the most, in this DRM-isn’t-so-bad scenario, are the power users. People who like to poke around under the hood. People who are outliers, attaching more importance to the freedom to tinker than a typical consumer attaches to it. I’m talking, in other words, about us.

We the reader-participants of www.freedom-to-tinker.com are an unusual bunch. We really like to tinker. In my own case, I know that I care more about things like being able to time and space shift my media collection than the average person does. I derive a certain strange pleasure from being able to change the way the interface on my desktop computer looks. I buy books so I can mark them up, even though it would be much cheaper and more space-efficient to use a library.

In fact, when I think about it, I have to admit that I would find a world where DRM works and the ability to tinker can be bargained away to be a bit of a downer. I know that the equilibrium point the market reaches, in such a case, will be based on the moderate importance most people attach to tinkering, rather than the high importance that I attach to it. I’ll probably still buy in to some DRM-based music scheme in the long run, just as I still go to the movies even while wishing that they would focus more on plot and less on special effects. But I’ll miss the tinkering.

If the government were to put a legal guarantee behind the freedom to tinker, it would be reducing people’s freedom to contract by telling them they can’t bargain away their tinkering rights. It would force on consumers as a whole an outcome that they would manifestly not choose for themselves in the private market. Yes, it is possible that externalities or collective action issues could justify this coercion. But even if those considerations didn’t justify the coercion, part of me would still want it to happen, because that way, I’d get to keep tinkering rights that, under a different terrain of options, I would end up choosing to relinquish.

I apparently haven’t mastered the art of ending a blog post, so just as I closed last time with a “bottom line,” this one gets a “moral of the story.” The moral of the story is that many of us, who may find ourselves arguing based on public reasons for public policies that protect the freedom to tinker, also have a private reason to favor such policies. The private reason is that we ourselves care more about tinkering than the public at large does, and we would therefore be happier in a protected-tinkering world than the public at large would be. We all owe it to ourselves, to each other, and to the public with whom we communicate to be careful and candid about our mixture of motivations.

Rethinking DRM Dystopia

Thanks to Ed for the flattering introduction – now if only I can live up to it! It’s an honor (and a little intimidating) to be guest blogging on FTT after several years as an avid reader. I’ve never blogged before, but I am looking forward to the thoughtful, user-driven exchanges and high transparency that blogs in general, and FTT in particular, seem to cultivate. Please consider yourself, dear reader, every bit as warmly invited to comment and engage with my posts as you are with Ed’s and Alex’s.

I want to use this first post to flag something that startled me, and to speculate a little about the lessons that might be drawn from it. I was surprised to read recently that Zune, Microsoft’s new music service, will probably scan users’ iTunes libraries and automatically buy for them (at Microsoft’s expense) copies of any protected music they own on the iTunes service.

Let’s suppose, for the sake of argument, that this early report is right – that Microsoft is, in fact, going to make an offer to all iTunes users to replicate their libraries of iTunes, FairPlay-protected music on the new Zune service at no added cost to the users. There are several questions of fact that leap to mind. Did Microsoft obtain the licensing rights to all of the music that is for sale on iTunes? If not, there will be some iTunes music that is not portable to the new service. Will copyright holders be getting the same amount from Microsoft, when their songs are re-purchased on behalf of migrating iTunes users, as they will get when a user makes a normal purchase of the same track in the Zune system? The copyright holders have a substantial incentive to offer Microsoft a discount on this kind of “buy out” mass purchasing. As Ed pointed out to me, it is unlikely that users would otherwise choose to re-purchase all of their music, at full price, out of their own pockets simply in order to be able to move from iTunes to Zune. By discounting their tracks to enable migration to a new service, the copyright holders would be helping create a second viable mass platform for online music sales – a move that would, in the long run, probably increase their sales.

I have spent a fair amount of time and energy worrying about dystopian scenarios in which a single vertically integrated platform, protected by legally-reinforced DRM technologies, locks users in and deprives them not only of first-order options (like the ability to copy songs to a second computer), but also of the second-order freedom to migrate away from a platform whose DRM provisions, catalog, or other features ultimately compare unfavorably to alternative platforms.

Of course, as it has turned out, the dominant DRM platform at the moment, FairPlay, actually does let people make copies of their songs on multiple computers. It is, in general, a fair bit less restrictive than the systems some of us have worried consumers might ultimately be saddled with. Indeed, the relatively permissive structure of FairPlay DRM is very likely one of the factors that has contributed to Apple’s success in a marketplace that has seen many more restrictive alternative systems fail to take hold. But the dominance of Apple’s whole shiny white realm of vertical integration in the digital music market has still made it seem that it would be hard to opt against Apple, even if the platform were to get worse or if better platforms were to emerge to challenge it.

But now it seems that it may actually be easy as pie for any iTunes user to leave the Apple platform. The cost of the Zune player, which will presumably be exclusive to the Zune music service just as the iPod is to iTunes, is a significant factor, but given that reliability issues require users to replace iPods frequently, buying a new player doesn’t actually change the cost equation for a typical user over the long run.

What are the lessons here? Personally, I feel like I underestimated the power of the market to solve the possible problems raised by DRM. It appears that the “lock in” phenomenon creates a powerful incentive for competitors to invest heavily in acquiring new users, even to the point of buying them out. Microsoft is obviously the most powerful player in the technology field, and perhaps some will argue it is unique in its ability to make this kind of an offer. But I doubt that – if the Zune launch is a success, it will set a powerful precedent that DRM buyouts can be worthwhile. And even if Microsoft were unique in its ability to offer a buyout, the result in this case is that we’ll have two solid, competing platforms, each one vertically integrated. It’s no stretch of the imagination to think Apple may respond with a similar offer to lure Zune users to iTunes.

Bottom line: Markets are often surprisingly good at sorting out this kind of thing. Technology policy watchers underestimate the power of competition at our peril. It’s easy to see Microsoft or Apple as established firms coasting on their vertically integrated dominance, but the Zune buyout is a powerful reminder that that’s not what it feels like to be in this or most any other business. These firms, even the biggest, best and most dominant, are constantly working hard to outdo one another. Consumers often do very well as a result… even in a world of DRM.