October 30, 2024

Greetings, and a Thought on Net Neutrality

Hello again, FTT readers. You may remember me as a guest blogger here at FTT, writing about anti-circumvention, the print media’s superiority (or lack thereof) to Wikipedia, and a variety of other topics.

I’m happy to report that I’ve moved to Princeton to join the university’s Center for Information Technology Policy as its new associate director. Working with Ed and others here on campus, I’ll be helping bring the Center into its own as a leading interdisciplinary venue for research and conversation about the social and political impact of information technology.

Over the next few months, I’ll be traveling the country to look at how other institutions approach this area, in order to develop a strategic plan for Princeton’s involvement in the field. As a first step toward understanding the world of tech policy, I’ve been doing a lot of reading lately.

One great source is The Creation of the Media by Princeton’s own Paul Starr. It’s carefully argued and highly readable, and I’ve found that it challenges some of my assumptions. Conversations in tech policy often seem to stem from the premise that in the interaction between technology and society, the most important causal arrow points from technology into the social sphere. “Remix culture”, perhaps the leading example at the moment, is a major cultural shift that is argued to stem from inherent properties of digital media, such as the fact that a copy of a digital work is identical to the original.

But Paul argues that politics usually dominates the effects of technology, not the other way around. For example, although cheap printing technologies helped make the early United States one of the most literate countries of its time, Paul argues that America’s real advantage was its postal system. Congress not only invested heavily in the postal service, but also gave a special discounted rate to printed material, effectively subsidizing publications of all kinds. As a result, much more printed material was mailed in America than in, say, Britain at the same time.

One fascinating observation from Paul’s book (pages 180-181 in the hardcover edition, for those following along at home) concerns the telegraph. In Britain, the telegraph was nationalized in order to ensure that private network operators didn’t take advantage of the natural monopoly that they enjoyed (“natural” since once there was one set of telegraph wires leading to a place, it became hard to justify building a second set).

In the United States, there was a vociferous debate about whether or not to nationalize the telegraph system, which was controlled by Western Union, a private company:

[W]ithin the United States, Western Union continued to dominate the telegraph industry after its triumph in 1866 but faced two constraints that limited its ability to exploit its market power. First, the postal telegraph movement created a political environment that was, to some extent, a functional substitute for government regulation. Britain’s nationalization of the telegraph was widely discussed in America. Worried that the US government might follow suit, Western Union’s leaders at various times extended service or held rates in check to keep public opposition within manageable levels. (Concern about the postal telegraph movement also led the company to provide members of Congress with free telegraph service — in effect, making the private telegraph a post office for officeholders.) Public opinion was critical in confining Western Union to its core business. In 1866 and again in 1881, the company was on the verge of trying to muscle the Associated Press aside and take over the wire service business itself when it drew back, apparently out of concern that it could lose the battle over nationalization by alienating the most influential newspapers in the country. Western Union did, however, move into the distribution of commercial news and in 1871 acquired majority control of Gold and Stock, a pioneering financial information company that developed the stock ticker.

This situation–a dynamic equilibrium in which a private party polices its own behavior in order to stave off the threat of government intervention–strikes me as closely analogous to the net neutrality debate today. Network operators, although not subject to neutrality requirements, are reluctant to exercise the options for traffic discrimination that are formally open to them, because they recognize that doing so might invite regulation.

Fact check: The New Yorker versus Wikipedia

In July—when The New Yorker ran a long and relatively positive piece about Wikipedia—I argued that the old-media method of laboriously checking each fact was superior to the wiki model, where assertions have to be judged based on their plausibility. I claimed that personal experience as a journalist gave me special insight into such matters, and concluded: “the expensive, arguably old fashioned approach of The New Yorker and other magazines still delivers a level of quality I haven’t found, and do not expect to find, in the world of community-created content.”

Apparently, I was wrong. It turns out that EssJay, one of the Wikipedia users described in The New Yorker article, is not the “tenured professor of religion at a private university” that he claimed he was, and that The New Yorker reported him to be. He’s actually a 24-year-old, sans doctorate, named Ryan Jordan.

Jimmy Wales, who is as close to being in charge of Wikipedia as anybody is, has had an intricate progression of thought on the matter, ably chronicled by Seth Finkelstein. His ultimate reaction (or at any rate, his current public stance as of this writing) is on his personal page in Wikipedia:

I only learned this morning that EssJay used his false credentials in content disputes… I understood this to be primarily the matter of a pseudonymous identity (something very mild and completely understandable given the personal dangers possible on the Internet) and not a matter of violation of people’s trust.

As Seth points out, this is an odd reaction since it seems simultaneously to forgive EssJay for lying to The New Yorker (“something very mild”) and to hold him much more strongly to account for lying to other Wikipedia users. One could argue that lying to The New Yorker—and by extension to its hundreds of thousands of subscribers—was in the aggregate much worse than lying to the Wikipedians. One could also argue that Mr. Jordan’s appeal to institutional authority, which was as successful as it was dishonest, raises profound questions about the Wikipedia model.

But I won’t make either of those arguments. Instead, I’ll return to the issue that has me putting my foot in my mouth: How can a reader decide what to trust? I predicted you could trust The New Yorker, and as it turns out, you couldn’t.

Philip Tetlock, a long-time student of the human penchant for making predictions, has found (in a book whose text I can’t link to, but which I encourage you to read) that people whose predictions are falsified typically react by making excuses. They typically claim that they are off the hook because the conditions on which their prediction rested turned out not to be as they had seemed at the time. This defense is available to me: The New Yorker fell short of its own standards, and took EssJay at his word without verifying his identity or even learning his name. He had, as all con men do, a plausible-sounding story, related in this case to a putative fear of professional retribution that in hindsight sits rather uneasily with his claim that he had tenure. If the magazine hadn’t broken its own rules, this wouldn’t have gotten into print.

But that response would be too facile, as Tetlock rightly observes of the general case. Granted, perfect fact checking makes for a trustworthy story; but how do you know when the fact checking is perfect and when it is not? You don’t. More generally, predictions are only as good as one’s ability to figure out whether or not the conditions are right to trigger the predicted outcome.

So what about this case? On the one hand, incidents like this one are rare, and they tend to lead the fact checkers to redouble their meticulousness. On the other, the fact claims in a story that are hardest to check are often, for the same reason, the likeliest to be false. Should you trust the sometimes-imperfect fact checking that actually goes on?

My answer is yes. In the wake of this episode The New Yorker looks very bad (and Wikipedia only moderately so) because people regard an error in The New Yorker as exceptional in a way the exact same error in Wikipedia is not. This expectations gap tells me that The New Yorker, warts and all, still gives people something they cannot find at Wikipedia: a greater, though conspicuously not total, degree of confidence in what they read.
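One way to make that expectations gap concrete is as a bit of Bayesian arithmetic. The framing and every number below are my own illustration, not data about either publication: how much the fact of publication should raise your confidence in a claim depends both on the claim’s prior plausibility and on how often the venue lets falsehoods through.

```python
# Bayes' rule applied to the "expectations gap": how much should the
# fact of publication move you, given the venue's error rate?
# All numbers are illustrative assumptions, not measured error rates.

def p_true_given_published(prior, p_pub_if_true, p_pub_if_false):
    """P(claim is true | the venue published it), by Bayes' rule."""
    evidence = prior * p_pub_if_true + (1 - prior) * p_pub_if_false
    return prior * p_pub_if_true / evidence

# A surprising claim that seems only 10% plausible on its face.
prior = 0.10

# A venue whose checking lets one falsehood in a hundred slip through:
print(p_true_given_published(prior, 0.95, 0.01))  # ~0.91
# A venue that passes three falsehoods in ten:
print(p_true_given_published(prior, 0.95, 0.30))  # ~0.26
```

On these made-up figures, the very same surprising claim is probably true when a tightly checked magazine prints it and probably false when a leakier venue does, which is just the expectations gap stated quantitatively.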

Is there any such thing as “enough” technological progress?

Yesterday, Ed considered the idea that there may be “a point of diminishing returns where more capacity doesn’t improve the user’s happiness.” It’s a provocative concept, and one that I want to probe a bit further.

One observation that seems germane is that such thoughts have a pedigree. Henry L. Ellsworth, the first commissioner of the US Patent Office, wrote in his 1843 report to Congress that “the advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end.”

It seems to me that the idea of diminishing marginal returns is most at home in settings where the task or process under consideration has well-defined boundaries. For example, making steel: larger steel mills, up to a point, are more efficient than smaller ones. Larger furnaces reduce capital costs per unit of output, and the costs of secondary functions like logistics, training and bookkeeping can be spread across larger amounts of steel without commensurate increases. But consolidating an industry, and replacing small production facilities with a larger one, does not necessarily involve any fundamental advancement in the state of the art. (It may, of course.)
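To put a rough number on that intuition, consider the classic “six-tenths rule” from plant economics; the rule of thumb and the numbers below are my own illustration, not Starr’s or Ed’s. If a mill’s total capital cost grows as roughly the 0.6 power of its capacity, then capital cost per unit of output falls steadily, though not without limit, as the mill grows:

```python
# Economies of scale under the textbook "six-tenths rule": total capital
# cost ~ capacity ** 0.6, so cost per unit of capacity falls with size.
# The exponent and the capacities are assumptions for illustration only.

def unit_capital_cost(capacity, exponent=0.6):
    """Capital cost per unit of capacity, normalized so a capacity-1
    plant costs exactly 1 per unit."""
    return capacity ** (exponent - 1)

for capacity in (1, 10, 100, 1000):
    print(f"capacity {capacity:>4}: {unit_capital_cost(capacity):.3f} per unit")

# capacity    1: 1.000 per unit
# capacity   10: 0.398 per unit
# capacity  100: 0.158 per unit
# capacity 1000: 0.063 per unit
```

Each tenfold increase in capacity cuts unit capital cost by roughly sixty percent, until transport, management, or demand limits intervene; that is where the “up to a point” comes from.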

Innovation—which is the real wellspring of much of human progress—tends not to follow such predictable patterns. Science textbooks like to present sanitized stories of incremental, orderly advancement, but as Thomas Kuhn famously argued, history actually abounds with disjointed progress, serendipitous accidents, and unanticipated consequences, both good and bad.

There are areas in which incremental improvement is the norm: shaving razors, compression algorithms, miles per gallon. But in each of these areas, the technology being advanced is task-specific. Nobody is going to use their car to shave or their Mach 3 to commute to the office.

But digital computers—Turing machines—are different. It’s an old saw that a digital computer can be used to store, transform, or analyze literally any information that can be digitized. When it comes to computers, advancement means faster Turing machines with larger memories, in smaller physical footprints and at lower costs (including, e.g., manufacturing expense and the electricity needed for operation).
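It is easy to make that old saw concrete: a complete Turing machine simulator fits in a dozen lines of Python. The sketch below is my own illustration (a trivial machine that flips the bits of a binary tape), not anything from Ed’s post; the point is that every hardware advance is, in effect, a faster and roomier substrate for exactly this loop.

```python
# A minimal Turing machine simulator. `rules` maps (state, symbol) to
# (symbol_to_write, head_move, next_state); the tape is a dict, so it
# is unbounded in both directions.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    cells = {i: s for i, s in enumerate(tape)}
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = rules[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# An illustrative machine: scan right, flipping 0 <-> 1; halt at the
# first blank cell.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(FLIP, "10110"))  # prints 01001
```

Faster chips, bigger memories, and cheaper manufacturing do not change what this loop can compute; they change how much of it we can afford to run, at human time scales, and on what.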

Ed’s observation yesterday that there is an ultimate limit to the bandwidth leading into the human brain is well taken. But in terms of all transmission of digital content globally, the “last hop” from computer to human is already a very small part of the total traffic. Mostly, traffic is among nodes on end-to-end computer networks, among servers in a Beowulf cluster or similar setup, or even traffic among chips on a motherboard or cores in the same chip. Technologies that advance bandwidth capabilities are useful primarily because of the ways they change what computers can do (at the human time scale). The more they advance, the more things, and the more kinds of things, computers will be capable of. It’s very unlikely we’ve thought of them all.

It is also striking how far our capability to imagine new uses for digital technology has lagged behind the advancement of the technology itself. Blogs like this one were effectively possible from the dawn of the World Wide Web (or even before), and they now seem to be a significant part of what the web can most usefully be made to do. But it took years, after the relevant technologies were available, for people to recognize and take advantage of this possibility. Likewise, much of “web 2.0” has effectively meant harnessing relatively old technologies, such as JavaScript, in new and patently unanticipated ways.

The literature that tries to imagine the far-out implications of technological advancement is at once exciting and discouraging: exciting because it suggests that much of what we can imagine probably will happen eventually, and discouraging because it shows that the future is full of major shifts, obvious in retrospect, to which we were blind up until their arrival.

I occasionally try my hand at the “big picture” prognostication game, and enjoy reading the efforts of others. But in the end I’m left feeling that the future, though bright, is mysterious. I can’t imagine a human community, even in the distant future, that has exhausted its every chance to create, innovate and improve its surroundings.

Bill Gates: Is he an IP Maximalist, or an Open Access Advocate?

Maybe both. On July 20, the Wall Street Journal reported:

Frustrated that over two decades of research have failed to produce an AIDS vaccine, Microsoft Corp. Chairman Bill Gates is tying his foundation’s latest, biggest AIDS-vaccine grants to a radical concept: Those who get the money must first agree to share the results of their work in short order.

I can’t link to the full article because the Wall Street Journal – the only major American newspaper whose online operation is in the black – puts nearly all of its online content behind a paywall. But as it happens, there isn’t a great deal more to say on this topic because the Gates foundation has declined to specify the legal details of the sharing arrangement it will mandate.

Grant recipients and outside observers were unsure whether data-sharing requirements of the grants could pose potential legal or patent conflicts with Mr. Gates’s vow to respect intellectual property. Foundation officials said this week researchers would still be free to commercialize their discoveries, but they must develop access plans for people in the developing world.

The foundation declined to make its attorney available to address these concerns.

As David Bollier noted, the lack of detail from the Gates Foundation makes it difficult to know how the tradeoffs between sharing discoveries, on the one hand, and using IP to harness their value, on the other, will actually be made. But be that as it may, there seems to be a general question here about Mr. Gates’s views on intellectual property. As Mr. Bollier put it, it may appear that hell has frozen over: that Mr. Gates, whose business model depends on the IP regime he frequently and vigorously defends, is retreating from his support of extremely strong intellectual property rights.

But hell has (as usual) probably not frozen over. The appearance of an inherent conflict between support for strong intellectual property rights and support for open access is, in general, illusory. Why? Because the decision to be carefully selective in the exercise of one’s intellectual property rights is independent of the policy questions about exactly how far those rights should extend. If anything, the expansion of IP rights actually strengthens arguments for open access, creative commons licenses, and other approaches that carefully exercise a subset of the legally available rights.

If copyright, say, only extends to a specified handful of covered uses for the protected work, then an author or publisher may be well advised to reserve full control over all of those uses with an “all rights reserved” notice. But as the space of “reservable” rights, if you will, expands, the argument for reserving all of them necessarily weakens, since it depends on the case for reserving whichever right one happens to have the least reason to reserve.

And just as stronger IP regimes strengthen the case for various forms of creative commons, open access and the like, the reverse is also true: the availability of these infrastructures and social norms for partial, selective “copyleft” strengthens the case for expansive IP regimes, by reducing the frequency with which the inefficient reservations of rights that such regimes make legally possible will actually take place.

That, I think, may be Mr. Gates’s genius. By supporting open access (of some kind), he can show the way to a world in which stronger IP rights do not imply a horrifyingly inefficient “lockdown” of creativity and innovation.

The New Yorker Covers Wikipedia

Writing in this week’s New Yorker, Stacy Schiff takes a look at the Wikipedia phenomenon. One sign that she did well: The inevitable response page at Wikipedia is almost entirely positive. Schiff’s writing is typical of what makes the New Yorker great. It has rich historical context, apt portrayals of the key characters involved in the story, and a liberal sprinkling of humor, bons mots and surprising factual nuggets. It is also, as all New Yorker pieces are, rigorously fact-checked and ably edited.

Normally, I wouldn’t use FTT as a forum to “talk shop” about a piece of journalism. But in this case, the medium really is the message – the New Yorker’s coverage of Wikipedia is itself a showcase for some of the things old-line publications still do best. As soon as I saw Schiff’s article in my New Yorker table of contents (yes, I still read it in hard copy, and yes, I splurge on getting it mailed abroad to Oxford) I knew it would be a great test case. On the one hand, Wikipedia is the preeminent example of community-driven, user-generated content. Any coverage of Wikipedia, particularly any critical coverage, is guaranteed to be the target of harsh, well-informed scrutiny by the proud community of Wikipedians. On the other, The New Yorker’s writing is, indisputably, among the best out there, and its fact checking department is widely thought to be the strongest in the business.

When reading Wikipedia, one has to react to surprising claims by entertaining the possibility that they might not be true. The less plausible a claim sounds, the more skepticism one must have when considering it. In some cases, a glance at the relevant Talk page helps, since this can at least indicate whether or not the claim has been vetted by other Wikipedians. But not every surprising claim has backstory available on the relevant talk page, and not every reader has the time or inclination to go to that level of trouble for every dubious claim she encounters in Wikipedia. The upshot is that implausible or surprising claims in Wikipedia often get taken with a grain or more of salt, and not believed – and on the other hand, plausible-sounding falsehoods are, as a result of their seeming plausibility, less likely to be detected.

On the other hand, rigorous fact-checking (at least in the magazine context where I have done it and seen it) does not simply mean that someone is trying hard to get things right: it means that someone’s job depends on their being right, and it means that the most surprising claims in fact-checked content are precisely the ones that can be counted on to be well documented by the intense, aspiring, nervous young person at the fact checker’s desk. At TIME, for example, every single word that goes into the magazine physically gets a check mark, on the fact-checkers’ copy, once its factual content has been verified, with the documentation of the fact’s truth filed away in an appropriate folder (the folders, in a holdover from an earlier era, are still called “carbons”). It is every bit as grueling as it sounds, and entirely worthwhile. The same system is in use across most of the Time, Inc. magazine publishing empire, which includes People, Fortune, and Sports Illustrated and represents a quarter of the U.S. consumer magazine market.

It’s not perfect, of course: reports of what someone said in a one-on-one interview, for example, can only ever be as good as the reporter’s notes or tape recording. But it is very, very good. In my own case, knowing what goes into the fact-checking process at places like TIME and The New Yorker gives me a much higher level of confidence in their accuracy than I have when, as I often do, I learn something new from Wikipedia.

The guarantee of truth that backs up New Yorker copy gives its content a much deeper impact. Consider these four paragraphs from Schiff’s story:

The encyclopedic impulse dates back more than two thousand years and has rarely balked at national borders. Among the first general reference works was Emperor’s Mirror, commissioned in 220 A.D. by a Chinese emperor, for use by civil servants. The quest to catalogue all human knowledge accelerated in the eighteenth century. In the seventeen-seventies, the Germans, champions of thoroughness, began assembling a two-hundred-and-forty-two-volume masterwork. A few decades earlier, Johann Heinrich Zedler, a Leipzig bookseller, had alarmed local competitors when he solicited articles for his Universal-Lexicon. His rivals, fearing that the work would put them out of business by rendering all other books obsolete, tried unsuccessfully to sabotage the project.

It took a devious Frenchman, Pierre Bayle, to conceive of an encyclopedia composed solely of errors. After the idea failed to generate much enthusiasm among potential readers, he instead compiled a “Dictionnaire Historique et Critique,” which consisted almost entirely of footnotes, many highlighting flaws of earlier scholarship. Bayle taught readers to doubt, a lesson in subversion that Diderot and d’Alembert, the authors of the Encyclopédie (1751-80), learned well. Their thirty-five-volume work preached rationalism at the expense of church and state. The more stolid Britannica was born of cross-channel rivalry and an Anglo-Saxon passion for utility.

Wales’s first encyclopedia was the World Book, which his parents acquired after dinner one evening in 1969, from a door-to-door salesman. Wales—who resembles a young Billy Crystal with the neuroses neatly tucked in—recalls the enchantment of pasting in update stickers that cross-referenced older entries to the annual supplements. Wales’s mother and grandmother ran a private school in Huntsville, Alabama, which he attended from the age of three. He graduated from Auburn University with a degree in finance and began a Ph.D. in the subject, enrolling first at the University of Alabama and later at Indiana University. In 1994, he decided to take a job trading options in Chicago rather than write his dissertation. Four years later, he moved to San Diego, where he used his savings to found an Internet portal. Its audience was mostly men; pornography—videos and blogs—accounted for about a tenth of its revenues. Meanwhile, Wales was cogitating. In his view, misinformation, propaganda, and ignorance are responsible for many of the world’s ills. “I’m very much an Enlightenment kind of guy,” Wales told me. The promise of the Internet is free knowledge for everyone, he recalls thinking. How do we make that happen?

As an undergraduate, he had read Friedrich Hayek’s 1945 free-market manifesto, “The Use of Knowledge in Society,” which argues that a person’s knowledge is by definition partial, and that truth is established only when people pool their wisdom. Wales thought of the essay again in the nineteen-nineties, when he began reading about the open-source movement, a group of programmers who believed that software should be free and distributed in such a way that anyone could modify the code. He was particularly impressed by “The Cathedral and the Bazaar,” an essay, later expanded into a book, by Eric Raymond, one of the movement’s founders. “It opened my eyes to the possibility of mass collaboration,” Wales said.

After reading this copy, and knowing how The New Yorker works, one can be confident that a devious Frenchman named Pierre Bayle once really did propose an encyclopedia composed entirely of errors. The narrative is well put together; it keeps people reading without confusing them. Interested readers can follow up on a nugget like Wales’s exposure to the Hayek essay by reading it themselves (it’s online here).

I am not a Wikipedia denialist. It is, and will continue to be, an important and valuable resource. But the expensive, arguably old fashioned approach of The New Yorker and other magazines still delivers a level of quality I haven’t found, and do not expect to find, in the world of community-created content.