April 25, 2014


Newspapers' Problem: Trouble Targeting Ads

Richard Posner has written a characteristically thoughtful blog entry about the uncertain future of newspapers. He renders widespread journalistic concern about the unwieldy character of newspapers into the crisp economic language of “bundling”:

Bundling is efficient if the cost to the consumer of the bundled products that he doesn’t want is less than the cost saving from bundling. A particular newspaper reader might want just the sports section and the classified ads, but if for example delivery costs are high, the price of separate sports and classified-ad “newspapers” might exceed that of a newspaper that contained both those and other sections as well, even though this reader was not interested in the other sections.

With the Internet’s dramatic reductions in distribution costs, the gains from bundling are decreased, and readers are less likely to prefer bundled products. I agree with Posner that this is an important insight about the behavior of readers, but would argue that reader behavior is only a secondary problem for newspapers. The product that newspaper publishers sell—the dominant source of their revenues—is not newspapers, but audiences.
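Posner's condition can be put into toy arithmetic. The sketch below uses entirely hypothetical numbers (delivery cost, number of sections, a per-section "nuisance" cost to the reader) just to show the shape of the tradeoff: bundling wins when delivery is expensive, and loses its edge when delivery is nearly free.

```python
# Toy arithmetic for Posner's bundling condition (all numbers hypothetical):
# bundling is efficient when the reader's cost of the unwanted sections is
# less than the delivery saving from combining everything into one product.
def reader_cost(delivery_cost, wanted=2, unwanted=5, nuisance=0.10):
    """Compare buying separate single-section papers vs. one bundle."""
    unbundled = wanted * delivery_cost             # one delivery per section
    bundled = delivery_cost + unwanted * nuisance  # one delivery + clutter
    return unbundled, bundled

# High delivery costs (the print era): the bundle wins.
print(reader_cost(delivery_cost=1.00))   # unbundled $2.00 vs. bundled $1.50

# Near-zero delivery costs (the Internet): bundling's edge vanishes.
print(reader_cost(delivery_cost=0.01))   # unbundled $0.02 vs. bundled $0.51
```

The point is not the particular numbers but that the comparison flips as the delivery term shrinks toward zero.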

Toward the end of his post, Posner acknowledges that papers have trouble selling ads because it has gotten easier to reach niche audiences. That seems to me to be the real story: Even if newspapers had undiminished audiences today, they’d still be struggling because, on a per capita basis, they are a much clumsier way of reaching readers. There are some populations, such as the elderly and people who are too poor to get online, who may be reachable through newspapers and unreachable through online ads. But the fact that today’s elderly are disproportionately offline is an artifact of the Internet’s novelty (they didn’t grow up with it), not a persistent feature of the marketplace. Posner acknowledges that the preference of today’s young for online sources “will not change as they get older,” but goes on to suggest incongruously that printed papers might plausibly survive as “a retirement service, like Elderhostel.” I’m currently 26, and if I make it to 80, I very strongly doubt I’ll be subscribing to printed papers. More to the point, my increasing age over time doesn’t imply a growing preference for print; if anything, age is anticorrelated with change in one’s daily habits.

As for the claim that poor or disadvantaged communities are more easily reached offline than on, it still faces the objection that television is a much more efficient way of reaching large audiences than newsprint. There’s also the question of how much revenue can realistically be generated by building an audience of people defined by their relatively low level of purchasing power. If newsprint does survive at all, I might expect to see it as a nonprofit service directed at the least advantaged. Then again, if C. K. Prahalad is correct that businesses have neglected a “fortune at the bottom of the pyramid” that can be gathered by aggregating the small purchases of large numbers of poor people, we may yet see papers survive in the developing world. The greater relative importance of cell phones there, as opposed to larger screens, could augur favorably for the survival of newsprint. But phones in the developing world are advancing quickly, and may yet emerge as a better-than-newsprint way of reading the news.


The End of Theory? Not Likely

An essay in the new Wired, “The End of Theory: The Data Deluge Makes the Scientific Method Obsolete,” argues that we won’t need scientific theories any more, now that we have so much stored information and such great tools for analyzing it. Wired has never been the best source for accurate technology information, but this has to be a new low point.

Here’s the core of the essay’s argument:

[...] The scientific method is built around testable hypotheses. These models, for the most part, are systems visualized in the minds of scientists. The models are then tested, and experiments confirm or falsify theoretical models of how the world works. This is the way science has worked for hundreds of years.

Scientists are trained to recognize that correlation is not causation, that no conclusions should be drawn simply on the basis of correlation between X and Y (it could just be a coincidence). Instead, you must understand the underlying mechanisms that connect the two. Once you have a model, you can connect the data sets with confidence. Data without a model is just noise.

But faced with massive data, this approach to science — hypothesize, model, test — is becoming obsolete. Consider physics: Newtonian models were crude approximations of the truth (wrong at the atomic level, but still useful). A hundred years ago, statistically based quantum mechanics offered a better picture — but quantum mechanics is yet another model, and as such it, too, is flawed, no doubt a caricature of a more complex underlying reality. The reason physics has drifted into theoretical speculation about n-dimensional grand unified models over the past few decades (the “beautiful story” phase of a discipline starved of data) is that we don’t know how to run the experiments that would falsify the hypotheses — the energies are too high, the accelerators too expensive, and so on.

There are several errors here, but the biggest one is about correlation and causation. It’s true that correlation does not imply causation. But the reason is not that the correlation might have arisen by chance – that possibility can be eliminated given enough data. The problem is that we need to know what kind of causation is operating.

To take a simple example, suppose we discover a correlation between eating spinach and having strong muscles. Does this mean that eating spinach will make you stronger? Not necessarily; this will only be true if spinach causes strength. But maybe people in poor health, who tend to have weaker muscles, have an aversion to spinach. Maybe this aversion is a good thing because spinach is actually harmful to people in poor health. If that is true, then telling everybody to eat more spinach would be harmful. Maybe some common syndrome causes both weak muscles and aversion to spinach. In that case, the next step would be to study that syndrome. I could go on, but the point should be clear. Correlations are interesting, but if we want a guide to action – even if all we want to know is what question to ask next – we need models and experimentation. We need the scientific method.
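The spinach scenario is easy to simulate. The sketch below (hypothetical numbers throughout) builds a population in which a common syndrome both weakens muscles and causes an aversion to spinach; spinach itself does nothing. A correlation-only analysis still finds that spinach eaters are stronger, which is exactly why the data alone can't tell you what to do next.

```python
import random

# Hidden-confounder sketch (hypothetical parameters): a "syndrome" lowers
# muscle strength AND makes people avoid spinach. Spinach has no causal
# effect on strength, yet eating it correlates with being stronger.
random.seed(0)

people = []
for _ in range(10_000):
    syndrome = random.random() < 0.3                      # 30% have it
    eats_spinach = random.random() < (0.2 if syndrome else 0.6)
    strength = 50 + (-15 if syndrome else 0) + random.gauss(0, 5)
    people.append((eats_spinach, strength))

avg = lambda xs: sum(xs) / len(xs)
eaters = [s for e, s in people if e]
others = [s for e, s in people if not e]
print(f"avg strength, spinach eaters: {avg(eaters):.1f}")
print(f"avg strength, non-eaters:     {avg(others):.1f}")
# Eaters look stronger, but urging everyone to eat spinach would change
# nothing: the syndrome, not the spinach, drives both variables.
```

No volume of extra data fixes this; only an intervention (an experiment) distinguishes the causal stories.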

Indeed, in a world with more and more data, and better and better tools for finding correlations, we need the scientific method more than ever. This is confirmed by the essay’s physics story, in which physics theory (supposedly) went off the rails due to a lack of experimental data. Physics theory would be more useful if there were more data. And the same is true of scientific theory in general: theory and experiment advance in tandem, with advances in one creating opportunities for the other. In the coming age, theory will not wither away. Instead, it will be the greatest era ever for theory, and for experiment.


Copyright, Technology, and Access to the Law

James Grimmelmann has an interesting new essay, “Copyright, Technology, and Access to the Law,” on the challenges of ensuring that the public has effective knowledge of the laws. This might sound like an easy problem, but Grimmelmann combines history and explanation to show why it can be difficult. The law – which includes both legislators’ statutes and judges’ decisions – is large, complex, and ever-changing.

Suppose I gave you a big stack of paper containing all of the laws ever passed by Congress (and signed by the President). This wouldn’t be very useful, if what you wanted was to know whether some action you were contemplating would violate the law. How would you find the laws bearing on that action? And if you did find such a law, how would you determine whether it had been repealed or amended later, or how courts had interpreted it?

Making the law accessible in practice, and not just in theory, requires a lot of work. You need reliable summaries, topic-based indices, reverse-citation indices (to help you find later documents that might affect the meaning of earlier ones), and so on. In the old days of paper media, all of this had to be printed and distributed in large books, and updated editions had to be published regularly. How to make this happen was an interesting public policy problem.

The traditional answer has been copyright. Generally, the laws themselves (statutes and court opinions) are not copyrightable, but extra-value content such as summaries and indices can be copyrighted. The usual theory of copyright applies: give the creators of extra-value content some exclusive rights, and the profit motive will ensure that good content is created.

This has some similarity to our Princeton model for government transparency, which urges government to publish information in simple open formats, and leave it to private parties to organize and present the information to the public. Here government was creating the basic information (statutes and court opinions) and private parties were adding value. It wasn’t exactly our model, as government was not taking care to publish information in the form that best facilitated private re-use, but it was at least evidence for our assertion that, given data, private parties will step in and add value.

All of this changed with the advent of computers and the Internet, which made many of the previously difficult steps cheaper and easier. For example, it’s much easier to keep a website up to date than to deliver updates to the owners of paper books. Computers can easily construct citation indices, and a search engine provides much of the value of a printed index. Access to the laws can be cheaper and easier now.

What does this mean for public policy? First, we can expect more competition to deliver legal information to the public, thanks to the reduced barriers to entry. Second, as competition drives down prices we’ll see fewer entities that are solely in the business of providing access to laws; instead we’ll see more non-profits, along with businesses providing free access. More competition and lower prices will mean better and more effective access to the law for citizens. Third, copyright will still play a role by supporting the steps that remain costly, such as the writing of summaries.

Finally, it will matter more than ever exactly how government provides access to the raw information. If, as sometimes happens now, government provides the raw information in an awkward or difficult-to-use form, private actors must invest in converting it into a more usable form. These investments might not have mattered much in the past when the rest of the process was already expensive; but in the Internet age they can make a big difference. Given access to the right information in the right format, one person can produce a useful mashup or visualization tool with a few weeks of spare-time work. Government, by getting the details of data publication right, can enable a flood of private innovation, not to mention a better public debate.


New bill advances open data, but could be better for reuse

Senators Obama, Coburn, McCain, and Carper have introduced the Strengthening Transparency and Accountability in Federal Spending Act of 2008 (S. 3077), which would modify their 2006 transparency act. That first bill created USASpending.gov, a searchable web site of government outlays. USASpending.gov—which was based on software developed by OMB Watch and the Sunlight Foundation—allows end users to search across a variety of criteria. It has begun offering an API, an interface that lets developers query the data and display the results on their own sites. This allows a kind of reuse, but differs significantly from the approach suggested in our recent “Invisible Hand” paper. We urge that all the data be published in open formats. An API delivers search results, but that makes the search interface itself very important: having to work through an interface can prevent developers from making innovative, unforeseen uses of the data.

The new bill would expand the scope of information available via USASpending.gov, adding information about federal contracts, leases, and audit disputes, among other areas. But it would also elevate the API itself to a matter of statutory mandate. I’m all in favor of mandates that make data available and reusable, but the wording here is already a prime example of why technical standards are often better left to expert regulatory bodies than etched in statute:

“(E) programmatically search and access all data in a serialized machine readable format (such as XML) via a web-services application programming interface”

A technical expert body would (I hope) recognize that there is added value in allowing the data itself to be published so that all of it can be accessed at once. This is significantly different from the site’s current attitude; addressing the list of top contractors by dollar volume, the site’s FAQ says it “does not allow the results of these tables to be downloaded in delimited or XML format because they are not standard search results.” I would argue that whoever gets to define “standard search results” should not thereby be able to keep any of the data from being downloaded. There doesn’t necessarily need to be a downloadable table of top contractors, but it should be possible for citizens to download all the data so that they can compose such a table themselves if they so desire. The API approach, if it substitutes for making all the data available for download, takes us away from the most vibrant possible ecosystem of data reuse, since whenever government web sites design an interface (whether it’s a regular web interface for end users, or a code-level interface for web developers), they import assumptions about how the data will be used.
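The top-contractors example shows why bulk data matters. The sketch below uses a hypothetical CSV of award records (the real USASpending data and field names surely differ): given the raw records, a citizen can compose the "top contractors" table themselves, even though the site's own interface declines to export it.

```python
import csv
import io
from collections import Counter

# Hypothetical raw award records in an open, delimited format.
raw = """contractor,amount
Acme Corp,1200000
Widgets Inc,800000
Acme Corp,500000
Globex LLC,300000
"""

# Total the awards per contractor and rank them -- the very table the
# site's FAQ says it will not export.
totals = Counter()
for row in csv.DictReader(io.StringIO(raw)):
    totals[row["contractor"]] += int(row["amount"])

for name, dollars in totals.most_common(3):
    print(f"{name}: ${dollars:,}")
# No interface designer had to anticipate this query; the raw records
# were enough to reconstruct it.
```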

All that said, it’s easy to make the data available for download, and a straightforward additional requirement that could be added to the bill. And in any case we owe a debt of gratitude to Senators Coburn, Obama, McCain and Carper for their pioneering, successful efforts in this area.

==

Update, June 12: Amended the list of cosponsors to include Sens. Carper and (notably) McCain. With both major presidential candidates as cosponsors, the bill seems to reflect a political consensus. The original bill back in 2006 had 48 cosponsors and passed unanimously.


Study Shows DMCA Takedowns Based on Inconclusive Evidence

A new study by Michael Piatek, Yoshi Kohno and Arvind Krishnamurthy at the University of Washington shows that copyright owners’ representatives sometimes send DMCA takedown notices where there is no infringement – and even to printers and other devices that don’t download any music or movies. The authors of the study received more than 400 spurious takedown notices.

Technical details are summarized in the study’s FAQ:

Downloading a file from BitTorrent is a two step process. First, a new user contacts a central coordinator [a "tracker" – Ed] that maintains a list of all other users currently downloading a file and obtains a list of other downloaders. Next, the new user contacts those peers, requesting file data and sharing it with others. Actual downloading and/or sharing of copyrighted material occurs only during the second step, but our experiments show that some monitoring techniques rely only on the reports of the central coordinator to determine whether or not a user is infringing. In these cases whether or not a peer is actually participating is not verified directly. In our paper, we describe techniques that exploit this lack of direct verification, allowing us to frame arbitrary Internet users.

The existence of erroneous takedowns is not news – anybody who has seen the current system operating knows that some notices are just wrong, for example referring to unused IP addresses. Somewhat more interesting is the result that it is pretty easy to “frame” somebody so they get takedown notices despite doing nothing wrong. Given this, it would be a mistake to infer a pattern of infringement based solely on the existence of takedown notices. More evidence should be required before imposing punishment.
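The framing weakness follows directly from the two-step process quoted above: a tracker announce is just an HTTP GET, and some trackers accept a client-supplied "ip" parameter. The sketch below (hypothetical tracker URL and values, and deliberately not a working attack; it builds the request without sending it) shows the shape of such an announce. A monitor that trusts the tracker's peer list, step one only, will attribute swarm membership to whatever address was reported.

```python
from urllib.parse import urlencode

# Sketch of a BitTorrent tracker announce (hypothetical values). Some
# trackers accept a client-supplied "ip", so the reported peer need not
# be the machine making the request.
def announce_url(tracker, info_hash, framed_ip):
    params = {
        "info_hash": info_hash,        # identifies the torrent "joined"
        "peer_id": "-XX0001-000000000000",
        "ip": framed_ip,               # arbitrary address to be framed
        "port": 6881,
        "uploaded": 0, "downloaded": 0, "left": 0,
    }
    return tracker + "?" + urlencode(params)

url = announce_url("http://tracker.example/announce",
                   "aabbccddeeff00112233", "192.0.2.1")
print(url)
# The framed address never requests or serves any file data (step two),
# yet it now appears on the coordinator's list of "downloaders" -- which
# is all that coordinator-only monitoring ever sees.
```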

Now it’s not entirely crazy to send some kind of soft “warning” to a user based on the kind of evidence described in the Washington paper. Most of the people who received such warnings would probably be infringers, and if it’s nothing more than a warning (“Hey, it looks like you might be infringing. Don’t infringe.”) it could be effective, especially if the recipients know that with a bit more work the copyright owner could gather stronger evidence. Such a system could make sense, as long as everybody understood that warnings were not evidence of infringement.

So are copyright owners overstepping the law when they send takedown notices based on inconclusive evidence? Only a lawyer can say for sure. I’ve read the statute and it’s not clear to me. Readers who have an informed opinion on this question are encouraged to speak up in the comments.

Whether or not copyright owners can send warnings based on inconclusive evidence, the notification letters they actually send imply that there is strong evidence of infringement. Here’s an excerpt from a letter sent to the University of Washington about one of the (non-infringing) study computers:

XXX, Inc. swears under penalty of perjury that YYY Corporation has authorized XXX to act as its non-exclusive agent for copyright infringement notification. XXX’s search of the protocol listed below has detected infringements of YYY’s copyright interests on your IP addresses as detailed in the attached report.

XXX has reasonable good faith belief that use of the material in the manner complained of in the attached report is not authorized by YYY, its agents, or the law. The information provided herein is accurate to the best of our knowledge. Therefore, this letter is an official notification to effect removal of the detected infringement listed in the attached report. The attached documentation specifies the exact location of the infringement.

The statement that the search “has detected infringements … on your IP addresses” is not accurate, and the later reference to “the detected infringement” also misleads. The letter contains details of the purported infringement, which once again give the false impression that the letter’s sender has verified that infringement was actually occurring:

Evidentiary Information:
Notice ID: xx-xxxxxxxx
Recent Infringement Timestamp: 5 May 2008 20:54:30 GMT
Infringed Work: Iron Man
Infringing FileName: Iron Man TS Kvcd(A Karmadrome Release)KVCD by DangerDee
Infringing FileSize: 834197878
Protocol: BitTorrent
Infringing URL: http://tmts.org.uk/xbtit/announce.php
Infringers IP Address: xx.xx.xxx.xxx
Infringer’s DNS Name: d-xx-xx-xxx-xxx.dhcp4.washington.edu
Infringer’s User Name:
Initial Infringement Timestamp: 4 May 2008 20:22:51 GMT

The obvious question at this point is why the copyright owners don’t do the extra work to verify that the target of the letter is actually transferring copyrighted content. There are several possibilities. Perhaps BitTorrent clients can recognize and shun the detector computers. Perhaps they don’t want to participate in an act of infringement by sending or receiving copyrighted material (which would be necessary to know that something on the targeted computer is willing to transfer it). Perhaps it simply serves their interests better to send lots of weak accusations, rather than fewer stronger ones. Whatever the reason, until copyright owners change their practices, DMCA notices should not be considered strong evidence of infringement.


NJ Election Day: Voting Machine Status

Today is primary election day in New Jersey, for all races except U.S. President. (The presidential primary was Feb. 5.) Here’s a roundup of the voting-machine-related issues.

First, Union County found that Sequoia voting machines had difficulty reporting results for a candidate named Carlos Cedeño, reportedly because it couldn’t handle the n-with-tilde character in his last name. According to the Star-Ledger, Sequoia says that election results will be correct but there will be some kind of omission on the result tape printed by the voting machine.

Second, the voting machines in my polling place are fitted with a clear-plastic shield over the operator panel, which only allows certain buttons on the panel to be pressed. Recall that some Sequoia machines reported discrepancies in the presidential primary on Feb. 5, and Sequoia said that these happened when poll workers accidentally pressed buttons on the operator panel that were supposed to be unused. This could only have been caused by a design problem in the machines, which probably was in the software. To my knowledge, Sequoia hasn’t fixed the design problem (nor have they offered an explanation that is consistent with all of the evidence – but that’s another story), so there was likely an ongoing risk of trouble in today’s election. The plastic shield looks like a kludgy but probably workable temporary fix.

Third, voting machines were left unguarded all over Princeton, as usual. On Sunday and Monday evenings, I visited five polling places in Princeton and found unguarded voting machines in all of them – 18 machines in all. The machines were sitting in school cafeteria/gyms, entry hallways, and even in a loading dock area. In no case were there any locks or barriers stopping people from entering and walking right up to the machines. In no case did I see any other people. (This was in the evening, roughly between 8:00 and 9:00 PM). There were even handy signs posted on the street pointing the way to the polling place, showing which door to enter, and so on.

Here are some photos of unguarded voting machines, taken on Sunday and Monday:


Government Data and the Invisible Hand

David Robinson, Harlan Yu, Bill Zeller, and I have a new paper about how to use infotech to make government more transparent. We make specific suggestions, some of them counter-intuitive, about how to make this happen. The final version of our paper will appear in the Fall issue of the Yale Journal of Law and Technology. The best way to summarize it is to quote the introduction:

If the next Presidential administration really wants to embrace the potential of Internet-enabled government transparency, it should follow a counter-intuitive but ultimately compelling strategy: reduce the federal role in presenting important government information to citizens. Today, government bodies consider their own websites to be a higher priority than technical infrastructures that open up their data for others to use. We argue that this understanding is a mistake. It would be preferable for government to understand providing reusable data, rather than providing websites, as the core of its online publishing responsibility.

In the current Presidential cycle, all three candidates have indicated that they think the federal government could make better use of the Internet. Barack Obama’s platform explicitly endorses “making government data available online in universally accessible formats.” Hillary Clinton, meanwhile, remarked that she wants to see much more government information online. John McCain, although expressing excitement about the Internet, has allowed that he would like to delegate the issue, possibly to a vice-president.

But the situation to which these candidates are responding – the wide gap between the exciting uses of Internet technology by private parties, on the one hand, and the government’s lagging technical infrastructure on the other – is not new. The federal government has shown itself consistently unable to keep pace with the fast-evolving power of the Internet.

In order for public data to benefit from the same innovation and dynamism that characterize private parties’ use of the Internet, the federal government must reimagine its role as an information provider. Rather than struggling, as it currently does, to design sites that meet each end-user need, it should focus on creating a simple, reliable and publicly accessible infrastructure that “exposes” the underlying data. Private actors, either nonprofit or commercial, are better suited to deliver government information to citizens and can constantly create and reshape the tools individuals use to find and leverage public data. The best way to ensure that the government allows private parties to compete on equal terms in the provision of government data is to require that federal websites themselves use the same open systems for accessing the underlying data as they make available to the public at large.

Our approach follows the engineering principle of separating data from interaction, which is commonly used in constructing websites. Government must provide data, but we argue that websites that provide interactive access for the public can best be built by private parties. This approach is especially important given recent advances in interaction, which go far beyond merely offering data for viewing, to offer services such as advanced search, automated content analysis, cross-indexing with other data sources, and data visualization tools. These tools are promising but it is far from obvious how best to combine them to maximize the public value of government data. Given this uncertainty, the best policy is not to hope government will choose the one best way, but to rely on private parties with their vibrant marketplace of engineering ideas to discover what works.
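The "separate data from interaction" principle is easy to illustrate. In the sketch below (hypothetical records and field names), government publishes one raw, open-format feed; two different private presentations are then built from the same bytes, neither of which the publisher had to anticipate.

```python
import json

# Government's only job in this model: publish the raw data in an open
# format (hypothetical legislative records).
feed = json.dumps([
    {"bill": "S. 3077", "status": "introduced", "cosponsors": 4},
    {"bill": "H.R. 123", "status": "in committee", "cosponsors": 12},
])

records = json.loads(feed)

# One private site might render a plain status listing...
for r in records:
    print(f"{r['bill']}: {r['status']}")

# ...while another ranks bills by support. Interaction lives with the
# private parties; only the data layer is the government's responsibility.
ranked = sorted(records, key=lambda r: r["cosponsors"], reverse=True)
print("most cosponsors:", ranked[0]["bill"])
```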

To read more, see our preprint on SSRN.