April 23, 2014


NYT on Oberholzer/Strumpf Filesharing Study

Today’s New York Times has a great story by John Schwartz on last week’s filesharing study and the reaction to it. There’s a nice summary of the study itself, and some discussion and criticism of it.

The criticism seems to fall into two categories. One category is the appropriate scholarly caution toward a new result that hasn’t been peer-reviewed yet. Although economists who have seen the study say its methodology looks reasonable, there may be other unknown factors yet to be discovered that will cast doubt on the study. The other category of criticism comes from people who don’t criticize the study’s methodology but just point to other types of studies that give different results.

The article notes that these other studies haven’t been peer-reviewed either, and that some of their sponsors have agendas. Anybody who has been around for a while knows to be very skeptical of certain kinds of studies done by certain kinds of consulting firms.

Comments

  1. Copyfight says:

    NYT on UNC/Harvard P2P Study

    John Schwartz at the NYTimes has an interesting article this morning on the recent UNC/Harvard Study claiming P2P has almost null effect on CD sales. In particular, I thought the critique of the RIAA’s “illegal activities” survey method was particularl…

  2. Steve @ PM-Style.com says:

    I agree with you: peer review for other plausible explanations is very important, regardless of the sponsor or agenda. Although I’m a proponent of the position which this study highlights, I find the study’s causal theory exceedingly weak. Before I read Schwartz’s article, I thought the authors of the study had set out to explain something, found their explanation lacking, and then put a different spin on the results to save their effort. Schwartz’s article reinforces that notion.

    Even if the authors did stumble upon their result, that isn’t bad science. Such research is frequently useful for helping subsequent research offer a better explanation. I just find its usefulness limited for generalization.

    So, I guess my question is: shouldn’t we be equally skeptical of the people who make expansive generalizations about the utility of the study? Lacking reasonable peer review to search for confounding factors, aren’t they just as biased as the industry proponents?