
Archives for 2007

Online Symposium: Future of Scholarly Communication

Today we’re kicking off an online symposium on The Future of Scholarly Communication, run by the Center for Information Technology Policy at Princeton. An “online symposium” is a kind of short-term group blog, focusing on a specific topic. Panelists (besides me) include Ira Fuchs, Paul DiMaggio, Peter Suber, Stan Katz, and David Robinson. (See the symposium site for more information on the panelists.)

I started the symposium with an introductory post. Peter Suber has already chimed in, and we’re looking forward to contributions from the other panelists.

We’ll be running more online symposia on various topics in the future, so this might be a good time to bookmark the symposium site, or subscribe to its RSS feed.

Attack of the Context-Sensitive Blog Spam?

I love spammers, really I do. Some of you may recall my earlier post here about freezing your credit report. In the past week, I’ve deleted two comments that were clearly spam and that made it through Freedom to Tinker’s Akismet filter. Both had generic, modestly complimentary language and a link to some kind of credit card application processing site. What’s interesting about this? One of two things.

  1. Akismet is letting those spams through because their content is “related” to the post.
  2. Or, more ominously, the spammer in question is trolling the blogosphere for “relevant” threads and is then inserting “relevant” comment spam.

If it’s the former, then one can certainly imagine that Akismet and other such filters will eventually improve to the point where the problem goes away (i.e., even if it’s “relevant” to a thread here, if it’s posted widely then it must be spam). If it’s the latter, then we’re in trouble. How is an automated spam catcher going to detect “relevant” spam that’s (statistically) on-topic with the discussion where it’s posted and is never posted anywhere else?
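If the first explanation is right, the fix is at least conceptually simple. Here is a minimal Python sketch of the “posted widely, therefore spam” heuristic described above, using hypothetical names and a made-up threshold; this is not how Akismet actually works, just an illustration of why cross-site duplication is easy to catch while a comment tailored to a single thread is not.

```python
import hashlib
import re
from collections import defaultdict

# Hypothetical cross-site duplicate heuristic: if the same comment body shows up
# on many different blogs, treat it as spam no matter how "relevant" it looks.
SPAM_SITE_THRESHOLD = 5  # assumed cutoff; a real filter would tune this

sightings = defaultdict(set)  # fingerprint -> set of blogs where it appeared


def fingerprint(comment: str) -> str:
    """Normalize case and whitespace, then hash, so trivial edits don't evade matching."""
    normalized = re.sub(r"\s+", " ", comment.lower()).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()


def looks_like_spam(comment: str, blog: str) -> bool:
    """Record where this comment body has been seen and flag it once it's widespread."""
    fp = fingerprint(comment)
    sightings[fp].add(blog)
    return len(sightings[fp]) >= SPAM_SITE_THRESHOLD

# A comment blasted across many blogs eventually trips the threshold even if it is
# "on topic" for one of them; a hand-tailored comment posted only here never will,
# which is exactly the worry in the second scenario.
```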

Infinite Storage for Music

Last week I spoke on a panel called “The Paradise of Infinite Storage”, at the “Pop [Music] and Policy” conference at McGill University in Montreal. The panel’s title referred to an interesting fact: sometime in the next decade, we’ll see a $100 device that fits in your pocket and holds all of the music ever recorded by humanity.

This is a simple consequence of Moore’s Law, which, in one of its variants, holds that the amount of data storage available at a fixed size and price roughly doubles every eighteen months. Extrapolate that trend and, depending on your precise assumptions, you’ll find the magic date falls somewhere between 2011 and 2019. From then on, storage capacity might as well be infinite, at least as far as music is concerned.
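As a rough illustration of that extrapolation, here is a short back-of-the-envelope calculation in Python. The starting capacity, doubling time, and guesses at the total size of recorded music are illustrative assumptions of mine, not figures from the panel; changing them is what moves the date around within that window.

```python
import math

# Illustrative assumptions (not figures from the panel):
START_YEAR = 2007
DOUBLING_YEARS = 1.5      # storage per dollar doubles roughly every 18 months
POCKET_GB_NOW = 160       # assumed capacity of a ~$100 pocket-sized drive today


def year_it_all_fits(total_music_tb: float) -> float:
    """Year when a $100 pocket device could hold total_music_tb of music."""
    doublings = math.log2(total_music_tb * 1024 / POCKET_GB_NOW)
    return START_YEAR + doublings * DOUBLING_YEARS


# Low and high guesses at the size of "all recorded music":
for label, tb in [("low estimate, ~2 TB", 2), ("high estimate, ~40 TB", 40)]:
    print(f"{label}: roughly {year_it_all_fits(tb):.0f}")
# -> roughly 2013 and roughly 2019 under these particular assumptions
```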

This has at least two important consequences. First, it strains even further the economics of the traditional music business. The gap between the number of songs you might want to listen to, and the number you’re willing and able to pay a dollar each to buy, is growing ever wider. In a world of infinite storage you’ll be able to keep around a huge amount of music that is potentially interesting but not worth a dollar (or even a dime) to you yet. So why not pay a flat fee to buy access to everything?

Second, infinite storage will enable new ways of building filesharing technologies, which will be much harder for copyright owners to fight. For example, today’s filesharing systems typically have users search for a desired song by contacting strangers who might have the song, or who might have information about where the song can be found. Copyright owners’ technical attacks against filesharing often target this search feature, trying to disrupt it or to exploit the fact that it involves communication with strangers.

But in a world of infinite storage, no searching is needed, and filesharers need only communicate with their friends. If a user has a new song, it will be passed on immediately to his friends, who will pass it on to their friends, and so on. Songs will “flood” through the population this way, reaching all of the P2P system’s participants within a few hours – with no search, and no communication with strangers. Copyright owners will be hard pressed to fight such a system.
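As a toy illustration of that flooding behavior, here is a small Python simulation over a made-up friendship graph (the names, the graph, and the hop counts are hypothetical, and a real system would add encryption, storage management, and so on). The point is simply that a new song reaches everyone by passing between friends, with no global search and no contact with strangers.

```python
from collections import deque

# Hypothetical friends-only network: each user syncs only with people they know.
friends = {
    "alice": ["bob", "carol"],
    "bob":   ["alice", "dave"],
    "carol": ["alice", "erin"],
    "dave":  ["bob"],
    "erin":  ["carol", "frank"],
    "frank": ["erin"],
}


def flood(song: str, source: str) -> dict:
    """Breadth-first 'gossip': how many friend-to-friend hops until everyone has the song."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        user = queue.popleft()
        for friend in friends[user]:
            if friend not in hops:          # friend doesn't have the song yet
                hops[friend] = hops[user] + 1
                queue.append(friend)
    return hops


print(flood("new-single.mp3", "alice"))
# {'alice': 0, 'bob': 1, 'carol': 1, 'dave': 2, 'erin': 2, 'frank': 3}
```

With effectively unlimited local storage, every node can simply keep everything that floods past it, which is why the infinite-storage assumption matters.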

Just as today, many people will refuse to use such technologies. But pressure on today’s copyright-based business models will continue to intensify. Will we see new legal structures? New business models? Or new public attitudes? Something has to change.

Jury Finds User Liable for Downloading, Awards $9250 Per Song in Damages

The first Recording Industry v. End User lawsuit to go to trial just ended, and the industry won big. Jammie Thomas, a single mother in northern Minnesota, was found liable for illegally downloading 24 songs via Kazaa, and the jury awarded damages of $222,000, or $9250 per song. It’s always risky to extrapolate much from a single case – outsiders, schooled by TV courtroom dramas, often see cases as broad referenda on social issues, while in reality the specific circumstances of a case are often the decisive factor. But with that caution in mind, we can learn a few things from this verdict.

The industry had especially strong evidence that Thomas was the person who downloaded the songs in question. Thomas’s defense was that somebody else must have downloaded the songs. But the industry showed that the perpetrator used the same distinctive username that Thomas admitted to using on other services, and that the perpetrator downloaded songs by Thomas’s favorite performers. Based on press stories about the trial, the jury probably had an easy time concluding that Thomas downloaded the songs. (Remember that civil cases don’t require proof beyond a reasonable doubt, only that it was more likely than not that Thomas downloaded the songs illegally.)

People often argue that the industry has only weak evidence when it sends its initial settle-or-else demand letters to users. That may well be true. But in this case, as the trial loomed, the industry bolstered its case by gathering more evidence. The lesson for future cases is clear. If the industry has to go to trial with only the initial evidence, it might not win. But what end user, knowing that they did download illegally, will want to bet that more evidence against them won’t turn up?

The most striking fact about the Thomas case is that the jury awarded damages of $9250 per song to faraway corporations. That’s more than nine thousand times what the songs would have cost at retail, and the total of $222,000 is an astronomical amount to a person in Jammie Thomas’s circumstances. There is no way that Jammie Thomas caused $222,000 of harm to the record industry, so the jury’s purpose in awarding the damages has to be seen as punishment rather than compensation.
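The arithmetic behind those numbers is easy to check (assuming the then-standard 99-cent download price, which is where the $23.76 figure later in the post comes from):

```python
songs = 24
damages_per_song = 9250
retail_per_song = 0.99                             # assumed iTunes price at the time

print(songs * damages_per_song)                    # 222000 total damages
print(round(songs * retail_per_song, 2))           # 23.76 at retail
print(round(damages_per_song / retail_per_song))   # ~9343x the retail price per song
```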

My guess is that the jury was turned off by Thomas’s implausible defense and her apparent refusal to take responsibility for her actions. Litigants disrespect the jury at their peril. It’s easy to imagine these jurors thinking, “She made us take off work and sit through a trial for this?” Observers who hoped for jury nullification – that a jury would conclude that the law was unjust and would therefore refuse to find even an obvious violator liable – must be sorely disappointed. It sure looks like juries will find violators liable, and more significantly, that they can be convinced to sympathize with the industry against obvious violators.

All of this, over songs that would have cost $23.76 from iTunes. At this point, Jammie Thomas must wish, desperately, that she had just paid the money.

Greetings, and a Thought on Net Neutrality

Hello again, FTT readers. You may remember me as a guest blogger here at FTT, writing about anti-circumvention, the print media’s superiority (or lack thereof) to Wikipedia, and a variety of other topics.

I’m happy to report that I’ve moved to Princeton to join the university’s Center for Information Technology Policy as its new associate director. Working with Ed and others here on campus, I’ll be helping bring the Center into its own as a leading interdisciplinary venue for research and conversation about the social and political impact of information technology.

Over the next few months, I’ll be traveling the country to look at how other institutions approach this area, in order to develop a strategic plan for Princeton’s involvement in the field. As a first step toward understanding the world of tech policy, I’ve been doing a lot of reading lately.

One great source is The Creation of the Media by Princeton’s own Paul Starr. It’s carefully argued and highly readable, and I’ve found its content challenging. Conversations in tech policy often seem to stem from the premise that in the interaction between technology and society, the most important causal arrow points from the technologies into the social sphere. “Remix culture”, perhaps the leading example at the moment, is a major cultural shift that is argued to stem from inherent properties of digital media, such as the identity between a copy and an original of a digital work.

But Paul argues that politics usually dominates the effects of technology, not the other way around. For example, although cheap printing technologies helped make the early United States one of the most literate countries of its time, Paul argues that America’s real advantage was its postal system. Congress not only invested heavily in the postal service, but also gave a special discounted rate to printed material, effectively subsidizing publications of all kinds. As a result, much more printed material was mailed in America than in, say, Britain at the same time.

One fascinating observation from Paul’s book (pages 180-181 in the hardcover edition, for those following along at home) concerns the telegraph. In Britain, the telegraph was nationalized in order to ensure that private network operators didn’t take advantage of the natural monopoly that they enjoyed (“natural” since once there was one set of telegraph wires leading to a place, it became hard to justify building a second set).

In the United States, there was a vociferous debate about whether or not to nationalize the telegraph system, which was controlled by Western Union, a private company:

[W]ithin the United States, Western Union continued to dominate the telegraph industry after its triumph in 1866 but faced two constraints that limited its ability to exploit its market power. First, the postal telegraph movement created a political environment that was, to some extent, a functional substitute for government regulation. Britain’s nationalization of the telegraph was widely discussed in America. Worried that the US government might follow suit, Western Union’s leaders at various times extended service or held rates in check to keep public opposition within manageable levels. (Concern about the postal telegraph movement also led the company to provide members of Congress with free telegraph service — in effect, making the private telegraph a post office for officeholders.) Public opinion was critical in confining Western Union to its core business. In 1866 and again in 1881, the company was on the verge of trying to muscle the Associated Press aside and take over the wire service business itself when it drew back, apparently out of concern that it could lose the battle over nationalization by alienating the most influential newspapers in the country. Western Union did, however, move into the distribution of commercial news and in 1871 acquired majority control of Gold and Stock, a pioneering financial information company that developed the stock ticker.

This situation, a dynamic equilibrium in which a private party polices its own behavior in order to stave off the threat of government intervention, strikes me as closely analogous to the net neutrality debate today. Network operators, although not subject to neutrality requirements, are reluctant to exercise the traffic-discrimination options formally open to them, because they recognize that doing so might invite regulation.