
Music Industry Under Fire for Exploring EFF Suggestion

Jim Griffin, a music industry consultant in the unusual position of being regarded as smart and reasonable by participants across the copyright debate, revealed last week that he’s working to start a new music industry organization that will urge ISPs to bundle a music licensing fee into their monthly service charges. In exchange, the major labels will agree not to sue (and, presumably, not to threaten to sue) the ISPs’ customers for infringement of the music whose rights they own. The goal, Griffin says, is to “monetize the anarchy of the Internet.”

This idea has a long history and has been propounded at various times by some on the “copyleft.” The Electronic Frontier Foundation, for example, issued a report in April 2004 entitled “A Better Way Forward: Voluntary Collective Licensing of Music File Sharing.” That report even suggested the $5 per user per month ($60 per user per year) that Griffin apparently has in mind.

According to the OECD, there were roughly 60 million broadband subscriptions in the United States as of the end of 2006. If each of these were to pay $60 a year, the total would be $3.6 billion a year. I know that broadband uptake is increasing, but I remain unsure how Griffin figures that the proposed system “could create a pool as large as $20 billion a year.” Perhaps this imagines global, rather than national, uptake of the plan? If so, it seems to embody some optimistic assumptions about how widely any such agreement could plausibly be extended.
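The back-of-the-envelope arithmetic, using the OECD subscription count and the EFF’s proposed fee (this is my own reasoning, not Griffin’s actual model), looks like this:

```python
subscriptions = 60_000_000   # US broadband subscriptions, per OECD (end of 2006)
annual_fee = 60              # $5 per user per month, as in the EFF proposal

pool = subscriptions * annual_fee
print(f"US-only pool: ${pool / 1e9:.1f} billion/year")  # $3.6 billion/year

# Subscriptions needed to reach Griffin's $20 billion figure at the same fee:
needed = 20_000_000_000 / annual_fee
print(f"Needed for $20B: {needed / 1e6:.0f} million subscribers")  # ~333 million
```

At $60 a year, $20 billion implies roughly 333 million paying subscriptions, several times the US broadband base, which is why a global rollout seems to be the only way to get there.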

Some prominent blogs have reacted with ire—Michael Arrington at TechCrunch, for example, characterizes the move as an “extortion scheme.” Arrington argues that a licensing system will hinder innovation because the revenues from it will be constant irrespective of the amount or quality of music published by the labels, and will flow to an infrastructure that, once it begins to be subsidized, will have little structural incentive to innovate. He also argues in a later post that since the core of the system is a covenant not to sue, it represents a “protection racket.”

I think this kind of skepticism is poorly justified at this point. If the labels can turn their statutory right to sue for damages after copyright infringement into a voluntary system where they get paid and nobody gets sued, it strikes me as a case of the system working. And the numbers matter: the idea of a $20 billion payoff that would triple the industry’s current $10 billion in annual revenue does not seem reasonable, and unless I am missing something, it does not seem probable either.

There are two core questions for the plan. First, what will it cover? The idea is that it will let the industry stop suing, and thereby end the antagonism between labels and customers. But unless a critical mass of labels joins the plan, users whose ISPs are paying in will still face the risk of suit from non-participating copyright holders. In fact, if the plan takes off, individual rights holders may face an incentive to defect: since consumers aren’t likely to track which music is covered, a holdout’s catalog will be infringed just as often as a participant’s, and the holdout alone retains the right to sue.

Second, how will the revenue be shared? Filesharing metrics, provided by analysts like BigChampagne, are at best approximate, and they track only downloads that occur over the public, unencrypted Internet: presumably a large share of the relevant copying, but not all of it, especially on university and other private networks. The allocation squabbles will be fierce; if past is prologue, the labels will not prove an amicable bunch in negotiating with one another.
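To see why approximate metrics invite fights, consider a minimal pro-rata split. Every number here is hypothetical; this is not BigChampagne’s methodology or data:

```python
pool = 3_600_000_000  # hypothetical annual licensing pool, in dollars

# Hypothetical measured download counts per label (only public, unencrypted
# traffic is observable, so these undercount encrypted and campus copying).
measured = {"Label A": 450_000_000, "Label B": 300_000_000, "Label C": 250_000_000}

total = sum(measured.values())
for label, count in measured.items():
    print(f"{label}: ${pool * count / total / 1e9:.2f} billion")

# If Label A's count is mismeasured by just 5%, its payout swings by roughly
# $40 million at these figures -- more than enough to fuel a squabble.
```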

Finally, it’s important to remember that the labels’ power depends, in the very long run, on their ability to sign the best new talent. If the licensing system proposed by Griffin takes off, it may preserve the status quo for now. But if the industry continues to give artists themselves a raw deal, as it is so often accused of doing, artists will still have the growing power that digital technology gives them to share their music without a label’s help.

Could Use-Based Broadband Pricing Help the Net Neutrality Debate?

Yesterday, thanks to a leaked memo, it came to light that Time Warner Cable intends to try out use-based broadband pricing on a few of its customers. It looks like the plan is for several tiers of use, with the heaviest users possibly paying overage charges on a per-byte basis. In confirming its plans to Reuters, Time Warner pointed out that its heaviest-using five percent of customers generate the majority of data traffic on the network, but still pay as though they were typical users. Under the new proposal, pricing would be based on the total amount of data transferred, rather than the peak throughput on a connection.
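The memo reportedly gives only the outline, so the sketch below invents its own numbers; it just shows the shape of a capped tier with per-gigabyte overage:

```python
def monthly_bill(gb_used: float,
                 tier_cap_gb: float = 20.0,    # hypothetical cap
                 tier_price: float = 40.00,    # hypothetical base price
                 overage_per_gb: float = 1.00  # hypothetical overage rate
                 ) -> float:
    """Bill for one month: flat tier price plus per-GB overage past the cap."""
    overage_gb = max(0.0, gb_used - tier_cap_gb)
    return tier_price + overage_gb * overage_per_gb

print(monthly_bill(5.0))    # light user: 40.0 (under the cap)
print(monthly_bill(55.0))   # heavy user: 40 + 35 * 1 = 75.0
```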

If the current flat-rate pricing is based on what the connection is worth to a typical customer, who makes only limited use of it, then the heaviest five percent of users (call them super-users for short) are reaping a surplus. Bandwidth use might be highly elastic with respect to price, but I think it is also true that super-users reap a great deal more benefit from their broadband connections than other users do; think of those who pioneer video consumption online, for example.

What happens when network operators can’t capture this surplus? They have marginally less incentive to build out the network and drive down the unit cost of data transfer. If the pricing model changed so that network providers’ revenue remained the same in total but was based directly on how much the network is used, the price would go down for the lightest users and up for the heaviest. If a tiered structure left prices the same for most users and raised them on the heaviest, operators’ total revenue would go up. In either case, networks would have an incentive to encourage innovative, high-bandwidth uses of their networks, regardless of what kind of use that is.
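A toy, revenue-neutral example makes the redistribution concrete; the usage split below is invented to echo the claim that the top five percent carry most of the traffic:

```python
customers, flat_price = 100, 40.0
revenue = customers * flat_price          # $4,000/month, held constant

total_gb = 10_000.0
super_gb = 0.60 * total_gb / 5            # 5 super-users carry 60% of traffic
typical_gb = 0.40 * total_gb / 95         # the other 95 share the rest

price_per_gb = revenue / total_gb         # $0.40/GB is revenue-neutral
print(f"Super-user bill: ${super_gb * price_per_gb:.2f}")    # $480.00, up from $40
print(f"Typical bill:    ${typical_gb * price_per_gb:.2f}")  # $16.84, down from $40
```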

Gigi Sohn of Public Knowledge has come out in favor of Time Warner’s move on these and other grounds. It’s important to acknowledge that network operators still have familiar, monopolistic reasons to intervene against traffic that competes with phone service or cable. But under the current pricing structure, they’ve had a relatively strong argument to discriminate in favor of the traffic they can monetize, and against the traffic they can’t. By allowing them to monetize all traffic, a shift to use-based pricing would weaken one of the most persuasive reasons network operators have to oppose net neutrality.

Clinton's Digital Policy

This is the second in our promised series summing up where the 2008 presidential candidates stand on digital technology issues. (See our first post, about Obama.) This time, we’ll take a look at Hillary Clinton.

Hillary has a platform plank on innovation. Much of it will be welcome news to the research community: She wants to up funding for basic research, and increase the number and size of NSF fellowships for graduate students in the sciences. Beyond urging more spending (which is, arguably, all too easy at this point in the process), she indicates her priorities by urging two shifts in how science funds are allocated. First, relative to their current slice of the federal research funding pie, she wants a disproportionate amount of the increase in funding to go to the physical sciences and engineering. Second, she wants to “require that federal research agencies set aside at least 8% of their research budgets for discretionary funding of high-risk research.” Where the 8% figure comes from, and which research would count as “high risk,” I don’t know. Readers, can you help?

As for specifically digital policy questions, she highlights just one: broadband. She supports “tax incentives to encourage broadband deployment in underserved areas,” as well as providing “financial support” for state, local, and municipal broadband initiatives. Government mandates designed to help the communications infrastructure of rural America keep pace with the rest of the country are an old theme, familiar in the telephone context as universal service requirements. That program taxes the telecommunications industry’s commercial activity and uses the proceeds to fund deployment in areas where profit-seeking actors haven’t seen fit to expand. It’s politically popular in part because it serves the interests of less-populous states, which enjoy disproportionate importance in presidential politics.

On the larger question of subsidizing broadband deployment everywhere, the Clinton position outlined above strikes me, at its admittedly high level of vagueness, as being roughly on target. I’m politically rooted in the laissez-faire, free-market right, which tends to place a heavy burden of justification on government interventions in markets. In its strongest and most brittle form, the free-market creed can verge on naturalistic fallacy: For any proposed government program, the objection can be raised, “if that were really such a good idea, a private enterprise would be doing it already, and turning a profit.” It’s an argument that applies against government interventions as such, and it has often been used to oppose broadband subsidies. Broadband is attractive and valuable, and people like to buy it, the reasoning goes, so there’s no need to bother with tax-and-spend supports.

The more nuanced truth, acknowledged by thoughtful participants all across the debate, is that subsidies can be justified if, but only if, the market is failing in some way. In this case, the failure would be a positive externality: adding one more customer to the broadband Internet conveys benefits to so many different parties that network operators can’t possibly hope to collect payment from all of them.

The act of plugging someone in creates a new customer for online merchants, a present and future candidate for employment by a wide range of far-flung employers, a better-informed and more critical citizen, and a happier, better-entertained individual. To the extent that each of these benefits is enjoyed by the customer, they will come across as willingness to pay a higher price for broadband service. But to the extent that other parties derive these benefits, the added value that would be created by the broadband sale will not express itself as a heightened willingness to pay, on the part of the customer. If there were no friction at all, and perfect foreknowledge of consumer behavior, it’s a good bet that Amazon, for example, would be willing to chip in on individual broadband subscriptions of those who might not otherwise get connected but who, if they do connect, will become profitable Amazon customers. As things are, the cost of figuring out which third parties will benefit from which additional broadband connection is prohibitive; it may not even be possible to find this information ahead of time at any price because human behavior is too hard to predict.
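A stylized arithmetic version of this argument, with every dollar figure invented for illustration:

```python
cost = 50           # hypothetical monthly cost of providing one connection
private_value = 40  # what this customer would pay for it
external_value = 25 # combined benefit to merchants, employers, fellow citizens

print(private_value >= cost)                   # False: no sale happens privately
print(private_value + external_value >= cost)  # True: the sale is socially worthwhile

# Any subsidy of at least cost - private_value closes the gap:
print(f"Minimum subsidy: ${cost - private_value}")  # $10
```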

That means there’s some amount of added benefit from broadband that is not captured on the private market – the price charged to broadband customers is higher than would be economically optimal. Policymakers, by intervening to put downward pressure on the price of broadband, could lead us into a world where the myriad potential benefits of digital technology come at us stronger and sooner than they otherwise might. Of course, they might also make a mess of things in any of a number of ways. But at least in principle, a broadband subsidy could and should be done well.

One other note on Hillary: Appearing on Meet the Press yesterday (transcript here), she weighed in on Internet-enabled transparency. It came up tangentially, when Tim Russert asked her to promise she wouldn’t repeat her husband’s surprise decision to pardon political allies over the objection of the Justice Department. The pardon process, Hillary maintained, should be made more transparent–and, she went on to say:

I want to have a much more transparent government, and I think we now have the tools to make that happen. You know, I said the other night at an event in New Hampshire, I want to have as much information about the way our government operates on the Internet so the people who pay for it, the taxpayers of America, can see that. I want to be sure that, you know, we actually have like agency blogs. I want people in all the government agencies to be communicating with people, you know, because for me, we’re now in an era–which didn’t exist before–where you can have instant access to information, and I want to see my government be more transparent.

This seems strongly redolent of the transparency thrust in Obama’s platform. If nothing else, it suggests that his focus on the issue may be helping pull the field into more explicit, more concrete support for the Internet as a tool of government transparency. Assuming that either Obama or Clinton becomes the nominee, November will offer at least one major-party presidential candidate who is on record supporting specific new uses of the Internet as a transparency tool.

Obama's Digital Policy

The Iowa caucuses, less than a week away, will kick off the briefest and most intense series of presidential primaries in recent history. That makes it a good time to check in on what the candidates are saying about digital technologies. Between now and February 5th (the 23-state tsunami of primaries that may well resolve the major party nominations), we’ll be taking a look.

First up: Barack Obama. A quick glance at the sites of other candidates suggests that Obama is an outlier: none of the other major players comes anywhere near his level of detail in official campaign output. That may mean we’ll be tempted to spend a disproportionate amount of time talking about him, but if so, I guess that’s the benefit he reaps by paying attention. Michael Arrington’s TechCrunch tech primary provides the best summary I’ve found, compiled from other sources, of candidates’ positions on tech issues, and we may find ourselves relying on it over the next few weeks.

For Obama, we have a detailed “Technology and Innovation” white paper. It spans a topical area that Europeans often refer to as ICTs: information and communications technologies. That means basically anything digital, plus the analog ambit of the FCC (media concentration, universal service, and so on). Along the way, other areas get passing mention: immigration of high-tech workers, trade policy, energy efficiency.

Net neutrality may be the most talked-about tech policy issue in Washington; it has generated a huge amount of constituent mail, perhaps as many as 600,000 letters. Obama is clear on this: He says requiring ISPs to provide “accurate and honest information about service plans” that may violate neutrality is “not enough.” He wants a rule to stop network operators from charging “fees to privilege the content or applications of some web sites and Internet applications over others.” I think that full transparency about non-neutral Internet service may indeed be enough, an idea I first got from a comment on this blog, but in any case it’s nice to have a clear statement of his view.

Where free speech collides with child protection, Obama faces the structural challenge, common to Democrats, of simultaneously appeasing both the entertainment industry and concerned moms. Predictably, he ends up engaging in a little wishful thinking:

On the Internet, Obama will require that parents have the option of receiving parental controls software that not only blocks objectionable Internet content but also prevents children from revealing personal information through their home computer.

The idealized version of such software, in which unwanted communications are stopped while desirable ones remain unfettered, is typically quite far from what the technology can actually provide. The software faces a design tradeoff between being too broad, in which case desirable use is stopped, and too narrow, in which case undesirable online activity is permitted. That might be why Internet filtering software, despite being available commercially, isn’t already ubiquitous. Given that parents can already buy it, Obama’s aim to “require that parents have the option of receiving” such software sounds like a proposal for the software to be subsidized or publicly funded; I doubt that would make it better.
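The tradeoff is easy to demonstrate with the crudest possible filter, a keyword blocklist. This is a deliberately naive sketch, not how any commercial product works in full:

```python
BLOCKLIST = {"breast", "drugs"}  # a hypothetical, tiny blocklist

def blocked(page_text: str) -> bool:
    """Block a page if any word on it matches the blocklist."""
    return any(word in BLOCKLIST for word in page_text.lower().split())

print(blocked("breast cancer screening guidelines"))    # True: overblocking
print(blocked("buy unregulated pharmaceuticals here"))  # False: underblocking

# Growing the list fixes the misses but blocks more legitimate pages;
# shrinking it does the reverse. No setting eliminates both errors.
```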

On privacy, the Obama platform again reflects a structural problem. Voters seem eager for a President who will have greater concern for statutory law than the current incumbent does. But some of the secret and possibly illegal reductions of privacy that have gone on at the NSA and elsewhere may actually (in the judgment of those privy to the relevant secrets) be indispensable. So Obama, like many others, favors “updating surveillance laws.” He’ll follow the law, in other words, but first he wants it modified so that it can be followed without unduly tying his hands. That’s very likely the most reasonable kind of view a presidential candidate could have, but it doesn’t tell us how much privacy citizens will enjoy if he gets his way. The real question, unanswered in this platform, is exactly which updates Obama would favor. He himself is probably reserving judgment until, briefed by the intelligence community, he can competently decide what updates are needed.

My favorite part of the document, by far, is the section on government transparency. (I’d be remiss were I not to shamelessly plug the panel on exactly this topic at CITP’s upcoming January workshop.) The web is enabling amazing new levels, and even new kinds, of sunlight to accompany the exercise of public power. If you haven’t experienced MAPlight, which pairs campaign contribution data with legislators’ votes, then you should spend the next five minutes watching this video. Josh Tauberer, who launched Govtrack.us, has pointed out that one major impediment to making these tools even better is the reluctance of government bodies to adopt convenient formats for the data they publish. A plain text page (typical fare on existing government sites like THOMAS) meets the letter of the law, but an open format with rich metadata would see the same information put to more and better use.
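The gap between “meets the letter of the law” and “useful to toolmakers” is easy to see side by side. The record below uses a hypothetical schema of my own devising, not THOMAS’s or GovTrack’s actual format:

```python
import json

# What THOMAS-style plain text conveys: one undifferentiated string.
plain = "H.R. 1234, On Passage, December 12, 2007: 220 yeas, 215 nays."

# The same facts in a structured, machine-readable form (hypothetical schema).
vote = {
    "bill": {"congress": 110, "type": "hr", "number": 1234},
    "question": "On Passage",
    "date": "2007-12-12",
    "totals": {"yea": 220, "nay": 215},
}

print(json.dumps(vote, indent=2))
# A tool like MAPlight can join these fields against contribution data
# automatically; the plain-text version forces every tool to screen-scrape.
```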

Obama’s stated position is to make data available “online in universally accessible formats,” a clear nod in this direction. He also calls for live video feeds of government proceedings. One more radical proposal, camouflaged among these others, is

…pilot programs to open up government decision-making and involve the public in the work of agencies, not simply by soliciting opinions, but by tapping into the vast and distributed expertise of the American citizenry to help government make more informed decisions.

I’m not sure what that means, but it sounds exciting. If I wanted to start using wikis to make serious public policy decisions – and needed to make the idea sound simple and easy – that’s roughly how I might put it.

The "…and Technology" Debate

When an invitation to the Facebook group came along, I was happy to sign up as an advocate of ScienceDebate 2008, a grassroots effort to get the presidential candidates together for a group grilling on, as the web site puts it, “what may be the most important social issue of our time: Science and Technology.”

Which issues, exactly, would the debate cover? The web site lists seventeen, ranging from pharmaceutical patents to renewable energy to stem cells to space exploration. Each of the issues mentioned is both important and interesting, but the list is missing something big: It doesn’t so much as touch on digital information technologies. Nothing about software patents, the future of copyright, net neutrality, voting technology, cybersecurity, broadband penetration, or other infotech policy questions. The web site’s list of prominent supporters, rich with Nobel laureates and university presidents (our own President Tilghman among them), shares this strange gap: it includes only one computer-focused expert, Peter Norvig of Google.

Reading the site reminded me of John McCain’s recent remark (captured in a Washington Post piece by Garrett Graff) that the minor issues he might delegate to a vice president include “information technology, which is the future of this nation’s economy.” If information technology really is so important, why doesn’t it register as a larger blip on the national political radar?

One theory would be that, despite their protestations to the contrary, political leaders do not understand how important digital technology is. If they did understand, the argument might run, then they’d feel more motivated to take positions. But I think the answer lies elsewhere.

Politicians, in their perennial struggle to attract voters, have to take into account not only how important an issue actually is, but also how likely it is to motivate voting decisions. That’s why issues that make a concrete difference to a relatively small fraction of the population, such as flag burning, can still emerge as important election themes if the level of voter emotion they stir up is high enough. Tech policy may, in some ways, be a kind of opposite of flag burning: An issue that is of very high actual importance, but relatively low voting-decision salience.

One reason tech policy might tend to punch below its weight, politically, is that many of the most important tech policy questions turn on factual, rather than normative, grounds. There is surprisingly wide and surprisingly persistent reluctance to acknowledge, for example, how insecure voting machines actually are, but few would argue with the claim that extremely insecure voting machines ought not to be used in elections.

On net neutrality, to take another case, those who favor intervention tend to think that a bad outcome (with network balkanization and a drag on innovators) will occur under a laissez-faire regime. Those who oppose intervention see a different but similarly negative set of consequences occurring if regulators do intervene. The debate at its most basic level isn’t about the goodness or badness of various possible outcomes, but is instead about the relative probabilities that those outcomes will happen. And assessing those probabilities is, at least arguably, a task best entrusted to experts rather than to the citizenry at large.

The reason infotech policy questions tend to recede in political contexts like the science debate, in other words, is not that their answers matter less. It’s that their answers depend, to an unusual degree, on technical fact rather than on value judgment.