A Federal Court in Missouri has ruled on the BNETD case, which involves contract and DMCA claims, and issues of reverse engineering and interoperability. Because I played a role in the litigation (as an expert), I won’t comment on the court’s ruling. The rest of you are welcome to discuss it.
The music industry likes to complain about sales lost to piracy, but figures that show huge sales declines only tell part of the story. Before we blame this trend on infringement, we have to make several assumptions, including that the demand for music (whether purchased or pirated) has remained steady.
Figures available from the US Census Bureau suggest otherwise. Data on “Media Usage and Consumer Spending” abstracted from a study by Veronis Suhler Stevenson show the average number of hours spent listening to music by US residents age 12 and older has declined steadily since 1998 (from 283 to a projected 219 in 2003, a 21% decline). Meanwhile, home video, video games, and consumer Internet have seen dramatic gains. This suggests that people are turning to new forms of entertainment (i.e., the Internet, video games, and DVDs) at the expense of recorded music.
Here’s the data, extracted from the Census Bureau report, on the number of hours Americans spent using various types of media in 1998 and 2003.
| Activity | Hours, 1998 | Hours, 2003 (proj.) | Change (hours) |
|---|---|---|---|
| Listening to recorded music | 283 | 219 | -64 |
(Source: US Census Bureau, Statistical Abstract of the United States: 2003, p. 720.)
(Note 1: We chose to use 2003 as the ending point, even though the source includes projected 2004 data, on the assumption that the 2003 Statistical Abstract’s projected data would be more trustworthy for 2003 than for 2004. Using 2004 as the endpoint would not materially affect the analysis.)
(Note 2: It is possible that part of the decline in recorded-music hours is an artifact of the study methodology. The table caption states that the data for categories including recorded music were based on “survey research and consumer purchase data”. To the extent that the estimate of music listening hours comes from independent survey data, the decline in listening can plausibly be cited as a cause of the drop in music sales. But to the extent that the listening-time estimate is itself inferred from the drop in sales, it cannot be used to explain that drop without circularity. More methodological detail might be available in the VSS report, but that report is not available to the public.
However, we think it is unlikely that the listening time estimate is derived entirely from sales data. According to the same Census Bureau report (which cites as its source the same Veronis Suhler Stevenson report), per-capita spending on recorded music fell by only 4% from 1998 to 2003; the RIAA estimated a 15% drop in its total recorded music revenue over the same period. It seems unlikely that a 21% drop in listening time would be inferred entirely from a 4% or 15% spending drop.)
(Note 3: VSS wants $2000 for a copy of their report. We’re not in a position to pay that much. If anybody has a copy of the report and is able to fill us in about their methodology, we’d be grateful.)
[This entry was written by Alex Halderman and Ed Felten. If you cite this, please don’t attribute authorship to Ed alone.]
Ashlee Vance at the Register tells the amazing story of SunnComm, the DRM company whose CD “protection” product was famously defeated by holding down a PC’s Shift key. It’s one of those true stories that would be hopelessly implausible if told as fiction. Here’s the opening paragraph:
You might expect one of the world’s leading digital rights management (DRM) technology makers to have a rich history in either the computing or music fields or both. This is not the case for SunnComm International Inc. Instead, the firm’s experience revolves around a troubled oil and gas business, an Elvis and Madonna impersonator operation and even a Christmas tree farm.
The story goes on with shell companies, phantom sales contracts, SEC investigations, shareholder lawsuits, and many, many excuses from the CEO. Oh yeah, at some point the company found time to develop a laughably weak CD copy “protection” product, to threaten legal armageddon against my student Alex Halderman when he wrote a paper analyzing the technology and detailing its weaknesses, and to somehow sell the technology to record companies despite its utter failure to keep even one song off the file-sharing networks.
Readers who are even moderately skeptical of CEO excuses will recognize this company for what it is. And remember, this company can plausibly claim to be the leader in music DRM. Gives you lots of confidence in the viability of DRM, doesn’t it?
In the recent hooha about CBS and the forged National Guard memos, one important issue has somehow been overlooked – the impact of the memo discussion on future forgery. There can be no doubt that all the talk about proportional typefaces, superscripts, and kerning will prove instructive to would-be amateur forgers, who will know not to repeat the mistakes of the CBS memos’ forger. Who knows, some amateur forgers may even figure out that if you want a document to look like it came from a 1970s Selectric typewriter, you should type it on a 1970s Selectric typewriter. The discussion, in other words, provides a kind of roadmap for would-be forgers.
This kind of tradeoff, between open discussion and future security worries, is common with information security issues – and this is an infosecurity issue, since it has to do with the authenticity of records. Any discussion of the pros and cons of a particular security system or artifact will inevitably reveal information useful to some hypothetical bad guy.
Nobody would dream of silencing the CBS memos’ critics because of this; and CBS would have been a laughingstock had it tried to shut down the discussion by asserting future forgery fears. But in more traditional infosecurity applications, one hears such arguments all the time, especially from the companies that, like CBS, face embarrassment if the facts are disclosed.
What’s true with CBS is true elsewhere in the security world. Disclosure teaches the public the truth about the situation at hand (in this case the memos), a benefit that shouldn’t be minimized. Even more important, disclosure deters future sloppiness – you can bet that CBS and others will be much more careful in the future. (You might think that the industry should police itself so that such deterrents aren’t necessary; but experience teaches otherwise.)
My sense is that it is only because cybersecurity seems remote and mysterious to most people that the anti-disclosure arguments get any traction. If people thought about most cybersecurity problems the same way they think about the CBS memos, the cybersecurity disclosure debate would be much healthier.
The American Conservative Union, an influential right-wing group, has announced its opposition to the Induce Act, and is running ads criticizing those Republicans who support the Act. This should not be surprising, for opposition to the Act is a natural position for true conservatives, who oppose government regulation of technology products and support a competitive marketplace for technology and entertainment.
One sometimes hears the claim that conservatives should support the Induce Act, because that’s what big business wants. But thoughtful conservatives support free markets, not giveaways to specific business sectors. And conservatives who understand the economy know that the Induce Act is supported by a few businesses, but opposed by many more, and that the opponents – the computer, electronics, Internet, and software industries – account for a larger and more dynamic portion of the economy than the supporters do.
The Induce Act is a nice litmus test for self-described conservative lawmakers. They can support the Act, and confirm the criticism that conservatism is just a fig-leaf for corporate welfare. Or they can oppose the Act and confirm their own claims to stand for competition and the free market.
The ACU sees this choice for what it is, and opposes the Induce Act. Let’s hope that more conservatives join them.
Today I’ll wrap up Vice Week here at Freedom to Tinker with an entry on porn labeling. On Monday I agreed with the conventional wisdom that online porn regulation is a mess. On Tuesday I wrote about what my wife and I do in our home to control underage access to inappropriate material. Today, I’ll suggest a public approach to online porn that might possibly do a little bit of good. And as Seth Finkelstein (a.k.a. Eeyore, a.k.a. The Voice of Experience) would probably say, a little bit of good is the best one can hope for on this issue. My approach is similar to one that Larry Lessig sketched in a recent piece in Wired.
My proposal is to implement a voluntary labeling scheme for Web content. It’s voluntary, because we can’t force overseas sites to comply, so we might as well just ask people politely to participate. Labeling schemes tend not to be adopted if the labels are complicated, or if the scheme requires all sites to be labeled. So I’ll propose the simplest possible labels, in a scheme where the vast majority of sites need no labels at all.
The idea is to create a label, which I’ll call “adultsonly” (Lessig calls it “porn” but I think that’s imprecise). Putting the adultsonly tag on a page indicates that the publisher requests that the page be shown only to adults. And that’s all it means. There’s no official rule about when material should be labeled, and no spectrum of labels. It’s just the publisher’s judgment as to whether the material should be shown to kids. You could label an entire page by adding an adultsonly meta-tag to it; or you could label a portion of a page by surrounding it with <adultsonly> and </adultsonly> tags. This would be easy to implement, and it would be backward compatible since browsers ignore tags that they don’t understand. Browsers could include a kids-mode that would hide all adultsonly material.
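To make the proposal concrete, here is a minimal sketch of how a browser’s kids-mode might honor such labels. The tag and meta-tag names come from the proposal; the function names, regexes, and filtering details are my own illustration, not any browser’s actual behavior.

```python
import re

# Labeled spans: <adultsonly> ... </adultsonly> anywhere in the page.
ADULTSONLY_SPAN = re.compile(r"<adultsonly>.*?</adultsonly>",
                             re.DOTALL | re.IGNORECASE)
# Whole-page label: an adultsonly meta-tag (one plausible spelling).
ADULTSONLY_META = re.compile(r'<meta\s+name="adultsonly"', re.IGNORECASE)

def kids_mode_render(html: str) -> str:
    """Return the page as a hypothetical kids-mode browser would show it."""
    if ADULTSONLY_META.search(html):
        return ""  # entire page labeled adults-only: show nothing
    # Otherwise strip just the labeled portions; unlabeled content passes.
    return ADULTSONLY_SPAN.sub("", html)

page = "<p>Family recipes</p><adultsonly><p>Racy stuff</p></adultsonly>"
print(kids_mode_render(page))  # -> <p>Family recipes</p>
```

Note the backward compatibility the entry mentions: a browser without kids-mode simply ignores the unknown tags and shows everything.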
But where, you ask, is the incentive for web site publishers to label their racy material as adultsonly? The answer is that we create that incentive by decreeing that although material published on the open Internet is normally deemed as having been made available to kids, any material labeled as adultsonly will be deemed as having been made available only to adults. So by labeling its content, a publisher can ensure that the content’s First Amendment status is determined by the standard obscenity-for-adults test, rather than the less permissive obscenity-for-kids test. (I’m assuming that such tests will exist and their nature will be determined by immovable politico-legal forces.)
This is a labeling scheme that even a strict libertarian might be able to love. It’s simple and strictly voluntary, and it doesn’t put the government in the business of establishing fancy taxonomies of harmful content (beyond the basic test for obscenity, which is in practice unchangeable anyway). It’s more permissive of speech than the current system, at least if that speech is labeled. This is, I think, the least objectionable content labeling system possible.
Responding to my entry yesterday about pokerbots, Jordan Lampe emails a report from the world of backgammon. Backgammon bots play at least as well as the best human players, and backgammon is often played for money, so the temptation to use bots in online play is definitely there.
Most people seem to be wary of this practice, and the following countermeasures have been developed (not necessarily exclusive, or all used by the same person):

1) Don’t play for money; only play for fun.

2) Play for money only against people you know [well].

3) If somebody takes a long time after every move, suspect that they are plugging their moves into a computer.

4) At the end of the game, you can analyze your game with one of the computer programs. It turns out that all the computers rate each other’s play very highly, with an error rate of 0-1.5 “millipoints” per move. If you get a rate of exactly 0 you can be dead certain they are using the same computer program. Computers rate the best humans in the world in the 3-4 range. In any case, if your opponent is using a computer program to decide all his moves it is fairly easy to tell after only a few games, and then avoid playing with that player any more.

5) Some players take the attitude, “if I lose, at least I’ll have learned something”, and therefore don’t care if they are playing bots.

6) Using a bot to help you win is, well, boring, and so it doesn’t happen that much anyway.
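The error-rate heuristic in item 4 is simple enough to sketch in code. The thresholds are the ones Jordan quotes (0-1.5 millipoints per move for bots, 3-4 for the best humans); the function itself, and how it bins the gaps between those ranges, is my own illustration.

```python
def classify_player(millipoint_errors):
    """Rough bot-detection heuristic based on per-move error rates,
    as scored by a backgammon analysis program.

    millipoint_errors: one error score per move, in millipoints.
    """
    avg = sum(millipoint_errors) / len(millipoint_errors)
    if avg == 0:
        return "same bot as the analyzer"  # identical program, zero error
    if avg <= 1.5:
        return "probably a bot"            # bots rate each other 0-1.5
    if avg <= 4:
        return "world-class human"         # best humans score 3-4
    return "ordinary human"

print(classify_player([0, 0, 0]))    # -> same bot as the analyzer
print(classify_player([1.0, 1.2]))   # -> probably a bot
```

As Jordan says, a few games’ worth of moves is plenty for averages like these to separate cleanly.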
Having played a lot of poker and backgammon in my day, I suspect that distinguishing human play from computer play would be harder in poker than it is in backgammon. For one thing, in backgammon you always know what information your opponent had in choosing a certain move (both players have the same information at all times); but in poker you may never know what your opponent knew or believed at a particular point in time. Also, a good poker player is always trying to frustrate opponents’ attempts to build mental models of his decision processes; this type of misdirection, which a good bot will emulate by using randomized algorithms, will make bot play harder to distinguish from skilled human play.
Jordan identifies another factor that several poker players mentioned as well: the fact that most gambling income is made by separating weak players from their money. As long as there are enough “fish”, all of the sharks, whether human or not, will feast. When the stakes get high, the fish will be driven out; but at low stakes, good human players may still make money.
Computerized “bots” may be common in online poker games, according to a Mike Brunker story at MSNBC.com. I have my doubts about the prevalence today of skillful, fully automated pokerbots, but there is an interesting story here nonetheless.
Most online casinos ban bots, but there is really no way to enforce such a rule. Already, many online players use electronic assistants that help them calculate odds, something that world-class players are adept at doing in their heads. Pokerbot technology will only advance, so that even if bots don’t outplay people now, they will eventually. (The claim, sometimes heard, that computers cannot understand bluffing in poker, is incorrect. Game theory can predict and explain bluffing behavior. A good pokerbot will bluff sometimes.)
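The point that bluffing is a matter of calculation, not psychology, is easy to see in a toy model. In the standard simplified betting game from game theory textbooks, a player who bets b into a pot of p should bluff just often enough to make an opponent indifferent between calling and folding. The derivation is standard; the code below is only an illustration of it.

```python
def optimal_bluff_frequency(pot: float, bet: float) -> float:
    """Fraction of betting hands that should be bluffs, in the
    simplified model where a caller risks `bet` to win `pot + bet`.

    The caller is indifferent when, with bluff probability alpha,
        alpha * (pot + bet) == (1 - alpha) * bet,
    which solves to alpha = bet / (pot + 2 * bet).
    """
    return bet / (pot + 2 * bet)

# A pot-sized bet should be a bluff about one time in three.
print(optimal_bluff_frequency(pot=100, bet=100))  # -> 0.3333333333333333
```

A bot that bluffs at exactly this randomized rate gives opponents nothing to exploit, which is precisely why “computers can’t bluff” is wrong.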
Once bots are better than people, it’s hard to see why a rational person, with real money at stake, would fail to use a bot. Sure, watching your bot play is less fun than playing yourself; but losing to a bunch of bots isn’t much fun either. Old-fashioned human vs. human play will still be seen in very-low-stakes online games, where it’s not worth the trouble of deploying a bot, and in in-person games where the non-botness of players can be checked.
The online casinos are kidding themselves if they think they can enforce a no-bots rule. How can they tell what a player is doing in the privacy of his own home? Even if they can tell that a human’s hands are on the keyboard, how can they tell whether that human is getting advice from a bot?
The article discusses yet another unenforceable rule of online poker: the ban on collusion between players. If two or more players simply show each other their cards, they gain an advantage over the others at the table. There’s no way for an online casino to prevent players from conducting back-channel communications, so a ban on collusion is impossible to enforce.
By reiterating their anti-bot and anti-collusion rules, and by claiming to have mysterious enforcement mechanisms, online casinos may be able to stem the tide of cheating for a while. But eventually, bots and collusion will become the norm, and lone human players will be driven out of all but the lowest stakes games.
But there is another strategy. An online casino could encourage bots, and even set up bots-only games. The game would then become not a human vs. human card game but a human vs. human battle between bot designers for geekly mastery. I’ll bet there are plenty of programmers out there who would like to give it a try.
It’s day two of porn week here at Freedom to Tinker, and time to talk about the tools parents have to limit what their kids see. As a parent, I have not only an opinion, but also an actual household policy (set jointly with my wife, of course) on this topic.
Like most parents, we want to limit what our kid sees. The reason is not so much that there are things we want our kid never to see, but more that we don’t think our kid is ready, yet, to see and hear absolutely everything in the world. Even the Cookie Monster is scary to kids at a certain age. Good parents know what their kids can handle alone, and what their kids can handle with a trusted adult present. We want to expose our kid to certain things gradually. Some things should be seen for the first time with a parent present to talk about what is being depicted.
But how can we do this, in the real world? It’s not enough simply to say that we should supervise our kid. To watch a kid nonstop, 24/7, is not only impractical but creepy. We don’t want to turn our home into a surveillance state.
Instead, we rely on architecture. For example, we put the only kid-accessible computer and TV in the busiest room of the house so that we’re less likely to lose track of what’s happening. But even that isn’t foolproof – it doesn’t work in the early morning hours when kids tend to be up while parents sleep.
This is where filtering technology can help. We find the TV rating and filtering system quite useful, despite its obvious flaws. This system is often called the V-chip, but we don’t actually rely on the V-chip itself. Instead, we rely on our Tivo to restrict access to shows with certain ratings, unless a secret password has been entered. We know that the technology overblocks and underblocks. But overall, we prefer a policy of “watch any kid-rated show you want, but ask a parent if you want to watch anything else” to the alternatives of “watch anything you want” or “always ask a parent first”. (A welcome side-effect: by changing the rating threshold we can easily implement a “no TV today” policy.)
It’s worth noting that we don’t use the federally mandated V-chip, which is built into our TV. We simply use the ratings associated with shows, and the parental controls that Tivo included voluntarily in its product. For us, the federal V-chip regulation provided, at most, the benefit of speeding standardization of the rating system. We’re happy with a semi-accurate, voluntary system that saves us time but doesn’t try to override our own judgment.
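The policy described above amounts to a threshold check over the standard US TV Parental Guidelines ratings. Here is a sketch of the logic (this is not Tivo’s actual implementation; the function, threshold, and password handling are all assumptions for illustration):

```python
# US TV Parental Guidelines ratings, ordered from most to least kid-friendly.
RATING_ORDER = ["TV-Y", "TV-Y7", "TV-G", "TV-PG", "TV-14", "TV-MA"]

PARENT_PASSWORD = "secret"  # hypothetical; a real device stores this securely

def may_watch(show_rating, threshold, password=""):
    """Allow shows rated at or below `threshold`; anything above it
    requires the parental password. A threshold of None blocks all
    ratings, implementing the 'no TV today' policy."""
    if password == PARENT_PASSWORD:
        return True
    if threshold is None:
        return False
    return RATING_ORDER.index(show_rating) <= RATING_ORDER.index(threshold)

print(may_watch("TV-G", "TV-PG"))            # -> True  (kid-rated, allowed)
print(may_watch("TV-MA", "TV-PG"))           # -> False (ask a parent)
print(may_watch("TV-MA", "TV-PG", "secret")) # -> True  (parent override)
```

The single adjustable threshold is what makes the household policy easy to state and easy to change.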
Adam Thierer at Technology Liberation Front offers a long and interesting discussion of the online porn wars, in the form of a review of two articles by Jeffrey Rosen and Larry Lessig. I’ve been meaning to write about online porn regulation for a while, and Thierer’s post seems like a good excuse to address that topic now.
Recent years have seen a series of laws aimed at restricting minors’ access to porn, such as the Communications Decency Act (CDA) and the Child Online Protection Act (COPA); these laws have been the subject of several important court decisions. The cases have driven a blip of interest in, and commentary on, online porn regulation.
The argument of Rosen’s article is captured in its title: “The End of Obscenity.”
Rosen argues that it’s only a matter of time before the very notion of obscenity – a word which here means “porn too icky to receive First Amendment protection” – is abandoned. Rosen makes a two-part argument for this proposition. First, he argues that the Miller test – the obscenity-detection rule decreed by the Supreme Court in the 1970s – is no longer tenable. Second, he argues that porn is becoming socially acceptable. Neither argument is as strong as Rosen suggests.
The Miller test says that material is obscene if it meets all three of these criteria: (1) the average person, applying contemporary community standards, would find it is designed to appeal to the prurient interest; (2) it depicts [icky sexual stuff]; and (3) taken as a whole, it lacks serious literary, artistic, scientific, or political value.
Rosen argues that the “community standards” language, which was originally intended to account for differences in standards between, say, Las Vegas and Provo, no longer makes sense now that the Internet makes the porn market international. How is an online porn purveyor to know whether he is violating community standards somewhere? The result, Rosen argues, must be that the most censorious community in the U.S. will impose its standards on everybody else.
The implication of Rosen’s argument is that, for the purposes of porn distribution, the whole Internet, or indeed the whole nation, is essentially a single community. Applying the standards of the national community would seem to solve this problem – and the rest of Rosen’s essay supports the notion that national standards are converging anyway.
The other problem with the Miller standard is that it’s hopelessly vague. This seems unavoidable with any standard that divides obscene from non-obscene material. As long as there is a legal and political consensus for drawing such a line, it will be drawn somewhere; so at best we might replace the Miller line with a slightly clearer one.
Which brings us to the second, and more provocative, part of Rosen’s essay, in which he argues that community standards are shifting to make porn acceptable, so that the very notion of obscenity is becoming a dinosaur. There is something to this argument – the market for online porn does seem to be growing – but I think Rosen goes too far. It’s one thing to say that Americans spend $10 billion annually on online porn, but it’s another thing entirely to say that a consensus is developing that all porn should be legal. For one thing, I would guess that the vast majority of that $10 billion is spent on material that is allowed under the Miller test, and the use of already-legal material does not in itself indicate a consensus for legalizing more material.
But the biggest flaw in Rosen’s argument is that the laws at issue in this debate, such as the CDA and COPA, are about restricting access to porn by children. And there’s just no way that the porn-tolerant consensus that Rosen predicts will extend to giving kids uncontrolled access to porn.
It looks like we’re stuck with more or less the current situation – limits on porn access by kids, implemented by ugly, messy law and/or technology – for the foreseeable future. What, if anything, can we do to mitigate this mess? I’ll address that question, and the Lessig essay, later in the week.