November 27, 2024

Online Poker and Unenforceable Rules

Computerized “bots” may be common in online poker games, according to a Mike Brunker story at MSNBC.com. I have my doubts about the prevalence today of skillful, fully automated pokerbots, but there is an interesting story here nonetheless.

Most online casinos ban bots, but there is really no way to enforce such a rule. Already, many online players use electronic assistants that help them calculate odds, something that world-class players are adept at doing in their heads. Pokerbot technology will only advance, so that even if bots don’t outplay people now, they will eventually. (The claim, sometimes heard, that computers cannot understand bluffing in poker is incorrect. Game theory can predict and explain bluffing behavior. A good pokerbot will bluff sometimes.)
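The game-theory point can be made concrete with the classic “half-street” toy game from the textbooks (a standard simplification for illustration, not any real pokerbot’s strategy): one player holds either a sure winner or a sure loser and may bet; the other holds a medium hand and must call or fold. At equilibrium the bettor bluffs at a frequency that leaves the caller indifferent, and vice versa:

```python
def equilibrium(pot, bet):
    """Equilibrium frequencies for the toy half-street game."""
    # Fraction of bets that are bluffs: chosen so the caller, who risks
    # `bet` to win `pot + bet`, gains nothing on average by calling.
    p_bluff = bet / (pot + 2 * bet)
    # Calling frequency: chosen so a bluff, which steals `pot` when the
    # caller folds and loses `bet` when called, breaks exactly even.
    call_freq = pot / (pot + bet)
    return p_bluff, call_freq

def caller_call_ev(p_bluff, pot, bet):
    # Calling wins pot+bet against a bluff, loses bet against a real hand.
    return p_bluff * (pot + bet) - (1 - p_bluff) * bet

def bluffer_ev(call_freq, pot, bet):
    # A bluff steals the pot when the caller folds, loses the bet when called.
    return (1 - call_freq) * pot - call_freq * bet

# With a pot of 2 and a bet of 1, one bet in four is a bluff and the
# caller calls two-thirds of the time; both indifference conditions hold.
p_bluff, call_freq = equilibrium(pot=2, bet=1)
assert abs(caller_call_ev(p_bluff, pot=2, bet=1)) < 1e-9
assert abs(bluffer_ev(call_freq, pot=2, bet=1)) < 1e-9
```

A bot that bluffs at this frequency cannot be exploited by an opponent who always calls or always folds, which is exactly why “computers can’t bluff” gets the logic backwards.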

Once bots are better than people, it’s hard to see why a rational person, with real money at stake, would fail to use a bot. Sure, watching your bot play is less fun than playing yourself; but losing to a bunch of bots isn’t much fun either. Old-fashioned human vs. human play will still be seen in very-low-stakes online games, where it’s not worth the trouble of deploying a bot, and in in-person games where the non-botness of players can be checked.

The online casinos are kidding themselves if they think they can enforce a no-bots rule. How can they tell what a player is doing in the privacy of his own home? Even if they can tell that a human’s hands are on the keyboard, how can they tell whether that human is getting advice from a bot?

The article discusses yet another unenforceable rule of online poker: the ban on collusion between players. If two or more players simply show each other their cards, they gain an advantage over the others at the table. There’s no way for an online casino to prevent players from conducting back-channel communications, so a ban on collusion is impossible to enforce.

By reiterating their anti-bot and anti-collusion rules, and by claiming to have mysterious enforcement mechanisms, online casinos may be able to stem the tide of cheating for a while. But eventually, bots and collusion will become the norm, and lone human players will be driven out of all but the lowest stakes games.

But there is another strategy. An online casino could encourage bots, and even set up bots-only games. The game would then become not a human vs. human card game but a human vs. human battle between bot designers for geekly mastery. I’ll bet there are plenty of programmers out there who would like to give it a try.

Voluntary Filtering Works for Us

It’s day two of porn week here at Freedom to Tinker, and time to talk about the tools parents have to limit what their kids see. As a parent, I have not only an opinion, but also an actual household policy (set jointly with my wife, of course) on this topic.

Like most parents, we want to limit what our kid sees. The reason is not so much that there are things we want our kid never to see, but more that we don’t think our kid is ready, yet, to see and hear absolutely everything in the world. Even the Cookie Monster is scary to kids at a certain age. Good parents know what their kids can handle alone, and what their kids can handle with a trusted adult present. We want to expose our kid to certain things gradually. Some things should be seen for the first time with a parent present to talk about what is being depicted.

But how can we do this, in the real world? It’s not enough simply to say that we should supervise our kid. To watch a kid nonstop, 24/7, is not only impractical but creepy. We don’t want to turn our home into a surveillance state.

Instead, we rely on architecture. For example, we put the only kid-accessible computer and TV in the busiest room of the house so that we’re less likely to lose track of what’s happening. But even that isn’t foolproof – it doesn’t work in the early morning hours when kids tend to be up while parents sleep.

This is where filtering technology can help. We find the TV rating and filtering system quite useful, despite its obvious flaws. This system is often called the V-chip, but we don’t actually rely on the V-chip itself. Instead, we rely on our TiVo to restrict access to shows with certain ratings unless a secret password has been entered. We know that the technology overblocks and underblocks. But overall, we prefer a policy of “watch any kid-rated show you want, but ask a parent if you want to watch anything else” to the alternatives of “watch anything you want” or “always ask a parent first”. (A welcome side-effect: by changing the rating threshold we can easily implement a “no TV today” policy.)
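The policy is easy to sketch in code. This is a hypothetical simplification in the spirit of what our recorder does, not TiVo’s actual implementation; the rating order and password check are assumptions for illustration:

```python
# U.S. TV parental guidelines, ordered from most to least kid-friendly
# (assumed ordering for this sketch).
RATINGS = ["TV-Y", "TV-Y7", "TV-G", "TV-PG", "TV-14", "TV-MA"]

def may_watch(show_rating, threshold, password_entered=False):
    """Allow shows at or below the threshold; anything else needs a parent."""
    if password_entered:
        # A parent has typed the password, so any show is allowed.
        return True
    if threshold is None:
        # Setting no threshold implements the "no TV today" policy.
        return False
    return RATINGS.index(show_rating) <= RATINGS.index(threshold)

assert may_watch("TV-G", threshold="TV-PG")        # kid-rated: fine
assert not may_watch("TV-14", threshold="TV-PG")   # ask a parent first
assert may_watch("TV-14", threshold="TV-PG", password_entered=True)
assert not may_watch("TV-Y", threshold=None)       # no TV today
```

The threshold, not a per-show blocklist, is what makes the scheme workable: one setting covers every channel, and moving it covers the special cases.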

It’s worth noting that we don’t use the federally mandated V-chip, which is built into our TV. We simply use the ratings associated with shows, and the parental controls that TiVo included voluntarily in its product. For us, the federal V-chip regulation provided, at most, the benefit of speeding standardization of the rating system. We’re happy with a semi-accurate, voluntary system that saves us time but doesn’t try to override our own judgment.

Online Porn Issue Not Going Away

Adam Thierer at Technology Liberation Front offers a long and interesting discussion of the online porn wars, in the form of a review of two articles by Jeffrey Rosen and Larry Lessig. I’ve been meaning to write about online porn regulation for a while, and Thierer’s post seems like a good excuse to address that topic now.

Recent years have seen a series of laws aimed at restricting minors’ access to porn, such as the Communications Decency Act (CDA) and the Child Online Protection Act (COPA), and those laws have been the subject of several important court decisions. These cases have driven a blip of interest in, and commentary on, online porn regulation.

The argument of Rosen’s article is captured in its title: “The End of Obscenity.”
Rosen argues that it’s only a matter of time before the very notion of obscenity – a word which here means “porn too icky to receive First Amendment protection” – is abandoned. Rosen makes a two-part argument for this proposition. First, he argues that the Miller test – the obscenity-detection rule decreed by the Supreme Court in the 1970s – is no longer tenable. Second, he argues that porn is becoming socially acceptable. Neither claim is as strong as Rosen suggests.

The Miller test says that material is obscene if it meets all three of these criteria: (1) the average person, applying contemporary community standards, would find it is designed to appeal to the prurient interest; (2) it depicts [icky sexual stuff]; and (3) taken as a whole, it lacks serious literary, artistic, scientific, or political value.

Rosen argues that the “community standards” language, which was originally intended to account for differences in standards between, say, Las Vegas and Provo, no longer makes sense now that the Internet makes the porn market international. How is an online porn purveyor to know whether he is violating community standards somewhere? The result, Rosen argues, must be that the most censorious community in the U.S. will impose its standards on everybody else.

The implication of Rosen’s argument is that, for the purposes of porn distribution, the whole Internet, or indeed the whole nation, is essentially a single community. Applying the standards of the national community would seem to solve this problem – and the rest of Rosen’s essay supports the notion that national standards are converging anyway.

The other problem with the Miller standard is that it’s hopelessly vague. This seems unavoidable with any standard that divides obscene from non-obscene material. As long as there is a legal and political consensus for drawing such a line, it will be drawn somewhere; so at best we might replace the Miller line with a slightly clearer one.

Which brings us to the second, and more provocative, part of Rosen’s essay, in which he argues that community standards are shifting to make porn acceptable, so that the very notion of obscenity is becoming a dinosaur. There is something to this argument – the market for online porn does seem to be growing – but I think Rosen goes too far. It’s one thing to say that Americans spend $10 billion annually on online porn, but it’s another thing entirely to say that a consensus is developing that all porn should be legal. For one thing, I would guess that the vast majority of that $10 billion is spent on material that is allowed under the Miller test, and the use of already-legal material does not in itself indicate a consensus for legalizing more material.

But the biggest flaw in Rosen’s argument is that the laws at issue in this debate, such as the CDA and COPA, are about restricting access to porn by children. And there’s just no way that the porn-tolerant consensus that Rosen predicts will extend to giving kids uncontrolled access to porn.

It looks like we’re stuck with more or less the current situation – limits on porn access by kids, implemented by ugly, messy law and/or technology – for the foreseeable future. What, if anything, can we do to mitigate this mess? I’ll address that question, and the Lessig essay, later in the week.

Bike Lock Fiasco

Kryptonite may stymie Superman, but apparently it’s not much of a barrier to bike thieves. Many press reports (e.g., Wired News, New York Times, Boston Globe) say that the supposedly super-strong Kryptonite bike locks can be opened by jamming the empty barrel of a Bic ballpoint pen into the lock and turning clockwise. Understandably, this news has spread like wildfire on the net, especially after someone posted a video of the Bic trick in action. A bike-store employee needed only five seconds to demonstrate the trick for the NYT reporter.

The Kryptonite company is now in a world of hurt. Not only is their reputation severely damaged, but they are on the hook for their anti-theft guarantee, which offers up to $3500 to anybody whose bike is stolen while protected by a Kryptonite lock. The company says it will offer an upgrade program for owners of the now-suspect locks.

As often happens in these sorts of stories, the triggering event was not the discovery of the Bic trick, which had apparently been known for some time among lock-picking geeks, but the diffusion of this knowledge to the general public. The likely tipping point was a mailing list message by Chris Brennan, who had his Kryptonite-protected bike stolen and shortly thereafter heard from a friend about the Bic trick.

I have no direct confirmation that people in the lock-picking community knew this before. All I have is the words of a talking head in the NYT article. [UPDATE (11 AM, Sept. 17): Chris at Mutatron points to a 1992 Usenet message describing a similar technique.] But if it is true that this information was known, then the folks at Kryptonite must have known about it too, which puts their decision to keep selling the locks, and promoting them as the safest thing around, in an even worse light, and quickens the pulses of product liability lawyers.

Whatever the facts turn out to be, this incident seems destined to be Exhibit 1 in the debate over disclosure of security flaws. So far, all we know for sure is that the market will punish Kryptonite for making security claims that turned out to be very wrong.

UPDATE (11:00 AM): The vulnerability here seems to apply to all locks that have the barrel-type lock and key used on most Kryptonite bike locks. It would also apply, for example, to the common Kensington-style laptop locks, and to the locks on some devices such as vending machines.

DRM and the Market

In light of yesterday’s entry on DRM and competition, and the ensuing comment thread, it’s interesting to look at last week’s action by TiVo and ReplayTV to limit their customers’ use of pay-per-view content that the customers have recorded.

If customers buy access to pay-per-view content, and record that content on their TiVo or ReplayTV video recorders, the recorders will now limit playback of that content to a short time period after the recording is made. It’s not clear how the recorders will recognize affected programs, but it seems likely that some kind of signal will be embedded in the programs themselves. If so, this looks a lot like a kind of broadcast-flag technology, applied, ironically, only to programs that consumers have already paid a special fee to receive.

It seems unlikely that TiVo and ReplayTV each decided independently, at more or less the same time, to adopt this technology. Perhaps there was some kind of agreement between the two companies to take this action together. This kind of agreement, between two companies that together hold most of the personal-video-recorder market, to reduce product functionality in a way that either company, acting alone, would have had a competitive disincentive to adopt, seems to raise antitrust issues.

Even so, these are not the only two entries in the market. MythTV, the open-source software replacement, is unlikely to make the same change; so this development will only make MythTV look better to consumers. Perhaps the market will push back, by giving more business to MythTV. True, MythTV is now too hard to for ordinary consumers to use. But if MythTV is as good as people say, it’s only a matter of time before somebody packages up a “MythTV system in a box” product that anybody can buy and use.