Archives for September 2004

A Roadmap for Forgers

In the recent hooha about CBS and the forged National Guard memos, one important issue has somehow been overlooked – the impact of the memo discussion on future forgery. There can be no doubt that all the talk about proportional typefaces, superscripts, and kerning will prove instructive to would-be amateur forgers, who will know not to repeat the mistakes of the CBS memos’ forger. Who knows, some amateur forgers may even figure out that if you want a document to look like it came from a 1970s Selectric typewriter, you should type it on a 1970s Selectric typewriter. The discussion, in other words, provides a kind of roadmap for would-be forgers.

This kind of tradeoff, between open discussion and future security worries, is common with information security issues – and this is an infosecurity issue, since it has to do with the authenticity of records. Any discussion of the pros and cons of a particular security system or artifact will inevitably reveal information useful to some hypothetical bad guy.

Nobody would dream of silencing the CBS memos’ critics because of this; and CBS would have been a laughingstock had it tried to shut down the discussion by asserting future forgery fears. But in more traditional infosecurity applications, one hears such arguments all the time, especially from the companies that, like CBS, face embarrassment if the facts are disclosed.

What’s true with CBS is true elsewhere in the security world. Disclosure teaches the public the truth about the situation at hand (in this case the memos), a benefit that shouldn’t be minimized. Even more important, disclosure deters future sloppiness – you can bet that CBS and others will be much more careful in the future. (You might think that the industry should police itself so that such deterrents aren’t necessary; but experience teaches otherwise.)

My sense is that it’s only the remote and mysterious nature of cybersecurity, for most people, that allows the anti-disclosure arguments to get traction. If people thought about most cybersecurity problems the way they think about the CBS memos, the debate over cybersecurity disclosure would be much healthier.

Conservative Group Takes Conservative Position on Induce Act

The American Conservative Union, an influential right-wing group, has announced its opposition to the Induce Act, and is running ads criticizing those Republicans who support the Act. This should not be surprising, for opposition to the Act is a natural position for true conservatives, who oppose government regulation of technology products and support a competitive marketplace for technology and entertainment.

One sometimes hears the claim that conservatives should support the Induce Act, because that’s what big business wants. But thoughtful conservatives support free markets, not giveaways to specific business sectors. And conservatives who understand the economy know that the Induce Act is supported by a few businesses, but opposed by many more, and that the opponents – the computer, electronics, Internet, and software industries – account for a larger and more dynamic portion of the economy than the supporters do.

The Induce Act is a nice litmus test for self-described conservative lawmakers. They can support the Act, and confirm the criticism that conservatism is just a fig-leaf for corporate welfare. Or they can oppose the Act and confirm their own claims to stand for competition and the free market.

The ACU sees this choice for what it is, and opposes the Induce Act. Let’s hope that more conservatives join them.

The Least Objectionable Content Labeling System

Today I’ll wrap up Vice Week here at Freedom to Tinker with an entry on porn labeling. On Monday I agreed with the conventional wisdom that online porn regulation is a mess. On Tuesday I wrote about what my wife and I do in our home to control underage access to inappropriate material. Today, I’ll suggest a public approach to online porn that might possibly do a little bit of good. And as Seth Finkelstein (a.k.a. Eeyore, a.k.a. The Voice of Experience) would probably say, a little bit of good is the best one can hope for on this issue. My approach is similar to one that Larry Lessig sketched in a recent piece in Wired.

My proposal is to implement a voluntary labeling scheme for Web content. It’s voluntary, because we can’t force overseas sites to comply, so we might as well just ask people politely to participate. Labeling schemes tend not to be adopted if the labels are complicated, or if the scheme requires all sites to be labeled. So I’ll propose the simplest possible labels, in a scheme where the vast majority of sites need no labels at all.

The idea is to create a label, which I’ll call “adultsonly” (Lessig calls it “porn” but I think that’s imprecise). Putting the adultsonly tag on a page indicates that the publisher requests that the page be shown only to adults. And that’s all it means. There’s no official rule about when material should be labeled, and no spectrum of labels. It’s just the publisher’s judgment as to whether the material should be shown to kids. You could label an entire page by adding to it an adultsonly meta-tag; or you could label a portion of a page by surrounding it with “adultsonly” and “/adultsonly” tags. This would be easy to implement, and it would be backward compatible since browsers ignore tags that they don’t understand. Browsers could include a kids-mode that would hide all adultsonly material.
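To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of filtering a browser’s kids-mode might do. It is only an illustration of the proposal above: the adultsonly element, its closing tag, and the meta-tag spelling used here are my assumptions about how the label might be written, not an existing standard.

```python
# Hypothetical sketch of a browser "kids-mode" for the proposed adultsonly label.
# The tag name and the meta-tag form <meta name="adultsonly"> are assumptions.
from html.parser import HTMLParser


class KidsModeFilter(HTMLParser):
    """Drops anything inside <adultsonly>...</adultsonly> and notes whether
    the page carries a whole-page adultsonly meta-tag."""

    def __init__(self):
        super().__init__()
        self.out = []            # fragments of the filtered page
        self.hide_depth = 0      # >0 while inside an adultsonly block
        self.page_is_adult = False

    def handle_starttag(self, tag, attrs):
        if tag == "adultsonly":
            self.hide_depth += 1
            return
        if tag == "meta" and ("name", "adultsonly") in attrs:
            self.page_is_adult = True   # publisher labeled the whole page
        if self.hide_depth == 0:
            self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        if tag == "adultsonly":
            self.hide_depth = max(0, self.hide_depth - 1)
            return
        if self.hide_depth == 0:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self.hide_depth == 0:
            self.out.append(data)


def filter_for_kids(page_html: str) -> str:
    """Return the page with adultsonly material removed; an empty string
    means the whole page was labeled and should be hidden from kids."""
    parser = KidsModeFilter()
    parser.feed(page_html)
    parser.close()
    return "" if parser.page_is_adult else "".join(parser.out)


# Example: only the unlabeled portion survives in kids-mode.
sample = '<p>Family fare.</p><adultsonly><p>Adults-only material.</p></adultsonly>'
print(filter_for_kids(sample))   # -> <p>Family fare.</p>
```

Because ordinary browsers ignore tags they don’t understand, unlabeled pages and older browsers would behave exactly as they do today; only a browser that opts into kids-mode would ever treat the label as meaningful.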

But where, you ask, is the incentive for web site publishers to label their racy material as adultsonly? The answer is that we create that incentive by decreeing that although material published on the open Internet is normally deemed as having been made available to kids, any material labeled as adultsonly will be deemed as having been made available only to adults. So by labeling its content, a publisher can ensure that the content’s First Amendment status is determined by the standard obscenity-for-adults test, rather than the less permissive obscenity-for-kids test. (I’m assuming that such tests will exist and their nature will be determined by immovable politico-legal forces.)

This is a labeling scheme that even a strict libertarian might be able to love. It’s simple and strictly voluntary, and it doesn’t put the government in the business of establishing fancy taxonomies of harmful content (beyond the basic test for obscenity, which is in practice unchangeable anyway). It’s more permissive of speech than the current system, at least if that speech is labeled. This is, I think, the least objectionable content labeling system possible.