Archives for January 2004

Can Ownership Be Owned?

Julian Dibbell, at TerraNova, points out an issued U.S. Patent that seems to cover digital property systems of the type used by many multiplayer online games:

How naive must one be, in this day and age, to spend months debating the question of virtual property without once wondering whether the question itself (or at any rate the phenomenon underlying it) wasn’t already somebody’s intellectual property?

Speaking only for myself, I confess the thought never crossed my mind. Not until last week, that is, when I received a friendly email from veteran game designer Ron Martinez, who alerted me to U.S. patent 6,119,229, “Virtual Property System,” filed April 1997, granted September 2000, and jointly held by Martinez, Greg Guerin, and the famous cryptographer Bruce Schneier.

As if it weren’t freaky enough that someone could own the concept of digital property, check this out: the patent arguably covers the U.S. Patent system itself, as administered by the PTO, at least with respect to patents on network technology.

Don’t believe me? Let’s read the text of Claim 1 of the patent against the U.S. Patent system. I’ll intersperse the language of the claim (in ordinary typeface) with explanations of where each element can be found in the patent system (in italics). Ready? Here goes.

What is claimed is:

1. A digital object ownership system [the U.S. patent system], comprising:

a plurality of user terminals, each of said user terminals being accessible by at least one individual user [PCs on the Internet];

at least one central computer system, said central computer system being capable of communicating with each of said user terminals [the Patent Office’s servers];

a plurality of digital objects [U.S. patents], each of said digital objects having a unique object identification code [the patent number], each of said digital objects being assigned to an owner [patents have owners], said digital objects being persistent such that each of said digital objects is accessible by a particular user both when said user’s terminal is in communications with said central computer system and also when said terminal is not in communication with said central computer system [patents still exist even when users aren’t reading them on the Net], said object having utility [the ability to bring an infringement suit] in connection with communication over a network [assuming the patent covers subject matter connected to communication over a network], said utility requiring the presence of the object identification code and proof of ownership [infringement suit requires the use of the patent number and a proof of ownership of the patent];

wherein said objects are transferable among users [patent ownership can be transferred]; and

wherein an object that is transferred is assigned to the new owner [when transferred, patent belongs to the new owner].

Yikes! Perhaps the patent system itself is prior art that would invalidate this claim, or at least narrow its scope. This is too much to contemplate on a Friday afternoon.
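The elements of Claim 1 are simple enough to fit in a few lines of code. Here is a minimal sketch of the claimed system; the class and method names are my own illustration, not language from the patent:

```python
from dataclasses import dataclass

@dataclass
class DigitalObject:
    object_id: str   # the "unique object identification code"
    owner: str       # each object is "assigned to an owner"

class OwnershipSystem:
    """Sketch of the claim's 'central computer system' tracking persistent objects."""

    def __init__(self):
        self.objects = {}

    def register(self, object_id, owner):
        self.objects[object_id] = DigitalObject(object_id, owner)

    def use(self, object_id, claimed_owner):
        # "utility requiring the presence of the object identification
        # code and proof of ownership"
        obj = self.objects.get(object_id)
        return obj is not None and obj.owner == claimed_owner

    def transfer(self, object_id, old_owner, new_owner):
        # "objects are transferable among users"; a transferred object
        # "is assigned to the new owner"
        obj = self.objects[object_id]
        if obj.owner != old_owner:
            raise PermissionError("proof of ownership failed")
        obj.owner = new_owner
```

On this reading, registering patent 6,119,229 to its inventors and later recording an assignment to a new owner would exercise every element of the claim.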

Balancing Can Be Harder Than It Looks

Reflecting on the recent argument about Howard Dean’s old smartcard speech, Larry Lessig condemns the kind of binary thinking that would divide us all into two camps, pro-privacy vs. pro-national-security. He argues that Dean’s balanced speech was (perhaps deliberately) misread by some, with the goal of putting Dean into the extreme pro-national-security/anti-privacy camp.

There is a special circle in hell reserved for those who try to destroy the middle ground on issues like this. Dean was clearly trying to take a balanced position, and it’s unfair to ignore the pro-privacy part of his speech to paint him as anti-privacy. Dean was advocating a reasonable balance.

But it’s not enough simply to want balance. You also have to figure out how to achieve it, or at least approximate it, by adjusting the available policy levers. And that can be difficult, especially if those levers are weak or hard to understand. Opting for balance is not the end of the policy process, but the beginning.

Rather than accusing politicians like Dean of wanting the worst for America, we can do much more good by helping them understand what the policy levers do and why it might not be such a good idea to pull that one they’re reaching for.

Diebold Fails Yet Another Security Evaluation

A group of ex-NSA security experts, hired by the state of Maryland to evaluate the state’s Diebold electronic voting systems, found the systems riddled with basic security flaws. This confirmed two previous studies, one led by Johns Hopkins researchers and one by SAIC. Here are some excerpts from John Schwartz’s New York Times story:

Electronic voting machines made by Diebold Inc. that are widely used in several states have such poor computer security and physical security that an election could be disrupted or even stolen by corrupt insiders or determined outsiders, according to a new report presented today to Maryland state legislators.

The authors of the report said that they had expected a higher degree of security in the design of the machines. “We were genuinely surprised at the basic level of the exploits” that allowed tampering, said Mr. Wertheimer, a former security expert for the National Security Agency.

William A. Arbaugh, an assistant professor of computer science at the University of Maryland and a member of the Red Team exercise, said, “I can say with confidence that nobody looked at the system with an eye to security who understands security.”

Read the second (on-line) page of the NYT story for a litany of problems the team found. In short, they could easily corrupt individual voting machines so that they counted votes for the wrong candidate or not at all; they could introduce false vote counts for whole precincts into the central vote-tallying server; or they could use well-known hostile exploits to seize control of the servers remotely.

Diebold’s response?

In a statement released today, Bob Urosevich, president of Diebold Election Systems, said this report and another by the Science Applications International Corporation “confirm the accuracy and security of Maryland’s voting procedures and our voting systems as they exist today.”

Mr. Urosevich added: “With that said, in our continued spirit of innovation and industry leadership, there will always be room for improvement and refinement. This is especially true in assuring the utmost security in elections.”

University of Maryland professor Bill Arbaugh, one of the study participants and a genuine security expert, gets the last word: “It seemed everywhere we scratched, there was something that’s pretty troubling.”

Dean’s Smart-Card Speech

Declan McCullagh at CNet news.com criticizes a speech given by Howard Dean about two years ago, in which Dean called for aggressive adoption of smartcard-based state driver’s licenses and smartcard readers. Declan highlights the privacy-endangering aspects of the smartcard agenda, and paints Dean as a hypocrite for pushing that agenda while positioning himself as pro-privacy.

Larry Lessig (among others) argues that Declan mischaracterized Dean’s speech, and urges people to read the text of Dean’s speech. Others have compared this incident to Declan’s infamous role in manufacturing the “Al Gore claims to have invented the Internet” meme back in 2000.

There is certainly a disconnect between the tone of Declan’s article and that of Dean’s speech. Reading the speech, we see Dean genuflecting properly, and at length, to the importance of privacy. We don’t hear about that in Declan’s article.

But Declan’s omissions aren’t the whole story. The first half of Declan’s piece quotes extensively from Dean’s speech, and it portrays accurately the technical proposal that Dean was endorsing. Declan’s reaction to that technical agenda is not unreasonable. For example, a National Academy report on national ID technologies took a position closer to Declan’s than to Dean’s.

The fact is that there is a deep disconnect between the different sections of Dean’s speech. It’s hard to reconcile the privacy-is-paramount part of the speech with the smartcards-everywhere part. At least, it’s hard to reconcile them if you really understand the technology. Dean makes a compelling argument that computer security is important, and he makes an equally compelling argument in favor of preserving privacy. But how can we have both? Enter the smartcard as deus ex machina. It sounds good, but unfortunately it’s not a technically sound argument.

Now, nobody expects state governors to understand technology well enough to spot the technical flaws in Dean’s speech. Probably, nobody advising Dean at the time had the knowledge to notice the problem. That’s not good; but it hardly makes Dean unique.

At bottom, what we have here is a mistake by Dean, in deciding to give a speech recommending specific technical steps whose consequences he didn’t fully understand. That’s not good. But on the scale of campaign gaffes, this one seems pretty minor.

[Disclaimer: My longstanding policy is to avoid partisan politics on this blog. I’m commenting on this issue because of my expertise in computer security, and not to make a political point or to urge anyone to vote for or against Dean.]

Was the Senate File Pilfering Criminal?

Some people have argued that the Senate file pilfering could not have violated the law, because the files were reportedly on a shared network drive that was not password-protected. (See, for instance, Jack Shafer’s Slate article.) Assuming those facts, were the accesses unlawful?

Here’s the relevant wording from the Computer Fraud and Abuse Act (18 U.S.C. 1030):

Whoever … intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains … information from any department or agency of the United States … shall be punished as provided in subsection (c) …

[T]he term “exceeds authorized access” means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter

To my non-lawyer’s eye, this looks like a judgment call. It seems not to matter that the files were on a shared server or that the staffers may have been entitled to access other files on that server.

The key issue is whether the staffers were “entitled to” access the particular files in question. And this issue, to me at least, doesn’t look clear-cut. The fact that it was easy to access the files isn’t dispositive – “entitled to access” is not the same as “able to access”. (An “able to access” exception would render the provision vacuous – a violation would require someone to access information that they are unable to access.)
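The distinction is the same one any access-control system draws between a technical check and a policy check. A hypothetical sketch (all names and data mine, purely illustrative):

```python
# A shared drive with no password means everyone is *able* to read a
# file; who is *entitled* to read it is a separate, policy question.

SHARED_DRIVE = {"memo.doc": b"strategy notes"}   # no password: anyone can read
ENTITLEMENTS = {"memo.doc": {"alice"}}           # policy: only alice may

def able_to_access(user, filename):
    # True for any user -- the drive imposes no technical barrier.
    return filename in SHARED_DRIVE

def entitled_to_access(user, filename):
    # Depends on the entitlement list, not on technical reachability.
    return user in ENTITLEMENTS.get(filename, set())
```

Here "bob" is able to read the memo but not entitled to, which is arguably the situation the "exceeds authorized access" language targets; an exception for anyone merely able to access would make the provision vacuous.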

The lack of password protection cuts in favor of an entitlement to access, if failure to protect the files is taken to indicate a decision not to protect them, or at least an indifference to whether they were protected. But if the perpetrators knew that the failure to use password protection was a mistake, that would cut against entitlement. The rules and practices of the Senate seem relevant too, but I don’t know much about them.

The bottom line is that unsupported claims that the accesses were obviously lawful, or obviously unlawful, should be taken with a large grain of salt. I’d love to hear the opinion of a lawyer experienced with the CFAA.

(Disclaimer: This post is only about whether the accesses were lawful. Even if lawful, they appear unethical.)