Archives for October 2008

Kentucky vs. 141 Domain Names

Yes, that is the title of a real, current legal case and controversy.

(And, no, the links in this post are not spam… mostly gambling news sites seem to be reporting on this.)

The Governor of Kentucky, through his Justice and Public Safety Cabinet, has moved in court to have 141 gambling-related domain names transferred to the Kentucky state government, partially because other legal gambling operations in Kentucky, like horse racing, lose revenue to online gaming. Yes, you read that right: because the sites allegedly violate KY law, the state can move to have property used in these allegedly unlawful acts transferred to the state. In this case, the “property” in question is the domain names themselves.

This case is definitely novel in the realm of cyberlaw, but it is also a bit controversial for how it originally proceeded. At first, the state met with the judge in a unilateral (ex parte) hearing where the judge granted a seizure order directing the registrars of each domain name to transfer the domain name to the state of Kentucky (a few registrars transferred the domain names immediately upon receiving the order). The judge also then established a date for a forfeiture hearing (think of it as a last-chance opportunity for affected parties to appear and dispute the seizure of their property). A phalanx of attorneys for various gambling outfits (presumably, see below) as well as industry and players associations showed up to that forfeiture hearing. The judge decided to accept briefing on the various issues presented; his order was due on Wednesday but was delayed until yesterday due to a computer glitch.

Judge Wingate’s order was handed down on Thursday. There’s so much interesting stuff in this case that it perhaps deserves a few more posts; for now, I’d like to highlight a few things:

  • Identifying parties — For obvious reasons related to gambling being illegal in many parts of the United States, many of the 141 Domain Names defendants don’t want to be identified. However, to have standing — that is, to be able to present a legal argument as a direct party to a case — one needs to have an attorney and be identified as one of the named defendants (otherwise, anyone at all could argue the case).
  • Domain names as property — Are domain names more like an address or phone number, or are they more like a piece of physical property? Here the judge relies on a case from the 9th Circuit, Kremen v. Cohen, 337 F.3d 1024 (9th Cir. 2003), where Judge Kozinski had to decide if a domain name was property that could be stolen under California law. That case established an “attributes test” for intangible property that asks 1) is there an interest capable of precise definition? 2) is it capable of exclusive possession or control? and 3) can the purported owner establish a legitimate claim to exclusivity? Applying this test (and some additional muddled reasoning), Judge Wingate found that domain names are indeed intangible property.
  • Devices and chance — The state maintains, and presented expert testimony to this effect, that domain names are a “device or transport device allowing Kentuckians to engage in internet gambling.” In my opinion, this is where Judge Wingate goes a bit off the deep end. Kentucky law defines a “gambling device” (KRS 528.010(4)(a) and (b)) as a tangible device manufactured and designed specifically for gambling. Wingate compares domain names to “virtual keys” for “virtual casinos” and finds that reading the law literally is not appropriate here; rather, Kentucky courts have to uphold the intent of the law. And how much virtual intent can we read into Kentucky law? I would further quibble with Wingate’s assertion that these particular domain names have been designed to attract players; most of the successful gambling sites in the list of 141 seem to have more branding value in their domain names than cachet due to clever word choice.

    Also, under KY law, games of chance are explicitly illegal while games of skill are not. The Poker Player’s Alliance, a group that represents poker players and enthusiasts, argued in an amicus brief that the poker-related subset of the 141 Domain Names should not be subject to forfeiture because poker, as a game of skill, is not illegal under KY law. Wingate seems on more solid ground with the chance element raised by the Poker Player’s Alliance: the relevant part of KY law (KRS 528.010(3)) defines chance as only one element of what constitutes “gambling,” with risking something of value and the opportunity of winning something of value as the other elements.

What’s the upshot of all of this? To me, it’s pretty scary: A state government moved to order seizure of domain names that it found were illegal “devices,” and a judge issued an order demanding the transfer of these domain names before any hearing or opportunity to protest. The state has so far successfully argued that domain names are property and devices used for illegal gambling within Kentucky, and that the 141 Domain Names defendants must identify themselves to have standing to contest the seizure and forfeiture. The other shoe dropped yesterday: Judge Wingate, as part of his order, directed the state to rescind any forfeiture for gambling sites that block Kentucky gamers using geographical blocking methods (the wording was, essentially: Defendants who install a “software or device […] which has the capability to block and deny access to [the defendant’s] online gambling sites […] from any users or consumers within the […] Commonwealth [of Kentucky] and reasonably establishes to the [state] or this Court that such geographical blocks are operational, shall be relieved from the effects of the Seizure Order and from any further proceedings [in this action.]”).
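For concreteness, here is a minimal sketch (in Python) of what such IP-based geographical blocking might look like. The lookup table and region codes are hypothetical stand-ins for a real geolocation database; no actual operator works from a hard-coded list.

```python
# Toy IP-based geoblocking sketch. A real operator would query a commercial
# geolocation database; this hard-coded table is a hypothetical stand-in.
REGION_BY_PREFIX = {
    "192.0.2.": "US-KY",      # documentation address range, used as an example
    "198.51.100.": "US-CA",
}

BLOCKED_REGIONS = {"US-KY"}   # the Commonwealth of Kentucky

def lookup_region(ip):
    for prefix, region in REGION_BY_PREFIX.items():
        if ip.startswith(prefix):
            return region
    return None                # unknown location

def allow_access(ip):
    region = lookup_region(ip)
    # Fail closed: the order requires operators to "reasonably establish"
    # that the blocks are operational, so unknown locations are refused too.
    return region is not None and region not in BLOCKED_REGIONS

print(allow_access("192.0.2.17"))    # False- Kentucky is blocked
print(allow_access("198.51.100.5"))  # True
```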

What is to stop other local governments from mandating blacklisting of geographical user bases (despite the plain futility of this protection measure)? What’s to stop an authoritarian state from seizing the domain name of a dissident group? I don’t see a good solution.

Finally, the only general amicus brief submitted was from the Internet Commerce Association representing domain name registrars. Where are the public interest voices in this? Where are my friends from the Electronic Frontier Foundation?

Report on the Sequoia AVC Advantage

Today I am releasing an in-depth study of the Sequoia AVC Advantage direct-recording electronic (DRE) voting machine, available at citp.princeton.edu/voting/advantage. I led a team of six computer scientists in a monthlong examination of the source code and hardware of these voting computers, which are used in New Jersey, Pennsylvania, and other states.

The Rutgers Law School Constitutional Litigation Clinic filed a lawsuit seeking to decommission all of New Jersey’s voting computers, and asked me to serve as an expert witness. This year the Court ordered the State of New Jersey and Sequoia Voting Systems to provide voting machines and their source code for me to examine. By Court Order, I can release the report no sooner than October 17th, 2008.

Accompanying the report are a video and a FAQ.

Executive Summary

I. The AVC Advantage 9.00 is easily “hacked” by the installation of fraudulent firmware. This is done by prying just one ROM chip from its socket and pushing a new one in, or by replacing the Z80 processor chip. We have demonstrated that this “hack” takes just 7 minutes to perform.

The fraudulent firmware can steal votes during an election, just as its criminal designer programs it to do. The fraud cannot practically be detected. There is no paper audit trail on this machine; all electronic records of the votes are under control of the firmware, which can manipulate them all simultaneously.

II. Without even touching a single AVC Advantage, an attacker can install fraudulent firmware into many AVC Advantage machines by viral propagation through audio-ballot cartridges. The virus can steal the votes of blind voters, can cause AVC Advantages in targeted precincts to fail to operate, or can cause WinEDS software to tally votes inaccurately. (WinEDS is the program, sold by Sequoia, that each County’s Board of Elections uses to add up votes from all the different precincts.)

III. Design flaws in the user interface of the AVC Advantage disenfranchise voters, or violate voter privacy, by causing votes not to be counted, and by allowing pollworkers to commit fraud.

IV. AVC Advantage Results Cartridges can be easily manipulated to change votes, after the polls are closed but before results from different precincts are cumulated together.

V. Sequoia’s sloppy software practices can lead to error and insecurity. Wyle’s Independent Testing Authority (ITA) reports are not rigorous, and are inadequate to detect security vulnerabilities. Programming errors that slip through these processes can miscount votes and permit fraud.

VI. Anomalies noticed by County Clerks in the New Jersey 2008 Presidential Primary were caused by two different programming errors on the part of Sequoia, and had the effect of disenfranchising voters.

VII. The AVC Advantage has been produced in many versions. The fact that one version may have been examined for certification does not give grounds for confidence in the security and accuracy of a different version. New Jersey should not use any version of the AVC Advantage that it has not actually examined with the assistance of skilled computer-security experts.

VIII. The AVC Advantage is too insecure to use in New Jersey. New Jersey should immediately implement the 2005 law passed by the Legislature, requiring an individual voter-verified record of each vote cast, by adopting precinct-count optical-scan voting equipment.

Hot Custom Car (software?)

I’ve found Tim’s bits on life post-driving interesting. I’ve sometimes got a one-track mind, though- so what I really want to know is if I’ll be able to hack on that self-driving car. I mentioned this to Tim, and he said he wasn’t sure either- so here is my crack at it.

We’re not very good at making choices like this. Historically, liability constrained software development at large institutions (the airlines had a lot of reasons not to let people hack on their airplanes) and benign neglect was sufficient to regulate hacking of personal software- if you hacked your PC or toaster, no one cared because it had no impact (a form of Lessig’s regulation by architecture). The net result was that we didn’t need to regulate software very much, we got lots of innovation from individual developers, and we stayed bad at making choices like ‘how should we regulate people’s ability to hack?’

Individuals are now beginning to own hackable devices that can also harm the neighbors, though, so the space in between large institution and isolated hacker is filling up. For example, the FCC regulates your ability to modify your own wireless devices, so that you can’t interfere with other people’s spectrum. And some of Prof. Jonathan Zittrain’s analysis suggests that we might even want to regulate PCs, since they can now frequently be vectors for spam and viruses. Tim and I are normally fairly anti-regulation, and pro-open source, but even we are aware that cars running around all over the place driven by potentially untested code might also fit in this gap- and be more worthy of regulation.

So what should happen? Should we be able to hack our cars (more than we already do), and if so, under what conditions?

It’d help if we could better measure the risks and benefits involved. Unfortunately, probably because we regulate software so rarely, our metrics for assessing the risks and benefits of software development aren’t very good. One such metric is Prof. Zittrain’s ‘generativity’; Dan Wallach’s proposal to measure the ‘O(n)’ of potential system damage is another. Neither is a perfect fit here, but that only confirms that we need more such tools in our software policy toolkit.

This lack of tools shouldn’t stop us from some basic, common-sense analysis, though. On the pro side, the standard arguments for open source apply, though perhaps not as strongly as usual, since many casual hackers might be discouraged at the thought of hacking their own car. We probably would want car manufacturers to pool their safety expertise, which would be facilitated by openness. Finally, we might also want open code for auditing reasons- with millions of lives on the line, this seems like a textbook case for wanting ‘many eyes’ to take a look at the code.

If we accept these arguments on the ‘pro’ hacking side, what then? First, we could require that the car manufacturers use test-driven development, and share those tests with the public- perhaps even allowing the public to add new tests. This would help avoid serious safety problems in the ‘original’ code, and home hackers might be blocked from loading new code into their cars unless the code was certified to have passed the tests. Second, we could treat the consequences very seriously- ‘driving’ with bad code could be treated similarly to DUI. Third, we could make sure that the safety fallbacks (emergency brake systems, etc.) are in separate, redundant (and perhaps only mechanical?) unhackable systems. Having such systems is going to be good engineering whether the code is open or not, and making them unhackable might be a good compromise. (Serious engineers, instead of compsci BAs now in law school, should feel free to suggest other techniques in the comments.)
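As a thought experiment, here is a toy sketch (in Python) of that certification gate. Every name in it- the public test harness, the hash registry, the flashing routine- is hypothetical; the point is just the shape of the policy: the car refuses any code that is neither pre-certified nor passing the shared tests.

```python
import hashlib
import subprocess

# Hypothetical registry of firmware hashes already certified against the
# shared, public test suite (assume some trusted party maintains this list).
CERTIFIED_HASHES = {
    "3f786850e387550fdab836ed7e6dc881de23001b",
}

def firmware_hash(path):
    """SHA-1 fingerprint of a candidate firmware image."""
    with open(path, "rb") as f:
        return hashlib.sha1(f.read()).hexdigest()

def passes_public_tests(path):
    """Run the public test suite; 'run_car_tests' is a hypothetical harness
    that manufacturers would publish alongside the tests themselves."""
    try:
        return subprocess.run(["run_car_tests", path]).returncode == 0
    except FileNotFoundError:
        return False   # no harness installed: fail closed

def flash(path):
    print("flashing", path)   # stand-in for the real low-level installer

def load_firmware(path):
    # Refuse anything that is neither pre-certified nor passing the tests.
    if firmware_hash(path) in CERTIFIED_HASHES or passes_public_tests(path):
        flash(path)
    else:
        raise PermissionError("uncertified firmware refused")
```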

Bottom line? First, we don’t really know- we just have pretty poor analytical tools for this type of problem. But if we take a stab at it, we can see that there are some potential solutions that might be able to harness the innovation and generativity of open source in our cars without significantly compromising our safety. At least, not compromising it any more than the already crazy core idea 🙂

[picture is ‘Car Show 2‘, by starmist1, used under the CC-BY license.]

Life after Driving

I’m working on a three-part series on self-driving automobile technology for Ars Technica. In part one I covered the state of existing self-driving technology and highlighted the dramatic progress that has been made in recent years. In part two, I assume that the remaining technical hurdles can be surmounted and examine what the world might look like when self-driving cars become ubiquitous. The potential benefits are enormous: autonomous vehicles could save thousands of lives, billions of person-hours, and billions of dollars of energy costs.

The article has sparked interesting discussion around the blogosphere. Matt Yglesias has a long-standing interest in urban planning issues, so he did a post about the urban planning implications of self-driving technologies. I argue that by making taxis cheaper, self-driving cars would shift a lot of people from owning cars to renting them. And that, in turn, would dramatically reduce demand for parking lots, which would allow more pleasant, high-density cities. It’s hard to overstate the extent to which the need for parking exacerbates sprawl and congestion problems. Parking lots consume vast amounts of land in suburban areas. This, in turn, means that stuff is farther apart, which forces people to rely even more on their cars to get from place to place.

Matt’s post prompted a number of interesting responses. Ryan Avent chimed in with some thoughts about how self-driving technologies would make urban living more attractive. On the other hand, Tom Lee offers a counterpoint: making car travel cheaper and more convenient will, on the margin, cause people to drive (or “ride,” anyway) more. This is a good point, and it’s not clear how these factors would balance out. But even if Tom is right, this wouldn’t be an entirely bad thing. Increased mobility is a virtue in its own right.

I think Atrios and Kevin Drum are on less firm ground when they argue that this technology is so far in the future that it’s not worth thinking about. Drum compares self-driving technologies to cold fusion and human-level AI, while Atrios compares them to flying cars and jet packs. I can only assume they didn’t read the first installment of my series, in which I discuss the state of the technology in some detail. The basic technology for self-driving is already here. There are cars in university laboratories that can navigate for hundreds of miles without human supervision, and can interact safely with other cars on urban streets. Of course, there’s still a lot of work to do to enable these vehicles to safely handle the multiplicity of obstacles they would encounter in real urban environments. And after that the technology will need to be made reliable and affordable enough for commercial use. But these problems are nowhere close to the difficulty of human-level AI. Your car doesn’t have to understand why you want to go to the store in order to find a safe path from here to there. If you’re skeptical that this technology can be made to work, I encourage you to read my first article and watch PBS’s excellent documentary on the 2005 DARPA Grand Challenge. There’s a lot of uncertainty about how long it will take for this technology to become mature enough to let loose on our streets, but I think it’s pretty clearly a matter of “when,” not “if.”

Cloud(s), Hype, and Freedom

Richard Stallman’s recent description of ‘the cloud’ as ‘hype’ and a ‘trap’ seems to have stirred up a lot of commentary, but not a lot of clear discussion of the problems Stallman raised. This isn’t surprising- the term ‘the cloud’ has always been vague. (It was hard to resist saying ‘cloudy.’ 😉) When people say ‘the cloud’ they are really lumping at least four ‘cloud types’ together.

traditional applications, hosted elsewhere

Probably the most common type of ‘cloud’ is a service that takes traditional software functionality and moves it to remotely hosted, (typically) web-delivered servers. Gmail and salesforce.com are like this- fairly traditional email and CRM applications, ‘just’ moved to the web.

If Stallman’s ‘hype’ claim is valid anywhere, it is here. Administration and maintenance costs are definitely lower when an expert like Google funds and runs the server, and reliability may improve as well. But the core functionality of these apps, and the ability to access data over a network, have been present since the dawn of networked computing. On average, this is undoubtedly a significant change in quality, but only rarely a change in type- making the buzz much harder to justify.

Stallman’s ‘trap’ charge is more complex. Computer users have long compromised on personal control by storing data remotely but accessing it via standardized protocols. This introduced risks- you had to trust the data host and couldn’t tinker with the server- but kept some controls- you could switch clients, and typically you could export the data. Some web apps still strike that balance- for example, most gmail features are accessible via good old POP and IMAP. But others don’t.
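To make that balance concrete, here is a minimal sketch of pulling your own mail out of gmail over plain IMAP, using Python’s standard imaplib (the credentials are placeholders, and IMAP access has to be enabled on the account). The point is that a standard protocol keeps the data portable even though the server is out of your control:

```python
import imaplib

# Connect to gmail's IMAP endpoint and read back your own mail.
conn = imaplib.IMAP4_SSL("imap.gmail.com")
conn.login("you@gmail.com", "your-password")   # placeholder credentials
conn.select("INBOX", readonly=True)

status, data = conn.search(None, "ALL")
for num in data[0].split()[-5:]:               # the five most recent messages
    status, msg = conn.fetch(num, "(RFC822)")
    print(msg[0][1][:200])                     # first bytes of each raw message

conn.logout()
```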

Getting your data out of a service like salesforce can be a ‘hidden cost’ of an apparently free service, and even with a relatively standards-based service like gmail you have no freedom to make changes to the server. These risks are what Stallman means when he talks about a ‘trap’, and regardless of your conclusion about them, understanding them is important.

services involving data that can’t (yet) be managed locally

Google Maps and Google Search are the canonical examples of this type of cloud service- heaps of data so large that you would need a large data center to host your own copy and a very, very fat pipe to keep it up-to-date.

Hype-wise, these are a mixed bag. These services definitely bring radical new functionality that couldn’t traditionally exist- I can’t store all of Google Maps on my phone. That hype is justified. At the same time, our personal ability to store and process data is still growing quickly, so claims that this type of cloud service will always ‘require’ remote servers may be overblown.

‘Trap’-wise? Dependence on these services reminds me of ‘dependence’ on a library before the internet- you can work to make sure your library respects your privacy, prefer public libraries to private ones, or establish a personal library if your reading interests are narrow, but in the end eschewing large libraries is likely to be a case of cutting off your nose to spite your face. We’re in the same state with this type of cloud service. You can avoid them, but those concerned with freedom might be better off understanding and fixing them than condemning them altogether.

services that make creation of new data technically or economically feasible

Facebook and Wikipedia are the canonical examples here. Unlike the first two types of cloud, where data was available but inconvenient before it ended up in the cloud, this class of cloud applications creates information that wasn’t previously feasible to collect at all.

There may well not be enough hype around this type of cloud. Replicating web-scale collaborative facilities like these will be very difficult to do in a p2p fashion, and the impact of the creation of new information (even when it is as mundane as Facebook’s data often is) is hard to overstate.

Like the previous type of cloud, it is hard to call these a trap per se- they do make it hard to leave, but they do so by providing new functionality that is very hard to get with any traditional software model.

services offering computing and storage, rather than data

The most recent type of cloud service is remotely provisioned computing and storage, like Amazon’s EC2/S3 and Google’s App Engine. This is perhaps the most purely generative type of cloud, allowing individuals to create new services and scale them out to serve millions of people without having to invest in their own physical infrastructure. It is hard to see any way in which this can reasonably be called ‘hype,’ given that it gives individuals and small or transient groups a reach that might otherwise cost them many thousands of dollars.

From a freedom perspective, these can be both the best and worst of the cloud types. On the plus side, these services can be incredibly transparent- developers who use them directly have access to their own source code, and end users may not know they are using them at all. On the down side, especially for proprietary platforms like App Engine, these can have very deep lock-in- it is complicated, expensive, and risky to switch deployment platforms after achieving success. And they replace traditional, very open platforms- a tradeoff that isn’t always appreciated.
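For a sense of how low the barrier is, here is a minimal sketch of provisioning storage on S3, assuming the boto library and AWS credentials configured in your environment- a few lines stand in for what used to require racking your own servers:

```python
import boto  # third-party library for talking to Amazon Web Services

# Create a bucket and store an object in it. Bucket names are global, so
# 'example-generative-app' is just an example and must be unique in practice.
conn = boto.connect_s3()  # reads AWS credentials from the environment
bucket = conn.create_bucket("example-generative-app")

key = bucket.new_key("greeting.txt")
key.set_contents_from_string("Hello from the cloud")
print(key.get_contents_as_string())
```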

takeaways

‘The cloud’ isn’t going away, but hopefully we can clarify our thinking about it by talking about the different types of clouds. Hopefully this post is a useful step in that direction.

[This post is an extension of some ideas I’ve been playing around with on my own blog and at the autonomo.us group blog; readers curious about these issues may want to read further in those places. I also recommend reading this piece, which set me on the (very long) road to this particular post.]