Archives for March 2004

New Survey of Spam Trends

The Pew Internet & American Life Project has released results of a new survey of experiences with email spam.

The report’s headline is “The CAN-SPAM Act Has Not Helped Most Email Users So Far”, and the press articles I have seen so far follow that interpretation. But the data don’t actually support it. Taken at face value, the data show that the amount of spam has not changed since January 1, when the CAN-SPAM Act took effect.

If true, this is actually good news, since the amount of spam had been increasing previously; for example, according to Brightmail, spam had grown from 7% of all email in April 2001 to 50% in September 2003. If the CAN-SPAM Act put the brakes on that increase, it has been very effective indeed.

Of course, the survey demonstrates only correlation, not causality. The level of spam may be steady, but there is nothing in the survey to suggest that CAN-SPAM is the reason.

An alternative explanation is hiding in the survey results: fewer people may be buying spammers’ products. Five percent of users reported having bought a product or service advertised in spam. That’s down from seven percent in June 2003. Nine percent reported having responded to a spam and later discovered it was phony or fraudulent; that’s down from twelve percent in June 2003.

And note that the survey asked whether the respondent had ever responded to a spam, so the decrease in recent response rates must be even more dramatic. To understand why, imagine a group of 200 people who responded to the latest survey. Suppose that 100 of them are Recent Adopters, having started using the Internet since June 2003, and that the other 100 are Longtime Users who went online before June 2003. According to the previous survey, seven of the Longtime Users (i.e., 7%) bought from a spammer before June 2003; and according to the latest survey, only ten of our overall group of 200 users (i.e., 5%) have ever bought from a spammer. It follows that only three of the other 193 hypothetical users bought from a spammer since June 2003, so spammers are finding many fewer new buyers than before.
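To make the arithmetic concrete, here is a short Python sketch of the same back-of-the-envelope calculation. The group sizes are the hypothetical round numbers from the paragraph above, not actual survey data:

```python
# Back-of-the-envelope check of the response-rate arithmetic above.
# Assumption (hypothetical, as in the text): 200 respondents, half of
# whom were already online before June 2003.

total = 200
longtime = 100                  # online before June 2003

bought_rate_2003 = 0.07         # June 2003 survey: 7% had bought from a spammer
ever_bought_rate_2004 = 0.05    # latest survey: 5% have ever bought

bought_before = round(longtime * bought_rate_2003)    # 7 people
ever_bought = round(total * ever_bought_rate_2004)    # 10 people

new_buyers = ever_bought - bought_before              # 3 people
not_yet_buyers = total - bought_before                # 193 people

print(f"New buyers since June 2003: {new_buyers} of {not_yet_buyers}")
print(f"Implied recent purchase rate: {new_buyers / not_yet_buyers:.1%}")
# -> roughly 1.6%, far below the 7% lifetime rate seen in June 2003
```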

A caveat is in order here. The survey’s margin of error is three percent, so we can’t be certain there’s a real trend here. But still, it’s much more likely than not that the number of responders really has decreased.

ATM Crashes to Windows Desktop

Yesterday, an ATM in Baker Hall at Carnegie Mellon University crashed, or had some kind of software error, and ended up displaying the Windows XP desktop. Some students started Windows Media Player on it, playing a song that comes preinstalled on Windows XP machines. Students took photos and movies of this.

There’s no way to tell whether the students, starting with the Windows desktop, would have been able to make the ATM dispense its stock of cash. As my colleague Andrew Appel observes, it’s possible to design an ATM in a way that prevents it from dispensing cash without the knowledge and participation of a computer back at the bank. For example, the cash-dispensing hardware could require some cryptographic message from the bank’s computer before doing anything. Then again, it’s also possible to design a Windows-based ATM that never (or almost never) displays the Windows desktop, failing instead into a “technical difficulties – please call customer service” screen; the designers apparently didn’t adopt that precaution.
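As a minimal sketch of the design Appel describes, here is one way a dispenser-side check might look, assuming a shared-key HMAC scheme. The message format, function names, and key handling are invented for illustration, not drawn from any real ATM:

```python
import hmac, hashlib, os

# Hypothetical illustration: the dispensing hardware refuses to pay out
# unless the bank's computer has authorized this exact transaction with
# a message authentication code under a shared key.

SHARED_KEY = os.urandom(32)  # in practice, provisioned in tamper-resistant hardware

def bank_authorize(txn_id: str, amount_cents: int) -> bytes:
    """Runs at the bank: MAC an approval for one specific transaction."""
    msg = f"{txn_id}:{amount_cents}".encode()
    return hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()

def dispenser_release(txn_id: str, amount_cents: int, auth: bytes) -> bool:
    """Runs in the cash dispenser: pay out only if the MAC verifies."""
    msg = f"{txn_id}:{amount_cents}".encode()
    expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(auth, expected)

auth = bank_authorize("txn-0001", 20_00)
assert dispenser_release("txn-0001", 20_00, auth)       # bank-approved: pays out
assert not dispenser_release("txn-0002", 500_00, auth)  # forged/replayed: refused
```

A real deployment would also need replay protection, per-transaction nonces, and keys kept out of the front-end PC entirely; the point is only that a crashed Windows front end, by itself, would not be enough to make the cash drawer open.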

A single, isolated failure like this isn’t, in itself, a big deal. Every ATM transaction is recorded and audited. Banks have the power to adopt loss-prevention technology; they have good historical data on error rates and losses; and they absorb the cost of both losses and loss-prevention technology. So it seems safe to assume that they are managing these kinds of risks rationally.

Good News: Election Error Found in California

From Kim Zetter at wired.com comes the story of the recent Napa County, California election. Napa County uses paper ballots that are marked by the voter with a pen or pencil and counted by an optical-scan machine.

Due to a miscalibrated scanner, some valid votes went uncounted, as the scanner failed to detect the markings on some ballots. The problem was discovered during a random recount of one percent of precincts. The ballots are now being recounted with properly calibrated scanners, and the recount might well affect the election’s result.

Although a mistake was made in configuring the one scanner, the good news is that the system was robust enough to catch the problem. The main source of this robustness lies in the paper record, which could be manually examined to determine whether there was a problem, and could be recounted later when a problem was found. Another important factor was the random one percent recount, which brought the problem to light.
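To see why even a small random recount is effective against this kind of systematic error, consider the probability that a random sample of precincts includes at least one affected precinct. The numbers in this sketch are invented for illustration, not Napa County’s actual figures:

```python
from math import comb

# Hypothetical illustration: chance that a random 1% recount sample
# (10 of 1000 precincts) hits at least one affected precinct.

def detection_probability(total_precincts: int, affected: int, sampled: int) -> float:
    """P(at least one affected precinct lands in the recount sample)."""
    miss = comb(total_precincts - affected, sampled) / comb(total_precincts, sampled)
    return 1 - miss

for affected in (1, 50, 200):
    p = detection_probability(total_precincts=1000, affected=affected, sampled=10)
    print(f"{affected:3d} affected precincts -> {p:.0%} chance the recount notices")
```

An error confined to a single precinct would usually slip through, but a systematic problem like a miscalibrated scanner, which touches ballots across many precincts, is very likely to surface even in a small sample.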

Our biggest fear in designing election technology should not be that we’ll make a mistake, but that we’ll make a mistake and fail to notice it. Paper records and random recounts help us notice mistakes and recover from them. Paperless e-voting systems don’t.

Did I mention that the Holt e-voting bill, H.R. 2239, requires paper trails and random recounts?

[Link via Peter Neumann’s RISKS Forum.]

Solum’s Response on .mobile

Larry Solum, at Legal Theory Blog, responds to my .mobile post from yesterday. He also points to a recently published paper he co-authored with Karl Mannheim. The paper looks really interesting.

Solum’s argument is essentially that creating .mobile would be an experiment, and that the experiment won’t hurt anybody. If nobody adopts .mobile, the experiment will have no effect at all. And if some people like .mobile and some don’t, those who like it will benefit and the others won’t be harmed. So why not try the experiment? (Karl-Friedrich Lenz made a similar comment.)

The Mannheim/Solum paper argues that ICANN should let a thousand gTLDs bloom, and should use auctions to allocate the new gTLDs. (gTLDs are Generic Top-Level Domains such as .com, .org, or .net.) The paper argues persuasively for this policy.

If ICANN were following the Mannheim/Solum policy, or some approximation to it, I would agree with Solum’s argument and would be happy to see the .mobile experiment proceed. (But I would still bet on its failure.) No evidence for its viability would be needed, beyond the sponsors’ willingness to outbid others for the rights to that gTLD.

But today’s ICANN policy is to authorize very few gTLDs, and to allocate them administratively. In the context of today’s policy, and knowing that the creation of one new gTLD will be used to argue against the creation of others, I think a strong case needs to be made for any new gTLD. The proponents of .mobile have not made such a case. Certainly, they have not offered a convincing argument that theirs is the best way to allocate a new gTLD, or even that theirs is the best way to allocate the name .mobile.

Why We Don’t Need .mobile

A group of companies is proposing the creation of a new Internet top level domain called “.mobile”, with rules that require sites in .mobile to be optimized for viewing on small-display devices like mobile phones.

This seems like a bad idea. A better approach is to let website authors create mobile-specific versions of their sites, but serve out those versions from ordinary .com addresses. A mobile version of weather.com, for example, would be served out from the weather.com address. The protocol used to fetch webpages, HTTP, already tells the server (in request headers such as User-Agent) what kind of device the content will be displayed on, so the server could easily send different versions of a page to different devices. This lets every site have a single URL, rather than having to promote separate URLs for separate purposes; and it lets any page link to any other page with a single hyperlink, rather than an awkward “click here on mobile phones, or here on other devices” construction.
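As a minimal sketch of that approach, a server can inspect the User-Agent header and pick a page variant while keeping a single URL. The marker substrings and page bodies here are invented for illustration; a real site would use a proper device database:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sketch of per-device content at one URL: the server looks
# at the User-Agent request header and serves a small-screen variant to
# mobile devices, so no separate ".mobile" address is needed.

MOBILE_MARKERS = ("Mobile", "PalmOS", "Symbian")  # illustrative substrings only

FULL_PAGE = b"<html><body><h1>Full desktop page</h1></body></html>"
MOBILE_PAGE = b"<html><body><p>Small-screen page</p></body></html>"

class DeviceAwareHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        agent = self.headers.get("User-Agent", "")
        body = MOBILE_PAGE if any(m in agent for m in MOBILE_MARKERS) else FULL_PAGE
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Vary", "User-Agent")  # tell caches the reply depends on the device
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), DeviceAwareHandler).serve_forever()
```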

The .mobile proposal looks like a textbook example of Lessig’s point about how changing the architecture of the net can increase its regulability. .mobile would be a regulated space, in the sense that somebody would make rules controlling how sites in .mobile work. And this, I suspect, is the real purpose of .mobile – to give one group control over how mobile web technology develops. We’re better off without that control, letting the technology develop on its own over in the less regulated .com.

We already have a regulated subdomain, .kids.us, and that hasn’t worked out too well. Sites in .kids.us have to obey certain rules to keep them child-safe; but hardly any sites have joined .kids.us. Instead, child-safe sites have developed in .com and .org, and parents who want to limit what their kids see on the net just limit their kids to those sites.

If implemented, .mobile will probably suffer the same fate. Sites will choose not to pay extra for the privilege of being regulated. Instead, they’ll stay in .com and focus on improving their product.