November 27, 2024

Why We Don't Need .mobile

A group of companies is proposing the creation of a new Internet top-level domain called “.mobile”, with rules that require sites in .mobile to be optimized for viewing on small-display devices like mobile phones.

This seems like a bad idea. A better approach is to let website authors create mobile-specific versions of their sites, but serve out those versions from ordinary .com addresses. A mobile version of weather.com, for example, would be served out from the weather.com address. The protocol used to fetch webpages, HTTP, already tells the server what kind of device the content will be displayed on (via the User-Agent header), so the server could easily send different versions of a page to different devices. This lets every site have a single URL, rather than having to promote separate URLs for separate purposes; and it lets any page link to any other page with a single hyperlink, rather than an awkward “click here on mobile phones, or here on other devices” construction.
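The single-URL approach can be sketched in a few lines. This is a minimal illustration, not a production detector: the keyword list and page bodies are invented, and real mobile detection is more involved.

```python
# Sketch: serving a device-appropriate page from a single URL,
# keyed on the User-Agent header that HTTP already sends.
# MOBILE_KEYWORDS and the page bodies are illustrative assumptions.
from http.server import BaseHTTPRequestHandler

MOBILE_KEYWORDS = ("mobile", "phone", "wap")  # crude, assumed markers

def is_mobile(user_agent: str) -> bool:
    """Crude check: does the User-Agent look like a small-screen device?"""
    ua = user_agent.lower()
    return any(keyword in ua for keyword in MOBILE_KEYWORDS)

class SingleURLHandler(BaseHTTPRequestHandler):
    """Serves a different version of the same page to different devices."""
    def do_GET(self):
        body = (b"<html>small-screen page</html>"
                if is_mobile(self.headers.get("User-Agent", ""))
                else b"<html>full page</html>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Vary", "User-Agent")  # caches must key on the header
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Either way, the link structure of the web is untouched: one URL, one hyperlink, with the server choosing the presentation.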

The .mobile proposal looks like a textbook example of Lessig’s point about how changing the architecture of the net can increase its regulability. .mobile would be a regulated space, in the sense that somebody would make rules controlling how sites in .mobile work. And this, I suspect, is the real purpose of .mobile – to give one group control over how mobile web technology develops. We’re better off without that control, letting the technology develop on its own over in the less regulated .com.

We already have a regulated subdomain, .kids.us, and that hasn’t worked out too well. Sites in .kids.us have to obey certain rules to keep them child-safe; but hardly any sites have joined .kids.us. Instead, child-safe sites have developed in .com and .org, and parents who want to limit what their kids see on the net just limit their kids to those sites.

If implemented, .mobile will probably suffer the same fate. Sites will choose not to pay extra for the privilege of being regulated. Instead, they’ll stay in .com and focus on improving their product.

An Inexhaustible Supply of Bugs

Eric Rescorla recently released an interesting paper analyzing data on the discovery of security bugs in popular products. I have some minor quibbles with the paper’s main argument (and I may write more about that later) but the data analysis alone makes the paper worth reading. Briefly, what Eric did was to take data about reported security vulnerabilities and fit it to a standard model of software reliability. This allowed him to estimate the number of security bugs in popular software products and the rate at which those bugs will be found in the future.

When a product version is shipped, it contains a certain number of security bugs. Over time, some of these bugs are found and fixed. One hopes that the supply of bugs is depleted over time, so that it gets harder (for both the good guys and the bad guys) to find new bugs.
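The depletion idea corresponds to a standard reliability model. Here is one exponential (“Goel-Okumoto”-style) model of that kind, not necessarily the exact one fit in the paper, with parameter values invented purely for illustration:

```python
import math

# Exponential software-reliability sketch: a product ships with N0 bugs,
# each independently found at rate lam. Both values below are made up
# for illustration, not taken from Eric's paper.
N0 = 1000.0   # assumed total security bugs at ship time
lam = 0.05    # assumed per-bug discovery rate (per month)

def bugs_found_by(t):
    """Expected cumulative bugs found after t months."""
    return N0 * (1 - math.exp(-lam * t))

def discovery_rate(t):
    """Expected new-bug discoveries per month at time t
    (the derivative of bugs_found_by)."""
    return N0 * lam * math.exp(-lam * t)
```

Under these made-up numbers the model predicts about 50 discoveries per month at ship time, decaying steadily as the supply depletes. The surprise in Eric’s data, discussed below, is that the real-world discovery rate decays little if at all.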

The first conclusion from Eric’s analysis is that there are many, many security bugs. This confirms the expectations of many security experts. My own rule of thumb is that typical release-quality industrial code has about one serious security bug per 3,000 lines of code. A product with tens of millions of lines of code will naturally have thousands of security bugs.

The second conclusion is a bit more surprising: there is little if any depletion of the bug supply. Finding and fixing bugs seems to have a small effect, or no effect at all, on the rate at which new bugs are discovered. It seems that the supply of security bugs is practically inexhaustible.

If true, this conclusion has profound implications for how we think about software security. It implies that once a version of a software product is shipped, there is nothing anybody can do to improve its security. Sure, we can (and should) apply software patches, but patching is just a treadmill and not a road to better security. No matter how many bugs we fix, the bad guys will find it just as easy to uncover new ones.

Suit Challenges Broadcast Flag

A lawsuit was filed last week, challenging the FCC’s Broadcast Flag decree. Petitioners include the American Library Association, several other library associations, the Consumer Federation of America, Consumers Union, the EFF, and Public Knowledge. Here is a court filing outlining the petitioners’ arguments.

A Spoonful of Sugar

Here’s a brilliant idea. A group at Carnegie Mellon University has created The ESP Game, in which a pair of strangers, shown a photographic image, are each asked to guess the single word that the other will use to characterize the image. Get it right and you score valuable points. For an extra challenge, sometimes there are “taboo words” that you aren’t allowed to use. Players report that the game is semi-addictive.
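The core matching rule is simple enough to sketch. This is my reconstruction of the game mechanic as described above, not the CMU group’s actual implementation:

```python
# Toy sketch of the ESP Game's agreement rule: two players independently
# type words for the same image; the first word both players have typed,
# provided it isn't on the image's taboo list, becomes an agreed label.
def match(words_a, words_b, taboo=()):
    """Return the first common non-taboo word (case-insensitive), or None."""
    taboo_set = {w.lower() for w in taboo}
    typed_by_b = {w.lower() for w in words_b}
    for word in words_a:
        w = word.lower()
        if w in typed_by_b and w not in taboo_set:
            return w
    return None
```

Because two strangers can coordinate only through the image itself, an agreed word is strong evidence that it genuinely describes the image, which is what makes the collected labels useful for indexing.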

The brilliant part is that the game “tricks” its players into doing an important and incredibly time-consuming job. By playing the game, you’re helping to build a giant index that associates each image on the internet with a set of words that describe it. It’s well known that indexing and searching a set of images requires the time-consuming manual step of assigning descriptive words to each image. Labeling all of the images on the internet is an enormous amount of work. When you play the ESP Game, you’re shown images randomly chosen from the internet. You’re doing the time-consuming manual work to index the whole internet’s images – and enjoying it! So far the group has collected over two million labels.

Utah Anti-Spyware Bill

The Utah state legislature has passed an anti-spyware bill, which now awaits the governor’s signature or veto. The bill is opposed by a large coalition of infotech companies, including Amazon, AOL, AT&T, eBay, Microsoft, Verizon, and Yahoo.

The bill bans the installation of spyware on a user’s computer. The core of the bill is its definition of “spyware”, which includes both ordinary spyware (which captures information about the user and/or his browsing habits, and sends that information back to the spyware distributor) and adware (which displays uninvited popup ads on a user’s computer, based on what the user is doing). Leaving aside the adware parts of the definition, we’re left with this:

(4) Except as provided in Subsection (5), “spyware” means software residing on a computer that:

(a) monitors the computer’s usage;
(b) (i) sends information about the computer’s usage to a remote computer or server; or [adware stuff omitted]; and
(c) does not:

(i) obtain the consent of the user, at the time of, or after installation of the software but before the software does any of the actions described in Subsection (4)(b):

(A) to a license agreement:

(I) presented in full; and
(II) written in plain language;

(B) to a notice of the collection of each specific type of information to be transmitted as a result of the software installation; [adware stuff omitted]
and

(ii) provide a method:

(A) by which a user may quickly and easily disable and remove the software from the user’s computer;
(B) that does not have other effects on the non-affiliated parts of the user’s computer; and
(C) that uses obvious, standard, usual, and ordinary methods for removal of computer software.

(5) Notwithstanding Subsection (4), “spyware” does not include:

(a) software designed and installed solely to diagnose or resolve technical difficulties;
(b) software or data that solely report to an Internet website information previously stored by the Internet website on the user’s computer, including:

(i) cookies;
(ii) HTML code; or
(iii) Java Scripts; or

(c) an operating system.

Since all spyware is banned, this amounts to a requirement that programs meeting the criteria of 4(a) and 4(b) (except those exempted by 5) must avoid 4(c) by obtaining user consent and providing a suitable removal facility.
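The logical structure of the definition can be made explicit as a boolean test. The argument names below are my own shorthand for the statutory conditions, and the consent/removability checks are collapsed into single flags rather than the statute’s detailed sub-requirements:

```python
# Sketch of the bill's "spyware" definition as boolean logic,
# following Subsections (4) and (5). Flag names are shorthand
# for the statutory conditions, not legal terms of art.
def is_spyware(monitors_usage, sends_usage_info,
               got_consent, removable,
               diagnostic_only=False, cookie_like=False,
               operating_system=False):
    # Subsection (5): exempted categories are never "spyware".
    if diagnostic_only or cookie_like or operating_system:
        return False
    # 4(a) and 4(b): monitors usage AND transmits it elsewhere...
    # 4(c): ...without both proper consent and easy removability.
    return (monitors_usage and sends_usage_info
            and not (got_consent and removable))
```

Read this way, a program that monitors and transmits usage data stays legal by making both the consent and removability flags true, which is exactly the compliance path described above.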

The bill’s opponents claim the definition is overbroad and would cover many legitimate software services. If they’re right, it seems to me that the notice-and-consent requirement would be more of a burden than the removability requirement, since nearly all legitimate software is removable, either by itself or as part of a larger package in which it is embedded.

I have not seen specific examples of legitimate software that would be affected. A letter being circulated by opponents refers generically to “a host of important and beneficial Internet communication software” that gather and communicate data that “may include information necessary to provide upgrade computer security, to protect against hacker attacks, to provide interactivity on web sites, to provide software patches, to improve Internet browser performance, or enhance search capabilities”. Can anybody think of a specific example, in which it would be burdensome to obtain the required consent or to provide the required removal facility?

[Opponents also argue that the bill’s adware language is overbroad. That in itself may be enough to oppose the bill; but I won’t discuss that aspect of the bill here.]