
Archives for 2002

Economist Article

The article on me and my pro-tinkering work, from the June 20th issue of the Economist, is now available online.

Dornseif on Source Code and Object Code

Maximillian Dornseif offers another comment on my source code vs. object code posting.

He points out, correctly, that we can still define “source code” and “object code” reasonably. We can get some mileage out of these definitions, as long as we remember that a piece of code might be either source code, or object code, or both, or neither.

Dornseif raises another interesting question, about the boundary between “writing a program” and “using a program”. Consider a typical Excel spreadsheet. To me as a computer scientist, a spreadsheet is a program – it directs the computer to combine some inputs in a certain way to produce some outputs. Yet the typical spreadsheet author probably doesn’t think of what he or she is doing as programming.
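The claim that a spreadsheet is a program can be made concrete. The following is a minimal, hypothetical sketch (not any real spreadsheet's implementation): each cell is either a plain input value or a formula that reads other cells, and evaluating a formula cell runs a small dataflow program.

```python
# Hypothetical model: a spreadsheet as a tiny dataflow program.
# A cell is either a constant input or a formula (a function that
# looks up other cells by name).

def evaluate(cells, name):
    """Evaluate a cell, recursively following formula references."""
    value = cells[name]
    if callable(value):  # a formula cell, e.g. "=A1+A2"
        return value(lambda ref: evaluate(cells, ref))
    return value         # a plain input cell

# Inputs in A1 and A2, a formula in A3 -- the author "programs" by
# writing the formula, even if it doesn't feel like programming.
sheet = {
    "A1": 10,
    "A2": 32,
    "A3": lambda get: get("A1") + get("A2"),  # =A1+A2
}

print(evaluate(sheet, "A3"))  # 42
```

The formula in A3 directs the computer to combine the inputs A1 and A2 to produce an output, which is exactly the sense in which a spreadsheet author is writing a program.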

More on Berman-Coble's Peer-to-Peer Definition

In a previous posting, I remarked on the overbreadth of the Berman-Coble bill’s definition of “peer to peer file trading network”. The definition has another interesting quirk, which looks to me like an error by the bill’s drafters.

Here is the definition:

‘peer to peer file trading network’ means two or more computers which are connected by computer software that–
(A) [is designed to support file sharing]; and
(B) does not permanently route all file or data inquiries or searches through a designated, central computer located in the United States;

Last time I dissected (A). Now let's look at (B), which acts as an exception: a system escapes the definition only if it permanently routes all file or data inquiries or searches through a single, designated computer located in the United States.

Some people speculate that this exception is supposed to protect AOL Instant Messenger and similar systems. Others surmise that it is meant to exclude “big central server” systems like Napster, on the theory that the central server can be sued out of existence so no hacking attacks on it are necessary.

In either case, the exception fails to achieve its aim. In fact, it's hard to see how any popular file sharing system could possibly qualify for the (B) exception.

The reason is simple. Big sites don’t use a single server computer. They tend to use a cluster of computers, routing each incoming request to one or another of the computers. This is done because the load on a big site is simply too large for any single computer to handle, and because it allows the server to keep going despite the crash of any individual computer.
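The routing scheme described above can be sketched in a few lines. This is an illustrative toy, not any real site's load balancer; the server names and round-robin policy are assumptions for the example. The point is that each inquiry goes to one of several machines, so no single "designated, central computer" handles them all.

```python
import itertools

# Toy load balancer: incoming requests are spread across a cluster
# of servers in round-robin fashion, so no single machine sees every
# request -- which is why clause (B)'s single-computer condition fails
# for any site built this way.

class Cluster:
    def __init__(self, servers):
        self._servers = servers
        self._next = itertools.cycle(servers)  # rotate through servers

    def route(self, request):
        server = next(self._next)              # pick the next machine
        return server                          # this machine handles it

cluster = Cluster(["server-1", "server-2", "server-3"])
handled = [cluster.route(f"query-{i}") for i in range(6)]
print(handled)
# -> ['server-1', 'server-2', 'server-3', 'server-1', 'server-2', 'server-3']
```

Real deployments use more sophisticated policies (least-connections, geographic routing), but any of them has the same consequence: requests are deliberately not routed through one designated computer.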

A really big site might use a hundred or more computers, and they might not all be in the same physical location. (Spreading them out increases fault tolerance and allows requests to be routed to a nearby server for faster service.)

Sites that implement advanced functions need even more computers. For example, Google uses more than 10,000 computers to provide their service.

Some small file sharing systems might be able to function with a single computer, but as soon as such a system became popular, it would have to switch to multiple computers and so the exception would no longer protect it.

It seems unlikely that the exception was intended to cover only small, unpopular systems. More likely, the authors of the bill, and the people who vetted it for them, simply missed this point.

China Now Re-Routing Google Requests

Reuters reports that, since the weekend, some requests for Google from inside China are being rerouted to other, government-approved search engines. (Link at wirednews.com)

UPDATE (3pm EDT, Sept. 10): Ben Edelman now has screenshots of redirected browsers. (Link thanks to greplaw.)

John Gilmore on Spam and Censorship

Politech has an interesting message from John Gilmore about the effect of anti-spam measures.