Seth Finkelstein and Eric Albert criticize my claim that the fallacy of the almost-general-purpose computer can best be illustrated by analogy to an almost-general-purpose spoken language. They make some good points, but I think my original conclusion is still sound.
Seth argues that speech (or a program) can be regulated by making it extremely difficult, though not strictly impossible, to express. I’m skeptical of this claim for human languages, since it seems to me that no usable language can hope to prevent people from coining new words and then teaching others what they mean. I think my skepticism is even more valid for computer languages. If a computer language makes something difficult but not impossible, then some programmer will create a library that provides the difficult functionality in more convenient form. This is the computer equivalent of coining a new word and defining it for others.
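To make the analogy concrete, here is a minimal, hypothetical sketch (the function names are mine, not drawn from any real restricted language): imagine a language that deliberately omits multiplication, leaving only addition, so that multiplying is tedious but not impossible. A programmer simply packages the tedious construction behind one convenient, shareable name — the coding equivalent of coining a word.

```python
def multiply(a, b):
    """Multiplication built from repeated addition.

    If the language makes multiplication awkward but not impossible,
    wrapping the awkward steps in a library function restores the
    convenience -- and anyone who imports the library gets it too.
    """
    total = 0
    for _ in range(abs(b)):
        total += a
    return total if b >= 0 else -total

print(multiply(3, 4))   # behaves just like the "forbidden" operator
```

Once such a library circulates, the difficulty the language designer imposed is gone for everyone, which is why making something merely hard, rather than impossible, is such a weak form of regulation.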
Eric argues that advancing technology might make it possible to restrict what people can say online. I’m skeptical, but he may be right that restrictions on, say, porn may become more accurately enforceable over time. Still, my point was not that mega-censorship is impossible, but that mega-censorship necessarily causes huge collateral damage.
There’s another obvious reason to like the 1984 analogy: using it puts the anti-computer forces into the shoes of the 1984 government. (I don’t think they’ll spend a lot of time comparing and contrasting themselves with the 1984 government.)
You may say that this is a cheap rhetorical trick, but I disagree. I believe that code is speech, and I believe that its status as speech is not just a legal technicality but a deep truth about the social value of code. What the code-regulators want is not so different from what the speech-regulators of 1984 wanted.