November 24, 2024

Forecast for Infotech Policy in the New Congress

Cameron Wilson, Director of the ACM Public Policy Office in Washington, looks at changes (made already or widely reported) in the new Congress and what they tell us about likely legislative action. (He co-writes the ACM U.S. Public Policy Blog, which is quite good.)

He mentions four hot areas. The first is regulation of peer-to-peer technologies. Once the Supreme Court’s decision in Grokster comes down, expect Congress to spring into action, to protect whichever side claims to be endangered by the decision. A likely focal point for this is the new Intellectual Property subcommittee of the Senate Judiciary Committee. (The subcommittee will be chaired by Sen. Orrin Hatch, who has not been shy about regulating infotech in the name of copyright. He championed the Induce Act.) This issue will start out being about P2P but could easily expand to regulate a wider class of technologies.

The second area is telecom. Sen. Ted Stevens is the new chair of the Senate Commerce Committee, and he seems eager to work on a big revision of the Telecom Act of 1996. This will be a battle royal involving many interest groups, and telecom policy wonks will be fully absorbed. Regulation of non-telecom infotech products seems likely to creep into the bill, given the technological convergence of telecom with the Internet.

The third area is privacy. The Real ID bill, which standardizes state driver’s licenses to create what is nearly a de facto national ID card, is controversial but seems likely to become law. The recent ChoicePoint privacy scandal may drive further privacy legislation. Congress is likely to do something about spyware as well.

The fourth area is security and reliability of systems. Many people on the Hill will want to weigh in on this issue, but it’s not clear what action will be taken. There are also questions over which committees have jurisdiction. Many of us hope that PITAC’s report on the sad state of cybersecurity research funding will trigger some action.

As someone famous said, it’s hard to make predictions, especially about the future. There will surely be surprises. About the only thing we can be sure of is that infotech policy will get even more attention in this Congress than in the last one.

More Trouble for Network Monitors

A while back I wrote about a method (well known to cryptography nerds) to frustrate network monitoring. It works by breaking up a file into several shares, in such a way that any individual share, and indeed any partial subset of the shares, looks entirely random, but if you have all of the shares then you can add them together to get back the original file. Today I want to develop this idea a bit further.
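As a concrete sketch of the splitting trick (my own illustration, not code from any deployed system), here is an n-way split using XOR as the “addition”: the first n−1 shares are pure random noise, and the last share is chosen so that XORing all the shares together reproduces the original file.

```python
import os
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n: int) -> list[bytes]:
    # The first n-1 shares are uniformly random noise.
    shares = [os.urandom(len(data)) for _ in range(n - 1)]
    # The last share is chosen so that XORing all n shares
    # together yields the original data.
    shares.append(reduce(xor_bytes, shares, data))
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

secret = b"the original file"
shares = split(secret, 4)
assert combine(shares) == secret       # all four shares: file recovered
assert combine(shares[:3]) != secret   # any proper subset: just noise
```

Any subset of fewer than all the shares is statistically indistinguishable from random bytes, which is why a monitor examining shares individually learns nothing.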

The trick I discussed before sent the shares one at a time, perhaps interspersed with other traffic, so that a network monitor would have to gather and store all of the shares, and know that they were supposed to go together, in order to know what was really being transmitted. The network monitor couldn’t just look at one message at a time. In other words, the shares were transmitted from the same place, but at different times.

It turns out that you can also transmit the shares from different places. The idea is to divide a file into shares, and put one share on server A, another on server B, another on server C, and so on. Somebody who wanted the file (and who knew how it was distributed) would go to all of the servers and get one share from each, and then combine them. To figure out what was going on, a network monitor would have to be monitoring the traffic from all of the servers, and it would have to know how to put the seemingly random shares together. The network monitor would have to gather information from many places and bring it together. That’s difficult, especially if there are many servers involved.

If the network monitor did figure out what was going on, then it would know which servers were participating in the scheme. If Alice and Bob were both publishing shares of the file, then the network monitor would blame them both.

Congratulations on making it this far. Now here’s the cool part. Suppose that Alice is already publishing some file A that looks random. Now Bob wants to publish a file F using two-way splitting; so Bob publishes B = F-A, so that people can add A and B to get back F. Now suppose the network monitor notices that the two random-looking files A and B add up to F; so the network monitor tries to blame Alice and Bob. But Alice says no – she was just publishing the random-looking file A, and Bob came along later and published F-A. Alice is off the hook.

But note that Bob can use the same excuse. He can claim that he published the random-looking file B, and Alice came along later and published F-B. To the monitor, A and B look equally random. So the monitor can’t tell who is telling the truth. Both Alice and Bob have plausible deniability – Alice because she is actually innocent, and Bob because the network monitor can’t distinguish his situation from Alice’s. (Of course, this also works with more than two people. There could be many innocent Alices and one tricky Bob.)
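The two-party case is easy to make concrete. This sketch is my own illustration, using XOR as the “addition” (so F−A and F⊕A coincide, since XOR is its own inverse); it isn’t code from any real system.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Alice innocently publishes a random-looking file A.
A = os.urandom(32)

# Bob wants to publish file F deniably, so he publishes B = F "minus" A.
F = os.urandom(32)  # stand-in for the real file Bob wants to share
B = xor_bytes(F, A)

# Anyone holding both shares can recover F:
assert xor_bytes(A, B) == F

# But the construction is perfectly symmetric: A = F "minus" B just as
# well, so the monitor cannot tell which share was the innocent one.
assert xor_bytes(F, B) == A
assert xor_bytes(F, A) == B
```

The symmetry in the last two lines is exactly what gives both parties plausible deniability: nothing in the published bytes distinguishes Alice’s role from Bob’s.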

Bob faces some difficulties in pulling off this trick. For example, the network monitor might notice that Alice published her file before Bob published his. Bob doesn’t have a foolproof scheme for distributing files anonymously – at least not yet. But stay tuned for my next post about this topic.

My Morning Pick-Me-Up

First thing this morning, I’m sitting in my bathrobe, scanning my inbox, when I’m jolted awake by the headline on a TechDirt story:

California Senator Wants to Throw Ed Felten in Jail

I guess I’ll take the time to read that story!

Kevin Murray, a California legislator, has introduced a bill that would fine, or imprison for up to one year, any person who “sells, offers for sale, advertises, distributes, disseminates, provides, or otherwise makes available” software that allows users to connect to networks that can share files, unless that person takes “reasonable care” to ensure that the software is not used illegally. TechDirt argues that my TinyP2P program would violate the proposed law.

Actually, the bill would appear to apply to a wide range of general-purpose software:

“[P]eer-to-peer file sharing software” means software that once installed and launched, enables the user to connect his or her computer to a network of other computers on which the users of these computers have made available recording or audiovisual works for electronic dissemination to other users who are connected to the network. When a transaction is complete, the user has an identical copy of the file on his or her computer and may also then disseminate the file to other users connected to the network.

That definition clearly includes the web, and the Internet itself, so that any software that enabled a user to connect to the Internet would be covered. And note that it’s not just the author or seller of the software who is at risk, but also any advertiser or distributor. Would TechDirt be committing a crime by linking to my TinyP2P page? Would my ISP be committing a crime by hosting my site?

The bill provides a safe harbor if the person takes “reasonable care” to ensure that the software isn’t used illegally. What does this mean? Standard law dictionaries define “reasonable care” as the level of care that a “reasonable person” would take under the circumstances, which isn’t very helpful. (Larry Solum has a longer discussion, which is interesting but doesn’t help much in this case.) I would argue that trying to build content blocking software into a general-purpose network app is a fruitless exercise which a reasonable person would not attempt. Presumably Mr. Murray’s backers would argue otherwise. This kind of uncertain situation is ripe for intimidation and selective prosecution.

This bill is terrible public policy, especially for the state that leads the world in the creation of innovative network software.

Enforceability and Steroids

Regular readers know that I am often skeptical about whether technology regulations can really be enforced. Often, a regulation that would make sense if it were (magically) enforceable turns out to be a bad idea when coupled with a realistic enforcement strategy. A good illustrative example of this issue arises in Major League Baseball’s new anti-steroids program, as pointed out by David Pinto.

The program bars players from taking anabolic steroids, and imposes mandatory random testing, with serious public sanctions for players who test positive. A program like this helps the players, by eliminating the competitive pressure to take drugs that boost on-the-field performance but damage users’ health. Players are better off in a world where nobody takes steroids than in one where everybody does. But this is only true if drug tests can accurately tell who is taking steroids.

A common test for steroids measures T/E, the ratio of testosterone (T) to epitestosterone (E). T promotes the growth and regeneration of muscle, which is why steroids provide a competitive advantage. The body naturally produces T and E in roughly equal amounts, but anabolic steroids raise T without raising E. So, all else being equal, a steroid user will have a higher T/E ratio than a non-user. But of course all else isn’t equal. Some people naturally have higher T/E ratios than others.

The testing protocol will set some threshold level of T/E, above which the player will be said to have tested positive for steroids. What should the threshold be? An average value of T/E is about 1.0. About 1% of men naturally have T/E of 6.0 or above, so setting the threshold at that level would falsely accuse about 1% of major leaguers. (Or maybe more – if T makes you a better baseball player, then top players are likely to have unusually high natural levels of T.) That’s a pretty large number of false accusations, when you consider that these players will be punished, and publicly branded as steroid users. Even worse, nearly half of steroid users have T/E of less than 6.0, so setting the threshold there will give a violator a significant chance of evading detection. That may be enough incentive for a marginal player to risk taking steroids.

(Of course it’s possible to redo the test before accusing a player. But retesting only helps if the first test mismeasured the player’s true T/E level. If an innocent player’s T/E is naturally higher than 6.0, retesting will only seem to confirm the accusation.)

We can raise or lower the threshold for accusation, thereby trading off false positives (non-users punished) against false negatives (steroid users unpunished). But it may not be possible to have an acceptable false positive rate and an acceptable false negative rate at the same time. Worse yet, “strength consultants” may help players test themselves and develop their own customized drug regimens, to gain the advantages of steroids while evading detection by the official tests.
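The tradeoff is easy to see in a toy simulation. The distributions below are invented for illustration (lognormals tuned so that roughly 1% of non-users exceed 6.0 and roughly half of users fall below it, matching the figures above); they are not real physiological data.

```python
import math
import random

random.seed(0)

# Hypothetical T/E distributions, for illustration only.
def natural_te():   # non-user: median ~1.0, ~1% above 6.0
    return random.lognormvariate(0.0, 0.77)

def user_te():      # steroid user: median ~6.0
    return random.lognormvariate(math.log(6.0), 0.77)

def error_rates(threshold, trials=100_000):
    false_pos = sum(natural_te() > threshold for _ in range(trials)) / trials
    false_neg = sum(user_te() <= threshold for _ in range(trials)) / trials
    return false_pos, false_neg

for t in (2.0, 4.0, 6.0, 10.0):
    fp, fn = error_rates(t)
    print(f"threshold {t:4.1f}: false positives {fp:6.2%}, false negatives {fn:6.2%}")
```

Raising the threshold drives false positives down but false negatives up; with these (made-up) curves there is no threshold at which both rates are small at once, which is the heart of the enforcement problem.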

Taking these issues into account, it’s not at all clear that a steroid program helps the players. If many players can get away with using steroids, and some who don’t use are punished anyway, the program may actually be a lose-lose proposition for the players.

Are there better tests? Will a combination of multiple tests be more accurate? What tests will Baseball use? I don’t know. But I do know that these are the key questions to answer in evaluating Baseball’s steroids program. It’s not just a question of whether you oppose steroid use.

When Is a "Network" Not a Network?

Last week, in response to the MPAA lawsuits against BitTorrent trackers, I wrote that it’s impossible to sue BitTorrent itself, because it is nothing but a communications protocol. Michael Madison was skeptical, which was a fair response given what little I had written on the subject. Let me say a bit more, to clarify.

Opponents of P2P technologies often make the rhetorical move of calling the thing they oppose a “network.” The word carries connotations – especially for nonexperts – of a physical contrivance that is operated by some organization. Think of the old phone system, or the electrical power grid. Somebody has to build and manage all that equipment. The implication is that there is somebody in charge who can supervise the use of the network. Read the plaintiffs’ briefs in the Grokster case and you’ll see many references to a “network” that is “operated” by the defendants.

Computer scientists sometimes use the word “network” to refer to something more virtual. Others are now using “network” in this sense, as when people talk about the social network of friendships among the residents of a small town. Nobody owns and operates the social network. There is nobody you can sue to shut it down, because it’s not a network in the same sense the power grid is.

A communications protocol is an agreement or convention about how computer systems can cooperate to accomplish some task. It isn’t owned or operated by anybody. (People might own copyrights or patents relating to a protocol, but let’s set aside that possibility for now.) There’s a sense in which English or any other human language is a kind of protocol that people use to cooperate with each other. Again: nobody owns, operates or controls the English language, and there is nobody you can sue to shut it down. This isn’t to say that you can’t punish misuses of English, such as fraud or criminal conspiracies that use the language; but punishing misuse is not the same as attacking the language itself.

Given a lawsuit about a particular technology, how can we tell whether that technology is more like the power grid or more like a social network? Here I think the Grokster courts have gotten it right. Rather than arguing over what counts as a “network,” they looked at the nature of the technology and the defendant’s control or influence over it. That is, as lawyers say, a fact-intensive inquiry.

The MPAA, in suing the operators of BitTorrent trackers rather than trying to attack the BitTorrent protocol itself, seems to be recognizing this distinction. That in itself is good news.