Archives for October 2005

Net Neutrality and Competition

No sooner do I start writing about net neutrality than Ed Whitacre, CEO of the Baby Bell SBC, energizes the debate with a juicy interview:

Q: How concerned are you about Internet upstarts like Google, MSN, Vonage, and others?

A: How do you think they’re going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes?

The Internet can’t be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo or Vonage or anybody to expect to use these pipes [for] free is nuts!

This is a pretty dumb thing for him to say, for several reasons. First, it shows amazing disrespect for his home broadband customers, who are paying $40 or so every month to use SBC’s pipes. If I were an SBC broadband customer, I’d be dying to ask Mr. Whitacre exactly what my monthly payment is buying, if it isn’t buying access to Google, Yahoo, Vonage, and any other $%&^* Internet service I want to use. Didn’t SBC’s advertising say I was buying access to the Internet?

Second, if somebody is going to pay somebody in this situation, it’s not clear who should be doing the paying. There is some set of customers who want to use SBC broadband service to access Google. Why should Google pay SBC for this? Why shouldn’t SBC pay Google instead?

Sure, SBC would like its customers to have free access to Google, Yahoo, and Vonage. But as Mr. Whitacre would put it, the Internet can’t be free in that sense, because Google, Yahoo, and Vonage have made an investment and for SBC or anybody to expect to use those services for free is nuts!

My point is not that SBC should necessarily pay, but that there is no rule of nature saying that one layer of the protocol stack should pay another layer. If SBC gets paid by Google, it’s because SBC faces less competition and hence has more market power. As Susan Crawford observes, Mr. Whitacre speaks with “the voice of someone who doesn’t think he has any competitors.”

At this point, economists will object that it’s sometimes efficient to let ISPs levy these kinds of charges, and so requiring net neutrality from SBC may lead to an inefficient outcome. I appreciate this point, and will be writing more about it in the future.

For now, though, notice that Mr. Whitacre isn’t speaking the language of efficiency. He wants to extract payments because he can. There’s a whiff here of the CEO-tournament syndrome that infected the media world in the 1990s, as documented in Ken Auletta’s “mogul” stories. Can Mr. Whitacre make the CEOs of Google, Yahoo, and Vonage genuflect to him? Is he really the man with the biggest … market power? If there are to be side payments, will they reflect business calculation, or just ego?

It’s one thing to argue that a policy can lead to efficient results. It’s another thing entirely to show that it will lead to efficient results, in the hands of real human beings.

Discrimination Against Network Hogs

Adam Thierer has an interesting post about network neutrality over at Tech Liberation Front. He is reacting to a recent Wall Street Journal story about how some home broadband service providers (BSPs) are starting to modify their networks to block or frustrate network applications they don’t like.

Why would a BSP discriminate against an application’s traffic? The standard scenario that people worry about is that a BSP hinders traffic from Vonage or some other VoIP application, because the BSP wants to sell phone service to the customer and VoIP competes with that phone service. One can cook up a hypothetical like this whenever a BSP wants to sell an application-level service. The standard response to this worry is to suggest “net neutrality” regulation, which would require BSPs to carry all traffic on an equal footing, regardless of which application or protocol is used. There is a complicated literature about the economics of net neutrality; for now, suffice it to say that net neutrality regulation can help or hurt, depending on the precise circumstances.

Thierer opposes net neutrality regulation. He seems especially worried that neutrality might require BSPs to treat all customers the same, regardless of how much network traffic they generate. If a few customers use lots of bandwidth this will leave less for everybody else, or alternatively will require the BSP to upgrade the network and pass on the cost neutrally to all users. It’s better, he argues, to let BSPs price differentially based on bandwidth usage.

It’s hard to argue with that proposition. I don’t think any reasonable net neutrality advocate would object to a BSP discriminating or pricing based solely on bandwidth usage. They would of course object if a BSP blocked a particular app and rationalized that act with vague excuses about saving bandwidth; but a real bandwidth limit ought to be uncontroversial.

(Technically, customers already have bandwidth limits, in the sense that a given class of service limits the maximum instantaneous bandwidth that a customer can use. What we’re talking about here are limits that are defined over a longer period, such as a day or a week.)
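
To make that distinction concrete, here is a minimal Python sketch of a usage cap measured over a longer window. Everything in it – the class, the 5 GB cap, the one-week window – is invented for illustration; a real BSP would meter traffic in its network hardware, not in application code like this.

```python
from collections import deque
import time

class UsageMeter:
    """Toy sliding-window usage cap: limits total bytes over a period,
    rather than instantaneous bandwidth (which the service class
    already limits)."""

    def __init__(self, limit_bytes, window_seconds):
        self.limit_bytes = limit_bytes
        self.window_seconds = window_seconds
        self.samples = deque()  # (timestamp, byte_count) pairs
        self.total = 0

    def record(self, byte_count, now=None):
        now = time.time() if now is None else now
        self.samples.append((now, byte_count))
        self.total += byte_count
        # Drop samples that have aged out of the window.
        cutoff = now - self.window_seconds
        while self.samples and self.samples[0][0] < cutoff:
            _, old_count = self.samples.popleft()
            self.total -= old_count

    def over_limit(self):
        return self.total > self.limit_bytes

# Hypothetical example: a 5 GB cap measured over one week.
meter = UsageMeter(limit_bytes=5 * 10**9, window_seconds=7 * 24 * 3600)
meter.record(2 * 10**9)
print(meter.over_limit())  # False until cumulative use exceeds the cap
```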

It’s already the case that some customers use much more bandwidth than others. Thierer quotes a claim that fewer than 10% of Time-Warner customers use more than 75% of bandwidth; and another BSP makes an even stronger claim. This isn’t a surprise – this kind of business is often subject to an 80/20 rule (80% of the resources used by 20% of the customers) or even a 90/10 rule.

But will ISPs actually apply bandwidth limits? Here’s Thierer:

This raises the most interesting issue in this entire debate: Why is it that BSPs are not currently attempting to meter broadband usage and price it to account for demand and “excessive” usage by some users? In my opinion, this would be the most efficient and least meddlesome way of dealing with this problem. Per-minute or per-bit pricing schemes could help conserve pipe space, avoid congestion, recover costs and enable BSPs to plow the savings into new capacity / innovation. Despite this, no BSP seems willing to engage in any sort of metering of the pipe. Why is that?

I think there are two reasons that BSPs have so far been unwilling to price discriminate. First, broadband operators are probably concerned that such a move would bring about unwanted regulatory attention. Second, and more importantly, cable and telco firms are keenly aware of the fact that the web-surfing public has come to view “all you can eat” buffet-style, flat-rate pricing as a virtually inalienable right. Internet guru Andrew Odlyzko has correctly argued that “People react extremely negatively to price discrimination. They also dislike the bother of fine-grained pricing, and are willing to pay extra for simple prices, especially flat-rate ones.”

So if BSPs aren’t willing to bandwidth-discriminate now, and doing so would anger customers, why would we expect them to start discriminating in the future? It’s not enough to point to a 90/10 rule of bandwidth usage. If, as seems likely, a 90/10 rule has been operating for a while now, and BSPs have not responded with differential pricing, then it’s not clear why anything would change in the future. Perhaps there is data showing that the customer-to-customer imbalance is getting worse; but I haven’t seen it.

Ultimately, BSPs’ general refusal to bandwidth-discriminate would seem to contradict claims that bandwidth discrimination is necessary. Still, even net neutrality advocates ought to support BSPs’ freedom to bandwidth-discriminate.

Alert readers have surely noticed by this point that I haven’t said whether I support net neutrality regulation. The reason is pretty simple: I haven’t made up my mind yet. Both sides make plausible arguments, and the right answer seems to depend on what assumptions we make about the markets and technology of the near future. I’ll probably be talking myself through the issue in occasional blog posts here over the next few weeks. Maybe, with your help, I’ll figure it out.

RFID, Present and Future

One of the advantages of teaching in a good university is the opportunity to hear smart students talk to each other about complicated topics. This semester I’m teaching a graduate seminar in technology and privacy, to a group of about ten computer science and electrical engineering students. On Monday the class discussed the future of RFID technology.

The standard scenario for RFID involves affixing a small RFID “tag” to a consumer product, such as an item of clothing sold at WalMart. (I’m using WalMart as a handy example here; anyone can use RFID.) Each tag has a unique ID number. An RFID “reader” can use radio signals to determine the ID numbers of any tags that are nearby. WalMart might use an RFID reader to take an inventory of which items are in its stores, or which items are in the shopping cart of a customer. This has obvious advantages in streamlining inventory control, which helps WalMart operate more efficiently and sell products at lower prices.

This sounds fine so far, but there is a well-known problem with this scheme. When a customer buys the item and takes it home, the RFID tag is still there, so people may be able to track the customer, or learn what he is carrying in his backpack, by scanning him and his possessions for RFID tags. This scares many people.

The risk of post-sale misuse of RFID tags can be mitigated by having WalMart deactivate or “kill” the tags when the customer buys the tag-containing item. This could be done by sending a special radio code to the tag. On receiving the kill code, the tag would stop operating. (Any practical kill feature would allow a special scanner to detect that a dead tag was present, but not to learn the dead tag’s ID number.)
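
As a toy illustration, here is how that kill behavior might be modeled; the tag ID and kill code are made-up values, and a real tag would implement all of this in hardware rather than software.

```python
class KillableTag:
    """Toy model of the kill-code scheme described above."""

    def __init__(self, tag_id, kill_code):
        self.tag_id = tag_id
        self._kill_code = kill_code
        self.alive = True

    def read_id(self):
        # A dead tag reveals its presence, but not its ID number.
        return self.tag_id if self.alive else "<dead tag present>"

    def receive_kill(self, code):
        # Only the correct code deactivates the tag.
        if self.alive and code == self._kill_code:
            self.alive = False

tag = KillableTag(tag_id="A1B2-C3D4", kill_code=0x2A7F)
print(tag.read_id())      # "A1B2-C3D4" while the tag is live
tag.receive_kill(0x2A7F)  # checkout sends the kill code
print(tag.read_id())      # "<dead tag present>" afterward
```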

Killing tags is a fine idea, but perhaps the consumer wants to use the tag for his own purposes. It would be cool if my laundry hamper knew which clothes were in it and could warn me of an impending clean-sock crisis, or if my fridge knew whether it contained any milk and how long that milk had been present. These things are possible if my clothing and food containers have working RFID tags.

One way to get what we want is to have smarter tags that use cryptography to avoid leaking information to outsiders. A smart tag would know the cryptographic key of its owner, and would only respond to requests properly signed by that key; and it would reveal its ID number in such a way that only its owner could understand it. At the checkout stand, WalMart would transfer cryptographic ownership of a tag to the buyer, rather than killing the tag. Any good cryptographer can figure out how to make this work.
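
For the curious, here is a rough sketch of one way such a tag might work, using an HMAC-based challenge-response. This is just one plausible construction, with invented names and keys; a real design would need careful protocol analysis, and the checkout-time ownership transfer is omitted here.

```python
import hashlib
import hmac
import os

class CryptoTag:
    """Toy model of an owner-keyed RFID tag: it answers only readers
    that prove knowledge of the owner's key, and reveals its ID in a
    blinded form only the owner can recognize."""

    def __init__(self, tag_id, owner_key):
        self._tag_id = tag_id
        self._owner_key = owner_key

    def respond(self, challenge, proof):
        # Ignore readers that cannot prove knowledge of the owner's key.
        expected = hmac.new(self._owner_key, challenge, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, proof):
            return None
        # Return the ID blinded under the owner's key; the owner
        # recognizes it by recomputing the MAC over its known inventory.
        return hmac.new(self._owner_key, self._tag_id + challenge,
                        hashlib.sha256).digest()

# The owner's reader proves itself with a MAC over a fresh challenge.
key = os.urandom(32)
tag = CryptoTag(tag_id=b"sock-42", owner_key=key)
challenge = os.urandom(16)
proof = hmac.new(key, challenge, hashlib.sha256).digest()

print(tag.respond(challenge, proof) is not None)  # True for the owner
print(tag.respond(challenge, b"\x00" * 32))       # None for a stranger
```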

The problem at present is that garden-variety RFID tags can’t do fancy crypto. Tags don’t have their own power source but get their power parasitically from an electromagnetic “carrier wave” broadcast by the reader. This means that the tag has a very limited power budget and very limited time – not nearly enough of either to do serious crypto. Some people argue that the RFID privacy problem is an artifact of these limitations of today’s RFID tags.

If so, that’s good news, because Moore’s Law is increasing the amount of computing we can do with a fixed power or time budget. If Moore’s Law applies to RFID circuits – and it seems that it should – then the time will come in a few years when dirt-cheap RFID tags can do fancy crypto, and therefore can be more privacy-friendly than they are today. The price difference between simple tags and smart tags will be driven toward zero by Moore’s Law, so there won’t be a cost justification for using simpler but less privacy-friendly tags.

But here’s the interesting question: when nicer RFID tags become possible, will people switch over to using them, or will they keep using today’s readable-by-everybody tags? If there’s no real cost difference, there are only two reasons we might not switch. The first is that we are somehow locked in by backward compatibility, so that any switch to a new technology incurs costs that nobody wants to be the first to pay. The second is a kind of social inertia, in which people are so accustomed to accepting the privacy risks of dumber RFID technologies that they don’t insist on improvement. Either scenario could develop, and if one does, we may be locked out of a better technology for quite a while.

Our best hope, perhaps, is that WalMart can benefit from a stronger technology. Current systems are subject to various uses that WalMart may not like. For example, a competitor might use RFID to learn how many of each product WalMart is stocking, or to learn where WalMart customers live. Or a malicious customer might try to kill or impersonate a WalMart tag. Smarter RFID tags can prevent these attacks. Perhaps that will be enough to get WalMart to switch.

Looking further into the future, the privacy implications of small, communicating devices will only get more serious. The seminar read a paper on “smart dust”, a more futuristic technology involving tiny, computationally sophisticated motes that might some day be scattered across an area, then picked up by passersby, as any dust mote might be. This is a really scary technology, if it’s used for evil.

Today, inventory control and remote tracking come in a single technology called RFID. Tomorrow, they can be separated, so that we can have the benefits of inventory control (for businesses and individuals) without having to subject ourselves to tracking. Tracking will be more possible than ever before, but at least we won’t have to accept tracking as a side-effect of shopping.

Do University Honor Codes Work?

Rick Garnett over at ProfsBlawg asked his readers about student honor codes and whether they work. His readers, who seem to be mostly lawyers and law students, chimed in with quite a few comments, most of them negative.

I have dealt with honor codes at two institutions. My undergraduate institution, Caltech, has a simply stated and all-encompassing honor code that is enforced entirely by the students. My sense was that it worked very well when I was there. (I assume it still does.) Caltech has a small (800 students) and relatively homogeneous student body, with a student culture that features less student versus student competitiveness than you might expect. Competition there tends to be student versus crushing workload. The honor code was part of the social contract among students, and everybody appreciated the benefits it provided. For example, you could take your final exams at the time and place of your choosing, even if they were closed-book and had a time limit; you were trusted to follow the rules.

Contrasting this to the reports of Garnett’s readers, I can’t help but wonder if honor codes are especially problematic in law schools. There is reportedly more cutthroat competition between law students, which could be more conducive to ethical corner-cutting. Competitiveness is an engine of our adversarial legal system, so it’s not surprising to see law students so eager to win every point, though it is disappointing if they do so by cheating.

I’ve also seen Princeton’s disciplinary system as a faculty member. Princeton has a student-run honor code system, but it applies only to in-class exams. I don’t have any first-hand experience with this system, but I haven’t heard many complaints. I like the system, since it saves me from the unpleasant and trust-destroying task of policing in-class exams. Instead, I just hand out the exams, then leave the room and wait nearby to answer questions.

Several years ago, I did a three-year term on Princeton’s Student-Faculty Committee on Discipline, which deals with all serious disciplinary infractions, whether academic or non-academic, except those relating to in-class exams. This was hard work. We didn’t hear a huge number of cases, but it took surprisingly long to adjudicate even seemingly simple cases. I thought this committee did its job very well.

One interesting aspect of this committee was that faculty and students worked side by side. I was curious whether student and faculty attitudes toward the disciplinary process would differ, but it turned out there were surprisingly few differences. If anything, the students were on average slightly more inclined to impose stronger penalties than the faculty, though the differences were small and opinions shifted from case to case. I don’t think this reflected selection bias either; discussions with other students over the years have convinced me that students support serious and uniform punishment for violators. So I don’t think there would be much difference in the outcomes of a student-run versus a faculty-run disciplinary process.

One lesson from Garnett’s comments is that an honor code will die if students decide that enforcement is weak or biased. Here the secrecy of disciplinary processes, which is of course necessary to protect the accused, can be harmful. Rumors do circulate. Sometimes they’re inaccurate but can’t be corrected without breaching secrecy. For example, when I was on Princeton’s discipline committee, some students believed that star athletes or students with famous relatives would be let off easier. This was untrue, but the evidence to contradict it was all secret.

Academic discipline seems to have a major feedback loop. If students believe that the secret disciplinary processes are generally fair and stringent, they will be happy with the process and will tend to follow the rules. This leaves the formal disciplinary process to deal with the exceptions, which a good process will be able to handle. Students will buy in to the premise of the system, and most people will be happy.

If, on the other hand, students lose their trust in the fairness of the system, either because of false rumors or because the system is actually unfair, then they’ll lose their aversion to rule-breaking and the system, whether honor-based or not, will break down. Several of Garnett’s readers tell a story like this.

One has to wonder whether it makes much difference in practice whether a system is formally honor-based or not. Either way, students have an ethical duty to follow the rules. Either way, violations will be punished if they come to light. Either way, at least a few students will cheat without getting caught. The real difference is whether the institution conspicuously trusts the students to comply with the rules, or whether it instead conspicuously polices compliance. Conspicuous trust is more pleasant for everybody, if it works.

[Feel free to talk about your own experiences in the comments. I’m especially eager to hear from current or past Princeton students.]

Breathalyzers and Open Source

Lawyers for 150 Floridians accused of drunk driving have asked a court to order the disclosure of the source code for software running in the breathalyzer machines used by police to analyze their blood alcohol level, according to a Tom Sanders story on vunet.

The defendants say they have the right to examine the machines that accused them, and that a meaningful examination requires access to the machines’ software. Prosecutors say the code is a trade secret.

The accused are right that one needs the code to understand fully how the machines work. The machines consist of sensors, a user interface, and control software. The software is the “brain” of the machine, and it is almost certainly involved in the calculations that derive a blood alcohol value from the sensor readings, as well as the display of the calculated value. If the accused have the right to fully examine the machines – and the article says that they do under Florida law – then they should see the source code.
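
To see why the code matters, consider a made-up fragment of the kind of calculation such a machine might perform. This is emphatically not the actual breathalyzer software – that software is the secret at issue – but it shows where subtle flaws could hide.

```python
def blood_alcohol(sensor_reading, calibration_slope, calibration_offset):
    """Hypothetical BAC calculation, invented purely for illustration."""
    # A linear calibration is a common pattern for converting a raw
    # sensor value into a concentration estimate.
    bac = calibration_slope * sensor_reading + calibration_offset
    # Subtle choices hide in code like this: rounding up rather than
    # down, or a stale calibration constant, could push a reading over
    # the legal limit, and nobody could tell without seeing the source.
    return round(bac, 3)

# All three numbers below are arbitrary stand-ins, not real constants.
print(blood_alcohol(sensor_reading=0.42,
                    calibration_slope=0.19,
                    calibration_offset=0.0))
```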

Contrary to the article and some other commentators, this is not a dispute over whether the software should be open source. The accused aren’t seeking to open the software to everybody; they only want it opened to their legal teams.

There are standard practices for handling trade-secret information that must be turned over in court cases. A court will typically establish a protective order, which is a kind of nondisclosure agreement covering secret material that is turned over by one side to the other. The protective order will require parties to keep the information secret and to use it only for purposes related to the court proceedings. Typically the information can be turned over to a limited number of expert analysts who have also signed the protective order. Documents containing secret information are filed under seal, and testimony about secret matters may take place in a closed courtroom.

So this issue is not about open source, but about ensuring fairness for the accused. If they’re going to be accused based on what some machine says, then they ought to be allowed to challenge the accuracy of the machine. And they can’t do that unless they’re allowed to know how the machine works.

You might argue that the machine’s technical manuals convey enough information. Having read many manuals and examined the innards of many software systems, I’m skeptical of such claims. Often, knowing how the maker says a machine works is a poor substitute for knowing how it actually works. If a machine is flawed, it’s likely the maker will either (a) not know about the flaw or (b) be unwilling to admit it exists.

If the article’s description of Florida law is correct, this seems like a pretty easy decision for the court.