New Congress, Same Old Issues

With control of the House and Senate about to switch parties, everybody is wondering how the new management will affect their pet policy issues. Cameron Wilson has a nice forecast for tech policy issues such as competitiveness, globalization, privacy, DRM, and e-voting.

Most of these don’t break down as partisan issues – differences are larger within each party than between the two parties. So the shift in control won’t necessarily lead to any big change. But there are two factors that may shake things up.

The first factor is the acceleration of change that happens in any organization when new leadership comes in. The new boss wants to show that he differs from the old boss, especially if the old boss was fired. And the new boss gets a short grace period in which to be bold. If a policy or practice was stale and needed to be changed but the institutional ice floes were just stuck, new management may loosen them.

The second factor has to do with the individuals who will run the various committees. If you’re not a government geek, you may not realize how much the agenda on particular issues is set by House and Senate committees, and particularly by the committee chairs. For example, any e-voting legislation must pass through the House Administration Committee, so the chair of that committee can effectively block such legislation. As long as Bob Ney was chair of the committee, e-voting reform was stymied – that’s why the Holt e-voting bill could have more than half of the House members as co-sponsors without even reaching a vote. But Mr. Ney’s Abramoff problem and the change in party control will put Juanita Millender-McDonald in charge of the committee. Suddenly Ms. Millender-McDonald’s opinion on e-voting has gotten much more important.

The bottom line is that on most tech issues we don’t know what will happen. On some issues, such as the broad telecom/media/Internet reform discussion, the situation is at least as cloudy as before. Let the battles begin.

Taking Stevens Seriously

From the lowliest blogger to Jon Stewart, everybody is laughing at Sen. Ted Stevens and his remarks (1.2MB mp3) on net neutrality. The sound bite about the Internet being “a series of tubes” has come in for the most ridicule.

I’ll grant that Stevens sounds pretty confused on the recording. But let’s give the guy a break. He was speaking off the cuff in a meeting, and he sounds a bit agitated. Have you ever listened to a recording of yourself speaking in an unscripted setting? For most people, it’s pretty depressing. We misspeak, drop words, repeat phrases, and mangle sentences all the time. Normally, listeners’ brains edit out the errors.

In this light, some of the ridicule of Stevens seems a bit unfair. He said the Internet is made up of “tubes”. Taken literally, that’s crazy. But experts talk about “pipes” all the time. Is the gap between “tubes” and “pipes” really so large? And when Stevens says that his staff sent him “an Internet” and it took several days to arrive, it sounds to me like he meant to say “an email” and just misspoke.

So let’s take Stevens seriously, and consider the possibility that somewhere in his head, or in the head of a staffer telling him what to say, there was a coherent argument that was supposed to come out of Stevens’ mouth but was garbled into what we heard. Let’s try to reconstruct that argument and see if it makes any sense.

In particular, let’s look at the much-quoted core of Stevens’ argument, as transcribed by Ryan Singel. Here is my cleaned-up restatement of that part of Stevens’ remarks:

NetFlix delivers movies by mail. What happens when they start delivering them by download? The Internet will get congested.

Last Friday morning, my staff sent me an email and it didn’t arrive until Tuesday. Why? Because the Internet was congested.

You want to help consumers? Consumers don’t benefit when the Net is congested. A few companies want to flood the Internet with traffic. Why shouldn’t ISPs be able to manage that traffic, so other traffic can get through? Your regulatory approach would make that impossible.

The Internet doesn’t have infinite capacity. It’s like a series of pipes. If you try to push too much traffic through the pipes, they’ll fill up and other traffic will be delayed.

The Department of Defense had to build their own network so their time-critical traffic wouldn’t get blocked by Internet congestion.

Maybe the companies that want to dump so much traffic on the Net should pay for the extra capacity. They shouldn’t just dump their traffic onto the same network links that all of us are paying for.

We don’t have regulation now, and the Net seems to be working reasonably well. Let’s leave it unregulated. Let’s wait to see if a problem really develops.

This is a rehash of two of the standard arguments of neutrality regulation opponents: let ISPs charge sites that send lots of traffic through their networks; and it’s not broke so don’t fix it. Nothing new here, but nothing scandalous either.

His examples, on the other hand, seem pretty weak. First, it’s hard to imagine that NetFlix would really use up so much bandwidth that they or their customers weren’t already paying for. If I buy an expensive broadband connection, and I want to use it to download a few gigabytes a month of movies, that seems fine. The traffic I slow down will mostly be my own.
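
To put rough numbers on the NetFlix point (these are my own back-of-the-envelope figures, not anything from the article), here is a quick sketch of what “a few gigabytes a month” of movie downloads works out to as an average data rate:

```python
# Back-of-the-envelope: what does "a few gigabytes a month" of movie
# downloads look like as an average data rate? Illustrative numbers only.

gigabytes_per_month = 4                  # hypothetical monthly movie downloads
seconds_per_month = 30 * 24 * 60 * 60    # roughly 2.6 million seconds

average_bps = gigabytes_per_month * 8e9 / seconds_per_month
print(f"Average rate: about {average_bps / 1e3:.0f} kbps")
# ~12 kbps on average -- a sliver of a broadband line, even though the
# instantaneous rate during a download is of course much higher.
```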

Second, the slow email wouldn’t have been caused by general congestion on the Net. The cause must be either an inattentive person or downtime of a Senate server. My guess is that Stevens was searching his memory for examples of network delays, and this one popped up.

Third, the DoD has plenty of reasons other than congestion to have its own network. Secrecy, for example. And a need for redundancy in case of a denial-of-service attack on the Internet’s infrastructure. Congestion probably ranks pretty far down the list.

The bottom line? Stevens may have been trying to make a coherent argument. It’s not a great argument, and his examples were poorly chosen, but it’s far from the worst argument ever heard in the Senate.

Why then the shock and ridicule from the Internet public? Partly because the recording was a perfect seed for a Net ridicule meme. But partly, too, because people unfamiliar with everyday Washington expect a high level of debate in the Senate, and Stevens’ remarks, even if cleaned up, don’t nearly qualify. As Art Brodsky of Public Knowledge put it, “We didn’t [post the recording] to embarrass Sen. Stevens, but to give the public an inside view of what can go on at a markup. Just so you know.” Millions of netizens now know, and they’re alarmed.

Net Neutrality: Strike While the Iron Is Hot?

Bill Herman at the Public Knowledge blog has an interesting response to my net neutrality paper. As he notes, my paper was mostly about the technical details surrounding neutrality, with a short policy recommendation at the end. Here’s the last paragraph of my paper:

There is a good policy argument in favor of doing nothing and letting the situation develop further. The present situation, with the network neutrality issue on the table in Washington but no rules yet adopted, is in many ways ideal. ISPs, knowing that discriminating now would make regulation seem more necessary, are on their best behavior; and with no rules yet adopted we don’t have to face the difficult issues of line-drawing and enforcement. Enacting strong regulation now would risk side-effects, and passing toothless regulation now would remove the threat of regulation. If it is possible to maintain the threat of regulation while leaving the issue unresolved, time will teach us more about what regulation, if any, is needed.

Herman argues that waiting is a mistake, because the neutrality issue is in play now and that can’t continue for long. Normally, issues like these are controlled by a small group of legislative committee members, staffers, interest groups and lobbyists, but occasionally an issue opens up for wider debate, giving broader constituencies influence over what happens. That’s when most of the important policy changes happen. Herman argues that the net neutrality issue is open now, and if we don’t act it will close again and we (the public) will lose our influence on the issue.

He makes a good point: the issue won’t stay in the public eye forever, and when it leaves the public eye change will be more difficult. But I don’t think it follows that we should enact strong neutrality regulation now. There are several reasons for this.

Tim Lee offers one reason in his response to Herman. Here’s Tim:

So let’s say Herman is right and the good guys have limited resources with which to wage this fight. What happens once network neutrality is the law of the land, Public Knowledge has moved onto its next legislative issue, and the only guys in the room at FCC hearings on network neutrality implementation are telco lawyers and lobbyists? The FCC will interpret the statute in a way that’s friendly to the telecom industry, for precisely the reasons Herman identifies. Over time, “network neutrality” will be redefined and reinterpreted to mean something the telcos can live with.

But it’s worse than that, because the telcos aren’t likely to stop at rendering the law toothless. They’re likely to continue lobbying for additional changes to the rules—by the FCC or Congress—that help them exclude new competitors and cement their monopoly power. Don’t believe me? Look at the history of cable franchising. Look at the way the CAB helped cartelize the airline industry, and the ICC cartelized surface transportation. Look at FCC regulation of telephone service and the broadcast spectrum. All of those regulatory regimes were initially designed to control oligopolistic industries too, and each of them ended up becoming part of the problem.

I’m wary of Herman’s argument for other reasons too. Most of all, I’m not sure we know how to write neutrality regulations that will have the effects we want. I’m all in favor of neutrality as a principle, but it’s one thing to have a goal and another thing entirely to know how to write rules that will achieve that goal in practice. I worry that we’ll adopt well-intentioned neutrality regulations that we’ll regret later – and if the issue is frozen later it will be even harder to undo our mistakes. Waiting will help us learn more about the problem and how to fix it.

Finally, I worry that Congress will enact toothless rules or vague statements of principle, and then declare that the issue has been taken care of. That’s not what I’m advocating; but I’m afraid it’s what we’ll get if we insist that Congress pass a net neutrality bill this year.

In any case, odds are good that the issue will be stalemated, and we’ll have to wait for the new Congress, next year, before anything happens.

New Net Neutrality Paper

I just released a new paper on net neutrality, called Nuts and Bolts of Network Neutrality. It’s based on several of my earlier blog posts, with some new material.

Quality of Service: A Quality Argument?

One of the standard arguments one hears against network neutrality rules is that network providers need to provide Quality of Service (QoS) guarantees to certain kinds of traffic, such as video. If QoS is necessary, the argument goes, and if net neutrality rules would hamper QoS by requiring all traffic to be treated the same, then net neutrality rules must be harmful. Today, I want to unpack this argument and see how it holds up in light of computer science research and engineering experience.

First, I need to make clear that guaranteeing QoS for an application means more than just giving it lots of bandwidth or prioritizing its traffic above other applications. Those things might be helpful, but they’re not QoS (or at least not the kind I’m talking about today). What QoS mechanisms (try to) do is to make specific performance guarantees to an app over a short window of time.

An example may clarify this point. If you’re loading a web page, and your network connection hiccups so that you get no traffic for (say) half a second, you may notice a short pause but it won’t be a big deal. But if you’re having a voice conversation with somebody, a half-second gap will be very annoying. Web browsing needs decent bandwidth on average, but voice conversations need better protection against short delays. That protection is QoS.

Careful readers will protest at this point that a good browsing experience depends on more than just average bandwidth. A half-second hiccup might not be a big problem, but a ten-minute pause would be too much, even if performance is really snappy afterward. The difference between voice conversations and browsing is one of degree – voice conversations want guarantees over fractions of seconds, and browsing wants them over fractions of minutes.

The reason we don’t need special QoS mechanisms for browsing is that the broadband Internet already provides performance that is almost always steady enough over the time intervals that matter for browsing.

Sometimes, too, there are simple tricks that can turn an app that cares about short delays into one that cares only about longer delays. For example, watching prerecorded audio or video streams doesn’t need QoS, because you can use buffering. If you’re watching a video, you can download every frame ten seconds before you’re going to watch it; then a hiccup of a few seconds won’t be a problem. This is why streaming audio and video work perfectly well today (when there is enough average bandwidth).
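
Here is a minimal sketch of why buffering works, using made-up numbers rather than anything from a real player: as long as the buffer holds more video than the hiccup lasts, the viewer never notices.

```python
# Toy illustration of how a playback buffer absorbs short network hiccups.
# My own sketch, with invented numbers; not drawn from any real streaming player.

FPS = 30
buffer_frames = 10 * FPS        # player has pre-downloaded ~10 seconds of video
HICCUP = range(300, 360)        # network delivers nothing for 2 seconds

stalls = 0
for tick in range(600):                  # 20 seconds of playback at 30 fps
    if tick not in HICCUP:
        buffer_frames += 1               # one new frame arrives from the network
    if buffer_frames > 0:
        buffer_frames -= 1               # play one frame from the buffer
    else:
        stalls += 1                      # buffer empty: playback would stall

print(f"Stalled frames during a 2-second outage: {stalls}")   # 0 -- buffer covers it
```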

There are two other important cases where QoS isn’t needed. First, if an app needs higher average speed than the Net can provide, then QoS won’t help it – QoS makes the Net’s speed steadier but not faster. Second – and less obvious – if an app needs much less average speed than the Net can provide, then QoS might also be unnecessary. If speed doesn’t drop entirely to zero but fluctuates, with peaks and valleys, then even the valleys may be high enough to give the app what it needs. This is starting to happen for voice conversations – Skype and other VoIP systems seem to work pretty well without any special QoS support in the network.
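
A tiny sketch of the “valleys are high enough” point, with invented numbers: if a voice call needs roughly 64 kbps and the available bandwidth wobbles between 500 kbps and 3 Mbps, even the worst moments leave ample headroom, so special QoS machinery buys little.

```python
# Illustrative only: does a fluctuating connection ever dip below what a
# voice call needs? The rates here are made up for the sake of the example.

import random

random.seed(1)
VOIP_NEEDS_KBPS = 64                                         # rough rate of one call
available_kbps = [random.uniform(500, 3000) for _ in range(1000)]

worst = min(available_kbps)
print(f"Worst-case available bandwidth: {worst:.0f} kbps")
print("Call ever starved?", worst < VOIP_NEEDS_KBPS)         # False: valleys suffice
```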

We can’t say that QoS is never needed, but experience does teach that it’s easy, especially for non-experts, to overestimate the importance of QoS. That’s why I’m not convinced – though I could be, with more evidence – that QoS is a strong argument against net neutrality rules.

AOL, Yahoo Challenge Email Neutrality

AOL and Yahoo will soon start using Goodmail, a system that lets bulk email senders bypass the companies’ spam filters by paying the companies one-fourth of a cent per message, and promising not to send unsolicited messages, according to a New York Times story by Saul Hansell.

Pay-to-send systems are one standard response to spam. The idea is that raising the cost of sending a message will deter the kind of shot-in-the-dark spamming that sends a pitch to everybody in the hope that somebody, somewhere, will respond. The price should be high enough to deter spamming but low enough that legitimate email won’t be deterred. Or so the theory goes.
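
To see why a per-message price might bite spammers harder than legitimate senders, here is a quick calculation at the quarter-cent rate reported in the story; the volumes are my own invented examples.

```python
# Back-of-the-envelope: how a quarter-cent-per-message fee hits a spammer
# versus a legitimate mailer. Only the per-message price comes from the
# article; the volumes below are invented for illustration.

PRICE_PER_MESSAGE = 0.0025        # dollars (one-fourth of a cent)

spam_run = 10_000_000             # hypothetical shot-in-the-dark campaign
newsletter = 50_000               # hypothetical opt-in mailing

print(f"Spam run cost:   ${spam_run * PRICE_PER_MESSAGE:,.2f}")    # $25,000.00
print(f"Newsletter cost: ${newsletter * PRICE_PER_MESSAGE:,.2f}")  # $125.00
```

Whether a quarter of a cent is actually high enough to tip that balance is, of course, an empirical question.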

What’s different here is that senders aren’t paying for delivery, but for an exemption from the email providers’ spam filters. As Eric Rescorla notes, this system creates interesting incentives for the providers. For instance, the providers will have an incentive to make their spam filters overly stringent – so that legitimate messages will be misclassified as spam, and senders will be more likely to pay for an exemption from the filters.

There’s an interesting similarity here to the network neutrality debate. Net neutrality advocates worry that residential ISPs will discriminate against some network traffic so that they can charge web sites and services a fee in exchange for not discriminating against their traffic. In the email case, the worry is that email providers will discriminate against commercial email, so that they can charge email senders a fee in exchange for not discriminating against their messages.

Is this really the same policy problem? If you advocate neutrality regulations on ISPs, does consistency require you to advocate neutrality regulations on email providers? Considering these questions may shed a little light on both issues.

My tentative reaction to the email case is that this may or may not be a smart move by AOL and Yahoo, but they ought to be free to try it. If customers get fewer of the commercial email messages they want (and don’t get enough reduction in spam to make up for it), they’ll be less happy with AOL and Yahoo, and some will take their business elsewhere. The key point, I think, is that customers have realistic alternatives they can switch to. Competition will protect them.

(You may object that switching email providers is costly for a customer who has been using an aol.com or yahoo.com email address – if he switches email providers, his old email address might not work any more. True enough, but a rational email provider will already be exploiting this lock-in, perhaps by charging the customer a slightly higher fee than he would pay elsewhere.)

Competition is a key issue – perhaps the most important one – in the net neutrality debate too. If commercial ISPs face real competition, so that users have realistic alternatives to an ISP who misbehaves, then ISPs will have to heed their customers’ demand for neutral access to sites and services. But if ISPs have monopoly power, their incentives may drive them to behave badly.

To me, the net neutrality issue hinges largely on whether the residential ISP market will be competitive. I can’t make a clear prediction, but I know that there are people who probably can. I’d love to hear what they have to say.

What does seem clear is that regulatory policy can help or hinder the emergence of competition. Enabling competition should be a primary goal of our future telecom regulation.

How Would Two-Tier Internet Work?

The word is out now that residential ISPs like BellSouth want to provide a kind of two-tier Internet service, where ordinary Internet services get one level of performance, and preferred sites or services, presumably including the ISPs’ own services, get better performance. It’s clear why ISPs want to do this: they want to charge big web sites for the privilege of getting preferred service.

I should say up front that although the two-tier network is sometimes explained as if there were two tiers of network infrastructure, the obvious and efficient implementation in practice would be to have a single fast network, and to impose deliberate delay or bandwidth throttling on non-preferred traffic.
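
To make the “one fast network plus artificial slowdown” point concrete, here is a hedged sketch of what the throttling side might look like; the rates and structure are hypothetical, and real traffic-shaping gear is far more sophisticated.

```python
# Sketch of "two tiers on one fast network": every flow shares the same pipe,
# but non-preferred traffic is shaped down to a lower rate. The numbers and
# structure are hypothetical, not drawn from any actual ISP's equipment.

import time

PREFERRED_KBPS = 8000      # preferred sites get the full pipe
OTHER_KBPS = 500           # everyone else is throttled to this rate

def forward(packet_bytes: int, preferred: bool) -> None:
    """Forward one packet, inserting delay so the flow stays under its rate cap."""
    rate_kbps = PREFERRED_KBPS if preferred else OTHER_KBPS
    delay = (packet_bytes * 8) / (rate_kbps * 1000)   # seconds this packet "costs"
    time.sleep(delay)                                 # the deliberate slowdown
    # ... hand the packet onward to the customer's line here ...

# A 1500-byte packet costs about 1.5 ms on the preferred tier, 24 ms otherwise.
forward(1500, preferred=True)
forward(1500, preferred=False)
```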

Whether ISPs should be allowed to do this is an important policy question, often called the network neutrality issue. It’s a harder issue than advocates on either side admit. Regular readers know that I’ve been circling around this issue for a while, without diving into its core. My reason for shying away from the main issue is simply that I haven’t figured it out yet. Today I’ll continue circling.

Let’s think about the practical aspects of how an ISP would present the two-tier Internet to customers. There are basically two options, I think. Either the ISP can create a special area for preferred sites, or it can let sites keep their ordinary URLs. As we’ll see, either option leads to problems.

The first option is to give the preferred sites special URLs. For example, if this site had preferred status on AcmeISP, its URL for AcmeISP customers would be something like freedom-to-tinker.preferred.acmeisp.com. This has the advantage of telling customers clearly which sites are expected to have preferred-level performance. But it has the big disadvantage that URLs are no longer portable from one ISP to another. Portability of URLs – the fact that a URL means the same thing no matter where you use it – is one of the critical features that makes the web work, and makes sites valuable. It’s hard to believe that sites and users will be willing to give it up.

The second option is for users to name sites using ordinary names and URLs. For example, this site would be called freedom-to-tinker.com, regardless of whether it had preferred status on your ISP. In this scenario, the only difference between preferred and ordinary sites is that users would see much better performance for preferred sites.

To an ordinary user, this would look like a network that advertises high peak performance but often has lousy performance in practice. If you’ve ever used a network whose performance varies widely over time, you know how aggravating it can be. And it’s not much consolation to learn that the poor performance only happens when you’re trying to use that great video site your friend (on another ISP) told you about. You assume something is wrong, and you blame the ISP.

In this situation, it’s hard to believe that a complaining user will be impressed by an explanation that the ISP could have provided higher performance, but chose not to because the site didn’t pay some fee. Users generally expect that producers will provide the best product they can at a given cost. Business plans that rely on making products deliberately worse, without reducing the cost of providing them, are widely seen as unfair. Given that explanation, users will still blame the ISP for the performance problems they see.

The basic dilemma for ISPs is pretty simple. They want to segregate preferred sites in users’ minds, so that users will blame the site rather than the ISP for the poor performance of non-preferred sites; but segregating the preferred sites makes the sites much less valuable because they can no longer be named in the same way on different ISPs.

How can ISPs escape this dilemma? I’m not sure. It seems to me that ISPs will be driven to a strategy of providing Internet service alongside exclusive, only-on-this-ISP content. That’s a strategy with a poor track record.

Clarification (3:00 PM EST): In writing this post, I didn’t mean to imply that web sites were the only services among which providers wanted to discriminate. I chose to use Web sites because they’re useful in illustrating the issues. I think many of the same issues would arise with other types of services, such as VoIP. In particular, there will be real tension between the ISP’s desire to label preferred VoIP services as strongly associated with, and supported by, that particular ISP, and the VoIP services’ strong incentive to portray themselves as being the same service everywhere.

Net Neutrality and Competition

No sooner do I start writing about net neutrality than Ed Whitacre, the CEO of baby bell company SBC, energizes the debate with a juicy interview:

Q: How concerned are you about Internet upstarts like Google, MSN, Vonage, and others?

A: How do you think they’re going to get to customers? Through a broadband pipe. Cable companies have them. We have them. Now what they would like to do is use my pipes free, but I ain’t going to let them do that because we have spent this capital and we have to have a return on it. So there’s going to have to be some mechanism for these people who use these pipes to pay for the portion they’re using. Why should they be allowed to use my pipes?

The Internet can’t be free in that sense, because we and the cable companies have made an investment and for a Google or Yahoo or Vonage or anybody to expect to use these pipes [for] free is nuts!

This is a pretty dumb thing for him to say, for several reasons. First, it shows amazing disrespect for his home broadband customers, who are paying $40 or so every month to use SBC’s pipes. If I were an SBC broadband customer, I’d be dying to ask Mr. Whitacre exactly what my monthly payment is buying, if it isn’t buying access to Google, Yahoo, Vonage, and any other $%&^* Internet service I want to use. Didn’t SBC’s advertising say I was buying access to the Internet?

Second, if somebody is going to pay somebody in this situation, it’s not clear who should be doing the paying. There is some set of customers who want to use SBC broadband service to access Google. Why should Google pay SBC for this? Why shouldn’t SBC pay Google instead?

Sure, SBC would like its customers to have free access to Google, Yahoo, and Vonage. But as Mr. Whitacre would put it, the Internet can’t be free in that sense, because Google, Yahoo, and Vonage have made an investment and for SBC or anybody to expect to use those services for free is nuts!

My point is not that SBC should necessarily pay, but that there is no rule of nature saying that one layer of the protocol stack should pay another layer. If SBC gets paid by Google, it’s because SBC faces less competition and hence has more market power. As Susan Crawford observes, Mr. Whitacre speaks with “the voice of someone who doesn’t think he has any competitors.”

At this point, economists will object that it’s sometimes efficient to let ISPs levy these kinds of charges, and so requiring net neutrality from SBC may lead to an inefficient outcome. I appreciate this point, and will be writing more about it in the future.

For now, though, notice that Mr. Whitacre isn’t speaking the language of efficiency. He wants to extract payments because he can. There’s a whiff here of the CEO-tournament syndrome that infected the media world in the 1990s, as documented in Ken Auletta’s “mogul” stories. Can Mr. Whitacre make the CEOs of Google, Yahoo, and Vonage genuflect to him? Is he really the man with the biggest … market power? If there are to be side payments, will they reflect business calculation, or just ego?

It’s one thing to argue that a policy can lead to efficient results. It’s another thing entirely to show that it will lead to efficient results, in the hands of real human beings.

Discrimination Against Network Hogs

Adam Thierer has an interesting post about network neutrality over at Tech Liberation Front. He is reacting to a recent Wall Street Journal story about how some home broadband service providers (BSPs) are starting to modify their networks to block or frustrate network applications they don’t like.

Why would a BSP discriminate against an application’s traffic? The standard scenario that people worry about is that a BSP hinders traffic from Vonage or some other VoIP application, because the BSP wants to sell phone service to the customer and VoIP competes with that phone service. One can cook up a hypothetical like this whenever a BSP wants to sell an application-level service. The standard response to this worry is to suggest “net neutrality” regulation, which would require BSPs to carry all traffic on an equal footing, regardless of which application or protocol is used. There is a complicated literature about the economics of net neutrality; for now, suffice it to say that net neutrality regulation can help or hurt, depending on the precise circumstances.

Thierer opposes net neutrality regulation. He seems especially worried that neutrality might require BSPs to treat all customers the same, regardless of how much network traffic they generate. If a few customers use lots of bandwidth this will leave less for everybody else, or alternatively will require the BSP to upgrade the network and pass on the cost neutrally to all users. It’s better, he argues, to let BSPs price differentially based on bandwidth usage.

It’s hard to argue with that proposition. I don’t think any reasonable net neutrality advocate would object to a BSP discriminating or pricing based solely on bandwidth usage. They would of course object if a BSP blocked a particular app and rationalized that act with vague excuses about saving bandwidth; but a real bandwidth limit ought to be uncontroversial.

(Technically, customers already have bandwidth limits, in the sense that a given class of service limits the maximum instantaneous bandwidth that a customer can use. What we’re talking about here are limits that are defined over a longer period, such as a day or a week.)
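
A minimal sketch of the distinction, with hypothetical thresholds: the instantaneous cap is just the line speed, while the kind of limit at issue here is a meter kept over a longer window.

```python
# Sketch of a longer-period bandwidth cap: rather than limiting instantaneous
# speed, the BSP tallies each customer's bytes over (say) a week and flags
# heavy users. The threshold and structure are hypothetical.

from collections import defaultdict

WEEKLY_CAP_GB = 50
usage_bytes = defaultdict(int)          # customer id -> bytes transferred this week

def record_transfer(customer: str, nbytes: int) -> None:
    usage_bytes[customer] += nbytes

def over_cap(customer: str) -> bool:
    return usage_bytes[customer] > WEEKLY_CAP_GB * 1e9

record_transfer("alice", int(2e9))      # 2 GB of ordinary browsing
record_transfer("bob", int(80e9))       # 80 GB of heavy downloading

for who in ("alice", "bob"):
    print(who, "over weekly cap:", over_cap(who))
```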

It’s already the case that some customers use much more bandwidth than others. Thierer quotes a claim that fewer than 10% of Time-Warner customers use more than 75% of bandwidth; and another BSP makes an even stronger claim. This isn’t a surprise – this kind of business is often subject to an 80/20 rule (80% of the resources used by 20% of the customers) or even a 90/10 rule.

But will BSPs actually apply bandwidth limits? Here’s Thierer:

This raises the most interesting issue in this entire debate: Why is it that BSPs are not currently attempting to meter broadband usage and price it to account for demand and “excessive” usage by some users? In my opinion, this would be the most efficient and least meddlesome way of dealing with this problem. Per-minute or per-bit pricing schemes could help conserve pipe space, avoid congestion, recover costs and enable BSPs to plow the savings into new capacity / innovation. Despite this, no BSP seems willing to engage in any sort of metering of the pipe. Why is that?

I think there are two reasons that BSPs have so far been unwilling to price discriminate. First, broadband operators are probably concerned that such a move would bring about unwanted regulatory attention. Second, and more importantly, cable and telco firms are keenly aware of the fact that the web-surfing public has come to view “all you can eat” buffet-style, flat-rate pricing as a virtual inalienable right. Internet guru Andrew Odlyzko has correctly argued that “People react extremely negatively to price discrimination. They also dislike the bother of fine-grained pricing, and are willing to pay extra for simple prices, especially flat-rate ones.”

So if BSPs aren’t willing to bandwidth-discriminate now, and doing so would anger customers, why would we expect them to start discriminating in the future? It’s not enough to point to a 90/10 rule of bandwidth usage. If, as seems likely, a 90/10 rule has been operating for a while now, and BSPs have not responded with differential pricing, then it’s not clear why anything would change in the future. Perhaps there is data showing that the customer-to-customer imbalance is getting worse; but I haven’t seen it.

Ultimately, BSPs’ general refusal to bandwidth-discriminate would seem to contradict claims that bandwidth discrimination is necessary. Still, even net neutrality advocates ought to support BSPs’ freedom to bandwidth-discriminate.

Alert readers have surely noticed by this point that I haven’t said whether I support net neutrality regulation. The reason is pretty simple: I haven’t made up my mind yet. Both sides make plausible arguments, and the right answer seems to depend on what assumptions we make about the markets and technology of the near future. I’ll probably be talking myself through the issue in occasional blog posts here over the next few weeks. Maybe, with your help, I’ll figure it out.

Who Is An ISP?

There’s talk in Washington about a major new telecommunications bill, to update the Telecom Act of 1996. A discussion draft of the bill is floating around.

The bill defines three types of services: Internet service (called “Broadband Internet Transmission Service” or BITS for short); VoIP; and broadband television. It lays down specific regulations for each type of service, and delegates regulatory power to the FCC.

In bills like this, much of the action is in the definitions. How you’re regulated depends on which of the definitions you satisfy, if any. The definitions essentially define the markets in which companies can compete.

Here’s how the Internet service market is defined:

The term “BITS” or “broadband Internet transmission service” –
(A) means a packet-switched service that is offered to the public, or [effectively offered to the public], with or without a fee, and that, regardless of the facilities used –
(i) is transmitted in a packet-based protocol, including TCP/IP or a successor protocol; and
(ii) provides to subscribers the capability to send and receive packetized information; …

The term “BITS provider” means any person who provides or offers to provide BITS, either directly or through an affiliate.

The term “packet-switched service” means a service that routes or forwards packets, frames, cells, or other data units based on the identification, address, or other routing information contained in the packets, frames, cells, or other data units.

The definition of BITS includes ordinary Internet Service Providers, as we would expect. But that’s not all. It seems to include public chat servers, which deliver discrete messages to specified destination users. It seems to include overlay networks like Tor, which provide anonymous communication over the Internet using a packet-based protocol. As Susan Crawford observes, it seems to cover nodes in ad hoc mesh networks. It even seems to include anybody running an open WiFi access point.
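
To see how little software it takes to fall under that language, here is a hedged sketch of a trivial UDP relay. It “routes or forwards packets … based on the … address, or other routing information contained in the packets,” which is arguably all the definition asks for. The port and the toy framing are invented for illustration; this is my reading of the text, not a legal conclusion.

```python
# A toy UDP relay: it forwards datagrams based on addressing information
# carried in each packet -- arguably enough to meet the bill's definition of
# a "packet-switched service." The port and framing are invented for this sketch.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

while True:
    data, sender = sock.recvfrom(2048)
    # First line of the payload names the destination ("host:port");
    # the rest is the message to forward -- a crude header, in effect.
    header, _, payload = data.partition(b"\n")
    host, port = header.decode().rsplit(":", 1)
    sock.sendto(payload, (host, int(port)))
```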

What happens to you if you’re a BITS provider? You have to register with the FCC and hope your registration is approved; you have to comply with consumer protection requirements (“including service appointments and responses to service interruptions and outages”); and you have to comply with privacy regulations which, ironically, require you to keep track of who your users are so you can send them annual notices telling them that you are not storing personal information about them.

I doubt the bill’s drafters meant to include chat or Tor as BITS providers. The definition can probably be rewritten to exclude cases like these.

A more interesting question is whether they meant to include open access points. It’s hard to justify applying heavyweight regulation to the individuals or small businesses who run access points. And it seems likely that many would ignore the regulations anyway, just as most consumers seem to ignore the existing rules that require an FCC license to use the neighborhood-range walkie-talkies sold at Wal-Mart.

The root of the problem is the assumption that Internet connectivity will be provided only by large institutions that can amortize regulatory compliance costs over a large subscriber base. If this bill passes, that will be a self-fulfilling prophecy – only large institutions will be able to offer Internet service.