November 23, 2024

Staying Off the Regulatory Radar

I just returned from a tech policy conference. It was off the record, so I can’t tell you what was said. But I can tell you that it got me thinking about what happens when a tech startup appears on policymakers’ radar screens.

Policymakers respond to what they see. Generally they don’t see startups, so startup products can do whatever makes sense from a technical and customer relations standpoint. Startups talk to lawyers and try to avoid doing anything too risky, but they don’t spend their time trying to please policymakers.

But if a startup has enough success and attracts enough users, policymakers suddenly notice it and everything changes. To give just one example, YouTube is now on the radar screen and is facing takedown requests from national authorities in places like Thailand. (Thai authorities demanded takedown of an unflattering video about their king.) The cost of being on the policy radar screen can be high for online companies that have inherently global reach.

Some companies respond by changing their product strategy or by trying to outsource certain functions to other companies. We might even see the emergence of companies that specialize in coping with policymakers, making money by charging other tech-focused companies for managing certain parts of their technology.

Perhaps this is just another cost of scaling up a service that works well at smaller scale. But I can’t help wondering whether companies will change their behavior to try to stay off the radar screen longer. There’s an old strategy called “stealth mode” where a startup tries to avoid the attention of potential competitors by keeping secret its technology or even its very existence, to emerge in public at a strategically chosen time. I can think of several companies that wish for a new kind of stealth mode, where customers notice a company but policymakers don’t.

Internet So Crowded, Nobody Goes There Anymore

Once again we’re seeing stories, like this one from Anick Jesdanun at AP, saying that the Internet is broken and needs to be redesigned.

The idea may seem unthinkable, even absurd, but many believe a “clean slate” approach is the only way to truly address security, mobility and other challenges that have cropped up since UCLA professor Leonard Kleinrock helped supervise the first exchange of meaningless test data between two machines on Sept. 2, 1969.

The Internet “works well in many situations but was designed for completely different assumptions,” said Dipankar Raychaudhuri, a Rutgers University professor overseeing three clean-slate projects. “It’s sort of a miracle that it continues to work well today.”

It’s absolutely worthwhile to ask what kind of Net we would design if we were starting over, knowing what we know now. But it’s folly to think we can or should actually scrap the Net and build a new one.

For one thing, the Net is working very nicely already. Sure, there are problems, but they mostly stem from the fact that the Net is full of human beings – which is exactly what makes the Net so great. The Net has succeeded brilliantly at lowering the cost of communication and opening the tools of mass communication to many more people. That’s why most members of the redesign-the-Net brigade spend hours every day online.

Let’s stop to think about what would happen if we really were going to redesign the Net. Law enforcement would show up with its requests. Copyright owners would want consideration. ISPs and broadcasters would want concessions of their own. The FCC would show up with an anti-indecency strategy. We’d see an endless parade of lawyers and lobbyists. Would the engineers even be allowed in the room?

The original design of the Internet escaped this fate because nobody thought it mattered. The engineers were left alone while everyone else argued about things that seemed more important. That’s a lucky break that won’t be repeated.

The good news is that despite the rhetoric, hardly anybody believes the Internet will be rebuilt, so these research efforts have a chance of avoiding political entanglements. The redesign will be a useful intellectual exercise, and maybe we’ll learn some tricks useful for the future. But for better or worse, we’re stuck with the Internet we have.

FreeConference Suit: Neutrality Fight or Regulatory Squabble?

Last week FreeConference, a company that offers “free” teleconferencing services, sued AT&T for blocking access by AT&T/Cingular customers to FreeConference’s services. FreeConference’s complaint says the blocking is anticompetitive and violates the Communications Act.

FreeConference’s service sets up conference calls that connect a group of callers. Users are given an ordinary long-distance phone number to call. When they call the assigned number, they are connected to their conference call. Users pay nothing beyond the cost of the ordinary long-distance call they’re making.

Last week, AT&T/Cingular started blocking access to FreeConference’s long-distance numbers from AT&T/Cingular mobile phones. Instead of getting connected to their conference calls, AT&T/Cingular users are getting an error message. AT&T/Cingular has reportedly admitted doing this.

At first glance, this looks like an unfair practice, with AT&T trying to shut down a cheaper competitor that is undercutting AT&T’s lucrative conference-call business. This is the kind of thing net neutrality advocates worry about – though strictly speaking this is happening on the phone network, not the Internet.

The full story is a bit more complicated, and it starts with FreeConference’s mysterious ability to provide conference calls for free. These days many companies provide free services, but they all have some way of generating revenue. FreeConference appears to generate revenue by exploiting the structure of telecom regulation.

When you make a long-distance call, you pay your long-distance provider for the call. The long-distance provider is required to pay connection fees to the local phone companies (or mobile companies) at both ends of the call, to offset the cost of connecting the call to the endpoints. This regulatory framework is a legacy of the AT&T breakup and was justified by the desire to have a competitive long-distance market coexist with local phone carriers that were near-monopolies.

FreeConference gets revenue from these connection fees. It has apparently cut a deal with a local phone carrier under which the carrier accepts calls for FreeConference, and FreeConference gets a cut of the carrier’s connection fees from those calls. If the connection fees are large enough – and apparently they are – this can be a win-win deal for FreeConference and the local carrier.

But of course somebody has to pay the fees. When an AT&T/Cingular customer calls FreeConference, AT&T/Cingular has to pay. They can pass on these fees to their customers, but this hardly seems fair. If I were an AT&T/Cingular customer, I wouldn’t be happy about paying more to subsidize the conference calls of other users.

To add another layer of complexity, it turns out that connection fees vary widely from place to place, ranging roughly from one cent to seven cents per minute. FreeConference, predictably, has allied itself with a local carrier that gets a high connection fee. By routing its calls to this local carrier, FreeConference is able to extract more revenue from AT&T/Cingular.
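
To see how the arithmetic might work out, here is a toy back-of-envelope sketch. The one-to-seven-cent range comes from the paragraph above; the call length, the number of participants, and the fraction of the fee the local carrier shares with FreeConference are invented purely for illustration.

    # Back-of-envelope sketch of the connection-fee arbitrage described above.
    # The 1-7 cents/minute range is from the post; the call length, participant
    # count, and FreeConference's revenue share are illustrative assumptions.

    FEE_LOW = 0.01        # $/minute, low-end local-carrier connection fee
    FEE_HIGH = 0.07       # $/minute, high-end connection fee
    REVENUE_SHARE = 0.5   # assumed fraction of the fee passed to FreeConference
    CALL_MINUTES = 60     # assumed length of one conference call
    PARTICIPANTS = 10     # assumed number of callers dialing in

    def conference_revenue(fee_per_minute):
        """Estimated revenue to FreeConference from one conference call.

        Each participant's long-distance carrier owes the terminating local
        carrier fee_per_minute for every minute of that participant's call;
        the local carrier then shares a cut with FreeConference.
        """
        total_fees = fee_per_minute * CALL_MINUTES * PARTICIPANTS
        return total_fees * REVENUE_SHARE

    print(f"Low-fee carrier:  ${conference_revenue(FEE_LOW):.2f} per call")
    print(f"High-fee carrier: ${conference_revenue(FEE_HIGH):.2f} per call")
    # Low-fee carrier:  $3.00 per call
    # High-fee carrier: $21.00 per call

On these made-up numbers, allying with a high-fee carrier is worth seven times as much per call, which is exactly the incentive the regulation creates.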

For me, this story illustrates everything that is frustrating about telecom. We start with intricately structured regulation, leading companies to adopt business models shaped by regulation rather than the needs of customers. The result is bewildering to consumers, who end up not knowing which services will work, or having to pay higher prices for mysterious reasons. This leads to a techno-legal battle between companies that would, in an ideal world, be spending their time and effort developing better, cheaper products. And ultimately we end up in court, or creating more regulation.

We know a better end state is possible. But how do we get there from here?

[Clarification (2:20 PM): Added the “To add another layer …” paragraph. Thanks to Nathan Williams for pointing out my initial failure to mention the variation in connection fees.]

How Much Bandwidth is Enough?

It is a matter of faith among infotech experts that (1) the supply of computing and communications will increase rapidly according to Moore’s Law, and (2) the demand for that capacity will grow roughly as fast. This mutual escalation of supply and demand causes the rapid change we see in the industry.

It seems to be a law of physics that Moore’s Law must terminate eventually – there are fundamental physical limits to how much information can be stored, or how much computing can be accomplished in a second, within a fixed volume of space. But these hard limits may be a long way off, so it seems safe to assume that Moore’s Law will keep operating for many more cycles, as long as there is demand for ever-greater capacity.
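
For a sense of what “many more cycles” could mean, here is a trivial compounding sketch, assuming the commonly quoted doubling period of roughly two years:

    # Toy illustration of how Moore's-Law-style doubling compounds.
    # The two-year doubling period is an assumption, not a law of nature.

    DOUBLING_YEARS = 2.0

    def capacity_multiplier(years):
        """Factor by which capacity grows after `years` of steady doubling."""
        return 2 ** (years / DOUBLING_YEARS)

    for years in (10, 20, 30):
        print(f"After {years} years: ~{capacity_multiplier(years):,.0f}x today's capacity")
    # After 10 years: ~32x today's capacity
    # After 20 years: ~1,024x today's capacity
    # After 30 years: ~32,768x today's capacity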

Thus far, whenever more capacity comes along, new applications are invented (or made practical) to use it. But will this go on forever, or is there a point of diminishing returns where more capacity doesn’t improve the user’s happiness?

Consider the broadband link going into a typical home. Certainly today’s homeowner wants more bandwidth, or at least can put more bandwidth to use if it is provided. But at some point there is enough bandwidth to download any reasonable webpage or program in a split second, or to provide real-time ultra-high-def video streams to every member of the household. When that day comes, do home users actually benefit from having fatter pipes?

There is a plausible argument that a limit exists. The human sensory system has limited (though very high) bandwidth, so it doesn’t make sense to direct more than a certain number of bits per second at the user. At some point, your 3-D immersive stereo video has such high resolution that nobody will notice any improvement. The other senses have similar limits, so at some point you have enough bandwidth to saturate the senses of everybody in the home. You might want to send information to devices in the home, but how far can that grow?
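
Here is a rough, hedged version of that back-of-envelope argument. Every number is an assumption chosen to be on the generous side: a stereo ultra-high-def stream per person, a modest compression ratio, a five-person household.

    # Rough sketch of the "saturate the senses" argument.
    # All constants below are illustrative assumptions, not measurements.

    PIXELS = 3840 * 2160    # assumed "ultra-high-def" frame size
    FRAMES_PER_SEC = 60
    BITS_PER_PIXEL = 15     # assumed 10-bit 4:2:0 sampling
    EYES = 2                # stereo 3-D roughly doubles the raw pixel stream
    COMPRESSION = 100       # assumed codec compression ratio
    HOUSEHOLD = 5           # assumed number of simultaneous viewers

    raw_bps = PIXELS * FRAMES_PER_SEC * BITS_PER_PIXEL * EYES
    delivered_bps = raw_bps / COMPRESSION * HOUSEHOLD

    print(f"Raw per-person stream:    {raw_bps / 1e9:.1f} Gbps")
    print(f"Whole-household delivery: {delivered_bps / 1e6:.0f} Mbps")
    # Raw per-person stream:    14.9 Gbps
    # Whole-household delivery: 746 Mbps

However you fiddle with the assumptions, the total comes out finite, which is the heart of the argument that a limit exists.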

Such questions may not matter quite yet, but they will matter a great deal someday. The structure of the technology industries, not to mention technology policies, is built around the idea that people will keep demanding more-more-more and the industry will be kept busy providing it.

My gut feeling is that we’ll eventually hit the point of diminishing returns, but it is a long way off. And I suspect we’ll hit the bandwidth limit before we hit the computation and storage limits. I am far from certain about this. What do you think?

(This post was inspired by a conversation with Tim Brown.)

Why So Little Attention to Botnets?

Our collective battle against botnets is going badly, according to Ryan Naraine’s recent article in eWeek.

What’s that? You didn’t know we were battling botnets? You’re not alone. Though botnets are a major cause of Internet insecurity problems, few netizens know what they are or how they work.

In this context, a “bot” is a malicious software agent that gets installed on an unsuspecting user’s computer. Bots get onto computers by exploiting security flaws. Once there, they set up camp and wait unobtrusively for instructions. Bots work in groups, called “botnets”, in which many thousands of bots (hundreds of thousands, sometimes) all over the Net work together at the instruction of a remote bad guy.

Botnets can send spam or carry out coordinated security attacks on targets elsewhere on the Net. Attacks launched by botnets are very hard to stop because they come from so many places all at once, and tracking down the sources just leads to innocent users with infected computers. There is an active marketplace in which botnets are sold and leased.

Estimates vary, but a reasonable guess is that between one and five percent of the computers on the Net are infected with bots. Some computers have more than one bot, although bots nowadays often try to kill each other.

Bots exploit the classic economic externality of network security. A well-designed bot on your computer tries to stay out of your way, only attacking other people. An infection on your computer causes harm to others but not to you, so you have little incentive to prevent the harm.
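
A toy expected-cost model makes the externality concrete. The infection probability is taken from the one-to-five-percent estimate above; the dollar figures are invented, and only their relative sizes matter.

    # Toy model of the security externality described above.
    # All dollar amounts are made up; the point is the shape of the incentive.

    P_INFECTION = 0.03      # chance this machine picks up a bot (1-5% range above)
    COST_TO_OWNER = 0.0     # a well-behaved bot imposes ~no cost on its host
    COST_TO_OTHERS = 200.0  # spam/attack harm the bot inflicts on third parties
    CLEANUP_COST = 50.0     # owner's cost to secure and disinfect the machine

    expected_private_loss = P_INFECTION * COST_TO_OWNER
    expected_social_loss = P_INFECTION * (COST_TO_OWNER + COST_TO_OTHERS)

    print(f"Owner's expected loss from doing nothing:   ${expected_private_loss:.2f}")
    print(f"Society's expected loss from doing nothing: ${expected_social_loss:.2f}")
    print(f"Owner's cost to prevent the infection:      ${CLEANUP_COST:.2f}")
    # A selfish owner compares $0.00 against $50.00 and rationally skips the
    # cleanup, even though the $6.00 expected social loss, multiplied across
    # the millions of machines on the Net, is enormous.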

Nowadays, bots often fight over territory, killing other bots that have infected the same machine, or beefing up the machine’s defenses against new bot infections. For example, Brian Krebs reports that some bots install legitimate antivirus programs to defend their turf.

If bots fight each other, a rationally selfish computer owner might want his computer to be infected by bots that direct their attacks outward. Such bots would help to defend the computer against other bots that might harm the computer owner, e.g. by spying on him. They’d be the online equivalent of the pilot fish that swim into sharks’ mouths with impunity, to clean the sharks’ teeth.

Botnets live today on millions of ordinary users’ computers, leading to nasty attacks. Some experts think we’re losing the war against botnets. Yet there isn’t much public discussion of the problem among nonexperts. Why not?