Archives for December 2010


Burn Notice, season 4, and the abuse of the MacGuffin

One of my favorite TV shows is Burn Notice. It’s something of a spy show, with a certain number of gadgets but generally no James Bond-esque Q to supply equipment that’s certainly beyond the reach of real-world spycraft. Burn Notice instead focuses on the value of teamwork, advance planning, and clever subterfuge to pull off its various operations, combined with a certain amount of humor and romance to keep the story compelling and engaging. You can generally watch along and agree with the feasibility of what they’re doing. Still, when they get closer to technology I actually know something about, I start to wonder.

One thing they recently got right, at least in some broad sense, was the ability to set up a femtocell (cell phone base station) as a way of doing a man-in-the-middle attack against a target’s cell phone. A friend of mine has one of these things, and he was able to set it up to service my old iPhone without anything more than my phone number. Of course, it changed the service name (from “AT&T” to “AT&T Microcell” or something along those lines), but it’s easy to imagine, in a spy-vs-spy scenario, that that would be easy to fix. Burn Notice didn’t show the necessary longer-range antenna or amplifier in order to reach their target, who was inside a building while our wiretapping heroes were out on the street, but I’m almost willing to let them get away with that, never mind having to worry about GSM versus CDMA. Too much detail would detract from the story.

(Real world analogy: Rop Gonggrijp, a Dutch computer scientist who had some tangential involvement with WikiLeaks, recently tweeted: “Foreign intel attention is nice: I finally have decent T-Mobile coverage in my office in the basement. Thanks guys…”)

What’s really bothered me about this season’s Burn Notice, though, was the central plot MacGuffin. Quoting Wikipedia: “the defining aspect of a MacGuffin is that the major players in the story are (at least initially) willing to do and sacrifice almost anything to obtain it, regardless of what the MacGuffin actually is.” MacGuffins are essential to many great works of drama, yet it seems that Hollywood fiction writers haven’t yet adapted the ideas of MacGuffins to dealing with data, and it really bugs me.

Without spoiling too much, Burn Notice‘s MacGuffin for the second half of season 4 was a USB memory stick which happened to have some particularly salacious information on it (a list of employee ID numbers corresponding to members of a government conspiracy), and which lots of people would (and did) kill to get their hands on. Initially we had the MacGuffin riding around on the back of a motorcycle courier; our heroes had to locate and intercept it. Our heroes then had to decide whether to use the information themselves or pass it on to a trusted insider in the government. Later, after various hijinks, wherein our heroes lost the MacGuffin, the bad guy locked it in a fancy safe which our heroes had to physically find and then remove from a cinderblock wall to later open with an industrial drill-press.

When the MacGuffin was connected to a computer, our heroes could read it, but due to some sort of unspecified “cryptography” they were unable to make copies. Had that essential element been more realistic, the entire story would have changed. Never mind that there’s no such “encryption” technology out there. For a show that has our erstwhile heroes regularly use pocket digital cameras to photograph computer screens or other sensitive documents, you’d think they would do something similar here. Nope. The problem is that any realistic attempt to model how easy it is to copy data like this would have blown apart the MacGuffin-centric nature of the plot. Our protagonists could have copied the data, early on, and handed the memory stick over. They could have then handed over bogus data written to the same stick. They could have created thousands of webmail accounts, each holding copies of the data. They could have anonymously sent the incriminating data to any of a variety of third parties, perhaps borrowing some plot elements from the whole WikiLeaks fiasco. In short, there could still have been a compelling story, but it wouldn’t have followed the standard MacGuffin structure, and it would almost certainly have reached a very different conclusion.
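The show’s premise of uncopyable data gets the reality exactly backwards: duplicating a file and verifying that the copy is bit-for-bit identical takes a few lines of code. Here’s a minimal sketch (the function and file names are made up for illustration):

```python
import hashlib
import shutil

def copy_and_verify(src: str, dst: str) -> bool:
    """Copy a file, then confirm the copy is bit-for-bit identical."""
    shutil.copyfile(src, dst)

    def digest(path: str) -> str:
        # SHA-256 of the full file contents; equal digests mean equal bytes.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    return digest(src) == digest(dst)
```

Anyone who can read the stick can run something like this; no “cryptography” stored on the stick itself changes that.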

All in all, it’s probably a good thing I don’t know too much about combat tactics, explosives, or actual spycraft, or I’d be completely unable to enjoy a show like this. I expect James Bond to do impossible things, but I appreciate Burn Notice for its plausibility. I can almost imagine it actually happening.


The Flawed Legal Architecture of the Certificate Authority Trust Model

Researchers have recently criticized the Certificate Authority Trust Model — which involves the issuance and use of digital certificates to authenticate the identity of websites to end-users — because of an array of technical and institutional problems. The criticism is significant not only because of the systemic nature of the noted problems, but also because the Model is universally relied upon by websites offering secure connections (SSL and TLS) to end-users. The Model comes into play in virtually every commercial and business transaction occurring over the Internet, as well as in a wide variety of other confidential and private on-line communications. What has not been addressed to date, however, is the nature of the legal relationships between the parties involved with, or impacted by, the Model.

Steve Schultze and I tackle this topic in our recent article “The Certificate Authority Trust Model for SSL: A Defective Foundation for Encrypted Web Traffic and a Legal Quagmire.” We looked at the standard legal documents issued by the certificate authorities or “CAs,” including exemplar Subscriber Agreements (agreements between CAs and website operators); “Certification Practice Statements” (statements by CAs outlining their business practices); and Relying Party Agreements (purported agreements between CAs and “relying parties,” such as end-users). What we found was surprising:

  • “Relying Party Agreements” that purport to bind end-users to their terms despite the apparent absence of any mechanism to either affirmatively alert the end-user to the existence of the supposed Agreements or afford the end-user an opportunity to register his or her acceptance or rejection of the Agreements’ terms
  • “Certification Practice Statements” that suffer from the same problem (i.e. no affirmative notice to the end-user and no meaningful opportunity for acceptance or rejection of terms)

There were other issues as well. For example, the Relying Party Agreements and Certification Practice Statements set forth various obligations on the part of end-users (i.e. “relying parties”) such as: the requirement that end-users make an independent determination of whether it is reasonable to trust a website offering a secure connection (isn’t that the whole point of having a CA, so that the end-user doesn’t have to do that?); the requirement that the end-user be familiar with the crypto software and processes used to carry out the authentication process; and the end-user’s duty to indemnify and hold harmless the CA in the event of legal claims by third parties.

Given the absence of notice to the end-user and assent by the end-user, it would appear that many CAs would have a difficult time holding an end-user to the terms of the relying party agreements or certification practice statements. To date, the CA Trust Model’s legal architecture has apparently not been the subject of any published court decision and remains untested.

The bottom line is that the CA Trust Model’s legal architecture inures to the benefit of no one. Neither website operators, certificate authorities, nor end-users can be sure of their rights or exposure. The Model’s legal structure may therefore be just as troubling as its security vulnerabilities.

You can read the full article in PDF form.

[Editor: Steve Roosa gave a followup luncheon talk at CITP entitled The Devil is in the Indemnity Agreements: A Critique of the Certificate Authority Trust Model’s Putative Legal Foundation. Slides and audio are now posted.]


Ninth Circuit Ruling in MDY v. Blizzard

The Ninth Circuit has ruled on the MDY v. Blizzard case, which involves contract, copyright, and DMCA claims. As with the district court ruling, I’ll withhold comment due to my involvement as an expert in the case, but the decision may be of interest to FTT readers.

[Editor: The EFF has initial reactions here. Techdirt also has an overview.]


Court Rules Email Protected by Fourth Amendment

Today, the United States Court of Appeals for the Sixth Circuit ruled that the contents of the messages in an email inbox hosted on a provider’s servers are protected by the Fourth Amendment, even though the messages are accessible to an email provider. As the court puts it, “[t]he government may not compel a commercial ISP to turn over the contents of a subscriber’s emails without first obtaining a warrant based on probable cause.”

This is a very big deal; it marks the first time a federal court of appeals has extended the Fourth Amendment to email with such care and detail. Orin Kerr calls the opinion, at least on his initial read, “quite persuasive” and “likely . . . influential,” and I agree, but I’d go further: this is the opinion privacy activists and many legal scholars, myself included, have been waiting and calling for, for more than a decade. It may someday be seen as a watershed moment in the extension of our Constitutional rights to the Internet.

And it may have a more immediate impact on Capitol Hill, because in its ruling the Sixth Circuit also declares part of the Stored Communications Act (SCA) of the Electronic Communications Privacy Act unconstitutional. 18 U.S.C. 2703(b) allows the government to obtain email messages with less than a search warrant. This section has been targeted for amendment by the Digital Due Process coalition of companies, privacy groups, and academics (I have signed on) for precisely the flaw this opinion now attacks: it allows warrantless government access to communications stored online. I am sure some congressional staffers are paying close attention to this opinion, and I hope it helps clear the way for an amendment to the SCA, to fix a now-declared unconstitutional law, if not during the lame duck session, then early in the next Congressional term.

Update: Other reactions from Dissent and the EFF.


Two Stories about the Comcast/Level 3 Dispute (Part 2)

In my last post I told a story about the Level 3/Comcast dispute that portrays Comcast in a favorable light. Now here’s another story that casts Comcast as the villain.

Story 2: Comcast Abuses Its Market Power

As Steve explained, Level 3 is an “Internet Backbone Provider.” Level 3 has traditionally been considered a tier 1 provider, which means that it exchanges traffic with other tier 1 providers without money changing hands, and bills everyone else for connectivity. Comcast, as a non-tier 1 provider, has traditionally paid Level 3 to carry its traffic to places Comcast’s own network doesn’t reach directly.

Steve is right that the backbone market is highly competitive. I think it’s worth unpacking why this is in a bit more detail. Let’s suppose that a Comcast user wants to download a webpage from Yahoo!, and that both are customers of Level 3. So Yahoo! sends its bits to Level 3, which passes them along to Comcast. And traditionally, Level 3 would bill both Yahoo! and Comcast for the service of moving data between them.

It might seem like Level 3 has a lot of leverage in a situation like this, so it’s worth considering what would happen if Level 3 tried to jack up its prices. There are reportedly around a dozen other tier 1 providers that exchange traffic with Level 3 on a settlement-free basis. This means that if Level 3 over-charges Comcast for transit, Comcast can go to one of Level 3’s competitors, such as Global Crossing, and pay it to carry its traffic to Level 3’s network. And since Global Crossing and Level 3 are peers, Level 3 gets nothing for delivering traffic to Global Crossing that’s ultimately bound for Comcast’s network.

A decade ago, when Internet Service Retailers (to use Steve’s terminology) were much smaller than backbone providers, that was the whole story. The retailers didn’t have the resources to build their own global networks, and their small size meant they had relatively little bargaining power against the backbone providers. So the rule was that Internet Service Retailers charged their customers for Internet access, and then passed some of that revenue along to the backbone providers that offered global connectivity. There may have been relatively little competition in the retailer market, but this didn’t have much effect on the overall structure of the Internet because no single retailer had enough market power to go toe-to-toe with the backbone providers.

A decade of consolidation and technological progress has radically changed the structure of the market. About 16 million households now subscribe to Comcast’s broadband service, accounting for at least 20 percent of the US market. This means that a backbone provider that doesn’t offer fast connectivity to Comcast’s customers will be putting itself at a significant disadvantage compared with companies that do. Comcast still needs access to Level 3’s many customers, of course, but Level 3 needs Comcast much more than it needed any single Internet retailer a decade ago.

Precedent matters in any negotiated relationship. You might suspect that you’re worth a lot more to your boss than what he’s currently paying you, but by accepting your current salary when you started the job you’ve demonstrated you’re willing to work for that amount. So until something changes the equilibrium (like a competing job offer), your boss has no particular incentive to give you a raise. One strategy for getting a raise is to wait until the boss asks you to put in extra hours to finish a crucial project, and then ask for the raise. In that situation, not only does the boss know he can’t lose you, but he knows you know he can’t lose you, and that you therefore aren’t likely to back down.

Comcast seems to have pursued a similar strategy. If Comcast had simply approached Level 3 and demanded that Level 3 start paying Comcast, Level 3 would have assumed Comcast was bluffing and said no. But when Level 3 won the Netflix contract, Level 3 suddenly needed a rapid and unexpected increase in connectivity to Comcast. And Comcast bet, correctly as it turned out, that Level 3 was so desperate for that additional capacity that it would have no choice but to pay Comcast for the privilege.

If Comcast’s gambit becomes a template for future negotiations between backbone providers and broadband retailers, it could represent a dramatic change in the economics of the Internet. This is because it’s much harder for a backbone provider to route around a retailer than vice versa. As we’ve seen, Comcast can get to Level 3’s customers by purchasing transit from some other backbone provider. But traffic bound for Comcast’s residential customers has to go through Comcast’s network. And Level 3’s major customers—online content providers like Netflix—aren’t going to pay for transit services that don’t reach 20 percent of American households. So Level 3 is in a weak bargaining position.

In the long run, this could be very bad news for online businesses like Netflix, because their bandwidth costs would no longer be constrained by the robust competition in the backbone market. Netflix apparently got a good deal from Level 3 in the short run. But if a general practice emerges of backbone providers paying retailers for interconnection, those costs are going to get passed along to the backbone providers’ own customers, e.g. Netflix. And once the precedent is established that retailers get to charge backbone providers for connectivity, their ability to raise prices may be much less constrained by competition.


So which story is right? If I knew the answer to that I wouldn’t have wasted your time with two stories. And it’s worth noting that these stories are not mutually exclusive. It’s possible that Comcast has been looking for an opportunity to shift the balance of power with its transit providers, and the clumsiness of Level 3’s CDN strategy gave them an opportunity to do so in a way that minimizes the fallout.

One sign that story #2 might be wrong is that content providers haven’t raised much of a stink. If the Comcast/Level 3 dispute represented a fundamental shift in power toward broadband providers, you’d expect the major content providers to try to form a united front against them. Yet there’s nothing about the dispute on (for example) the Google Public Policy blog, and I haven’t seen any statements on the subject from other content providers. Presumably they’re following this dispute more closely than I am, and understand the business issues better than I do, so if they’re not concerned that suggests maybe I shouldn’t be either.

A final thought: one place where I’m pretty sure Level 3 is wrong is in labeling this a network neutrality dispute. Although the dispute was precipitated by Netflix’s decision to switch CDN providers, there’s little reason to think Comcast is singling out Netflix traffic for special treatment. In story #1, Comcast would be happy to deliver Netflix (or any other) content via a well-designed CDN; they just object to having their bandwidth wasted. In story #2, Comcast’s goal is to collect payments for all inbound traffic, not just traffic from Netflix. Either way, Comcast hasn’t done anything that violates leading network neutrality proposals. Comcast is not discriminating, and hasn’t threatened to discriminate, against any particular type of traffic. And no, declining to upgrade a peering link doesn’t undermine network neutrality.


Two Stories about the Comcast/Level 3 Dispute (Part 1)

Like Steve and a lot of other people in the tech policy world, I’ve been trying to understand the dispute between Level 3 and Comcast. The combination of technical complexity and commercial secrecy has made the controversy almost impenetrable for anyone outside of the companies themselves. And of course, those who are at the center of the action have a strong incentive to mislead the public in ways that make their own side look better.

So building on Steve’s excellent post, I’d like to tell two very different stories about the Level 3/Comcast dispute. One puts Level 3 in a favorable light and the other slants things more in Comcast’s favor.

Story 1: Level 3 Abuses Its Customer Relationships

As Steve explained, a content delivery network (CDN) is a network of caching servers that help content providers deliver content to end users. Traditionally, Netflix has used CDNs like Akamai and Limelight to deliver its content to customers. The dispute began shortly after Level 3 beat out these CDN providers for the Netflix contract.

The crucial thing to note here is that CDNs can save Comcast, and other broadband retailers, a boatload of money. In a CDN-free world, a content provider like Netflix would send thousands of identical copies of its content to Comcast customers, consuming Comcast’s bandwidth and maybe even forcing Comcast to pay transit fees to its upstream providers.

Akamai reportedly installs its caching servers at various points inside the networks of retailers like Comcast. Only a single copy of the content is sent from the Netflix server to each Akamai cache; customers then access the content from the caches. Because these caches are inside Comcast’s network, they never require Comcast to pay for transit to receive them. And because there are many caches distributed throughout Comcast’s network (to improve performance), content delivered by them is less likely to consume bandwidth on expensive long-haul connections.
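The economics above hinge on a simple property of caches: the origin pays to ship each object to an edge server once, and every request after that is served locally. A toy sketch of that behavior (the class and names here are invented for illustration, not any real CDN’s API):

```python
class EdgeCache:
    """Minimal sketch of a CDN edge cache: fetch each object from the
    origin server once, then serve all later requests locally."""

    def __init__(self, origin_fetch):
        self._fetch = origin_fetch   # callable simulating a trip to the origin
        self._store = {}             # locally cached objects
        self.origin_requests = 0     # how many times we hit the origin

    def get(self, key):
        if key not in self._store:
            self.origin_requests += 1
            self._store[key] = self._fetch(key)
        return self._store[key]
```

A thousand subscribers streaming the same show generate a thousand `get` calls but only one trip across the expensive long-haul link, which is exactly why a cache deep inside the retailer’s network saves so much money.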

Now Level 3 wants to enter the CDN marketplace, but it decides to pursue a different strategy. For Akamai, deploying its servers inside of Comcast’s network saves both Comcast and Akamai money, because Akamai would otherwise have to pay a third party to carry its traffic to Comcast. But as a tier 1 provider, Level 3 doesn’t have to pay anyone for connectivity, and indeed in many cases third parties pay them for connectivity. Hence, placing the Level 3 servers inside of the Level 3 network is not only easier for Level 3, but in some cases it might actually generate extra revenue, as Level 3’s customers have to pay for the extra traffic.

This dynamic might explain the oft-remarked-upon fact that Comcast seems to be simultaneously a peer and a customer of Level 3. Comcast pays Level 3 to carry traffic to and from distant networks that Comcast’s own network does not reach—doing so is cheaper than building its own worldwide backbone network. But Comcast is less enthusiastic about paying Level 3 for traffic that originates from Level 3’s own network (known as “on-net” traffic).

And even if Comcast isn’t paying for Level 3’s CDN traffic, it’s still not hard to understand Comcast’s irritation. When two companies sign a peering agreement, the assumption is typically that each party is doing roughly half the “work” of hauling the bits from source to destination. But in this case, because the bits are being generated by Level 3’s CDN servers, the bits are traveling almost entirely over Comcast’s network.

Hauling traffic all the way from the peering point to Comcast’s customers will consume more of Comcast’s network resources than hauling traffic from Akamai’s distributed CDN servers did. And to add insult to injury, Level 3 apparently only gave Comcast a few weeks’ notice of the impending traffic spike. So faced with the prospect of having to build additional infrastructure to accommodate this new, less efficient method for delivering Netflix bits to Comcast customers, Comcast asked Level 3 to help cover the costs.

Of course, another way to look at this is to say that Comcast (and other retailers like AT&T and Time Warner) brought the situation on themselves by over-charging Akamai for connectivity. I’ve read conflicting reports about whether and how much Comcast has traditionally charged Akamai for access to its network (presumably these details are trade secrets), but some people have suggested that Comcast charges Akamai for bandwidth and cabinet space even when their servers are deep inside Comcast’s own network. If that’s true, it may be penny wise and pound foolish on Comcast’s part, because if Akamai is not able to win big customers like Netflix, then Comcast will have to pay to haul that traffic halfway across the Internet itself.

In my next post I’ll tell a different story that casts Comcast in a less flattering light.


Smart electrical meters and their smart peripherals

When I was a college undergraduate, I lived in a 1920s duplex and I recall my roommate and me trying to figure out where our electrical bill was going. He was standing outside by the electrical meter, I was turning things on and off, and we were yelling back and forth so we could sort out which gadgets were causing the wheel to spin faster. (The big power sinks? Our ancient 1950s refrigerator and my massive-for-the-day 20-inch computer monitor.) Needless to say, this was more difficult than it should have been.

More recently, I got myself a Kill-a-Watt inline power meter which you can use at any power outlet, but it’s a pain. You have to unplug something to measure its usage. People with the big bucks will spring for a TED 5000 system, which an electrician installs in your breaker box. That’s fantastic, but it’s not cheap or easy.

Today, I’m now the proud new owner of an LS Research “RateSaver”, which speaks ZigBee wireless to the “smart meter” that CenterPoint Energy installed on all the houses in our area. How did I get this thing? I went to a League of Women Voters “meet the candidates” event back in October and CenterPoint Energy had a display there. I asked the guy if I could get one of these things and he said he’d look into it for me. Fast forward two months, and a box arrived in the mail. New toy!

So what exactly is it? It’s a battery-powered light-weight box with a tolerably readable two-inch monochrome LCD display. As I’m sitting here typing, it’s updating my “current usage” every few seconds and is giving me a number that’s ostensibly accurate to the watt. In the last minute, after I pressed the proper button, it’s been alternating between reading 650-750 watts and 1400-1500 watts. (Hmm… maybe my fridge consumes 700 watts.) If you leave it alone, the refresh rate slows down to maybe once a minute. Also, it sometimes reads “0.000 kW”, which is clearly incorrect, but it returns to the proper number when I press the button. Wireless range is quite good. I’m on the opposite side of the house from our electrical meter and it’s working fine.

The user interface is all kinds of terrible. In addition to slow button response, the button labels are incorrect. LS Research is apparently just rebranding a Honeywell Home Energy Display (for which the Honeywell manual was included). LS Research apparently rearranged the button labels without changing the corresponding software. Bravo! Thankfully, the Honeywell manuals have the proper labeling. Also amusing: there’s a message in the system saying that “non-peak price starts at 7:00 PM. Save Money by waiting” when in fact my electrical pricing deal is for a flat rate (which floats with market conditions and is presently $0.0631 per kWh).
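For what it’s worth, readings like these translate directly into dollars. A back-of-the-envelope sketch using the flat rate quoted above (the one-third duty cycle I assume for the fridge is a guess, not a measurement):

```python
def monthly_cost(watts: float, rate_per_kwh: float, hours: float = 730.0) -> float:
    """Cost of a load drawing `watts` continuously for a month (~730 hours)."""
    kwh = watts / 1000.0 * hours   # watts -> kilowatt-hours over the month
    return kwh * rate_per_kwh

# The ~700 W swing I attributed to the fridge, if it runs a third of the time:
# monthly_cost(700 / 3, 0.0631) comes to roughly $10.75/month.
```

Seeing the per-gadget numbers in real time beats the yelling-across-the-yard method from my undergraduate days.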

Update: I’ve since learned that Honeywell acquired LS Research, so this is something of a transitional screw-up. Welcome to the world of beta products.

Since I’m a security guy, I assumed I’d have to go through some kind of protocol to get the thing activated, and the manual from inside the box describes an activation procedure where you make a phone call to your energy company, giving them the hardware ID numbers of the outdoor smart meter and the indoor display box. Conflicting instructions were also included with my display, describing a setup as simple as “turn it on and hit the connect button,” so I went with the easy instructions. Time passed and the box started working without requiring any additional input from me. I hope that my display box was pre-configured to work exclusively with my house, but this does lead me to wonder whether they got the security right. (I experimentally turned lights on and off while watching the meter updates and validated that I am, in fact, looking at the usage of my own house.)

At the end of the day, everybody here, myself included, is now required to pay a $3.24 “advanced meter charge” in order to have all this functionality (which, incidentally, saves the electric company money since it no longer needs human meter readers). Is it worth it? Presumably, at some point I’ll have some kind of variable-priced electricity and I could then hack my refrigerator and air conditioning system to pay attention to the spot price of electricity. If electricity got extra cheap during a five minute window of the hot summer, the controller could then crank the A/C and drop the house an extra few degrees. Of course, if everybody was following this same algorithm, you’d either have insane demand swings, when everybody jumps on to consume cheaper electricity when it’s available, or you’d have to carefully engineer the pricing system such that you had stable demand. Presumably, if you got somebody who understood control theory to design this properly, you could end up incentivizing both demand and pricing to be fairly stable across the space of any given hour of the day.
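The pre-cooling idea could start as nothing more than a thermostat rule keyed to the spot price, and the random jitter below hints at one crude way to keep every house from jumping at the same instant. All thresholds, setpoints, and names here are invented for illustration, not from any real tariff or controller:

```python
import random

def target_setpoint(spot_price: float, cheap_threshold: float,
                    normal_f: float = 78.0, precool_f: float = 74.0,
                    jitter: float = 0.2) -> float:
    """Return the thermostat setpoint (deg F) given the current spot price.

    Randomizing each household's effective threshold slightly spreads out
    the moment at which everyone starts pre-cooling, damping the demand
    swings you'd get if every controller reacted identically.
    """
    effective = cheap_threshold * (1.0 + random.uniform(-jitter, jitter))
    return precool_f if spot_price < effective else normal_f
```

A real design would need the control-theory treatment the paragraph above calls for; this sketch just shows where a price signal would plug in.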

Probably the biggest benefit of these smart meters will come the next time we have a major hurricane that comes through and knocks out power. Hurricane Ike left my house without power for ten days. At the time, CenterPoint Energy had a vague and useless web site that would give you an idea of which neighborhoods were being repaired. Since it was too hot to stay in our house, we stayed instead with a friend who had power and drove by our place every day to see if it had power. This was very frustrating. (I unplugged all my computer equipment, since I didn’t want flaky power to nuke it. Consequently, I couldn’t just do something simple like ping my home computer.) Today, I can log into CenterPoint Energy’s web site and see the power consumption of my house, in 15-minute intervals, and so can the people coordinating the repairs. If they integrated that with a mapping system, they’d have real-time blackout maps, which have obvious value to emergency planners and repair operations coordination.
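Building a blackout map from that interval data wouldn’t take much: a meter reporting an hour of exactly zero usage is a decent outage signal. A sketch of the idea (the one-hour window and function name are arbitrary choices of mine, not anything CenterPoint does):

```python
def likely_outage(readings_kwh: list[float], window: int = 4) -> bool:
    """Flag a meter as likely without power if the most recent `window`
    15-minute interval readings are all zero (default: one full hour).

    A house that is merely idle still draws something (fridge, clocks,
    standby loads), so a sustained run of exact zeros is suspicious.
    """
    recent = readings_kwh[-window:]
    return len(recent) == window and all(r == 0.0 for r in recent)
```

Run that across every meter in a neighborhood, drop the flagged ones on a map, and the repair crews have their real-time blackout view.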

I just hope they have somebody with a clue looking over the security of their system. (If somebody from CenterPoint reads this: people like me are more than happy to do private security evaluations, red-team exercises, and so forth.)

Future work: there’s a mini USB port on the side of the box. Now I just have to find some documentation. It’s probably bad form for me to go reverse-engineer it myself.


Unpeeling the mystique of tamper-indicating seals

As computer scientists have studied the trustworthiness of different voting technologies over the past decade, we have noticed that “security seals” are often used by election officials. It’s natural to wonder whether they provide any real security, or whether they are just for show. When Professor Avi Rubin volunteered as an election judge (Marylandese for pollworker) in 2006, one of his observations that I found most striking was this:


“For example, I carefully studied the tamper tape that is used to guard the memory cards. In light of Hursti’s report, the security of the memory cards is critical. Well, I am 100% convinced that if the tamper tape had been peeled off and put back on, nobody except a very well trained professional would notice it. The tamper tape has a tiny version of the word “void” appear inside it after it has been removed and replaced, but it is very subtle. In fact, a couple of times, due to issues we had with the machines, the chief judge removed the tamper tape and then put it back. One time, it was to reboot a machine that was hanging when a voter was trying to vote. I looked at the tamper tape that was replaced and couldn’t tell the difference, and then it occurred to me that instead of rebooting, someone could mess with the memory card and replace the tape, and we wouldn’t have noticed. I asked if I could play with the tamper tape a bit, and they let me handle it. I believe I can now, with great effort and concentration, tell the difference between one that has been peeled off and one that has not. But, I did not see the judges using that kind of care every time they opened and closed them. As far as I’m concerned, the tamper tape does very little in the way of actual security, and that will be the case as long as it is used by lay poll workers, as opposed to CIA…”

Avi is a first-rate expert in the field of computer security, in part because he’s a good experimentalist—as in, “I asked if I could play with the tamper tape.” To the nonexpert, security seals have a mystique: there’s this device there, perhaps a special tape or perhaps a thing that looks like a little blue plastic padlock. Most of us encounter these devices in a context where we can’t “play with” them, because that would be breaking the rules: on voting machines, on our electric meter, or whatever. Since we don’t play with them, we can’t tell whether they are secure, and the mystique endures. As soon as Avi played with one, he discovered that it’s not all that secure.

In fact, we have a word for a piece of tape that only gives the appearance of working:

band-aid: (2) a temporary way of dealing with a problem that will not really solve it (Macmillan Dictionary)

In the last couple of years I’ve been studying security seals on voting machines in New Jersey. For many decades New Jersey law has required that each voting machine be “sealed with a numbered seal” just after it is prepared for each election (NJSA 19:48-6). Unfortunately, it’s hard for legislators to write into the statutes exactly how well these seals must work. Are tamper-indicating seals used in elections really secure? I’ll address that question in my next few articles.


Trying to Make Sense of the Comcast / Level 3 Dispute

[Update: I gave a brief interview to Marketplace Tech Report]

The last 48 hours have given rise to a fascinating dispute between Level 3 (a major internet backbone provider) and Comcast (a major internet service retailer). The dispute involves both technical principles and fuzzy facts, so I am writing this post more as an attempt to sort out the details in collaboration with commenters than as a definitive guide. Before we get to the facts, let’s define some terms:

Internet Backbone Provider: These are companies, like Level 3, that transport the majority of the traffic at the core of the Internet. I say the “core” because they don’t typically provide connections to the general public; instead, they do the majority of their routing using the Border Gateway Protocol (BGP) and deliver traffic from one Autonomous System (AS) to another. Each backbone provider is its own AS, but so are Internet Service Retailers. Backbone providers will often agree to “settlement-free peering” with each other, in which they deliver each other’s traffic for no fee.
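The transit-versus-peering distinction drives everything that follows, so here is a toy sketch of it in code. The AS names and the agreement list are hypothetical illustrations I made up; they are not real contracts or real traffic.

```python
# Toy model of transit vs. settlement-free peering between autonomous
# systems. Names and agreements below are invented for illustration.

AGREEMENTS = {
    # (provider, customer): "transit" means the customer pays the provider;
    # "peering" means the two ASes carry each other's traffic for free.
    ("BackboneA", "RetailerX"): "transit",
    ("BackboneA", "BackboneB"): "peering",
}

def who_pays(as1, as2):
    """Describe the settlement between two ASes, if they have one."""
    for (provider, customer), kind in AGREEMENTS.items():
        if {provider, customer} == {as1, as2}:
            if kind == "peering":
                return "settlement-free: neither pays"
            return f"transit: {customer} pays {provider}"
    return "no direct agreement"

print(who_pays("BackboneA", "RetailerX"))  # transit: RetailerX pays BackboneA
print(who_pays("BackboneA", "BackboneB"))  # settlement-free: neither pays
```

The dispute below is, in effect, an argument about which row of a table like this the Comcast/Level 3 relationship belongs in.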

Internet Service Retailers: These are companies that build the “last mile” of internet infrastructure to the general public and sell service. I’ve called them “Retailers” even though most people have traditionally called them Internet Service Providers (the ISP term can get confusing). Retailers sign up customers with the promise of connecting them to the backbone, and then sign “transit” agreements to pay the backbone providers for delivering the traffic that their customers request.

Content Delivery Networks: These are companies like Akamai that provide an enhanced service compared to backbone providers because they specialize in physically locating content closer to the edges (such that many copies of the content are stored in parts of the network that are closer to end-users). The benefit is that the content is theoretically faster and more reliable for end-users to access because it has to traverse fewer “hops.” CDNs will often sign agreements with Retailers to interconnect at many locations close to the end-users, and even to rent space to put their servers in the Retailer’s facilities (a practice called co-location).
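The “fewer hops” advantage can be sketched in a few lines. The replica names and hop counts below are made up for illustration; real CDNs pick replicas using measured latency, load, and routing data, not a single hop count.

```python
# Sketch of the CDN benefit: serve content from whichever copy is the
# fewest network hops from the user. Hop counts here are hypothetical.

replicas = {
    "origin-datacenter": 12,        # content at the origin, far from users
    "cdn-node-in-retailer-pop": 2,  # co-located copy near the last mile
}

def best_source(hop_counts):
    # Fewer hops generally means lower latency and fewer failure points.
    return min(hop_counts, key=hop_counts.get)

print(best_source(replicas))  # cdn-node-in-retailer-pop
```

This is why CDNs are willing to pay Retailers for local interconnection and co-location: the payment buys the short path.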

Akamai and Limelight Networks have traditionally provided delivery of Netflix content to Comcast customers as CDNs, and paid Comcast for local interconnection and colocation. Level 3, on the other hand, has a longstanding transit agreement with Comcast in which Comcast pays Level 3 to provide its customers with access to the internet backbone. Level 3 signed a deal with Netflix to become the primary provider of their content instead of the existing CDNs. Rather than change its business relationship with Comcast to something more akin to a CDN’s, in which it pays to locally interconnect and colocate, Level 3 hoped to continue to be paid by Comcast for providing backbone connectivity to its customers. Evidently, it thought that the current terms of its transit agreement with Comcast provided sufficient speed and reliability to satisfy Netflix. Comcast realized that it would simultaneously lose the revenue from the existing CDNs that paid it for local services, and it would have to pay Level 3 more for backbone connectivity because more traffic would be traversing those links (apparently a whole lot more). Comcast decided instead to try to charge Level 3, which didn’t sound like a good deal to Level 3. Level 3 published a press release saying Comcast was trying to unfairly leverage its exclusive control of end-users. Comcast sent a letter to the FCC saying that nothing unfair was going on and this was just a run-of-the-mill peering dispute. Level 3 replied that it was no such thing. [Updates: Comcast told the FCC that they really do originate a lot of traffic and should be considered a backbone provider. Level 3 released their own FAQ, discussing the peering issue as well as the competitive issues. AT&T blogged in support of Comcast, and Level 3 said that AT&T “missed the point completely.”]

Comcast’s attempt to describe the dispute as something akin to a peering dispute between backbone providers strikes me as misleading. Comcast is not a backbone provider that can deliver packets to an arbitrary location on the internet (a location that many other backbone providers might also be able to deliver to). Instead, Comcast is representing only its end-users, and it is doing so exclusively. What’s more, it has never had a settlement-free peering agreement with Level 3 (always transit, with Comcast paying). [Edit: see my clarification below in which I raise the possibility that it may have had both agreements at the same time, but relating to different traffic.] Indeed, the very nature of retail broadband service is that download quantity (or the traffic going into the Comcast AS) far exceeds upload quantity. In Comcast’s view of the world, therefore, all of their transit agreements should be reversed such that the backbone providers pay them for the privilege of reaching their users.
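The traffic asymmetry at the heart of Comcast’s argument can be made concrete with a hedged sketch. Backbone peers commonly condition settlement-free peering on roughly balanced traffic; the 2:1 threshold and the byte counts below are invented for illustration, since real peering policies vary and are rarely public.

```python
# Toy traffic-ratio test of the kind backbone providers sometimes use to
# decide whether a link qualifies for settlement-free peering. The 2:1
# threshold and all figures are hypothetical.

def peering_candidate(bytes_in, bytes_out, max_ratio=2.0):
    """True if traffic in each direction is within max_ratio of the other."""
    hi, lo = max(bytes_in, bytes_out), min(bytes_in, bytes_out)
    return lo > 0 and hi / lo <= max_ratio

# A retail ISP: downloads (traffic in) dwarf uploads (traffic out).
print(peering_candidate(bytes_in=10_000, bytes_out=500))   # False
# Two backbones exchanging comparable volumes.
print(peering_candidate(bytes_in=9_000, bytes_out=7_000))  # True
```

By this kind of test, a pure last-mile retailer will essentially never look like a peer, which is why treating the dispute as an ordinary peering disagreement is contested.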

Why is this a problem? Won’t the market sort it out? First, the backbone market is still relatively competitive, and within that market I think that economic forces stand a reasonable chance of finding an efficient outcome, leaving relatively little room for anti-competitive shenanigans. However, these market dynamics can fall apart when you add last-mile providers to the mix. Last-mile providers by their nature have at least a temporary monopoly on serving a given customer and often (in the case of a provider like Comcast) a local near-monopoly on high-performance broadband service altogether. Historically, the segmentation between the backbone market and the last-mile market has prevented shenanigans in the latter from seeping into the former. Two significant changes have occurred that alter this balance: 1) Comcast has grown to the size that it exerts tremendous power over a large portion of broadband retail customers, with far less competition than in the past (for example, the era of dial-up), and 2) Level 3 has sought to become the exclusive provider of certain desirable online content, but without the same network and business structure as traditional CDNs.

The market analysis becomes even more complicated in a scenario in which the last-mile provider has a vertically integrated service that competes with services being provided over the backbone provider with which it interconnects. Comcast’s basic video service clearly competes with Netflix and other internet video. In addition, Comcast’s TV Everywhere service (in partnership with HBO) competes with other computer-screen on-demand video services. Finally, the pending Comcast/NBCU merger (under review by the FCC and DoJ) implicates Hulu and a far greater degree of vertical integration with content providers. This means that in addition to its general incentives to price-squeeze backbone providers, Comcast clearly has an incentive to discriminate against other online video providers (either by altering speed or by charging more than what a competitive market would yield).

But what do you all think? You may also find it worthwhile to slog through some of the traffic on the NANOG email list, starting roughly here.

[Edit: I ran across this fascinating blog post on the issue by Global Crossing, a backbone provider similar to Level 3.]

[Edit: Take a look at this fantastic overview of the situation in a blog post from Adam Rothschild.]