May 17, 2024

The Next Step towards an Open Internet

Now that the FCC has finally acted to safeguard network neutrality, the time has come to take the next step toward creating a level playing field on the rest of the Information Superhighway. Network neutrality rules are designed to ensure that large telecommunications companies do not squelch free speech and online innovation. However, it is increasingly evident that broadband companies are not the only threat to the open Internet. In short, federal regulators need to act now to safeguard social network neutrality.

There could be no better time to examine this issue. Facebook is the dominant social network in countries other than Brazil, where everybody uses Friendster or something. It has achieved near-monopoly status in the social networking market and now dominates the web, permeating all aspects of the information landscape. More than 2.5 million websites have integrated with Facebook. Indeed, there is evidence that people are turning to social networks instead of faceless search engines for many types of queries.

Social networks will soon be the primary gatekeepers standing between average Internet users and the web’s promise of information utopia. But can we trust them with this new-found power? Friends are unlikely to be an unbiased or complete source of information on most topics, creating silos of ignorance among the disparate components of the social graph. Meanwhile, social networks will have the power to make or break Internet businesses built atop the enormous quantity of referral traffic they will be able to generate. What will become of these businesses when friendships and tastes change? For example, there is recent evidence that social networks are hastening the decline of the music industry by promoting unknown artists who provide their music and streaming videos for free.

Social network usage patterns reflect deep divisions of race and class. Unregulated social networks could rapidly become virtual gated communities, with users cut off from others who could provide them with a diversity of perspectives. Right now, there’s no regulation of the immense decision-influencing power that friends have, and there are no measures in place to ensure that friends provide a neutral and balanced set of viewpoints. Fortunately, policy-makers have a rare opportunity to preempt the dangerous consequences of leaving this new technology to develop unchecked.

The time has come to create a Federal Friendship Commission to ensure that the immense power of social networks is not abused. For example, social network users who have their friend requests denied currently have no legal recourse. Users should have the option to appeal friend rejections to the FFC to verify that they don’t violate social network neutrality. Unregulated social networks will give many users a distorted view of the world dominated by the partisan, religious, and cultural prejudices of their immediate neighbors in the social graph. The FFC can correct this by requiring social networks to give equal time to any biased wall post.

However, others have suggested lighter-touch regulation, simply requiring each person to have friends of many races, religions, and political persuasions. Still others have suggested allowing information harms to be remedied through direct litigation—perhaps via tort reform that recognizes a new private right of action against violations of the “duty to friend.” As social networking software will soon be found throughout all aspects of society, urgent intervention is needed to forestall “The Tyranny of The Farmville.”

Of course, social network neutrality is just one of the policy tools regulators should use to ensure a level playing field. For example, the Department of Justice may need to more aggressively employ its antitrust powers to combat the recent dangerous concentration of social networking market share on popular micro-blogging services. But enacting formal social network neutrality rules is an important first step towards a more open web.

Trying to Make Sense of the Comcast / Level 3 Dispute

[Update: I gave a brief interview to Marketplace Tech Report]

The last 48 hours have given rise to a fascinating dispute between Level 3 (a major internet backbone provider) and Comcast (a major internet service retailer). The dispute involves both technical principles and fuzzy facts, so I am writing this post more as an attempt to sort out the details in collaboration with commenters than as a definitive guide. Before we get to the facts, let’s define some terms:

Internet Backbone Provider: These are companies, like Level 3, that transport the majority of the traffic at the core of the Internet. I say the “core” because they don’t typically provide connections to the general public; instead, they do the majority of their routing using the Border Gateway Protocol (BGP), delivering traffic from one Autonomous System (AS) to another. Each backbone provider is its own AS, but so are Internet Service Retailers. Backbone providers will often agree to “settlement-free peering” with each other, in which they deliver each other’s traffic for no fee.

Internet Service Retailers: These are companies that build the “last mile” of internet infrastructure to the general public and sell service. I’ve called them “Retailers” even though most people have traditionally called them Internet Service Providers (the ISP term can get confusing). Retailers sign up customers with the promise of connecting them to the backbone, and then sign “transit” agreements to pay the backbone providers for delivering the traffic that their customers request.

Content Delivery Networks: These are companies like Akamai that provide an enhanced service compared to backbone providers because they specialize in physically locating content closer to the edges (such that many copies of the content are stored in a part of the network that is closer to end-users). The benefit of this is that the content is theoretically faster and more reliable for end-users to access because it has to traverse fewer “hops.” CDNs will often sign agreements with Retailers to interconnect at many locations that are close to the end-users, and even to rent space to put their servers in the Retailer’s facilities (a practice called co-location).
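
For readers who want the three roles above pinned down, here is a toy sketch in Python of who conventionally pays whom in each kind of relationship. This is my own simplification for illustration only; real interconnection agreements are far messier.

```python
# Toy model of the business relationships described above. The
# relationship names and payment directions are simplifications for
# illustration; real agreements vary widely in their terms.

PAYMENT_CONVENTIONS = {
    "transit": ("retailer", "backbone provider"),          # retailer pays for access to the rest of the internet
    "settlement-free peering": (None, None),                # backbones exchange traffic at no charge
    "CDN interconnection/colocation": ("CDN", "retailer"),  # CDN pays to sit close to the end-users
}

def describe(relationship):
    payer, payee = PAYMENT_CONVENTIONS[relationship]
    if payer is None:
        return f"{relationship}: no money changes hands"
    return f"{relationship}: {payer} pays {payee}"

for rel in PAYMENT_CONVENTIONS:
    print(describe(rel))
```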

Akamai and Limelight Networks have traditionally provided delivery of Netflix content to Comcast customers as CDNs, and paid Comcast for local interconnection and colocation. Level 3, on the other hand, has a longstanding transit agreement with Comcast in which Comcast pays Level 3 to provide its customers with access to the internet backbone. Level 3 signed a deal with Netflix to become the primary provider of its content instead of the existing CDNs. Rather than change its business relationship with Comcast to something more akin to a CDN’s, in which it pays to locally interconnect and colocate, Level 3 hoped to continue to be paid by Comcast for providing backbone connectivity to Comcast’s customers. Evidently, it thought that the current terms of its transit agreement with Comcast provided sufficient speed and reliability to satisfy Netflix. Comcast realized that it would simultaneously be losing the revenue from the existing CDNs that paid it for local services, and that it would have to pay Level 3 more for backbone connectivity because more traffic (apparently a whole lot more) would be traversing those links. Comcast decided to try to instead charge Level 3, which didn’t sound like a good deal to Level 3. Level 3 published a press release saying Comcast was trying to unfairly leverage its exclusive control of end-users. Comcast sent a letter to the FCC saying that nothing unfair was going on and that this was just a run-of-the-mill peering dispute. Level 3 replied that it was no such thing. [Updates: Comcast told the FCC that it really does originate a lot of traffic and should be considered a backbone provider. Level 3 released its own FAQ, discussing the peering issue as well as the competitive issues. AT&T blogged in support of Comcast, and Level 3 said that AT&T “missed the point completely.”]

Comcast’s attempt to describe the dispute as something akin to a peering dispute between backbone providers strikes me as misleading. Comcast is not a backbone provider that can deliver packets to an arbitrary location on the internet (a location that many other backbone providers might also be able to deliver to). Instead, Comcast is representing only its end-users, and it is doing so exclusively. What’s more, it has never had a settlement-free peering agreement with Level 3 (always transit, with Comcast paying). [Edit: see my clarification below in which I raise the possibility that it may have had both agreements at the same time, but relating to different traffic.] Indeed, the very nature of retail broadband service is that download quantity (or the traffic going into the Comcast AS) far exceeds upload quantity. In Comcast’s view of the world, therefore, all of its transit agreements should be reversed such that the backbone providers pay it for the privilege of reaching its users.

Why is this a problem? Won’t the market sort it out? First, the backbone market is still relatively competitive, and within that market I think that economic forces stand a reasonable chance of finding the optimal efficiency, leaving relatively little room for anti-competitive shenanigans. However, these market dynamics can fall apart when you add last-mile providers to the mix. Last-mile providers by their nature have at least a temporary monopoly on serving a given customer and often (in the case of a provider like Comcast) a local near-monopoly on high-performance broadband service altogether. Historically, the segmentation between the backbone market and the last-mile market has prevented shenanigans in the latter from seeping into the former. Two significant changes have occurred that alter this balance: 1) Comcast has grown to the size that it exerts tremendous power over a large portion of broadband retail customers, with far less competition than in the past (for example, in the era of dial-up), and 2) Level 3 has sought to become the exclusive provider of certain desirable online content, but without the same network and business structure as traditional CDNs.

The market analysis becomes even more complicated in a scenario in which the last-mile provider has a vertically integrated service that competes with services delivered over the backbone provider with which it interconnects. Comcast’s basic video service clearly competes with Netflix and other internet video. In addition, Comcast’s TV Everywhere service (in partnership with HBO) competes with other computer-screen on-demand video services. Finally, the pending Comcast/NBCU merger (under review by the FCC and DoJ) implicates Hulu and a far greater degree of vertical integration with content providers. This means that in addition to its general incentives to price-squeeze backbone providers, Comcast clearly has an incentive to discriminate against other online video providers (either by degrading speed or by charging more than what a competitive market would yield).

But what do you all think? You may also find it worthwhile to slog through some of the traffic on the NANOG email list, starting roughly here.

[Edit: I ran across this fascinating blog post on the issue by Global Crossing, a backbone provider similar to Level 3.]

[Edit: Take a look at this fantastic overview of the situation in a blog post from Adam Rothschild.]

A Major Internet Milestone: DNSSEC and SSL

On July 15th, a small but significant internet event occurred. On that day, years of planning culminated in the deployment of a cryptographic signature on the root DNS zone. To simplify greatly, this means that internet users will soon be able to have a much higher degree of trust in the hierarchical Domain Name System by utilizing the powers of recursion and cryptography. When a user’s computer is told that the IP address for “gmail.com” is 72.14.204.19, the user can be confident that the answer is authentic and has not been forged along the way. This is important if you are someone such as a Chinese dissident who wants to reliably and securely reach gmail.com in order to communicate with your peers. The rollout of this throughout all domains, DNS resolvers, and client applications will take a little while, but the basic infrastructure is now in place.
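
For the technically inclined, here is a minimal sketch of what that deployment means in practice. It assumes the third-party dnspython package and a resolver willing to answer the query; it simply fetches the root zone’s newly published DNSKEY records, the anchor from which the chain of signatures now descends.

```python
# A minimal sketch, assuming the third-party "dnspython" package is
# installed and the local resolver answers DNSKEY queries. It fetches
# the root zone's public keys (the trust anchor that DNSSEC validation
# hangs from) just to show that they are now published.
import dns.resolver

answer = dns.resolver.resolve(".", "DNSKEY")
for key in answer:
    print(key)
```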

This mitigates a certain class of vulnerabilities that web users used to face. Although it forecloses attacks at the domain name-to-IP address stage of requesting a web page, it does not necessarily foreclose attacks at other stages. For instance, an attacker that gets between you and the server you are trying to reach can simply claim that he is the server at 72.14.204.19. Our traditional way of protecting against this style of attack has been to rely on Certificate Authorities — trusted third-parties who certify digital key-pairs only for the true owners of a given domain name. Thus, even if an attacker tries to execute one of these “man-in-the-middle” attacks, he won’t possess the secret portion of the digital key-pair that is required to prove that his communications come from the true gmail.com. Your browser checks for a certified corresponding public key in the process of setting up a secure SSL/TLS connection to https://gmail.com.
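
To make that concrete, here is roughly the check your browser performs, sketched with Python’s standard ssl module (my own illustration, using gmail.com as the example host): the handshake succeeds only if the server presents a certificate for the requested hostname that chains up to a trusted Certificate Authority.

```python
# Rough sketch of the Certificate Authority check described above,
# using only the Python standard library. create_default_context()
# loads the system's trusted CA roots and enables hostname checking,
# so the handshake below fails if a man-in-the-middle cannot present
# a CA-signed certificate for gmail.com.
import socket
import ssl

context = ssl.create_default_context()
with socket.create_connection(("gmail.com", 443)) as sock:
    with context.wrap_socket(sock, server_hostname="gmail.com") as tls:
        cert = tls.getpeercert()
        print("subject:", cert["subject"])
        print("issuer:", cert["issuer"])
```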

Unfortunately, there are several technical, operational, and jurisdictional shortcomings of the Certificate Authority model. As I discussed in an earlier post, many of these problems are not present in the hierarchical and delegated model of DNS. However, DNS does not inherently provide the ability to store domain name-to-key-pair information. But could it? At one of the recent DNSSEC deployment ceremonies, Vint Cerf noted:

More has happened here today than meets the eye. An infrastructure has been created for a hierarchical security system, which can be purposed and re-purposed in a number of different ways. And so I would predict that although we started out putting this system together to assure that the domain name lookups return valid internet addresses, that in the long run this hierarchical structure of trust will be applied to a number of other functions that require strong authentication. And so you will have seen a new major milestone in the internet story.

I believe that storing SSL/TLS keys in DNSSEC-secured DNS records will be the first significant “other function” to emerge. An alternative to Certificate Authorities for domain-to-key mapping is sorely needed. There are two major practical hurdles to getting there: 1) we must define a standard for placing keys in DNS, and 2) we must secure the “last mile” from the service provider’s DNS resolver to the end-user’s computer.

The first hurdle involves the type of standard-setting that the internet community is quite familiar with. On a technical level, it means that we need to collectively decide what these DNS records look like. The second hurdle involves building more functionality into end users’ software so that it can do cryptographic validation of DNS results rather than blindly trusting its upstream DNS resolver. There may be temporary ways to do this within web browser code, but ultimately it will probably have to be built into what is called the “stub resolver” — a local service running on your computer that usually just asks for the results from the upstream resolver.
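
Here is a small sketch of that “last mile” gap, again assuming the third-party dnspython package: a stub that merely queries its upstream resolver can at best inspect the AD (“Authenticated Data”) bit in the response, which only tells you that the upstream resolver validated the DNSSEC signatures; nothing protects the hop between that resolver and your machine.

```python
# Sketch of the "last mile" gap, assuming the third-party "dnspython"
# package. Setting the DO flag asks for DNSSEC records; the AD flag in
# the response only means the *upstream* resolver validated the
# signatures. Nothing here protects the hop between that resolver and
# this machine, which is why validation belongs in the local stub.
import dns.flags
import dns.resolver

resolver = dns.resolver.Resolver()
resolver.use_edns(0, dns.flags.DO, 4096)
answer = resolver.resolve("gmail.com", "A")

print("addresses:", [rr.address for rr in answer])
print("validated upstream (AD bit):", bool(answer.response.flags & dns.flags.AD))
```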

It is important to note that none of this makes Certificate Authorities obsolete. Although the DNS-based approach replaces the most basic type of SSL certificates, the Certificate Authorities will continue to be the only entities that can offer validation of real-world identity of site owners. The DNS-based approach and basic “domain validated” Certificate Authority certificates both verify only that whoever controls the domain name is the entity that your computer is communicating with, without saying who that is. In recent years, “Extended Validation” certificates (the ones that make your browser bar glow green) have begun to be offered by all major certificate authorities. These certificates require more rigorous validation of the identity of the owner, so that for example you know that the person who controls bankofamerica.com is really Bank of America Corporation.

At this year’s Black Hat and Defcon, Dan Kaminsky demonstrated some new software he is releasing that could make deploying DNSSEC easier in general, and that could also address the two main hurdles to placing keys in DNS. He readily admits that his particular implementation is not perfect, and has encouraged critiques and changes. [Update: His slides are available here.]

Hopefully, with the input of the many smart folks in the security, internet standards, and software development communities, we will see a production-quality DNSSEC-secured solution to domain-to-key authentication in the near future.

Broadband Politics and Closed-Door Negotiations at the FCC

The last seven days at the FCC have been drama-filled, and that’s not something you can often say about an administrative agency. As I noted in my last post, the FCC is considering reclassifying broadband as a “common carrier” service. This would subject the access portion of the service to some additional regulations which currently do not apply, but have (to some extent) been applied in the past. Last Thursday, the FCC voted 3-2 along party lines to pursue a Notice of Inquiry about this approach and others, in order to help solidify its ability to enforce consumer protections and implement the National Broadband Plan in the wake of the Comcast decision in the DC Circuit Court. There was a great deal of politicking and rhetoric around the vote. Then, on Monday, the Wall Street Journal reported that lobbyists were engaged in closed-door meetings at the FCC, discussing possible legislative compromises that would obviate the need for reclassification. This led to public outcry from everyone who was not involved in the meetings, and allegations of misconduct by the FCC for its failure to disclose the meetings. If you sit through my description of the intricacies of reclassification, I promise to give you the juicy bits about the controversial meetings.

The Reclassification Vote and the NOI
As I explained in my previous post, the FCC faces a dilemma. The DC Circuit said it did not have the authority under Title I of the Communications Act to enforce the broadband openness principles it espoused in 2005. This cast into doubt the FCC’s ability not only to police violations of the principles but also to implement many portions of the National Broadband Plan. In the past, the Commission would have had unquestioned authority under Title II of the Act, but in a series of decisions from 2002-2007 it voluntarily “deregulated” broadband by classifying it as a Title I service. Chairman Genachowski has floated what he calls a “Third Way” approach in which broadband is no longer classified as a Title I service, nor subjected to the full weight of Title II, but is instead classified under Title II with extensive “forbearance” from portions of that title.

From a legal perspective, the main question is whether the FCC has the authority to reclassify the transmission component of broadband internet service as a Title II service. This gets into intricacies of how broadband service fits into statutory definitions of “information service” (aka Title I), “telecommunications”, “telecommunications service” (aka Title II), and the like. I was going to lay these out in detail, but in the interest of getting to the juicy stuff I will simply direct you to Harold Feld’s excellent post. For the “Third Way” approach to work, the FCC’s interpretation of a “telecommunications service” will have to be articulated to include broadband internet access while not also swallowing a variety of internet services that everyone thinks should remain unregulated — sites like Facebook, content delivery networks like Akamai, and digital media providers like Netflix. However, this definition must not be so narrow that the FCC does not have jurisdiction to police the types of practices it is concerned about (for instance, providers should not be able to discriminate in their delivery of traffic simply by moving the discrimination from the transport layer of the network to the logical layer, or by partnering with an affiliated “ISP” that does the discrimination for them). I am largely persuaded by Harold’s arguments, but the AT&T lobbyists present the other side as well. One argument that I don’t see anyone making (yet) is that, presuming the transmission component is subject to Title II, the FCC would seem to have a much stronger argument for exercising ancillary jurisdiction with respect to interrelated components like non-facilities-based ISPs that rely on that transmission component.

The other legal debate involves an even more arcane discussion about whether — assuming there is a “telecommunications service” offered as part of broadband service — that “telecommunications service” is something that can be regulated separately from the other “information services” (Title I) that might be offered along with it. This includes things like an email address from your provider, DNS, Usenet, and the like. Providers have historically argued that these were inseparable from the internet access component, and the so-called “Stevens Report” of 1998 introduced the notion that the “inextricably intertwined” nature of broadband service might have the result of classifying all such services as entirely Title I “information services.” To the extent that this ever made any sense, it is far from true today. What consumers believe they are purchasing is access to the internet, and all of those other services are clearly extricable from a definitional and practical standpoint (indeed, customers can and do opt for competitors for all of them on a regular basis).

But none of these legal arguments are at the fore of the current debate, which is almost entirely political. Witness, for example, John Boehner’s claim that the “Third Way” approach was a “government takeover of the Internet,” Fred Upton’s (R-MI) claim that the approach is a “blind power grab,” modest Democratic sign-on to an industry-penned and reasoning-free opposition letter, and an attempt by Republican appropriators to block funding for the FCC unless it swore off the approach. This prompted a strong response from Democratic leaders indicating that any such effort would not see the light of day. Ultimately, the FCC voted in favor of the NOI to explore the issue. Amidst this tumult, the WSJ reported that the FCC had started closed-door meetings with industry representatives in order to discuss a possible legislative compromise.

Possible Legislation and Secret Meetings
It is not against the rules to communicate with the FCC about active proceedings. Indeed, such communications are part of a healthy policymaking process that solicits input from stakeholders. The FCC typically conducts proceedings under the “permit but disclose” regime in which all discussions pertaining to the given proceeding must be described in “ex parte” filings on the docket. Ars has a good overview of the ex parte regime. The NOI passed last week is subject to these rules.

It therefore came as a surprise that a subset of industry players were secretly meeting with the FCC to discuss possible legislation that could make the NOI irrelevant. This is made even more egregious by the fact that the FCC just conducted a proceeding on improving ex parte disclosures, and the Chairman remarked:

“Given the complexity and importance of the issues that come before us, ex parte communications remain an essential part of our deliberative process. It is essential that industry and public stakeholders know the facts and arguments presented to us in order to express informed views.”

The Chairman’s Chief of Staff Edward Lazarus sought to explain away the obligation for ex parte disclosure, and nevertheless attached a brief disclosure letter from the meeting attendees that didn’t describe any of the details. There is perhaps a case to be made that the legislative options do not directly fall under the subject matter of the NOI, but even if this position were somehow legally justifiable it clearly falls afoul of the policy intent of the ex parte rules. Harold Feld has a great post in which he describes his nomination for “Worsht Ex Parte Ever”. The letter attached to the Lazarus post would certainly take the title if it were a formal ex parte letter. The industry participants in the meetings deserve some criticism, but ultimately the problems can only be resolved by the FCC by demanding comprehensive openness rather than perpetuating a culture of loopholes.

The public outcry continues, from both public interest groups and in the comments on the Lazarus post. If it’s true that the FCC admits internally that “they f*cked up”, they should do far more to regain the public’s trust in the integrity of the notice-and-comment process.

Update: The Lazarus post was just updated to replace the link to the brief disclosure letter with two new links to letters that describe themselves as Ex Parte letters. The first contains the exact same text as the original, and the second has a few bullet points.

Regulating and Not Regulating the Internet

There is increasingly heated rhetoric in DC over whether or not the government should begin to “regulate the internet.” Such language is neither accurate nor new. This language implies that the government does not currently involve itself in governing the internet — an implication which is clearly untrue given a myriad of laws like CFAA, ECPA, DMCA, and CALEA (not to mention existing regulation of consumer phone lines used for dialup and “special access” lines used for high speed interconnection). It is more fundamentally inaccurate because referring simply to “the internet” blurs important distinctions, like the difference between communications transport providers and the communications that occur over those lines.

However, there is a genuine policy debate being had over the appropriate framework for regulation by the Federal Communications Commission. In light of recent events, the FCC is considering revising the way it has viewed broadband since the mid-2000s, and Congress is considering revising the FCC’s enabling statute — the Communications Act. At stake is the overall model for government regulation of certain aspects of internet communication. In order to understand the significance of this, we have to take a step back in time.

Before 2005

In pre-American British law, there prevailed a concept of “common carriage.” Providers of transport services to the general public were required to conduct their business on equal and fair terms for all comers. The idea was that all of society benefited when these general-purpose services, which facilitated many types of other commerce and cultural activities, were accessible to all. This principle was incorporated into American law via common-law precedent and ultimately a series of public laws culminating in the Communications Act of 1934. The structure of the Act remains today, albeit with modifications and grafts. The original Act included two regulatory regimes: Title II regulated Common Carriers (telegraph and telephone, at the time), whereas Title III regulated Radio (and, ultimately, broadcast TV). By 1984, it became necessary to add Title VI for Cable (Titles IV and V have assorted administrative provisions), and in 1996 the Act was revised to focus the FCC on regulating for competition rather than assuming that some of these markets would remain monopolies. During this period, early access to the internet began to emerge via dial-up modems. In a series of decisions called the Computer Inquiries, the FCC decided that it would continue to regulate phone lines used to access the internet as common carriers, but it disclaimed direct authority over any “enhanced” services that those lines were used to connect to. The 1996 Telecommunications Act called these “enhanced” services “information services”, and called the underlying telephone-based “basic” transport services “telecommunications services”. Thus the FCC both did and did not “regulate the internet” in this era.

In any event, the trifurcated nature of the Communications Act put it on a collision course with technology convergence. By the early 2000s, broadband internet access via Cable had emerged. DSL was being treated as a common carrier, but how should the FCC treat Cable-based broadband? Should it classify it as a Title II common carrier, a Title VI cable service, or something else?

Brand X and Its Progeny

This question arose during a period in which a generally deregulatory spirit prevailed at the FCC and in Congress. The 1996 Telecommunications Act contained a great deal of hopeful language about the flourishing competition that it would usher in, rendering unnecessary decades of overbearing regulation. At the turn of the millennium, a variety of revolutionary networking platforms seemed just around the corner. The FCC decided that it should remove as much regulation from broadband as possible, and it had to choose between two basic approaches. First, it could declare that Cable-based broadband service was essentially the same thing as DSL-based broadband service, and regulate it under Title II (aka, a “telecommunications service”). This had the advantage of being consistent with decades of precedent, but the disadvantage of introducing a new regulatory regime to a portion of the services offered by cable operators, who had never before been subject to that sort of thing (except in the 9th Circuit, but that’s another story). The 1996 Act had given the FCC the authority to “forbear” from any obligations that it deemed unnecessary due to sufficient competition, so the FCC could still “deregulate” broadband to a significant extent. The other option was to reclassify cable broadband as a Title I service (aka, an “information service”). What is Title I, you ask? Well, there’s very little in Title I of the Communications Act (take a look). It mostly contains general pronouncements of the FCC’s purpose, so classifying a service as such is a more extreme way of deregulating it. How extreme? We will return to this.

The FCC chose this more extreme approach, announcing its decision in the 2002 Cable Modem Order. This set off a prolonged series of legal actions, pitting the deregulatory-spirited FCC against those who wanted cable to be regulated under Title II so that operators could be forced to provide “open access” to competitors who would use their last-mile infrastructure (the same way that the phone company must allow alternative long distance carriers today). This all culminated in a decision by the 9th Circuit that Title I classification was unacceptable, and a reversal of that decision by the Supreme Court in 2005. The case is commonly referred to by its shorthand, Brand X. The majority opinion essentially states that the statute is ambiguous as to whether cable broadband is a Title I “information service” or Title II “telecommunications service”, and the Court deferred to the expert agency: the FCC. The FCC immediately followed up by reclassifying DSL-based broadband as a Title I service as well, in order to develop a “consistent regulatory framework across platforms.” At the same time, it released a Policy Statement outlining the so-called “Four Freedoms” that nevertheless would guide FCC policy on broadband. The extent to which such a statement was binding and enforceable would be the subject of the next chapter of the debate on “regulating the internet.”

Comcast v. FCC

After Brand X and the failure of advocates to gain “open access” provisions on broadband generally, much of the energy in the space shifted to a fallback position: at the very least, they argued, the FCC should enforce its Policy Statement (aka, the “Four Freedoms”), which seemed to embody the spirit of some components of the non-discriminatory legacy of common carriage. This position came to be known as “net neutrality,” although the term has been subject to a diversity of definitions over the years and is also only one part of a potentially broader policy regime. In 2008, the FCC was forced to confront the issue when it was discovered that Comcast had begun interfering with the BitTorrent traffic of its customers. The FCC sought to discipline Comcast under its untested Title I authority, Comcast argued that the Commission had no such authority, and the DC Circuit Court agreed with Comcast. It appears that the Title I approach to deregulation was more extreme than even the FCC thought (although ex-Chairman Powell had no problem blaming the litigation strategy of the current FCC). To be clear, the Circuit Court said that the FCC did not have authority under Title I. But what if the FCC had taken the alternate path back in 2002, deciding to classify broadband as a Title II service and “forbear” from all of the portions of the statute deemed irrelevant? Can the FCC still choose that path today?

Reclassification

Chairman Genachowski recently announced a proposed approach that would reclassify the transport portion of broadband as a Title II service, while simultaneously forbearing from the majority of the statute. This approach is motivated by the fact that the Comcast decision cast a pall over the FCC’s ability to fulfill its explicit mandate from Congress to develop a National Broadband Plan, which requires regulatory jurisdiction in order for the FCC to be able to implement many of its components. I will discuss the reclassification debate in my next post. I’ll be at a very interesting event in DC tomorrow morning on the subject, titled The FCC’s Authority Over Broadband Access. For a preview of some of what will be discussed there, I recommend the FCC General Counsel’s presentation from yesterday (starting at 30 minutes in), and Jon Nuechterlein’s comments at this year’s Silicon Flatirons conference. I am told that the event tomorrow will not be streamed live, but that the video will be posted online shortly thereafter. I’ll update this post when that happens. You can also follow tweets at #bbauth. [Update: the video and transcripts for Panel 1 and Panel 2 are now posted]

A New Communications Act?

In parallel, there has been growing attention to a revision of the Communications Act itself. The theory here is that the old structure simply doesn’t speak sufficiently to the current telecommunications landscape. I’ll do a follow-up post on this topic as well, mapping out the poles of opinion on what such a revised Act should look like.

Bonus: If you just can’t get enough history and contemporary context on the structure of communications regulation, I did an audio interview with David Weinberger back in January 2009.