
Archives for September 2008


Popular Websites Vulnerable to Cross-Site Request Forgery Attacks

Update Oct 15, 2008 We’ve modified the paper to reflect the fact that the New York Times has fixed this problem. We also clarified that our server-side protection techniques do not protect against active network attackers.

Update Oct 1, 2008 The New York Times has fixed this problem. All of the problems mentioned below have now been fixed.

Today Ed Felten and I (Bill Zeller) are announcing four previously unpublished Cross-Site Request Forgery (CSRF) vulnerabilities. We’ve described these attacks in detail in a technical report titled Cross-Site Request Forgeries: Exploitation and Prevention.

We found four major vulnerabilities on four different sites. These vulnerabilities include what we believe is the first CSRF vulnerability that allows the transfer of funds from a financial institution. We contacted all the sites involved and gave them ample time to correct these issues. Three of these sites have fixed the vulnerabilities listed below, one has not.

CSRF vulnerabilities occur when a website allows an authenticated user to perform a sensitive action but does not verify that the user herself is invoking that action. The key to understanding CSRF attacks is to recognize that websites typically don’t verify that a request came from an authorized user. Instead they verify only that the request came from the browser of an authorized user. Because browsers run code sent by multiple sites, there is a danger that one site will (unbeknownst to the user) send a request to a second site, and the second site will mistakenly think that the user authorized the request.

If a user visits an attacker’s website, the attacker can force the user’s browser to send a request to a page that performs a sensitive action on behalf of the user. The target website sees a request coming from an authenticated user and happily performs some action, whether it was invoked by the user or not. CSRF attacks have been confused with Cross-Site Scripting (XSS) attacks, but they are very different. A site completely protected from XSS is still vulnerable to CSRF attacks if no protections are taken. For more background on CSRF, see Shiflett, Grossman, Wikipedia, or OWASP.
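The core failure can be illustrated with a small, self-contained sketch (all names here are hypothetical, not from any real site): a server that authorizes a sensitive action based only on a session cookie cannot distinguish a request the user initiated from one forged by another site, because the browser attaches the cookie either way.

```python
# Minimal simulation of why cookie-only checks enable CSRF.
# All names and values are illustrative.

SESSIONS = {"cookie-abc": "alice"}  # session cookie -> logged-in user
BALANCES = {"alice": 100}

def handle_transfer(cookies, amount, to):
    """A 'bank' endpoint that checks only that a valid session cookie
    accompanies the request -- not that the user intended the request."""
    user = SESSIONS.get(cookies.get("session"))
    if user is None:
        return "denied"
    BALANCES[user] -= amount
    return f"sent {amount} to {to}"

# Alice clicks the transfer button herself:
handle_transfer({"session": "cookie-abc"}, 10, "bob")

# An attacker's page makes Alice's browser issue the same request;
# her browser attaches her cookie automatically, so the server
# cannot tell the two requests apart:
handle_transfer({"session": "cookie-abc"}, 50, "mallory")
```

Both calls succeed, which is exactly the problem: from the server's vantage point the forged request is indistinguishable from the legitimate one.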

We describe the four vulnerabilities below:

1. ING Direct

Status: Fixed

We found a vulnerability on ING’s website that allowed additional accounts to be created on behalf of an arbitrary user. We were also able to transfer funds out of users’ bank accounts. We believe this is the first CSRF vulnerability to allow the transfer of funds from a financial institution. Specific details are described in our paper.

2. YouTube

Status: Fixed

We discovered CSRF vulnerabilities in nearly every action a user could perform on YouTube. An attacker could have added videos to a user’s "Favorites," added himself to a user’s "Friend" or "Family" list, sent arbitrary messages on the user’s behalf, flagged videos as inappropriate, automatically shared a video with a user’s contacts, subscribed a user to a "channel" (a set of videos published by one person or group) and added videos to a user’s "QuickList" (a list of videos a user intends to watch at a later point). Specific details are described in our paper.

3. MetaFilter

Status: Fixed

A vulnerability existed on Metafilter that allowed an attacker to take control of a user’s account. A forged request could be used to set a user’s email address to the attacker’s address. A second forged request could then be used to activate the "Forgot Password" action, which would send the user’s password to the attacker’s email address. Specific details are described in our paper.

(MetaFilter fixed this vulnerability in less than two days. We appreciate the fact that MetaFilter contacted us to let us know the problem had been fixed.)

4. The New York Times

Status: Not fixed at the time of publication. We contacted the New York Times in September 2007, and as of September 24, 2008, this vulnerability still existed. (Update: this problem has since been fixed.)

A vulnerability in the New York Times’s website allows an attacker to find out the email address of an arbitrary user. This takes advantage of the NYTimes’s "Email This" feature, which allows a user to send an email about a story to an arbitrary recipient. This email contains the logged-in user’s email address. An attacker can forge a request to activate the "Email This" feature while setting his own email address as the recipient. When a user visits the attacker’s page, an email will be sent to the attacker’s email address containing the user’s email address. This attack can be used for identification (e.g., finding the email addresses of all users who visit an attacker’s site) or for spam. This attack is particularly dangerous because of the large number of users who have NYTimes accounts and because the NYTimes keeps users logged in for over a year.

TimesPeople, a social networking site launched by the New York Times on September 23, 2008, is also vulnerable to CSRF attacks.

We hope the New York Times will decide to fix these vulnerabilities now that they have been made public. (Update: the New York Times appears to have fixed the problems detailed above.)


Our paper provides recommendations for preventing these attacks. We provide a server-side plugin for the PHP MVC framework Code Igniter that can completely prevent CSRF. We also provide a client-side Firefox extension that can protect users from certain types of CSRF attacks (non-GET request attacks).
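The essence of the server-side defense is a secret, per-session token that the legitimate site embeds in its own forms and checks on submission; a forging site can make the victim’s browser send a request, but it cannot read or predict the token. The following is a minimal sketch of that idea in Python (not the Code Igniter plugin’s actual code, and the session/key names are invented for illustration):

```python
import hmac
import hashlib
import secrets

def make_csrf_token(session_id: str, server_key: bytes) -> str:
    """Derive a per-session token that an attacker's site cannot guess."""
    return hmac.new(server_key, session_id.encode(), hashlib.sha256).hexdigest()

def request_is_authorized(session_id: str, submitted_token: str,
                          server_key: bytes) -> bool:
    """Accept the sensitive action only if the submitted form carried the
    right token. A forged cross-site request rides on the victim's cookie,
    but it cannot include this token, so the check fails."""
    expected = make_csrf_token(session_id, server_key)
    return hmac.compare_digest(expected, submitted_token)

key = secrets.token_bytes(32)          # server-side secret
token = make_csrf_token("session-123", key)   # embedded in the real form

# Legitimate submission carries the token; a forged one carries a guess:
legit = request_is_authorized("session-123", token, key)
forged = request_is_authorized("session-123", "guessed-by-attacker", key)
```

Note that, as the updated paper points out, token schemes like this do not protect against an active network attacker who can read the token off the wire.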

The Takeaway

We’ve found CSRF vulnerabilities in sites that have a huge incentive to do security correctly. If you’re in charge of a website and haven’t specifically protected against CSRF, chances are you’re vulnerable.

The academic literature on CSRF attacks has been rapidly expanding over the last two years and we encourage you to see our bibliography for references to other work. On the industry side, I’d like to especially thank Chris Shiflett and Jeremiah Grossman for tirelessly working to educate developers about CSRF attacks.


Quanta Case Preserved the Distinction Between Patent Law and Contract Law

Thanks to Ed for the invitation to contribute to FTT and for the gracious introduction. In addition to being a grad student here at Princeton, I’m also an adjunct scholar at the Cato Institute. Cato recently released the latest edition of its annual Supreme Court Review, a compilation of scholarly articles about the most recent Supreme Court term. It includes interesting articles on a broad range of topics considered by the high court this year, including the Second Amendment, detainee rights, and federalism. Arguably the most important decision from a technology perspective—and the case that was of most interest to me personally—was LG v. Quanta, which dealt with a doctrine known as patent exhaustion. Ironically, I have a somewhat different take on the decision than F. Scott Kieff, the legal scholar who contributed an article to the Review. In today’s post I’ll discuss where I think Kieff’s legal analysis goes astray. Tomorrow, I’ll discuss why the Supreme Court’s ruling turns out to be a good thing in policy terms.

What happened, in a nutshell, was this: Intel wanted to manufacture some chips that were covered by some patents held by LG electronics. Intel obtained a patent license from LG, manufactured the chips, and sold them to Quanta. The contract between LG and Intel stated that the patent license covered only Intel’s manufacturing of the chips, but did not extend to the use of those chips by downstream customers. And so LG sued Quanta for patent infringement, arguing that even though the chips had been manufactured with LG’s consent, Quanta was still guilty of patent infringement for using the chips in its own products.

At the heart of the case is the patent exhaustion doctrine, which says that once a patent holder has allowed the manufacture of a product covered by one of its patents, it exhausts its rights with regard to that product: the patent holder can’t sue downstream customers for patent infringement for using those same products. In a unanimous decision, the Supreme Court ruled that patent exhaustion applies in this case, and Quanta was not required to pay royalties to LG, despite explicit language in LG’s contract with Intel stating otherwise.

Kieff argues that this was a case about freedom of contract. He argues that limiting the flexibility of patent licensing will force the initial licensee to pay the licensing fees for the entire supply chain. As a libertarian, I’m certainly a supporter of freedom of contract. But I think this analysis puts the cart before the horse. Remember that Quanta was accused of patent infringement, not breach of contract. Therefore, an analysis of the case has to start with patent law. We must first determine whether Quanta’s actions constitute patent infringement under patent law, and only once we’ve determined that Quanta needed a license does it make sense to turn to contract law to see whether it had one.

On the other hand, if patent law says that Quanta’s actions are non-infringing, then it’s completely irrelevant what a contract between LG and Intel might have said, because Quanta doesn’t need a license and wasn’t a party to Intel’s contract with LG. Freedom of contract means that parties are free to sign contracts with one another with the confidence that they will be faithfully enforced. It does not mean that contracts can bind third parties who never consented to them. And in this case, the only relevant contract was between LG and Intel. Whether Quanta infringed LG’s patents is a matter of patent law, not contract law.

One reason it’s important to clearly distinguish patent law from contract law is that the law provides patent holders with much stronger remedies than parties to contract disputes. Probably the most important difference is the one I’ve already alluded to: a patent is binding on everyone, whereas contracts only bind those who have explicitly consented to them. On top of that, patent holders can often get injunctive relief—an order from the judge to stop infringing—whereas breaches of contract often result only in money damages. Contract law also frowns on punitive contract terms, whereas patent law offers treble damages for willful infringement.

One way of looking at patent exhaustion, then, is as a doctrine designed to preserve the fundamental distinction between patent law and contract law. LG attempted to bootstrap its patents into an alternative contract-enforcement mechanism. The Supreme Court rejected this gambit, holding that patent law doesn’t allow patent holders to slice their patent licenses infinitely thin in order to force third parties to enter into an implicit contractual relationship with them.

LG is still free to use ordinary contract law to achieve the same result. It could, for example, contractually prohibit Intel from selling its chips to anyone who didn’t already have a licensing agreement with LG. But it must do so under the ordinary rules of contract law. In this case, that would mean requiring Intel to obtain consent to LG’s terms before selling chips, and it would be enforced by filing a lawsuit against Intel for any breach of contract. An important difference is that under this strategy, most of the transaction costs fall directly on LG, rather than being foisted on third parties through the magic of patent law. I think that’s the right result from a legal point of view. In my next post, I’ll explain why this turns out to be an important outcome from a policy perspective.


Election Machinery blog

Students will be studying election technology and election administration in freshman seminar courses taught at Princeton (by me) and at Stanford (by David Dill). The students will be writing short articles on the Election Machinery blog. I invite you all to read that blog over the next three months, to see what a small nonrandom sample of 18-year-olds is writing about the machinery of voting and elections.



Will cherry picking undermine the market for ad-supported television?

Want to watch a popular television show without all the ads? Your options are increasing. There’s the iTunes store, moving toward HD video formats, where a growing range of shows can be bought on a per-episode or per-season basis and watched without advertisements on a growing range of devices at a time of your choosing. Or you could buy a Netflix subscription and a Roku streaming box on top of your existing media expenditures, and stream many TV episodes directly over the web. Third, there’s the growing market for DVDs and Blu-ray discs, which are higher definition and particularly rewarding for those who can shell out for top-end home theater systems that make the most of the added information in a disc as opposed to a broadcast. I’m sure there are yet more options for turning a willingness to pay into an ad-free viewing experience — video-on-demand over the pricey but by most accounts great FiOS service, perhaps? Finally, TiVo and options like it reward those who can afford DVRs, and further reward those savvy enough to program their remotes with the 30-second skip feature.

In any case, the growing popularity of these options and others like them poses a challenge, or at least a subtle shift in pricing incentives, for the makers of television content. Traditionally, content has been monetized by ads, and advertisers could be confident that the whole viewership of a given show would be tuned in for whatever was placed in the midst of an episode. Now the wealthiest, best educated, most consumer-electronics-hungry segments of the television audience–among the most valuable viewers to advertisers–are able to absent themselves from the ad-viewing public.

This problem is worse than just losing some fraction of the audience: it’s about losing a particular fraction of the audience. If x percent of the audience skips the ads for the reasons mentioned in the first paragraph, then the remaining 100-x percent of the audience is the least tech-savvy, least consumer-electronics-acquisitive part of the audience, by and large a much less attractive demographic for advertisers. (A converse version of this effect may hold in the online advertising market, where every viewer is in front of a web browser or a relatively fancy phone, but I’m less confident of that because of the active interest in ad-blocking technologies. Maybe online ad viewers will be a middle slice, savvy enough to be online but not to block ads?)

What will this mean for TV? Here’s one scenario: Television bifurcates. Ad-supported TV goes after the audience that still watches ads, those toward the lower part of the socioeconomic spectrum. Ads for Walmart replace those for designer brands. The content of ad-supported TV itself trends toward options that cater to the ad-watching demographic. Meanwhile, high end TV emerges as an always ad-free medium supported by more direct revenue channels, with more and more of it coming along something like the HBO route. These shows are underwritten by, and ultimately directed to, the ad-skipping but high-income crowd. So there won’t be advertisers clamoring to attract the higher income viewers, as such, but those who invest in creating the shows in the first place will learn over time to cater to the interests and viewing habits of the elite.

Another scenario, which could play out in tandem with the first, is that there may be a strong appetite for a truly universal advertising medium, either because of the ease this creates for certain advertisers or because of the increasing revenue premium as such broad audiences become rarer and are bid up in value. In this case, you could imagine a Truman Show-esque effort to integrate advertising with the TV content. The ads would be unskippable because they wouldn’t exist or, put another way, would be the only thing on (some parts of) television.


Hurricane Ike status report: electrical power is cool

Today, we checked out the house, again, and lo and behold, it finally has power again!  Huzzah!

All in all, it hasn’t been that bad for us.  We crashed with friends, ate out all the time, and (thankfully) had daycare for our daughter as of Thursday last week.  Indeed, I’m seeing fewer people’s kids around the office this week and more people seem to be getting back into the groove.

Even though Rice wanted classes to restart on Tuesday of last week, the unstated unofficial everybody-get-back-to-work day was really yesterday, Monday, just over a week after the hurricane.  What’s the status of the city?

Many people are still without power, and the power crews are now dealing with the harder cases, individually damaged lines, and so forth.  Getting the rest of the city online may well take a good long time.  Another interesting effect is that rush-hour traffic is beyond insane.  Luckily, our daily commute is short enough that we’re largely immune to this, but traffic lights that reset to blinking red are slowing down everything, to the point that remote freeway exits are backing up onto the freeways because of the malfunctioning traffic lights at the intersections below.  The Chron estimates it could be November before all the traffic lights are repaired.  Ouch.

Naturally, one of the tempting purchases for us is some kind of natural gas powered, permanently installed generator.  I’m sure if I shop around for one now, I’d pay a mint to get it.  Maybe in the off season… Needless to say, I don’t see the city investing to bury all the power lines that run above ground.  They have legitimately higher priorities.  As to me, I sure would have been happy to have had power all the way through this thing, brought to us by the one utility that never had any downtime: our natural gas line.

[Sidebar: it takes a major power outage for you to really appreciate how people got by in the days before electrical power.  Pickling, preserving, and other techniques suddenly seem awfully clever.  Some candles put out an awful lot more light than others.  You can also see why it was a standard architectural feature of old Southern homes to have big outdoor porches — so you’d have somewhere slightly cooler to sleep than indoors.]


How Yahoo could have protected Palin's email

Last week I criticized Yahoo for their insecure password recovery mechanism that allowed an intruder to take control of Sarah Palin’s email account. Several readers asked me the obvious follow-up question: What should Yahoo have done instead?

Before we discuss alternatives, let’s take a minute to appreciate the delicate balance involved in designing a password recovery mechanism for a free, mass-market web service. On the one hand, users lose their passwords all the time; they generally refuse to take precautions in advance against a lost password; and they won’t accept being locked out of their own accounts because of a lost password. On the other hand, password recovery is an obvious vector for attack — and one exploited at large scale, every day, by spammers and other troublemakers.

Password recovery is especially challenging for email accounts. A common approach to password recovery is to email a new password (or a unique recovery URL) to the user, which works nicely if the user has a stable email address outside the service — but there’s no point in sending email to a user who has lost the password to his only email account.

Still, Yahoo could be doing more to protect their users’ passwords. They could allow users to make up their own security questions, rather than offering only a fixed set of questions. They could warn users that security questions are a security risk and that users with stable external email addresses might be better off disabling the security-question functionality and relying instead on email for password recovery.

Yahoo could also have followed Gmail’s lead, and disabled the security-question mechanism unless no logged-in user had accessed the account for five days. This clever trick prevents password “recovery” when there is evidence that somebody who knows the password is actively using the account. If the legitimate user loses the password and doesn’t have an alternative email account, he has to wait five days before recovering the password, but this seems like a small price to pay for the extra security.
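The rule is simple enough to express directly. Here is a sketch of the idea as I understand it (not Gmail’s actual code; the five-day threshold is the only detail taken from the description above):

```python
from datetime import datetime, timedelta

def security_question_allowed(last_login: datetime, now: datetime) -> bool:
    """Permit security-question password 'recovery' only when nobody who
    knows the password has logged in for five days -- evidence that the
    legitimate user may really be locked out, rather than that an
    attacker is trying to hijack an actively used account."""
    return now - last_login >= timedelta(days=5)

now = datetime(2008, 9, 20)
active = security_question_allowed(datetime(2008, 9, 18), now)   # in use
dormant = security_question_allowed(datetime(2008, 9, 10), now)  # locked out?
```

Under this policy an attacker targeting an account its owner checks daily would find the security-question door closed, while a genuinely locked-out user just has to wait out the five days.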

Finally, Yahoo would have been wise, at least from a public-relations standpoint, to give extra protection to high-profile accounts like Palin’s. The existence of these accounts, and even the email addresses, had already been published online. And the account signup at Yahoo asks for a name and postal code, so Yahoo could have recognized that this suddenly-important public figure had an account on their system. (It seems unlikely that Palin gave a false name or postal code in signing up for the account.) Given the public allegations that Palin had used her Yahoo email accounts for state business, these accounts would have been obvious targets for freelance “investigators”.

Some commenters on my previous post argued that all of this is Palin’s fault for using a Yahoo mail account for Alaska state business. As I understand it, the breached account included some state business emails along with some private email. I’ll agree that it was unwise for Palin to put official state email into a Yahoo account, for security reasons alone, not to mention the state rules or laws against doing so. But this doesn’t justify the break-in, and I think anyone would agree that it doesn’t justify publishing non-incriminating private emails taken from the account.

Indeed, the feeding frenzy to grab and publish private material from the account, after the intruder had published the password, is perhaps the ugliest aspect of the whole incident. I don’t know how many people participated — and I’m glad that at least one Good Samaritan tried to re-lock the account — but I hope the republishers get at least a scary visit from the FBI. It looks like the FBI is closing in on the initial intruder. I assume he is facing a bigger punishment.


Palin's email breached through weak Yahoo password recovery mechanism

This week’s breach of Sarah Palin’s Yahoo Mail account has been much discussed. One aspect that has gotten less attention is how the breach occurred, and what it tells us about security and online behavior.

(My understanding of the facts is based on press stories, and on reading a forum post written by somebody claiming to be the perpetrator. I’m assuming the accuracy of the forum post, so take this with an appropriate grain of salt.)

The attacker apparently got access to the account by using Yahoo’s password reset mechanism, that is, by following the same steps Palin would have followed had she forgotten her own password.

Yahoo’s password reset mechanism is surprisingly weak and easily attacked. To simulate the attack on Palin, I performed the same “attack” on a friend’s account (with the friend’s permission, of course). As far as I know, I followed the same steps that the Palin attacker did.

First, I went to Yahoo’s web site and said I had forgotten my password. It asked me to enter my email address. I entered my friend’s address. It then gave me the option of emailing a new password to my friend’s alternate email address, or doing an immediate password reset on the site. I chose the latter. Yahoo then prompted me with my friend’s security question, which my friend had previously chosen from a list of questions provided by Yahoo. It took me six guesses to get the right answer. Next, Yahoo asked me to confirm my friend’s country of residence and zip code — it displayed the correct values, and I just had to confirm that they were correct. That’s all! The next step had me enter a new password for my friend’s account, which would have allowed me to access the account at will.

The only real security mechanism here is the security question, and it’s often easy to guess the right answer, especially given several tries. Reportedly, Palin’s question was “Where did you meet your spouse?” and the correct answer was “Wasilla high”. Wikipedia says that Palin attended Wasilla High School and met her husband-to-be in high school, so “Wasilla high” is an easy guess.

This attack was not exactly rocket science. Contrary to some news reports, the attacker did not display any particular technical prowess, though he did display stupidity, ethical blindness, and disrespect for the law — for which he will presumably be punished.

Password recovery is often the weakest link in password-based security, but it’s still surprising that Yahoo’s recovery scheme was so weak. In Yahoo’s defense, it’s hard to verify that somebody is really the original account holder when you don’t have much information about who the original account holder is. It’s not like Sarah Palin registered for the email account by showing up at a Yahoo office with three forms of ID. All Yahoo knows is that the original account holder claimed to have the name Sarah Palin, claimed to have been born on a particular date and to live in a particular zip code, and claimed to have met his/her spouse at “Wasilla high”. Since this information was all in the public record, Yahoo really had no way to be sure who the account holder was — so it might have seemed reasonable to give access to somebody who showed up later claiming to have the same name, email address, and spouse-meeting place.

Still, we shouldn’t let Yahoo off the hook completely. Millions of Yahoo customers who are not security experts (or are security experts but want to delegate security decisions to someone else) entrusted the security of their email accounts to Yahoo on the assumption that Yahoo would provide reasonable security. Palin probably made this assumption, and Yahoo let her down.

If there’s a silver lining in this ugly incident, it is the possibility that Yahoo and other sites will rethink their password recovery mechanisms, and that users will think more carefully about the risk of email breaches.


Hurricane Ike status report

Many people have been emailing me to send their best wishes. I thought it would be helpful to post a brief note on what happened and where we’re all at.

As you know, Hurricane Ike hit shore early Saturday morning. The wind, combined with a massive storm surge, caused staggering devastation along the Texas coast. Houston is further inland, so the big issue for us was and still is fallen trees and downed power lines. Rice University, as a result of what must have been a huge amount of advance effort, came through with flying colors. They had power and a working network pretty much the whole time. They didn’t have any water pressure for a while, but that came online Monday. Our main data center, built recently with an explicit goal of surviving events like this, apparently lost power for a while, at least in part. (I don’t have the full story yet. I do know that a failed DNS server caused our email server to experience problems.)

Our own house had no particular damage, although the back fence came down. We still have no power, but we’ve had water pressure (initially low, now fine) and natural gas the whole time. The hardwired telephone had a few outages, but continues to work reliably. Cellular phones were initially dicey but are now working great.

Luckily, the weather has been unseasonably cool, so we and all our neighbors have been leaving windows open. Over the weekend, the highs are in the mid 80’s (28-30C), with cooler weather at night, so we’ll do okay on that front. At this point, many restaurants are open, so the lack of power doesn’t mean living off canned food. Likewise, some gas stations and supermarkets are coming online again. Life, at least in this part of the city, is starting to resemble normality.

A looming concern is mosquitos. After Tropical Storm Allison in 2001 (see my photos), the big issue was clearly mosquitos. Lots of rain means lots of standing water, and that means mosquitos are on their way. Back then, few people lost power. This time, it’s going to get ugly.

Rice had a full faculty meeting on Tuesday morning. Our president announced that we would be resuming classes on Tuesday afternoon, but we could not have any assignments due or exams given this week. Last night, we got an email saying that everybody has made assignments due Monday next week, and that we needed to do something else (without saying what). Apparently, there’s been an outpouring of interest among our students in volunteering to help the community (a good thing!), and I’d certainly like our students to get out and help. But if we’re supposed to get back to teaching, then that means work. I’m not sure how we’ll ultimately resolve this.

Unscientific data: our president asked for a show of hands at the meeting. How many faculty had no power? Maybe 90%. How many faculty had no daycare for their kids? Maybe 80%. How many faculty had significant damage to their homes? Maybe 20%.

For any of you who want to see what I saw, I took a bunch of pictures.

Meanwhile, I need to get back to work myself. We’ve got a research paper due Friday. Life goes on.


Welcome to the new Freedom to Tinker

Welcome to the new, redesigned Freedom to Tinker. Beyond giving it a new look, we have rebuilt the site as a blogging community, to highlight the contributions of more authors. The front page and main RSS feed will offer a combination of posts from all authors. We have also added a blog page (and feed) for each author, so you can read posts by your favorite author or subscribe to your favorite author’s RSS feed. Over time, Freedom to Tinker has evolved from a single-author blog into a group effort, and these changes better recognize the efforts of all of our authors.

Along with the redesign, we’re thrilled to add three authors to our roster: Tim Lee, Paul Ohm, and Yoshi Kohno.

Tim Lee is a prominent tech policy analyst, journalist, and blogger who has written for sites such as Ars Technica, Techdirt, and the Technology Liberation Front. He is now a computer science grad student at Princeton, and a member of the Center for Information Technology Policy.

Paul Ohm is an Associate Professor of Law at the University of Colorado, specializing in computer crime law, criminal procedure, intellectual property, and information privacy. He previously worked as a trial attorney in the Computer Crime and Intellectual Property Section of the U.S. Department of Justice, and before law school he worked as a computer programmer and network administrator.

Yoshi Kohno is an assistant professor of computer science and engineering at the University of Washington. His research focuses on assessing and improving the security and privacy properties of current and future technologies. In 2007 he was recognized by MIT’s Technology Review magazine as one of the world’s top innovators under the age of 35. He is known for his research on the security of implantable medical devices and voting machines, among other technologies.

Finally, Freedom to Tinker is now officially hosted by Princeton’s Center for Information Technology Policy. A major goal of CITP is to foster discussion of infotech policy issues, so it makes sense for CITP to host this kind of blog community for CITP members and friends.

We hope you enjoy the new Freedom to Tinker. As always, we welcome your comments and suggestions.


On digital TV and natural disasters

As I’m writing this, the eye of Hurricane Ike is roughly ten hours from landfall.  The weather here, maybe 60 miles inland, is overcast with mild wind.  Meanwhile, the storm surge has already knocked out power for ten thousand homes along the coast, claims the TV news, humming along in the background as I write this, which brings me to a thought.

Next year, analog TV gets turned off, and it’s digital or nothing.  Well, what happens in bad weather?  Analog TV degrades somewhat, but is still watchable.  Digital TV works great until it starts getting uncorrectable errors.  There’s a brief period where you see block reconstruction errors and, with even a mild additional amount of error, it’s just unwatchable garbage.  According to AntennaWeb, most of the terrestrial broadcast towers are maybe ten miles from my house, but that’s ten miles closer to the coast.  However, I get TV from Comcast, my local cable TV provider.  As I’ve watched the HD feed today, it’s been spotty.  Good for a while, unwatchable for a while.  The analog feed, which we also get on a different channel, has been spot on the whole time.

From this, it would appear that Comcast is getting its feed out of the air, and thus has all the same sorts of weather effects that I would have if I bothered to put my own antenna on the roof.  Next year, when the next hurricane is bearing down on the coast, and digital TV is the only TV around, it’s an interesting question whether I’ll get something useful on my TV during a disaster.  Dear Comcast, Engineering Department: please get a hard line between you and each of the local major TV stations.  Better yet, get two of them, each, and make sure they don’t share any telephone poles.

[Sidebar: In my old house, I used DirecTV plus a terrestrial antenna for HD locals, run through a DirecTV-branded HD TiVo.  Now, I’m getting everything from Comcast, over telephone poles, into a (series 3) TiVo-HD.  In any meaningful disaster, the telephone poles are likely to go down, taking out my TV source material. I get power and telephone from the same poles, so to some extent they form a single point of failure, and thus there’s no meaningful benefit to putting up my own antenna.

Once the storm gets closer, I’ll be moving the UPS from my computer to our, umm, shelter-in-place location.  I don’t expect I’d want to waste precious UPS battery power running my power-hungry television set.  Instead, I’ve got an AM/FM portable radio that runs on two AA’s.  Hopefully, the amount of useful information on the radio will be better than the man-on-the-street TV newscasters, interviewing fools standing along the ocean, watching the pretty waves breaking.  Hint: you can’t “ride through” a storm when the water is ten feet over your head.]