April 29, 2024

Judge Geeks Out, Says Cablevision DVR Infringes

In a decision that has triggered much debate, a Federal judge ruled recently that Cablevision’s Digital Video Recorder system infringes the copyrights in TV programs. It’s an unusual decision that deserves some unpacking.

First, some background. The case concerned Digital Video Recorder (DVR) technology, which lets cable TV customers record shows in digital storage and watch them later. TiVo is the best-known DVR technology, but many cable companies offer DVR-enabled set-top boxes.

Most cable-company DVRs are delivered as shiny set-top boxes which contain a computer programmed to store and replay programming, using an onboard hard disc drive for storage. The judge called this a Set-Top Storage DVR, or STS-DVR.

Cablevision’s system worked differently. Rather than putting a computer and hard drive into every consumer’s set-top box, Cablevision implemented the DVR functionality in its own data center. Everything looked the same to the user: you pushed buttons on a remote control to tell the system what to record, and to replay it later. The main difference is that rather than storing your recordings in a hard drive in your set-top box, Cablevision’s system stored them in a region allocated for you in some big storage server in Cablevision’s data center. The judge called this a Remote Storage DVR, or RS-DVR.
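For readers who like to see the difference in code, here is a minimal sketch of the two architectures. The class and method names are hypothetical, invented only to show where the copy is made and stored; they have nothing to do with Cablevision’s actual implementation.

```python
# Illustrative sketch only: hypothetical classes showing where the recorded
# copy lives in each architecture. Not based on Cablevision's real code.

class SetTopDVR:
    """STS-DVR: the copy is made and stored on the subscriber's own box."""
    def __init__(self):
        self.local_disk = []  # hard drive inside the set-top box

    def record(self, program):
        self.local_disk.append(program)  # copy created on customer premises

    def play(self, title):
        return next(p for p in self.local_disk if p == title)


class RemoteStorageDVR:
    """RS-DVR: the copy lives in a per-customer region on central servers."""
    def __init__(self):
        self.server_storage = {}  # customer_id -> recordings, in the data center

    def record(self, customer_id, program):
        # copy created on equipment the cable company owns and operates
        self.server_storage.setdefault(customer_id, []).append(program)

    def play(self, customer_id, title):
        return next(p for p in self.server_storage.get(customer_id, []) if p == title)
```

From the couch the two look the same; the difference the court cared about is whether the record step runs on the customer’s own box or on equipment Cablevision owns and operates.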

STS-DVRs are very similar to VCRs, which the Supreme Court found to be legal, so STS-DVRs are probably okay. Yet the judge found the RS-DVR to be infringing. How did he reach this conclusion?

For starters, the judge geeked out on the technical details. The first part of the opinion describes Cablevision’s implementation in great detail – I’m a techie, and it’s more detail than even I want to know. Only after unloading these details does the judge get around, on page 18 of the opinion, to the kind of procedural background that normally starts on page one or two of an opinion.

This matters because the judge’s ruling seems to hinge on the degree of similarity between RS-DVRs and STS-DVRs. By diving into the details, the judge finds many points of difference, which he uses to justify giving the two types of DVRs different legal treatment. Here’s an example (pp. 25-26):

In any event, Cablevision’s attempt to analogize the RS-DVR to the STS-DVR fails. The RS-DVR may have the look and feel of an STS-DVR … but “under the hood” the two types of DVRs are vastly different. For example, to effectuate the RS-DVR, Cablevision must reconfigure the linear channel programming signals received at its head-end by splitting the APS into a second stream, reformatting it through clamping, and routing it to the Arroyo servers. The STS-DVR does not require these activities. The STS-DVR can record directly to the hard drive located within the set-top box itself; it does not need the complex computer network and constant monitoring by Cablevision personnel necessary for the RS-DVR to record and store programming.

The judge sees the STS-DVR as simpler than the RS-DVR. Perhaps this is because he didn’t go “under the hood” in the STS-DVR, where he would have found a complicated computer system with its own internal stream processing, reformatting, and internal data transmission facilities, as well as complex software to control these functions. It’s not the exact same design as in the RS-DVR, but it’s closer than the judge seems to think.

All of this may have less impact than you might expect, because of the odd way the case was framed. Cablevision, for reasons known only to itself, had waived any fair use arguments, in exchange for the plaintiffs giving up any indirect liability claims (i.e., any claims that Cablevision was enabling infringement by its customers). What remained was a direct infringement claim against Cablevision – a claim that Cablevision itself (rather than its customers) was making copies of the programs – to which Cablevision was not allowed to raise a fair use defense.

The question, in other words, was who was recording the programming. Was Cablevision doing the recording, or were its customers doing the recording? The customers, by using their remote controls to navigate through on-screen menus, directed the technology to record certain programs, and controlled the playback. But the equipment that carried out those commands was owned by Cablevision and (mostly) located in Cablevision buildings. So who was doing the recording? The question doesn’t have a simple answer that I can see.

This general issue of who is responsible for the actions of complex computer systems crops up surprisingly often in law and policy disputes. There doesn’t seem to be a coherent theory about it, which is too bad, because it will only become more important as systems get more complicated and more tightly interconnected.

FreeConference Suit: Neutrality Fight or Regulatory Squabble?

Last week FreeConference, a company that offers “free” teleconferencing services, sued AT&T for blocking access by AT&T/Cingular customers to FreeConference’s services. FreeConference’s complaint says the blocking is anticompetitive and violates the Communications Act.

FreeConference’s service sets up conference calls that connect a group of callers. Users are given an ordinary long-distance phone number to call. When they call the assigned number, they are connected to their conference call. Users pay nothing beyond the cost of the ordinary long-distance call they’re making.

As of last week, AT&T/Cingular started blocking access to FreeConference’s long-distance numbers from AT&T/Cingular mobile phones. Instead of getting connected to their conference calls, AT&T/Cingular users are getting an error message. AT&T/Cingular has reportedly admitted doing this.

At first glance, this looks like an unfair practice, with AT&T trying to shut down a cheaper competitor that is undercutting AT&T’s lucrative conference-call business. This is the kind of thing net neutrality advocates worry about – though strictly speaking this is happening on the phone network, not the Internet.

The full story is a bit more complicated, and it starts with FreeConference’s mysterious ability to provide conference calls for free. These days many companies provide free services, but they all have some way of generating revenue. FreeConference appears to generate revenue by exploiting the structure of telecom regulation.

When you make a long-distance call, you pay your long-distance provider for the call. The long-distance provider is required to pay connection fees to the local phone companies (or mobile companies) at both ends of the call, to offset the cost of connecting the call to the endpoints. This regulatory framework is a legacy of the AT&T breakup and was justified by the desire to have a competitive long-distance market coexist with local phone carriers that were near-monopolies.

FreeConference gets revenue from these connection fees. It has apparently cut a deal with a local phone carrier under which the carrier accepts calls for FreeConference, and FreeConference gets a cut of the carrier’s connection fees from those calls. If the connection fees are large enough – and apparently they are – this can be a win-win deal for FreeConference and the local carrier.

But of course somebody has to pay the fees. When an AT&T/Cingular customer calls FreeConference, AT&T/Cingular has to pay. They can pass on these fees to their customers, but this hardly seems fair. If I were an AT&T/Cingular customer, I wouldn’t be happy about paying more to subsidize the conference calls of other users.

To add another layer of complexity, it turns out that connection fees vary widely from place to place, ranging roughly from one cent to seven cents per minute. FreeConference, predictably, has allied itself with a local carrier that gets a high connection fee. By routing its calls to this local carrier, FreeConference is able to extract more revenue from AT&T/Cingular.
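To see why the choice of carrier matters, here is a back-of-the-envelope calculation. The one-to-seven-cent range comes from the paragraph above; the number of callers and the call length are made-up illustrative figures.

```python
# Rough illustration of the fee differential. The 1-7 cents/minute range is
# from the post; the conference size and duration are hypothetical.

fee_low = 0.01    # dollars per minute, low-end local carrier
fee_high = 0.07   # dollars per minute, high-end local carrier

callers = 10      # hypothetical conference participants
minutes = 60      # hypothetical call length

caller_minutes = callers * minutes
print(f"Low-fee carrier:  ${caller_minutes * fee_low:.2f} in connection fees")
print(f"High-fee carrier: ${caller_minutes * fee_high:.2f} in connection fees")
# Low-fee carrier:  $6.00 in connection fees
# High-fee carrier: $42.00 in connection fees
```

Under these made-up numbers, a single hour-long conference call generates several times more fee revenue when it terminates at a high-fee carrier, which is exactly the incentive FreeConference appears to be exploiting.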

For me, this story illustrates everything that is frustrating about telecom. We start with intricately structured regulation, leading companies to adopt business models shaped by regulation rather than the needs of customers. The result is bewildering to consumers, who end up not knowing which services will work, or having to pay higher prices for mysterious reasons. This leads to a techno-legal battle between companies that would, in an ideal world, be spending their time and effort developing better, cheaper products. And ultimately we end up in court, or creating more regulation.

We know a better end state is possible. But how do we get there from here?

[Clarification (2:20 PM): Added the “To add another layer …” paragraph. Thanks to Nathan Williams for pointing out my initial failure to mention the variation in connection fees.]

Is there any such thing as “enough” technological progress?

Yesterday, Ed considered the idea that there may be “a point of diminishing returns where more capacity doesn’t improve the user’s happiness.” It’s a provocative concept, and one that I want to probe a bit further.

One observation that seems germane is that such thoughts have a pedigree. Henry L. Ellsworth, in his 1843 report to Congress, wrote that “the advancement of the arts, from year to year, taxes our credulity and seems to presage the arrival of that period when human improvement must end.”

It seems to me that the idea of diminishing marginal returns is most at home in settings where the task or process under consideration has well-defined boundaries. For example, making steel: Larger steel mills, up to a point, are more efficient than smaller ones. Larger furnaces reduce capital costs per unit of output, and secondary functions like logistics, training and bookkeeping can be spread across larger amounts of steel without commensurate increases in their cost. But consolidating an industry, and replacing small production facilities with a larger one, does not necessarily involve any fundamental advancement in the state of the art. (It may, of course.)

Innovation—which is the real wellspring of much of human progress—tends not to follow such predictable patterns. Science textbooks like to present sanitized stories of incremental, orderly advancement, but as Thomas Kuhn famously argued, history actually abounds with disjointed progress, serendipitous accidents, and unanticipated consequences, both good and bad.

There are areas in which incremental improvement is the norm: shaving razors, compression algorithms, mileage per gallon. But in each of these areas, the technology being advanced is task-specific. Nobody is going to use their car to shave or their Mach 3 to commute to the office.

But digital computers—Turing machines—are different. It’s an old saw that a digital computer can be used to change or analyze literally any information. When it comes to computers, advancement means faster Turing machines with larger memories, in smaller physical footprints and with lower costs (including, e.g., manufacturing expense and operational electricity needs).

Ed’s observation yesterday that there is an ultimate limit to the bandwidth leading into the human brain is well taken. But in terms of all transmission of digital content globally, the “last hop” from computer to human is already a very small part of the total traffic. Mostly, traffic is among nodes on end-to-end computer networks, among servers in a Beowulf cluster or similar setup, or even traffic among chips on a motherboard or cores in the same chip. Technologies that advance bandwidth capabilities are useful primarily because of the ways they change what computers can do (at the human time scale). The more they advance, the more things, and the more kinds of things, computers will be capable of. It’s very unlikely we’ve thought of them all.

It is also striking how far our capability to imagine new uses for digital technology has lagged behind the advancement of the technology itself. Blogs like this one were effectively possible from the dawn of the World Wide Web (or even before), and they now seem to be a significant part of what the web can most usefully be made to do. But it took years, after the relevant technologies were available, for people to recognize and take advantage of this possibility. Likewise, much of “web 2.0” has effectively meant harnessing relatively old technologies, such as Javascript, in new and patently unanticipated ways.

The literature of trying to imagine far-out implications of technological advancement is at once both exciting and discouraging: Exciting because it shows that much of what we can imagine probably will happen eventually, and discouraging because it shows that the future is full of major shifts, obvious in retrospect, to which we were blind up until their arrival.

I occasionally try my hand at the “big picture” prognostication game, and enjoy reading the efforts of others. But in the end I’m left feeling that the future, though bright, is mysterious. I can’t imagine a human community, even in the distant future, that has exhausted its every chance to create, innovate and improve its surroundings.

How Much Bandwidth is Enough?

It is a matter of faith among infotech experts that (1) the supply of computing and communications will increase rapidly according to Moore’s Law, and (2) the demand for that capacity will grow roughly as fast. This mutual escalation of supply and demand causes the rapid change we see in the industry.

It seems to be a law of physics that Moore’s Law must terminate eventually – there are fundamental physical limits to how much information can be stored, or how much computing can be accomplished in a second, within a fixed volume of space. But these hard limits may be a long way off, so it seems safe to assume that Moore’s Law will keep operating for many more cycles, as long as there is demand for ever-greater capacity.

Thus far, whenever more capacity comes along, new applications are invented (or made practical) to use it. But will this go on forever, or is there a point of diminishing returns where more capacity doesn’t improve the user’s happiness?

Consider the broadband link going into a typical home. Certainly today’s homeowner wants more bandwidth, or at least can put more bandwidth to use if it is provided. But at some point there is enough bandwidth to download any reasonable webpage or program in a split second, or to provide real-time ultra-high-def video streams to every member of the household. When that day comes, do home users actually benefit from having fatter pipes?

There is a plausible argument that a limit exists. The human sensory system has limited (though very high) bandwidth, so it doesn’t make sense to direct more than a certain number of bits per second at the user. At some point, your 3-D immersive stereo video has such high resolution that nobody will notice any improvement. The other senses have similar limits, so at some point you have enough bandwidth to saturate the senses of everybody in the home. You might want to send information to devices in the home; but how far can that grow?
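A rough calculation gives a feel for the kind of ceiling involved. Every parameter below is an assumption picked for illustration (an 8K-class stereo stream per person, a generous compression ratio), not a measurement of what any real system needs.

```python
# Back-of-the-envelope estimate of the bandwidth needed to saturate a
# household's eyes with very-high-definition video. All parameters are
# illustrative assumptions, not measurements.

width, height = 7680, 4320     # assume "8K"-class resolution per stream
frames_per_sec = 60
bits_per_pixel = 24            # assume 8 bits per color channel
compression_ratio = 100        # assume aggressive but plausible compression
household_members = 4
streams_per_person = 2         # assume stereo (3-D) viewing, one stream per eye

raw_bps = width * height * frames_per_sec * bits_per_pixel
compressed_bps = raw_bps / compression_ratio
household_bps = compressed_bps * household_members * streams_per_person

print(f"Per-stream compressed: {compressed_bps / 1e6:.0f} Mbps")
print(f"Whole household:       {household_bps / 1e9:.1f} Gbps")
```

Under these made-up assumptions the ceiling comes out to a few gigabits per second for the whole household – large by today’s standards, but very much finite.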

Such questions may not matter quite yet, but they will matter a great deal someday. The structure of the technology industries, not to mention of technology policy, is built around the idea that people will keep demanding more-more-more and the industry will be kept busy providing it.

My gut feeling is that we’ll eventually hit the point of diminishing returns, but it is a long way off. And I suspect we’ll hit the bandwidth limit before we hit the computation and storage limits. I am far from certain about this. What do you think?

(This post was inspired by a conversation with Tim Brown.)

Will It Copy?

In the spirit of the cult “Will It Blend?” videos, today’s question on Freedom to Tinker is “Will It Copy?” As we saw with the CopyBot in Second Life, when something becomes easily copyable, the economics of its production change: users benefit more from already-existing objects, but the incentive to make new objects decreases.

This is exactly what happened to the music industry when computers and the Internet suddenly made small files, including digitized music, easily copyable. In the case of music, we know that the business is changing, but we don’t know yet what will be the net effect on the availability of good music.

Like the music business, the software business is challenged by cheap copying. If you make software that runs on users’ computers, your software will be copied by at least some users. By contrast, if you provide an interactive service, delivered across the net but implemented on your own servers – a search engine, perhaps – then your product can’t be trivially copied. You have an inherent advantage over the sellers of packaged software.

A similar story holds for the Second Life CopyBot. Objects in Second Life can be described by shape, coloration, and behavior. Shape and coloration are duplicated perfectly by the CopyBot, but behavior (the script code describing what the object does) is not. So if your business makes beautiful but passive objects – clothing, perhaps – your objects can be copied perfectly and you have a problem. But if you make functional objects – a magic wand that does tricks in response to voice commands, perhaps – then the CopyBot won’t affect you much.
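The asymmetry is easy to see if you think about what data a viewer’s client has to receive. The sketch below uses invented field names; it is not Second Life’s actual object format.

```python
# Hypothetical illustration of why CopyBot can duplicate appearance but not
# behavior. Field names are invented; this is not Second Life's real format.

visible_to_every_client = {
    "shape": "mesh of vertices and faces",  # must be sent to viewers to render
    "coloration": "textures and colors",    # likewise visible, hence copyable
}

kept_on_the_server = {
    "behavior": "script code that runs server-side",  # never sent to viewers,
                                                      # so a copier never sees it
}

def copybot(observed_object):
    # A copier can reproduce only what the client is given in order to render.
    return {key: observed_object[key] for key in visible_to_every_client}
```

Anything the server must hand to every viewer in order to draw the object is, by that very fact, available to a copier; the script stays out of reach only because it never leaves the server.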

Second Life users are reportedly fighting back by building anti-CopyBot technologies, but this is ultimately futile. As long as shape and coloration are visible, it will be possible to observe and copy them. It will be easier to build a three-dimensional scanner-copier in Second Life than in real life. Copying of beautiful, nonfunctional objects will remain possible.

Eventually, this will happen in real life too. Tools for analyzing and replicating real objects will get better and better; knockoffs will get closer and closer to the real thing; and the time window when only the original is available will get shorter and shorter. Today, fashion flourishes despite relatively free copying. Indeed, some argue that the high-fashion world is so dynamic because of copying – always moving, to stay ahead of the masses. So it’s not a given that the fashion world will dry up, in real life or Second Life, if copying gets faster and more accurate.

Part of the fun of “Will It Blend?” is that the answer is almost always “yes”. Increasingly, the answer to “Will It Copy?” will be the same.