April 21, 2014


Miracle Fruit: Tinkering with our Taste Buds

Miraculin, the extract of a West African fruit, is said to make sour foods taste sweet. It’s not sugary, but it’s said to trick your taste buds into misreporting the flavor of the food you’re eating. One of my students, Bill Zeller, bought some miraculin and a group of us tried it out. Here, in the interest of science, is my report.

Miraculin is a lumpy powder, dull red in color, that results from freeze-drying the flesh of the so-called miracle fruit. Here’s about twenty-five grams of miraculin, with a lime for size comparison.

Bill bought fifty grams of miraculin, which came by mail from Ghana. Both Ghana and the U.S. required customs paperwork before the fruit-based product could be shipped. Here’s the Republic of Ghana export permit.

I took a lump of miraculin, weighing a gram or two, and carefully ate it, pushing it around on my tongue as it dissolved.

It didn’t have much taste, and the texture was a bit gummy. Once it was all dissolved I waited a minute or so for the effect to kick in. The effect is said to wear off after about twenty minutes, so it was time for the taste test to begin.

As predicted, the miraculin made sour things taste sweet. Lemon wedges tasted like sweet lemonade. Lime wedges were sweet too. I could still sense the acidity of the fruit, and there was a detectable sour taste but it seemed to be covered over with a pleasant citrus sweetness. I could have eaten whole lemons or limes with no problem.

The grapefruit was stunning, perhaps the best-tasting fruit I have ever eaten. The ones we had were pretty sweet already as grapefruit go, but with miraculin they were distinctly but not overly sweet, and the underlying grapefruit flavor came through beautifully. I had to stop myself from wolfing down several grapefruit.

After the fruit I tried some other foods that were handy. Pizza tasted about the same as usual, though the tomato sauce had a slightly sweet tinge. Diet Dr. Pepper tasted normal. I tried some Indian food – samosas and curried chickpeas – and found the flavor unchanged except that the spiciness was intensified. The normally mild potato-based samosa filling had a spicy kick. Miraculin did nothing for a sweet dessert.

My verdict on miraculin? It’s pleasant and I’m glad I tried it, but it’s not a life-changing experience. I can imagine it becoming popular. It makes some healthy foods taste better, and it’s not too expensive. The amount I had would cost less than a dollar today if you bought in bulk, and there must be unexploited economies of scale.

Thanks to Bill Zeller for getting the miraculin, to my co-investigators, and to Alex Halderman for taking the photos.


Botnet Briefing

Yesterday I spoke at a Washington briefing on botnets. The event was hosted by the Senate Science and Technology Caucus, and sponsored by ACM and Microsoft. Along with opening remarks by Senators Pryor and Bennett, there were short briefings by me, Phil Reitinger of Microsoft, and Scott O’Neal of the FBI.

(Botnets are coordinated computer intrusions, where the attacker installs a long-lived software agent or “bot” on many end-user computers. After being installed, the bots receive commands from the attacker through a command-and-control mechanism. You can think of bots as a more advanced form of the viruses and worms we saw previously.)

Botnets are a serious threat, but as usual in cybersecurity there is no obvious silver bullet against them. I gave a laundry list of possible anti-bot tactics, including a mix of technical, law enforcement, and policy approaches.

Phil Reitinger talked about Microsoft’s anti-botnet activities. These range from general efforts to improve software security, to distribution of patches and malicious code removal tools, to investigation of specific bot attacks. I was glad to hear him call out the need for basic research on computer security.

Scott O’Neal talked about the FBI’s fight against botnets, which he said followed the Bureau’s historical pattern in dealing with new types of crime. At first, they responded to specific attacks by investigating and trying to identify the perpetrators. Over time they have adopted new tactics, such as infiltrating the markets and fora where botmasters meet. Though he didn’t explicitly prioritize the different types of botnet (mis)use, it was clear that commercially motivated denial-of-service attacks were prominent in his mind.

Much of the audience consisted of Senate and House staffers, who are naturally interested in possible legislative approaches to the botnet problem. Beyond seeing that law enforcement has adequate resources, there isn’t much that needs to be done. Current laws such as the Computer Fraud and Abuse Act, and anti-fraud and anti-spam laws, already cover botnet attacks. The hard part is catching the bad guys in the first place.

The one legislative suggestion we heard was to reduce the threshold for criminal violation in the Computer Fraud and Abuse Act. Using computers without authorization is a crime, but there are threshold requirements to make sure that trivial offenses can’t bring down the big hammer of felony prosecution.

The concern is that a bad guy who breaks into a large number of computers and installs bots, but hasn’t yet used the bots to do harm, might be able to escape prosecution. He could still be prosecuted if certain types of bad intent can be proved, but where that is not possible he arguably might not meet the $5000 damage threshold. The law might be changed to allow prosecution when some designated number of computers are affected.

Paul Ohm has expressed skepticism about this kind of proposal. He points to a tendency to base cybersecurity policy on anecdote and worst-case predictions, even though a great deal of preventable harm is caused by simpler, more mundane attacks.

I’d like to see more data on how big a problem the current CFAA thresholds are. How many real bad guys have escaped CFAA prosecution? Of those who did, how many could be prosecuted for other, equally serious violations? With data in hand, the cost-benefit tradeoffs in amending the CFAA will be easier to evaluate.

Senator Bennett, in his remarks, characterized cybersecurity as a long-term fight. “You guys have permanent job security…. You’re working on a problem that will never be solved.”


Internet So Crowded, Nobody Goes There Anymore

Once again we’re seeing stories, like this one from Anick Jesdanun at AP, saying that the Internet is broken and needs to be redesigned.

The idea may seem unthinkable, even absurd, but many believe a “clean slate” approach is the only way to truly address security, mobility and other challenges that have cropped up since UCLA professor Leonard Kleinrock helped supervise the first exchange of meaningless test data between two machines on Sept. 2, 1969.

The Internet “works well in many situations but was designed for completely different assumptions,” said Dipankar Raychaudhuri, a Rutgers University professor overseeing three clean-slate projects. “It’s sort of a miracle that it continues to work well today.”

It’s absolutely worthwhile to ask what kind of Net we would design if we were starting over, knowing what we know now. But it’s folly to think we can or should actually scrap the Net and build a new one.

For one thing, the Net is working very nicely already. Sure, there are problems, but they mostly stem from the fact that the Net is full of human beings – which is exactly what makes the Net so great. The Net has succeeded brilliantly at lowering the cost of communication and opening the tools of mass communication to many more people. That’s why most members of the redesign-the-Net brigade spend hours every day online.

Let’s stop to think about what would happen if we really were going to redesign the Net. Law enforcement would show up with their requests. Copyright owners would want consideration. ISPs and broadcasters would want concessions of their own. The FCC would show up with an anti-indecency strategy. We’d see an endless parade of lawyers and lobbyists. Would the engineers even be allowed in the room?

The original design of the Internet escaped this fate because nobody thought it mattered. The engineers were left alone while everyone else argued about things that seemed more important. That’s a lucky break that won’t be repeated.

The good news is that despite the rhetoric, hardly anybody believes the Internet will be rebuilt, so these research efforts have a chance of avoiding political entanglements. The redesign will be a useful intellectual exercise, and maybe we’ll learn some tricks useful for the future. But for better or worse, we’re stuck with the Internet we have.


Is SafeMedia a Parody?

[UPDATE (Dec. 2011): I wrote the post below a few years ago. SafeMedia's website and product offerings have changed since then. Please don't interpret this post as a commentary on SafeMedia's current products.]

Peter Eckersley at EFF wrote recently about a new network-filtering company called SafeMedia that claims it can block all copyrighted material in a network. We’ve seen companies like this before and they tend to have the warning signs of security snake oil.

But SafeMedia was new so I decided to look at their website. My reaction was: what a brilliant parody!

The biggest clue is that the company’s detection product is called Clouseau – named for a detective who is not only spectacularly incompetent but also fictional.

The next clue is the outlandish technical claims. Here’s an example:

Pirates are smart and innovative, and so is Clouseau. Our technology is dynamic, sees through all multi-layered encryptions, adaptively analyzes network patterns and constantly updates itself. Packet examinations are noninvasive and infallible. There are no false positives.

Sees through all encryption? Even our best intelligence agencies don’t make that claim. Perhaps that’s because the intelligence agencies know about provably unbreakable encryption.
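(The canonical example of provably unbreakable encryption is the one-time pad: XOR the message with a truly random pad of the same length, used exactly once. A minimal Python sketch, purely for illustration:)

```python
import os

def xor_pad(data: bytes, pad: bytes) -> bytes:
    # One-time pad: XOR each byte with a random pad byte.
    # The same operation both encrypts and decrypts.
    assert len(pad) == len(data), "pad must be as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"meet at dawn"
pad = os.urandom(len(message))        # fresh random pad, never reused
ciphertext = xor_pad(message, pad)
recovered = xor_pad(ciphertext, pad)  # only the pad holder can do this
# Without the pad, every plaintext of the same length is equally likely,
# so there is nothing for a filter to "see through", even in principle.
```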

Wait a minute, you may be saying. Perhaps SafeMedia was just making the usual exaggeration, implying that they can stop all bad traffic when what they really mean is that they can stop the most common, obvious kinds of bad traffic. Good guess – that’s the usual fallback position for companies like this – but SafeMedia doesn’t shrink from the most outlandish claims of infallibility:

What if illegal P2P no longer worked? What if, no matter how intelligent, devious, or well-funded an Internet pirate was, they absolutely could not transmit copyrighted material via P2P? SafeMedia’s goal was to create the technology that would achieve exactly this. And we succeeded.

Employing our new technology, Clouseau and Windows + Transport Control, makes illegal P2P transmission of copyrighted material impossible. IMPOSSIBLE. Not difficult and not improbable. IMPOSSIBLE!

The next clue that SafeMedia is a parody is the site’s blatant rent-seeking. There’s even a special page for lawmakers that starts with over-the-top rhetoric about P2P (“America is at war here at home within our own borders. And we are taking casualties. Women, men, and children.”) and ends by asking the U.S. government to act as SafeMedia’s marketing department:

We need the Congress to pass legislation appropriating funds for installing the technology on every Federally-supported computer network in the country, most importantly in educational institutions (schools, colleges, universities, libraries)…. We need the Department of Commerce to promote using the technology in all American businesses big and small, and to push for its international adoption. We need the Department of Education to insure that every educational institution in the USA, private and public, primary and secondary, college and university, is obeying the law.

You now have the right weapons. Let’s end the war!

Add up all this, plus the overdesigned home page that makes maddening fingers-on-a-blackboard noises when you mouse over its main menu area, and the verdict is clear: this is a parody.

Yet SafeMedia appears to be real. The CEO appears to be a real guy who has done a few e-commerce startups. The site has more detailed help-wanted ads than any parodist would bother with. According to the Internet Archive, the site has been around for a while. And most convincingly of all, an expensive DC law firm has registered as a lobbyist for SafeMedia.

So SafeMedia really exists and company management thought it a good idea to set up a parody-simulating website and name their product Clouseau. What an entertaining world we live in.

(Thanks to Peter Eckersley for sharing the results of his un-Clouseau-ish investigation of SafeMedia’s existence.)


Cablevision and Anti-Efficiency Policy

I wrote recently about the Cablevision decision, in which a judge appeared to draw a line between two kinds of Digital Video Recorder (DVR) technologies. (DVRs let home viewers record TV shows and play them later.) The judge found unlawful a Remote Storage DVR (RS-DVR) in which recorded shows are captured and stored in the cable TV company’s data center, but he apparently would have allowed a Set-Top Storage DVR (STS-DVR) in which shows are recorded on a device kept in the customer’s home.

Why should the law prefer that recorded shows be stored in the customer’s home? The judge’s reasoning was that the cable company is more involved in an activity if that activity happens in its data center. This appears to follow from the judge’s reasoning even if the alternative in-home STS-DVR is owned and controlled by the cable TV company. But I’m not asking what the law says; I’m asking instead what it should say. Why should the law prefer STS-DVRs over RS-DVRs?

If the goal of the law is to protect copyrighted material – and remember that this was a copyright case – then you might expect it to favor solutions that are more controllable or more resistant to content ripping. But the court got the opposite result: Cablevision was liable because it had more control. The result will be more customer control, which is a benefit for many law-abiding customers.

The court’s ruling also has implications for technical efficiency. Central storage is arguably more efficient than set-top storage in the customer’s home, because of economies of scale in managing a central facility. The court’s decision pushes companies toward set-top storage, even though it is probably less efficient and offers virtually the same functionality as central storage.

It might seem at first glance that public policy should never try to increase the cost of a lawful activity, but in fact there are exceptions. It can sometimes make sense for policy to raise the cost of an activity, if that activity has benefits but can harm nonparticipants. Raising costs rather than banning the activity outright can prevent marginal uses while allowing those uses that provide greater benefit. Of course, if you want to argue that raising the cost of DVRs is good policy, you’ll have to make several assumptions about the costs and benefits of DVRs – assumptions that are very likely untrue.

Even before the suit was brought, Cablevision was already reducing the efficiency of its system in the hope of improving its legal position. For example, their storage facility had a separate storage area for each customer, even though it would have been much more efficient to use a single shared pool of storage. If 5000 customers asked to record last week’s episode of Lost, Cablevision would store 5000 identical copies of that episode, one in each customer’s area. It would have been easy, and much more efficient, to store a single copy. The only sensible reason to keep redundant copies is that a system with individual storage areas might look to a judge more like a set-top DVR system, thereby bolstering the argument that the system is just like a (presumably lawful) STS-DVR. In other words, even before the recent ruling, legal factors were pushing Cablevision toward a less efficient implementation.
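(For the technically inclined: a shared pool is a few lines of content-addressed storage. The sketch below is hypothetical, not Cablevision’s actual design; identical recordings hash to the same key, so 5000 requests yield one stored copy.)

```python
import hashlib

class SharedPool:
    """Hypothetical shared-pool DVR storage: each unique recording is
    stored once, and customers hold only references to it."""
    def __init__(self):
        self.blobs = {}     # content hash -> recording bytes
        self.library = {}   # (customer, title) -> content hash

    def record(self, customer, title, data: bytes):
        digest = hashlib.sha256(data).hexdigest()
        self.blobs.setdefault(digest, data)   # stored once, shared by all
        self.library[(customer, title)] = digest

    def play(self, customer, title) -> bytes:
        return self.blobs[self.library[(customer, title)]]

pool = SharedPool()
episode = b"<video data for last week's episode>"
for customer in range(5000):
    pool.record(customer, "Lost", episode)

len(pool.blobs)   # 1 copy stored, not 5000
```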

For the companies who filed the suit, the goal was not to serve the public but to maximize their own economic advantage. What they cared about, most likely, was simply establishing that one had better come to them for approval before doing anything new. By that standard, they must see the suit as a big success.


Software HD-DVD/Blu-ray Players Updated

The central authority that runs AACS (the anticopying/DRM system used on commercial HD-DVD and Blu-ray discs) announced [April 6, 2007 item] last week the reissue of some software players that can play the discs, “[i]n response to attacks against certain PC-based applications”. The affected applications include WinDVD and probably others.

Recall that analysts had previously extracted from software players a set of decryption keys sufficient to decrypt any disc sold thus far. The authority could have responded to these attacks by blacklisting the affected applications or their decryption keys, which would have limited the effect of the past attacks but would have rendered the affected applications unable to play discs, even for law-abiding customers – that’s too much collateral damage.

To reduce the harm to law-abiding customers, the authority apparently required the affected programs to issue free online updates, where the updates contain new software along with new decryption keys. This way, customers who download the update will be able to keep playing discs, even though the software’s old keys won’t work any more.

The attackers’ response is obvious: they’ll try to analyze the new software and extract the new keys. If the software updates changed only the decryption keys, the attackers could just repeat their previous analysis exactly, to get the new keys. To prevent this, the updates will have to restructure the software significantly, in the hope that the attackers will have to start their analysis from scratch.

The need to restructure the software explains why several months elapsed between the attacks and this response. New keys can be issued quickly, but restructuring software takes time. The studios reportedly postponed some planned disc releases to wait for the software reissue.

It seems inevitable that the attackers will succeed, within a month or so, in extracting keys from the new software. Even if the guts of the new software are totally unlike the old, this time the attackers will be better organized and will know more about how AACS works and how implementations tend to store and manage keys. In short, the attackers’ advantage will be greater than it was last time.

When the attackers manage to extract the new keys, a new round of the game will start. The player software will have to be restructured again so that a new version with new keys can replace the old. Then it will be the attackers’ turn, and the game will continue.

It’s a game that inherently favors the attackers. In my experience, software analysts always beat the obfuscators, if the analysts are willing to work hard, as they are here. Every round of the game, the software authors will have to come up with new and unexpected tricks for restructuring their software – tricks that will have to resist the attackers’ ever-growing suite of analysis tools. And each time the attackers succeed, they’ll be able to decrypt all existing discs.

We can model the economic effect of this game. The key parameter is the attackers’ reaction time, that is, how long it takes the attackers to extract keys from each newly issued version of the player software. If this time is short – say, a few weeks – then the AACS authority won’t benefit much from playing this game, and the authority would be nearly as well off if it simply gave up and let the extracted keys remain valid and the exploited software stay in the field.
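(A toy version of that model, with invented numbers rather than real AACS figures: each re-keying round costs a fixed amount and protects discs only until the attackers react.)

```python
def annual_net_benefit(reaction_days, horizon_days=365,
                       rekey_cost=5.0, value_per_protected_day=1.0):
    # Each round buys `reaction_days` of protection at a fixed re-keying
    # cost; the prices and units here are illustrative assumptions.
    rounds = horizon_days / reaction_days
    return rounds * (reaction_days * value_per_protected_day - rekey_cost)

annual_net_benefit(90)   # positive: slow attackers make the game worthwhile
annual_net_benefit(3)    # negative: fast attackers make each round a net loss
```

The crossover in this toy model is wherever a round’s protection value equals the re-keying cost; below that reaction time, the authority is better off not playing.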

My guess is that the attackers will extract keys from the new software within about three weeks of its availability.


Why So Many False Positives on the No-Fly List?

Yesterday I argued that Walter Murphy’s much-discussed encounter with airport security was probably just a false positive in the no-fly list matching algorithm. Today I want to talk about why false positives (ordinary citizens triggering mistaken “matches” with the list) are so common.

First, a preliminary. It’s often argued that the high false positive rate proves the system is poorly run or even useless. This is not necessarily the case. In running a system like this, we necessarily trade off false positives against false negatives. We can lower either kind of error, but doing so will increase the other kind. The optimal policy will balance the harm from false positives against the harm from false negatives, to minimize total harm. If the consequences of a false positive are relatively minor (brief inconvenience for one traveler), but the consequences of a false negative are much worse (non-negligible probability of multiple deaths), then the optimal choice is to accept many false positives in order to drive the false negative rate way down. In other words, a high false positive rate is not by itself a sign of bad policy or bad management. You can argue that the consequences of error are not really so unbalanced, or that the tradeoff is being made poorly, but your argument can’t rely only on the false positive rate.
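(The tradeoff can be made concrete with a toy calculation. All numbers below are invented for illustration, not real TSA figures: suppose a false positive costs one traveler a minor delay, while a false negative risks catastrophe.)

```python
def total_harm(fp_rate, fn_rate, n_travelers=1_000_000, n_threats=1,
               harm_per_fp=1.0, harm_per_fn=1_000_000.0):
    # Expected harm = (travelers wrongly flagged) * (cost of a delay)
    #               + (threats missed) * (cost of a catastrophe).
    return (n_travelers * fp_rate * harm_per_fp
            + n_threats * fn_rate * harm_per_fn)

# A strict matcher: few false positives, but more threats slip through.
strict = total_harm(fp_rate=0.0001, fn_rate=0.10)
# A loose matcher: many false positives, almost no false negatives.
loose = total_harm(fp_rate=0.01, fn_rate=0.001)
# Under these (made-up) numbers the loose matcher minimizes total harm,
# despite flagging a hundred times as many innocent travelers.
```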

Having said that, the system’s high false positive rate still needs explaining.

The fundamental reason for the false positives is that the system matches names, and names are a poor vehicle for identifying people, especially in the context of air travel. Names are not as unique as most people think, and names are frequently misspelled, especially in airline records. Because of the misspellings, you’ll have to do approximate matching, which will make the nonuniqueness problem even worse. The result is many false positives.
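(Here’s a sketch of why approximate matching inflates false positives, using Python’s standard difflib for fuzzy comparison. The list entries, names, and threshold are all hypothetical.)

```python
import difflib

NO_FLY = ["walter murphy", "john smith"]   # hypothetical list entries

def is_hit(passenger_name: str, threshold: float = 0.85) -> bool:
    # Approximate matching tolerates the misspellings common in airline
    # records, but it also sweeps in names that are merely similar.
    name = passenger_name.lower()
    return any(difflib.SequenceMatcher(None, name, listed).ratio() >= threshold
               for listed in NO_FLY)

is_hit("Walter Murphey")    # True: a misspelling still matches, as intended
is_hit("Walter A. Murphy")  # True: but so does a different person's name
is_hit("Alice Wong")        # False
```

Lowering the threshold catches more misspellings but flags still more innocent near-matches; raising it does the opposite. That’s the false positive / false negative dial in miniature.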

Why not use more information to reduce false positives? Why not, for example, use the fact that the Walter Murphy who served in the Marine Corps and used to live near Princeton is not a threat?

The reason is that using that information would have unwanted consequences. First, the airlines would have to gather much more private information about passengers, and they would probably have to verify that information by demanding documentary proof of some kind.

Second, checking that private information against the name on the no-fly list would require bringing together the passenger’s private information with the government’s secret information about the person on the no-fly list. Either the airline can tell the government what it knows about the passenger’s private life, or the government can tell the airline what it knows about the person on the no-fly list. Both options are unattractive.

A clumsy compromise – which the government is apparently making – is to provide a way for people who often trigger false positives to supply more private information, and if that information distinguishes the person from the no-fly list entry, to give the person some kind of “I’m not really on the no-fly list” certificate. This imposes a privacy cost, but only on people who often trigger false positives.

Once you’ve decided to have a no-fly list, a significant false positive rate is nearly inevitable. The bigger policy question is whether, given all of its drawbacks, we should have a no-fly list at all.


Walter Murphy Stopped at Airport: Another False Positive

Blogs are buzzing about the story of Walter Murphy, a retired Princeton professor who reported having triggered a no-fly list match on a recent trip. Prof. Murphy suspects this happened because he has given speeches criticizing the Bush Administration.

I studied the no-fly list mechanism (and the related watchlist) during my service on the TSA’s Secure Flight Working Group. Based on what I learned about the system, I am skeptical of Prof. Murphy’s claim. I think he reached, in good faith, an incorrect conclusion about why he was stopped.

Based on Prof. Murphy’s story, it appears that when his flight reservation was matched against the no-fly list, the result was a “hit”. This is why he was not allowed to check in at curbside but had to talk to an airline employee at the check-in desk. The employee eventually cleared him and gave him a boarding pass.

(Some reports say Prof. Murphy might have matched the watchlist, a list of supposedly less dangerous people, but I think this is unlikely. A watchlist hit would have caused him to be searched at the security checkpoint but would not have led to the extended conversation he had. Other reports say he was chosen at random, which also seems unlikely – I don’t think no-fly list challenges are issued randomly.)

There are two aspects to the no-fly list, one that puts names on the list and another that checks airline reservations against the list. The two parts are almost entirely separate.

Names are put on the list through a secret process; about all we know is that names are added by intelligence and/or law enforcement agencies. We know the official standard for adding a name requires that the person be a sufficiently serious threat to aviation security, but we don’t know what processes, if any, are used to ensure that this standard is followed. In short, nobody outside the intelligence community knows much about how names get on the list.

The airlines check their customers’ reservations against the list, and they deal with customers who are “hits”. Most hits are false positives (innocent people who trigger mistaken hits), who are allowed to fly after talking to an airline customer service agent. The airlines aren’t told why any particular name is on the list, nor do they have special knowledge about how names are added. An airline employee, such as the one who told Prof. Murphy that he might be on the list for political reasons, would have no special knowledge about how names get on the list. In short, the employee must have been speculating about why Prof. Murphy’s name triggered a hit.

It’s well known by now that the no-fly list has many false positives. Senator Ted Kennedy and Congressman John Lewis, among others, seem to trigger false positives. I know a man living in Princeton who triggers false positives every time he flies. Having many false positives is inevitable given that (1) the list is large, and (2) the matching algorithm requires only an approximate match (because flight reservations often have misspelled names). An ordinary false positive is by far the most likely explanation for Prof. Murphy’s experience.

Note, too, that Walter Murphy is a relatively common name, making it more likely that Prof. Murphy was being confused with somebody else. Lycos PeopleSearch finds 181 matches for Walter Murphy and 307 matches for W. Murphy in the U.S. And of course the name on the list could be somebody’s alias. Many false positive stories involve people with relatively common names.

Given all of this, the most likely story by far is that Prof. Murphy triggered an ordinary false positive in the no-fly system. These are very annoying to the affected person, and they happen much too often, but they aren’t targeted at particular people. We can’t entirely rule out the possibility that the name “Walter Murphy” was added to the no-fly list for political reasons, but it seems unlikely.

(The security implications of the false positive rate, and how the rate might be reduced, are interesting issues that will have to wait for another post.)


Judge Geeks Out, Says Cablevision DVR Infringes

In a decision that has triggered much debate, a Federal judge ruled recently that Cablevision’s Digital Video Recorder system infringes the copyrights in TV programs. It’s an unusual decision that deserves some unpacking.

First, some background. The case concerned Digital Video Recorder (DVR) technology, which lets cable TV customers record shows in digital storage and watch them later. TiVo is the best-known DVR technology, but many cable companies offer DVR-enabled set-top boxes.

Most cable-company DVRs are delivered as shiny set-top boxes which contain a computer programmed to store and replay programming, using an onboard hard disc drive for storage. The judge called this a Set-Top Storage DVR, or STS-DVR.

Cablevision’s system worked differently. Rather than putting a computer and hard drive into every consumer’s set-top box, Cablevision implemented the DVR functionality in its own data center. Everything looked the same to the user: you pushed buttons on a remote control to tell the system what to record, and to replay it later. The main difference is that rather than storing your recordings in a hard drive in your set-top box, Cablevision’s system stored them in a region allocated for you in some big storage server in Cablevision’s data center. The judge called this a Remote Storage DVR, or RS-DVR.

STS-DVRs are very similar to VCRs, which the Supreme Court found to be legal, so STS-DVRs are probably okay. Yet the judge found the RS-DVR to be infringing. How did he reach this conclusion?

For starters, the judge geeked out on the technical details. The first part of the opinion describes Cablevision’s implementation in great detail – I’m a techie, and it’s more detail than even I want to know. Only after unloading these details does the judge get around, on page 18 of the opinion, to the kind of procedural background that normally starts on page one or two of an opinion.

This matters because the judge’s ruling seems to hinge on the degree of similarity between RS-DVRs and STS-DVRs. By diving into the details, the judge finds many points of difference, which he uses to justify giving the two types of DVRs different legal treatment. Here’s an example (pp. 25-26):

In any event, Cablevision’s attempt to analogize the RS-DVR to the STS-DVR fails. The RS-DVR may have the look and feel of an STS-DVR … but “under the hood” the two types of DVRs are vastly different. For example, to effectuate the RS-DVR, Cablevision must reconfigure the linear channel programming signals received at its head-end by splitting the APS into a second stream, reformatting it through clamping, and routing it to the Arroyo servers. The STS-DVR does not require these activities. The STS-DVR can record directly to the hard drive located within the set-top box itself; it does not need the complex computer network and constant monitoring by Cablevision personnel necessary for the RS-DVR to record and store programming.

The judge sees the STS-DVR as simpler than the RS-DVR. Perhaps this is because he didn’t go “under the hood” in the STS-DVR, where he would have found a complicated computer system with its own internal stream processing, reformatting, and internal data transmission facilities, as well as complex software to control these functions. It’s not the exact same design as in the RS-DVR, but it’s closer than the judge seems to think.

All of this may have less impact than you might expect, because of the odd way the case was framed. Cablevision, for reasons known only to itself, had waived any fair use arguments, in exchange for the plaintiffs giving up any indirect liability claims (i.e., any claims that Cablevision was enabling infringement by its customers). What remained was a direct infringement claim against Cablevision – a claim that Cablevision itself (rather than its customers) was making copies of the programs – to which Cablevision was not allowed to raise a fair use defense.

The question, in other words, was who was recording the programming. Was Cablevision doing the recording, or were its customers doing the recording? The customers, by using their remote controls to navigate through on-screen menus, directed the technology to record certain programs, and controlled the playback. But the equipment that carried out those commands was owned by Cablevision and (mostly) located in Cablevision buildings. So who was doing the recording? The question doesn’t have a simple answer that I can see.

This general issue of who is responsible for the actions of complex computer systems crops up surprisingly often in law and policy disputes. There doesn’t seem to be a coherent theory about it, which is too bad, because it will only become more important as systems get more complicated and more tightly interconnected.


EMI To Sell DRM-Free Music

EMI, the world’s third largest record company, announced yesterday that it will sell its music without DRM (copy protection) on Apple’s iTunes Music Store. Songs will be available in two formats: the original DRMed format for the original $0.99 price, or a higher-fidelity DRM-free format for $1.29.

This is a huge step forward for EMI and the industry. Given the consumer demand for DRM-free music, and the inability of DRM to stop infringement, it was only a matter of time before the industry made this move. But there was considerable reluctance to take the first step, partly because a generation of industry executives had backed DRM-based strategies. The industry orthodoxy has been that DRM (a) reduces infringement a lot, and (b) doesn’t lower customer demand much. But EMI must disbelieve at least one of these two propositions; if not, its new strategy is irrational. (If removing DRM increases piracy a lot but doesn’t create many new customers, then it will cost EMI money.) Now that EMI has broken the ice, the migration to DRM-free music can proceed, to the ultimate benefit of record companies and law-abiding customers alike.

Still, it’s interesting how EMI and Apple decided to do this. The simple step would have been to sell only DRM-free music, at the familiar $0.99 price point, or perhaps at a higher price point. Instead, the companies chose to offer two versions, and to bundle DRM-freedom with higher fidelity, with a differentiated price 30% above the still-available original.

Why bundle higher fidelity with DRM-freedom? It seems unlikely that the customers who want higher fidelity are the same ones who want DRM-freedom. (Cory Doctorow argues that customers who want one are probably less likely to want the other.) Given the importance of the DRM issue to the industry, you’d think they would want good data on customer preferences, such as how many customers will pay thirty cents more to get DRM-freedom. By bundling DRM-freedom with another feature, the new offering will obscure that experiment.

Another possibility is that it’s Apple that wants to obscure the experiment. Apple has taken heat from European antitrust authorities for using DRM to lock customers in to the iTunes/iPod product line; the Euro-authorities would like Apple to open its system. If DRM-free tracks cost thirty cents extra, Apple would in effect be selling freedom from lockin for thirty cents a song – not something Apple wants to do while trying to convince the authorities that lockin isn’t a real problem. By bundling the lockin-freedom with something else (higher fidelity) Apple might obscure the fact that it is charging a premium for lockin-free music.

One effect of selling DRM-free music will be to increase the market for complementary products that make other (lawful) uses of music. Examples include non-Apple music players, jukebox software, collaborative recommendation systems, and so on. (DRM frustrates the use of such complements.) Complements will multiply and improve, which over time will make DRM-free music even more attractive to consumers. This process will take some time, so the full benefits of the new strategy to EMI won’t be evident immediately. Even if the switch to DRM-free music is only a break-even proposition for EMI in the short run, it will look better and better in the long run as complements create customer value, some of which will be capturable by EMI through higher prices or increased sales.

The growth of complements will also increase other companies’ incentives to sell DRM-free music. And each company that switches to DRM-free sales will only intensify this effect, boosting complements more and making DRM-free sales even more attractive to the remaining holdout companies. Expect a kind of tipping effect among the major record companies. This may not happen immediately, but over time it seems pretty much inevitable.

In the meantime, EMI will look like the most customer-friendly and tech-savvy major record company.