November 28, 2024

Bernard Lang Reports on the Proposed French DRM Law

[Bernard Lang, a prominent French computer scientist and infotech policy commentator, sent me an interesting message about the much-discussed legislative developments in France. It includes the first English translation I have seen of the proposed French law mandating open access to DRM technologies. He has graciously given me permission to post his message here, with some minor edits (mostly formatting) by me. Here is his report and commentary:]

The new French law on copyright (our own local version of the DMCA) is called DADVSI, for “Droit d’Auteur et Droits Voisins dans la Société de l’Information.” “Droits voisins” stands for derived activities and works, mainly the work of performing artists; I translate it below as “adjacent rights”, not knowing a better or standard translation.

This copyright law is supposed to transpose into French legislation the European Copyright Directive of 22 May 2001.

The law was sent through a fast-track procedure (meaning only one reading, rather than three, in each chamber), because it should have been passed long ago, and France may be fined by Brussels for being late. It has now passed the MPs’ reading. This single reading was supposed to take fifteen hours; it took sixty, and got more publicity than the government wanted. It will be submitted to the Senate in May. The current text and related documents are available online (just in case you read French and are interested).

I will not go into all the details of the law; I will keep to one aspect that is actually positive. The law also contains many regressions that go beyond the DMCA or anything accepted in other countries, such as the so-called “Vivendi-Universal” amendments, which have become articles 12-bis and 14-quater (this numbering is temporary) in the current text. These somewhat imprecise articles allow criminal (12 bis) or civil (14 quater) suits against software authors whose software is “manifestly” used for illegal access to works.

The point I want to discuss is mostly in article 7, which essentially tries to turn any technical protection measure (TPM) into an open standard. We are lucky in that we have here a legal definition of what an open standard is, which specifies that the standard must be freely usable (including that it is not encumbered by IP).

One interesting fact is that article 7 did not have most of these clauses when first voted on during the debate. Then, on the last day (night?) of the debate, after the last article, they reopened the debate on article 7 and voted in the current version at 3:00 am. This was not a complete surprise, since it was known that several majority MPs were negotiating with the government.

Article 7 of the law (I am losing some technical legal subtleties in the translation, for lack of knowledge of legal vocabulary) actually creates a new article in the French Intellectual Property Code, which states:

Article L. 331-5. –

Effective technical measures intended to prevent or limit uses unauthorised by the rightholder of a copyright or an adjacent right in any work (other than software), interpretation, phonogram, videogram or audiovisual program are legally protected under the conditions stipulated here.

Technical measures, in the sense of the previous paragraph, are understood as any technology, device, component, which, within the normal course of its operation, realizes the function intended in the previous paragraph. These technical measures are deemed effective when a use considered in the previous paragraph is controlled by means of an access code, a protection process, such as encryption, scrambling or any other transformation of the protected object, or a copy control mechanism, which achieves the protection objective.

A protocol, a format, a method for encryption, scrambling or transforming does not constitute as such a technical measure as understood in this article.

Technical measures must not have the effect of preventing the effective implementation of interoperability, while respecting copyright. Technical measures providers must give access to the information essential to interoperability.

By information essential to interoperability, we mean the technical documentation and the programming interfaces necessary to obtain, according to an open standard in the sense of article 4 of law n° 2004-575 of June 21, 2004, for trust in the digital economy, a copy of a reproduction protected by a technical measure, and a copy of the digitized information attached to this reproduction.

Anyone concerned may ask the president of the district court, in a fast-track procedure, to compel a technical measures provider to provide the information essential for interoperability. Only logistical costs can be charged in return by the provider.

Any person wishing to implement interoperability is allowed to carry out the decompilation steps that may be necessary to make the essential information available. This provision is applicable without prejudice to those of article L. 122-6-1. [note: this is the article regarding software interoperability that transposes into French law the part of the 1991 European directive regarding interoperability, among other provisions.]

Technical measures cannot be an obstacle to the free use of the work or the protected object within the limits of the rights set by this code [i.e. the French code of Intellectual Property] as well as those granted by the rights owners.

These stipulations are without prejudice to those of article 79-1 to 79-6 of law n° 86-1067 of September 30, 1986 regarding freedom of communication.

One cannot forbid the publication of the source code and technical documentation of independent software interoperating for legal purposes with a technical protection measure of a work.

No guarantees are offered for this translation, and I am not a lawyer 🙂

Some of the stipulations of this article are a little unclear, because other articles (13 and 14) may limit certain rights, especially those in the third paragraph from the bottom. … It is not clear which prevails.

This text does not say that TPMs must be open standards, but that they should be essentially like open standards, as long as they are not covered by patents … and we are not supposed to have software patents in Europe at this time.

Now there have been strong international reactions to this text, some of which are reviewed on my web site, in English and/or French.

I was particularly interested in the comment by U.S. Commerce Secretary Carlos Gutierrez, in an article, “Commerce chief supports Apple’s protest over French law,” from America’s Network on March 24:

“But any time something like this happens, any time that we believe that intellectual property rights are being violated, we need to speak up and, in this case, the company is taking the initiative,” AFP quoted [Gutierrez] as saying [on MSNBC]. “I would compliment that company because we need companies to also stand up for their intellectual property rights.”

This is interesting, because I have been supporting for some time the view that DMCA-like legislation was actually attempting to create a new intellectual property right, a “DRM right”, that gives exclusive rights to the initial users of a DRM format to develop software interacting with it. Of course, no one, to my knowledge, would actually acknowledge the fact. [This is similar to what Peter Jaszi and others have called “paracopyright” in the U.S. – Ed]

Interestingly, one purpose of this new IP right is to prey on cultural creation and creators by controlling the distribution channels, while pretending to offer what seems to be mostly an illusion of protection.

The limitations in the French law just restrict technical measures to being what they are supposed to be: a protective device (for whatever it is worth), without giving any control to people other than the (rightful?) rights owners of the work.

Without the interoperability required by the French law, DRMs (or TPMs if you prefer) behave pretty much like patents on formats and distribution models, without even requiring innovation or official application and examination, and without any time limit or compulsory licensing.

Now, I seem to recall an obscure American legal document stating that:

The Congress shall have Power […] To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries

is the basis for the existence of IP in the United States.

If indeed, as asserted by Mr. Carlos Gutierrez, the French law will infringe on Apple’s IP rights, then these rights can only be (in Europe, recall, there are no software patents) the new “DRM rights” I have been discussing above, which are a consequence of the DMCA.

But if that is the case, these “DRM rights” require no novelty, nor are they limited in time, even in a formal way. Hence they can only be unconstitutional.

There are other interesting comments in the press. My preferred ones are:

French on to something with iTunes law, say analysts
Reuters, ZDNet, March 20, 2006.

Analysts say the French are on to something that the rest of the world has yet to figure out: It needs to set rules for this new market now or risk one or two U.S. companies taking control of online access to music, video and TV.

France debates new tunes for iPod
Thomas Crampton, International Herald Tribune, March 17, 2006.

“The French government’s approach is bold and the only one that makes sense,” said Michael Bartholomew, the director of the European Telecommunications Network Operators’ Association, a trade group based in Brussels.

And apparently, some professional organizations are finally coming to understand on which side their bread is buttered:

France May Force Apple to Open Up iTunes as Bill Moves Ahead
Rudy Ruitenberg, Bloomberg, March 20, 2006.

“The music industry is in favor of interoperability, it would make music accessible on more platforms. It’s quite a technical and complex provision, so it’s not quite clear how it’s going to work in practice,” [Olivia] Regnier [European regional counsel for the London-based International Federation of the Phonographic Industry] said.

The irony of this is that it is the free software organizations, presented by the “cultural community” (read “those who make pots of money in the name of culture”) as the utmost evil, who have been fighting for this interoperability clause.

I remember that, while some partners and I were being heard by government officials, their faces expressed surprise that we worried that artists should be able to publish, and possibly protect, their work freely, without having to submit to the technology-leveraged market control of a few large companies. My feeling was that no one else had expressed that concern before.

And, as usual, France Is Saving Civilization. But for the first time, Americans recognize the fact 🙂

How France Is Saving Civilization
Leander Kahney, Wired, March 22, 2006.

Well, that is all. I still have to read the weekend developments and prepare for the Senate reading of the law.

Apples, Oranges, and DRM

Last week mp3.com reported on its testing of portable music players, which showed that playing DRM (copy-protected) songs drained battery power 25% faster in Windows Media players, and 8% faster on iPods, than playing the same songs using the unprotected MP3 format. As more information came to light, it became clear that they hadn’t done a completely fair apples-to-apples comparison, and the story faded from view.

Critics pointed out that the story compared DRMed files at one level of compression to MP3 files at a different level of compression – the DRMed files were just bigger, so of course they would eat more battery power. It’s a valid criticism, but we shouldn’t let it obscure the real issue, because the battery-life story has something to teach us despite its flaws.
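To see why the compression levels matter so much, here is a back-of-the-envelope calculation. The numbers are illustrative choices of my own, not measurements from the mp3.com tests; a constant-bitrate file’s size is just bitrate times duration.

```python
SONG_SECONDS = 4 * 60  # a typical four-minute song

for label, kbps in [("128 kbps MP3", 128), ("192 kbps file", 192)]:
    megabytes = kbps * 1000 / 8 * SONG_SECONDS / 1_000_000
    print(f"{label}: {megabytes:.1f} MB per song")

# 128 kbps -> ~3.8 MB; 192 kbps -> ~5.8 MB. The 192 kbps file is 50%
# bigger, so more bits must be read and decoded per song; comparing it
# against a 128 kbps MP3 measures file size as much as DRM overhead.
```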

Different file formats offer different tradeoffs between storage space, battery life, and audio quality. And, of course, audio quality is not a single dimension – some dimensions of quality may matter more to you than to me. Your preference in formats may be different from mine. It may even be different from the preference you had last week – maybe you care more about storage space this week, or more about battery life, because you’ll be listening to music more, with fewer opportunities to recharge.

This is where DRM hurts you. In the absence of DRM, you’re free to store and use your music in the format, and the level of compression, that suits your needs best. DRM takes away that option, giving you only one choice, or at most a few choices. That leaves you with a service that doesn’t meet your needs as well as a non-DRM one would.

Grocery stores know the true point of the apples-to-oranges comparison. Apples and oranges are different. Some customers want one and some want the other. So why not offer both?

Nuts and Bolts of Net Discrimination: Encryption

I’ve written several times recently about the technical details of network discrimination, because understanding these details is useful in the network neutrality debate. Today I want to talk about the role of encryption.

Scenarios for network discrimination typically involve an Internet Service Provider (ISP) who looks at users’ traffic and imposes delays or other performance penalties on certain types of traffic. To do this, the ISP must be able to tell the targeted data packets apart from ordinary packets. For example, if the ISP wants to penalize VoIP (Internet telephony) traffic, it must be able to distinguish VoIP packets from ordinary packets.
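To make the ISP’s side of this concrete, here is a toy sketch of header-based classification. The port numbers are real conventions (SIP signaling typically uses UDP port 5060), but the packet representation and the rules are invented for illustration; real ISP equipment does this in specialized hardware at line rate.

```python
SIP_PORT = 5060                  # standard SIP signaling port
RTP_RANGE = range(16384, 32768)  # a common, but not universal, RTP port range

def classify(packet: dict) -> str:
    """Guess the application from unencrypted header fields."""
    if packet["protocol"] == "UDP":
        if packet["dst_port"] == SIP_PORT or packet["dst_port"] in RTP_RANGE:
            return "voip"
    if packet["protocol"] == "TCP" and packet["dst_port"] in (80, 443):
        return "web"
    return "other"

# A discriminating ISP could then delay everything classified as "voip".
for pkt in [{"protocol": "UDP", "dst_port": 5060},
            {"protocol": "TCP", "dst_port": 80}]:
    print(pkt["dst_port"], "->", classify(pkt))
```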

One way for users to fight back is to encrypt their packets, on the theory that encrypted packets will all look like gibberish to the ISP, so the ISP won’t be able to tell one type of packet from another.

To do this, the user would probably use a Virtual Private Network (VPN). The idea is that whenever the user’s computer wanted to send a packet, it would encrypt that packet and then send the encrypted packet to a “gateway” computer that was outside the ISP’s network. The gateway computer would then decrypt the packet and send it on to its intended destination. Incoming packets would follow the same path in reverse – they would be sent to the gateway, where they would be encrypted and forwarded on to the user’s computer. The ISP would see nothing but a bi-directional stream of packets, all encrypted, flowing between the user’s computer and the gateway.
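Here is a minimal sketch of the encryption step, using Python’s cryptography package. The actual network transport is elided; the point is only that the ISP sees an opaque blob in both directions.

```python
from cryptography.fernet import Fernet

# Key shared in advance between the user's machine and the gateway.
key = Fernet.generate_key()

# --- user's computer: seal each packet before it crosses the ISP ---
user_side = Fernet(key)
packet = b"VoIP payload: hello"
sealed = user_side.encrypt(packet)
# `sealed` is all the ISP can observe; it cannot tell VoIP from web traffic.

# --- gateway, outside the ISP's network: unseal and forward ---
gateway_side = Fernet(key)
recovered = gateway_side.decrypt(sealed)
assert recovered == packet
# ...the gateway would now forward `recovered` to its real destination.
print("round trip ok:", recovered)
```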

The most the user can hope for from a VPN is to force the ISP to handle all of the user’s packets in the same way. The ISP can still penalize all of the user’s packets, or it can single out randomly chosen packets for special treatment, but those are the only forms of discrimination available to it. The VPN has some cost – packets must be encrypted, decrypted, and forwarded – but the user might consider it worthwhile if it stops network discrimination.

(In practice, things are a bit more complicated. The ISP might be able to infer which packets are which by observing the size and timing of packets. For example, a sequence of packets, all of a certain size and flowing with metronome-like regularity in both directions, is probably a voice conversation. The user might use countermeasures, such as altering the size and timing of packets, but that can be costly too. To simplify our discussion, let’s pretend that the VPN gives the ISP no way to distinguish packets from each other.)
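For the curious, here is a toy version of the size-and-timing analysis just described. The thresholds are illustrative guesses of mine, not values from any real traffic classifier; the point is that only packet sizes and timestamps are needed, never the payload.

```python
import statistics

def looks_like_voip(sizes, gaps_ms):
    """Heuristic traffic analysis on an *encrypted* flow: voice codecs emit
    small, uniform packets at metronome-like intervals (e.g., every 20 ms)."""
    uniform_size = statistics.pstdev(sizes) < 5 and max(sizes) < 300
    regular_timing = statistics.pstdev(gaps_ms) < 2
    return uniform_size and regular_timing

# Simulated observations: (packet sizes in bytes, inter-arrival gaps in ms).
voip_flow = ([172] * 50, [20.0] * 50)                    # 20 ms voice frames
web_flow = ([1500, 40, 1500, 52, 980], [3, 110, 9, 400, 12])

print(looks_like_voip(*voip_flow))  # True
print(looks_like_voip(*web_flow))   # False
```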

The VPN user and the ISP are playing an interesting game of chicken. The ISP wants to discriminate against some of the user’s packets, but doesn’t want to inconvenience the user so badly that the user discontinues the service (or demands a much lower price). The user responds by making his packets indistinguishable and daring the ISP to discriminate against all of them. The ISP can back down, by easing off on discrimination in order to keep the user happy – or the ISP can call the user’s bluff and hamper all or most of the user’s traffic.

But the ISP may have a different and more effective strategy. If the ISP wants to hamper a particular application, and there is a way to manipulate the user’s traffic that affects that application much more than it does other applications, then the ISP has a way to punish the targeted application. Recall my previous discussion of how VoIP is especially sensitive to jitter (unpredictable changes in delay), but most other applications can tolerate jitter without much trouble. If the ISP imposes jitter on all of the user’s packets, the result will be a big problem for VoIP apps, but not much impact on other apps.
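A quick simulation, with made-up numbers, shows how selective this weapon is: the same jitter that makes a voice call unusable barely slows a bulk transfer.

```python
import random

random.seed(1)

BASE_DELAY_MS = 30        # normal one-way delay (illustrative)
PLAYOUT_DEADLINE_MS = 60  # a voice packet arriving later than this is useless

def delays(n, jitter_ms):
    """One-way delays for n packets, with up to jitter_ms of added jitter."""
    return [BASE_DELAY_MS + random.uniform(0, jitter_ms) for _ in range(n)]

for jitter in (5, 100):
    d = delays(1000, jitter)
    late = sum(x > PLAYOUT_DEADLINE_MS for x in d)
    extra = sum(d) / len(d) - BASE_DELAY_MS
    print(f"jitter up to {jitter:3d} ms: {late / 10:.1f}% of voice packets "
          f"miss playout; a bulk transfer is merely ~{extra:.0f} ms/packet slower")
```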

So it turns out that even using a VPN, and encrypting everything in sight, isn’t necessarily enough to shield a user from network discrimination. Discrimination can work in subtle ways.

Facebook and the Campus Cops

An interesting mini-controversy developed at Princeton last week over the use of the Facebook.com web site by Princeton’s Public Safety officers (i.e., the campus police).

If you’re not familiar with Facebook, you must not be spending much time on a college campus. Facebook is a sort of social networking site for college students, faculty and staff (but mostly students). You can set up a home page with your picture and other information about you. You can make links to your friends’ pages, by mutual consent. You can post photos on your page. You can post comments on your friends’ pages. You can form groups based on some shared interest, and people can join the groups.

The controversy started with a story in the Daily Princetonian revealing that Public Safety had used Facebook in two investigations. In one case, a student’s friend posted a photo taken at a party in the student’s room; the photo reportedly showed the student hosting a dorm-room party where alcohol was served, which is a violation of campus rules. In another case, a group of students liked to climb up the sides of buildings on campus. They had set up a building-climbers’ group on Facebook, and Public Safety reportedly used the group to identify its members, so as to have Serious Discussions with them.

Some students reacted with outrage, seeing this as an invasion of privacy and an unfair tactic by Public Safety. I find this reaction really interesting.

Students who stop to think about how Facebook works will realize that it’s not very private. Anybody with a princeton.edu email address can get an account on the Princeton Facebook site and view pages. That’s a large group, including current students, alumni, faculty, and staff. (Public Safety officers are staff members.)

And yet students seem to think of Facebook as somehow private, and they continue to post lots of private information on the site. A few weeks ago, I surfed around the site at random. Within two or three minutes I spotted Student A’s page saying, in a matter of fact way, that Student A had recently slept with Student B. Student B’s page confirmed this event, and described what it was like. Look around on the site and you’ll see many descriptions of private activities, indiscretions, and rule-breaking.

I have to admit that I find this pretty hard to understand. Regular readers of this blog know that I reveal almost nothing about my personal life. If you have read carefully over the last three and a half years, you have learned that I live in the Princeton area, am married, and have at least one child (of unspecified age(s)). Not exactly tabloid material. Some bloggers say more – a lot more – but I am more comfortable this way. Anyway, if I did write about my personal life, I would expect that everybody in the world would find out what I wrote, assuming they cared.

It’s easy to see why Public Safety might be interested in reading Facebook, and why students might want to keep Public Safety away. In the end, Public Safety stated that it would not hunt around randomly on Facebook, but it would continue to use Facebook as a tool in specific investigations. Many people consider this a reasonable compromise. It feels right to me, though I can’t quite articulate why.

Expect this to become an issue on other campuses too.

NYU/Princeton Spyware Workshop Liveblog

Today I’m at the NYU/Princeton spyware workshop. I’ll be liveblogging the workshop here. I won’t give you copious notes on what each speaker says, just a list of things that strike me as interesting. Videos of the presentations will be available on the net eventually.

I gave a basic tutorial on spyware last night, to kick off the workshop.

The first panel today is officially about the nature of the spyware problem, but it’s shaping up as the law enforcement panel. The first speaker is Mark Eckenwiler from the U.S. Department of Justice. He is summarizing the various Federal statutes that can be used against spyware purveyors, including statutes against wiretapping and computer intrusions. One issue I hadn’t heard before involves how to prove that a particular spyware purveyor caused harm, if the victim’s computer was also infected with lots of other spyware from other sources.

Second speaker is Eileen Harrington of the Federal Trade Commission. The FTC has two main roles here: to enforce laws, especially relating to unfair and deceptive business practices, and to run hearings and study issues. In 1995 the FTC ran a series of hearings on online consumer protection, which identified privacy as important but didn’t identify spam or spyware. In recent years their focus has shifted more toward spyware. FTC enforcement is based on three principles: the computer belongs to the consumer; disclosure can’t be buried in a EULA; and software must be reasonably removable. These seem sensible to me. She recommends a consumer education website created by the FTC and other government agencies.

Third speaker is Justin Brookman of the New York Attorney General’s office. To them, consent is the biggest issue. He is skeptical of state spyware laws, saying they are often too narrow and require a high level of intent to be proven for civil liability. Instead, his office enforces based on laws against deceptive business practices and false advertising, and on trespass to chattels. They focus on the consumer experience, and don’t always need to dig very deeply into the technical details. He says music lyric sites are often spyware-laden. In one case, a screen saver came with a 188-page EULA, which mentioned the included adware on page 131. He raises the issue of when companies are responsible for what their “affiliates” do.

Final speaker of the first panel is Ari Schwartz of CDT, who runs the Anti-Spyware Coalition. The ASC is a big coalition of public-interest groups, companies, and others, formed to build consensus around a definition of spyware and principles for dealing with it. The definition problem is both harder and more important than you might think. The goal was to create a broadly accepted definition, to short-circuit debates about whether particular pieces of unpleasant software are or are not spyware. He says that many of the harms caused by software are well addressed by existing law (identity theft, extortion, corporate espionage, etc.), but general privacy invasions are not. In what looks like a recurring theme for the workshop, he talks about how spyware purveyors use intermediaries (“affiliates”) to create plausible deniability. He shows a hair-raising chain of emails obtained in discovery in an FTC case against Sanford Wallace and associates. This was apparently an extortion-type scheme, in which extreme spyware was locked onto a user’s computer and the antidote was sold to the user for $30.

Question to the panel about what happens if the perpetrator is overseas. Eileen Harrington says that if there are money flows, they can freeze assets or sometimes get money repatriated from overseas. The FTC wants statutory changes to foster information exchange with other governments. Ari Schwartz says advertisers, ad agencies, and adware makers are mostly in the U.S. Distribution of software is sometimes from the U.S., sometimes from Eastern Europe, the former Soviet Union, or Asia.

Q&A discussion of how spyware programs attack each other. Justin Brookman talks about a case where one spyware company sued another spyware company over this.

The second panel is on “motives, incentives, and causes”. It’s two engineers and two lawyers. First is Eric Allred, an engineer from Microsoft’s antispyware group. “Why is this going on? For the money.”

Eric talks about game programs that use spyware tactics to fight cheating code, e.g. the “warden” in World of Warcraft. He talks about products that check quality of service or performance provided by, e.g., network software, by tracking some behaviors. He thinks this is okay with adequate notice and consent.

He takes a poll of the room. Only a few people admit to having their machines infected by spyware – I’ll bet people are underreporting. Most people say that friends have caught spyware.

Second speaker is Markus Jakobsson, an engineer from Indiana University and RavenWhite. He is interested in phishing and pharming, and the means by which sites can gather information about you. As a demonstration, he says his home page tells you where you do your online banking.

He describes an experiment they did that simulated phishing against IU students. Lots of people fell for it. Interestingly, people with political views on the far left or far right were more likely to fall for it than people with more moderate views. The experimental subjects were really mad (but the experiment had proper institutional review board approval).

“My conclusion is that user education does not work.”

Third is Paul Ohm, a law professor at Colorado. He was previously a prosecutor at the DOJ. He talks about the “myth of the superuser”. (I would have said “superattacker”.) He argues that Internet crime policy is wrongly aimed to stop the superuser.

What happens? Congress writes prohibitions that are broad and vague. Prosecutors and civil litigants use the broad language to pursue novel theories. Innocent people get swept in.

He conjectures that most spyware purveyors aren’t technological superusers. In general, he argues that legislation should focus on non-superuser methods and harms.

He talks about the SPYBLOCK Act language, which bans certain actions, if done with certain bad intent. “The FBI agent stops reading after the list of actions.”

Fourth is Marc Rotenberg from EPIC. His talk is structured as a list of observations, presented in random order. I’ll repeat some of them here. (1) People tend to behave opportunistically online – extract information if you can. (2) “Spyware is a crime of architectural opportunity.” (3) Motivations for spyware: money, control, exploitation, investigation.

He argues that cookies are spyware. This is a controversial view. He argues for reimagining cookies or how users can control them.

Q&A session begins. Alex asks Paul Ohm whether it makes sense in the long run to focus on attackers who aren’t super, given that attackers can adapt. Paul says, first, that he hopes technologists will help stop the superattackers. (The myth of the super-defender?) He advocates a more incremental and adaptive approach to drafting the statutes; aim at the 80% case, then adjust every few years.

Question to Marc Rotenberg about what can be done about cookies. Marc says that originally cookies contained, legibly, the information they represented, such as your zip code. But before long cookies morphed into unique identifiers, opaque to the user. Eric Allred points out that the cookies can be strongly, cryptographically opaque to users.
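Marc’s point about the shift from legible cookies to opaque identifiers is easy to illustrate. In this sketch (the field names and values are invented), the user can audit the first cookie but learns nothing from the second, while the server can accumulate anything behind it.

```python
import secrets

# Early-web style: the cookie itself carries legible information.
legible_cookie = "zipcode=08540; lang=en"

# Later style: the cookie is an opaque key into a server-side dossier.
opaque_id = secrets.token_hex(16)   # e.g. '9f86d081884c7d65...'
server_side_profile = {
    opaque_id: {
        "zipcode": "08540",
        "pages_viewed": ["/news", "/health/condition-x"],
        "ad_segments": ["frequent-traveler"],
    }
}

print("user can read:", legible_cookie)
print("user can read:", opaque_id, "(and learn nothing from it)")
```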

The final session is on solutions. Ben Edelman speaks first. He shows a series of examples of unsavory practices, relating to installation without full consent and to revenue sources for adware.

He shows a scenario where a Netflix popup ad appears when a user visits blockbuster.com. This happened through a series of intermediaries – seven HTTP redirects – to pop up the ad. Netflix paid LinkShare, LinkShare paid Azoogle, Azoogle paid MyGeek, and MyGeek paid DirectRevenue. He’s got lots of examples like this, from different mainstream ad services.
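For readers who want to see such chains for themselves, here is a rough sketch of tracing redirects one hop at a time in Python. The starting URL is a placeholder, and I am not suggesting this is how Edelman gathered his data.

```python
import requests
from urllib.parse import urljoin

def trace_redirects(url: str, max_hops: int = 10) -> list:
    """Follow HTTP redirects one hop at a time, recording each stop."""
    chain = [url]
    for _ in range(max_hops):
        resp = requests.get(url, allow_redirects=False, timeout=10)
        if resp.status_code not in (301, 302, 303, 307, 308):
            break
        url = urljoin(url, resp.headers["Location"])  # Location may be relative
        chain.append(url)
    return chain

for hop in trace_redirects("http://ads.example.com/click?id=123"):
    print(hop)
```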

He shows an example of Google AdSense ads popping up in 180solutions adware popup windows. He says he found 4600+ URLs where this happened (as of last June).

Orin Kerr speaks next. “The purpose of my talk is to suggest that there are no good ways for the law to handle the spyware problem.” He suggests that technical solutions are a better idea. A pattern today: lawyers want to rely more on technical solutions, technologists want to rely more on law.

He says criminal law works best when the person being prosecuted is clearly evil, even to a juror who doesn’t understand much about what happened. He says that spyware purveyors more often operate in a hazy gray area – so criminal prosecution doesn’t look like the right tool.

He says civil suits by private parties may not work, because defendants don’t have deep enough pockets to make serious suits worthwhile.

He says civil suits by government (e.g., the FTC) may not work, because they have weaker investigative powers than criminal investigators, especially against fly-by-night companies.

It seems to me that his arguments mostly rely on the shady, elusive nature of spyware companies. Civil actions may still work against large companies that portray themselves as legitimate, so they may at least have the benefit of driving spyware vendors underground, which could make it harder for those vendors to sell to some advertisers.

Ira Rubinstein of Microsoft is next. His title is “Code Signing As a Spyware Solution”. He describes (the 64-bit version of) Windows Vista, which will require any kernel-mode software to be digitally signed. This is aimed at stopping rootkits and other kernel-mode exploits. It sounds quite similar to Authenticode, Microsoft’s longstanding signing infrastructure for ActiveX controls.
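The core of any such scheme is a signature check before code is allowed to run. Here is a toy version using the cryptography package’s Ed25519 API; Vista actually uses X.509 certificate chains to trusted roots, so treat this as a sketch of the idea only.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At publishing time: the publisher signs the driver image.
publisher_key = Ed25519PrivateKey.generate()
driver_image = b"\x4d\x5a...driver bytes..."
signature = publisher_key.sign(driver_image)

# Baked into the OS: the publisher's public key (really, a cert chain).
trusted_key = publisher_key.public_key()

# At load time: the kernel refuses anything that fails verification.
for image in (driver_image, driver_image + b" rootkit patch"):
    try:
        trusted_key.verify(signature, image)  # raises on any modification
        print("signature valid: loading driver")
    except InvalidSignature:
        print("signature check failed: refusing to load")
```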

Mark Miller of HP is the last speaker. His talk starts with an End-User Listening Agreement, in which everyone in the audience must agree that he can read our minds and redistribute what he learns. He says that we’re not concerned about this because it’s infeasible for him to install hostile code into our brains.

He points out that the Solitaire program has the power to read, analyze or transmit any data on the computer. Any other program can do the same. He argues that we need to obey the principle of least privilege. It seems to me that we already have all the tools to do this, but people don’t do it.
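Miller’s own work is on object-capability systems, where least privilege means handing a program references only to what it needs. Here is a toy Python rendering of that idea; the names are invented, and Python itself does not enforce the confinement, so this is only a sketch of the principle.

```python
class HighScoreFile:
    """The one file the game legitimately needs; nothing else."""
    def __init__(self, path: str):
        self._path = path

    def read(self) -> str:
        try:
            with open(self._path) as f:
                return f.read()
        except FileNotFoundError:
            return ""

    def write(self, data: str) -> None:
        with open(self._path, "w") as f:
            f.write(data)

def solitaire(high_scores: HighScoreFile) -> str:
    # The game uses exactly the authority it was handed. In a language
    # that enforced capabilities (Python does not), it would have no way
    # to reach the rest of your files or the network, however malicious
    # its author might be.
    high_scores.write("alice: 1200\n")
    return high_scores.read()

print(solitaire(HighScoreFile("scores.txt")))
```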

He shows an example of how to stop a browser from leaking your secrets: either don’t let it connect to the Net, or don’t let it read any local files. But even a simple browser needs to do both, so neither restriction is workable. This is not a convincing demo.

In the Q&A, Ben Edelman recommends Eric Howes’s web site as a list of which antispyware tools are legit and which are bogus or dangerous.

Orin Kerr is asked whether we should just give up on using the law. He says no, we should use the law to chip away at the problem, but we shouldn’t expect it to solve the problem entirely. Justin Brookman challenges Orin, saying that civil subpoena power seems to work for Justin’s group at the NY AG office. Orin backtracks slightly but sticks to his basic point that spyware vendors will adapt or evolve into forms more resistant to enforcement.

Alex asks Orin how law and technology might work together to attack the problem. Orin says he doesn’t see a grand solution, just incremental chipping away at the problem. Ira Rubinstein says that law can adjust incentives, to foster adoption of better technology approaches.

And our day draws to a close. All in all, it was a very interesting and thought-provoking discussion. I wish it had been longer – which I rarely say at the end of this kind of event.