
Entertainment Industry Pretending to Have Won Grokster Case

Most independent analysts agree that the entertainment industry didn’t get what it wanted from the Supreme Court’s Grokster ruling. Things look grim for the Grokster defendants themselves; but what the industry really wanted from the Court was a ruling that communication technologies that are widely used to infringe should not be allowed to exist, regardless of the behavior and intentions of the technologies’ creators. The Court rejected this theory.

Last week the Senate Commerce Committee held a hearing (a video stream is available) on the Grokster aftermath. This was a chance for witnesses representing various interests to put their official spin on the Grokster ruling. All of the witnesses praised the ruling and asked Congress to wait and see what develops, rather than legislating right away. But different witnesses put different spins on the ruling.

The entertainment industry line was presented by Mitch Bainwol of the RIAA, Fritz Attaway of the MPAA, and Gregory Kerber of Wurld Media (a music distribution service). Their strategy was essentially to pretend that the Court did give the industry what it wanted, and that P2P technologies were now presumptively illegal unless they had cut licensing deals with the industry. They didn’t argue this directly, but the message was clear. For example, they tried to draw a line between “legitimate” P2P technologies and others, where legitimacy was apparently achieved by signing a licensing deal with major recording or movie companies.

In response to concerns from Mark Heesen of the National Venture Capital Association, who warned that venture capitalists fear financial ruin from investing in even well-intentioned communication technology companies, Mr. Kerber said this:

It’s very clear how you get investment. The rules are there. We’re a P2P – we’re a real peer-to-peer – it’s centrally controlled, we can control that … we can respect the copyright holder’s wants during – through a contractual process.

And the way that investors realize that is when we go out and get deals with the record labels, movie studios; and … the venture capitalists do their due diligence, they call and they find out that … the content owner of these assets [says] yes, we will allow this to be transferred and distributed and sold … within – on the network.

So … it’s very, very clear. If you have a contract with a major label, indy label, movie studio, publisher, what they have said is, we will allow the content to be sold in this manner across our network. So I’m a little confused by – there’s an absolute clear path for an investor to understand what’s right and wrong in the process.

It’s a simple message. Investing in technologies that have been blessed by the entertainment industry: right; investing in other technologies: wrong.

But it’s not what the Court said. The Court rejected the proposition that P2P or other communication technologies can exist only at the pleasure of the entertainment industry.

Despite this, we can expect to hear more of this rhetoric of “legitimacy”. And when P2P technologies continue to exist and be popular, we can expect calls for legislation to control the scourge of “illegitimacy”.

WiFi Freeloading Now a Crime in U.K.

A British man has been fined and given a suspended prison sentence for connecting to a stranger’s WiFi access point without permission, according to a BBC story. There is no indication that he did anything improper while connected; all he did was to park his car in front of a stranger’s house and connect his laptop to the stranger’s open WiFi network. He was convicted of “dishonestly obtaining an electronic communications service”.

As the story notes, this case is quite different from previous WiFi-related convictions, in which people were convicted not of connecting to an open network but of committing other crimes, such as swiping financial information, once connected.

Most WiFi equipment operates in an open fashion by default, allowing anybody to connect. It’s well known that few people change their network settings. I used to find quite often that my laptop was connected accidentally to my neighbor’s WiFi network – failing to get a strong enough signal from my own (secured) network, the laptop would connect automatically to any open network it found.

Often the person who set up the network is happy to let strangers use it. Many businesses set up open access points to attract customers. Unfortunately, the technology offers no agreed-upon way for the network owner to say whether he welcomes connections. Taking steps to secure an access point is a clear statement that connections are not welcome; but many people worry that changing security settings will break their network, so the lack of security precautions doesn’t always indicate that the owner welcomes connections.

It would be nice if people used the SSID to indicate their preference. (Joe Gratz says he uses the SSID “PleaseUseSparingly”.) Changing the SSID is easy and is unlikely to break anything that is already working.
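There’s no standard for this, but a connecting laptop could at least honor informal naming conventions like Gratz’s. Here is a minimal sketch of what that client-side courtesy check might look like; the keyword lists are invented for illustration, since no agreed-upon SSID vocabulary actually exists:

```python
from typing import Optional

# Sketch of a client-side courtesy check. The keyword lists are
# invented for illustration -- no agreed-upon SSID vocabulary exists.
OPT_IN_HINTS = ("pleaseuse", "guest", "public", "free")
OPT_OUT_HINTS = ("private", "noguests", "keepout")

def connection_welcome(ssid: str) -> Optional[bool]:
    """True/False if the SSID states a preference, None if it is silent."""
    s = ssid.lower().replace(" ", "")
    if any(hint in s for hint in OPT_IN_HINTS):
        return True
    if any(hint in s for hint in OPT_OUT_HINTS):
        return False
    return None  # default names like "linksys" say nothing either way

print(connection_welcome("PleaseUseSparingly"))  # True
print(connection_welcome("linksys"))             # None
```

The interesting case is the `None` result: a default SSID tells you nothing, which is exactly the ambiguity that makes cases like this one hard.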

Another part of the BBC article is even scarier:

“There have been incidences where paedophiles deliberately leave their wireless networks open so that, if caught, they can say that it wasn’t them that used the network for illegal purposes,” said NetSurity’s Mr Cracknell.

Such a defence would hold little water as the person installing the network, be they a home user or a business, has ultimate responsibility for any criminal activity that takes place on that network, whether it be launching a hack attack or downloading illegal pornography.

I doubt this is true. If it is, everybody who runs a WiFi network is at risk of a long jail sentence.

ISS Caught in the Middle in Cisco Security Flap

The cybersecurity world is buzzing with news about Cisco’s attempt to silence Michael Lynn’s discussion of a serious security flaw in the company’s product. Here’s the chronology, which I have pieced together from news reports (so the obvious caveats apply):

Michael Lynn worked for ISS, a company that sells security scanning software. In the course of his work, he found a serious security flaw in IOS, the operating system that runs on Cisco’s routers. (Routers are specialized computers that shunt Internet packets from link to link, getting them gradually from source to destination. Cisco is the leading maker of routers.)

It has long been believed that a buffer overflow bug (the most common type of security bug) in IOS could be exploited by a remote party to crash the router, but not to seize control of it. What Lynn discovered is a way for an attacker to leverage a buffer overflow bug in IOS into full control over the router. Buffer overflow bugs are common, and Cisco routers handle nearly all Internet traffic, so this is a big problem.

Lynn was planning to discuss this in a presentation Wednesday at the Black Hat conference. At the last minute Cisco convinced ISS (Lynn’s employer) to cancel the talk. Cisco employees ripped Lynn’s paper out of every copy of the already-printed conference proceedings, and ISS ordered Lynn to speak on another topic during his scheduled slot at the conference.

Lynn quit his ISS job and gave a presentation about the Cisco flaw.

Cisco ran to court, asking for an injunction barring Lynn from further disclosing the information. They argued that the information was a trade secret and Lynn had obtained it illegally by reverse engineering.

The parties have now agreed that Lynn will destroy any documents or files he has on the topic, and will refrain from disclosing the information to anyone. The Black Hat organizers will destroy their videotape of Lynn’s presentation.

What distinguishes this from the standard “vendor tries to silence security researcher” narrative is the role of ISS. Recall that Lynn did his research as an ISS employee. This kind of research is critical to ISS’s business – it has to know about flaws before it can help protect its customers from them. Which means that ISS can’t be happy with the assertion that the research done in ISS’s lab was illegal.

So it looks like all of the parties lose. Cisco failed to cover up its security vulnerability, and only drew more attention with the legal threats. Lynn is out of a job. And ISS is the big loser, with its research enterprise potentially at risk.

The public, on the other hand, got useful information about the (in)security of the Internet infrastructure. Despite Cisco’s legal action, the information is out there – Lynn’s PowerPoint presentation is already available at Cryptome.

[Updated at 11:10 AM with minor modification to the description of what Lynn discovered, and to add the last sentence about the information reaching the public via Cryptome.]

Update (1:10 PM): The FBI is investigating whether Lynn committed a crime by giving his talk. The possible crime, apparently, was the alleged disclosure of ISS trade secrets.

U.S. Computer Science Malaise

There’s a debate going on now among U.S. computer science researchers and educators, about whether the U.S. as a nation is serious about maintaining its lead in computer science. We have been the envy of the world, drawing most of the world’s best and brightest in the field to our country, and laying the foundations of a huge industry that has fostered wealth and national power. But there is a growing sense within the field that all of this may be changing. This sense of malaise is a common topic around faculty water coolers across the country, and in speeches by industry figures like Bill Gates and Vint Cerf.

Whatever the cause – and more on that below – there are two main symptoms. First is a sharp decrease in funding for computer science research, especially in strategic areas such as cybersecurity. For example, DARPA, the Defense Department research agency that funded the early Internet and other breakthroughs, has cut its support for university computer science research by more than 40% in the last three years, and has redirected the remaining funding toward short-term advanced development efforts. Corporate research is not picking up the slack.

The second symptom, which in my view is more worrisome, is the sharp decrease in the number of students majoring in computer science. One reputable survey found a 60% drop in the last four years. One would have expected a drop after the dotcom crash – computer science enrollments have historically tracked industry business cycles – but this is a big drop! (At Princeton, we’ve been working hard to make our program more compelling, so we have seen a much smaller decrease.)

All this despite fundamentals that seem sound. Our research ideas seem as strong as ever (though research is inherently a hit-and-miss affair), and the job market for our graduates is still very strong, though not as overheated as a few years ago. Our curricula aren’t perfect but are better than ever. So what’s the problem?

The consensus seems to be that computer science has gotten a bad rap as a haven for antisocial, twinkie-fed nerds who spend their nights alone in cubicles wordlessly writing code, and their days snoring and drooling on office couches. Who would want to be one of them? Those of us in the field know that this stereotype is silly; that computer scientists do many things beyond coding; that we work in groups and like to have fun; and that nowadays computer science plays a role in almost every field of human endeavor.

Proposed remedies abound, most of them attempts to show people who computer scientists really are and what we really do. Stereotypes take a long time to overcome, but there’s no better time than the present to get started.

UPDATE (July 28): My colleagues Sanjeev Arora and Bernard Chazelle have a thoughtful essay on this issue in the August issue of Communications of the ACM.

Privacy, Price Discrimination, and Identification

Recently it was reported that Disney World is fingerprinting its customers. This raised obvious privacy concerns. People wondered why Disney would need that information, and what they were going to do with it.

As Eric Rescorla noted, the answer is almost surely price discrimination. Disney sells multi-day tickets at a discount. They don’t want people to buy (say) a ten-day ticket, use it for two days, and then resell the ticket to somebody else. Disney makes about $200 more by selling five separate two-day tickets than by selling a single ten-day ticket. To stop this, they fingerprint the users of such tickets and verify that the fingerprint associated with a ticket doesn’t change from day to day.
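The arithmetic is easy to see with illustrative numbers; the prices below are hypothetical, chosen only to produce the rough $200 gap:

```python
# Hypothetical prices, chosen only to illustrate the resale incentive;
# the real numbers will differ.
price_two_day = 100   # illustrative two-day ticket price, in dollars
price_ten_day = 300   # illustrative ten-day ticket price, in dollars

# Five people sharing one ten-day ticket pay $300 in total, versus
# 5 * $100 = $500 if each bought a two-day ticket.
lost_revenue = 5 * price_two_day - price_ten_day
print(lost_revenue)  # 200 -- roughly what resale costs Disney per ticket
```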

Price discrimination often leads to privacy worries, because some price discrimination strategies rely on the ability to identify individual customers so the seller knows what price to charge them. Such privacy worries seem to be intensifying as technology advances, since it is becoming easier to keep records about individual customers, easier to get information about customers from outside sources, and easier to design and manage complex price discrimination strategies.

On the other hand, some forms of price discrimination don’t depend on identifying customers. For example, early-bird discounts at restaurants cause customers to self-select into categories based on willingness to pay (those willing to come at an inconvenient time to get a lower price vs. those not willing) without needing to identify individuals.

Disney’s type of price discrimination falls into a middle ground. They don’t need to know who you are; all they need to know is that you are the same person who used the ticket yesterday. I think it’s possible to build a fingerprint-based system that stores just enough information to verify that a newly-presented fingerprint is the same one seen before, but without storing the fingerprint itself or even information useful in reconstructing or forging it. That would let Disney get what it needs to prevent ticket resale, without compromising customers’ fingerprints.
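For instance, the system could store only a salted cryptographic hash of the fingerprint template, keyed to the ticket. The sketch below assumes the scanner yields a stable template for the same finger; real readings are noisy, so a deployed system would need a fuzzy extractor or a similar scheme to stabilize them first, and because fingerprints have limited entropy the stored hash would still need protection against brute-force guessing. Still, it shows the idea of storing a verifier rather than the print itself:

```python
import hashlib, hmac, os

# Sketch: store a salted hash of the fingerprint template instead of
# the template itself. Assumes the scanner yields a stable template
# for the same finger (real readings are noisy and would need a fuzzy
# extractor first); fingerprint data is also low-entropy, so the hash
# would still need protection against brute-force guessing.

def enroll(template: bytes):
    """First use of a ticket: keep (salt, digest), discard the template."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + template).digest()
    return salt, digest

def same_person(template: bytes, salt: bytes, digest: bytes) -> bool:
    """Later use: does today's reading match the stored verifier?"""
    candidate = hashlib.sha256(salt + template).digest()
    return hmac.compare_digest(candidate, digest)

# Day one: enroll at the gate. Day two: verify the same finger.
salt, digest = enroll(b"stable-template-from-scanner")
print(same_person(b"stable-template-from-scanner", salt, digest))  # True
print(same_person(b"someone-else-entirely", salt, digest))         # False
```

The point of the design is that a stolen database of (salt, digest) pairs reveals far less than a database of raw fingerprints would.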

If this is possible, why isn’t Disney doing it? I can only guess, but I can think of two reasons. First, in designing identity-based systems, people seem to gravitate to designs that try to extract a “true identity”, despite the fact that this is more privacy-compromising and is often unnecessary. Second, if Disney sees customer privacy mainly as a public-relations issue, then they don’t have much incentive to design a more privacy-protective system, when ordinary customers can’t easily tell the difference.

Researchers have been saying for years that identification technologies can be designed cleverly to minimize unneeded information flows; but this suggestion hasn’t had much effect. Perhaps bad publicity over information leaks will cause companies to be more careful.