Archives for 2012

My Public Comments to the CA/Browser Forum Organizational Reform Working Group

Today, I submitted public comments to the CA/Browser Forum. CA/B Forum is an industry group started by Certificate Authorities — the companies that sell digital certificates to web sites so that your browser can encrypt your communications and can tell you whether it’s connecting to the genuine site. It is important that CAs do a good job, and there have been several examples of Bad Guys getting fraudulent certificates for major web sites recently. You can read the comments below, or download a pretty PDF version.
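
For readers who have never looked under the hood, here is a minimal sketch of that trust relationship: a TLS client, such as a browser, will only talk to a server whose certificate chains up to a CA the client already trusts. The sketch uses Python’s standard ssl module, and the hostname is just an illustrative placeholder.

    # Minimal sketch of the trust relationship: a TLS client accepts a server
    # only if its certificate chains up to a CA the client already trusts.
    # Uses Python's standard ssl module; the hostname is a placeholder.
    import socket
    import ssl

    hostname = "example.com"  # illustrative target site

    # Load the platform's default set of trusted CA certificates.
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as sock:
        # The TLS handshake fails here unless the server presents a certificate
        # that chains to a trusted CA and matches the requested hostname.
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print("Issued by:", dict(item[0] for item in cert["issuer"]))
            print("Valid until:", cert["notAfter"])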

Public Comments to the CA/Browser Forum Organizational Reform Working Group
March 30, 2012

I am pleased to respond to the CA/Browser Forum’s request for comments on its plan to establish an Organizational Reform Working Group.[1] For more than a decade, Internet users have relied upon digital certificates to encrypt and authenticate their most valuable communications. Nevertheless, few users understand the technical intricacies of the Public Key Infrastructure (PKI) and the policies that govern it. Their expectations of secure communication with validated third parties are set by the software that they use on a daily basis, typically web browsers, and by faith in the underlying certificates that are issued by Certificate Authorities (CAs). CAs and browser vendors have therefore been entrusted with critically important processes, and the public reasonably relies on them to observe current best practices and to relentlessly pursue even better practices in response to new threats.

Tech@FTC

Professor Ed Felten, while on loan to the Federal Trade Commission for 2011 and Spring 2012, has a new Tech Policy Blog, Tech@FTC. When he’s in his role as Chief Technologist of the FTC, he’ll blog there; when he’s wearing his regular hat as Professor of Computer Science and Director of the Center for Information Technology Policy, he’ll blog here at freedom-to-tinker.

Of course, the big news from the FTC this week is the official report, Protecting Consumer Privacy in an Era of Rapid Change, and I see that Ed has something to say about that. But he’s also got an article about SQL injection and our friend, little Bobby Tables.
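
If you need a refresher on why little Bobby Tables is funny, here is a short, self-contained sketch of SQL injection and the standard fix, parameterized queries, using Python’s built-in sqlite3 module; the table and the malicious input are made up for the example.

    # A quick illustration of the "Bobby Tables" problem using Python's
    # built-in sqlite3 module. The table and the malicious input are invented.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE students (name TEXT)")
    conn.execute("INSERT INTO students VALUES ('Alice')")

    name = "Robert'); DROP TABLE students;--"

    # Dangerous: splicing user input into SQL lets the input rewrite the query.
    unsafe_query = "SELECT * FROM students WHERE name = '%s'" % name
    print("Never run this:", unsafe_query)

    # Safe: a parameterized query treats the input strictly as data.
    rows = conn.execute("SELECT * FROM students WHERE name = ?", (name,)).fetchall()
    print(rows)  # [] -- the malicious string matches nothing and changes nothing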

Join Us at Princeton Tomorrow for "Copyright Cat-and-Mouse: New Developments in Online Enforcement"

Tomorrow afternoon, the Center for Information Technology Policy is hosting an event that looks at the state of online copyright enforcement and the policy perspectives of the parties involved. We’ve got a great lineup, with folks from the content industry, internet service providers, web companies, academics, and the press.

Date: Tuesday, March 13, 2012
Time: 1:00 PM – 5:00 PM
Location: The Friend Center, Princeton University, Convocation Room
Hashtag: #copyrightcitp

This conference is free and open to the public. Please register here.

Copyright enforcement in the digital era has been an ongoing game of cat-and-mouse. As new technologies emerge for storing and transmitting creative works, content creators struggle to identify the best response. The content industry has employed different tactics over time — including technological copy protection, litigation against infringers, and collaboration with Internet Service Providers (ISPs). In August of 2011, some members of the content industry signed an historic Memorandum of Understanding (MOU) with some of the largest ISPs, agreeing to a “graduated response” system of policing. ISPs agreed to notify their subscribers if allegedly infringing activity was detected from their connection and, if infringement continued after multiple warnings, to impede access. Meanwhile, a wave of “copyright troll” litigation has continued to sweep the country and burden the courts. Use of takedown notices under the Digital Millennium Copyright Act has continued to evolve. This event will examine enforcement efforts to date, and debate the merits of the new private approach embodied in the MOU framework.

New York, New Jersey, and Pennsylvania CLE credit is available for attorneys who attend. (details)

Keynote: Technology and Trends (1:00 PM – 1:30 PM)

Mike Freedman, Assistant Professor in Computer Science, Princeton University

Panel 1: The Existing US Legal Landscape (1:30 PM – 3:00 PM)

Moderator: Bart Huffman, Locke Lord LLP

  • Preston Padden, Adjunct Professor at Colorado Law School and former Executive VP of Government Relations, The Walt Disney Company
  • Timothy B. Lee, Ars Technica
  • Randy Cadenhead, Privacy Counsel, Cox Communications Inc.
  • Katherine Oyama, Copyright Counsel, Google Inc.

Break (3:00 PM – 3:30 PM)

Panel 2: The 2011 Content-ISP MoU (3:30 PM – 5:00 PM)

Moderator: Stephen Schultze, Princeton CITP

  • Joe Karaganis, Vice President, the American Assembly, Columbia University
  • Keith Epstein, Associate General Counsel at AT&T
  • Annemarie Bridy, Fellow, Princeton CITP
  • Daniel M. Mandil, Senior Vice President, Associate General Counsel, Litigation, Viacom Inc.

Don't Upset the Intellectual Property Fashion Police

A student group at the University of Pennsylvania Law School has put together a fantastic symposium on the state of fashion law, but along the way they (allegedly) snagged themselves on Louis Vuitton’s trademarks. After creating a poster with a creative parody of the Louis Vuitton logo, they received a Cease & Desist letter from the company’s attorneys claiming:

While every day Louis Vuitton knowingly faces the stark reality of battling and interdicting the proliferation of infringements of the LV Trademarks, I was dismayed to learn that the University of Pennsylvania Law School’s Penn Intellectual Property Group had misappropriated and modified the LV Trademarks and Toile Monogram as the background for its invitation and poster for the March 20, 2012 Annual Symposium on “IP Issues in Fashion Law.”

Ironically, the symposium aims to further education and understanding of the state of intellectual property protection in the fashion industry, and to discuss controversial new proposals to expand the scope of protection, such as the proposed bill H.R. 2511, the “Innovative Design Protection and Piracy Prevention Act”.

The attorneys at Penn responded by letter, indicating that Louis Vuitton’s complaint failed any conceivable interpretation of trademark law — outlining the standard claims such as confusion, blurring, or tarnishment — and asserting the obvious defenses provided by law for noncommercial and educational fair use. It indicated that the general counsel had told the students to “make it work” with the unmodified version of the poster, and concluded by inviting Louis Vuitton attorneys to attend the symposium (presumably to learn a bit more about how trademark law actually works).

I, for one, am offended that the Center for Information Technology Policy here at Princeton has not received any Cease & Desist letters accusing us of “egregious action [that] is not only a serious willful infringement” of fashion trademarks, but “may also mislead others into thinking that this type of unlawful behavior is somehow ‘legal’ or constitutes ‘fair use’.” You see, our lecture this Thursday at 12:30pm at Princeton by Deven Desai, “An Information Approach to Trademarks”, has a poster that includes portions of registered fashion industry trademarks as well. Attorneys from Christian Dior and Ralph Lauren, we welcome you to attend our event.

DHS OIG study of scanners silent on computer threats

The U.S. Department of Homeland Security Office of Inspector General (DHS OIG) released its report on the safety of airport backscatter machines on February 29. The report has received criticism from ProPublica, among others, for what it says as well as what it doesn’t, mostly focusing on issues of incremental risk to the traveling public, the large number of repair services, and the lack of data analyzing whether the machines serve their claimed purpose. (The report does not address millimeter wave machines, which most scientists believe are safer.)

But what’s surprising in both the report and the critiques of it is that they discuss only the radiation aspects of the machines when used as intended, and not the information systems embedded in the devices, or what happens if the scanners are used in unintended ways, as could happen with a computer system malfunction. Like any modern system, the scanners almost certainly contain a plethora of computer systems controlling the scanning beam, analyzing what the beam finds, and so on. It’s pretty likely that there are Windows and Linux systems embedded in the device, and it’s certain that the different parts of the device are networked together, for example so a technician in a separate room can see the images without seeing the person being scanned (as TSA has done to head off the complaints about invasion of privacy).

The computer systems are the parts that concern me the most. We should be concerned about security, safety, and privacy with such complex systems. But the report doesn’t use the word “software” even once, and the word “computer” is used twice in reference to training but not to the devices themselves.

On the safety front, we know that improperly designed software/hardware interaction can lead to serious and even fatal results – Nancy Leveson’s report on the failure of the Therac-25 system should be required reading for anyone considering building a software-controlled radiation management system, or anyone assessing the safety of such a system. We can hope that the hardware design of the scanners is such that even malicious software would be unable to cause the kind of failures that occurred with the Therac-25, but the OIG report gives no indication whether that risk was considered.
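
To make that concrete, here is a purely illustrative sketch of the kind of defense-in-depth check such a system should make before enabling the beam: the software enforces a dose limit, but an independent hardware interlock must also agree. Every name and number in it is hypothetical.

    # Purely illustrative: a defense-in-depth check a software-controlled
    # radiation source should apply before enabling the beam.
    # Every name and number here is hypothetical.

    MAX_DOSE_PER_SCAN_uSv = 0.1  # hypothetical software dose limit, microsieverts

    def beam_permitted(requested_dose_uSv, hardware_interlock_closed):
        """Allow the beam only if the software limit AND an independent
        hardware interlock both agree. The Therac-25 lesson: the software
        check alone is not a substitute for the hardware one."""
        if requested_dose_uSv <= 0 or requested_dose_uSv > MAX_DOSE_PER_SCAN_uSv:
            return False  # software-level sanity check on the requested dose
        if not hardware_interlock_closed:
            return False  # independent hardware interlock must also report safe
        return True

    # An out-of-range request is refused even when the interlock reports safe.
    assert beam_permitted(5.0, hardware_interlock_closed=True) is False
    assert beam_permitted(0.05, hardware_interlock_closed=True) is True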

On the security and privacy front, we know that the devices have software update capabilities – that became clear when they were “upgraded” to obscure the person’s face as a privacy measure, and from planned future upgrades that will provide only a body outline showing items of concern, rather than an actual image of the person. So what protections are in place to ensure that insiders or outsiders can’t install “custom” upgrades that leak images, or worse yet change the radiation characteristics of the machines? Consider the recent case of the Air Force drone control facility that was infected by malware, despite being a closed classified network – we should not assume that closed networks will remain closed, especially with the ease of carrying USB devices.
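
One standard safeguard, assuming the vendor signs its update images, is for the device to verify that signature before installing anything. Here is a minimal sketch using the third-party Python cryptography package; the file names and the signature scheme are illustrative assumptions, not a description of how the scanners actually handle updates.

    # Minimal sketch of one safeguard against unauthorized upgrades: refuse any
    # update image that is not signed by the vendor's key. Uses the third-party
    # "cryptography" package; the file names, key format, and signature scheme
    # are illustrative assumptions.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding

    def update_is_authentic(image_path, signature_path, vendor_pubkey_path):
        with open(vendor_pubkey_path, "rb") as f:
            public_key = serialization.load_pem_public_key(f.read())
        with open(image_path, "rb") as f:
            image = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            # Verify an RSA-PSS signature over the entire update image.
            public_key.verify(
                signature,
                image,
                padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                            salt_length=padding.PSS.MAX_LENGTH),
                hashes.SHA256(),
            )
            return True
        except InvalidSignature:
            return False

    # The device would install the image only if this check passes, e.g.:
    # if update_is_authentic("scanner_fw.bin", "scanner_fw.sig", "vendor.pem"):
    #     ...  # proceed with installation (hypothetical step)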

Since we know that the scanners include networks, what measures are in place to protect the networks, and to prevent their being attacked just like the networks used by government and private industry? Yes, it’s possible to build the devices as closed networks protected by encryption – and it’s also possible to accidentally or intentionally subvert those networks by connecting them up using wireless routers.

Yes, I know that the government has an extensive process in place for approving computer systems, known as Certification and Accreditation (C&A). Unfortunately, C&A processes tend to focus too much on the paperwork, and not enough on real-world threat assessments. And perhaps the C&A process used for the scanners really is good enough, but we just don’t know, and the OIG report, by neglecting to discuss the computer side of the scanners, gives no reassurance.

Over the past few years, Stuxnet and research into embedded devices such as those used in cars and medical devices have taught us that embedded systems software can impact the real world in surprising ways. And with software-controlled radiation devices potentially causing unseen damage, the risks to the traveling public are too great for the OIG to ignore this critical aspect of the machines.