

CITP Announces 2009-10 Visitors

Today, I’m pleased to announce CITP’s visitors for the upcoming academic year.

Deven R. Desai, Visiting Fellow: Deven is an Associate Professor of Law at the Thomas Jefferson School of Law, and a permanent blogger at Concurring Opinions. Professor Desai’s scholarship centers on intellectual property, information theory, and Internet-related law. He plans to work on a major project exploring the ways trademark law can foster, or limit, online innovation.

James Katz, Visiting Fellow: Jim is Professor, Chair of the Department of Communication, and Director of the Center for Mobile Communication Studies at Rutgers, where he holds the University’s highest professorial rank. He has devoted much of his career to exploring the social consequences of new communication technology, especially the mobile phone and Internet. Currently he is looking at how personal communication technologies can be used by teens from urban environments to engage in informal science and health learning. This research is being carried out through an NSF-sponsored project with New Jersey’s Liberty Science Center.

Rebecca MacKinnon, Visiting Fellow (spring term): Rebecca is an Assistant Professor at the University of Hong Kong’s Journalism and Media Studies Centre. She is currently on leave, as an Open Society Fellow, to work on a book tentatively titled “Internet Freedom and Control: Lessons from China for the World.” She will spend the spring 2010 semester at CITP, continuing to work on the book. Rebecca is a cofounder of Global Voices, a founding member of the Global Network Initiative, and a former television journalist, having served as CNN’s bureau chief in Beijing and, later, Tokyo.

Jens Grossklags, Postdoctoral Research Associate: Jens, a new PhD from the UC Berkeley School of Information, studies information economics and technology policy. He focuses on the intersection of privacy, security, and network systems. His approach is highly interdisciplinary, combining economics, computer science, and public policy. Currently, he is investigating the ways institutions and end users make decisions about complex computer security risks under conditions of uncertainty and limited information.

Joseph Lorenzo Hall, Visiting Postdoctoral Research Associate: Joe, whose work is supported by the NSF ACCURATE Center, also earned his PhD from the UC Berkeley School of Information. His dissertation examined public policy mechanisms for making computerized voting systems more transparent. He continues to work along the same lines, drawing lessons from voting machines, gaming machines, and other technologies on how best to protect users from error and malicious activity.

In addition to these full-time appointments, the Center will welcome two Visiting Research Collaborators on an occasional basis: Alex Halderman, an Assistant Professor of Computer Science at the University of Michigan (and recently in the news for his research group’s analysis of China’s Green Dam software), and David Lukens, an attorney who has been collaborating on the Center’s transparency work.


Did the Sanford E-Mail Tipster or the Newspaper Break the Law?

Part of me doesn’t want to comment on the Mark Sanford news, because it’s all so tawdry and inconsistent with the respectable, family-friendly tone of Freedom to Tinker. But since everybody from the Gray Lady on down is plastering the web with stories, and because all of this reporting is leaving unanalyzed some Internet law questions, let me offer this:

On Wednesday, after Sanford’s confessional press conference, The State, the largest newspaper in South Carolina, posted email messages appearing to be love letters between the Governor and his mistress. (The paper obscured the name of the mistress, calling her only “Maria.”) The paper explained in a related news story that it had received these messages from an anonymous tipster back in December, but until yesterday’s unexpected corroboration of their likely authenticity, it had just sat on them.

Did the anonymous tipster break the law by obtaining or disclosing the email messages? Did the paper break the law by publishing them? After the jump, I’ll offer my take on these questions.

Three disclaimers: First, the paper has not yet revealed (and may not even know) most of the important facts I would need to know to thoroughly analyze whether a law has been broken. Like a first-year law student, I am trying to spot legal issues that will turn on what might be the facts. Second, I know nothing about the law of South Carolina (or, for that matter, Argentina). I am analyzing three specific federal laws with which I am very familiar. Third, I am barely scratching the surface of some very complex laws.

The Anonymous Tipster

Let’s start with the anonymous tipster (AT). AT might have broken three federal laws, depending on who AT is and how he or she obtained the messages. First, the Stored Communications Act (SCA) prohibits unauthorized access to a “facility through which an electronic communication service is provided” to obtain messages “in electronic storage.” In a separate provision, the SCA prohibits providers from disclosing the content of user communications. Second, the Wiretap Act prohibits the interception of electronic communications and the disclosure and use of illegally intercepted communications. Third, the Computer Fraud and Abuse Act (CFAA) prohibits certain types of unauthorized conduct on computers and computer networks.

All three of these laws provide both civil remedies (Maria, Sanford, or an affected ISP can sue the anonymous tipster for damages) and criminal prohibitions. So should AT worry about jail or a hefty fine? Probably not, but the answer depends on who AT turns out to be.

What if AT turns out to be Maria herself? Even putting to one side whether these laws apply outside the U.S., she almost certainly would not have broken any of them. Each of these laws provides an exception or defense for consent of the communicating party or authorization of the email account owner. To take one example, under the SCA it is not illegal for the owner of an email account to access or disclose his or her email messages.

These defenses would also protect AT if he turns out, in a bizarre twist, to be Sanford himself.

For the same reasons, AT probably did not break these laws if it turns out Maria or Sanford intentionally disclosed the email messages to AT, perhaps a friend or acquaintance or employee, who then passed them on to the newspaper. This is probably true even if Maria or Sanford asked AT to promise to protect the secret. As in other parts of the law, misplaced trust is no defense under these three laws.

But now we get to more difficult cases. What if AT is a friend or acquaintance or employee of Maria or Sanford who had access to Maria’s or Sanford’s email account, but did not have specific permission to access these particular messages? For example, what if AT was Sanford’s secretary, a person likely to have permission to view his inbox? On these facts, the case against AT would turn on hard questions of authorization. Did Sanford or Maria limit AT’s authorized access to the inbox? If so, how? With written rules, technological access controls, or vague admonitions? Courts have interpreted the word “authorization” in the CFAA, in particular, quite narrowly, ruling that otherwise-authorized users may no longer act with authorization once they violate rules or contractual promises. (This is the legal theory being advanced by DOJ in the Lori Drew CFAA prosecution.)

Next, what if AT works for an ISP—perhaps on the IT staff for the State of South Carolina or for a commercial email provider? In this case, AT should worry a little more. Although ISPs tend to have many legal reasons to access the content of communications stored on their servers or passing through their wires, this authority is not unlimited, as I have written about elsewhere. The ISP employee’s liability or culpability will turn on factors like terms of service and motive. For example, if the employee stumbled upon the messages during routine server maintenance, there may be a good defense.

The Newspaper

Lastly, let’s turn to the newspaper, The State. First, if AT did not break any of these laws by obtaining or disclosing the messages, then the newspaper likewise did not break any of these laws by publishing them.

Even if AT has broken the CFAA or SCA, the newspaper probably has no downstream liability for its subsequent publication. These two laws focus on initial access or disclosure, not on subsequent, downstream uses and disclosures.

The Wiretap Act, on the other hand, restricts the downstream use and disclosure of illegally intercepted communications. Here, however, the First Amendment probably provides a defense.

In Bartnicki v. Vopper, the Supreme Court held that the First Amendment shields the media from liability for the publication of content illegally intercepted under the Wiretap Act if the content is “about a matter of public concern.” Granted, the private communications in Bartnicki—a phone call between a union negotiator and the union’s president about the status of negotiations—seem more a matter of public concern and less private than the intimate love letters between a politician and his mistress. But, I am no First Amendment expert, so I will leave it to others to decide how these facts fare under Bartnicki. To my nonexpert eye, given the sweeping language both in Bartnicki and in the cases cited by Bartnicki (starting with New York Times v. Sullivan), it seems that the First Amendment shield applies here.

Final Thought: So, Who is the Tipster?

Finally, Sanford or Maria might sue the newspaper and AT (as a so-called “John Doe” defendant) in order to discover AT’s identity. A plaintiff in a civil lawsuit can ask a judge to issue a subpoena to discover an unknown defendant’s identity. No doubt, the newspaper would fight such a subpoena vigorously, but whether it would succeed is a topic for another day.


U.S. Objects to China's Mandatory Green Dam Censorware

Yesterday, the U.S. Commerce Secretary and Trade Representative sent a letter to China’s government, objecting to China’s order, effective July 1, to require that all new PCs sold in China have preinstalled the Green Dam Youth Escort censorware program.

Here’s today’s New York Times:

Chinese officials have said that the filtering software, known as Green Dam-Youth Escort, is meant to block pornography and other “unhealthy information.”

In part, the American officials’ complaint framed this as a trade issue, objecting to the burden put on computer makers to install the software with little notice. But it also raised broader questions about whether the software would lead to more censorship of the Internet in China and restrict freedom of expression.

The Green Dam requirement puts U.S.-based PC companies, such as HP and Dell, in a tough spot: if they don’t comply they won’t be able to sell PCs in China; but if they do comply they will be censoring their customers’ Internet use and exposing customers to serious security risks.

There are at least two interesting new angles here. The first is the U.S. claim that China’s action violates free trade agreements. The U.S. has generally refrained from treating China’s Internet censorship as a trade issue, even though U.S. companies have often found themselves censored at times when competing Chinese companies were not. This unequal treatment, coupled with the Chinese government’s reported failure to define clearly which actions trigger censorship, looks like a trade barrier — but the U.S. hasn’t said much about it up to now.

The other interesting angle is the direct U.S. objection to censorship of political speech. For some time, the U.S. has tolerated China’s government blocking certain political speech in the network, via the “Great Firewall”. It’s not clear exactly how this objection is framed — the U.S. letter is not public — but news reports imply that political censorship itself, or possibly the requirement that U.S. companies participate in it, is a kind of improper trade barrier.

We’re heading toward an interesting showdown as the July 1 date approaches. Will U.S. companies ship machines with Green Dam? According to the New York Times, HP hasn’t decided, and Dell is dodging the question. The companies don’t want to lose access to the China market — but if U.S. companies participate so directly in political censorship, they would be setting a very bad precedent.


My Testimony on Behavioral Advertising: Post-Mortem

On Thursday I testified at a House hearing about online behavioral advertising. (I also submitted written testimony.)

The hearing started at 10:00 AM, gaveled to order by Congressman Rush, chair of the Subcommittee on Commerce, Trade, and Consumer Protection. He was flanked by Congressman Boucher, chair of the Subcommittee on Communications, Technology, and the Internet, and Congressmen Stearns and Radanovich, the Ranking Members (i.e., the highest-ranking Republican members) of the subcommittees.

First on the agenda we had opening statements by members of the committees. Members had either two or five minutes to speak, and the differing perspectives of the members became clear during these statements. The most colorful statement was by Congressman Barton, who supplemented his interesting on-topic statement with a brief digression about the Democrats vs. Republicans charity baseball game which was held the previous day. The Democrats won, to Congressman Barton’s chagrin.

After the opening statements, the chair recessed the hearing so the Members could go to the House floor to vote. Members of the House must be physically present in the House chamber in order to vote, so it’s not unusual for hearings to recess when there is a floor vote. The House office buildings have buzzers, not unlike the bells that mark the ends of periods in a school, which alert everybody when a vote starts. The Members left the hearing room, and we all waited for the vote(s) to end so our hearing could resume. The time was 10:45 AM.

What happened next was very unusual indeed. The House held vote after vote, more than fifty votes in total, as the day stretched on, hour after hour. They voted on amendments, on motions to reconsider the votes on the amendments, on other motions — at one point, as far as I could tell, they were voting on a motion to reconsider a decision to table an appeal of a procedural decision of the chair. To put it bluntly, the Republicans were staging a kind of work stoppage. They did this, I hear, to protest an unusual procedural limitation that the Democrats had placed on the handling of the appropriations bill that was currently before the House. I don’t know enough about the norms of House procedure to say which party had the better argument here — but I do know that the recess in our hearing lasted eight and a half hours.

These were not the most exciting eight and a half hours I have experienced. As the day stretched on, we did get a chance to wander around and do a little light tourism. Probably the highlight was when we saw Angelina Jolie in the hallway.

When we reconvened at 7:15 PM, the room, which had been overflowing with spectators in the morning, was mostly empty. The members of the committees, though, made a pretty good showing, which was especially impressive given that it was Thursday evening, when many Members hightail it back home to their districts. After a day that must have been frustrating for everybody, we finally sat down to business and had a good, substantive hearing. There were no major surprises — there rarely are at hearings — but everyone got a chance to express their views, and the members asked substantive questions.

Thinking back on the hearing, I did realize one thing that may have been missing. The panel of witnesses included three companies, Yahoo, Google, and Facebook, that are both ad services and content providers. There was less attention to situations where the ad service and the content provider are separate companies. In this latter case, the ad service does not have a direct relationship with the consumer, so the market pressure on the ad service to behave well is attenuated. (There is still some pressure, through the content provider, who wants to stay in the good graces of consumers, but an indirect link is not as effective as a direct one would be.) Yahoo, Google, and Facebook are household names, and we would naturally expect them to pay more careful attention to the desires of consumers and Congress than lower-profile ad services would.

Witnesses have the opportunity to submit further written testimony. Any suggestions on what I might discuss?


My Testimony on Behavioral Advertising

I’m testifying this morning at 10:00 AM (Eastern) at a Congressional hearing on “Behavioral Advertising: Industry Practices and Consumers’ Expectations”. It’s a joint hearing of two subcommittees of the House Committee on Energy and Commerce: the Subcommittee on Commerce, Trade, and Consumer Protection; and the Subcommittee on Communications, Technology, and the Internet.

Witnesses at the hearing are:

  • Jeffrey Chester, Executive Director, Center for Digital Democracy
  • Scott Cleland, President, Precursor LLC
  • Charles D. Curran, Executive Director, Network Advertising Initiative
  • Edward W. Felten, Professor of Computer Science and Public Affairs, Princeton University
  • Christopher M. Kelly, Chief Privacy Officer, Facebook
  • Anne Toth, Vice President of Policy, Head of Privacy, Yahoo! Inc.
  • Nicole Wong, Deputy General Counsel, Google Inc.

I submitted written testimony to the committee.

Look for a live webcast on the committee’s site.


CITP Seeking New Associate Director

In the next few days, I’ll be writing a post to announce CITP’s visiting fellows for the upcoming 2009-2010 academic year. But first, today, I want to let you know about a change in the Center’s leadership structure. After serving for two years as CITP’s first-ever Associate Director, David Robinson will be leaving us in August to begin law school at Yale. As a result, we are now launching a search for a new Associate Director.

As Associate Director, David helped oversee CITP’s growth into a larger, more mature organization, our move into a great new space in Sherrerd Hall, and two years of our busy activities calendar. He has been an integral part of the Center’s management and its research activities. He has done a fantastic job, and we’ll miss him, but we understand and support his decision to go on to law school as the next stage of his sure-to-be-stellar career. David will remain engaged with the Center’s research, and we expect to cross paths with him often in the future.

The new Associate Director will pick up where David leaves off, taking our Center to the next level in its development. The job is a fabulous opportunity to exercise leadership, vision and dedication: As a startup, we are improvising and learning while we grow, constantly looking for new and better ways to advance the policy debate and public understanding of digital technologies through both technical and policy research. Our first challenge was to get things started—now that we are established, a key priority for the new Associate Director will be building richer and deeper links and collaborations with other faculty members, policymakers, and the tech policy community generally. Here’s the official job description, soon to appear on the University’s “Jobs at Princeton” web site:

The Associate Director serves as a core organizer and evangelist for the Center, both on campus and beyond. Working with the existing Center staff, the Associate Director will develop, plan and execute the Center’s public activities, including lecture series, workshops and policy briefings; recruit visiting researchers and policy experts and coordinate the selection and appointment process; cultivate research collaborations, joint public events and other activities to build faculty engagement in the Center; coordinate interdisciplinary grant writing as appropriate; and develop and maintain the Center’s website and other published materials.

One of David’s last projects at the Center will be to coordinate the search process for his replacement. The search will continue until the position is filled: We hope to have the new Associate Director in place by the start of the school year. Applicants should provide a cover letter, CV, and contact information for three references. These materials can be sent to David (or equivalently, once the University’s jobs site has the listing, they can also be submitted through that route). David will also be happy to answer any questions about the position.


The rise of the "nanostory"

In today’s Wall Street Journal, I offer a review of Bill Wasik’s excellent new book, And Then There’s This: How Stories Live and Die in Viral Culture. Cliff’s notes version: This is a great new take on the little cultural boomlets and cryptic fads that seem to swarm all over the Internet. The author draws on his personal experience, including his creation of the still-hilarious Right Wing New York Times. Here’s a taste from the book itself—Wasik describing his decision to create the first flash mob:

It was out of the question to create a project that might last, some new institution or some great work of art, for these would take time, exact cost, require risk, even as their odds of success hovered at nearly zero. Meanwhile, the odds of creating a short-lived sensation, of attracting incredible attention for a very brief period of time, were far more promising indeed… I wanted my new project to be what someone would call “The X of the Summer” before I even contemplated exactly what X might be.


China's New Mandatory Censorware Creates Big Security Flaws

Today Scott Wolchok, Randy Yao, and Alex Halderman at the University of Michigan released a report analyzing Green Dam, the censorware program that the Chinese government just ordered installed on all new computers in China. The researchers found that Green Dam creates very serious security vulnerabilities on users’ computers.

The report starts with a summary of its findings:

The Chinese government has mandated that all PCs sold in the country must soon include a censorship program called Green Dam. This software monitors web sites visited and other activity on the computer and blocks adult content as well as politically sensitive material. We examined the Green Dam software and found that it contains serious security vulnerabilities due to programming errors. Once Green Dam is installed, any web site the user visits can exploit these problems to take control of the computer. This could allow malicious sites to steal private data, send spam, or enlist the computer in a botnet. In addition, we found vulnerabilities in the way Green Dam processes blacklist updates that could allow the software makers or others to install malicious code during the update process. We found these problems with less than 12 hours of testing, and we believe they may be only the tip of the iceberg. Green Dam makes frequent use of unsafe and outdated programming practices that likely introduce numerous other vulnerabilities. Correcting these problems will require extensive changes to the software and careful retesting. In the meantime, we recommend that users protect themselves by uninstalling Green Dam immediately.

The researchers have released a demonstration attack which will crash the browser of any Green Dam user. Another attack, for which they have not released a demonstration, allows any web page to seize control of any Green Dam user’s computer.

This is a serious blow to the Chinese government’s mandatory censorware plan. Green Dam’s insecurity is a show-stopper — no responsible PC maker will want to preinstall such dangerous software. The software can be fixed, but it will take a while to test the fix, and there is no guarantee that the next version won’t have other flaws, especially in light of the blatant errors in the current version.


On China's new, mandatory censorship software

The New York Times reports that China will start requiring censorship software on PCs. One interesting quote stands out:

Zhang Chenming, general manager of Jinhui Computer System Engineering, a company that helped create Green Dam, said worries that the software could be used to censor a broad range of content or monitor Internet use were overblown. He insisted that the software, which neutralizes programs designed to override China’s so-called Great Firewall, could simply be deleted or temporarily turned off by the user. “A parent can still use this computer to go to porn,” he said.

In this post, I’d like to consider the different capabilities that software like this could give to the Chinese authorities, without getting too much into their motives.

First, and most obviously, this software allows the authorities to filter web sites and network services, whether they originate inside or outside the Great Firewall. By operating directly on the client machine, the filter can be aware of the operations of Tor, VPNs, and other firewall-evading software, allowing connections to a given target machine to be blocked regardless of how the client tries to get there. (You can’t accomplish “surgical” Tor and VPN filtering if you’re only operating inside the network. You need to be on the end host to see where the connection is ultimately going.)
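
To make the end-host point concrete, here is a minimal, hypothetical sketch of a client-side filter, written in Python and emphatically not drawn from Green Dam’s actual code. Because it hooks the connection call on the user’s own machine, it sees the destination the application asked for before any proxy or VPN tunnel wraps it; a filter sitting out in the network would see only the first hop. The blocklist entries are invented for illustration.

```python
# Hypothetical sketch of a client-side filter (not Green Dam's actual code).
# Running on the end host, it sees the destination the application requested
# before any Tor circuit or VPN tunnel hides it from the network.
import socket

BLOCKLIST = {"blocked.example.com", "another-blocked.example.org"}  # made-up names

_original_create_connection = socket.create_connection

def filtered_create_connection(address, *args, **kwargs):
    host, _port = address
    # The intended destination is visible in the clear here, even if the
    # packets will later be wrapped in an encrypted tunnel.
    if host in BLOCKLIST:
        raise ConnectionRefusedError(f"blocked by client-side filter: {host}")
    return _original_create_connection(address, *args, **kwargs)

# Install the hook: libraries that call socket.create_connection
# (urllib, http.client, and so on) now pass through the filter first.
socket.create_connection = filtered_create_connection
```

A real implementation would hook far lower in the operating system (and resist removal), but the architectural point is the same: the end host is the one place where the ultimate destination is always visible.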

Software like this can do far more, since it can presumably be updated remotely to support any feature desired by the government authorities. This could be the ultimate “Big Brother Inside” feature. Not only can the authorities observe behavior or scan files within one given computer, but every computer now becomes a launching point for investigating other machines reachable over a local area network. If one such machine were connected, for example, to a private home network, behind a security firewall, the government software could still scan every other computer on the same private network, log every packet, and so forth. Would you be willing to give your friends the password to log into your private wireless network, knowing their machine might be running this software?

Perhaps less ominously, software like this could also be used to force users to install security patches, to uninstall zombie/botnet systems, and to perform other sorts of remote systems administration. I can’t imagine the difficulty in trying to run the Central Government Bureau of National Systems Administration (would they have a phone number you could call to complain when your computer isn’t working, and could they fix it remotely?), but the technological base is now there.

Of course, anybody who owns their own computer will be able to circumvent this software. If you control your machine, you can control what’s running on it. Maybe you can pretend to be running the software, maybe not. That would turn into a technological arms race which the authorities would ultimately fail to win, though they might succeed in creating enough fear, uncertainty, and doubt to deter would-be circumventors.

This software will also have a notable impact on Internet cafes, schools, and other sorts of “public” computing resources, which are exactly the sorts of places that people might go when they want to hide their identity, and where the authorities could conduct physical audits to check for compliance.

Big Brother is watching.


Internet Voting: How Far Can We Go Safely?

Yesterday I chaired an interesting panel on Internet Voting at CFP. Participants included Amy Bjelland and Craig Stender (State of Arizona), Susan Dzieduszycka-Suinat (Overseas Vote Foundation), Avi Rubin (Johns Hopkins), and Alec Yasinsac (Univ. of South Alabama). Thanks to David Bruggeman and Cameron Wilson at USACM for setting up the panel.

Nobody advocated a full-on web voting system that would allow voting from any web browser. Instead, the emphasis was on more modest steps, aimed specifically at overseas voters. Overseas voters are a good target population, because there aren’t too many of them — making experimentation less risky — and because vote-by-mail serves them poorly.

Discussion focused on two types of systems: voting kiosks, and Internet transmission of absentee ballots.

A voting kiosk is a computer-based system, running carefully configured software, that is set up in a securable location overseas. Voters come to this location, authenticate themselves, and vote just as they would in a polling place back home. A good kiosk system keeps an electronic record, which is transmitted securely across the Internet to voting officials in the voter’s home jurisdiction. It also keeps a paper record, verifiable by the voter, which is sent back to voting officials after the elections, enabling a post-election audit. A kiosk can use optical-scan technology or it can be a touch-screen machine with a paper trail — essentially it’s a standard voting system with a paper trail, connected to home across the Internet. If the engineering is done right, if the home system that receives the electronic ballots is walled off from the central vote-tabulating system, and if appropriate post-election auditing is done, this system can be secure enough to use. All of the panelists agreed that this type of system is worth trying, at least as a pilot test.
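
The post-election audit is what makes the rest of that design trustworthy, so it is worth spelling out what the check amounts to. Here is a tiny conceptual sketch, in Python and not drawn from any real kiosk system: tally the electronically transmitted records, tally the paper records shipped back from the kiosk, and flag any contest where the two trails disagree.

```python
# Conceptual sketch of a post-election audit (not any real system's code):
# compare the electronically transmitted records against the paper records
# returned from the kiosk, and flag any discrepancy for investigation.
from collections import Counter

def audit(electronic_records, paper_records):
    """Each record is a (contest, choice) pair; the two tallies should match exactly."""
    e_tally = Counter(electronic_records)
    p_tally = Counter(paper_records)
    return {
        key: (e_tally.get(key, 0), p_tally.get(key, 0))
        for key in set(e_tally) | set(p_tally)
        if e_tally.get(key, 0) != p_tally.get(key, 0)
    }  # an empty dict means the electronic and paper trails agree

# Example: one electronic record has no matching paper record.
print(audit([("Governor", "A"), ("Governor", "B")], [("Governor", "A")]))
# -> {('Governor', 'B'): (1, 0)}
```

An empty result does not by itself prove the election was honest, but any mismatch is a concrete signal that the kiosk, the transmission, or the handling of the paper records needs investigation.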

The other approach is to use ordinary absentee ballots, but to distribute them and allow voters to return them online. A voter goes to a web site and downloads a file containing an absentee ballot and a cover sheet. After printing out the file, the voter fills out the cover sheet (giving his name and other information) and the ballot. He scans the cover sheet and ballot, and uploads the scan to a web site. Election officials collect and print the resulting file, and treat the printout like an ordinary absentee ballot.

Kevin Poulsen and Eric Rescorla criticize the security of this system, and for good reason. Internet distribution of blank ballots can be secure enough, if done very carefully, but returning filled-out ballots from an ordinary computer and browser is risky. Eric summarizes the risks:

We have integrity issues here as well: as Poulsen suggests (and quotes Rubin as suggesting), there are a number of ways for things to go wrong here: an attacker could subvert your computer and have it modify the ballots before sending them; you could get phished and the phisher could modify your ballot appropriately before passing it on to the central site. Finally, the attacker could subvert the central server and modify the ballots before they are printed out.

Despite the risks, systems of this sort are moving forward in various places. Arizona has one, which Amy and Craig demonstrated for the panel’s audience, and the Overseas Vote Foundation has one as well.

Why is this less-secure alternative getting more traction than kiosk-based systems? Partly it’s due to the convenience of being able to vote from anywhere (with a Net connection) instead of having to visit a kiosk location. That’s understandable. But another part of the reason seems to be that people don’t realize what can go wrong, and how often things actually do go wrong, in online interactions.

In the end, there was a lot of agreement among the panelists — a rare occurrence in public e-voting discussions — but disagreement remained about how far we can go safely. For overseas voters at least, the gap between what is convenient and what can be made safe is smaller than it is elsewhere, but that gap does still exist.