December 22, 2024

"Stolen" LinkedIn Profiles and the Misappropriation of Ideas

The common law tort of “hot news” misappropriation has been dying a slow and justified death. Hot news misappropriation is the legal doctrine on which news outlets like the Associated Press have repeatedly relied over the years to try to prevent third-party dissemination of factual information gathered at the outlets’ expense. Last June, the Second Circuit Court of Appeals dealt a blow to the hot news doctrine when it held that financial firms engaged in producing research reports and recommendations concerning publicly traded securities could not prevent a third party website from publishing news of the recommendations soon after their initial release. The rationale for the court’s decision was that state law claims of hot news misappropriation can only very rarely survive federal preemption by the Copyright Act, which excludes facts from the scope of copyright protection. The rule that facts are not eligible for copyright (called the fact-expression dichotomy) is at the heart of the copyright system and serves the interests of democracy by promoting the unfettered dissemination of important news to the populace. Creative arrangements of facts can be protected under copyright law, but individual facts cannot.

Given the declining fortunes of the hot news doctrine, I was a little surprised to discover a recent case out of Pennsylvania called Eagle v. Morgan, in which the parties are fighting over ownership of a LinkedIn account containing the plaintiff’s profile and her professional connections. The defendant, Eagle’s former employer, asserted a state law counterclaim for misappropriation of ideas. Ideas, as it happens, are—like facts—excluded from the scope of federal copyright protection for a compelling policy reason: If we permit the monopolization of ideas themselves, we will stifle the communal intellectual progress that intellectual property laws exist to promote. Copyright law thus protects only the expression of ideas, not ideas themselves. (This principle is known as the idea-expression dichotomy.) Accordingly, section 102(b) of the Copyright Act denies copyright protection “to any idea, procedure, process, system, method of operation, concept, principle, or discovery, regardless of the form in which it is described, explained, illustrated, or embodied.” The statute really could not be clearer.

In its opinion denying Eagle’s motion for judgment on the pleadings, the trial court did not consider whether the state law tort of misappropriation of ideas is federally preempted by the Copyright Act, which seems to me to be a really important legal question. The court explained that a claim for misappropriation of an idea in Pennsylvania has two elements: “(1) the plaintiff had an idea that was novel and concrete and (2) the idea was misappropriated by the defendant.” To determine whether a misappropriation has occurred, the court further explained, Pennsylvania law requires consideration of three factors:

(1) the plaintiff “has made substantial investment of time, effort, and money into creating the thing misappropriated such that the court can characterize the ‘thing’ as a kind of property right,” (2) the defendant “has appropriated the ‘thing’ at little or no cost such that the court can characterize the defendant’s actions as ‘reaping where it has not sown,’” and (3) the defendant “has injured the plaintiff by the misappropriation.”

Setting aside the oddity of classifying digital information as a “thing,” the first of these factors collides head on with the Supreme Court’s clear repudiation in Feist Publications v. Rural Telephone Service of the “sweat of the brow” theory of intellectual property.

In Feist, the Court held that “sweat of the brow” as a justification for propertizing information “eschew[s] the most fundamental axiom of copyright law—that no one may copyright facts or ideas.” Given copyright law’s express prohibition on the propertization of ideas, there is a strong case to be made that state law claims for misappropriation of ideas are in direct conflict with both the letter and spirit of the federal copyright scheme. On that basis, they are akin to claims of hot news misappropriation, and they should likewise be treated as preempted.

Stopping SOPA's Anticircumvention

The House’s Stop Online Piracy Act is in Judiciary Committee markup today. As numerous protests, open letters, and advocacy campaigns across the Web have made clear, this is a seriously flawed bill. Sen. Ron Wyden and Rep. Darrell Issa’s proposed OPEN Act highlights, by contrast, some of the bill’s procedural problems.

Here, I analyze just one of SOPA’s problematic provisions: a new “anticircumvention” provision (distinct from the still-problematic anticircumvention rules of DMCA section 1201). SOPA’s anticircumvention provision authorizes injunctions against those who provide tools to bypass court-ordered blocking of domains. Although it is apparently aimed at MAFIAAfire, the Firefox add-on that offered redirection for seized domains in the wake of the ICE domain seizures, the provision as drafted sweeps much more broadly. Ordinary security and connectivity tools could fall within its scope. If enacted, it would weaken Internet security and reduce the robustness and resilience of Internet connections.

The anticircumvention section, which has no counterpart in the Senate’s companion PROTECT IP measure, provides for injunctions, in an action brought by the Attorney General:

(ii) against any entity that knowingly and willfully provides or offers to provide a product or service designed or marketed by such entity or by another in concert with such entity for the circumvention or bypassing of measures described in paragraph (2) [blocking DNS responses, search query results, payments, or ads] and taken in response to a court order issued under this subsection, to enjoin such entity from interfering with the order by continuing to provide or offer to provide such product or service. § 102(c)(3)(A)(ii)

As an initial problem, the section is unclear. Could it cover someone who designs a tool for “the circumvention or bypassing of” DNS blockages in general, even if that person did not specifically intend or market the tool to be used to frustrate court orders issued under SOPA? Resilience in the face of technological failure is a fundamental software design goal. As DNS experts Steve Crocker et al. say in their Dec. 9 letter to the House and Senate Judiciary Chairs, “a secure application expecting a secure DNS answer will not give up after a timeout. It might retry the lookup, it might try a backup DNS server, it might even restart the lookup through a proxy service.” Would the providers of software that looked to a proxy for answers (products “designed” to be resilient to transient DNS lookup failures) be subject to injunction? Where the answer is unclear, developers might choose not to offer such lawful features rather than risk legal attack. Indeed, the statute as drafted might chill the development of anti-censorship tools funded by our State Department.
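To make the resilience point concrete, here is a minimal sketch (in Python, using the third-party dnspython package) of the kind of fallback behavior Crocker et al. describe: retry the lookup on failure, then try a backup resolver. The resolver addresses are placeholders chosen for illustration, not a description of any particular product.

    import dns.exception
    import dns.resolver

    # Placeholder resolver addresses, for illustration only.
    PRIMARY_RESOLVER = "192.0.2.53"
    BACKUP_RESOLVER = "198.51.100.53"

    def resilient_lookup(name, retries=2):
        """Retry the primary resolver, then fall back to a backup resolver."""
        for nameserver in [PRIMARY_RESOLVER] * retries + [BACKUP_RESOLVER]:
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [nameserver]
            try:
                answer = resolver.resolve(name, "A", lifetime=2.0)
                return [record.to_text() for record in answer]
            except dns.exception.DNSException:
                continue  # timeout or lookup failure: try the next option
        return None  # every attempt failed

    print(resilient_lookup("example.com"))

Nothing in this logic knows, or could know, why a lookup failed; the fallback fires whether the cause is a flaky network, a misconfigured resolver, or a court-ordered block.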

Some such tools are explicitly designed to circumvent censorship in repressive regimes whose authorities engage in DNS manipulation to prevent citizens from accessing sites with dissident messages, alternate sources of news, or human rights reporting. (See Rebecca MacKinnon’s NYT Op-Ed, Stop the Great Firewall of America. Censorship-circumvention tools include Psiphon, which describes itself as an “Open source web proxy designed to help Internet users affected by Internet censorship securely bypass content-filtering systems,” and The Tor Project.) These tools cannot distinguish Chinese censorship of Tiananmen Square mentions from U.S. copyright enforcement when both have the same impact (blocking access to Web content) and the same method (local blocking of domain resolution).

Finally, the paragraph may encompass mere knowledge transfer. Does telling someone about alternate DNS resolvers, or noting that a blocked domain can still be found at its IP address (a matter of historical record, and necessary to third-party evaluation of the claims against that site), constitute willfully “providing a service designed … [for] bypassing” DNS-blocking? Archives of historic DNS information are often important to legal and technical network investigations, but they might become scarce if providers had to ascertain the reasons their information was being sought.
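To illustrate how thin that knowledge is: once a site’s IP address is known (from historical DNS records, say), a client can reach the site without any DNS lookup at all, simply by connecting to the address and supplying the expected Host header. The sketch below is hypothetical; the address and hostname are placeholders, not real sites.

    import http.client

    # Hypothetical values: a historically recorded IP address and the blocked
    # domain it used to serve. Neither refers to a real site.
    RECORDED_IP = "203.0.113.10"
    BLOCKED_HOST = "blocked-example.com"

    # Connect by IP address, bypassing DNS resolution entirely, and tell the
    # server which virtual host we want via the Host header.
    conn = http.client.HTTPConnection(RECORDED_IP, 80, timeout=5)
    conn.request("GET", "/", headers={"Host": BLOCKED_HOST})
    response = conn.getresponse()
    print(response.status, response.reason)

The point is not that this is sophisticated; it is that the “service” the provision might reach can amount to little more than stating a fact.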

For these reasons among many others (such as those identified by my ISP colleague Nick), SOPA should be stopped.

What are the Constitutional Limits on Online Tracking Regulations?

As the conceptual contours of Do Not Track are being worked out, an interesting question to consider is whether such a regulation—if promulgated—would survive a First Amendment challenge. Could Do Not Track be an unconstitutional restriction on the commercial speech of online tracking entities? The answer would of course depend on what restrictions a potential regulation would specify. However, it may also depend heavily on the outcome of a case currently in front of the Supreme Court—Sorrell v. IMS Health Inc.—that challenges the constitutionality of a Vermont medical privacy law.

The privacy law at issue prohibits pharmacies from selling prescription drug records to data-mining companies for marketing purposes without the prescribing doctor’s consent. These drug records each contain extensive details about the doctor-patient relationship, including “the prescriber’s name and address, the name, dosage and quantity of the drug, the date and place the prescription is filled and the patient’s age and gender.” A doctor’s prescription record can be tracked very accurately over time, and while patient names are redacted, each patient is assigned a unique identifier, so their prescription histories may also be tracked. Pharmacies have been selling these records to commercial data miners, who in turn aggregate the data and sell compilations to pharmaceutical companies, which then engage in direct marketing back to individual doctors using a practice known as “detailing.” Sound familiar yet? It’s essentially brick-and-mortar behavioral advertising, and a Do Not Track choice mechanism, for prescription drugs.

The Second Circuit recently struck down the Vermont law on First Amendment grounds, ruling first that the law is a regulation of commercial speech and second that its restrictions fall on the wrong side of the Central Hudson test, the four-step analysis used to determine the constitutionality of commercial speech restrictions. This ruling clashes directly with two earlier First Circuit decisions, Ayotte and Mills, which upheld similar medical privacy laws in New Hampshire and Maine. The Supreme Court therefore agreed in January to take the case and resolve the split, and oral argument is set for April 26th.

I’m not a lawyer, but it seems like the outcome of Sorrell could have a wide-ranging impact on current and future information privacy laws, including possible Do Not Track regulations. Indeed, the petitioners recognize the potentially broad implications of their case. From the petition:

“Information technology has created new and unprecedented opportunities for data mining companies to obtain, monitor, transfer, and use personal information. Indeed, one of the defining traits of the so-called “Information Age” is this ability to amass information about individuals. Computers have made the flow of data concerning everything from personal purchasing habits to real estate records easier to collect than ever before.”

One central question in the case is whether a restriction on access to these data for marketing purposes is a restriction on legitimate commercial speech. The Second Circuit believes it is, reasoning that even “dry information” sold for profit—and already in the hands of a private actor—is entitled to First Amendment protection. In contrast, the First Circuit in Ayotte posited that the information being exchanged has “itself become a commodity,” not unlike beef jerky, so such restrictions are only a limitation on commercial conduct—not speech—and therefore do not implicate any First Amendment concerns.

A major factual difference here, as compared to online privacy and tracking, is that pharmacies are required by many state and federal laws to collect and maintain prescription drug records, so there may be more compelling reasons for the state to restrict access to this information.

In the case of online privacy, it could be argued that Internet users are voluntarily supplying information to the tracking servers, even though many users probably neither intend nor expect this to occur. Judge Livingston, dissenting from the Second Circuit’s ruling in Sorrell, notes that different considerations apply where the government is “prohibiting a speaker from conveying information that the speaker already possesses,” distinguishing that from situations where the government restricts access to the information itself. Applying this to online communications, at what point does the server “possess” the user’s data: when the packets are received and sitting in a buffer, or when the packets are reassembled and the data permanently stored? Is there a constitutional difference between restrictions on collection and restrictions on use? The Supreme Court stated in Zemel v. Rusk (1965) that “the right to speak and publish does not carry with it the unrestrained right to gather information.” To what extent does this apply to government restrictions on online tracking?

The constitutionality of state and federal information privacy laws has historically and consistently been called into question, and things would be no different if (and it’s a big if) Congress grants the FTC authority over online tracking. When considering technical standards and what “tracking” means, it’s worth keeping in mind the possible constitutional challenges insofar as state action may be involved, as some desirable options for curbing online tracking may only be possible within a voluntary or self-regulatory framework. Where that line is drawn will depend on how the Supreme Court comes down in Sorrell and how broadly it decides the case.

A Good Day for Email Privacy: A Court Takes Back its Earlier, Bad Ruling in Rehberg v. Paulk

In March, the U.S. Court of Appeals for the Eleventh Circuit, the court that sets federal law for Alabama, Florida, and Georgia, ruled in an opinion in a case called Rehberg v. Paulk that people lacked a reasonable expectation of privacy in the content of email messages stored with an email provider. This meant that the police in those three states were free to ignore the Fourth Amendment when obtaining email messages from a provider. In this case, the plaintiff alleged that the District Attorney had used a sham subpoena to trick a provider to hand over the plaintiff’s email messages. The Court ruled that the DA was allowed to do this, consistent with the Constitution.

I am happy to report that today, the Court vacated the opinion and replaced it with a much more carefully reasoned, nuanced opinion.

Most importantly, the Eleventh Circuit no longer holds that “A person also loses a reasonable expectation of privacy in emails, at least after the email is sent to and received by a third party,” nor that “Rehberg’s voluntary delivery of emails to third parties constituted a voluntary relinquishment of the right to privacy in that information.” These bad statements of law have effectively been erased from the case reporters.

This is a great victory for Internet privacy, although it could have been even better. The Court no longer strips email messages of protection, but it didn’t go further and affirmatively hold that email users possess a Fourth Amendment right to privacy in email. Instead, the Court ruled that even if such a right exists, it wasn’t “clearly established” at the time the District Attorney acted, which means the plaintiff can’t continue to pursue this claim.

I am personally invested in this case because I authored a brief asking the Court to reverse its earlier bad ruling. I am glad the Court agreed with us and thank all of the other law professors who signed the brief: Susan Brenner, Susan Freiwald, Stephen Henderson, Jennifer Lynch, Deirdre Mulligan, Joel Reidenberg, Jason Schultz, Chris Slobogin, and Dan Solove. Thanks also to my incredibly hard-working and talented research assistants, Nicole Freiss and Devin Looijien.

Updated: The EFF (which represents the plaintiff) is much more disappointed in the amended opinion than I. They make a lot of good points, but I prefer to see the glass half-full.

Anonymization FAIL! Privacy Law FAIL!

I have uploaded my latest draft article, entitled Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization, to SSRN (look carefully for the download button, just above the title; it’s a little buried). From the abstract:

Computer scientists have recently undermined our faith in the privacy-protecting power of anonymization, the name for techniques for protecting the privacy of individuals in large databases by deleting information like names and social security numbers. These scientists have demonstrated they can often “reidentify” or “deanonymize” individuals hidden in anonymized data with astonishing ease. By understanding this research, we will realize we have made a mistake, labored beneath a fundamental misunderstanding, which has assured us much less privacy than we have assumed. This mistake pervades nearly every information privacy law, regulation, and debate, yet regulators and legal scholars have paid it scant attention. We must respond to the surprising failure of anonymization, and this Article provides the tools to do so.
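To give a non-expert flavor of the reidentification research the abstract refers to, consider the classic “linkage” style of attack: an “anonymized” record is matched against an identified auxiliary dataset (a voter roll, say) on quasi-identifiers such as ZIP code, birth date, and sex. The Python sketch below uses invented data purely for illustration and is not drawn from the article.

    # Invented data, for illustration only.
    anonymized_records = [
        {"zip": "08540", "birthdate": "1970-07-31", "sex": "F", "diagnosis": "asthma"},
    ]
    voter_roll = [  # a public, identified dataset
        {"name": "Jane Doe", "zip": "08540", "birthdate": "1970-07-31", "sex": "F"},
        {"name": "John Roe", "zip": "08544", "birthdate": "1982-01-15", "sex": "M"},
    ]

    QUASI_IDENTIFIERS = ("zip", "birthdate", "sex")

    for record in anonymized_records:
        matches = [person for person in voter_roll
                   if all(person[k] == record[k] for k in QUASI_IDENTIFIERS)]
        if len(matches) == 1:  # a unique match reidentifies the "anonymous" record
            print(matches[0]["name"], "->", record["diagnosis"])

Deleting names and Social Security numbers leaves the quasi-identifiers untouched, which is why scrubbing the obvious fields buys less privacy than many laws assume.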

I have labored over this article for a long time, and I am very happy to finally share it publicly. Over the next week or so, I will write a few blog posts here summarizing the article’s high points and perhaps expanding on what I couldn’t get to in a mere 28,000 words.

Thanks to Ed, David, and everybody else at Princeton’s CITP for helping me develop this article during my visit earlier this year.

Please let me know what you think, either in these comments or by direct email.