
Archives for 2016

Disrupting The Business Model of the Fake News Industry

By Katherine Haenschen & Paul Ellenbogen 

In the aftermath of the 2016 election, researchers and media professionals alike seized on the vast proliferation of so-called “Fake News” on Facebook as a cause for concern. An informed citizenry is a necessary condition for democracy, so it is far from ideal to have millions of people consuming intentionally misleading information masquerading as hard news. Now that Facebook has admitted that it has a problem with Fake News, Mark Zuckerberg and Co. need to do even more to prevent its spread on the platform. We propose one solution: Facebook should block advertising links to Fake News websites and Fake News pages on the Facebook platform itself.

When we talk about Fake News, we’re referring to websites that intentionally and knowingly publish factually untrue content intended to masquerade as traditional “hard news.”  Individuals may choose to publish Fake News for political reasons, such as seeking to impact voting decisions. However, there is also a profit motive behind Fake News: publishers can make big money from advertising revenue that results from traffic to their sites. The Fake News business model utilizes Facebook’s paid features to gain readers and build audiences. Facebook offers a variety of advertising options for individuals looking to reach its 156 million users in the United States, including newsfeed and sidebar ads and promoted posts from pages. It is Facebook’s advertising features that should be rendered unavailable for the paid promotion of Fake News to users.

There is precedent for calling on Facebook to block Fake News from being advertised directly to its users: Facebook already bans certain kinds of ads on the platform, such as those promoting dietary supplements and “controversial content.” Additionally, Facebook announced that it will stop placing ads on third-party Fake News websites. Now, we are calling on Facebook to ban Fake News from being advertised and promoted on the Facebook platform itself. Facebook should apply this type of ban even if doing so hurts its own revenue.

There’s Big Money In Fake News

Media stories about Fake News producers emphasize the tremendous profits to be made in publishing knowingly false information and helping it to “go viral” on Facebook. Display ad revenue generated by Fake News sites can reach $10,000 to $30,000 per month. One Macedonian teen who publishes Fake News sites told BuzzFeed that advertising revenue could reach thousands of dollars per day or week. And though Google and Facebook have blocked the sites from their third-party advertising platforms, the Fake News publishers also note that there is no shortage of advertising networks willing to display ads on their websites. The reach of these articles amounts to millions of Facebook shares and clicks to the website, in turn generating millions of ad impressions, according to BuzzFeed.

The same articles about Fake News sites indicate that publishers are not bothered by the potential impact of sharing incorrect information. A Macedonian teen who operated multiple sites admitted that his content was “bad, false, and misleading,” and that he was motivated by the advertising revenue generated by his Fake News site. Therefore, if Facebook wants to curtail the proliferation of Fake News on its website, it should disrupt its business model using tools that are already at the platform’s disposal.

The Fake News Business Model

Multiple news articles have referenced Fake News producers’ use of Facebook advertising features to promulgate their posts. Here’s how the business model for Fake News works, with each step in the process illustrated by the diagram below:

  1. An individual publishes false information on a Fake News website, then pays to advertise a link to the post in Facebook users’ newsfeeds.
  2. Facebook profits from advertising on its platform, earning money for every person who clicks the link or every 1,000 users who see the ads.
  3. Facebook users click on the advertised links and go to the Fake News website, generating an impression for each display ad on the website.
  4. The Fake News site earns revenue from the resulting advertising impressions, which amount to millions of page views and tens of thousands of dollars per month.

Publishers must have a Facebook page to run newsfeed ads. As such, an ancillary cycle exists in which Fake News publishers can promote this page to gain fans and organic traffic.

  1. Fake News producers advertise their Page to fans, growing an organic Facebook audience to whom they can share links at no cost.
  2. Fans can share these links to their own Facebook networks, furthering the organic reach of Fake News. This is how something “goes viral.”

As long as the cost of the Facebook ads that promote the posts is lower than the display ad revenue from the resulting clicks, the business model above will generate net income for the Fake News producer.
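As a back-of-the-envelope illustration of that condition, the arbitrage can be sketched in a few lines of Python. All of the numbers and names here are hypothetical, chosen only to make the arithmetic concrete; real click costs and display ad rates vary widely.

```python
# Hypothetical sketch of the Fake News ad-arbitrage model described above.
# Every figure below is made up for illustration.

def net_income(clicks, cost_per_click, ads_per_page, revenue_per_impression):
    """Net income = display ad revenue from the resulting visits
    minus what the publisher pays Facebook for the clicks."""
    facebook_ad_cost = clicks * cost_per_click            # spent promoting the link
    impressions = clicks * ads_per_page                   # each visit renders several display ads
    display_revenue = impressions * revenue_per_impression
    return display_revenue - facebook_ad_cost

# e.g. 100,000 clicks bought at $0.02 each, 5 display ads per page,
# earning half a cent per impression
profit = net_income(100_000, 0.02, 5, 0.005)
print(f"${profit:,.2f}")  # positive means the cycle is self-sustaining
```

Whenever the per-click revenue (ads per page times revenue per impression) exceeds the per-click cost, the cycle pays for itself and can be scaled up, which is exactly why cutting off the paid promotion step matters.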

Cut Off Paid Features for Fake News

Our solution is simple: Facebook needs to deny the use of paid features by pages that promote Fake News. This means Fake News pages should not be able to run newsfeed and sidebar ads, promote page posts, or market their Facebook page to gain fans. Furthermore, Facebook should block any third-party attempts to advertise links to Fake News sites. Currently, any individual with a Facebook account can create a public page and use it to run ads for Fake News stories in Facebook users’ newsfeeds. Banning all advertising links to Fake News sites would prevent publishers from setting up new and deceptive Facebook pages for the purpose of advertising.

Facebook has already taken action to limit its role in directly funding Fake News. The platform cut off advertising on — but not leading to — Fake News websites, as has Google’s AdSense network. However, the Facebook advertising platform can still be used to drive traffic to these sites and fuel the cycle detailed above. Even if it bans advertisements with outbound links to Fake News sites, Facebook will still need to grapple with the size of Fake News pages, some of which surpass 700,000 fans and have tremendous potential for organic reach. Our purpose here is not to weigh in on how those pages should be handled, but simply to point out a straightforward step Facebook can take that is consistent with its existing ban on ad placement on third-party Fake News sites.

Facebook Already Bans Certain Advertisers

Furthermore, a ban on the use of Facebook’s advertising features by Fake News sites would be in keeping with existing rules pertaining to the kinds of advertisers that can use the platform to reach users. For example, Facebook restricts the advertising of unsafe supplements at its “sole discretion,” such as various diet aids and performance-enhancing substances. Other prohibited content includes “controversial content,” which is defined as “…content that exploits controversial political or social issues for commercial purposes.”

Given that Fake News producers are open about their profit motivations, their use of Facebook advertisements to drive traffic should be considered a commercial purpose rather than a political purpose. As such, Facebook should use its existing rules to draw a line between political content and commercial content. If it fails to do so, unscrupulous individuals could start dressing up their questionable advertisements as political speech — Donald Trump Diet Pills, anyone? They’ll make your waistline great again!

Distinguishing between Fake News and news from reputable outlets is something that Facebook is already committed to doing now that it has pledged to pull advertisements from Fake News sites in the Facebook Audience Network program. Facebook could use the same criteria that it uses in the Audience Network on its advertising platform. While we are not proposing a heuristic to determine what is Fake News and what is merely an opinion piece devoid of factual content, we suggest that Facebook apply the same rules for banned third-party sites to advertisements on the platform for those very same sites.

Fight Fake News, Or Else Everyone Gets Played

It isn’t clear how Facebook’s long-term interests are served by enabling Fake News to market to its users, essentially creating a back door around its own advertising policies. Facebook makes money from advertisements for Fake News, but in the long term it may come to hurt Facebook, with suspicion and lost goodwill outweighing earnings from this category of advertisements. If Facebook chooses to regulate Fake News as political speech, Zuckerberg et al. are setting themselves up to be useful idiots for websites trying to make a quick buck off sensationalist and false stories.

As for the users, they are being intentionally misled with incorrect articles about political actors, which have the potential to impact issue awareness and candidate choice. At worst, people are basing their vote on misinformation-for-profit. At best, users may be getting quick entertainment out of these links (if they recognize them as false), but for the most part it seems like the Fake News operators are getting the benefit of the arrangement. Removing paid advertisements for these sites from users’ Facebook newsfeeds is not going to negatively impact their lives. Furthermore, these individuals remain free to like the Facebook pages for Fake News sites and share their posts organically with friends.

We are merely proposing that Facebook cut off the use of its paid features to promote links to Fake News to wider audiences, in accordance with its existing advertising policies. Advertisements for Fake News should be regulated like ads for “controversial content” and dietary supplements. This would cut off one stream of revenue for these Fake News websites, forcing them to gain traffic from Facebook entirely through organic reach. Failure to ban this type of advertising would suggest that Facebook values its own revenue over the need to curtail bad actors who are using its platform to intentionally spread misinformation harmful to our democratic society.

Announcing the Open Review Toolkit

I’m happy to announce the release of the Open Review Toolkit, open source software that enables you to convert your book manuscript into a website that can be used for Open Review. During the Open Review process everyone can read and annotate your manuscript, and you can collect valuable data to help launch your book. The goals of the Open Review process are better books, higher sales, and increased access to knowledge. In an earlier post, I described some of the helpful feedback that I’ve received during the Open Review of my book Bit by Bit: Social Research in the Digital Age.  Now, in this post I’ll describe more about the Open Review Toolkit—which has been generously supported by a grant from the Alfred P. Sloan Foundation—and how you can use it for your book.

As described on the project’s website, the Open Review Toolkit is a set of open source scripts that you can download and use to convert your manuscript to an Open Review website. One way to think about it is that the Open Review Toolkit is the plumbing that ties together four outstanding projects: Hypothes.is, Pandoc, Google Analytics, and Google Forms. Full technical details and all the code are available from the Open Review Toolkit GitHub repository, but here’s an overview.

The build process that converts a manuscript into an Open Review website is codified in a single Makefile and has three primary steps:

  1. Pandoc converts the book manuscript into a single HTML file.
  2. A set of custom scripts enrich the single HTML (e.g., with richer information about each citation) and then split the single HTML file into a bunch of different HTML files, one for each section of the book.
  3. Middleman uses those HTML files and some custom templates to create the Open Review website, which is a static HTML website.

Step 1

Pandoc converts the book manuscript into a single HTML file. Currently, the only supported input format is Markdown, so at this time your manuscript must be written in Markdown. However, Pandoc supports a variety of formats as inputs, and in the future we hope to add support for additional input formats, such as LaTeX and Word. If you’d like to help build support for additional input formats, please get in touch.

Step 2

The custom scripts enrich and split the HTML output from Pandoc. First, an enrichment script adds information to each citation. In the future, additional enrichments could also be added at this step. Next, the splitting script splits the single HTML file into one file for each section of the book. These sections are then placed in a directory structure that reflects the hierarchy of the sections in the manuscript. The splitting script also creates a JSON file with metadata about the manuscript structure. This JSON metadata file allows the Middleman build process to create things such as the table of contents and previous/next page links between sections.
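To make the splitting step concrete, here is a minimal sketch of what such a script might do. This is not the toolkit’s actual code (which lives in the GitHub repository and uses Nokogiri); the function name, heading-based splitting rule, and metadata fields are all assumptions made for illustration.

```python
# Hypothetical sketch of a splitting step: break one HTML string into
# per-section chunks at <h1>/<h2> headings and emit JSON metadata that a
# later build step could use for a table of contents and prev/next links.
import json
import re

def split_sections(html):
    """Return (list of section HTML chunks, JSON metadata string)."""
    pattern = re.compile(r'<h([12])[^>]*>(.*?)</h\1>', re.DOTALL)
    matches = list(pattern.finditer(html))
    sections, metadata = [], []
    for i, m in enumerate(matches):
        start = m.start()
        # A section runs from its heading to the next heading (or end of file).
        end = matches[i + 1].start() if i + 1 < len(matches) else len(html)
        sections.append(html[start:end])
        metadata.append({"level": int(m.group(1)), "title": m.group(2).strip()})
    return sections, json.dumps(metadata)

doc = "<h1>Intro</h1><p>Hi.</p><h2>Background</h2><p>More.</p>"
chunks, meta = split_sections(doc)
```

The key design point the sketch illustrates is that the structural metadata is computed once, at split time, so the site generator never has to re-parse the manuscript HTML to build navigation.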

Step 3

Middleman builds the Open Review website, which is a static HTML website. The Middleman project lives inside the website/ directory. This project is pre-populated with existing layouts that include Google Analytics, Hypothes.is, and navigational elements for the site. This is also where pages that are part of the Open Review website but are not part of the manuscript reside (e.g., an About page). The HTML files from step 2 are used as the primary content for each book page on the site. These HTML files should not be manually modified as they will be overwritten the next time the site is built.

This entire build process takes place inside of a virtual machine we created that comes pre-installed with all the open-source software that you will need. By using this virtual machine, we hope to ensure that the Open Review Toolkit will work right the first time no matter what operating system you are using.

Once those three steps are complete, you have a set of static HTML files that you can host anywhere you want (for my book, we are using GitHub Pages). On the Open Review Toolkit website, I also describe additional features of the Open Review websites.

We’ve tried to make it as easy as possible to convert your manuscript into a modern and functional Open Review website. All of our code is open source, but if you’d like to hire a developer to help you do the conversion, the Open Review Toolkit has a recommended list of Preferred Partners.

The Open Review Toolkit, which was inspired by earlier innovations in academic publishing, would not have been possible without the help of many people. I would like to thank the folks at the Agathon Group, particularly Luke Baker (coding) and Paul Yuen (design), who built the Open Review website for my book Bit by Bit: Social Research in the Digital Age. The Open Review Toolkit grew out of that initial code and design. I would also like to thank Meagan Levinson and Princeton University Press for their support during the first Open Review process. Further, I would like to thank the Alfred P. Sloan Foundation for their support of the Open Review Toolkit. Finally, the Open Review Toolkit builds on some amazing open source software. I’d like to thank everyone who contributed to the projects we used in the Open Review Toolkit: Pandoc, LaTeX, Hypothes.is, Vagrant, Ansible, Middleman, Bootstrap, Nokogiri, GNU Make, and Bundler.

You can read more about the Open Review Toolkit at our webpage and download our code from GitHub.

CITP Call for Visitors and Affiliates for 2017-18

The Center for Information Technology Policy is an interdisciplinary research center at Princeton that sits at the crossroads of engineering, the social sciences, law, and policy.

We are seeking applicants for various residential visiting positions and for non-residential affiliates. For more information about these positions, please see our general information page and yearly call for applications and our lists of current and past visitors.

We are happy to hear from anyone working at the intersection of digital technology and public life, including experts in computer science, sociology, economics, law, political science, public policy, information studies, communication, and other related disciplines.

We have a particular interest this year in candidates working on issues related to Interconnection, the Internet of Things (IoT), and the ethics of big data and algorithms.


All visitors must apply online through the Jobs at Princeton site. There are three job postings for CITP visitors: 1) the Microsoft Visiting Professor of Information Technology Policy, 2) Visiting IT Policy Fellow, and 3) IT Policy Researcher.

A Visiting IT Policy Fellow is on leave from a full-time position (for example, a professor on sabbatical); an IT Policy Researcher will have Princeton University as the primary affiliation during the visit to CITP (for example, a postdoctoral researcher or a professional visiting for a year between jobs). As such, applicants should apply to either the Visiting IT Policy Fellow position or the IT Policy Researcher position, as appropriate; applicants to either position may also apply to be the Microsoft Visiting Professor.

Applicants should submit a current curriculum vitae, a research plan (including a description of potential courses to be taught if applying for the Visiting Professorship), and a cover letter describing background, interest in the program, and any funding support for the visit. CITP has secured limited resources from a range of sources to support visitors. However, many of our visitors are on paid sabbatical from their own institutions or otherwise provide some or all of their own outside funding.

Microsoft Visiting Professor of Information Technology Policy

The successful applicant must possess at least a bachelor’s degree and will be appointed to a ten-month term, beginning September 1st, with the possibility of renewal for a second year. The Visiting Professor must teach one course in technology policy per academic year. Preference will be given to current or past professors in related fields and to nationally or internationally recognized experts in technology policy.

The application process for the Microsoft Visiting Professor of Information Technology position is generally open from November through the end of January for the upcoming year.

To apply to become the Microsoft Visiting Professor, please go to Jobs at Princeton, click on “Search Open Positions,” and enter requisition number 1600994.

Visiting IT Policy Fellow; IT Policy Researcher

The successful applicant must possess an advanced degree and typically will be appointed to a nine- to twelve-month term, beginning September 1st. These visitors may teach a seminar if desired, subject to the approval of the Dean of the Faculty. We encourage candidates at all levels to apply.

As noted above, candidates should apply to either the Visiting IT Policy Fellow position (if they will be on leave from a full-time position) or the IT Policy Researcher position (if not). Please do not apply to both listings.

Full consideration for the Visiting IT Policy Fellow and IT Policy Researcher positions is given to those who apply from November through the end of January for the upcoming year.

To apply to become a Visiting IT Policy Fellow, please go to Jobs at Princeton, click on “Search Open Positions,” and enter requisition number 1600996.

To apply to become an IT Policy Researcher, enter requisition number 1600995.

Princeton University is an Equal Opportunity/Affirmative Action employer and all qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity or expression, national origin, disability status, protected veteran status, or any other characteristic protected by law.

All offers and appointments are subject to review and approval by the Dean of the Faculty.


Technology policy researchers and experts who wish to have an affiliation with CITP, but cannot be in residence in Princeton, may apply to become a CITP Affiliate. The affiliation typically will last for two years. Affiliates do not have any formal appointment at Princeton University.

Applicants should email applications between November and the end of January for affiliations beginning the following academic year. Please send a current curriculum vitae and a cover letter describing background and interest in the program.

New Workshop on Technology and Consumer Protection

[Joe Calandrino is a veteran of Freedom to Tinker and CITP. As longtime readers will remember, he did his Ph.D. here, advised by Ed Felten. He recently joined the FTC as research director of OTech, the Office of Technology Research and Investigation. Today we have an exciting announcement. — Arvind Narayanan.]

Arvind Narayanan and I are thrilled to announce a new Workshop on Technology and Consumer Protection (ConPro ’17) to be co-hosted with the IEEE Symposium on Security and Privacy (Oakland) in May 2017:

Advances in technology come with countless benefits for society, but these advances sometimes introduce new risks as well. Various characteristics of technology, including its increasing complexity, may present novel challenges in understanding its impact and addressing its risks. Regulatory agencies have broad jurisdiction to protect consumers against certain harmful practices (typically called “deceptive and unfair” practices in the United States), but sophisticated technical analysis may be necessary to assess practices, risks, and more. Moreover, consumer protection covers an incredibly broad range of issues, from substantiation of claims that a smartphone app provides advertised health benefits to the adequacy of practices for securing sensitive customer data.

The Workshop on Technology and Consumer Protection (ConPro ’17) will explore computer science topics with an impact on consumers. This workshop has a strong security and privacy slant, with an overall focus on ways in which computer science can prevent, detect, or address the potential for technology to deceive or unfairly harm consumers. Attendees will skew towards academic and industry researchers but will include researchers from government agencies with a consumer protection mission, including the Federal Trade Commission—the U.S. government’s primary consumer protection body. Research advances presented at the workshop may help improve the lives of consumers, and discussions at the event may help researchers understand how their work can best promote consumer welfare given laws and norms surrounding consumer protection.

We have an outstanding program committee representing an incredibly wide range of computer science disciplines—from security, privacy, and e-crime to usability and algorithmic fairness—and touching on fields across the social sciences. The workshop will be an opportunity for these different disciplinary perspectives to contribute to a shared goal. Our call for papers discusses relevant topics, and we encourage anyone conducting research in these areas to submit their work by the January 10 deadline.

Computer science research—and computer security research in particular—excels at advancing innovative technical strategies to mitigate potential negative effects of digital technologies on society, but measures beyond strictly technical fixes also exist to protect consumers. How can our research goals, methods, and tools best complement laws, regulations, and enforcement? We hope this workshop will provide an excellent opportunity for computer scientists to consider these questions and find even better ways for our field to serve society.

Privacy: A Personality, Not Property, Right

The European Court of Justice’s decision in Google v. Costeja González appears to compel search engines to remove links to certain impugned search results at the request of individual Europeans (and potentially others beyond Europe’s borders). What is more, Costeja may inadvertently and ironically have the effect of appointing American companies as private censors and arbiters of the European public interest.

Google and other private entities are therefore saddled incomprehensibly with the gargantuan task of determining how to “balance the need for transparency with the need to protect people’s identities,” and Costeja’s failure to provide adequate interpretive guidelines further leads to ad hoc approaches by these companies. In addition, transparency and accountability are notoriously difficult to cultivate when balancing delicate constitutional values, such as freedom of expression and privacy. Indeed, even the constitutional courts and policy makers who typically perform this balancing struggle with it—think of the controversy associated with so-called “judicial activism.” The difficulty skyrockets when the balancers are instead inexperienced and reticent corporate actors, who presumably lack the requisite public legitimacy for such matters, especially when dealing with foreign (non-U.S.) nationals.

The Costeja decision attempts to paper over the growing divergence between Anglo-American and continental approaches to privacy. Its poor results highlight internal normative contradictions within the continental tradition and illustrate the urgency of re-conceptualizing digital privacy in a more transystemically viable fashion. [Read more…]