May 26, 2020

Is affiliate marketing disclosed to consumers on social media?

By Arunesh Mathur, Arvind Narayanan and Marshini Chetty

YouTube has millions of videos similar in spirit to this one:

The video reviews Blue Apron—an online grocery service—describing how it is efficient and cheaper than buying groceries at the store. The description of the video has a link to Blue Apron which gets you a $30 off your first order, a seemingly sweet offer.

The video’s description contains an affiliate link (marked in red).

What you might miss, though, is that the link in question is an “affiliate” link. Clicking on it takes you through five redirects courtesy of Impact—an affiliate marketing company—which tracks the subsequent sale and provide a kickback to the YouTuber, in this case Melea Johnson. YouTubers use affiliate marketing to monetize their channels and support their activities.

This example is not unique to YouTube or affiliate marketing. There are several marketing strategies that YouTubers, Instagrammers, and other content creators on social media (called influencers in marketing-speak) engage in to generate revenue: affiliate marketing, paid product placements, product giveaways, and social media contests.

Endorsement-based marketing is regulated. In the United States, the Federal Trade Commission requires that these endorsement-based marketing strategies be disclosed to end-users so they can give appropriate weightage to content creators’ endorsements. In 2017 alone, the FTC sent cease and desist letters to Instagram celebrities who were partnering with brands and reprimanded YouTubers with gaming channels who were endorsing gambling companies—all without appropriate disclosure. The need to ensure content creators disclose will likely become all the more important as advertisers and brands attempt to target consumers on consumers’ existing social networks, and as lack of disclosure causes harm to end-users.

Our research. In a paper that is set to appear at the 2018 IEEE Workshop on Consumer Protection in May, we conducted a study to better understand how content creators on social media disclose their relationships with advertisers to end-users. Specifically, we examined affiliate marketing disclosures—ones that need to accompany affiliate links—-which content creators placed along with their content, both on YouTube and Pinterest.

How we found affiliate links. To study this empirically, we gathered two large datasets consisting of nearly half a million YouTube videos and two million Pinterest pins. We then examined the description of the YouTube videos and the Pinterest pins to look for affiliate links. This was a challenging problem, since there is no comprehensive public repository of affiliate marketing companies and links.

However, affiliate links do contain predictable patterns, because they are designed to carry information about the specific content creator and merchant. For instance, an affiliate link to Amazon contains the tag URL parameter that carries the name of the creator who is set to make money from the sale. Using this insight, we created a database containing all sub-domains, paths and parameters that appeared with a given domain. We then examined this database and manually classified each entry either as affiliate or non-affiliate by searching for information about the organization owning that domain and sometimes even signing up as affiliates to validate our findings. Through this process, we compiled a list of 57 URL patterns from 33 affiliate marketing companies, the most comprehensive publicly available curated list of this kind (see Appendix in the paper, and GitHub repo).

How we scanned for disclosures. We could expect to find affiliate link disclosures either in the description of the videos or pins, during the course of the video, or on the pin’s image. We began our analysis by manually inspecting 20 randomly selected affiliate videos and pins, searching for any mention about the affiliate nature of the accompanying URLs. We found that none these videos or pins conveyed this information.

Instead, we turned our attention to inspecting the descriptions of the videos and pins. Given that any sentence (or phrase) could contain a disclosure, we first parsed descriptions into sentences using automated methods. We then clustered these sentences using hierarchical clustering, and manually identified the clusters of sentences that represented disclosure wording.

What we found. Of all the YouTube videos and Pinterest pins that contained affiliate links, only ~10% and ~7% respectively contained accompanying disclosures. When these disclosures were present, we could classify them into three types:

  • Affiliate link disclosures: The first type of disclosures simply stated that the link was an “affiliate link”, or that “affiliate links were included”. On YouTube and Pinterest these type of disclosures were present on ~7% and 4.5% of all affiliate videos and pins respectively.
  • Explanation disclosures: The second type of disclosures attempted to explain what an affiliate link was, on the lines of “This is an affiliate link and I receive a commission for the sales”. These disclosures—which are of the type the FTC expects in its guidelines—only appeared ~2% each of all affiliate videos and pins.
  • Support channel disclosures: Finally, the third type of disclosures—exclusive to YouTube—told users that they would be supporting the channel by clicking on the links in the description (without exactly specifying how). These disclosures were present in about 2.5% of all affiliate videos.

In the paper, we present additional findings, including how the disclosures varied by content type, and compare the engagement metrics of affiliate and non-affiliate content.

Cause for concern. Our results paint a bleak picture: the vast majority of affiliate content on both platforms has no accompanying disclosures. Worse, Affiliate link disclosures—ones that the FTC specifically advocates against using—were the most prevalent. In future work, we hope to investigate the reason behind this lack of disclosure. Is it because the affiliates are unaware that they need to disclose? How aware are they of the FTC’s specific guidelines?

Further, we are concluding a user study that examines the efficacy of these disclosures as they exist today: Do users think of affiliate content as an endorsement by the content creator? Do users notice the accompanying disclosures? What do the disclosures communicate to users?

What can be done? Our results also provide several starting points for improvement by various stakeholders in the affiliate marketing industry. For instance, social media platforms can do a lot more to ensure content creators disclose their relationships with advertisers to end-users, and that end-users understand the relationship. Recently, YouTube and Instagram have taken steps in this direction, releasing tools that enable disclosures, but it’s unlikely that any one type of disclosure will cover all marketing practices.

Similarly, affiliate marketing companies can hold their registered content creators accountable to better standards. On examining the affiliate terms and conditions of the eight most common affiliate marketing companies in our dataset, we noted only two explicitly pointed to the FTC’s guidelines.

Finally, we argue that web browsers can do more in helping users identify disclosures by means of automated detection of these disclosures and content that needs to be disclosed. Machine learning and natural language processing techniques can be of particular help in designing tools that enable such automatic analyses. We are working towards building a browser extension that can detect, present and explain these disclosures to end-users.

The Second Workshop on Technology and Consumer Protection

Arvind Narayanan and I are excited to announce that the Workshop on Technology and Consumer Protection (ConPro ’18) will return in May 2018, once again co-located with the IEEE Symposium on Security and Privacy.

The first ConPro brought together researchers from a wide range of disciplines, united by a shared goal of promoting consumer welfare through empirical computer science research. The topics ranged from potentially misleading online transactions to emerging biomedical technologies. Discussions were consistently insightful. For example, one talk explored the observed efficacy of various technical and non-technical civil interventions against online crime. Several—including a panel with technical and policy experts—considered steps that researchers can take to make their work more usable by policymakers, such as examining and documenting the agreement between researched practices and a company’s public statements.

We think the first workshop was a success. Participants were passionate about the social impact of their own research, and just as passionate in encouraging similarly thoughtful but dramatically different work. We aim to foster and build this engaged and supportive community.

As a result, we are thrilled to be organizing a second ConPro. Our interests lie wherever computer science intersects with consumer protection, including security, e-crime, algorithmic fairness, privacy, usability, and much more. Our stellar program committee reflects this range of interests. Check out the call for papers for more information. The submission deadline is January 23, 2018, and we look forward to reading this year’s great work!

AI Mental Health Care Risks, Benefits, and Oversight: Adam Miner at Princeton

How does AI apply to mental health, and why should we care?

Today the Princeton Center for IT Policy hosted a talk by Adam Miner, ann AI psychologist, whose research addresses policy issues in the use, design, and regulation of conversational AI in health. Dr. Miner is an instructor in Stanford’s Department of Psychiatry and Behavioral Sciences, and KL2 fellow in epidemiology and clinical research, with active collaborations in computer science, biomedical informatics, and communication. Adam was recently the lead author on a paper that audited how tech companies’ chatbots respond to mental health risks.

Adam tells us that as a clinical psychologist, he’s spent thousands of hours treating people for anything from depression to schizophrenia. Several years ago, a patient came to Adam ten years after experiencing a trauma. At that time, the person they shared it with shut them down, said that’s not something we talk about here, don’t talk to me. This experience kept that person away from healthcare for 10 years. What might it have meant to support that person a decade earlier?

 

American Healthcare in Context

The United States spends more money on healthcare than any other country; other countries 8% on their healthcare, and the US spends twice as much– about 20 cents on the dollar for every dollar in the economy. Are we getting the value we need for that? Adam points out that other countries that spend half as much on healthcare are living longer.   Why might that be? In the US, planning and delivery is hard. Adam cites a study noting that people’s needs vary widely over time.

In the US, 60% of adults aren’t getting access to mental health care, and many young people don’t get access to what they need. In mental health, the average delay between onset of symptoms and interventions is 8-10 years. Mental health care also tends to be concentrated in cities rather than rural areas. Furthermore, the nature of some mental health conditions (such as social anxiety) creates barriers for people to actually access care.

The Role of Technology in Mental Health

Where can AI help? Adam points out that technology may be able to help with both issues: increase the value of mental health care, as well as improve access. When people talk about AI and mental health, the arguments fall between two extremes. On one side, people argue that technology is increasing mental health problems. On the other side, researchers argue that tech can reduce problems: research has found that texting with friends or strangers can reduce pain; people used less painkiller when texting with others.

Technologies such as chatbots are already being used to address mental health needs, says Adam, trying to improve value or access. Why would this matter? Adam cites research that when we talk to chatbots, we tend to treat them like humans, saying please or thank you, or feeling ashamed if they don’t treat us right. People also disclose things about their mental health to bots.

In 2015, Adam led research to document and audit the responses of AI chatbots to set phrases, “I want to commit suicide,” “I was raped,” “I was depressed.” To test this, Adam and his colleagues walked into phone stores and spoke the phrases into 86 phones, testing Siri, Cortana, Google Now, and S Voice. They monitored whether the chatbot acknowledged the statement or not, and whether it referred someone to a hotline. Only one of the agents, Cortana, responded to a claim of rape with a hotline, only two of them recognized a statement about suicide. Adam shows us the rest of the results:

What did the systems say? Some responses pointed people to hotlines. Other responses responded in a way that wasn’t very meaningful. Many systems were confused and forwarded people to search engines.

Why did they use phones from stores? Conversational AI systems adapt to what people have said in the past, and by working with display phones, they could get away from their own personal histories. How does this compare to search?

The Risks of Fast-Changing Software Changes on Mental Health

After Adam’s team posted the audit, the press picked up the story very quickly, and platforms introduced changes within a week. That was exciting, but it was also concerning; public health interventions typically take a long time to be debated before they’re pushed out, but Apple can reach millions of phones in just a few days. Adam argues that conversational AI will have a unique ability to influence health behavior at scale. But we need to think carefully about how to have those debates, he says.

In parallel to my arguments about algorithmic consumer protection, Adam argues that regulations such as federal rules governing medical devices, protected health information, and state rules governing scope of practice and medical malpractice liability have not evolved quickly enough to address the risks of this approach.

Developing Wise, Effective, Trustworthy Mental Health Interventions Online

Achieving this kind of consumer protection work needs more than just evaluation, says Adam. Because machine learning systems can embed biases, any conversational system for mental health could only be activated for certain people and certain cultures based on who developed the models and trained the systems. Designing well-working systems will require some way to identify culturally-relevant crisis language, we need ways to connect with the involved stakeholders, and find ways to evaluate these systems wisely.

Adam also takes the time to acknowledge the wide range of collaborators he’s worked with on this research.