October 2, 2022

Improving Your Relationship with Social Media May Call for a Targeted Approach

By Max Fineman and Matthew Salganik 

Chances are, you’re on social media every day. If you have teens, they are too. And everyone seems worried about just how much social media they’re consuming, even many teens themselves.  Beyond these individual worries, some researchers have linked social media use to increases in political tribalism, mental health problems, and suicide.  Yet at the same time, many people seem to really love using social media.  This combination was puzzling to us, as social scientists.  So, as part of our recent undergraduate course in social networks at Princeton University, we decided to explore it further.  In particular, we decided to ask: How can people use scientific ideas to create a healthier relationship with social media?

You might think the best approach is to change your behavior based on previous research, but that runs into two problems.  First, prior research doesn’t necessarily shed light on how individual users are affected by social media, and second, as best as researchers can tell, different social media platforms seem to impact people differently.  Therefore, if you want to understand and improve your own relationship with social media, a promising approach is self-experimentation, where you basically run experiments on yourself.

The students in our social networks class did just that – designing, conducting, and reflecting on a self-experiment involving social media. For example, one student who was interested in improving their sleep decided to stop using TikTok after 10 p.m. Another student interested in being less lonely posted more Instagram Stories.

About 60 students did the activity, and there were some interesting patterns in what they found. We expected that students who limited their use—as opposed to increasing it—would benefit more in terms of personal well-being, loneliness, productivity, and sleep quality. But it turns out that the students who saw the most positive outcomes were those who designed their social media intervention in a targeted way – like avoiding Instagram while in the library. These students benefited more than students who tried something blunt, like quitting TikTok altogether. In other words, changes that should be the easiest to try — small interventions students could stick to long-term — had the most positive effects.

Below we describe what our students did and learned. We’ve also included all of our materials so that you can try it yourself.

What students did:

Our class had about 60 students, from a variety of years and majors, and like most Princeton students, many of them were heavy users of several social media platforms.  Each of them designed their own treatment and selected an outcome of interest.  For example, some students were interested in improving their sleep and others were interested in wasting less time.  In addition to these student-specific outcomes, we also had all students track two common outcomes that have been studied by other researchers: subjective well-being and time use changes.  

This process of self-experimentation is a bit different from what social scientists normally do. Typically, researchers standardize the treatment, randomly assign treatments to participants, and collect data so that we can compare across participants and treatment groups. In our class, however, each participant was a researcher and designed a unique treatment for themself. Even though this is not standard for research, self-experimentation can be a good way to learn.  The treatments students developed fell into three main groups:

  • Targeted limitation (about 45%). Students in this group restricted – but did not eliminate – their social media use. For example, students in this group did things like stopping TikTok use after 8 p.m. and avoiding the Instagram feed (but still using Instagram for messaging).
  • Targeted increase (about 15%).  In class, we learned about some research that suggests people who use social media actively—rather than passively scrolling—see an improvement in their well-being.  So some students committed to increasing their active engagement with social media.  For example, students in this group did things like posting 3 times per day on Instagram or direct-messaging at least 3 friends. 
  • Elimination (about 40%). Students in this group eliminated their social media use altogether on one or more apps. Students who designed these treatments did things like delete Instagram or TikTok from their phone, and some actively replaced their social media use with another activity they valued such as reading the news or spending time with friends.
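For readers who want to analyze a self-experiment like the ones above, a before-and-after comparison of a daily outcome measure is about as much as single-person data can support. Here is a minimal sketch in Python; the scores and the 1–10 mood scale are invented for illustration, not our students’ actual data:

```python
# Minimal sketch of analyzing a self-experiment, assuming you log one
# outcome score (e.g., daily mood on a 1-10 scale) once per day.
# The numbers below are made up for illustration.
from statistics import mean

baseline = [5, 6, 5, 4, 6]   # days before the treatment started
treatment = [7, 6, 8, 7, 7]  # days during the treatment period

# With n=1, a simple difference in means is about as far as the
# data can be pushed; anything fancier would overstate precision.
effect = mean(treatment) - mean(baseline)
print(f"Average change during treatment: {effect:+.1f}")
```

Keeping the log in a plain list like this also makes it easy to eyeball day-to-day variation, which is often as informative as the average.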

What students found:

Students who designed a targeted intervention—either a decrease or an increase in use—experienced the greatest benefits to their overall well-being. Students who made untargeted changes, such as deleting apps, tended not to experience as much benefit. This difference is probably because many students already had strong intuitions about which parts of their social media use were harming them.

In the following sections, we provide a bit more detail on the effects of the different types of treatments.

Targeted limitation

The most popular type of treatment restricted just a part of a student’s social media use.  These treatments fell into roughly three groups:

  1. limiting time (e.g., only using social media 30 minutes a day, not using Instagram after 8 p.m.);
  2. adding friction (e.g., moving the social media apps from their phone’s home screen); and,
  3. avoiding specific features (e.g., not using the News Feed but continuing to use other parts of Facebook).  

Here’s what happened to students who restricted –  but did not eliminate – their social media use:

  • Well-being improved: About half reported increased daily happiness and more positive emotions throughout the day.
  • Loneliness decreased: More than a third reported feeling less lonely, while fewer than 15% experienced an increase in loneliness.
  • Productivity increased: Almost every student told us they were more productive.
  • Sleep quality improved: Half slept better, and the majority of the rest experienced no change. The effect on sleep quality was especially strong for students who added friction or avoided specific features.
  • More in-person social interaction: Most reported engaging in more social interaction during the treatment period, usually hanging out with friends in person.
  • Overall phone usage decreased: A majority spent less overall time on their phones during the experiment. For the most part, these were students who added friction or limited time. On the other hand, students who avoided specific features were more likely to spend the same amount of time on their phones as they did before their experiment.
  • Many students kept their limitation after the treatment period ended: About half stuck to their intervention, even after the class activity was over. 

Targeted increase

In contrast to students who limited use, about 15% of students in the class increased their usage for the experiment.  Students might have designed these kinds of treatments because in class we discussed a few studies suggesting that some kinds of social media usage can have a positive effect on well-being.  Overall, students who increased their usage in a targeted way saw some positive effects, but they weren’t as strong as those of the students who did targeted limitations. 

  • Well-being improved. More than 60% said they experienced an increase in their well-being, happiness, and other positive emotions. 
  • Less stressed, anxious, and lonely. About a third reported feeling less stressed, anxious, and lonely.  
  • Changes in phone usage were mixed. A third said they used their phones less, a third said they used their phones more, and the rest said they used their phones the same amount as before the treatment period.
  • Many students continued their intervention after the treatment period ended. Interestingly, more than 40% kept up their increased use long-term. Most of these students had increased some type of active engagement, such as direct messaging with friends or regularly posting photos and videos on Instagram or TikTok.


Elimination

Among students who completely eliminated usage of at least one social media app, we didn’t see as much of an overall pattern:

  • Well-being was mixed: Half said their well-being didn’t change, and the other half was split between those who said it improved and those who said it worsened. 
  • Stress and anxiety decreased for some, worsened for others: Around 70% said they experienced less stress and anxiety, but the other 30% actually felt more stress and anxiety during the treatment period than before.
  • Loneliness worsened or did not change: More than half did not report a change in how lonely they felt, and almost a third felt more lonely during the treatment period.
  • Productivity changes were mixed: These students were roughly equally split between those who said they were more productive and those who said their productivity didn’t really change.
  • Sleep quality improved for some: Half said they slept better but about a third said they got less sleep when they eliminated their social media use.
  • In-person social interaction increased: The vast majority spent more time with their friends in-person than before.
  • Overall phone use decreased: Most said their overall phone screen time went down during the treatment period.
  • Most returned to their old behavior: Most returned to their pretreatment use patterns after the experiment ended, but about 40% reduced or eliminated their use long-term. Some said that during the activity they discovered benefits of reducing their usage and introduced limitations after the treatment period was over. 

Why did targeted interventions show more positive effects than elimination interventions?

When we compared the three groups, we saw a general pattern indicating that targeted interventions—either limitations or increases in social media use—worked better than elimination.  This pattern surprised some of us who thought that the most important difference would be between increases and decreases in usage.  The students’ self-reflections after the experiment offer some clues about the pattern.

Students using targeted interventions frequently wrote that they targeted only the parts of social media that their past experiences suggested were especially influential for them, whether positive or negative. By designing a strategic, specific intervention, they still maintained their use of other parts of social media that they liked and believed were beneficial to them. 

For example, many students already suspected that social media was distracting them from their coursework. They limited the time they spent on social media during the hours of the day when they did their coursework and saw an improvement in their productivity. But by focusing their treatment only on their coursework hours, they were able to keep using social media in other ways that benefited them.

By contrast, students who eliminated every part of their usage were more likely to tell us that they missed certain aspects of social media during their intervention.  For example, many students who deleted their apps altogether expressed frustration at not being able to do something they liked to do. Many also reported that they worried they were missing out on online social interactions and opportunities.  They may have thrown out the good with the bad, leading to less overall improvement in well-being. 

Our main takeaway is that, if you want to reduce social media’s harmful effects on you and increase its benefits, the most effective approach may be to try a targeted intervention.

We want to point out several caveats. First, the right targeted intervention will vary from person to person.  All our students were doing this as part of a class activity, and the treatment period was only a few days.  Also, because students designed their own treatment—rather than having it assigned to them—it is hard to rule out the possibility that certain kinds of students might have self-selected into targeted interventions.  Further, many of the measurements were not as precise as we would like, and our analysis was more informal than we would use in other settings.  Finally, not that many students did a targeted increase, so it is hard to say very much about this group.

Try it yourself.

Although our findings are limited in important ways, one of the great things about this activity is that anyone can do it, even outside of a classroom setting.  If you want to try it yourself, we’ve included a slightly modified version of the materials that we used in our class at Princeton.  In just three weeks, you can potentially improve your relationship with social media and learn about the joys and struggles of doing real social science research.

If you are interested in trying this out, here are the materials we used for this activity and the class more generally.

Please note that for some people, social media has very significant impacts on their mental health, both positive and negative. We urge you to exercise caution when experimenting with something that affects your mental health, and you may want to consult a mental health professional before trying any experimentation.  If you are struggling with your mental health and need help, the National Alliance on Mental Illness provides numerous resources.

Also, if you are considering using this activity in a class that you teach, here are three things to consider:

  1. The activity tries to provide a mix of structure and flexibility.  Based on our conversations with students, we think that the freedom to choose their own treatment, outcomes, and hypothesis is key to making this successful.  We also think the chance to discuss the activity with peers was valuable. It helped the students see themselves differently and learn more about the variety of ways that people interact with social media.  That said, this flexibility often makes the results less scientifically rigorous.  Whenever there was a tension between making this a good learning activity and a good research project, we tried to lean into that tension and remind students that all research designs involve trade-offs.
  2. A major design decision you’ll need to make is the length of the treatment period.  For our class, the treatment periods typically lasted between 3 days and a week.  After the experiment many students reported wishing that the treatment period was longer. However, if your treatment period is longer, it may be harder to sustain.
  3. In our evaluation, the students reported finding the activity valuable, interesting, and not too time consuming.  Although we didn’t assess it formally, we think that many students would also say that this activity helped improve their well-being and relationship with social media.

Thanks to the teaching staff from this year and last year for helping us shape this activity: Emily Cantrell, Kyle Chen, and Katie Donnelly-Moran.  We also want to thank Janet Vertesi who has used a related activity in some of her classes. 

What our students found when they tried to break their bubbles

This is the second part of a two-part series about a class project on online filter bubbles. In this post, we focus on the results. You can read more about our pedagogical approach and how we carried out the project here.

By Janet Xu and Matthew J. Salganik

This past spring, we taught an undergraduate class on social networks at Princeton University which involved a multi-week, student-led collective class project about algorithmic filter bubbles on Facebook. We wanted to expose students to the process of doing real research, and filter bubbles seemed like an attractive topic because they are interesting, important, and tricky to study. The project—which we called Breaking Your Bubble—had three steps: measuring your bubble, breaking your bubble, and studying the effects. In short, all 130 undergraduates in the class measured their Facebook News Feed for four weeks—recording the slant (liberal, neutral, or conservative) of the political posts that they saw. Then, starting in the second week of the project, students implemented procedures they had developed in order to change their News Feeds, with the goal of achieving a “balanced diet” that matched the baseline distribution of what is being shared on Facebook. Students also came up with public opinion questions for a big class survey, which they took at both the beginning and the end of the project. You can read more about what exactly we did, how it worked, and what we’d do differently next time here.
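The “balanced diet” check the students performed can be sketched as a simple tally: count the slant of the political posts a student recorded, then compare each share against a target distribution. The posts and target fractions below are invented for illustration, not the actual baseline the class used:

```python
# Hypothetical sketch of comparing an observed News Feed "diet"
# against a target distribution.  Both the recorded posts and the
# target fractions are made up for illustration.
from collections import Counter

observed = ["liberal", "liberal", "neutral", "conservative", "liberal",
            "neutral", "liberal", "liberal"]
target = {"liberal": 0.4, "neutral": 0.35, "conservative": 0.25}

counts = Counter(observed)
total = len(observed)
for slant in target:
    share = counts[slant] / total
    gap = share - target[slant]
    print(f"{slant:>12}: saw {share:.0%}, target {target[slant]:.0%} ({gap:+.0%})")
```

The per-slant gaps tell a student which direction to adjust their feed; in this made-up example, liberal posts are over-represented relative to the target.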

Though our primary goal was to teach students about doing research, we also learned some surprising things about the Facebook News Feed from the aggregated student results.

[Read more…]

Breaking your bubble

This is the first part of a two-part series about a class project on online filter bubbles. In this post, we talk about our pedagogical approach and how we carried out the project. To read more about the results of the project, go to Part Two.

By Janet Xu and Matthew J. Salganik

The 2016 US presidential election dramatically increased public attention to online filter bubbles and their impacts on society. These online filter bubbles—roughly, personalized algorithms that over-expose people to information that is consistent with their prior beliefs—are interesting, important, and tricky to study. These three characteristics made online filter bubbles an ideal topic for our undergraduate social network class. In this post, we will describe a multi-week, student-led project on algorithmic filter bubbles that we ran with 130 students. We’ll describe what we did, how it worked, and what we’d do differently next time. You can read about what we learned from the results — which turned out to be pretty surprising — here.

[Read more…]

Announcing the Open Review Toolkit

I’m happy to announce the release of the Open Review Toolkit, open source software that enables you to convert your book manuscript into a website that can be used for Open Review. During the Open Review process everyone can read and annotate your manuscript, and you can collect valuable data to help launch your book. The goals of the Open Review process are better books, higher sales, and increased access to knowledge. In an earlier post, I described some of the helpful feedback that I’ve received during the Open Review of my book Bit by Bit: Social Research in the Digital Age.  Now, in this post I’ll describe more about the Open Review Toolkit—which has been generously supported by a grant from the Alfred P. Sloan Foundation—and how you can use it for your book.

As described on the project’s website, the Open Review Toolkit is a set of open source scripts that you can download and use to convert your manuscript to an Open Review website. One way to think about it is that the Open Review Toolkit is the plumbing that ties together four outstanding projects: Hypothes.is, Pandoc, Google Analytics, and Google Forms. Full technical details and all the code are available from the Open Review Toolkit GitHub repository, but here’s an overview.

The build process that converts a manuscript into an Open Review website is codified in a single Makefile and has three primary steps:

  1. Pandoc converts the book manuscript into a single HTML file.
  2. A set of custom scripts enrich the single HTML (e.g., with richer information about each citation) and then split the single HTML file into a bunch of different HTML files, one for each section of the book.
  3. Middleman uses those HTML files and some custom templates to create the Open Review website, which is a static HTML website.
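As a rough, hypothetical sketch of how a Makefile might wire these three steps together (the target and script names here are illustrative; the toolkit’s actual Makefile on GitHub will differ):

```makefile
# Hypothetical sketch only; see the toolkit's real Makefile on GitHub.
book.html: manuscript.md
	pandoc --standalone --output book.html manuscript.md  # step 1: Markdown -> single HTML

sections: book.html
	./enrich.rb book.html                  # step 2: enrich citations...
	./split.rb book.html website/source/   # ...then split into per-section files

site: sections
	cd website && middleman build          # step 3: build the static site
```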

Step 1

Pandoc converts the book manuscript into a single HTML file. Currently, the only supported input format for this first step is Markdown. In other words, at this time, your manuscript must be written in Markdown. However, Pandoc supports a variety of formats as inputs, and in the future we hope to add support for additional input formats, such as LaTeX and Word. If you’d like to help build support for additional input formats, please get in touch.

Step 2

The custom scripts enrich and split the HTML output from Pandoc. First, an enrichment script adds information to each citation. In the future, additional enrichments could also be added at this step. Next, the splitting script splits the single HTML file into one file for each section of the book. These sections are then placed in a directory structure that reflects the hierarchy of the sections in the manuscript. This splitting script also creates a JSON file that includes metadata about the manuscript structure. This JSON metadata file allows the Middleman build process to create things such as the table of contents and previous / next page links between sections.
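As a rough illustration of the kind of structure such a metadata file needs to carry (the field names here are hypothetical, not the toolkit’s actual schema), a nested section tree is enough to drive both a table of contents and previous/next links:

```json
{
  "title": "Example Manuscript",
  "sections": [
    { "id": "intro", "title": "Introduction", "children": [] },
    { "id": "chapter-1", "title": "Chapter 1",
      "children": [
        { "id": "chapter-1-1", "title": "Section 1.1", "children": [] }
      ] }
  ]
}
```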

Step 3

Middleman builds the Open Review website, which is a static HTML website. The Middleman project lives inside the website/ directory. This project is pre-populated with existing layouts that include Google Analytics, Hypothes.is, and navigational elements for the site. This is also where pages that are part of the Open Review website but are not part of the manuscript reside (e.g., an About page). The HTML files from step 2 are used as the primary content for each book page on the site. These HTML files should not be manually modified as they will be overwritten the next time the site is built.

This entire build process takes place inside of a virtual machine we created that comes pre-installed with all the open-source software that you will need. By using this virtual machine, we hope to ensure that the Open Review Toolkit will work right the first time no matter what operating system you are using.

Once those three steps are complete, you have a set of static HTML files that you can host anywhere that you want (for my book, we are using GitHub pages). On the Open Review Toolkit website, I also describe additional features of the Open Review websites.

We’ve tried to make it as easy as possible to convert your manuscript into a modern and functional Open Review website. All of our code is open source, but if you’d like to hire a developer to help you do the conversion, the Open Review Toolkit has a recommended list of Preferred Partners.

The Open Review Toolkit, which was inspired by earlier innovations in academic publishing, would not have been possible without the help of many people. I would like to thank the folks at the Agathon Group, particularly Luke Baker (coding) and Paul Yuen (design), who built the Open Review website for my book Bit by Bit: Social Research in the Digital Age. The Open Review Toolkit grew out of that initial code and design. I would also like to thank Meagan Levinson and Princeton University Press for their support during the first Open Review process. Further, I would like to thank the Alfred P. Sloan Foundation for their support of the Open Review Toolkit. Finally, the Open Review Toolkit builds on some amazing open source software. I’d like to thank everyone who contributed to the projects we used in the Open Review Toolkit: Pandoc, LaTeX, Hypothes.is, Vagrant, Ansible, Middleman, Bootstrap, Nokogiri, GNU Make, and Bundler.

You can read more about the Open Review Toolkit at our webpage and download our code from GitHub.

Open Review leads to better books

My book manuscript, Bit by Bit: Social Research in the Digital Age, is now in Open Review.  That means that while the book manuscript goes through traditional peer review, I also posted it online for a parallel Open Review.  During the Open Review everyone—not just traditional peer reviewers—can read the manuscript and help make it better.


Schematic of Open Review.


Screenshot of Open Review interface. Click for full size.


I think that the Open Review process will lead to better books, higher sales, and increased access to knowledge.  In this blog post, I’d like to describe the feedback that I’ve received during the first month of Open Review and what I’ve learned from the process.

[Read more…]