Fighting Misinformation on Instagram
Role: User Experience Designer
Teammates: Irene Guo, Anabelle Teoh, Pete Yang
According to a research study from MIT, most misinformation spreads not out of bias, but out of laziness and distraction. People often share a post containing misinformation because it appears factual at first glance, and they don’t want to take the time and effort to research whether the information is actually true. That is the key behavior we aimed to design around: a solution will work if it combats misinformation while still allowing the user to remain a bit lazy.
This presents a design opportunity for us:
How can we allow a user to look deeper into the credibility of information they see without drastically increasing the perceived time it takes to do so?
We wanted to learn how different people approach news and information they see, including their patterns of fact-checking. We looked to the social networking platform Nextdoor to zero in on our target stakeholders. Nextdoor, which allows people in the same local area to connect, offered a straightforward way to find people in our community willing to share their experiences and insights.
We initially anticipated that individuals over the age of 40 would be the most susceptible to misinformation online, and we conducted 9 unstructured interviews with these individuals. We discovered that the vast majority of these older individuals are in fact very diligent about fact-checking and oftentimes deliberately seek out multiple sources of news to counteract bias.
We then decided to interview younger people in their 20s and 30s to see if there were differing trends in how they approach information they find online. We found that this demographic is more susceptible to misinformation; several of our interviewees cited not wanting to spend the time or effort to delve deeper into a piece of info they see online. Additionally, this group used social media platforms like Instagram and Twitter more than the older individuals did. This presented us with a promising opportunity for a design solution, and we chose this group as our primary stakeholder.
Our primary stakeholders: young adults in their 20s who frequently use Instagram. We chose Instagram because it is a popular social media app for people in their 20s and 30s, and, with the breadth of content it hosts, it is a place with a lot of misinformation.
After conducting 8 interviews with individuals over 40 and 10 interviews with individuals under 40, we collected our research in an affinity diagram. Creating this diagram allowed us to explore relationships between our user interviews and secondary research, and more easily visualize trends in how different stakeholders fact-check info they see online.
Included below are parts of the affinity diagram that guided our ideas on how to fight misinformation. Pink sticky notes denote broad, overarching topics; purple and blue sticky notes denote more specific subtopics; and yellow sticky notes feature direct pieces of research such as answers from stakeholder interviews and insights from research articles. Dark yellow notes are research from people under 40, while light yellow notes are research from people over 40.
Key insights from user research:
- Both older and younger interviewees cited the author behind an article as important and something to watch for.
- Younger individuals tend to be less active about fact-checking, largely due to feeling that it takes too much time. These individuals also generally trust information shared by friends and family, but fear that this creates an echo chamber on their social media feeds.
- People anticipate more misinformation from news sources with “an agenda”, and perceive that social media plays a large role in the spread of misinformation. From scientific research we learned that false content spreads faster than true content on social media platforms.
- We found that people generally determine the validity of a piece of news in passive or active ways. Some passive ways include going along with popular opinion by checking the comments section of a post or going along with what their friends post. Active ways of fact-checking include reading other articles on the topic or referring to fact-checking platforms like Snopes.com.
The primary problem space we observed through our user research was younger individuals perceiving that they did not have enough time or energy to diligently fact-check info. Moving forward, we wanted to ideate solutions that reduced that perception of time. The patterns of fact-checking we observed in older individuals offered promising ways of doing so, most notably through focusing on the author of a piece of info and exploring differing perspectives on an issue.
In our ideation process, we wanted to focus on solutions that would make it easier for users to validate the accuracy of social media content when they see it. By reducing the perceived length of time required for fact checking, we hoped to promote more critical thinking regarding the information users consume. Using insights from our affinity diagram, we created storyboards for multiple ideas. These two ideas initially showed the most promise:
Idea #1: Give journalists more power than the general public to ‘control’ the information that gets spread, by affording users an easier way to access journalists’ content and platforms.
Reasoning behind Idea #1: Social media has played a huge role in influencing what people believe and understand to be correct — which can lead to a lot of misinformation (Wales & Kopel, 2019). Many of our interviewees stated that the credibility they assign to a source or article depends on the reporter. One interviewee even stressed that “journalism is one of our last defenses against corruption”, and that its broken integrity has propelled the spread of false information. We believed we could enlist journalists in our solution to help validate and correct viral information.
Idea #2: Provide users with the option to view or share other posts related to the current topic when they are on social media (such as Twitter or Facebook). Similar posts can present contrasting opinions, other news articles, and opinions written by fellow users of the app.
Reasoning behind Idea #2: A few of our interviewees mentioned how people on social media tend to react very emotionally to news. As a result of peer pressure and the overload of information that people are exposed to, they form opinions extremely quickly without critically analyzing the topic. Prompting users to view different perspectives on the same topic could prevent such quick judgment and foster a more nuanced critical analysis of the information they see.
In the end, we moved forward with Idea #1. Several of the individuals we interviewed highlighted the importance of knowing the author behind a piece of info when assessing its accuracy. The usefulness of Idea #2, on the other hand, was unclear based on contextual research. While some of our interviewees expressed concern over the lack of varied perspectives on their news feeds, articles from Policy Review and Science Daily showed that echo chambers and selective exposure do not significantly contribute to misinformation.
Moving forward with Idea #1, we sketched potential ideas for how journalists could assess the accuracy of info online and how users could see those assessments.
Our first idea involves a browser plugin that lets a community of journalists annotate, comment, and rate news articles. Any user who has downloaded the plugin can view these annotations.
Our second idea centers around a rating system within social media sites, in this case Twitter and Instagram. Journalists would have the ability to rate the accuracy of info in a tweet or Instagram post, and our sketches depict the different ways those ratings would appear to users on the platforms.
These sketches helped tremendously in visualizing how we could integrate our idea of giving journalists more power to rate info online. Ideating rough solutions brought up valuable questions we needed to consider moving forward, most notably the privacy of journalists. Although we considered allowing journalists to keep their name hidden to prevent harassment, we realized that the core of our solutions centered around giving the user more power to fact-check info. Keeping journalists' names private would decrease transparency and make it harder for users to dig deeper and research the credentials of individual journalists.
After consolidating all of our sketches, we designed three different lo-fi prototypes to test with 5 different stakeholders (young adults in their 20s). We wanted to understand which prototype provided the most value to them, and how we could improve that prototype.
Moving forward, we decided to combine aspects of the second and third prototypes in our hi-fi prototype, since those received the most positive reception. We also decided to focus our solution on Instagram, since it is a more popular platform for people in their 20s. We researched where misinformation commonly appears on Instagram, and Instagram Stories appeared to be where it is most prevalent (Instagram, 2021).
Our solution to fighting misinformation on Instagram:
An Ethics and Standards Board (ESB), made up of esteemed journalists, can verify the accuracy of information in a post. Posts that have been verified are tagged, and users can tap the tag to view a popup explaining how the post was verified and who verified it. This tag also appears on the original Instagram post outside of Stories, and the original post also includes a button to view other verified posts about the same topic. Including the tag in both places allows for multiple entry points to learn more about why the post was verified.
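To make the pieces of the feature concrete, here is a minimal sketch of what an ESB verification record might look like. All names and fields here are illustrative assumptions for this case study, not Instagram's actual API:

```typescript
// Hypothetical data model for an ESB verification record (illustrative only).

interface EsbVerifier {
  name: string;       // shown in the overlay so users can research credentials
  credential: string; // e.g. "Investigative journalist, The Daily Ledger"
}

interface EsbVerification {
  postId: string;           // the tagged Instagram post
  topic: string;            // used to surface other verified posts on the topic
  verifiedAt: string;       // ISO-8601 timestamp of verification
  verifiers: EsbVerifier[]; // never anonymous, to preserve transparency
  explanation: string;      // shown in the tap-through overlay
}

// Copy for the tag itself; usability testing favored the neutral word
// "verified" over "recommended" or "supported".
function tagLabel(v: EsbVerification): string {
  const n = v.verifiers.length;
  return `Verified by the Ethics and Standards Board (${n} reviewer${n === 1 ? "" : "s"})`;
}
```

Keeping the verifiers as a named list in the record reflects the transparency decision above: the overlay can render each journalist's name and credentials rather than an anonymous seal.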
Below you can see a breakdown of our hi-fi prototype of the Instagram ESB. We tested the prototype with 8 of our target stakeholders and incorporated their feedback when refining the prototype for our final design.
Feedback from usability testing:
- Many users did not understand the "See Similar Posts" button underneath the Instagram post. Additionally, they reported that including two buttons on the post made the interface feel too cluttered.
- Users were confused by the copy of the tag "This information is recommended by the ESB." They initially did not understand what "ESB" stood for until they tapped the tag and read the full explanation on the overlay.
- Several users mentioned how they would like to see more professionals other than just journalists in the ESB. For example, one user said that for health-related info they would rather see doctors rating the post.
Based on user feedback from our high-fidelity prototype, we refined our designs. Many of the revisions centered around the specific copy we used; words such as "recommended" and "supported" were either unclear to users or made them trust the ESB feature less. We focused on clearer and more consistent copy for the ESB tag and overlay, notably using the more neutral word "verified."
The core functionality of the final design remains the same, with one key difference: Instagram posts no longer include a "See Similar Posts" button. This button confused users, so we opted to keep the similar posts as a standalone feature within the Instagram Explore Page. This way, users still have the ability to browse multiple verified posts without being overwhelmed by a cluttered UI on each Instagram post.
Below you can see a detailed breakdown of our final prototype of the Instagram ESB feature. The feature addresses the key problem we were designing for: How can we allow a user to look deeper into the credibility of information they see without drastically increasing the perceived time it takes to do so?
The ESB tag and overlay offload the fact-checking process from users onto the experts who make up the Ethics and Standards Board. This gives the user more context on the credibility of the info they see without drastically increasing the perceived time it takes to fact-check the post. Including the individual experts who have verified the post increases transparency and trust with the feature; users can research those individual experts' credentials if they want more context.
The Verified Posts feature incorporates our original Idea #2, which provided users with the option to view other posts related to the current topic. Although our research indicated that such a feature would not significantly impact the spread of misinformation, we still wanted to include it to give users the ability to delve deeper into a topic and engage with a variety of perspectives on those issues.
And that's our solution to fighting misinformation online! I hope you enjoyed reading this case study as much as I enjoyed tackling this problem. Feel free to play around with the full prototype below!
In the process of this project, we realized how difficult the topic of misinformation is to tackle; however, going through the entirety of the human-centered design process allowed us to progress and narrow down our topic a little further each week. We also learned how useful models like affinity diagrams can be when consolidating research and deciding on a direction, and how important it is to talk to stakeholders every step of the way; our group kept in contact with a couple of stakeholders, which immensely helped guide our solution. In the interest of time, we couldn’t dive into the specific process for joining the ESB and how the verification feature works on their end. Given more time, it would be interesting and important to create concrete designs for how the whole feature works behind the scenes. There’s also one piece of feedback I’d love to receive: if your Instagram app included this feature, would you care more about the validity of the information you see?