
Dissecting disinformation

By Chelsea Yates
Photos by University of Washington


Kate Starbird’s interest in digital volunteerism — how people share information on social media to help each other after a crisis — sparked her research in social computing.

“It was like studying the internet in its most ideal form — how online volunteers across the globe worked together to help survivors of earthquakes and hurricanes by verifying and sharing information about where to go for water, food and medical aid,” the human centered design and engineering (HCDE) assistant professor says.

“It’s pretty ironic if you think about the work we’re doing now,” she adds.

Starbird and her research team in the Emerging Capacities of Mass Participation (emCOMP) Lab still track information flow after crisis events, but these days the group is inundated with something much different and darker: disinformation — false content deliberately circulated to confuse and deceive.

We recently sat down with Starbird to discuss what it’s like to study information that’s disseminated to mislead, how it affects all of us, and what we can do to be savvier consumers of online content.

Kate Starbird, left, with HCDE doctoral student and emCOMP Lab researcher Ahmer Arif.

How is disinformation similar to or different from fake news and misinformation?

Misinformation is false information mistakenly or inadvertently spread. It’s not necessarily intentional. Disinformation is purposely — and strategically — circulated to misinform and create confusion. It’s a political tool used by nation-states like Russia to advance their geopolitical goals and confuse mass audiences. A society that is unsure where to turn for trusted information is a society easily manipulated.

Fake news can be a type of disinformation; it includes clickbait, political propaganda (fictional stories created to politicize an issue), manipulated photographs, and anything intentionally posted to discredit sources. It’s also become a term used to delegitimize mainstream media.

How did you start researching disinformation?

I stumbled into it through my interest in digital volunteerism. Emergency responders were reluctant to use crowdsourced information, so my team received a grant to demonstrate how crowds would weed out rumors and correct inaccurate information.

But that’s not what we discovered. Once online information is posted, it’s difficult to control and even harder to correct if wrong. Even if it’s later proved false, few people care by the time the corrected information is reported.

The more we studied crisis events, the more we noticed strange rumors that kept surfacing — like the claim that the U.S. Navy SEALs were behind the 2013 Boston Marathon bombing. Many seemed so bizarre that we initially regarded them as sensationalist conspiracy theories. We assumed that, while a few people might buy into them, mass audiences wouldn’t. But they kept appearing, and their popularity grew. This was our first glimpse into disinformation, though we didn’t know it at the time.

Your team analyzes hundreds of crisis events a year, identifies key patterns in disinformation that arise and then traces them back to the websites where they originally appeared. What's that like?

It can be very disorienting. Sometimes articles are copied and pasted across a wide variety of different domains so their original sources are unclear. Sometimes those domains also operate as conduits for state-sponsored propaganda. Remember that the goal of disinformation is to distract and deceive: While the targets might be left-leaning or right-leaning audiences, the motivation is to create and deepen societal distrust and division.

Why do we fall prey to disinformation?

It draws on our individual biases — we are more likely to accept information that aligns with our beliefs and reject what conflicts with them. We also tend to develop logical arguments that support what we think and want to believe. And what we want to believe shapes how we digest information.

Deliberately false information feeds on these personal beliefs and emotions, and when we surround ourselves with others who share similar beliefs and are likely to have similar emotional reactions, false information is more easily consumed without question. If our friends find a claim outrageous or believable, we’re likely to react the same way.

So false information becomes a part of the overall collective information we perceive as “real.” When we talk to people who hold different political beliefs than we do, it can seem like we’re not even discussing the same issue, because most likely we’re not operating from the same information base. The experience can be divisive, frustrating and confusing, which is exactly why disinformation is planted in the first place.

Starbird’s research team uses data visualization to identify patterns in disinformation.

How does social media play into this?

With social media, people select who and what they want to follow. It becomes easy to believe that everyone in our sphere thinks the same way we do, and we become less likely to be presented with information that conflicts with our worldview. Plus, social media platforms filter the posts we see: A computer algorithm sends targeted content and advertisements to us based on what we’ve liked or shared in the past. So, information tends to bounce around inside these digital echo chambers we’ve created for ourselves. And repetition works: The more we see something online, the more familiar it becomes and the more we tend to accept it as true.

EmCOMP researchers hold daily debrief meetings where they share the data they’ve collected and reflect on how it’s making them feel.

If we can be easily swayed by disinformation, how do you and your team keep a clear head when researching it?

As researchers we need to be mindful of how the content we’re interacting with may impact us. Practicing self-care is important. We research together in our lab and hold daily debrief meetings where we reflect on the data we’re collecting and how it’s making us feel. Everyone in our lab has the right — at any time — to distance themselves from this work if they feel it’s impacting them negatively.

How can we all be better consumers of online information?

It’s important to be aware of our own biases. If we come across an article that seems too good to be true, we should dig a little further before sharing it — it probably isn’t completely “true,” especially if it aligns with our beliefs or triggers us emotionally. A lot of false information preys on our feelings. Above all, we should remember that no matter how political, apolitical, right- or left-leaning we self-identify, we are all targets of disinformation. It is out there to divide us and erode our trust in democracy.


Want to learn more? Watch Starbird’s talk on online rumors, conspiracy theories and disinformation from the 2017 Engineering Lecture Series.

Originally published October 8, 2018