Updated at 8:35 p.m. ET
By the time a pro-Trump mob stormed the U.S. Capitol on Jan. 6, fueled by far-right conspiracies and lies about a stolen election, a group of researchers at New York University had been compiling Facebook engagement data for months.
The NYU-based group, Cybersecurity For Democracy, was studying online misinformation — wanting to know how different types of news sources engaged with their audiences on Facebook. After the events of Jan. 6, researcher Laura Edelson expected to see a spike in Facebook users engaging with the day's news, similar to Election Day.
But Edelson, who helped lead the research, said her team noticed a troubling phenomenon.
"The thing was, most of that spike was concentrated among the partisan extremes and misinformation providers," Edelson told NPR's All Things Considered. "And when I really sit back and think about that, I think the idea that on a day like that, which was so scary and so uncertain, that the most extreme and least reputable sources were the ones Facebook users were engaging with, is pretty troubling."
But it wasn't just one day of high engagement. A new study from Cybersecurity For Democracy found that far-right accounts known for spreading misinformation are not only thriving on Facebook, they're actually more successful than other kinds of accounts at getting likes, shares and other forms of user engagement.
It wasn't a small edge, either.
"It's almost twice as much engagement per follower among the sources that have a reputation for spreading misinformation," Edelson said. "So, clearly, that portion of the news ecosystem is behaving very differently."
The research team used CrowdTangle, a Facebook-owned tool that measures engagement, to analyze more than 8 million posts from almost 3,000 news and information sources over a five-month period. Those sources were placed in one of five categories for partisanship — Far Right, Slightly Right, Center, Slightly Left, Far Left — using evaluations from Media Bias/Fact Check and NewsGuard.
Each source was then evaluated on whether it had a history of spreading misinformation or conspiracy theories. What Edelson and her colleagues discovered is what some Facebook critics — and at least one anonymous executive — have been saying for some time: that far-right content is just more engaging. In fact, the study found that among far-right sources, those known for spreading misinformation significantly outperformed non-misinformation sources.
In all other partisan categories, though, "the sources that have a reputation for spreading misinformation just don't engage as well," Edelson said. "There could be a variety of reasons for that, but certainly the simplest explanation would be that users don't find them as credible and don't want to engage with them."
The researchers called this phenomenon the "misinformation penalty."
Facebook has repeatedly promised to address the spread of conspiracies and misinformation on its site. Joe Osborne, a Facebook spokesperson, told NPR in a statement that engagement is not the same as how many people actually see a piece of content. "When you look at the content that gets the most reach across Facebook, it's not at all as partisan as this study suggests," he said.
In response, Edelson called on Facebook to be transparent with how it tracks impressions and promotes content: "They can't say their data leads to a different conclusion but then not make that data public."
"I think what's very clear is that Facebook has a misinformation problem," she said. "I think any system that attempts to promote the most engaging content, from what we can tell, will wind up promoting misinformation."
Editor's note: Facebook is among NPR's financial supporters.
MICHEL MARTIN, HOST:
We've talked a lot in recent years about misinformation and about how it spreads online, but we have new information about that. You might remember that Facebook has promised repeatedly in recent years to address the spread of conspiracy theories and misinformation on its site. But a new study from researchers at New York University shows that far-right accounts known for spreading misinformation are not only thriving on Facebook - they are actually more successful than other kinds of accounts at getting likes, shares and other forms of user engagement. Laura Edelson helped lead that research. She is part of Cybersecurity for Democracy, a group based at NYU that's studying online misinformation. And she's with us now.
Laura Edelson, thank you so much for being with us.
LAURA EDELSON: Great to be here.
MARTIN: And I do want to note that Facebook is among NPR's financial supporters. With that being said, could you walk us through these findings in nonexpert terms? As briefly as you can, what question was your research team looking at, and what did you find?
EDELSON: Absolutely. So after the events of the last few months, we really wanted to understand how different types of news media engaged with their audiences on Facebook. So we got third-party evaluations of news quality and partisanship, and we combined that with Facebook data about engagement. And what we found is that overall, far-right news sources have much more engagement with their audiences than other partisan categories. But most of that edge comes from sources with a reputation for spreading misinformation.
So on the far right, misinformation sources outperformed more reputable sources by quite a bit. But for all other partisan categories, including slightly right, the reverse was true. Sources with a reputation for spreading misinformation performed worse, and usually significantly so. And we call that effect a misinformation penalty.
MARTIN: So when you talk about a far-right news source, do you feel comfortable giving us an example that we might recognize? I know what I certainly think of, but what are you thinking of?
EDELSON: So some of the top-performing far-right news sources in our data set were things like Newsmax, Breitbart - that kind of media source.
MARTIN: And you and your colleagues say that far-right content is the only partisan leaning in which misinformation actually drives more engagement. And there is no what you call misinformation penalty. Could you just talk a little bit more about that? So a misinformation penalty is what? Is that if you are demonstrated to be inaccurate or false, then what? People who are on the left side of the ledger would give you less credibility. Is that it?
EDELSON: So we can't say exactly why it's happening. But what we see is that for left-leaning sources, for center sources, and even slightly right, the sources that have a reputation for spreading misinformation just don't engage as well. There could be a variety of reasons for that. But certainly, the simplest explanation would be that users don't find them as credible and don't want to engage with them.
MARTIN: But you're saying that misinformation actually drives more engagement with far-right content, which is remarkable.
EDELSON: Yeah. The effect was quite striking because it's not a small edge, either. It's almost twice as much engagement, you know, per follower among the sources that have a reputation for spreading misinformation. So clearly, that portion of the news ecosystem is just behaving very differently.
MARTIN: And it's my understanding that Facebook responded by saying engagement isn't the same as how many people actually see a piece of content. So perhaps you could talk a little bit more about that. Like, what do we know about how Facebook promotes content? And what do you make of their response?
EDELSON: Well, we really don't know that much about how Facebook promotes content. We know that engagement is part of what drives Facebook's algorithm for promoting content, but they really don't make a lot of information about that available. Frankly, I would love for Facebook to make the data available that backs this assertion, but they don't make it public. And this is where I just think Facebook can't have it both ways. They can't say that their data leads to a different conclusion but then not make that data public.
MARTIN: I recognize that the purpose of the study is to analyze what is as opposed to, say, what should be. But does your team have recommendations? Because it sounds like - I mean, and I understand exactly what you're saying - there is not sort of publicly available data from Facebook that would help us understand why far-right misinformation drives more engagement.
But it sounds from what you're telling us that people seek this stuff out and believe it because they want to. They want to seek it out, and they want to engage with it. So if that's the case, do you have recommendations about that?
EDELSON: I think what's very clear is that Facebook has a misinformation problem. I think any system that attempts to promote the most engaging content, from what we can tell, will wind up promoting misinformation. And just to pull out one portion of our data that I know I was really concerned about when I saw it is, you know, of course, we saw a spike of engagement with news content on January 6. That's to be expected. The thing was that most of that spike was concentrated among the partisan extremes and misinformation providers.
And when I really sit back and think about that, I think the idea that on a day like that, which was so scary and so uncertain, that the most extreme and least reputable sources were the ones that Facebook users were engaging with is pretty troubling. And I think those are the kinds of circumstances where especially Facebook has a responsibility to its users and to the wider public to do a better job of stopping misinformation from spreading. And I think, you know, that's true every day.
MARTIN: That was Laura Edelson. She's a Ph.D. candidate at New York University and a researcher with Cybersecurity for Democracy.
Laura Edelson, thank you so much for being with us and sharing this work with us.
EDELSON: Thanks for having me.
(SOUNDBITE OF MISTY SAPPHIRE'S "BLAZO") Transcript provided by NPR, Copyright NPR.