Dr. Josephine Lukito discusses how Russian trolls, posing as Americans online, sow discord and heighten U.S. political and racial divisions.

Producer: Stephanie McVicker
04/08/2021 • 06:45 PM EST


Dr. Josephine Lukito discusses her ground-breaking research on how Russian-troll comments find their way into the reporting of mainstream U.S. news and media outlets, presented as American popular opinion. She also explains how the intent behind this trolling is less about misinforming Americans and more about sowing discord, heightening current U.S. social and racial divisions, and furthering political polarization.

Transcript:
0:00: Stephanie McVicker:

With us today, we have Dr. Josephine Lukito, an assistant professor at the University of Texas at Austin School of Journalism and Media. Dr. Lukito studies political misinformation and disinformation, specifically as it relates to interactions between news outlets and the various social media platforms. Dr. Lukito's work has been widely cited, including in the Mueller Report. Dr. Lukito, thank you so much for joining us today.

0:25: Dr. Josephine Lukito:

Thank you so much for having me.

0:27: Stephanie McVicker:

In your 2018 Columbia Journalism Review study that was cited in the Mueller Report, you did a lot of research on Russian troll activity on the major social media platforms. Can you share some of your key findings with us?

0:40: Dr. Josephine Lukito:

Totally. So, in the 2018 Columbia Journalism Review study, one of the things we were really interested in was what situations existed where these Russian troll accounts would appear beyond their social media platforms. We knew that they appeared on Twitter and that Twitter had tried to delete them, but were there situations where they moved beyond that platform? And so, in that study, we were really interested in situations where Russian troll tweets were quoted in US news stories. And we had found a startlingly high number of US news outlets that had unintentionally embedded a tweet from a Russian troll without realizing it was a Russian troll.

And so, of the 33 outlets that we looked at, 32 of them had at least one story with a Russian troll. And in later studies we've expanded that frame out from 33 to about a hundred, and we still find that a majority of US news outlets, or outlets that are popular in the US, did unintentionally quote a Russian troll at some point between 2016 and 2018.

1:50: Stephanie McVicker:

And how were you able to identify these Russian troll account names? How did you know they were Russian accounts?

1:56: Dr. Josephine Lukito:

Yeah, so we originally started by using the list that was provided by, I believe, the Select Committee on Intelligence. That committee had released a list of about 3,000 or so accounts that were associated with the Russian IRA, or the Internet Research Agency, as it's typically known. And we built our analysis from there. So, we started with those accounts; the list gave a range of account names as well as their handles, which are called user IDs. And what we actually found when we started doing analysis and going back through Twitter archives was that some of these accounts changed their names. So they would start as one account name in 2014 and then in 2016 or 2017 they would totally change their account name. And so we ended up expanding that list out a little bit further by finding handles that the original list didn't actually provide. And then from that list, we crawled through dozens of news outlets and all of the stories they had been publishing, and ultimately came up with a list of a couple of hundred stories that had these Russian IRA troll tweets in them.
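To make the matching step concrete, here is a minimal Python sketch of the idea she describes: given a set of known IRA handles, check whether a story's HTML embeds a tweet from one of them. This is an illustration only, not the study's actual pipeline; the handle list, story URLs, and library choices (requests, BeautifulSoup) are assumptions made for the example.

```python
# Rough sketch: flag news stories that embed tweets from known IRA handles.
# Handles and URLs below are placeholders for illustration only.
import re
import requests
from bs4 import BeautifulSoup

IRA_HANDLES = {"ten_gop", "jenn_abrams"}          # hypothetical subset of the released list
STORY_URLS = ["https://example.com/news/story1"]  # stories gathered by a separate crawler

# Embedded tweets typically appear as <blockquote class="twitter-tweet"> containing
# a link of the form https://twitter.com/<handle>/status/<id>.
TWEET_LINK = re.compile(r"twitter\.com/([A-Za-z0-9_]+)/status/\d+")

def find_troll_embeds(url: str) -> list[str]:
    """Return the troll handles whose tweets are embedded in a story."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    hits = []
    for link in soup.select("blockquote.twitter-tweet a[href]"):
        match = TWEET_LINK.search(link["href"])
        if match and match.group(1).lower() in IRA_HANDLES:
            hits.append(match.group(1))
    return hits

for story in STORY_URLS:
    embedded = find_troll_embeds(story)
    if embedded:
        print(story, "embeds tweets from:", embedded)
```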

3:10: Stephanie McVicker:

Here on the screen, we have an example of one of those tweets, which is just congratulating Stephen Miller for calling Jim Acosta ignorant. What's the point of tweets like these?

3:23: Dr. Josephine Lukito:

So I think, when we think about Russian disinformation, we have to remember that it's a combination of both sometimes factually accurate and inaccurate information, but also a lot of opinion. And that's ultimately what we ended up finding in a lot of the stories that quoted these Russian trolls: they're not necessarily being quoted for factually accurate or inaccurate information, they're being quoted for some sort of opinion. Right. So making a statement that someone is being an idiot on screen, or making a comment like one tweet we had found where they were promoting "heterosexual pride day." And so in a lot of these situations, it's not just the mix of fact or fiction, it's also this combination of really incendiary, inflammatory opinions.

And one really popular thing to do in journalism is this practice called vox populi. Typically that's when a journalist wants to interview a regular citizen and ask them, what's your opinion on this news story? And this is not a new practice. Journalists have been doing this for decades, if you think of the man-on-the-street type interviews on television. But with social media, now it's a lot easier for a journalist to just go on Twitter or Facebook and find a really interesting or compelling tweet. And in doing so, they can inadvertently quote, you know, a disingenuous actor; in this case, a Russian troll.

4:51: Stephanie McVicker:

So in that same study, you found that media outlets with long-standing reputations, like the Washington Post, had referenced at least one tweet from the IRA, with journalists using these quotes in their stories. Is there a greater awareness of this problem among journalists since the study was made public?

5:09: Dr. Josephine Lukito:

I think so. I think it's unfortunate that I had to do this study at all. Right. I try to think of my research as, in my ideal world, I wouldn't have to do my research; disinformation wouldn't exist. But as long as disinformation does, I'll still be doing this work. And when I talk to journalists about this study in particular, I think they were really startled by the fact that they had a Russian troll tweet in some of their news stories. There was one piece from Slate, actually, where a journalist did a kind of retrospective of why she had embedded that tweet and what she wants to do in the future to avoid those sorts of situations. One thing that I've been really encouraging journalists to do is to try to reach out to the user that has posted the tweet and see if they can get permission, especially if it's an unverified user.

And that serves two purposes, right? You're informing the person that their content is being put in a news story. But also you can engage in that verification process just to make sure, even if it's an opinion, that it's a real person behind the scenes. And newsrooms, I think, vary a lot in whether they can do that. Originally I tried recommending calling the individual users, and I know that's really difficult to do with a Twitter user. And so a lot of journalists have instead decided that they want to DM, or direct message, that person to reach out to them. Maybe not as intense as a phone call, but still engaging in that verification process.

6:40: Stephanie McVicker:

Have you done any research that quantifies the degree to which Russian trolls are embedded in the comment section of the major US news outlets?

6:49: Dr. Josephine Lukito:

Unfortunately I haven't. And one of the reasons for that is I think because of the difficulty of collecting comments underneath news stories. So sometimes they use different platforms. For example, Disqus is a really common one, but isn't always used across the board. We have found a range of Russian trolls in like YouTube comments and on social media platforms. So it's a lot more common to see them there, but we haven't empirically looked at them underneath the news stories.

7:19: Stephanie McVicker:

Are there any particular social media platforms where Russian trolls are the most prominent?

7:24: Dr. Josephine Lukito:

So it appears that they were the most prominent by far on Twitter. They had the most accounts; that was their largest presence. And some of my work in 2020 and 2021 had been comparing their Twitter activity to Facebook and to Reddit, and by far I found that Twitter was the most active. But in terms of variety, I would say probably Facebook is where they really shine. So they had, you know, organic content, which is Russian trolls pretending to be regular everyday citizens, but on Facebook we also saw a large range of advertisements. And so that's where I would say their activity on Facebook was really differentiated: they had both organic content and they had put a lot of money towards targeted advertisements that were geographically tagged or demographically tagged. So, there were some really scary instances, in fact, where I found Russian IRA advertisements targeting university students, either at my alma mater, UW Madison, or here at UT Austin; there were a whole range of Russian ads targeting those college students.

8:36: Stephanie McVicker:

To what end, what are they trying to accomplish?

8:39: Dr. Josephine Lukito:

So a lot of the content, both organic and advertising-wise, that they produced really tried to identify issues that were highly, highly partisan or highly inflammatory, and then they tried to really punch up the language on that. To use a really crass example, refugees, and immigration in general, were a really common topic in Russian disinformation. And there were actually a couple of Russian troll accounts that used the term "rapefugee" to, again, bring in that really inflammatory language. And the goal is to make people upset, right? And so that's where the trolling language really comes in, because their goal isn't to provide information, and their goal isn't to persuade you to vote one way or another. Their goal is to make you hate fellow Americans.

9:30: Stephanie McVicker:

So in the 2021 study, you looked at the ways in which the IRA Twitter accounts built up their followings, which were central to their disinformation campaigns focused on the 2016 elections. Can you explain how they operated?

9:44: Dr. Josephine Lukito:

Totally. So of the 3,000 accounts that I had mentioned being released, there were about a hundred that were really, really prominent, that had gained a fairly substantial following. And while we were broadly interested in the content from all of these accounts, we were especially interested in the ones that garnered a ton of attention, because they had an outsized influence on the media ecology. And so we were really interested in how those top accounts got their followers. Of those major accounts, we took the top six and wanted to look at what factors contributed to their increase in retweetability, that is, how often their content was retweeted, but also in their number of followers.

And so in this analysis, what we ended up identifying was that when a prominent Russian IRA troll was quoted in a news story, or when they were retweeted by a prominent or verified Twitter account, they ended up getting a lot more followers. And this was only true for partisan-aligned amplification. So for example, TEN_GOP, which is a Russian troll that pretends to be conservative, would get more followers if they were, you know, embedded in a Fox News story, but not necessarily if they were embedded in a Huffington Post or New York Times story. But one of the things that we realized after this was that the broader media ecology has a much greater impact on the followership of these accounts than we realized.

So there's a lot of power in what verified accounts and what news organizations can do in amplifying this content. And I think it's important to keep in mind that I don't think it's possible to totally remove disinformation or to totally remove trolling, but I think there are definitely ways that we in our media ecology can stop that information from being amplified. And for me, this is where news organizations, influencers, and public figures can play a really, really critical role in stopping that information from reaching a much broader audience.
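As a very rough illustration of the kind of question described above (does mainstream amplification predict follower growth?), here is a toy Python sketch, not the study's actual model, with entirely hypothetical file and column names, regressing an account's daily follower growth on the previous day's news embeds and verified retweets.

```python
# Toy sketch: does yesterday's amplification predict today's follower growth?
# The CSV file and column names are hypothetical, for illustration only.
import pandas as pd
import statsmodels.formula.api as smf

# One row per account-day, built from archived tweets and collected news stories.
df = pd.read_csv("troll_account_days.csv")
df = df.sort_values(["account", "date"])

# Lag the exposure variables so yesterday's amplification predicts today's growth.
df["news_embeds_lag"] = df.groupby("account")["news_embeds"].shift(1)
df["verified_rts_lag"] = df.groupby("account")["verified_retweets"].shift(1)

model = smf.ols(
    "follower_growth ~ news_embeds_lag + verified_rts_lag + C(account)",
    data=df.dropna(),
).fit()
print(model.summary())
```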

11:51: Stephanie McVicker:

You just touched a little bit on how partisan enclaves and hyper-partisan media outlets contributed to the IRA's ability to disseminate disinformation. Can you go into how that works a little further?

12:04: Dr. Josephine Lukito:

Yeah. And I'm going to use just one example, one that always comes to mind. So this is another mix of factually incorrect information and opinion. There was a shock jock in Ohio who had posted a picture of a Cleveland Cavaliers celebration and pretended that it was actually a rally for Donald Trump in Phoenix. The picture got picked up from the shock jock, and TEN_GOP, the account I referenced earlier, ended up amplifying that image in its own separate tweet. And it got a ton of attention. It was retweeted by a whole slew of verified conservative figures, and it was shared across a variety of conservative news stories as proof that Donald Trump was actually popular. And after some time, people on Twitter actually realized that that image was not what they said it was; it wasn't actually a Trump rally, it was this sports celebration. And then you see the second cycle of liberal accounts, so liberal Twitter users as well as liberal news organizations, kind of making fun of conservatives for being duped by this picture.

Throughout this whole process, which lasted about two weeks, no one realized that it was a Russian troll. In the whole process, everyone focused on the picture, focused on how silly the conservatives or liberals were, but in no instance was there any actual fact-checking of the account itself. And I think it highlights that, when we talk about partisan media outlets, one body of content they produce tends to focus on slamming the opponent. Conservatives want to show situations where liberals are not thinking something through or where they're duped by something, and liberal partisan platforms have the tendency to do the same thing for conservative accounts.

But when we're trying to find those stories, or when we're trying to produce stories that are dinging the other side, we might not be engaging in that fact-checking and verification process as intensely as we want to. We want this information to be true. And so those biases unintentionally allow this sort of disinformation to proliferate. And I think that's something that Russian IRA trolls not only know, but exploit very heavily. And so I think this is one big reason why they target these sorts of contested issues, these so-called third-rail issues: things that are going to be naturally inflammatory, especially for partisans.

14:41: Stephanie McVicker:

And you found that race may play a significant role in determining targeting and consumption of disinformation. Can you elaborate a little on that as well?

14:51: Dr. Josephine Lukito:

Absolutely. So there are two components to this, especially when it comes to Russia. First, there's the component of Black Lives Matter being one of those contested issues. Conservatives and liberals feel very, very differently about Black Lives Matter and race relations, and this is something that Russia absolutely exploited when they were producing disinformation, both in 2016 and even after that. And so on subreddits they were active in, we saw that there were several Russian IRA accounts that pretended to be very, you know, pro "Blue Lives Matter," pro-cop, and then groups that were very anti-cop, for example. But it goes even beyond thinking of BLM as a kind of touchpoint issue.

The other thing that I think Russians were really aware of was the ability to target specific audiences, particularly those from a specific ethnic community or population. And so one study that I did, I think in 2020, with another colleague, Dr. Deen Freelon at UNC Chapel Hill, looked specifically at Russian trolls that pretended to be Black Americans, through this theory we called digital blackfacing. And so it was very much trying to focus on how these Russian trolls impersonated Black Americans and how they sought to do that.

And there's been some work by, you know, the Mueller investigation and others that showed that prior to conducting the 2016 disinformation campaign, Internet Research Agency operatives did travel to the US, spent time with activists, and actually spent time in the US trying to understand these issues, so that they could go back to Russia and then produce, you know, exploitative content about this. And so one of the things we think about when we think about Russians impersonating these identities is, you know, why are they impersonating, especially, minority identities? And what purpose does it achieve for them? Because we've noticed, not just in the Russian disinformation campaign but in other disinformation campaigns, this very intense targeting of ethnic minorities with specific disinformation that is tailored for those communities.

17:06: Stephanie McVicker:

What can these platforms do, and what are they doing, to weed out Russian trolls that are impersonating US citizens?

17:14: Dr. Josephine Lukito:

So, there's a range of things I think that social media platforms are doing. And I will say, compared to other types of disinformation, foreign disinformation is something that they're much more actively involved in trying to remove. When it comes to domestic disinformation, I think this is where Twitter, Facebook, and Reddit sort of lag, but in terms of foreign disinformation, they're the most active in trying to root it out. Of the platforms I've mentioned, I would say Twitter is the most proactive about doing this.

So some of the things that they've done include trying to identify these accounts through a combination of computational strategies. For example, one strategy I used to use a lot is called natural language processing: basically using computational means to try to identify when someone is pretending to be an American but is actually using, for example, Russian grammar. Twitter uses a combination of that as well as network analysis. So if they find one Russian troll, they're basically going to look at that Russian troll's network, the accounts it is associated with, retweeting, or following, and ask what the likelihood is that those accounts are also Russian trolls. And the combination of both, I think, is a very common strategy for places like Twitter and Facebook to try to identify these accounts.

One additional thing that Twitter does that I really like is that they have started the process of including information about when someone is a government official or a world leader or someone working in the government of a specific country. And the reason why I think that's really useful is because, with this sort of propaganda and disinformation, oftentimes it is amplified by other countries' media. So for example, Russian trolls are often embedded in news stories from Russia Today, also known as RT. And so there is this link between what they call black propaganda, or hidden disinformation, and the kind of more public-diplomacy white propaganda, where the news organization is upfront about its association. And so, if we're able to know that an account belongs to a Chinese official, and they retweet something that's coming from a Chinese troll, we can see that link a lot more clearly. And so that labeling is something Twitter has started to implement that I would love to see on other social media platforms.
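The network-analysis idea mentioned above can be sketched very simply: starting from known troll accounts, score other accounts by how much of their interaction neighborhood is already flagged. This is a toy Python illustration with made-up account names and an arbitrary threshold, not any platform's actual detection system.

```python
# Toy sketch of neighborhood-based scoring over a hypothetical interaction graph.
# An edge means "retweeted or follows"; names and threshold are illustrative only.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("known_troll_1", "account_a"),
    ("known_troll_2", "account_a"),
    ("known_troll_1", "account_b"),
    ("regular_user", "account_b"),
    ("regular_user", "account_c"),
])

KNOWN_TROLLS = {"known_troll_1", "known_troll_2"}

def troll_neighbor_share(node: str) -> float:
    """Fraction of a node's neighbors that are already-flagged troll accounts."""
    neighbors = set(G.neighbors(node))
    return len(neighbors & KNOWN_TROLLS) / len(neighbors) if neighbors else 0.0

for account in set(G.nodes) - KNOWN_TROLLS:
    score = troll_neighbor_share(account)
    if score >= 0.5:  # arbitrary threshold for the sketch
        print(f"{account}: {score:.0%} of neighbors are known trolls -> review")
```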

19:46: Stephanie McVicker:

Looks like we're out of time for today. We've been talking to Dr. Josephine Lukito from the University of Texas at Austin School of Journalism and Media. Dr. Lukito is a researcher in cross-platform media language and political communication. Dr. Lukito, it has been really great. Thank you so much for joining us today.

20:03: Dr. Josephine Lukito:

Thank you.
