Nick Davis said he spends “every day, all day” on Twitter and often uses the network to promote his favorable views of President Donald Trump. So, when he received an email in late 2017 saying he interacted with an account potentially connected to the Russian government during the 2016 election, he was not surprised.
“I just kind of ignored the email,” said Davis, a fourth-year in natural resource management. “I don’t put too much weight into that kind of stuff.”
Emails like the one Davis received are part of an ongoing effort by social media companies, at the request of Congress, to address the growing problem of misinformation on their networks.
A majority of U.S. adults — 67 percent — receive some news from social media, according to the Pew Research Center. Gleb Tsipursky, an assistant professor of history at Ohio State, said companies like Facebook and Twitter are responsible for the information shared on their networks.
“Social media tries to get away from its actual role. They should be acting like a news agency,” he said, referring to how most news agencies verify information before publishing it.
Roughly 1.4 million people received the same email from Twitter as Davis, according to the company’s blog post on the matter. The emails were sent to those who had “certain types of interactions” with the nearly 4,000 accounts connected to the Internet Research Agency, a propagandist organization linked to the Russian government, according to Twitter.
Special Counsel Robert Mueller, who is investigating Russia’s interference in the election, recently issued indictments for 12 employees of the agency.
Davis said he doubts he interacted with a propaganda account and likely only saw a tweet from a Russian source, adding it didn’t affect his vote.
Twitter also announced the deletion of more than 50,000 Russian-linked automated accounts — or bots — that were programmed to tweet election material and push inflammatory messages about immigration and the deportation of refugees. Those tweets reached Americans across the country, including members of the Ohio State community.
Following the car-and-knife attack on campus Nov. 28, 2016, Twitter users took to the social network to voice their support for the university community by tweeting the hashtag #PrayForOSU.
While many of the tweets were heartfelt messages of thoughts and prayers, some consisted of racist messages and ideas, including those sent from Russian bots, according to a database published by NBC News.
The users @_nickluna_, @cassieweltch and @thefoundingson represent the tip of an iceberg: more than 200,000 tweets sent from more than 2,500 bots during the 2016 election, according to the database.
“I hope Trump will ban Somali refugees #PrayForOSU,” @thefoundingson tweeted Nov. 28. “#MSM is the #FakeNews WE are the new #Media #PrayForOSU,” @_nickluna_ retweeted the next day.
Tsipursky said it’s good that Twitter is deleting bot accounts, but the lies they spread will persist.
“Even though research suggests that [people’s] opinions are strongly informed by bot accounts, people don’t believe that about themselves, so they will ignore that sort of information,” Tsipursky said.
Davis said that because Twitter is a private company, it is under no obligation to prevent misinformation on its network, adding that if it decides to act, it should only delete bots.
“They shouldn’t be trying to limit the speech of a person,” he said, even if that person is a Russian “who’s trying to sway people.”
However, Tsipursky said preventing the spread of fake news is not a threat to free speech.
“How can we have any sort of society where falsehoods are treated equal to real truths?” he said. “Ideas are different than facts.”
Tsipursky said Twitter has never been proactive about addressing the problem of misinformation on its platform, and researchers know fake messages are a much bigger issue than the company has admitted publicly.
“It’s financially disadvantageous for Twitter to reveal as much of the misinformation on Twitter as is actually happening,” he said.
Facebook has taken a different approach by allowing its users to vote on the trustworthiness of news sources. Tsipursky said popularity is not the ideal way to determine reliability, and that this method will favor clickbait — news articles with exaggerated and sensational headlines.
“[News sources] that appeal to emotions — the ones that are less accurate — will be the ones that are most trusted,” he said.
Facebook’s model will ask users if they’ve heard of a news outlet and how much they trust it. Based on current Facebook likes and follows, Occupy Democrats — an extremely liberal and unreliable news outlet — would be more trusted than The Washington Post. Infowars — a popular website known for pushing conspiracy theories — would be more trusted than The Columbus Dispatch.
The biggest flaw in voting on news content is that people overestimate their own judgment: while the Pew Research Center found 84 percent of Americans report feeling somewhat confident in their ability to recognize fake news, Tsipursky said studies have shown that confidence is misplaced.
“People really are very bad at telling apart misinformation from accurate information on social media,” he said. “Ridiculously bad.”
Davis said the Facebook model will limit the speech of unpopular news sources and make it harder for emerging news companies to enter the market.
A better way to judge the reliability of news sources, Tsipursky said, is by the number of stories retracted or verified by other sources and the number of inaccuracies revealed by independent fact checkers. Facebook has been collaborating with fact-checking organizations to identify fake news, but the company stops short of deleting the stories, he said.
People should not rely on Facebook or Twitter to determine which stories are factual, Davis said.
“It should be the job of people to not be stupid,” he said.
Tsipursky said average people only recently developed a need for media literacy because they once could rely on news organizations to filter out misinformation. Social media users who put more effort into recognizing truthful content, he said, can help keep misinformation from spreading.
“[Social media users] are part of the solution and not part of the problem,” he said.
Summer Cartwright contributed to this article.