"Ever since the World Health Organization (WHO) declared COVID-19 a global public health emergency, we’ve been working to connect people to accurate information, and taking aggressive steps to stop misinformation and harmful content from spreading," Facebook.
Since early January, social media has been flooded with false and misleading posts about the coronavirus, leaving many users in our chronic disease community wondering how to verify information. As the Director-General of the World Health Organization succinctly stated, “We’re not just fighting an epidemic; we’re fighting an infodemic. Fake news spreads faster and more easily than this virus and is just as dangerous.”
To combat the “infodemic,” major social media outlets have deployed legions of digital sleuths and data-mining algorithms to detect, flag and remove posts that contain misinformation, including inaccurate claims about cures, treatments, health directives, research and more.
The confusion over what is true and what is not has prompted many in the chronic disease community to ask which information on social media is accurate, credible and reliable.
Curtis Warfield, CDC Patient Ambassador and leading social media advocate on kidney disease, believes social media allows advocates to educate and bring awareness to chronic diseases. But with the flood of coronavirus information in the last few weeks, Warfield worries about the credibility of some of what he is reading.
“Now that information about COVID-19 is flowing more freely,” asks Warfield, “how and where should a person go to find proper, authentic and reliable information they can trust?”
To help answer that question, we consulted Veryan Khan, a leading expert in accessing and verifying social media sources. Khan leads a private sector firm that monitors and analyzes hundreds of social media posts each day to help law enforcement around the world identify malicious online activity.
For Khan, being an online sleuth begins with a fundamental step: looking at the URL included in a post. Before reposting or retweeting, Khan urges our community to make a basic determination: “Do you know the source, either by reputation or by a history that establishes a standard of credibility?”
“It sounds basic, but you wouldn’t believe how many people don’t check the source. Anytime I am in doubt, I always Google the topic the social media item is about. If there is no historical trail, or the claim has never been reported by credible media, then there’s a high likelihood it is not true. But even if it has been reported, assess the credibility of those original sources,” says Khan.
That is easy if you see a source like The New York Times, the Associated Press, or another major media company. But what if you don’t recognize the source and can’t trace an online history through Google or other search platforms?
Sophisticated machine learning can be used to identify “bots,” short for “robot.” Bot-run accounts use software that allows for autonomous (and anonymous) posts, which often seek to influence a conversation or topic through fake personas.
But without machine-learning tools, Khan and others use good old-fashioned investigative techniques that begin with looking at profiles. On Twitter, it’s as easy as clicking on the profile photo, which leads to a short bio page.
“Twitter makes some of that legwork easier with the blue checks for verified accounts,” said Khan. “But absent that verification, one of the places I look on Twitter and Instagram is the number of followers and the date the account joined. If a profile has 9K followers but is a relatively new account, or simply doesn’t have any biographical information, stranger danger.”
That’s because, as anyone in our community on social media can attest, it’s not easy gaining followers. Usually, it takes years on social media to reach 9,000 followers. Experts warn that an account with a large number of followers gained over a short period of time could indicate the use of bots or other online manipulation.
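To make that heuristic concrete, here is a minimal sketch of the follower-growth check. The numbers stand in for what you would read off a public profile (follower count and join date), and the notion of what counts as "too fast" is an assumption for illustration, not a fixed rule used by Khan or any platform.

```python
from datetime import date

def followers_per_day(followers: int, joined: date, today: date) -> float:
    """Rough growth rate: followers gained per day since the account joined."""
    age_days = max((today - joined).days, 1)  # guard against brand-new accounts
    return followers / age_days

# Hypothetical example: a profile showing 9,000 followers that only joined in January 2020
rate = followers_per_day(9_000, joined=date(2020, 1, 10), today=date(2020, 4, 10))
print(f"{rate:.0f} followers gained per day")  # ~99 per day, far faster than typical organic growth
```

A long-established account that reached the same count over several years would score a fraction of a follower per day, which is why the combination of a high count and a recent join date stands out.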
But Khan also warns that just because someone has a lot of followers and has been around a long time doesn’t mean they aren’t trying to spread disinformation. “I’ve seen bad actors posing as legitimate sources that spread misinformation about the virus, its treatment and possible cures,” Khan said.
The bottom line is that fake personas can often be spotted with a little investigation. For example, a doctor in Cleveland writing about a chronic disease or the virus will have a track record. Their posts will also pass a simple language and grammar test; suspect accounts are often written by non-native English speakers, and misspellings, incorrect verb tenses, and confusing syntax will often point to an online poser.
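As a toy illustration of that kind of language check (not the method Khan's firm uses), the sketch below scores a post by the share of words missing from a small, assumed wordlist. A real check would use a full dictionary or a spell-checking library rather than this tiny set.

```python
import re

# Tiny illustrative wordlist; in practice you would load a complete dictionary
KNOWN_WORDS = {"the", "doctor", "doctors", "recommend", "recommends",
               "patients", "drink", "water", "plenty", "of"}

def misspelling_ratio(text: str) -> float:
    """Fraction of words in the text that are not in the wordlist."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    unknown = sum(1 for word in words if word not in KNOWN_WORDS)
    return unknown / len(words)

print(misspelling_ratio("The doctor recommends patients drink plenty of water"))  # 0.0
print(misspelling_ratio("Doctr recomends pacients drink watter"))                 # high ratio
```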
Experts also caution that, before reposting online content, a user should always verify that any URL within the post links to a legitimate source. All too often, a sensible-sounding post may point to completely bogus “news sites” or other contrived sources, such as non-existent government agencies. Usually, these types of sites provide misinformation meant to confuse or divide readers for propaganda purposes.
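One lightweight way to apply that advice is to compare a link's domain against a short list of outlets you already trust. The sketch below shows the idea; the TRUSTED_DOMAINS set and the example URLs are assumptions for illustration, and the second URL is deliberately fake.

```python
from urllib.parse import urlparse

# Hypothetical allow-list; build and maintain your own based on sources you trust
TRUSTED_DOMAINS = {"who.int", "cdc.gov", "nytimes.com", "apnews.com"}

def domain_of(url: str) -> str:
    """Extract the host from a URL, dropping any 'www.' prefix."""
    host = urlparse(url).netloc.lower()
    return host[4:] if host.startswith("www.") else host

def is_recognized_source(url: str) -> bool:
    """True only if the link's domain (or a subdomain of it) is on the trusted list."""
    host = domain_of(url)
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

print(is_recognized_source("https://www.cdc.gov/coronavirus/2019-ncov/index.html"))  # True
print(is_recognized_source("http://real-cures-now.example/miracle-treatment"))       # False
```

An unrecognized domain isn't proof of misinformation, but it is a signal to dig further before sharing.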
Of course, these practices take a little more time than mindlessly scrolling through content and reposting something simply because you like the text.
But for patient advocates like Curtis Warfield, the extra work is worth it. As he states, “This is important to me personally and as an advocate. Because I can't help directly during COVID-19 due to being high risk myself, I can help by sharing good, reliable information with other transplant recipients, kidney patients and people with other chronic diseases.”