As any fact-checker worth their salt will tell you, some facts are more difficult to check than others. Verifying simple facts such as dates, distances, and monetary values is straightforward. But what about broad statements, ambiguous claims, or politically biased opinions? In an era of fake news and outright misinformation, separating fact from fiction is harder than ever.
The robots have already risen
As if fake news written by humans wasn’t problematic enough, AI-generated content is increasingly prevalent. In fact, you’ve likely read some without even knowing it. The Washington Post’s robot reporter, Heliograf, has been churning out everything from financial reports to high school football coverage since 2017. The non-profit OpenAI has created an AI writing program that writes so convincingly it may be too dangerous to release.
However, it turns out that the impending robot apocalypse may actually solve more problems than it creates. Researchers from MIT and institutions in Qatar and Bulgaria have developed an AI program that can autonomously fact-check individual claims and evaluate the factuality of entire websites while uncovering their political biases. Their jointly authored paper, ‘Predicting Factuality of Reporting and Bias of News Media Sources’, shows that AI may make it easier for people to avoid fake news.
Wikipedia may save us all
In a recent interview with Popular Science, the paper’s lead author, Ramy Baly, a postdoc at MIT’s Computer Science and Artificial Intelligence Lab, explained that when it comes to spotting fake news, “Wikipedia is very important.” According to Baly, Wikipedia is crucial to identifying inaccurate and biased reporting because critical information is often listed on the Wikipedia page for the news source.
For example, the Wikipedia page for the Drudge Report clearly labels it as a conservative news site. Likewise, the Onion’s Wikipedia page clearly identifies the site as a source of satire. In fact, the very absence of a Wikipedia page is a good indicator that a website lacks reliability and authority.
Training the AI fake news killer
Baly’s team trained their AI system using a range of data from over 1,000 websites, including Wikipedia page content, URLs, Twitter accounts, and articles from the sites themselves. By analyzing up to 150 articles from each website, their system could identify extremely biased websites by examining the language used. Speaking with Popular Science, Baly noted that “Biased websites try to appeal to the emotions of the readers,” to a much greater extent than middle-of-the-road mainstream sites.
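To make that idea concrete, the snippet below is a toy sketch of a language-based signal, not the authors’ actual pipeline: it scores a site by how often its articles use emotionally charged words, averaged over up to 150 articles. The word list, the scoring function, and the threshold are all illustrative assumptions.

```python
# Illustrative only: a crude "emotional language" signal for a news site.
# The word list and threshold are made up for the example, not taken
# from the paper.
EMOTIONAL_WORDS = {
    "outrage", "shocking", "disgrace", "betrayal",
    "disaster", "corrupt", "evil", "scandal",
}

def emotion_score(article: str) -> float:
    """Return the fraction of tokens that are emotionally charged words."""
    tokens = article.lower().split()
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t.strip(".,!?\"'") in EMOTIONAL_WORDS)
    return hits / len(tokens)

def site_bias_signal(articles, threshold=0.02):
    """Average the emotion score over up to 150 articles from one site.

    Returns (average_score, flagged), where flagged is True when the
    site's language leans heavily on emotional appeals.
    """
    sample = articles[:150]  # mirrors the paper's cap of 150 articles per site
    avg = sum(emotion_score(a) for a in sample) / max(len(sample), 1)
    return avg, avg > threshold
```

A real system like Baly’s combines many such signals (article text, Wikipedia content, URL structure, Twitter metadata) in a trained classifier rather than a single hand-set threshold.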
According to the paper, MIT’s AI system achieved a 70 percent success rate at detecting bias and a 65 percent success rate at predicting how factual a website was. Baly told Popular Science that the MIT results indicated a strong correlation between highly biased websites and low levels of factual accuracy.
How to detect bias in news sources yourself
Because Wikipedia is written and policed by human editors, it is one of the best ways of assessing the overall trustworthiness of a website. Any website without its own Wikipedia page should be viewed with suspicion. For instance, last August, Facebook identified Iran-based fake news originating from a site called the Liberty Front Press. Despite its libertarian-sounding name, the site appeared to be pro-Iran and lacked a Wikipedia page. Facebook has created its own list of tips for spotting fake news to help the average reader separate truth from satire or comedy.
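The “does it have a Wikipedia page?” check can even be automated. The sketch below queries the MediaWiki Action API for a title and reports whether an English Wikipedia article exists; the `fetch` parameter is an assumption added here so the lookup can be tested without network access, and the function name is illustrative.

```python
import json
import urllib.parse
import urllib.request

WIKI_API = "https://en.wikipedia.org/w/api.php"

def has_wikipedia_page(site_name, fetch=None):
    """Return True if an English Wikipedia article with this title exists.

    `fetch` takes a URL and returns the response body as a string; by
    default it performs a real HTTP request, but a stub can be injected
    for offline testing.
    """
    if fetch is None:
        def fetch(url):
            with urllib.request.urlopen(url) as resp:
                return resp.read().decode("utf-8")
    params = urllib.parse.urlencode({
        "action": "query",
        "titles": site_name,
        "format": "json",
    })
    data = json.loads(fetch(f"{WIKI_API}?{params}"))
    pages = data["query"]["pages"]
    # The MediaWiki API reports a missing title with a page id of "-1".
    return "-1" not in pages
```

On its own, of course, this is only a weak heuristic: a page existing says nothing about what it contains, so a human still needs to read it.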
The robots are coming … to help
While Baly’s team is currently working on ways to improve the sophistication of their system, recent developments indicate that overall, AI will be more of a help than a hindrance over the longer term. Facebook is using AI to help solve the problem of hate speech in Myanmar, while Google’s Jigsaw unit has built an AI system, Perspective, that can automatically rank readers’ comments according to the toxicity of their content. Initiatives such as Perspective indicate that in the future, fake news, hate speech and other nasties may be automatically filtered out of view by AI.