Technology has taken much of the blame for the apparent rise in fake news, but UK startup Logically believes that tech can also mitigate the damage.

The platform ingests news from across the political spectrum of the mainstream media and uses machine learning to identify bias and misinformation.

"There are things that indicate whether an article is less likely to be credible for any reason, be it the source where it appears or the content of the article itself," Logically's founder Lyric Jain told Techworld.

"We have mechanisms in place to try and detect why these might not be credible, and on the bias front, we have ways of checking when commentary is being preferential to one side or the other.

"Those are the two main things that we can do and we apply these in a consumer-media platform and also in building tools for journalists and news organisations."

A history of reinterpreting reality

Fake news may have only recently become a buzzword, but the issue is as old as news itself.

Ever since town criers made public announcements in the streets, the creators and disseminators of information have manoeuvred the public to support their views through biased interpretations, if not outright lies.

What has changed since then is the technology that generates news stories and then disseminates them across digital channels through targeted influence campaigns.

While legendary newspaper tycoon William Randolph Hearst relied on the printing press and postman to gain support for the Spanish-American War in 1898, today's purveyors of misinformation have a suite of powerful yet affordable digital tools at their disposal.

They include data analytics that can identify the content that causes the maximum impact, psychometric analysis to target its most susceptible consumers, and bots that can disseminate it across a range of social media platforms.

This creates what Jain calls an "artificially engineered serendipity", in which news that appears to surface by chance in fact emerges due to a confirmation bias that narrows the perspective of views and sources.

Jain first observed the effects during the heady days of the build-up to the Brexit vote, when media biases and echo chambers had a crucial impact on public opinion.

"I noticed a lot of my friends were on one side of the debate and they used to consume information from certain sources, and the others on the other side had completely different talking points altogether," he says.

When his studies for a master's degree in engineering took him from the University of Cambridge to MIT for an exchange year, Jain was exposed to the machine learning technology that could turn his budding idea into a product.

"I was exposed to these models that could quite readily be applied with some modification to the news context," he says. "That's when I thought I could do something about it using technology, and when I think the company actually became viable."

He decided to recruit a team that could build and test these models and turn them into a news verification product.

How Logically identifies misinformation

The platform that they developed incorporates a variety of mechanisms that assess the credibility and bias of news content and its source.

Logically combines machine learning models with human oversight to analyse three aspects of a story: its metadata, network dissemination and the text of the article.

It can analyse an article’s metadata to identify how quickly it appeared after the story broke, how frequently the article is used as a reference to the story, and the domain expertise held by the author or the publisher.
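How such metadata signals might feed a credibility feature can be sketched in a few lines. This is a minimal illustration, not Logically's actual model: the field names, saturation points and weights are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class ArticleMetadata:
    minutes_after_break: float   # how soon the article appeared after the story broke
    reference_count: int         # how often other coverage cites it for the story
    domain_expertise: float      # 0.0-1.0 estimate for the author or publisher

def metadata_score(meta: ArticleMetadata) -> float:
    """Combine the three metadata signals into a single 0-1 feature."""
    # An article published within minutes may be rushed; steadier timing and
    # more references both push the score upward (weights are invented).
    timeliness = min(meta.minutes_after_break / 60.0, 1.0)  # saturate at one hour
    authority = min(meta.reference_count / 10.0, 1.0)       # saturate at ten references
    return 0.25 * timeliness + 0.35 * authority + 0.4 * meta.domain_expertise
```

In practice a machine learning model would learn such weightings from labelled examples rather than have them hand-set.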

The platform also reviews how a story spreads.

"There are certain people on Twitter who are more susceptible to sharing misinformation than other people," says Jain.

“We're able to find out who those people are likely to be. If those people tend to be sharing a particular piece of information more than others, it's a signal that indicates that information is unlikely to be credible.”
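The dissemination signal Jain describes could be approximated as the average misinformation-susceptibility of a story's sharers. A minimal sketch, assuming a hypothetical pre-computed susceptibility score per account:

```python
def dissemination_risk(sharers, susceptibility):
    """Average misinformation-susceptibility of the accounts sharing a story.

    `susceptibility` maps a user id to the (hypothetical, pre-computed) share
    of that user's past posts that were later debunked; unseen users score 0.
    A high value suggests the story itself is unlikely to be credible.
    """
    if not sharers:
        return 0.0
    return sum(susceptibility.get(user, 0.0) for user in sharers) / len(sharers)
```

A real system would weight accounts by reach and look at the shape of the sharing cascade, not just a flat average.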

To detect the bias and credibility of the text itself, Logically runs it through a logical fallacy detector that identifies whether an argument is missing crucial details, or contains information it deems a distraction.
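The two checks mentioned, missing support and distraction, can be illustrated with a crude rule-based pass. The cue phrases below are invented for the example; Logically's detector is a trained model, not a keyword list.

```python
DISTRACTION_MARKERS = [  # hypothetical cue phrases for "whataboutism"-style distraction
    "but what about", "the real question is", "meanwhile, nobody mentions",
]
EVIDENCE_MARKERS = [  # hypothetical cues that a claim is being supported
    "according to", "data from", "study", "said", "reported",
]

def fallacy_flags(text: str) -> list:
    """Very rough rule-based pass: flag missing evidence and distraction cues."""
    t = text.lower()
    flags = []
    if not any(marker in t for marker in EVIDENCE_MARKERS):
        flags.append("no_supporting_evidence")
    if any(marker in t for marker in DISTRACTION_MARKERS):
        flags.append("possible_distraction")
    return flags
```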

It checks the veracity of the underlying story through a combination of automated and manual fact checking using existing fact-checking organisations and an internal team of human fact-checkers.

This allows the platform to cover both the claims that have already been assessed and to flag new ones that contradict information in its knowledge database.
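Matching an incoming claim against already-assessed fact-checks might look something like the sketch below, which uses simple string similarity; the `FACT_CHECKS` store and the threshold are assumptions for illustration, and unmatched claims fall through to human reviewers.

```python
from difflib import SequenceMatcher

FACT_CHECKS = {  # hypothetical verdicts from existing fact-checking organisations
    "the earth is flat": "false",
    "the uk voted to leave the eu in 2016": "true",
}

def lookup_claim(claim: str, threshold: float = 0.8):
    """Return the verdict for the closest known claim, or None if unseen."""
    claim = claim.lower()
    best_verdict, best_score = None, 0.0
    for known, verdict in FACT_CHECKS.items():
        score = SequenceMatcher(None, claim, known).ratio()
        if score > best_score:
            best_verdict, best_score = verdict, score
    # Below the threshold the claim is treated as new and routed to humans.
    return best_verdict if best_score >= threshold else None
```

A production pipeline would use semantic embeddings rather than character-level similarity, so paraphrased claims still match.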

It uses natural language processing and machine learning to analyse the veracity of an article, with an extra layer of human infrastructure to refine the accuracy of the models and provide expert annotations on any given topic.

The concept is to integrate human oversight within artificial intelligence to maximise the value of the technology’s capabilities.

News is assigned a credibility score and ranked on the site accordingly.
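Folding the three analysed aspects into one score and ranking by it could be sketched as follows. The weights are invented, and the network signal counts against credibility since it measures likely misinformation spread:

```python
def credibility(metadata_s: float, network_risk: float, text_s: float) -> float:
    """Combine per-aspect signals into one 0-1 score (weights are invented)."""
    return 0.35 * metadata_s + 0.35 * (1.0 - network_risk) + 0.3 * text_s

def rank_stories(stories):
    """Return stories sorted so the most credible appear at the top."""
    return sorted(stories, key=lambda story: story["score"], reverse=True)
```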

"Credible information appears up top," explains Jain. "All the facts appear and quotes that people have made appear at the top of any story, and then different perspectives from across the political spectrum appear lower down, juxtaposed next to each other so users can compare what either side is saying.

"There's also going to be an element where if they find a piece of information not on our website, they can simply enter the link or the text to it and we'd comment on the veracity of the commentary that they found."

Can governments fight the fake news of the future?

Another way to tackle the issue is through regulation. Governments around the world are currently developing publishing standards that platforms such as Facebook and Twitter must uphold.

Any regulation will need to balance standards for truth against freedom of speech, and to ensure that dissenting voices aren't censored by platforms acting, for political reasons, out of fear of government fines. The success of these efforts can only be evaluated after the next series of major elections.

"There is a risk that the whole situation might become increasingly politicised and it needs to be something that's built out cautiously, but it also needs to be something that's acted on fairly quickly,” says Jain.

Emerging technologies such as automated video lip-syncing and AI-powered face-swapping will pose even greater threats in future elections.

"They use similar sorts of technology that Hollywood studios spend millions on in trying to develop special effects, but the proliferation of this technology and the ease of building it means that anyone can do it from just their room," says Jain.

"Right now, it's more like an arms race, with the people who want to promote misinformation on one side and the people who want to detect it on the other side.

"Those people who want to detect it are equipped with better resources and that's allowing them to detect it for now, but it's about trying to keep this side well-funded and equipped with skills so that this trend carries on."