AI goes to war on fake news

Posted on July 6, 2018 by staff
Artificial intelligence is being used to make sure adverts don’t appear next to malicious or fake content on websites.

Last year, a number of big-name companies started to pull their ad spend from tech giants including Google, Facebook, and YouTube after their ads appeared next to inappropriate content on extremist sites.

London-based start-up Factmata is developing new AI software that will weed out fake news and reduce the revenue of malicious websites that produce it.

“We’re a company that is focused on finding misinformation on the web, especially problematic content related to things like fake news, hyper-partisanship (extreme political bias) and hate speech,” the firm’s chief revenue officer Anant Joshi explained.

When a business invests in online display advertising, it can't specify exactly which websites its ads will appear on. Factmata's AI aims to help businesses avoid placements on sites that could damage their brand.
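Factmata hasn't published how clients plug this in, but conceptually such a filter amounts to a pre-bid check against a per-site score. A minimal sketch, assuming a hypothetical score_site() service and an arbitrary threshold, might look like this:

```python
# A hypothetical pre-bid brand-safety check. score_site() stands in for
# whatever service would return a site's credibility score (0-100 here);
# the names and the 70-point threshold are illustrative assumptions.

SAFE_THRESHOLD = 70

def score_site(domain: str) -> float:
    # Placeholder: in practice this would call a scoring service.
    known_scores = {"reputable-news.example": 92.0, "extremist-blog.example": 12.0}
    return known_scores.get(domain, 50.0)  # unknown sites get a neutral score

def should_place_ad(domain: str) -> bool:
    # Only buy inventory on sites above the safety threshold.
    return score_site(domain) >= SAFE_THRESHOLD

for site in ("reputable-news.example", "extremist-blog.example"):
    print(site, "->", "place ad" if should_place_ad(site) else "skip")
```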

Traditionally this has been done by 'blacklisting' keywords, but Joshi explains that the approach is often inaccurate.

“The traditional way of looking at data would be only keywords, but ‘black and white’ might bring up a list of black and white photos – rather than it being about race,” he said.

“We have machine learning algorithms that look at the text on pages and analyse that text based on datasets which we have already created. They look for instances of that sort of content on that page.”
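As a rough illustration of the contrast Joshi describes, the sketch below compares a naive keyword blacklist with a small text classifier trained on labelled examples, along the lines of the datasets he mentions. It uses scikit-learn with an invented toy dataset; it is a sketch of the general technique, not Factmata's system.

```python
# Naive keyword blacklisting vs. a classifier trained on labelled pages.
# The blacklist term, training texts and labels are purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

BLACKLIST = {"black and white"}

def blacklist_flag(text: str) -> bool:
    # Naive approach: any match is flagged, regardless of context.
    return any(term in text.lower() for term in BLACKLIST)

# A photography page is a false positive for the keyword approach.
print(blacklist_flag("A gallery of black and white photography"))  # True

# Context-aware alternative: learn from labelled example pages instead.
train_texts = [
    "A gallery of black and white photography from the 1950s",
    "Tips for shooting black and white portraits",
    "Hateful rant targeting an ethnic group",          # illustrative label
    "Conspiracy story spreading a fabricated claim",   # illustrative label
]
train_labels = [0, 0, 1, 1]  # 0 = safe, 1 = problematic

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_texts, train_labels)

page = "A gallery of black and white photos"
print(clf.predict_proba([page])[0][1])  # probability the page is problematic
```

Trained on enough labelled pages, a model like this can weigh context, photography versus slurs, in a way a keyword list never can.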

The company’s ‘credibility engine’ is capable of detecting hyper-partisan content, hate speech, clickbait and fake news.

Instead of basing the popularity of content on social shares, Factmata wants it to be based on a credibility score.

As the solution gathers information about the language and content that a website uses, it can begin to build a picture of each website’s credibility.

This is then expressed as a score, which the company shares with its clients to ensure adverts sit alongside only unbiased and well-researched news.
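Factmata has not published how that score is computed, but a minimal sketch of the general idea, rolling per-page classifier outputs up into a site-level figure, might look like this; the signal names, weights and 0-100 scale are all assumptions for illustration.

```python
# Hypothetical roll-up of per-page signals into a site credibility score.
# The four signals mirror the content types the article names; the
# weights are invented, not Factmata's published method.
from statistics import mean

# Per-page probabilities (0..1) from classifiers like the one sketched above.
pages = [
    {"hate_speech": 0.02, "clickbait": 0.10, "hyper_partisan": 0.05, "fake_news": 0.01},
    {"hate_speech": 0.01, "clickbait": 0.70, "hyper_partisan": 0.40, "fake_news": 0.20},
]

WEIGHTS = {"hate_speech": 0.35, "clickbait": 0.15, "hyper_partisan": 0.20, "fake_news": 0.30}

def page_risk(page: dict) -> float:
    # Weighted blend of the classifier outputs for one page.
    return sum(WEIGHTS[k] * v for k, v in page.items())

def site_credibility(pages: list) -> float:
    # Credibility = 1 - average page risk, scaled to 0..100 for reporting.
    return round((1 - mean(page_risk(p) for p in pages)) * 100, 1)

print(site_credibility(pages))  # the score shared with clients, out of 100
```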

Joshi acknowledges that ‘bias’ can be difficult to define, even for humans and particularly for AI.

To combine human judgement with AI, the company has created a product called ‘Briefr’, which it hopes will make the process more academic by bringing external experts and journalists into it.

Briefr, which is still in beta testing, will invite journalists and academics to make notes on websites next to a spurious fact or biased opinion.

The company’s AI will then use those notes to better assess the content on its own, without the need for an expert to fact-check every story.

“The AI is trained to learn what the experts think,” says Joshi. “Journalists can sign up to Briefr, and instead of them sharing news articles based on likes, it’s based on the credibility and trustworthiness of that data.”
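Factmata hasn't detailed how Briefr's notes feed the model, but the general pattern Joshi describes, expert annotations becoming training labels, could be sketched like this; the field names and toy examples are hypothetical.

```python
# A hedged sketch of the Briefr idea as described: expert notes attached
# to articles become labels for retraining a classifier, so the model
# "learns what the experts think". All data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

annotations = [
    {"article": "Study shows X cures Y overnight", "note": "spurious fact", "credible": 0},
    {"article": "Minister confirms budget figures", "note": "well sourced", "credible": 1},
    {"article": "They are ALL lying to you!!!", "note": "biased opinion", "credible": 0},
    {"article": "Report cites three independent audits", "note": "well sourced", "credible": 1},
]

texts = [a["article"] for a in annotations]
labels = [a["credible"] for a in annotations]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)  # the model now reflects the experts' judgements

# New articles can then be scored without an expert fact-checking each one.
print(model.predict_proba(["Shock claim: Z banned everywhere"])[0][1])  # P(credible)
```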

As the product advances, Joshi sees a future in which it is used beyond the advertising and media industries.

“Our AI could be used for government or finance,” he said. “For example, if a hedge fund wanted to look at fake news about companies spreading on Twitter before basing a trading decision on it.

“There could also be implications going forward for health data, for example.”

Whilst the company currently analyses text, it is also looking to apply the same assessment to video content.

“We’re starting some proof-of-concept work in that field, because so much video content is being produced but there are a lot of areas which are causing problems for the larger brands,” he said.