
Social media tech companies are finally beginning to step up in the fight against fake news. But how do we make sure they are doing enough to combat its spread? We take a look at a couple of possible solutions.

Posted on Mon 13th Aug, 2018 in: Advice, Influence, Public Relations, Social Media by Tim Downs

Social media tech companies are finally beginning to step up in the fight against fake news. Last week saw the likes of Facebook, YouTube, Apple and Spotify remove content by American conspiracy theorist Alex Jones and his Infowars brand.

It’s the latest, and one of the most significant, developments to date in the debate around fake news and the manipulation of audiences on social media that has been raging since the 2016 U.S. election and the EU referendum in the UK.

It’s also, once again, raised the issue of tech companies and their role in regulating online content.

The move by these tech giants comes amidst mounting calls for legislation that would make them liable for content, particularly content that is knowingly misleading, illegal, or that promotes extremism or hatred.

Some commentators are trying to place this issue in the context of a broader debate on freedom of speech, but this is really a massive red herring, pushed by those who generally hold the views, or produce the content, most likely to be affected by the clampdown.

The real crux of this is that, up until now, these tech companies have spent millions of dollars, pounds and euros trying to maintain their status as ‘hosts’ of online content rather than publishers, a status that means they are not legally responsible for the content that is shared.

This means they are not subject to the same controls as publishers and media outlets. The effect has been a free-for-all approach to what content can be shared and, perhaps more importantly, promoted.

Up to this point they have all self-regulated, but the global focus on this issue has forced many of them, most prominently Facebook, to develop new tools to increase transparency and highlight paid political or issues-based content.

Undoubtedly the pressure is beginning to have an effect. To stay with Infowars as the example: not only has some of its content been removed and its channels terminated by the tech companies, but Infowars appears to be starting to remove some of its own posts. Twitter was the only major platform that didn’t remove its content, yet some of Infowars’ more extreme tweets have still been disappearing, and Twitter has since confirmed the deletions were made by someone with access to the account. So perhaps Infowars is being forced to think about what it puts out and what might contravene Twitter’s guidelines, for fear of losing another media outlet.

This hopefully points towards an improvement in the current situation, but if we want to see a dramatic shift then perhaps we should look at the real motivation for the tech companies cleaning up their act, because it’s not for a better, fairer world.

Let’s just say that in 2017 Facebook made $39.9 billion of its $40 billion revenue from advertising. So if legislation is passed that classifies these tech companies as publishers, or makes them liable for content, they would be responsible for checking that every single post adheres to the letter of the law in each of the countries they operate in, and it would become much easier for people to sue them for libel, costing them billions.

Now, the idea of the tech companies manually checking every public or shared post clearly seems impractical. So what could be done?

One suggestion is that they become liable for content when the original producer of that content cannot be tracked down. In other words, if the platform fails to establish the true identity of the person posting the content, the platform becomes responsible for it.

A second option is that they become responsible, and liable, for any content they earn advertising revenue from.

Let’s face it, these companies employ moderators to assess content before it goes live, they create tools to make it easier to create and publish campaigns, and they sell advertising space based on content – they are publishers!

Given that it is boosted, promoted and sponsored posts that carry the greatest influence – they allow access to specific audiences, deliver greater reach, increase engagement and drive shares – those posts should be made the tech providers’ legal responsibility.

It would need to be monitored by a body with the teeth to enforce it, but most countries, including the UK, already have such bodies in place to monitor existing media. The financial penalties would need to be significant, and the internal costs of implementation would not be small for the tech companies.

But let’s face it, they can afford it, as so far they have been given a licence to print money.
