21 March 2018

Only common sense can stop fake news

By Lexi Peery

Fake news is the term everyone uses, but no one really knows what to do about it. Collins Dictionary defines fake news as “false, often sensational, information disseminated under the guise of news reporting”; for the purposes of this article, it can be distinguished from the political tactic of branding unappealing, but true, reporting as fake (as exemplified by @realdonaldtrump). Fake news has become more than a common phrase in the vernacular; it is a serious problem for which politicians and media companies have, so far, found only flimsy, Band-Aid solutions.

Several countries have recently made desperate attempts to combat fake news. French President Emmanuel Macron announced in January that he would make fake news illegal, but only during election campaigns, which raises the question of who is supposed to find and stop it, and whether their findings will always be impartial. The notorious Filipino President Rodrigo Duterte claims to be fighting fake news, but in reality he is pressuring established media companies to fit his own deadly agenda. Germany now requires Facebook and other platforms to take down hate speech within 24 hours of posting, a rule free speech advocates in the country find questionable. Even the UK has announced a parliamentary inquiry into the spread of fake news, although no clear action has followed.

Recent reporting reveals that some tech companies are part of the problem when it comes to spreading fake news. Cambridge Analytica, a political data analysis company, has come under fire in recent days for allegedly harvesting data from 50 million Facebook accounts to spread false information around elections worldwide. Facebook has been drawn into the row, even though it insists there has been no data breach.

The Facebook/Cambridge Analytica mess may primarily be about data and privacy, but according to former Cambridge Analytica employee Christopher Wylie, the company “absolutely” planted fake news. With each negative headline, Facebook and other companies pour more time and resources into strategies to fight fake news. But the real issue can’t be solved by increasing censorship on social media platforms capable of disseminating fake news to millions in moments. Rather, more responsibility needs to fall on users to discern what is or is not false information, and to decide what to do with it.

It can seem like a never-ending battle: political institutions and social media platforms create new measures to prevent the spread of fake news online, and in response, trolls, bots, hackers and anyone else purposefully producing it tweak their posting strategies just enough to avoid detection. Even with these platforms’ flagging systems, there are thousands of posts for employees and bots to sift through (and that is an imperfect system as well).

In a study published in Science, researchers found that bots share real and fake news at the same rate. But true social media posts took almost six times longer than false ones to reach 1,500 people. The reality is that we are part of the problem, and it’s time people took responsibility for spreading and believing fake news. Not all the blame can be placed on social media platforms for letting the news be shared; it’s not their job to think critically for people.

Granted, social media companies could be doing more to combat fake news, especially when it affects children who have just gone through a traumatic experience (like the Parkland or Sandy Hook school shootings). It’s disgraceful that a conspiracy theory video claiming Parkland survivors were crisis actors was one of the top trending videos on YouTube just days after the shooting.

The reality, though, is that fake news has been around forever, and will continue to exist long after the social media platforms we use today join MySpace and Vine on the dusty shelves. What’s needed isn’t updated flagging systems or fake-news-detecting bots, but people who can stop, think about what they hear and decide for themselves whether it’s true. Once society realises that people, not some ambiguous Russian bots, are primarily responsible for its spread, we can start making practical strides towards fixing the internet’s problem with fake news.

Such strides could include media literacy classes, and teaching people that Wikipedia isn’t the best source for facts (I’m looking at you, YouTube). In fact, just recently the EU’s grandly named High-Level Expert Group came out with a 42-page report offering five tangible ideas to try to solve the problem, declaring that it “recommends a ‘multi-dimensional’ approach to tackling online disinformation, over the short and long term — including emphasising the importance of media literacy and education and advocating for support for traditional media industries; at the same time as warning over censorship risks and calling for more research to underpin strategies that could help combat the problem.”

For now, social media platforms and politicians, under public pressure, will come up with ideas and campaigns to fight fake news. The reality, however, is that media literacy and user accountability could be the only long-term solution.

Lexi Peery is an editorial intern at CapX and former editor-in-chief of Boston University's Daily Free Press.