Mark Zuckerberg — one of the most insightful, adept leaders in the business world — has a problem. It’s a problem he has been slow to acknowledge, even though it’s become more apparent by the day.
Several current and former Facebook employees tell NPR there is a lot of internal turmoil about how the platform does and doesn’t censor content that users find offensive. And outside Facebook, the public is regularly confounded by the company’s decisions — around controversial posts and around fake news.
How can we fight back against the fake news infecting our information feeds and political systems? New research suggests that education and filtering technology might not be enough: The very nature of social media networks could be making us peculiarly vulnerable.
Six years ago, Google faced a problem a lot like the one Facebook faces today: The web was being flooded with “webspam,” web pages with little useful content that were created solely to manipulate Google’s algorithm in order to generate traffic and ad revenue. Google’s successful response to that crisis tells us a lot about how Facebook can deal with today’s fake news epidemic.
A funny thing happened in August: Facebook fired its human trending-news curators and replaced them with an algorithm. Almost instantly, the social network was awash in false news stories that many users were treating as credible and sharing on their timelines. The 2016 election, polarizing as it was, fed the fake-news beast.
Facebook is acknowledging that governments and other malicious actors are using its social network to influence political sentiment in ways that could affect national elections.
It’s a long way from CEO Mark Zuckerberg’s assertion back in November that it was “pretty crazy” to think that false news on Facebook influenced the U.S. presidential election. It’s also a major sign that the world’s biggest social network is continuing to grapple with its outsized role in how the world communicates, for better or for worse.
How Are They Handling It?
Facebook thinks it has figured out how to stop the spread of fake news: It’s going to ask journalists to tell it when something’s fake, and then it will ask users not to share it.
The social network laid out its plan in a blog post today, following weeks of criticism for its role in spreading intentionally deceptive stories during the 2016 election.
The next battleground in Facebook’s war against fake news is the link previews on shared posts from publishers.
Product manager for news Alex Hardiman announced in a blog post that non-publisher pages can no longer overwrite link metadata — such as headlines, descriptions, and images — in the Graph API or page composer.
Misleading or bogus news that was politically driven got a lot of attention this election season, but Facebook’s response also highlighted another, longstanding problem: fake news sites that masquerade as legitimate news outlets. And it’s questionable how effective its response will be.
Facebook is giving fact-checking organizations a kind of power they’ve never had before: the power to publicly brand other websites’ stories as “disputed” and push them down in Facebook users’ newsfeeds. Facebook’s new fact-checking system is going to subject these organizations — some of them quite small — to an unprecedented amount of public scrutiny.
Following pressure from users, the social network introduced tools to stem the spread of false information. But the rollout has been rocky at best.
New tools and policies take on the News Feed’s worst offenders. But our truth problems are bigger than Facebook.