Facebook’s Election Protection Plan for 2020
If the weird and turbulent journey of social media over the last decade has taught us anything, it’s that we probably shouldn’t believe everything we see or read. Social media has a lot to offer, but there’s also plenty of dubious and deeply misguided content out there (we’re looking at you, Farmville!).
For better or worse, in recent years, Facebook has been at the heart of this mis/information maelstrom. And the whirling epicenter of that “infonado” seems to be the ancient human mixed mental martial art of arguing endlessly over politics. Whatever side of the political mini-golf course you choose to putt on, election interference on Facebook is one of the hot topics of our age.
In October of 2019, Facebook announced significant planned changes to improve how people share political information on social media. Here’s a breakdown of a few of the social media giant’s plans to bring more accuracy and transparency to political messaging on its platform.
Stronger Security for Campaign Workers
First, Facebook plans to equip political campaign workers with more rigorous security options. Hacking and identity theft are everywhere these days, and people working on political campaigns are an obvious target.
A campaign worker’s stolen account could easily derail a political campaign in any number of ways. Of course, that would be disastrous for the politician involved, but it’s also bad news for the political process in general.
Recognizing the increased risk, Facebook will offer additional security features to campaign workers’ accounts, including added password and identity verification requirements, two-factor authentication, and complete account lockdown after too many failed password attempts.
Political Advertisement Controls
In the past, political ads on Facebook were not vetted. All anyone needed to send out political campaign messaging — no matter how inaccurate or destructive — was a Facebook account and enough cash to run a campaign.
Facebook has announced the end of that era. Future political messaging will be vetted for accuracy, with inaccurate information grayed out unless the reader consciously opts in to the content by clicking a visibility button. Moreover, all advertisements will clearly display not just who manages the ad page, but also who owns it.
It’s a tricky line to walk — avoiding outright censorship while ensuring biases are clearly revealed and keeping false information at least partially in check.
Is it perfect? No. But we’ll come right out and say that almost anything is an improvement over the old free-for-all.
A Ban on Blatant Misinformation
Then we hit the political content where Facebook is drawing a hard line. Anything objectively untrue about voting practices (when to vote, what identity documents are required, who is running, and the like) will be banned outright. Facebook has also pledged a strong commitment to proactively removing coordinated efforts by hostile foreign networks to derail the domestic political process.
Facebook’s take here is that this kind of misinformation obstructs the basic mechanics of the democratic process and should be removed quickly and comprehensively.
Too Far? Not Far Enough?
Is this too close to censorship? Or are Facebook’s new initiatives merely lip service, not enough to address the proliferation of false and misleading information on social media? People are going to be butting heads over these kinds of questions for some time yet.
It’s still not perfect, but hopefully Facebook’s new policies are at least a step in the right direction.