Facebook Takes Action To Limit Spread of Propaganda

Strategic Security

Facebook CEO Mark Zuckerberg at an appearance in 2012. Flickr Photo by JD Lasica

Government exploitation of Facebook to spread propaganda is causing the social media titan to change its security posture to limit the practice, the company announced in a whitepaper.

On Thursday, Facebook published Information Operations and Facebook to address the growing role it plays in facilitating civic discourse and the changes it's making to detect and respond to information operations—actions taken by organized actors, such as governments, to distort domestic or foreign political sentiment in pursuit of a strategic or geopolitical outcome. This is done by spreading false news and disinformation, or by using networks of fake accounts (false amplifiers) to manipulate public opinion.

“In brief, we have had to expand our security focus from traditional abusive behavior, such as account hacking, malware, spam and financial scams, to include more subtle and insidious forms of misuse, including attempts to manipulate civic discourse and deceive people,” the whitepaper said. “These are complicated issues and our responses will constantly evolve, but we wanted to be transparent about our approach.”
Facebook is taking this step because it’s observed three major features of online information operations on its platform: targeted data collection, content creation, and false amplification. 

  • Targeted data collection: Stealing, and often exposing, non-public information to create opportunities for controlling public discourse.
  • Content creation: False or real, either directly by the information operator or by seeding stories to journalists and other third-parties, such as through fake online personas.
  • False amplification: Coordinated activity by inauthentic accounts with the intent of manipulating political discussion.

Targeted Data Collection
During the past few years, Facebook said, it has seen an increase in malicious actors targeting individuals' personal email and social media accounts to steal information.

“While recent information operations utilized stolen data taken from individuals’ personal email accounts and organizations’ networks, we are also mindful that any person’s Facebook account could also become the target of malicious actors,” the whitepaper explained. “Without adequate defenses in place, malicious actors who were able to gain access to Facebook user account data could potentially access sensitive information that might help them more effectively target spear phishing campaigns or otherwise advance harmful information operations.”

To prevent targeted data collection, Facebook is providing security and privacy features to users, including two-factor authentication. It's also notifying individuals it knows have been targeted by sophisticated attackers, proactively warning people it thinks might be targeted by malicious actors in the future, communicating directly with likely targets, and working with government bodies responsible for election protection to notify and educate users who might be at risk.
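Two-factor authentication of the kind Facebook offers typically rests on time-based one-time passwords (TOTP, RFC 6238): the server and the user's authenticator app share a secret, and both derive a short code from it that changes every 30 seconds. A minimal sketch of that derivation (this is the standard algorithm, not Facebook's internal implementation) looks like:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, period=30):
    """Derive a time-based one-time password per RFC 6238.

    secret_b32 is the base32-encoded shared secret, the same string
    a user would type into an authenticator app."""
    key = base64.b32decode(secret_b32, casefold=True)
    # Count how many 30-second periods have elapsed since the Unix epoch.
    now = at_time if at_time is not None else time.time()
    counter = struct.pack(">Q", int(now // period))
    # HMAC the counter with the shared secret (HOTP core, RFC 4226).
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation: pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on both the secret and the current time, a stolen password alone is not enough to take over an account, which is exactly the targeted-data-collection scenario the whitepaper describes.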

False Amplifiers
False amplifiers are motivated by ideological, rather than financial, incentives. Sometimes their goal is to push a specific narrative, but other times their true motivations are more complex and can involve promoting or denigrating a specific cause or issue, sowing distrust in political institutions, or spreading confusion.

“There is some public discussion of false amplifiers being solely driven by ‘social bots,’ which suggests automation,” Facebook said. “In the case of Facebook, we have observed that most false amplification in the context of information operations is not driven by automated processes, but by coordinated people who are dedicated to operating inauthentic accounts.”

To tackle this problem, Facebook is increasing its protections against manually created fake accounts and using analytic techniques—including machine learning—to find and disrupt abuse. It’s also enhancing its capability to respond to reports of abuse, to detect and remove spam, to identify and eliminate fake accounts, and to prevent accounts from being compromised. Additionally, Facebook is improving its ability to recognize inauthentic accounts by identifying patterns of activity.

“For example, our systems may detect repeated posting of the same content, or aberrations in the volume of content creation,” the whitepaper explained. “In France, for example, as of April 13, these improvements recently enabled us to take action against over 30,000 fake accounts.”
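The two signals the whitepaper names, repeated posting of the same content and aberrations in posting volume, can be illustrated with a toy heuristic. The function and thresholds below are hypothetical and far simpler than the machine-learning systems Facebook describes:

```python
from collections import Counter

def flag_suspicious_accounts(posts, dup_threshold=5, volume_threshold=50):
    """Flag accounts that repeatedly post identical content or post at
    abnormally high volume. `posts` is a list of (account_id, text)
    tuples; both thresholds are illustrative, not real product values."""
    posts_per_account = Counter()        # total posts by each account
    identical_posts = Counter()          # repeats of the same text by one account
    for account, text in posts:
        posts_per_account[account] += 1
        identical_posts[(account, text.strip().lower())] += 1

    flagged = set()
    # Signal 1: repeated posting of the same content.
    for (account, _), count in identical_posts.items():
        if count >= dup_threshold:
            flagged.add(account)
    # Signal 2: aberration in the volume of content creation.
    for account, count in posts_per_account.items():
        if count >= volume_threshold:
            flagged.add(account)
    return flagged
```

A real system would compare each account against baseline behavior over time rather than fixed thresholds, but the sketch shows why these patterns are detectable without reading or judging the content itself.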