With the growing penetration of social and digital media into our everyday lives, keeping online spaces safe and secure is becoming increasingly crucial. Social media companies are making ongoing efforts to prevent the misuse of their platforms for inciting offline violence through threatening or hateful speech, propaganda, or conspiracy theories. This Wednesday, the global social media giant Meta organized an online briefing session for Pakistani journalists to apprise them of its policy on dangerous individuals and organizations (DIOs).
Nawab Osman, Meta’s Head of Counterterrorism and Dangerous Organizations for the Asia-Pacific region, led the briefing ably, drawing on his deep familiarity with Pakistan, its people, and its socio-cultural sensibilities, as well as his rich experience at the core of Meta’s counterterrorism operations.
To start with, Meta does not allow any individual or organization that proclaims a violent mission or engages in violence to use its platforms. Therefore, all terrorist individuals and organizations are banned from having any sort of presence on Facebook, Instagram, Messenger, or other Meta platforms.
Based on their ties to offline violence and its severity, Meta designates dangerous entities in three tiers. The first tier covers individuals and organizations that engage in serious offline harms, including organizing or calling for violence against civilians, repeatedly dehumanizing or calling for harm against people based on protected characteristics, or engaging in systematic criminal operations.
Meta removes praise, substantive support, and representation of Tier 1 entities; of violent events such as terrorist attacks and hate crimes; and of violent ideologies such as Nazism and white supremacy.
Tier 2 covers what Meta refers to as “violent non-state actors,” which engage in violence against state or military actors but do not generally target civilians. The third and final tier covers entities that repeatedly violate Meta’s Hate Speech or Dangerous Organizations policies on or off the platform, or that demonstrate strong intent to engage in offline violence. This includes militarized social movements, violence-inducing conspiracy networks, and individuals and groups banned for promoting hatred.
The social media giant is employing increasingly sophisticated measures to filter out dangerous content and take action against perpetrators. Most objectionable content is flagged by Meta’s artificial intelligence (AI) system, which is constantly evolving to decipher major regional languages, veiled messaging, and text embedded in graphics. Through its AI system, the company is able to filter out 99.7 percent of terrorism-related content and 96.4 percent of organized-hate content. For a more nuanced examination of content, Meta also has a large team of content moderators sifting through the data around the clock.
Meta enforces stringent checks on violent content but does not lose sight of people’s right to freedom of expression, in line with the platform’s raison d’être of connecting people across geographical and ideological boundaries. Since there is no single agreed-upon definition of hate speech, Meta engages subject-matter experts and rights activists from across the globe to help formulate a framework for designating entities as DIOs. The framework is constantly evolving to respond to new challenges that emerge on a daily basis.
The journalists participating in the session were particularly curious about Meta’s position on the Taliban regime in Afghanistan and on Tehreek-e-Labbaik Pakistan (TLP), which was earlier banned by the government of Pakistan only to be unbanned months later.
The Afghan Taliban are a US-designated Foreign Terrorist Organization (FTO), and as a US company, Meta is obligated to designate them as well. At the same time, Meta is committed to preserving the Afghan people’s right to freedom of expression.
Since Meta does not make its designation list public, no clear answer could be given regarding the TLP. However, Meta’s framework can guide our understanding of the now-unbanned religio-political organization, which is often accused of having ties to offline violence.
Despite its unflinching commitment to the safety of its users, Meta is also committed to keeping its platforms open and accessible in keeping with the global right to freedom of speech. It has exceptions in place to encourage constructive discussion, provided the intent is clearly spelled out.
Therefore, Facebook, Instagram, Messenger, and WhatsApp allow news reporting and academic or other discussion of DIOs conducted with positive intent. And although Meta bans attacks on people’s and communities’ protected characteristics, it allows criticism of ideologies. For instance, it disallows writing “Christians are bad,” because that would be an attack on a group of people defined by religious identity; it allows saying “Christianity is bad,” because that would be criticism of an ideology.
Meta also adjusts its actions to the socio-cultural sensitivities of particular markets and removes content that is likely to incite violence. It prioritizes action based on the severity and urgency of the harm, tackling first the content likely to pose imminent harm. Meta has a cross-platform procedure to protect its community of users from harmful content by redirecting search results, in addition to expansive on-the-ground measures to raise awareness and build resilience against violent extremism.