In the tumultuous landscape of the Israeli-Palestinian conflict, media outlets often find themselves accused of exhibiting bias and of heavily moderating content posted in support of the Palestinian people, potentially encroaching on users' free speech. In this essay I will delve into the heart of this claim and assess its legitimacy by examining platform mechanics and the dynamics of censorship, publics and moderation. As the conflict continues, companies navigate complex terrain where such accusations echo loudly. Coming to a conclusion necessitates a nuanced investigation of the accused platforms' mechanisms and moderation techniques.

With the rise of social media activism and internet journalism, a trend of self-censoring to evade automated moderation has taken hold on platforms such as Twitter, Instagram and TikTok, where users replace letters or coin new words when discussing controversial topics. For example, the term "suicide" becomes "unaliving yourself" and "sex" becomes "seggs". This practice originated on apps such as Instagram, Twitter and Facebook, where users discovered that certain words, if posted, would trigger an automated moderation response.
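The logic users are working around can be pictured as a simple blocklist filter. The following is a deliberately simplified sketch, not any platform's actual system; the terms and function names are hypothetical, chosen only to show why respelling a word evades exact-match moderation.

```python
# Hypothetical, simplified keyword-based moderation filter.
# Real platform systems are far more sophisticated; this only
# illustrates why algospeak respellings evade exact matching.
BLOCKLIST = {"suicide", "sex"}

def is_flagged(post: str) -> bool:
    """Flag a post if any blocklisted term appears as a word."""
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(is_flagged("resources on suicide prevention"))  # True: exact match caught
print(is_flagged("resources on unaliving yourself"))  # False: algospeak slips through
```

Because the filter compares whole words against a fixed list, any respelling ("seggs", "P*les+in1ans") falls outside the list and passes untouched, which is precisely the loophole algospeak exploits.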

“There’s an algospeak term for Palestinians as well: “P*les+in1ans.” Its very existence speaks to a concern among many people posting and sharing pro-Palestine content during the war between Hamas and Israel that their posts are being unfairly suppressed. Some users believe their accounts, along with certain hashtags, have been “shadowbanned” as a result,” wrote A.W. Ohlheiser in a Vox article.

As with "suicide" and "sex," users resort to linguistic workarounds to avoid content removal; the existence of an algospeak term for "Palestinians" itself points to a perceived suppression of pro-Palestine content.

This raises the question: are media companies targeting Palestinian posts, prompting the algospeak term? To answer this, one must analyse the concept of platform mechanics to understand how social media algorithms work.

We will explore two mechanisms of platform mechanics in this essay; the first is datafication.

Datafication is the process of transforming interactions and activities into digital data. In the context of social media, it involves collecting and analysing user-generated content to inform algorithms and influence what appears on users' feeds.
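To make the idea concrete, here is a minimal sketch of what datafication looks like in practice: an ordinary interaction captured as a structured record that downstream ranking and moderation systems can analyse. All field names and values are invented for illustration.

```python
# Hypothetical sketch of datafication: one user interaction becomes
# a structured datapoint. Field names are invented for illustration.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class InteractionEvent:
    user_id: str
    action: str        # e.g. "like", "share", "view"
    post_id: str
    hashtags: list
    timestamp: str

event = InteractionEvent(
    user_id="u123",
    action="share",
    post_id="p456",
    hashtags=["#example"],
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(asdict(event))  # one datapoint among billions feeding the algorithms
```

Multiplied across billions of such records, this is the raw material from which feeds are ranked and moderation decisions are automated.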

Chapter 2 of 'The Platform Society: Public Values in a Connective World' discusses a controversy from 2011 in which Twitter was accused of suppressing the Occupy protests on its platform, specifically the hashtags #OccupyWallStreet and #OccupyBoston, which protestors expected to see trending on Twitter given the movement's momentum and the high volume of tweets.

However, it was later revealed that the intricate way Twitter algorithmically organises content means that whether a topic trends depends on the increase in its usage, not simply the frequency with which it is used. Only a dramatic spike will place a term in Twitter's trending space. Understanding the intricacies of algorithms has been of utmost importance during the conflict between Israel and Palestine for this reason: there have been many cases of social media activists accusing platforms such as Instagram and Facebook of having discriminatory algorithms, which we will expand on later.
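The distinction between raw frequency and increase in usage can be sketched in a few lines. This is an illustrative toy, assuming a made-up spike threshold, and is not Twitter's actual trending algorithm; it only demonstrates why a heavily but steadily used hashtag can fail to trend.

```python
# Toy sketch of velocity-based trending: a topic trends on its *increase*
# in usage, not its raw volume. The spike threshold and counts are
# invented for illustration; this is not Twitter's real algorithm.
def is_trending(prev_hour_count: int, curr_hour_count: int,
                spike_factor: float = 3.0) -> bool:
    """Trend only if usage spiked sharply relative to the previous hour."""
    if prev_hour_count == 0:
        return curr_hour_count > 0
    return curr_hour_count / prev_hour_count >= spike_factor

# A hashtag used heavily but steadily never trends...
print(is_trending(prev_hour_count=50_000, curr_hour_count=55_000))  # False
# ...while a smaller tag that suddenly spikes does.
print(is_trending(prev_hour_count=200, curr_hour_count=4_000))      # True
```

Under this logic, #OccupyWallStreet could attract an enormous but steady stream of tweets and still never appear in the trending space, which is exactly the misunderstanding at the heart of the 2011 controversy.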

It is also vital to understand that it is through datafication that moderation (another concept we will be discussing) becomes possible. Datafication empowers moderation efforts by providing the technological infrastructure needed to analyse, interpret, and act upon user-generated data.

Moderation is the process of overseeing and regulating content on social media. It involves setting and enforcing guidelines about what is and is not acceptable on a site. Moderation is a key tool of online governance, used to address issues such as hate speech and harassment, among other content deemed inappropriate.

Alongside the earlier controversy, in which Occupy protesters claimed Twitter was purposefully suppressing their hashtags when in fact they had misunderstood how the algorithm works, there are other instances where the answer to such claims is not as straightforward.

Throughout May 2021, Palestinians, mainly from the West Bank and Gaza, who live under Israeli occupation in violation of international law, documented Israeli aggression against Palestinians across mainstream social media platforms, particularly through Instagram live streams. At the time, the Israeli military was attempting to forcibly displace Palestinians living in the Sheikh Jarrah neighbourhood of East Jerusalem, displacement backed by Israeli court decisions.

This eventually led to raids and attacks on hundreds of unarmed worshippers praying at Al-Aqsa Mosque, a sacred Islamic site, during Ramadan, the holiest month of the year for Muslims around the world. Facebook and Instagram users in Palestine streaming the raid on Instagram Live allowed an otherwise oblivious audience around the world to witness the crimes committed by Israel first-hand.

This increase in attention and awareness helped foment a shift in the narrative pushed in the media, from the perspective of the occupiers to that of the occupied, which generated more support for the Palestinian cause than ever before. The change did not last long, however: the videos were swiftly met with moderation by Meta, the parent company of both Facebook and Instagram, headed by Mark Zuckerberg. Many pro-Palestine influencers were reportedly threatened with account deletion. Since these events in 2021, content in solidarity with the Palestinian movement has often been removed across social media without explanation.

This raises many ethical questions about moderation and free speech, and about how media corporations such as Facebook deploy platform mechanics such as datafication and selection (the other mechanism of platform mechanics, which we will delve into shortly).

Three years prior to these events, in July 2018, the front cover of the New York Daily News carried the headline "Holocaust Deniers Deserve a Voice." The statement paraphrased remarks by Facebook's founder and CEO, owner of Instagram and later creator of Meta, Mark Zuckerberg, in an interview; Jonathan Greenblatt of the Anti-Defamation League responded.

Mark Zuckerberg stirred considerable controversy with this interview after people questioned his methods of moderation, or lack thereof. An absurd antisemitic conspiracy theory claiming the Holocaust was a hoax went viral in 2018-2019, and attention turned to Zuckerberg, as the owner of these platforms, for not moderating such offensive content.

“As abhorrent as the content can be, I do think it comes down to the principle of giving people a voice” said Zuckerberg, “I disapprove of what you say, but I will defend to the death your right to say it.” 

This directly contradicts the moderation imposed on pro-Palestinian voices and their treatment on his platforms. Zuckerberg claims on one hand to be on an eternal mission to preserve the value of free speech, while on the other he goes out of his way to suppress and moderate Palestinian voices.

From January to June 2020, Facebook received 913 requests from the Israeli government to remove or restrict content on its platform, and consented to 81 percent of them. In addition, the Israeli Cyber Unit bypasses the legal procedure of obtaining a court order to take down online content, instead making appeals directly to the platforms based on their own terms of service; a 2018 study revealed an extremely high compliance rate with these voluntary requests, 90 percent across all platforms. There is a clear inconsistency in what content is moderated and what is allowed, lending validity to the claim that media corporations hold a bias, specifically concerning the Israeli-Palestinian conflict.

In this interview Zuckerberg claimed Facebook aims only to take down "misinformation that is aimed at or [is] going to induce violence," to which Greenblatt replied that antisemitism to the extent of Holocaust denial is always produced with violent intent. If the threshold for removal is merely inducing violence, how does that explain the copious amounts of Palestinian-related content that has been and continues to be removed from these platforms? What makes content in solidarity with the Palestinian cause "violent"?

"There are," he further explained, "two core principles at play here. There's giving people a voice.… Then there's keeping the community safe." Evidently the community Zuckerberg aims to keep "safe" does not include Palestinians.

This is also an issue of selection, the second mechanism of platform mechanics mentioned previously, which we turn to now.

Selection refers to the process by which certain online content is either promoted or suppressed based on user engagement and algorithms, among other factors. This mechanism plays a crucial role in shaping narratives, and its effects are evident throughout the media world. We have already seen examples that demonstrate the power of selection, specifically Facebook and Instagram's effect on perceptions of the Israeli-Palestinian conflict through their selection processes.
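The mechanics of selection can be illustrated with a toy ranking function. This is a hedged sketch with invented weights and post names, not any platform's real feed algorithm; it only shows how a quiet visibility multiplier can make two posts with identical engagement meet very different fates.

```python
# Toy sketch of engagement-weighted selection. Weights, posts and the
# visibility multiplier are invented for illustration; real feed-ranking
# systems are proprietary and far more complex.
def score(likes: int, shares: int, visibility_weight: float = 1.0) -> float:
    """Score a post by engagement, scaled by an opaque visibility weight."""
    return (likes + 2 * shares) * visibility_weight

posts = [
    ("post A", score(likes=100, shares=50)),                          # normal visibility
    ("post B", score(likes=100, shares=50, visibility_weight=0.1)),   # down-weighted
]
feed = sorted(posts, key=lambda p: p[1], reverse=True)
print(feed[0][0])  # "post A" surfaces despite identical engagement
```

The point is that the down-weighted post never "disappears" in any visible way; it is simply outranked everywhere, which from the user's side is indistinguishable from a shadowban.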

José van Dijck et al. write in their book 'The Platform Society' that "social media virality can transform a small protest into a national movement, whereas its invisibility condemns it to obscurity." This is especially true in the Palestinian case presented here, and people like Mark Zuckerberg understand it well, which is why meticulous selection procedures on his platforms ensure that what gains traction, and what is kept in the dark, aligns with their financial interests.

Major media corporations have the power to influence entire societies and their perceptions of other peoples through selection. It was not until the viral 2021 Instagram live streams of the raids on Al-Aqsa Mosque, mentioned previously, that the Palestinian movement became mainstream and gained traction.

In conclusion, it is easy to single out Mark Zuckerberg and criticise his actions; however, this goes beyond any one person. It is a testament to the dangerous era of news and journalism we are in today, where a small number of people own the majority of media outlets and control the narrative. The claim that media corporations hold a bias in the Israeli-Palestinian conflict is at this point less a claim than an observation, and an accurate one, as I hope this essay has made clear through the concepts of datafication and selection in platform mechanics and the moderation of content. It is undeniable that since the founding of Israel as a state, there has been some form of prejudice in how both peoples are presented in the media.
