On Jan. 7th, ADL chief Jonathan Greenblatt appeared before the Knesset's Committee for Immigration, Absorption and Diaspora Affairs to encourage further censorship initiatives that the Israeli government could take on.
The Apartheid Defense League (ADL) is an anti-Palestinian hate group whose primary focus is right-wing, pro-Israel advocacy.
Greenblatt pushed for the State of Israel to consider the fight against antisemitism online and around the world to be another front (alongside Gaza, Lebanon, West Bank, Yemen, Syria, Iraq and Iran) that the country must contend with.
“Capturing TikTok might seem less meaningful than holding on to Mount Hermon. Libelous tweets certainly might seem less deadly than missiles from Yemen. But this is urgent because the next war will be decided based on how Israel and its allies perform online as much as offline. Make no mistake, it’s real,” he said.
eJP - In the Knesset, ADL chief admits failure to extinguish the post-Oct. 7 ‘inferno of antisemitism,’ calls for new strategies
https://ejewishphilanthropy.com/in-the-knesset-adl-chief-admits-group-has-failed-to-combat-antisemitism-calls-for-new-strategies/
Of course, Greenblatt had no advice on encouraging censorship on Facebook or X because both of those institutions are already anti-Palestinian.
After Elon Musk boosted antisemitic conspiracies on X and denounced the ADL, he had to launder his reputation with corporate America. So, he suddenly became a pro-Israel fanatic and began regurgitating hasbara BS.
Relatively recently, Wikipedia deemed the ADL an unreliable source on the Israel/Palestine issue.
Democracy Now! - Wikipedia Declares ADL an “Unreliable” Source on the Israel-Palestine Conflict
https://www.democracynow.org/2024/6/21/headlines/wikipedia_declares_adl_an_unreliable_source_on_the_israel_palestine_conflict
Edit wars over this issue have been going on for as long as Wikipedia has existed. Pro-Israel editors have long coordinated their edits to present a pro-Israel narrative.
Some pro-Palestine editors may have begun doing the same thing - but recently, many such editors were banned primarily for ETIQUETTE-related reasons, like being 'impolite' to a pro-Israel editor.
For this, the ADL recently celebrated:
https://x.com/ADL/status/1880363032179798081
The ADL and other pro-Israel groups are likely putting pressure on Wiki to take these actions. No such pressure has been applied to the many sock-puppet pro-Israel accounts that regularly edit-war on Wiki over this issue.
RE: META
META has long censored and penalized pro-Palestine content and content critical of Israel. In fact, META discriminates against Palestinians and their supporters so badly that an audit commissioned by the company itself concluded that its actions harmed Palestinians' human rights.
Based on the data reviewed, examination of individual cases and related materials, and external stakeholder engagement, Meta’s actions in May 2021 appear to have had an adverse human rights impact (as defined in footnote 3) on the rights of Palestinian users to freedom of expression, freedom of assembly, political participation, and non-discrimination, and therefore on the ability of Palestinians to share information and insights about their experiences as they occurred. This was reflected in conversations with affected stakeholders, many of whom shared with BSR their view that Meta appears to be another powerful entity repressing their voice that they are helpless to change.
The Independent - Facebook and Instagram ‘violated Palestinian users’ rights by taking down posts, according to its own report
https://www.independent.co.uk/tech/meta-facebook-instagram-palestinian-rights-b2173756.html
HRW even issued a follow-up report about META, concluding:
Human Rights Watch found that the censorship of content related to Palestine on Instagram and Facebook is systemic and global. Meta’s inconsistent enforcement of its own policies led to the erroneous removal of content about Palestine. While this appears to be the biggest wave of suppression of content about Palestine to date, Meta, the parent company of Facebook and Instagram, has a well-documented record of overbroad crackdowns on content related to Palestine. For years, Meta has apologized for such overreach and promised to address it. In this context, Human Rights Watch found Meta’s behavior fails to meet its human rights due diligence responsibilities. Despite the censorship documented in this report, Meta allows a significant amount of pro-Palestinian expression and denunciations of Israeli government policies. This does not, however, excuse its undue restrictions on peaceful content in support of Palestine and Palestinians, which is contrary to the universal rights to freedom of expression and access to information.
Human Rights Watch - Meta’s Broken Promises
https://www.hrw.org/report/2023/12/21/metas-broken-promises/systemic-censorship-palestine-content-instagram-and
The Guardian - Meta censors pro-Palestinian views on a global scale, report claims
https://www.theguardian.com/technology/2023/dec/21/meta-facebook-instagram-pro-palestine-censorship-human-rights-watch-report
META also implemented an internal policy called 'community engagement expectations' (CEE), which bans political discussions among employees; violations can hurt performance reviews and sometimes lead to termination:
[...] Under the CEE, which launched in 2022, discussions of armed conflict, war, or other political matters are banned inside the company; violations can hurt performance reviews and, in some cases, lead to termination.
One former Meta employee believes the CEE was largely a reaction to the Black Lives Matter protests and the 2020 US presidential election. “Before that, you could have a profile photo that says ‘Make America Great Again,’” the source says. But the turmoil such political expressions caused within the company led Meta to crack down.
“Whether it is an opinion or a fact shared, at our size—and again, with the diversity of our population—an opposing or seemingly neutral comment could trigger feelings of contention, sadness, anger, distraction and even grief,” Williams, the diversity chief, wrote in her message last month. Williams reminded workers that they are welcome to speak up externally, including on Meta apps such as Instagram or Threads.
WIRED - How Watermelon Cupcakes Kicked Off an Internal Storm at Meta
https://www.wired.com/story/meta-palestine-employees-watermelon-cupcakes-censorship/
META is also implicated in a WhatsApp vulnerability that may have allowed the IDF to further target Palestinians.
Against the backdrop of the ongoing war on Gaza, the threat warning raised a disturbing possibility among some employees of Meta. WhatsApp personnel have speculated Israel might be exploiting this vulnerability as part of its program to monitor Palestinians at a time when digital surveillance is helping decide who to kill across the Gaza Strip, four employees told The Intercept.
The Intercept
https://theintercept.com/2024/05/22/whatsapp-security-vulnerability-meta-israel-palestine/
This is all by design, since prominent positions at META are held by Likud appointees like former Netanyahu advisor Jordana Cutler, and former (and fired) Israeli government officials like Emi Palmor.
When META employees petitioned company execs about the findings of the BSR report, they were instead censored under the CEE policy.
Human Rights Watch’s allegations validated concerns of the Palestinian Working Group, a years-old intranet forum at Meta, consisting of more than 200 staff, for discussing issues faced by Palestinian users, according to one of the sources. The group in December organized an internal letter to top executives with a list of demands aimed at making services such as Facebook and Instagram safe and fair for Palestinian supporters.
The letter drew over 450 signatures in a few hours before workers say the internal community relations team took down the petition and deleted emails soliciting support from workers’ inboxes. Akhter, one of the organizers, had her “system access” disabled for three months, she would later say on Instagram. She said the petition had been flagged for violating Meta’s community engagement expectations, or CEE.
WIRED - How Watermelon Cupcakes Kicked Off an Internal Storm at Meta
https://www.wired.com/story/meta-palestine-employees-watermelon-cupcakes-censorship/
RE: X/Twitter limiting the reach of pro-Palestine content
In the following video (not sure of the time-stamp), Elon Musk assures right-wing supporters of Israel that he will 'limit the reach' of pro-Palestine content.
The Forward
https://forward.com/culture/562411/elon-musk-ben-shapiro-twitter-antisemitism-jewish/
Facebook built two versions of a fix for clickbait this year, and decided to trust algorithmic machine learning detection instead of only user behavior [...]
Medical reports filed with the employment and labour relations court in Nairobi and seen by the Guardian paint a horrific picture of working life inside the Meta-contracted facility, where workers were fed a constant stream of images to check in a cold warehouse-like space, under bright lights and with their working activity monitored to the minute.
the Responsible AI team’s work—whatever its merits on the specific problem of tackling AI bias—is essentially irrelevant to fixing the bigger problems of misinformation, extremism, and political polarization
The models that maximize engagement also favor controversy, misinformation, and extremism: put simply, people just like outrageous stuff. Sometimes this inflames existing political tensions.
In 2017, Chris Cox, Facebook’s longtime chief product officer, formed a new task force to understand whether maximizing user engagement on Facebook was contributing to political polarization. It found that there was indeed a correlation, and that reducing polarization would mean taking a hit on engagement. In a mid-2018 document reviewed by the Journal, the task force proposed several potential fixes, such as tweaking the recommendation algorithms to suggest a more diverse range of groups for people to join. But it acknowledged that some of the ideas were “antigrowth.” Most of the proposals didn’t move forward, and the task force disbanded.
Since then, other employees have corroborated these findings. A former Facebook AI researcher who joined in 2018 says he and his team conducted “study after study” confirming the same basic idea: models that maximize engagement increase polarization. They could easily track how strongly users agreed or disagreed on different issues, what content they liked to engage with, and how their stances changed as a result. Regardless of the issue, the models learned to feed users increasingly extreme viewpoints. “Over time they measurably become more polarized,” he says.
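The feedback loop those researchers describe (an engagement-maximizing ranker favoring extreme content, and user stances drifting toward whatever they are served) can be sketched as a toy simulation. Every functional form and parameter below is an assumption made for illustration, not Facebook's actual model:

```python
import random

random.seed(0)

def engagement(user_stance, item_stance):
    # Toy assumption: engagement rises with an item's extremity and with
    # its alignment to the user's current stance (people engage with
    # agreeable, outrageous content). Stances live in [-1, 1].
    extremity = abs(item_stance)
    alignment = 1.0 - abs(user_stance - item_stance) / 2.0
    return extremity * alignment

def recommend(user_stance, catalog):
    # Engagement-maximizing ranker: always serve the item with the
    # highest predicted engagement for this user.
    return max(catalog, key=lambda item: engagement(user_stance, item))

def simulate(steps=200, n_users=50, drift=0.05):
    catalog = [i / 10 - 1 for i in range(21)]            # item stances -1.0 .. 1.0
    users = [random.uniform(-0.2, 0.2) for _ in range(n_users)]  # start near center

    def polarization(stances):
        # Polarization measured as variance of user stances.
        mean = sum(stances) / len(stances)
        return sum((s - mean) ** 2 for s in stances) / len(stances)

    before = polarization(users)
    for _ in range(steps):
        # Each user's stance drifts a little toward the content served.
        users = [
            max(-1.0, min(1.0, u + drift * (recommend(u, catalog) - u)))
            for u in users
        ]
    return before, polarization(users)

before, after = simulate()
print(before, after)  # variance grows: users drift toward the poles
```

Because extremity dominates the engagement score, the ranker always serves the pole nearest each user, and the population splits toward -1 and +1, so measured polarization rises, which is the qualitative dynamic the quoted researchers report.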
But anything that reduced engagement, even for reasons such as not exacerbating someone’s depression, led to a lot of hemming and hawing among leadership. With their performance reviews and salaries tied to the successful completion of projects, employees quickly learned to drop those that received pushback and continue working on those dictated from the top down.
But narrowing SAIL’s focus to algorithmic fairness would sideline all Facebook’s other long-standing algorithmic problems. Its content-recommendation models would continue pushing posts, news, and groups to users in an effort to maximize engagement, rewarding extremist content and contributing to increasingly fractured political discourse.
Zuckerberg even admitted this. Two months after the meeting with Quiñonero, in a public note outlining Facebook’s plans for content moderation, he illustrated the harmful effects of the company’s engagement strategy with a simplified chart. It showed that the more likely a post is to violate Facebook’s community standards, the more user engagement it receives, because the algorithms that maximize engagement reward inflammatory content.