Meta’s Ongoing Efforts Regarding the Israel-Hamas War

Like many, we were shocked and horrified by the brutal terrorist attacks by Hamas, and our thoughts go out to civilians who are suffering in Israel and Gaza as the violence continues to unfold.

Since Hamas’s terrorist attacks on Israel on Saturday, and throughout Israel’s response in Gaza, expert teams from across our company have been working around the clock to monitor our platforms, while protecting people’s ability to use our apps to shed light on important developments happening on the ground. The following are some of the specific steps we have taken:

Taking Action on Violating Content

  • We quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation in real time. This allows us to remove content that violates our Community Standards or Community Guidelines more quickly, and serves as another line of defense against misinformation.
  • We continue to enforce our policies around Dangerous Organizations and Individuals, Violent and Graphic Content, Hate Speech, Violence and Incitement, Bullying and Harassment, and Coordinating Harm:
    • In the three days following October 7, we removed or marked as disturbing more than 795,000 pieces of content for violating these policies in Hebrew and Arabic.
    • In the three days following October 7, we removed seven times as many pieces of content per day for violating our Dangerous Organizations and Individuals policy in Hebrew and Arabic alone, as compared to the two months prior.
  • Hamas is designated by the US government as both a Foreign Terrorist Organization and a Specially Designated Global Terrorist. It is also designated under Meta’s Dangerous Organizations and Individuals policy. This means Hamas is banned from our platforms, and we remove praise and substantive support of the organization when we become aware of it, while continuing to allow social and political discourse, such as news reporting, human rights-related issues, or academic, neutral and condemning discussion.
  • We want to reiterate that our policies are designed to give everyone a voice while keeping people safe on our apps. We apply these policies regardless of who is posting or their personal beliefs, and it is never our intention to suppress a particular community or point of view. Given the higher volumes of content being reported to us, we know content that doesn’t violate our policies may be removed in error. To mitigate this, for some violations we are temporarily removing content without strikes, meaning these content removals won’t cause accounts to be disabled. We also continue to provide tools for users to appeal our decisions if they think we made a mistake.
  • Our teams are monitoring the situation and in some cases temporarily introducing limited, proportionate, and time-bound measures to address specific, emerging risks:
    • Stronger steps to avoid recommending potentially violating and borderline content: We already use technology to avoid recommending potentially violating and borderline content across Facebook, Instagram and Threads. We’re further reducing the possibility of this happening by lowering the threshold at which our technology takes action against this type of content (a simplified sketch of this threshold change follows this list). We’re also taking steps to reduce the visibility of potentially offensive comments under posts on Facebook and Instagram.
    • Violence and Incitement: In order to prioritize the safety of Israelis kidnapped by Hamas, we are temporarily expanding our Violence and Incitement policy and removing content that clearly identifies hostages when we’re made aware of it, even if it’s being done to condemn or raise awareness of their situation. We are allowing content with blurred images of the victims but, in line with standards established by the Geneva Convention, we will prioritize the safety and privacy of kidnapping victims if we are unsure or unable to make a clear assessment.
    • Hashtag Blocking: In line with our rules, we have restricted a number of Instagram hashtags after our team assessed that content associated with those hashtags was consistently found to be violating our Community Guidelines. This means that the hashtags are not searchable – but the content itself won’t be removed unless it violates our policies.
    • Facebook and Instagram Live: We recognize that the immediacy of Live brings unique challenges, so we have restrictions in place on the use of Live for people who have previously violated certain policies. We’re prioritizing livestream reports related to this crisis, above and beyond our existing prioritization of Live videos. We’re also aware of Hamas’ threats to broadcast footage of the hostages and we’re taking these threats extremely seriously. Our teams are monitoring this closely and would swiftly remove any such content (and the accounts behind it), banking the content in our systems so that copies can be matched and blocked from being re-shared (a simplified hash-banking sketch follows this list).
  • We’re getting feedback from local partners on emerging risks and moving quickly to address them. We recognize that local context and language-specific expertise is essential for this work, so we will remain in close communication with experts, partner institutions and non-governmental organizations.
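
To make the recommendation change above concrete, here is a minimal sketch of lowering a recommendation-eligibility threshold during a crisis. The classifier score, the threshold values, and the function names are hypothetical assumptions; Meta has not published the actual signals or numbers it uses.

```python
# A minimal, hypothetical sketch of lowering a recommendation-eligibility
# threshold during a crisis. The classifier, score values, and thresholds
# are illustrative assumptions, not Meta's actual systems or numbers.

NORMAL_THRESHOLD = 0.90  # assumption: act only on high-confidence borderline scores
CRISIS_THRESHOLD = 0.70  # assumption: act on lower-confidence scores during the crisis

def eligible_for_recommendation(borderline_score: float, crisis_mode: bool) -> bool:
    """Return True if a post may appear in recommendation surfaces.

    `borderline_score` is a hypothetical classifier's estimate (0.0 to 1.0)
    that a post is borderline or potentially violating. Lowering the
    threshold excludes more uncertain content from recommendations while
    leaving it visible to the poster's existing followers.
    """
    threshold = CRISIS_THRESHOLD if crisis_mode else NORMAL_THRESHOLD
    return borderline_score < threshold

# A post scoring 0.8 is recommendable under normal settings,
# but excluded while the temporary crisis measures are active.
print(eligible_for_recommendation(0.8, crisis_mode=False))  # True
print(eligible_for_recommendation(0.8, crisis_mode=True))   # False
```

The design point is that lowering the threshold removes more uncertain content from recommendation surfaces without removing it from the platform outright.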
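
The “banking” described in the Live bullet can be illustrated with a hash-matching sketch. Production systems use perceptual hashes (Meta has open-sourced PDQ for images and TMK+PDQF for video) so that near-duplicates also match; the plain SHA-256 below only catches byte-identical copies and is used purely for illustration.

```python
# A minimal sketch of "banking" removed media so re-uploads can be matched
# and blocked. SHA-256 is illustrative only; real systems use perceptual
# hashing so that edited or re-encoded copies still match.
import hashlib

hash_bank: set[str] = set()

def bank_removed_media(media_bytes: bytes) -> None:
    """Store a fingerprint of removed media in the bank."""
    hash_bank.add(hashlib.sha256(media_bytes).hexdigest())

def is_banked(media_bytes: bytes) -> bool:
    """Check a new upload against the bank before it is distributed."""
    return hashlib.sha256(media_bytes).hexdigest() in hash_bank

# Once violating footage is banked, an identical re-upload matches.
bank_removed_media(b"<removed violating video bytes>")
print(is_banked(b"<removed violating video bytes>"))  # True
```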

Safety and Security

  • Coordinated Inauthentic Behavior (CIB): Our teams have detected and taken down a cluster of activity linked to a covert influence operation we removed and attributed to Hamas in 2021. These fake accounts attempted to re-establish their presence on our platforms. We continue to stay vigilant and take action against violating adversarial behavior in the region.
  • Memorialization: We memorialize accounts when we receive a request from a friend or family member of someone who has passed away, to provide a space for people to pay their respects, share memories and support each other. When accounts are memorialized, no one (except Legacy Contacts on Facebook) can log in or make any changes to existing posts or settings. We also add the word ‘Remembering’ next to the person’s name, and don’t recommend the account to new people. Our Community Standards apply to memorialized accounts and we remove violating comments left under posts on these accounts whenever we become aware of them.  

Reducing the Spread of Misinformation

  • We’re working with third-party fact-checkers in the region to debunk false claims. Meta has the largest third-party fact-checking network of any platform, with coverage in both Arabic and Hebrew through AFP, Reuters and Fatabyyano. When they rate something as false, we move that content lower in Feed so fewer people see it.
  • We recognize the importance of speed in moments like this, so we’ve made it easier for fact-checkers to find and rate content related to the war, using keyword detection to group related content in one place (a simplified sketch of this review-and-demotion flow follows this list).
  • We’re also giving people more information to decide what to read, trust, and share by adding warning labels on content rated false by third-party fact-checkers and applying labels to state-controlled media publishers. 
  • We also limit message forwarding and label messages that didn’t originate with the sender, so people know the information came from a third party (a simplified sketch follows this list).
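
As a rough illustration of the fact-checking flow described above, the sketch below groups war-related posts into a review queue by keyword and, once a post is rated false, demotes it in Feed and attaches a warning label. The keyword list, demotion multiplier, label text, and data model are all hypothetical.

```python
# A minimal, hypothetical sketch of the fact-checking flow: keyword detection
# groups war-related posts into one review queue, and a "false" rating
# demotes the post in Feed and attaches a warning label.
from dataclasses import dataclass

WAR_KEYWORDS = {"gaza", "hamas", "hostage"}  # assumption: illustrative keyword list
FALSE_RATING_DEMOTION = 0.2                  # assumption: ranking multiplier

@dataclass
class Post:
    text: str
    ranking_score: float = 1.0
    warning_label: str | None = None

def route_to_review_queue(posts: list[Post]) -> list[Post]:
    """Group posts matching war-related keywords for fact-checker review."""
    return [p for p in posts if WAR_KEYWORDS & set(p.text.lower().split())]

def apply_false_rating(post: Post) -> None:
    """Demote a post rated false and label it so fewer people see it."""
    post.ranking_score *= FALSE_RATING_DEMOTION
    post.warning_label = "Rated false by independent fact-checkers"

queue = route_to_review_queue([Post("unverified hostage claim"), Post("a recipe post")])
apply_false_rating(queue[0])
print(queue[0].ranking_score, queue[0].warning_label)  # 0.2, labeled
```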
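
The forwarding limits and labels can be sketched the same way. The cap of five chats mirrors the forwarding limit WhatsApp has described publicly, but the data model and function here are hypothetical.

```python
# A minimal, hypothetical sketch of forwarding limits and forwarded-message
# labels; the cap and data model are illustrative assumptions.
from dataclasses import dataclass

MAX_FORWARD_CHATS = 5  # assumption: per-forward recipient cap

@dataclass
class Message:
    text: str
    originated_with_sender: bool = True

def forward(message: Message, chats: list[str]) -> list[tuple[str, Message]]:
    """Forward a message, enforcing the cap and marking it as forwarded."""
    if len(chats) > MAX_FORWARD_CHATS:
        raise ValueError(f"Forwarding is limited to {MAX_FORWARD_CHATS} chats at a time")
    forwarded = Message(text=message.text, originated_with_sender=False)
    return [(chat, forwarded) for chat in chats]

# A client would render a "Forwarded" label whenever originated_with_sender
# is False, so recipients know the content came from a third party.
deliveries = forward(Message("breaking news clip"), ["alice", "bob"])
print(deliveries[0][1].originated_with_sender)  # False
```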

User Controls

We continue to provide tools to help people control their experience on our apps and protect themselves from content they don’t want to see. These include, but aren’t limited to:

  • Hidden words: When turned on, Hidden Words filters offensive terms and phrases from DM requests and comments so people never have to see them. People can customize this list to make sure the terms they find offensive are hidden (a simplified filter sketch follows this list).
  • Limits: When turned on, Limits automatically hides DM requests and comments on Instagram from people who don’t follow you, or who only recently followed you.
  • Comment controls: You can control who can comment on your posts on Facebook and Instagram, and choose to turn off comments completely on a post-by-post basis.
  • Show More, Show Less: This gives people direct control over the content they see in their Facebook Feed. Selecting “Show more” will temporarily increase the ranking score for that post and posts like it; selecting “Show less” will temporarily decrease it (a simplified sketch of this adjustment follows this list).
  • Facebook Reduce: Through the Facebook Feed Preferences settings, people can increase the degree to which we demote some content so they see less of it in their Feed, turn many of these demotions off entirely, or keep our current demotions in place.
  • Sensitive Content Control: Instagram’s Sensitive Content Control allows people to choose how much sensitive content they see in places where we recommend content, such as Explore, Search, Reels and in-Feed recommendations. We try not to recommend sensitive content in these places by default, but people can also choose to see less, to further reduce the possibility of seeing this content from accounts they don’t follow.
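
As an illustration of a Hidden Words-style filter, the sketch below hides comments and DM requests that contain terms from a user’s custom list. The matching rule (case-insensitive substring) is an assumption; the feature’s actual matching logic is not public.

```python
# A minimal sketch of a Hidden Words-style filter: comments and DM requests
# containing terms on a user's custom list are hidden from view, not deleted.

def build_filter(hidden_words: list[str]):
    """Return a predicate that flags text containing any hidden term."""
    terms = [word.lower() for word in hidden_words]
    def should_hide(text: str) -> bool:
        lowered = text.lower()
        return any(term in lowered for term in terms)
    return should_hide

should_hide = build_filter(["offensive-term", "another-phrase"])  # user's custom list
incoming = ["hello there", "something offensive-term something"]
visible = [comment for comment in incoming if not should_hide(comment)]
print(visible)  # ['hello there']
```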
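
Finally, here is a minimal sketch of how “Show more” / “Show less” feedback could translate into a temporary ranking adjustment, as described above. The multipliers and the expiry window are hypothetical assumptions; Meta describes the effect only as a temporary change to the post’s ranking score.

```python
# A minimal, hypothetical sketch of "Show more" / "Show less" as a temporary
# ranking adjustment; the multipliers and expiry window are assumptions.
import time

SHOW_MORE_BOOST = 2.0    # assumption: temporary boost multiplier
SHOW_LESS_PENALTY = 0.5  # assumption: temporary penalty multiplier
ADJUSTMENT_TTL = 30 * 24 * 60 * 60  # assumption: adjustments lapse after ~30 days

# One user's active adjustments: topic -> (multiplier, expiry timestamp)
adjustments: dict[str, tuple[float, float]] = {}

def record_feedback(topic: str, show_more: bool) -> None:
    """Record a Show more / Show less tap as a time-bound multiplier."""
    multiplier = SHOW_MORE_BOOST if show_more else SHOW_LESS_PENALTY
    adjustments[topic] = (multiplier, time.time() + ADJUSTMENT_TTL)

def adjusted_score(base_score: float, topic: str) -> float:
    """Apply the multiplier while active; otherwise return the base score."""
    multiplier, expires = adjustments.get(topic, (1.0, 0.0))
    return base_score * multiplier if time.time() < expires else base_score

record_feedback("celebrity gossip", show_more=False)
print(adjusted_score(1.0, "celebrity gossip"))  # 0.5 while the adjustment is active
```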
