The recent presidential election in the United States capped off an unprecedented year in which as many as two billion people were expected to vote in elections across some of the world's biggest democracies, including India, Indonesia, Mexico and the European Union. As a company that operates platforms where public discourse takes place, we at Meta understand our responsibility to protect people's ability to make their voices heard and to ensure we are prepared for the many elections around the world.
Since 2016 we have been evolving our approach to elections to incorporate the lessons we learn and stay ahead of emerging threats. We have a dedicated team responsible for Meta’s cross-company election integrity efforts, which includes experts from our intelligence, data science, product and engineering, research, operations, content and public policy, and legal teams. In 2024, we ran a number of election operations centers around the world to monitor and react swiftly to issues that arose, including in relation to the major elections in the US, Bangladesh, Indonesia, India, Pakistan, the EU Parliament, France, the UK, South Africa, Mexico and Brazil.
With the US election in the books, we are in a position to share some of the trends we’ve seen on our platforms, and explain how we have sought to balance protecting people’s ability to express themselves with keeping people safe throughout the year.
Enabling Free Expression
Striking the right balance between allowing people to make their voices heard and keeping people safe is something no platform will ever get right 100% of the time. We know that when enforcing our policies our error rates are too high, which gets in the way of the free expression we set out to enable: too often, harmless content gets taken down or restricted and too many people get penalized unfairly. Throughout the year we have sought to update and apply our content policies fairly so that people can make their voices heard, and we will continue to work on this in the months ahead.
- We launched political content controls on Facebook, Instagram and Threads to give people the ability to have more political content recommended to them if they want. These controls have launched in the US and are in the process of rolling out globally.
- We allow people to ask questions or raise concerns about election processes in organic content. However, we do not allow claims or speculation about election-related corruption, irregularities, or bias when they are combined with a signal that the content is threatening violence, and we clarified our policies to make that distinction explicit. For paid content, we prohibit ads that call into question the legitimacy of an upcoming or ongoing election, as we have since 2020.
- We updated our penalties system in early 2023 to make it fairer in protecting people’s ability to make their voices heard, while remaining effective at enforcing against people who persistently violate our policies.
- We conduct yearly audits of words we designate as slurs under our Hate Speech policy, prioritizing markets with upcoming elections, with the goal of maintaining up-to-date lists that protect vulnerable communities while ensuring we do not over-enforce on political discourse.
- We also updated our protocol for public figures suspended for violations during periods of civil unrest, helping ensure Americans could hear from the candidates for President on the same basis, and we committed to periodic reviews of whether enhanced penalties are warranted.
Throughout elections around the world in 2024, we connected people with reliable information about voting through in-app notifications on Facebook and Instagram.
- During the 2024 US general election, top of feed reminders on Facebook and Instagram received more than 1 billion impressions. These reminders included information on registering to vote, voting by mail, voting early in person, and voting on election day. People clicked on these reminders more than 20 million times to visit official government websites for more information.
- We also work with state and local election officials to send Voting Alerts, and have sent more than 765 million notifications on Facebook since 2020. This is a tool that can adapt to changing situations on the ground – for example, if specific polling places need extended hours on Election Day.
- We ran a Search Engine Results Page interstitial on Facebook and Instagram in the US, meaning when people searched for terms related to the 2024 elections they were shown links to official sources for information about how, when and where to vote. On Instagram we continued to elevate Story stickers directing people to official voting information. We also continued to run the Voting Information Center on Facebook.
- Top of feed notifications reached millions of people in countries holding elections around the world in 2024, including:
- In the UK, our Election Day Reminder reached 43.6 million people on Facebook and Instagram, with nearly 900,000 people clicking through to register to vote or check their registration status.
- In the EU, users engaged with these notifications more than 41 million times on Facebook and more than 58 million times on Instagram.
- In India, the Voting Alert notification ran on Facebook on April 17 ahead of the national elections. The notification launched from the Election Commission of India's Facebook page and reached 145 million users, directing people to get more information via https://elections24.eci.gov.in/. The Election Commission of India also used the WhatsApp API to run voting reminder campaigns reaching around 400 million users (see the sketch after this list).
- In Brazil's local elections, users engaged with reminder notifications around 9.7 million times across Facebook and Instagram.
- In France, users clicked on these in-app notifications more than 599,000 times on Facebook and more than 496,000 times on Instagram.
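The post does not describe how the Election Commission of India's WhatsApp campaign was built. As a rough illustration only, here is a minimal sketch of sending a pre-approved template message through the WhatsApp Business Cloud API; the access token, phone number ID, template name and recipient are all placeholder assumptions, not details of the actual campaign.

```python
# Minimal sketch: sending a templated voting reminder through the
# WhatsApp Business Cloud API. Credentials and the template name
# ("voting_reminder") are hypothetical placeholders.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"        # hypothetical credential
PHONE_NUMBER_ID = "YOUR_PHONE_NUMBER_ID"  # hypothetical sender ID

def send_voting_reminder(recipient_phone: str) -> dict:
    """Send a pre-approved message template to one recipient."""
    url = f"https://graph.facebook.com/v19.0/{PHONE_NUMBER_ID}/messages"
    payload = {
        "messaging_product": "whatsapp",
        "to": recipient_phone,  # recipient in international (E.164) form
        "type": "template",
        "template": {
            # Real campaigns use message templates approved in advance;
            # "voting_reminder" is an assumed name for illustration.
            "name": "voting_reminder",
            "language": {"code": "en"},
        },
    }
    headers = {
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "Content-Type": "application/json",
    }
    response = requests.post(url, json=payload, headers=headers, timeout=10)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(send_voting_reminder("91XXXXXXXXXX"))  # placeholder number
```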
Meta also continues to offer industry-leading transparency for ads about social issues, elections and politics. Advertisers who run these ads are required to complete an authorization process and include a “paid for by” disclaimer. These ads are then stored in our publicly available Ad Library for seven years. This year, we continued to prohibit new political, electoral and social issue ads in the US during the final week of the election campaign as we have since 2020, because in the final days of an election there may not be enough time to contest new claims. Since January 2024, in certain cases we’ve also required advertisers to disclose when they use AI or other digital techniques to create or alter a political or social issue ad.
Monitoring the Impact of AI
At the start of the year, many people were warning of the potential impact of generative AI on the upcoming elections, including the risk of widespread deepfakes and AI-enabled disinformation campaigns. Based on what we monitored across our services, these risks did not materialize in a significant way, and any impact was modest and limited in scope.
- While there were instances of confirmed or suspected use of AI in this way, the volumes remained low and our existing policies and processes proved sufficient to reduce the risk around generative AI content. During the election periods for the major elections listed above, ratings on AI content related to elections, politics and social topics represented less than 1% of all fact-checked misinformation.
- As part of our efforts to prevent people using Meta’s Imagine AI image generator to create election-related deepfakes, we rejected 590,000 requests to generate images of President-elect Trump, Vice President-elect Vance, Vice President Harris, Governor Walz, and President Biden in the month leading up to election day.
- We also closely monitored the potential use of generative AI by covert influence campaigns – what we call Coordinated Inauthentic Behavior (CIB) networks – and found they made only incremental productivity and content-generation gains using generative AI. This has not impeded our ability to disrupt these influence operations because we focus on behavior when we investigate and take down these campaigns, not on the content they post – whether created with AI or not.
- In the lead-up to the US election, we required Meta AI to respond to questions about when and how to vote with links to authoritative sources that could offer the most up-to-date information. On Election Day, we also required Meta AI to respond to certain questions about the candidates and election results by directing people to check authoritative sources until a winner was projected by multiple news organizations (a hypothetical sketch of this kind of query routing follows this list).
- Throughout 2024, we cooperated with others in our industry to combat potential threats from the use of generative AI. For example, in February 2024 we signed the AI Elections Accord alongside dozens of other industry leaders, pledging to help prevent deceptive AI content from interfering with this year's global elections. In India, we launched a WhatsApp tipline in partnership with the cross-industry Misinformation Combat Alliance, which included setting up a world-first Deepfakes Analysis Unit to provide assessments of any digital content in audio or video form that users suspected could be a deepfake. And ahead of the European Parliament elections, Meta funded and supported the European Fact-Checking Standards Network (EFCSN) in a project aimed at improving European fact-checkers' skills in debunking and countering AI-generated misinformation.
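Meta has not published how these Meta AI guardrails were implemented. As a hypothetical illustration of the general pattern, the sketch below intercepts voting-logistics and results queries and returns a redirect to authoritative sources instead of a model-generated answer; the keyword patterns and canned responses are assumptions (vote.gov is a real US government voting-information site). A similar deny-list check could gate image-generation requests that name the candidates, as described above.

```python
# Hypothetical sketch of routing sensitive election queries to
# authoritative sources rather than answering them directly.
import re

VOTING_LOGISTICS = re.compile(
    r"\b(where|when|how)\b.*\b(vote|voting|polling|ballot|register)\b",
    re.IGNORECASE,
)
RESULTS_QUERY = re.compile(
    r"\b(who won|election results?|winner)\b", re.IGNORECASE
)

def route_election_query(prompt: str, winner_projected: bool) -> str | None:
    """Return a canned redirect for sensitive election queries,
    or None to let the model answer normally."""
    if VOTING_LOGISTICS.search(prompt):
        return ("For the most up-to-date information on how, when and "
                "where to vote, please check vote.gov.")
    if RESULTS_QUERY.search(prompt) and not winner_projected:
        # Until multiple news organizations project a winner, defer
        # to authoritative sources rather than speculate.
        return ("Election results are still being counted. Please check "
                "authoritative news sources for the latest updates.")
    return None

# A logistics question gets redirected...
assert route_election_query("How do I vote by mail?", False) is not None
# ...while an unrelated prompt passes through to the model.
assert route_election_query("Write me a poem about autumn", False) is None
```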
Preventing Foreign Interference
This year, our teams have taken down around 20 new covert influence operations around the world, including in the Middle East, Asia, Europe and the US.
- Russia remains the number one source of the covert influence operations we’ve disrupted to date – with 39 networks disrupted in total since 2017. The next most frequent sources of foreign interference are Iran, with 31 CIB networks, and China, with 11.
- The majority of the CIB networks we've disrupted struggled to build authentic audiences, and some used fake likes and followers to appear more popular than they were. For example, we took down a CIB network originating primarily in the Transnistria region of Moldova and targeting Russian-speaking audiences in Moldova; we removed the campaign before it was able to build an authentic audience on our apps.
- The vast majority of the CIB networks we disrupted globally tried to spread themselves across many online platforms, including ours, YouTube, TikTok, X, Telegram, Reddit, Medium and Pinterest. We've seen a number of influence operations shift much of their activity to platforms with fewer safeguards than ours. For example, fictitious videos about the US elections (assessed by the US intelligence community to be linked to Russian-based influence operations) were posted on X and Telegram rather than on our apps. In the small number of instances where people in the US reposted this content on our apps, we labeled it as reported to be linked to Russian influence actors.
- The vast majority of CIB networks also ran their own websites, likely in order to withstand takedowns by any one company. The largest and most persistent such operation is known as Doppelganger, which uses a vast web of fake websites, including some spoofing legitimate news and government entities. We have created the largest public repository of Doppelganger's threat signals, exposing over 6,000 domains since 2022, so that researchers and other investigative teams can take action alongside us, in addition to our blocking those domains from being shared on our apps (a sketch of this kind of domain matching follows this list). However, despite some disruptions by governments and others, many of the web domains we have exposed to date have been quickly replaced and continue to post new content, and many of Doppelganger's brands remain active on X and Telegram.
- Ahead of the US election, Doppelganger struggled to get through on our apps and largely abandoned posting links to its websites. The vast majority of Doppelganger’s attempts to target the US in October and November were proactively stopped before any user saw their content.
- Ahead of the US elections, we expanded our ongoing enforcement against Russian state media outlets. We banned Rossiya Segodnya, RT and other related entities from our apps globally for violating our policies prohibiting engaging in, or claiming to engage in, foreign interference. This followed the unprecedented steps we took over two years ago to limit the spread of Russian state-controlled media, including blocking them from running ads, placing their content lower in people's feeds, and adding in-product nudges asking people to confirm they want to share or navigate to content from these outlets.
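On the mechanics of blocking exposed domains from being shared: the sketch below is a minimal, assumed illustration using a simple in-memory blocklist, matching a shared URL's host and its parent domains. The domains shown are invented stand-ins, and a production system would also handle redirects, URL shorteners and internationalized domain names.

```python
# Hypothetical sketch: check a shared URL against a blocklist of
# threat-signal domains, catching subdomains of listed domains too.
from urllib.parse import urlparse

# Invented stand-ins for the public threat-signal repository.
BLOCKED_DOMAINS = {"example-spoofed-news.com", "fake-gov-portal.net"}

def is_blocked(url: str) -> bool:
    """True if the URL's host, or any parent domain of it, is on the
    blocklist (so sub.example-spoofed-news.com is caught as well)."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    parts = host.split(".")
    # Check the host itself, then each parent suffix in turn.
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS
               for i in range(len(parts)))

assert is_blocked("https://news.example-spoofed-news.com/article")
assert not is_blocked("https://example.com/article")
```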
These findings and policies are intended to give you a sense of what we’ve seen and the broad approach we have taken during this unprecedented year of elections, but they are far from an exhaustive account of everything we did and saw. Nor is our approach inflexible. With every major election, we want to make sure we are learning the right lessons and staying ahead of potential threats. Striking the balance between free expression and security is a constant and evolving challenge. As we take stock of what we’ve learned during this remarkable year, we will keep our policies under review and announce any changes in the months ahead.