Meta, the parent company of social media giants Facebook and Instagram, announced a groundbreaking policy on Wednesday, requiring advertisers to disclose the use of potentially misleading AI-generated or altered content in political, electoral, and social issue advertisements.
The decision aims to address growing concerns about deceptive digital manipulation and to ensure transparency in the world of online advertising.
This new rule, slated to go into effect next year, is a response to the proliferation of what Meta refers to as “realistic” images, videos, or audio that falsely depict individuals engaging in actions they never undertook or portray events differently from how they actually occurred.
It also applies to digitally created content depicting lifelike but fabricated people or events. The intention behind this move is to make it clear to viewers when the content they encounter is not a genuine representation of reality.
Nick Clegg, the President of Global Affairs at Meta, shared the company’s commitment in a Threads post, saying, “In the New Year, advertisers who run ads about social issues, elections, and politics with Meta will have to disclose if image or sound has been created or altered digitally, including with AI, to show real people doing or saying things they haven’t done or said.”
Meta Speaks: Political Ads Must Disclose AI-Generated Content

This disclosure requirement aims to provide users with more context about the content they encounter in their social media feeds, especially when it relates to political and social issues.
To strike a balance between transparency and practicality, Meta clarified that minor edits to content, such as cropping or color correction, which are inconsequential or immaterial to the central message of the ad, do not require disclosure.
The company recognizes that not all digital alterations are created equal, and the focus is on ensuring that viewers are informed when content has been significantly manipulated to convey a misleading message.
For advertisements featuring digitally altered content, Meta has outlined plans to flag this information to users and record it in its ads database, further enhancing transparency.
This announcement follows Meta’s recent decision to ban political campaigns and groups from using its new generative AI advertising products.
These tools enabled advertisers to create multiple versions of ads with varying backgrounds, text, and image and video sizes, which raised concerns about the potential misuse of AI-generated content in political campaigns.
Meta’s commitment to disclose AI-generated content in political ads comes at a crucial time as lawmakers and regulators gear up to address the issue, particularly in anticipation of the 2024 presidential election.
In a separate development earlier this year, Rep. Yvette Clarke (D-NY) and Sen. Amy Klobuchar (D-MN) introduced bills that would require political campaigns to disclose the use of AI-generated content in their advertisements.
Additionally, the Federal Election Commission, the regulatory agency responsible for overseeing political advertising, is expected to decide on a new rule that would require political campaigns to disclose their use of AI-generated content. The timeline for when this rule might be voted on remains uncertain.
Meta’s move to enforce transparency in political advertising is a significant step towards curbing the spread of misleading content in an increasingly digital and AI-driven world.
It signals to the advertising industry, policymakers, and the public that the company is committed to fostering a more trustworthy and informed online environment.
As the digital landscape continues to evolve, initiatives like this will play a pivotal role in shaping the future of online discourse and political engagement.