YouTube, the widely used video-sharing platform owned by Google, is set to implement a new policy to combat the rise of AI-generated content, particularly deepfakes.
To address concerns over the potential misuse of artificial intelligence in creating realistic but misleading videos, YouTube will now require creators to disclose when they use AI or other digital tools to alter videos.
The move is part of a broader industry response to the proliferation of generative AI technology and deepfakes, which have raised ethical and privacy concerns.
According to YouTube’s new policy, creators utilizing AI or similar tools to produce videos must disclose this information to viewers.
Failure to comply with this requirement may result in content removal or suspension of advertising revenue on the platform. The policy is set to be enforced in the coming months, reflecting YouTube’s commitment to maintaining transparency in content creation.
In addition to the disclosure mandate, YouTube will empower users to request the removal of videos that use AI to simulate identifiable individuals.
This move complements the platform’s existing privacy tools and acknowledges the potential misuse of AI technology to create deceptive content, commonly called deepfakes.
The intent is to strike a balance between creative expression and safeguarding individuals from potential harm arising from manipulated content.
YouTube’s initiative is not isolated, as other major platforms have also taken steps to address concerns related to AI-generated content.
Meta, the parent company of Facebook and Instagram, has announced similar disclosure requirements for advertisers using AI in ads related to elections, politics, and social issues.
TikTok, a popular short-form video app, mandates labels on AI-generated content depicting “realistic” scenes and prohibits deepfakes of young people and private figures in specific contexts.
YouTube Fights AI-Generated Misinformation
YouTube’s new policy expands on a September announcement in which the platform mandated disclosures for political ads created with AI.
The latest move broadens the requirement to include any synthetic video that could be mistaken for actual footage. YouTube acknowledges the potential for AI-generated content to mislead viewers, particularly when alterations are not apparent.
To enhance transparency, YouTube will feature more prominent AI labels on videos dealing with sensitive topics such as elections, ongoing conflicts, public health crises, or public officials.
These labels aim to alert viewers to the possibility of AI involvement, fostering awareness and critical engagement with the content.
YouTube asserts that AI-generated content violating community guidelines, especially content depicting realistic violence intended to shock or disgust viewers, will be subject to removal.
This underscores the platform’s commitment to maintaining a safe and trustworthy user environment.
Furthermore, YouTube is introducing a privacy request process, allowing users to flag content that simulates identifiable individuals. This move acknowledges the potential misuse of AI deepfakes, particularly in the creation of non-consensual pornography targeting women.
YouTube will assess various factors, including whether the content is parody or satire, the uniqueness of the individual’s identification, and whether the person involved is a public figure.
YouTube’s proactive stance against AI-generated misinformation and deepfakes marks a significant step in the ongoing efforts to address the ethical challenges posed by evolving technologies.
As the digital landscape continues to evolve, platforms increasingly recognize the need to balance creative expression with responsible content management, safeguarding users from potential harm and deception. Robust disclosure policies and privacy protections reflect the commitment of major platforms to prioritize transparency and user safety in the age of artificial intelligence.