In response to growing concern about misinformation driven by the increasing use of generative AI in content creation, YouTube is introducing a new content-labeling tool in Creator Studio. The tool is designed to promote greater transparency and trust between creators and their audiences.
The new tool requires creators to disclose when their content includes realistic material that viewers could mistake for a real person, place, or event, particularly when it was produced using altered or synthetic media such as generative AI.
Some examples of content that require disclosure include:
- Using the realistic likeness of a person: Digitally altering content to replace the face of one individual with another’s or synthetically generating a person’s voice to narrate a video.
- Altering footage of real events or places: Making it appear that something happened differently than it did, such as showing a real building catching fire or altering a real cityscape so it looks different from how it actually appears.
- Generating realistic scenes: Showing a realistic depiction of fictional major events, like a tornado moving toward a real town.
Labels indicating the use of altered or synthetic media will now appear in the expanded description and, for sensitive topics such as health or news, more prominently within the video player itself. The labels will roll out gradually across all YouTube surfaces in the coming weeks, ensuring consistency and clarity for viewers.
Google and YouTube recognize that it is increasingly important for viewers to be able to distinguish altered or synthetic content from authentic material. This initiative underscores the companies’ commitment to responsible AI innovation, building on the disclosure requirements and labels introduced in November.
However, Google also acknowledges that generative AI serves legitimate purposes in the creative process, such as generating scripts or improving productivity. In those cases, disclosure is not required.
In collaboration with industry partners, Google remains dedicated to enhancing transparency surrounding digital content. As a steering member of the Coalition for Content Provenance and Authenticity (C2PA), Google actively contributes to initiatives that promote trust and accountability in online content.
Furthermore, Google is in the process of updating its privacy procedures to address requests for the removal of AI-generated or synthetic content that simulates identifiable individuals. This global initiative underscores Google’s commitment to protecting user privacy and upholding the integrity of its platforms.
More information about this story is available in the announcement on The Keyword, Google’s official blog.