YouTube Requires Warning on Realistic Videos Made with AI
YouTube has started requiring content creators to notify their audience when realistic videos are made with artificial intelligence. Creators make the disclosure by answering a question in Creator Studio that appears during upload. Animated content or content that is clearly unrealistic does not need the warning.
According to YouTube, the warning will appear in the video’s expanded description. For more sensitive subjects, such as news and health, the label will be displayed more prominently. The alerts will roll out in the coming weeks, first in the mobile apps and later on computers and TVs.
Creator Studio asks if your video meets any of these criteria:
Makes a real person appear to do or say something they didn’t do or say.
Alters recordings of real events or places.
Generates a realistic-looking scene that didn’t actually happen.
Unrealistic scenes and animations do not require the warning. The label is also unnecessary if AI was only used to assist production, such as generating captions, enhancing images, or writing scripts.
YouTube plans to penalize creators who consistently fail to include the label on videos made with AI. In addition, the company itself may add the warning if it considers that the content could confuse or mislead people.
YouTube Prepares for AI-Generated Videos
The policy was announced in November 2023 as part of a package of measures related to generative AI and is only now coming into effect. At the time, YouTube also established that record labels and distributors can request the removal of videos that imitate artists.
Deepfakes are a more complex issue. People who appear in such videos will be able to request removal, but each request will be evaluated by the company. The platform will allow satire and parody, while defamatory content can be removed.