The Impact of AI on Video Production
The rapid development of artificial intelligence is profoundly reshaping how video content is produced. In short films, technologies such as AI face-swapping and deep synthesis have significantly reduced production costs and shortened creation cycles, injecting new vitality into the industry. However, this technological boon has also enabled the unauthorized use of facial data, infringing individuals' portrait rights, personal information rights, and even reputation rights. Clarifying the gaps and blind spots in the existing legal system is essential to systematically addressing the real dilemmas posed by 'face theft' in AI short films.
Examples of ‘Face Theft’
The issue of 'face theft' in AI short films is not isolated. For instance, a short film released in April 2025 allegedly used AI face-swapping technology to graft another person's face onto a character in a segment lasting 90 minutes. In March 2026, a Hanfu makeup blogger's photoshoot was replicated without authorization by an AI short film that garnered over 40 million views on a single platform, generating considerable commercial profit at the blogger's expense. Alarmingly, 'face theft' has given rise to a gray market for facial images, with prices ranging from tens of yuan up to 5,000 yuan per image and usage terms running from one to fifteen years. Once facial images are stored in databases, they may be reused at will in future storylines, leaving personal information beyond the subject's control.
Legal Framework and Challenges
In response to the emerging infringement of ‘face theft’ in AI short films, China has accelerated the improvement of its legal framework. Laws such as the Civil Code of the People’s Republic of China, the Personal Information Protection Law, and various regulations on deep synthesis and generative AI services provide some institutional references for addressing such infringements. However, the current legal framework still has limitations.
The 'recognizability' standard faces challenges in the context of AI generation. Article 1018 of the Civil Code defines a portrait as the external image of a specific natural person that can be recognized. Article 1019 explicitly prohibits the use of information technology to forge another person's portrait. However, when AI-generated characters blend multiple people's facial features or retain only partially recognizable traits, victims who lack professional evidence-gathering capabilities may struggle to prove 'recognizability', and even professional appraisal institutions may find it difficult to render a determination.
The personal information protection framework lags in regulating 'face theft'. Facial images are sensitive personal information, and processing them must satisfy three thresholds: a specific purpose, sufficient necessity, and separate consent. However, AI short film producers often covertly harvest facial information from social media and online image libraries. If victims do not discover the infringement promptly, the damage and negative impact they suffer may go uncompensated.
The platform responsibilities of deep synthesis service providers need clarification and strengthening. The regulations require service providers to conspicuously label synthesized content, but the AI short film industry involves multiple parties, including technology providers, content producers, and distribution platforms, with unclear divisions of responsibility among them. In practice, short film platforms cannot screen all portraits for infringement before release, because they have no way to obtain users' facial information in advance.
Disparity in Infringement and Protection Costs
There is a severe imbalance between the costs of infringement and the costs of protecting rights. The cost of using real actors can reach hundreds of thousands or millions of yuan, while generating characters through AI may cost only a few thousand yuan or even less. Whether for celebrities or ordinary individuals, protecting rights requires substantial time, energy, and financial investment. Even if infringement is established, the compensation may not cover the actual losses and costs incurred by rights holders, failing to create effective deterrence. This imbalance in cost structures encourages infringers to take risks.
Areas for Improvement in Regulations
Recently introduced departmental regulations still have room for improvement in applicability and enforcement strength. The regulations on the safety management of facial recognition technology set out specific rules for processing facial information, but they primarily target active applications of facial recognition technology. Whether they fully cover the 'passive crawling + post-generation' pattern common in AI short film production remains to be clarified. Moreover, as departmental regulations, the binding force of their liability provisions needs to be strengthened.
Comprehensive Governance Approach
Addressing the chaos of ‘face theft’ in AI short films requires coordinated efforts in legislation, judiciary, enforcement, platform governance, industry self-discipline, and public education. This should unfold in the following five areas:
- Improve legislation: Build a clear regulatory system focusing on civil and criminal law, supplemented by specialized regulations. This includes clarifying the legal boundaries of AI deep synthesis technology applications and establishing specific rules against unauthorized use of others' facial information.
- Judicial guidance: The judiciary should play a crucial role in clarifying the applicable details of the 'recognizability' standard and reasonably distributing the burden of proof, considering the covert nature of 'face theft' and the difficulties victims face in gathering evidence.
- Strengthen administrative enforcement: Administrative enforcement is vital in curbing the chaos of 'face theft'. Regulatory bodies should enhance oversight of deep synthesis services and impose penalties on providers that fail to conduct safety assessments or use others' facial information without authorization.
- Clarify platform responsibilities: Platforms should establish and improve content review mechanisms, employing technical means to pre-screen suspected unauthorized video content. They should also create efficient complaint and dispute resolution mechanisms to lower the cost of rights protection for individuals.
- Promote industry self-regulation and public education: Industry associations should develop specific norms and self-regulatory agreements governing the use of facial information in AI short film production. Public education should raise awareness of personal rights related to facial information and provide accessible legal advice and guidance for rights protection.
In conclusion, in the face of the 'face theft' phenomenon in AI short films, the law must respond promptly to the challenges posed by emerging technologies. It should balance the need for technological innovation with the imperative to protect citizens' personal rights and public interests. Only through improved legislation, refined judicial processes, effective enforcement, responsible platforms, industry self-discipline, and public awareness can a systematic governance framework be established, allowing the face to return to its fundamental role as a carrier of personal dignity.