Deepfakes and Image Rights: Legal Challenges in the Age of Artificial Intelligence
- Elvia Aragon

- Oct 21
- 6 min read

Introduction
In the digital era, technology evolves faster than the laws that attempt to regulate it. Among the most striking examples are deepfakes—realistic but entirely fabricated images, videos, or audio generated by artificial intelligence. While these tools can be used for satire, education, or entertainment, they also present new challenges to personal rights, particularly the right to one's image. Deepfakes raise questions about consent, misuse, and legal liability. Across continents and legal systems, these synthetic creations have already prompted legislative debates, artistic controversy, and new concerns over how identity can be digitally rewritten without consent. This article explores the intersection between deepfake technology and image rights, highlighting the legal vacuum, the potential for abuse, and the need for effective legal and contractual safeguards.
What Are Deepfakes and How Do They Work?
Deepfakes are synthetic media generated using artificial intelligence, most commonly deep neural networks such as generative adversarial networks (GANs) or autoencoders. These networks train on large datasets to create hyper-realistic content that mimics real individuals’ faces, voices, and expressions. Common use cases include inserting a person’s face into a movie scene, creating fake news clips, or mimicking a celebrity’s voice. Initially a niche technology, deepfakes have now proliferated through apps and open-source tools, making their creation accessible to anyone with a smartphone and an internet connection.
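For readers curious about the mechanics, the sketch below reduces the adversarial training loop at the heart of many deepfake systems to a toy one-dimensional example in Python (using PyTorch). The network sizes, learning rates, and data here are illustrative assumptions; production deepfake models train on millions of images or audio samples, but the generator-versus-discriminator dynamic is the same.

```python
# Minimal, illustrative GAN training loop on toy 1-D data. This is a
# conceptual sketch of the adversarial mechanism, not a deepfake pipeline.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks.
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data: samples near 3.0
    noise = torch.randn(64, 8)
    fake = G(noise)

    # Train D to tell real samples from generated ones.
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Train G to fool D into scoring fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(G(noise)), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

# The generator's output distribution drifts toward the real one (~3.0).
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

The legal difficulty flows directly from this design: the generator is optimized precisely to produce output the discriminator cannot distinguish from the real thing.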
Their impact is significant. Viral videos of public figures making statements they never uttered or influencers appearing in fake promotional content illustrate how deepfakes can distort public perception and damage reputations. This is where the right to image comes into focus.
Image Rights and the Deepfake Dilemma
The right to image generally refers to a person’s control over the commercial and public use of their likeness, including their face, voice, or other identifying attributes. Deepfakes challenge this right in multiple ways. First, by simulating consent: a deepfake may appear as if someone has willingly participated in content they never approved. Second, by blurring the line between parody and defamation, as it becomes increasingly difficult to distinguish fiction from fact.
A clear illustration of this dilemma appears in the popular TV series Black Mirror, in the episode "Joan Is Awful," where actress Salma Hayek portrays a version of herself. In the narrative, a streaming platform uses her digital likeness, via deepfake technology, to create fictional content that damages her reputation, without her consent. This dramatized yet eerily plausible scenario demonstrates the emotional and reputational toll of synthetic media, especially when image rights are not properly safeguarded.
Legally, the situation is complex. In some jurisdictions, image rights are grounded in privacy law, while in others, they fall under intellectual property or tort law. What is clear, however, is that current frameworks often fail to adequately address the unique threats posed by synthetic media. There is also an enforcement gap: once a deepfake is posted and shared, removing it entirely from the internet is nearly impossible.
Comparative Approaches and Legal Gaps
Different regions have started to address deepfakes in their legislation. In the United States, states like California and Texas have enacted laws banning deepfakes in political campaigns and non-consensual adult content. These statutes reflect a growing recognition of the real-world harm deepfakes can cause, particularly when used to manipulate elections or spread false information.
The European Union’s AI Act includes provisions on transparency and accountability for synthetic content. Under the regulation, AI-generated media must be disclosed as artificially generated, which could help reduce confusion and misuse in public discourse.
In Asia, countries like South Korea have criminalized the creation and distribution of certain types of deepfake content, especially those that involve sexual exploitation or impersonation. Meanwhile, China has implemented a rule that requires providers of deep synthesis services to notify users when content is generated using AI.
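Both the EU transparency provisions and China’s labeling rule hinge on making the synthetic origin of content machine-detectable. As a rough, hypothetical illustration of what such a disclosure could look like in software, the Python sketch below embeds a label in an image’s metadata using the Pillow library. The file names and field names are invented for this example, and this is not the mechanism mandated by either regime; real-world provenance schemes such as C2PA credentials are far more robust.

```python
# Illustrative sketch: attaching a machine-readable "AI-generated"
# disclosure to a PNG image via its text metadata. File and field
# names are hypothetical.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.open("synthetic_portrait.png")          # hypothetical input file

meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("disclosure", "This image was created with generative AI.")

img.save("synthetic_portrait_labeled.png", pnginfo=meta)

# A platform receiving the file could then read the label back:
labeled = Image.open("synthetic_portrait_labeled.png")
print(labeled.text.get("ai-generated"))             # prints "true"
```

The obvious limitation, and one reason regulation cannot rely on labels alone, is that metadata of this kind can be stripped by anyone who re-saves the file.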
Latin America is still developing its response to deepfake risks. Most countries rely on traditional legal frameworks, such as defamation and privacy laws, but lack tailored provisions for synthetic media. This presents both a challenge and an opportunity for legal innovation. Given the region’s growing digital influence, especially in content creation and influencer marketing, proactive legislation could help prevent future harms. In Mexico, legislators are already pushing to penalize deepfakes used to simulate explicit content without consent. Meanwhile, in Brazil, the Superior Electoral Court has explicitly banned the use of deepfakes in political campaigns and requires clear labeling of any AI-generated media in electoral content. These measures reflect growing institutional awareness in Latin America, not only of the reputational harm these tools can cause but also of their potential to mislead voters and undermine democratic processes.
A notable real-world example of deepfake misuse occurred in 2018, when a video surfaced of former U.S. President Barack Obama apparently insulting then-President Donald Trump. Although it was a deepfake created by BuzzFeed and comedian Jordan Peele as an educational public service announcement, the clip demonstrated how easily synthetic content could mislead the public. Another high-profile case involved actress Scarlett Johansson, whose image and voice were digitally manipulated for adult content without her consent.
Hypothetical Scenarios and Legal Implications
The potential legal implications of deepfakes become even clearer when we look at hypothetical scenarios grounded in plausible real-world contexts. Imagine a deepfake video featuring the CEO of a major corporation delivering a fabricated apology for financial fraud. Within hours, the company’s stock value collapses, investors panic, and the damage is done before the video is debunked. The legal consequences would involve not only reputational harm but also financial loss and potential securities fraud inquiries.
In another case, imagine that just days before a national election in Brazil, a deepfake video circulates showing a presidential candidate appearing to accept illicit funds from a foreign government. The video goes viral before fact-checkers can intervene, swaying public sentiment and influencing voting behavior. The consequences could include investigations for electoral manipulation and reputational damage.
Consider also a marketing campaign where a beauty brand digitally inserts the likeness of a well-known influencer into an advertisement, promoting products the influencer has never used. Not only is this a clear case of commercial appropriation of image without consent, but it could also lead to significant brand confusion and personal brand harm for the influencer involved. The resulting claims might range from breach of image rights to reputational damage, all compounded by a lack of prior contractual protection.
Each of these scenarios illustrates the legal and ethical complexity of deepfakes in different contexts—corporate, political, and commercial—and underscores the urgent need for proactive legal tools.
Proposals for Mitigating the Risk of Deepfakes
Rather than criminalizing or banning the use of synthetic media outright, a more effective approach focuses on prevention, transparency, and accountability. Key strategies include:
Explicit Consent Mechanisms: Any use of a person’s likeness, whether for entertainment, advertising, or AI training, should require clear, documented consent. Standardizing this through image rights clauses in contracts would provide legal clarity (see the sketch after this list).
Synthetic Likeness Clauses in Contracts: Talent agreements, influencer deals, and user terms should include language that prohibits unauthorized use of one’s image in synthetic formats. Clauses should also require disclosure and approval when AI-based modifications are proposed.
Right to Opt-Out and Revoke: Individuals should have a legal right to opt out of having their image used in AI-generated content, and to request the removal of content created without consent.
Penalties for Malicious Use: When deepfakes are used to defame, impersonate, or manipulate others, especially in political or commercial contexts, there should be enforceable penalties, with remedies covering reputational harm, emotional distress, and commercial exploitation without compensation.
Digital Literacy Campaigns: Governments and private actors should promote awareness of deepfake technologies. Training users to recognize manipulated content is key to minimizing the risk of deception and reputational harm.
Ethical Guidelines for Developers: Developers of generative AI systems should adopt ethical standards that prohibit the use of their tools for non-consensual deepfakes. Self-regulation through codes of conduct can complement formal regulation.
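To make the consent and revocation proposals above concrete, here is a minimal sketch, in Python, of how a documented likeness-consent record with scoped permissions, an expiry date, and a revocation flag could be modeled. The field names and scope labels are hypothetical assumptions, not drawn from any statute or industry standard.

```python
# Hypothetical data model for documented, revocable likeness consent.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LikenessConsent:
    subject: str                               # person whose likeness is used
    licensee: str                              # party authorized to use it
    scopes: set = field(default_factory=set)   # e.g. {"advertising", "ai-training"}
    expires: date = date.max
    revoked: bool = False

    def permits(self, scope: str, on: date) -> bool:
        """True only if consent covers this scope, is unexpired, and not revoked."""
        return scope in self.scopes and on <= self.expires and not self.revoked

consent = LikenessConsent(
    subject="Jane Doe",
    licensee="Acme Studios",
    scopes={"advertising"},
    expires=date(2026, 12, 31),
)

print(consent.permits("advertising", date(2026, 1, 1)))  # True: within scope and term
print(consent.permits("ai-training", date(2026, 1, 1)))  # False: scope never granted
consent.revoked = True                                    # the right to revoke
print(consent.permits("advertising", date(2026, 1, 1)))  # False after revocation
```

The design choice worth noting is that permission is scope-specific and defaults to "no": a grant for advertising says nothing about AI training, mirroring how narrowly drafted image rights clauses should operate.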
These ideas are all about staying ahead of the risks without shutting the door on innovation. The goal isn’t to paint new technologies as the enemy, but to make sure they grow in a way that respects people’s rights.
Conclusion
Deepfakes have transformed the way we think about identity, authenticity, and control in the digital age. As legal professionals, platform operators, and regulators navigate this new terrain, a balanced and collaborative approach is key. Image rights must be protected, but so must freedom of expression and technological innovation. By understanding the risks and working toward coherent legal solutions, we can ensure that artificial intelligence serves as a force for empowerment—not exploitation—in the realm of image and identity online.
Follow LexTalk World for more news and updates from the international legal industry.
