Unmasking Deepfakes: Legal Risks and Remedies in India & Beyond
INTRODUCTION
Deepfakes powered by artificial intelligence (AI) are rapidly transforming how information is created, consumed, and manipulated. These AI-generated audio-visual fabrications are increasingly being used in ways that undermine legal processes, privacy, and public trust. This article explores the legal implications, emerging risks, and evolving regulatory responses to deepfakes in India and internationally.
A deepfake is a hyper-realistic but fake video or audio clip created using deep learning techniques such as Generative Adversarial Networks (GANs) and autoencoders. These technologies are now widely accessible and capable of producing convincing content that mimics real individuals. The implications of such content, especially when deployed maliciously, are serious.
HOW DO DEEPFAKES WORK?
The creation of deepfakes begins with collecting large datasets of a person’s images, videos, or audio. This data is then used to train AI algorithms that learn to replicate facial expressions, voice patterns, and body movements. Once trained, the model synthesizes fabricated content that appears to be authentic, even though it is entirely fake. This has enabled the creation of videos where individuals appear to say or do things they never actually did.
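For readers interested in the mechanics, the encode-and-reconstruct principle described above can be sketched in miniature. The following Python example, using only NumPy, trains a toy linear autoencoder on synthetic data standing in for a corpus of face images. It is an illustrative sketch of the underlying technique, not a working deepfake pipeline, and every name in it is our own; real systems use deep convolutional networks trained on thousands of frames, often with a shared encoder and one decoder per identity so that decoding person A's code with person B's decoder produces the face swap.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset of a subject's face images:
# 200 flattened 8x8 "patches" (real pipelines collect thousands of frames).
X = rng.normal(size=(200, 64))

# A linear autoencoder: the encoder compresses each input into a short
# code; the decoder tries to reconstruct the original from that code.
d_in, d_code = 64, 8
W_enc = rng.normal(scale=0.1, size=(d_in, d_code))
W_dec = rng.normal(scale=0.1, size=(d_code, d_in))

def mse() -> float:
    """Mean-squared reconstruction error over the whole dataset."""
    return float(np.mean((X @ W_enc @ W_dec - X) ** 2))

mse_init = mse()
lr = 0.01
for _ in range(500):
    code = X @ W_enc            # encode: compress to the low-dimensional code
    X_hat = code @ W_dec        # decode: reconstruct from the code
    err = (X_hat - X) / len(X)  # per-sample reconstruction error
    # Plain gradient descent on the mean-squared reconstruction error.
    grad_dec = code.T @ err
    grad_enc = X.T @ (err @ W_dec.T)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print(f"reconstruction error: {mse_init:.3f} -> {mse():.3f}")
```

Training drives the reconstruction error down, which is precisely what makes the output "appear to be authentic": the network learns to regenerate convincing imagery from a compact learned representation.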
LEGAL FRAMEWORK IN INDIA
In India, deepfakes fall under the purview of several provisions within the Information Technology Act, 2000. Section 66C addresses identity theft and applies to anyone who fraudulently uses another person’s facial features or voice to cause harm. Section 66E penalizes the violation of privacy through the use or distribution of morphed or fabricated imagery without consent. Additionally, Section 66D criminalizes impersonation by using AI-generated content to defraud individuals, such as through financial scams.
The Bharatiya Nyaya Sanhita, 2023, introduced a further legal safeguard under Section 356, which penalizes defamation. Deepfakes intended to injure a person’s reputation fall within the ambit of “visible representation” and can attract criminal liability.
A notable enforcement example is the case involving actress Rashmika Mandanna, where her likeness was superimposed on another individual’s body. The Delhi Police acted swiftly and arrested a suspect under multiple provisions, including Sections 465 and 469 of the Indian Penal Code and relevant provisions of the IT Act.
ASSESSING THE POTENTIAL RISKS
Deepfakes carry several inherent risks. One significant concern is the spread of misinformation. Falsified videos or audio recordings can mislead the public and distort facts, particularly in politically sensitive environments. Identity theft is another critical issue, as fraudsters use deepfake technology to impersonate others and commit financial crimes.
In a landmark case, Ankur Warikoo v. John Doe & Ors. [CS(COMM) 514/2025], the Delhi High Court granted an ex parte injunction, directing platforms such as Meta to take down manipulated videos impersonating the petitioner. These videos were being used to lure unsuspecting victims into financial scams.
Privacy violations represent yet another concern, especially when it comes to non-consensual explicit content. Involuntary deepfakes can significantly harm an individual’s dignity and autonomy.
The Advertising Standards Council of India[1] (ASCI), in its report titled AdNext: The AI Edition[2] (dated 19 March 2025), examined the challenges arising from the use of deepfake technology in advertising. The report cautioned that improperly tested AI models could lead to the unauthorized commercial use of a celebrity’s likeness. Furthermore, it highlighted the potential of biased training data to generate discriminatory outcomes, particularly in areas such as recruitment and workforce management.
In response, ASCI issued guidelines urging platforms and advertisers to obtain express written consent from individuals whose likeness is used, to ensure fairness in AI-assisted decision-making, and to adopt a “human-in-the-loop” approach for content moderation and removal.
DEEPFAKES AND PERSONALITY RIGHTS
Indian courts have increasingly recognized the personality rights of public figures, which extend to their name, image, voice, and likeness. Any unauthorized commercial use of these attributes constitutes a violation of personality rights, which courts have traced to the right to life and personal liberty under Article 21 of the Constitution of India.
In Karan Johar v. Indian Pride Advisory Pvt. Ltd.[3], the Bombay High Court reaffirmed that public figures hold vested rights in their persona, and any misuse for commercial gain constitutes a legal wrong. The Delhi High Court in Titan Industries Ltd. v. Ramkumar Jewellers[4] held that plaintiffs seeking to assert their right to publicity must demonstrate ownership of an identifiable persona and show that they are a well-known personality.
In Amitabh Bachchan v. Rajat Negi[5], the Court held that using a celebrity’s voice and image for commercial purposes without consent infringes upon their publicity rights. The growing recognition of these rights is crucial as AI-generated content becomes more prevalent.
In a more recent development, the Delhi High Court, while hearing a writ petition filed by Chaitanya Rohilla[6], directed the Ministry of Electronics and Information Technology (MeitY) to constitute a committee for framing regulatory guidelines on the creation, identification, and removal of deepfakes. The Court also recommended that the committee take note of international legislative models, including those from the United States and the European Union.
INTERNATIONAL STANCE
Globally, jurisdictions are beginning to confront the threat of deepfakes with dedicated legislative responses. In the United States, the proposed SHIELD Act[7] (2023) would criminalize the non-consensual distribution of sexually explicit deepfake content. The Take It Down Act[8], signed into law in 2025, imposes a duty on platforms to provide removal mechanisms for such content. The DEFIANCE Act[9] (2024) would enable civil suits by victims whose likeness is misused.
The European Union’s AI Act (2024) directly addresses the risks associated with AI-generated content, including deepfake pornography and gender-based online harassment. Denmark has taken a distinctive approach: a proposed amendment to its copyright law would permit individuals to assert copyright-like protection over their facial features and voice. The proposal would not only allow for the removal of unauthorized content but also provide legal avenues for compensation and enforcement against platforms.
THE DIGITAL INDIA BILL (2023)
In India, the Digital India Bill[10], which is expected to replace the Information Technology Act, 2000, proposes a more nuanced approach to digital regulation. Though the final draft is awaited, early reports indicate that the Bill will include specific provisions to address artificial intelligence and deepfake-related harms.
Among the anticipated reforms are periodic risk assessments for digital platforms, classification of cybercrimes such as impersonation and identity theft, and category-specific rules for intermediaries. These changes are aimed at closing regulatory gaps that have allowed misuse of emerging technologies.
WAY FORWARD
To address the growing threat of deepfakes, a multi-pronged legislative and enforcement strategy is required. India must introduce specific laws to regulate the creation and distribution of AI-generated content. Consent mechanisms must be strengthened so that individuals have meaningful control over the use of their likeness in any form of content.
Moreover, technological solutions must be developed and deployed to detect and track deepfakes in real time. Platforms must be held accountable through clearly defined liability frameworks, requiring swift removal protocols and transparency in their moderation processes.
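One concrete building block for such accountability is content provenance: signing media at the point of creation so that platforms can later verify it has not been altered. The sketch below, using Python's standard hmac and hashlib modules, is a simplified illustration of that idea; the function names are ours, and production provenance schemes (such as C2PA manifests) use asymmetric signatures embedded in the file rather than a shared secret key.

```python
import hashlib
import hmac

# Illustrative shared key; real systems use per-publisher asymmetric keys.
SECRET_KEY = b"publisher-signing-key"

def sign_content(media_bytes: bytes, key: bytes = SECRET_KEY) -> str:
    """Produce a provenance tag for a piece of media at publication time."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_content(media_bytes: bytes, tag: str, key: bytes = SECRET_KEY) -> bool:
    """Re-derive the tag and compare; any tampering changes the digest."""
    return hmac.compare_digest(sign_content(media_bytes, key), tag)

original = b"\x00\x01frame-data"
tag = sign_content(original)

print(verify_content(original, tag))                      # True: unmodified clip verifies
print(verify_content(original + b"deepfake-edit", tag))   # False: altered clip fails
```

A scheme of this kind does not detect deepfakes directly; rather, it lets platforms distinguish authenticated originals from content whose provenance cannot be established, which is one way a liability framework could operationalize "swift removal" obligations.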
CONCLUSION
The proliferation of deepfake technology poses profound legal and ethical challenges. While existing laws such as Section 66E of the IT Act offer some safeguards, they are inadequate in addressing the complexity and scale of the issue. Judicial pronouncements have begun to fill the void, but a comprehensive statutory regime is necessary.
The proposed Digital India Bill offers a timely opportunity to modernize India’s regulatory approach. By imposing stricter obligations on digital intermediaries and introducing clear standards for the governance of AI-generated content, India can move toward a more secure and accountable digital ecosystem.
Written By: Shubhangi Dengre (Associate)
[1] Advertising Standards Council of India, ASCI Report: Indian Advertising Industry Embraces AI (Mar. 19, 2025), https://www.ascionline.in/wpcontent/uploads/2025/03/AI-report-press-release.docx-1.pdf.
[2] Advertising Standards Council of India, ADNext Report: How AI Is Shaping the Future of Advertising (Mar. 2025), https://www.ascionline.in/academy/wp-content/uploads/2025/03/ADNext-Report-digital.pdf.
[3] Karan Johar v. Indian Pride Advisory (P) Ltd., 2024 SCC OnLine Bom 2444.
[4] Titan Industries Ltd. v. Ramkumar Jewellers, 2012 SCC OnLine Del 2382.
[5] Amitabh Bachchan v. Rajat Negi, 2022 SCC OnLine Del 4110.
[6] Chaitanya Rohilla v. Union of India, W.P. (C) 15596/2023.
[7] S.412 – SHIELD Act of 2023.
[8] S.146 – TAKE IT DOWN Act of 2025.
[9] S.3696 – DEFIANCE Act of 2024.
[10] Press Information Bureau, MoS Rajeev Chandrasekhar to hold a Digital India Dialogues session in Mumbai on principles of the Digital India Act, https://www.pib.gov.in/PressReleasePage.aspx?PRID=1926443.