INFORMATION TECHNOLOGY (INTERMEDIARY GUIDELINES AND DIGITAL MEDIA ETHICS CODE) AMENDMENT RULES, 2026: A DETAILED LEGAL ANALYSIS
1. Background
In recent years, Artificial Intelligence (“AI”) has evolved at an unprecedented pace. Today, advanced AI tools can generate highly realistic content, ranging from fabricated videos of individuals speaking words they never uttered to cloned voices, manipulated images, and deepfakes that are nearly indistinguishable from reality. While these technological advancements offer immense innovation potential, they also pose significant risks to privacy, reputation, and the integrity of information in the digital ecosystem.
On 10 February 2026, the Ministry of Electronics and Information Technology (“MeitY”) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“2026 Amendment Rules”), thereby further amending the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules 2021”). The amendments were issued under Section 87 of the Information Technology Act, 2000 and will come into force on 20 February 2026. In this article, we discuss the amendments and their implications for intermediaries.
1.1 IT Rules 2021 Framework
The IT Rules 2021 introduced a structured regulatory regime for intermediaries, including due diligence obligations, grievance redressal mechanisms, and enhanced compliance requirements for significant social media intermediaries (“SSMIs”). These rules operate alongside Section 79 of the Information Technology Act, which grants intermediaries conditional “safe harbour” protection subject to compliance with prescribed due diligence.
Subsequent amendments in 2022, 2023 and 2025 strengthened user protection and governmental oversight over digital platforms. Against this evolving regulatory backdrop, the 2026 Amendment Rules mark a significant expansion of the compliance regime, particularly in response to the rapid proliferation of AI technologies, deepfakes, and synthetically generated content.
1.2 Purpose and Policy Objective
The principal objectives of the amendments include:
- Regulation of AI-generated and synthetic content;
- Mitigation of risks arising from deepfakes and identity misuse;
- Enhanced platform accountability through proactive monitoring;
- Faster content removal and grievance response timelines; and
- Strengthening enforcement mechanisms affecting safe harbour protection.
The amendments indicate a clear policy shift towards technological governance and platform responsibility in addressing misinformation and digitally manipulated content, signalling technology-neutral yet future-oriented regulation that seeks to address both current deepfake risks and emerging forms of synthetic media manipulation.
2. Key Amendments Introduced
New Definitions under Rule 2: The amendment introduces the following new definitions and clarifications under Rule 2 of the IT Rules 2021.
(i) Rule 2(1)(ca): “Audio, Visual or Audio-Visual Information”
A broad definition covering images, graphics, videos, sound recordings, or any other content created, generated, modified, or altered through a computer resource.
(ii) Rule 2(1)(wa): “Synthetically Generated Information”
Defined as content artificially or algorithmically created or altered using computer resources in a manner that appears real or authentic and is indistinguishable from real persons or events.
The definition excludes:
- Routine or good-faith editing or enhancement;
- Preparation of documents or educational materials;
- Technical improvements such as translation or accessibility enhancement.
(iii) Rule 2(1A): Scope Expansion
Any reference to “information” used for unlawful acts now includes synthetically generated information.
(iv) Rule 2(1B): Safe Harbour Clarification
Removal or disabling of unlawful information by intermediaries using reasonable technical measures will not violate Section 79(2) safe harbour conditions.
Regulatory significance: The definitional framework creates the legal basis for regulating deepfakes and AI-generated content.
A. Due Diligence Obligations – Amendments to Rule 3
(i) Periodic User Notification – Rule 3(1)(c) (Substituted)
Intermediaries must inform users at least once every three months regarding:
- Consequences of non-compliance with platform rules;
- Potential penalties under applicable law;
- Mandatory reporting obligations for offences.
These obligations also imply the need for robust internal record-keeping, audit trails, and user communication systems, as compliance may need to be demonstrated to regulators and courts in the event of disputes.
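The Rules do not prescribe how this quarterly cadence is to be operationalised. Purely by way of illustration, the minimal Python sketch below (all names and data structures are hypothetical) shows one way a platform might select users due for the periodic notice and retain a verifiable audit record of each delivery:

```python
import hashlib
from datetime import datetime, timedelta, timezone

# "At least once every three months" under the substituted Rule 3(1)(c).
NOTIFICATION_INTERVAL = timedelta(days=90)

def users_due_for_notice(users: list[dict]) -> list[dict]:
    """Select users whose last compliance notice is older than the interval."""
    now = datetime.now(timezone.utc)
    return [u for u in users
            if u.get("last_notified") is None
            or now - u["last_notified"] >= NOTIFICATION_INTERVAL]

def record_notice(user: dict, notice_text: str, audit_log: list) -> None:
    """Append an audit entry after the notice is delivered to the user."""
    sent_at = datetime.now(timezone.utc)
    audit_log.append({
        "user_id": user["id"],
        "sent_at": sent_at.isoformat(),
        # Hashing the notice text records exactly which version was sent.
        "notice_sha256": hashlib.sha256(notice_text.encode()).hexdigest(),
    })
    user["last_notified"] = sent_at
```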
(ii) Additional Disclosure Obligations for Synthetic Content Tools – Rule 3(1)(ca)
Intermediaries facilitating creation of synthetic content must inform users that:
- Misuse may attract liability under multiple laws, including the IT Act, the Bharatiya Nyaya Sanhita, the POCSO Act, and the Representation of the People Act; and
- Violations may result in content removal, account suspension, identity disclosure, and reporting to authorities.
(iii) Expeditious Action Requirement – Rule 3(1)(cb)
Intermediaries must take prompt action upon becoming aware of violations relating to synthetic content through complaints or actual knowledge.
B. Compliance Timeline Changes under Rule 3 and Rule 4
The amendment substantially reduces response timelines across the framework.
(i) Rule 3(1)(d) – Government Direction Compliance
| Requirement | Earlier Timeline | Amended Timeline |
|---|---|---|
| Removal or disabling access upon government direction | 36 hours | 3 hours |
This twelve-fold reduction in the compliance window represents a significant increase in regulatory urgency.
(ii) Rule 3(2)(a)(i) – Grievance Resolution
| Requirement | Earlier | Amended |
|---|---|---|
| Grievance disposal period | 15 days | 7 days |
| Action in urgent cases (proviso) | 72 hours | 36 hours |
(iii) Rule 3(2)(b) – Specific Content Complaints
| Requirement | Earlier | Amended |
|---|---|---|
| Response to certain user complaints | 24 hours | 2 hours |
Impact: These changes require near real-time compliance infrastructure. In practical terms, this necessitates 24/7 operational readiness, automated escalation workflows, and tight integration between legal, trust and safety, and technical teams, particularly for multinational platforms operating at scale.
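By way of illustration only, the following Python sketch encodes the amended deadlines (the mapping of request types is indicative, not exhaustive) in the kind of simple escalation check such workflows would build on:

```python
from datetime import datetime, timedelta, timezone

# Indicative mapping of request types to the amended statutory deadlines.
SLA = {
    "government_direction": timedelta(hours=3),   # Rule 3(1)(d)
    "specified_complaint":  timedelta(hours=2),   # Rule 3(2)(b)
    "urgent_grievance":     timedelta(hours=36),  # proviso to Rule 3(2)(a)(i)
    "ordinary_grievance":   timedelta(days=7),    # Rule 3(2)(a)(i)
}

def deadline(ticket_type: str, received_at: datetime) -> datetime:
    """Latest time by which the request must be actioned."""
    return received_at + SLA[ticket_type]

def needs_escalation(ticket_type: str, received_at: datetime,
                     warning_fraction: float = 0.5) -> bool:
    """Escalate to on-call staff once a set fraction of the window has elapsed."""
    elapsed = datetime.now(timezone.utc) - received_at
    return elapsed >= SLA[ticket_type] * warning_fraction
```

For example, a government direction received at 02:00 UTC must be actioned by 05:00 UTC; with the settings above, it would escalate to on-call staff from 03:30 UTC onward.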
C. Regulation of Synthetic Content – New Rule 3(3)
A new due diligence framework specifically governs synthetically generated information.
(i) Rule 3(3)(a)(i): Proactive Technical Safeguards
Intermediaries must deploy reasonable technical measures to prevent synthetic content that:
- Contains child sexual abuse material;
- Involves non-consensual intimate imagery;
- Creates false documents or electronic records;
- Relates to weapons or explosives;
- Deceptively portrays individuals or events.
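The Rules leave open what qualifies as “reasonable technical measures”. As a minimal sketch, assuming hypothetical per-category classifiers (each returning a probability that the content falls within a prohibited category), such a pre-generation or pre-upload gate might be composed as follows:

```python
from typing import Callable, Dict, List, Tuple

# Each check returns the probability that content falls within a prohibited
# category; in practice these would be trained models or moderation APIs.
ProhibitedCheck = Callable[[bytes], float]

def build_gate(checks: Dict[str, ProhibitedCheck],
               threshold: float = 0.9) -> Callable[[bytes], Tuple[bool, List[str]]]:
    """Compose per-category checks into a single allow/deny decision."""
    def gate(content: bytes) -> Tuple[bool, List[str]]:
        flagged = [name for name, check in checks.items()
                   if check(content) >= threshold]
        return (not flagged, flagged)  # (allowed?, categories triggered)
    return gate

# Usage with stand-in checks:
gate = build_gate({"csam": lambda c: 0.0, "false_document": lambda c: 0.95})
allowed, reasons = gate(b"...")  # -> (False, ['false_document'])
```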
(ii) Rule 3(3)(a)(ii): Mandatory Labelling and Disclosure
Permissible synthetic content must:
- Be prominently labelled;
- Include audio or visual disclosure;
- Embed permanent metadata and unique identifiers identifying origin;
- Ensure visibility of such labels.
(iii) Rule 3(3)(b): Non-Removal of Labels
Intermediaries must prevent modification or removal of labels or metadata.
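The Rules do not mandate a particular metadata format (industry provenance standards such as C2PA are one possible vehicle). As a minimal sketch of the underlying idea, the Python fragment below embeds a signed provenance record whose removal or alteration is detectable before content is served; the key handling and field names are purely illustrative:

```python
import hashlib
import hmac
import json
import uuid
from datetime import datetime, timezone

SERVER_KEY = b"platform-secret"  # in practice a managed key, never hard-coded

def make_provenance(content: bytes, tool: str) -> dict:
    """Permanent metadata and unique identifier for synthetic content."""
    record = {
        "synthetic": True,
        "identifier": str(uuid.uuid4()),
        "generated_by": tool,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Detect stripped or altered labels/metadata before serving content."""
    unsigned = {k: v for k, v in record.items() if k != "mac"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SERVER_KEY, payload, hashlib.sha256).hexdigest()
    return ("mac" in record
            and hmac.compare_digest(record["mac"], expected)
            and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest())
```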
D. Changes for Significant Social Media Intermediaries – Rule 4
(i) New Rule 4(1A): Pre-Publication Verification
SSMIs must:
- Require users to declare whether content is synthetically generated;
- Verify such declaration using technical measures;
- Ensure clear labelling where synthetic content is identified.
Failure to comply may constitute failure to exercise due diligence.
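Rule 4(1A) does not prescribe how verification is to be performed. A minimal sketch, assuming a hypothetical detector that scores content for synthetic origin, of how declaration, automated verification, and labelling might be combined:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Upload:
    content: bytes
    user_declared_synthetic: bool  # declaration required from the user

def pre_publication_check(upload: Upload,
                          detector: Callable[[bytes], float],
                          threshold: float = 0.8) -> dict:
    """Combine the user's declaration with automated verification."""
    score = detector(upload.content)
    is_synthetic = upload.user_declared_synthetic or score >= threshold
    return {
        "label_as_synthetic": is_synthetic,
        # A false declaration caught by the detector may warrant human review.
        "flag_for_review": (not upload.user_declared_synthetic)
                           and score >= threshold,
    }

# Usage with a stand-in detector that returns a fixed score:
result = pre_publication_check(Upload(b"...", False), detector=lambda c: 0.95)
# result -> {'label_as_synthetic': True, 'flag_for_review': True}
```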
(ii) Rule 4(4): Mandatory Deployment of Technology
The earlier obligation to “endeavour” to deploy detection tools is replaced with a mandatory requirement to deploy appropriate technological measures.
E. Enforcement Changes
- Rule 7 Amendment: Reference to the Indian Penal Code is replaced with the Bharatiya Nyaya Sanhita, 2023.
- Government Direction Procedure (Rule 3(1)(d)): Government or police directions must be issued through authorised officers; where issued by the police administration, the issuing officer must not be below the rank of Deputy Inspector General of Police.
3. Legal and Regulatory Impact
A. Impact on Intermediaries
The amendments impose significant technological and operational obligations, particularly on platforms hosting user-generated content or offering AI tools. Such platforms must implement verification mechanisms, metadata-embedding systems, and automated moderation technologies.
B. Increased Compliance Burden
The reduced timelines and technical requirements may substantially increase compliance costs. Smaller intermediaries may face disproportionate operational challenges in meeting these standards.
C. Safe Harbour Implications
While Rule 2(1B) clarifies that removal of unlawful content does not violate safe harbour, the expanded due diligence obligations effectively raise the threshold for retaining immunity under Section 79. As a result, the availability of Section 79 immunity is likely to become increasingly fact-intensive and contested in litigation, particularly in cases involving alleged failures of proactive due diligence.
D. Privacy and Free Speech Concerns
The amendments raise potential privacy and free speech concerns, including risks of over-removal due to strict timelines, a possible chilling effect on lawful expression, identity disclosure implications, and increased reliance on automated monitoring by intermediaries. There is also a real risk of collateral censorship and over-blocking, especially where automated tools are deployed under strict and short compliance deadlines.
E. Practical Implementation Challenges
The amendments may pose practical challenges, including ensuring accurate deepfake detection, maintaining permanent metadata, addressing cross-border jurisdictional issues, and determining the scope of “reasonable technical measures.”
F. Shift from Reactive to Proactive Regulation
The 2026 Amendment Rules move from a notice-based liability model to proactive technological regulation and verification obligations.
4. Implications for Intermediaries and Platforms
The 2026 Amendment Rules establish a formal regulatory framework for AI and deepfake content while significantly tightening compliance timelines for intermediaries. They introduce mandatory labelling and metadata requirements for synthetic content and require significant social media intermediaries to obtain user declarations prior to publication and to deploy proportionate technical measures to verify such declarations. Further, the availability of safe harbour protection is increasingly contingent upon strict adherence to enhanced due diligence obligations, thereby requiring intermediaries to undertake substantial operational and compliance restructuring.
5. Conclusion
The 2026 Amendment Rules significantly reshape India’s intermediary liability regime by introducing comprehensive regulation of synthetically generated content and strengthening enforcement mechanisms. The amendments reflect a policy emphasis on proactive platform governance and AI accountability.
While the regulatory objectives address emerging digital harms, their implementation will require careful balancing against technological feasibility, constitutional protections, and industry capability. Intermediaries operating in India must undertake immediate compliance reviews and technological upgrades to meet the expanded obligations.
Written by:
Mitali Umat (Associate) and
Shubhangi Dengre (Associate)