Editorial: India’s proposed IT rule changes risk expanding executive overreach
Though the intent of ensuring a “safe, trusted and accountable Internet” is laudable, the mechanism could upset the delicate balance between regulation and freedom
Published Date – 2 April 2026, 09:58 PM
In an increasingly digitised world, nations face the unenviable task of formulating policies that strike a fine balance between regulating online content in the larger public interest and protecting privacy and individual rights. The latest draft amendments to the Information Technology (IT) Rules, 2021, released by the Ministry of Electronics and Information Technology, raise concerns over executive overreach and the undermining of privacy rights. Most significant among the proposals is one requiring digital intermediary platforms to comply with a broad array of government-issued advisories, guidelines and standard operating procedures as a condition for retaining safe harbour protections under Section 79 of the IT Act. While the intent of ensuring a “safe, trusted and accountable Internet” is laudable, the mechanism risks upsetting the delicate balance between regulation and freedom. The amendments would significantly tighten government control over online content, potentially making social media platforms directly liable for user-generated content and treating influencers as publishers. They empower the government to mandate content removal on stricter and faster timelines, expand data retention, and withdraw safe harbour protections for non-compliance, posing serious challenges to free speech, privacy and intermediary liability. Such over-regulation could strip platforms of legal immunity for user content, forcing them to become proactive moderators rather than passive hosts, and the combination of increased executive control and swift takedown deadlines is likely to breed widespread self-censorship. Digital rights groups have warned that the proposals represent a major expansion of censorship powers over online speech.
By making compliance with non-statutory instruments such as advisories and clarifications mandatory, the government would effectively grant itself sweeping powers without clear legislative backing, creating a regulatory grey zone. The fear is not unfounded: ambiguity in rules often leads to excessive caution, which can stifle free speech. The proposed widening of content oversight is equally worrisome. Bringing intermediaries, and even user-generated news content, under the purview of an Inter-Departmental Committee is a step toward tighter scrutiny of online communication, and treating everyday users and influencers on a par with digital news publishers in terms of compliance obligations could create complications. While accountability is necessary, particularly in this era of misinformation and fake news, the absence of transparent checks and balances in such oversight mechanisms could lead to arbitrary decision-making. At the same time, the government’s concerns over deepfakes and other misuse of AI tools are not without merit; the rapid proliferation of AI-generated misinformation poses a genuine threat to public trust and social stability. However, combating deepfakes and harmful content should not come at the cost of undermining fundamental digital freedoms. Ultimately, effective regulation must be rooted in clarity, proportionality and accountability. Balancing innovation with regulation is the key to ethical AI governance.