Court Targets Deepfake Spread
The Gujarat High Court is formally investigating the spread of AI-generated deepfake content. It has issued notices to major online platforms including Meta India, Google, X, Reddit, and Scribd. This move comes after a Public Interest Litigation (PIL) highlighted the urgent need for strong rules to stop AI misuse. The court is examining whether platforms are meeting their legal obligations to act on illegal online content, stressing that weak enforcement, not a lack of laws, is the main problem. It ordered prompt action on official notices and integration with the government's SAHYOG portal, aiming for greater accountability and faster content moderation. The PIL argues that deepfake technology seriously threatens public order and democracy because it is realistic enough that viewers struggle to distinguish it from genuine content.
India Tightens AI Rules for Platforms
This court action comes as India's regulations for digital platforms are rapidly changing and becoming much stricter. Amendments to the IT Rules, set for February 2026, mark a major shift from a reactive to a proactive approach. Key changes include significantly faster takedown deadlines: a 3-hour window for illegal content after official notice, and an even tighter 2-hour deadline for non-consensual sexual imagery, including deepfakes. This is a sharp reduction from the earlier 24-36 hour windows. Platforms must also clearly label all 'AI-generated content' (such as text, images, audio, and video) and, where feasible, embed traceable markers. The SAHYOG portal, created by the Ministry of Home Affairs, centralizes takedown requests and lets law enforcement work directly with platforms.
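The labeling requirement above can be sketched in code. This is a minimal, illustrative example only: the amended IT Rules require a clear 'AI-generated' label and, where feasible, a traceable marker, but they do not prescribe any particular schema, so the field names, model name, and hashing choice here are assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

def label_ai_content(content_bytes: bytes, model_name: str) -> dict:
    """Attach a user-facing label and a traceable marker to AI-generated content.

    Hypothetical schema for illustration; not the statutory format.
    """
    return {
        "label": "AI-generated content",  # clear user-facing disclosure
        "generator": model_name,          # provenance: which system produced it
        "created_at": datetime.now(timezone.utc).isoformat(),
        # Traceable marker: a content hash that lets the same item be
        # matched later even if the file is renamed or re-uploaded.
        "trace_id": hashlib.sha256(content_bytes).hexdigest(),
    }

record = label_ai_content(b"example synthetic image bytes", "example-image-model")
print(json.dumps(record, indent=2))
```

A hash-based marker like this survives simple re-uploads but not re-encoding; production systems would more likely use embedded provenance metadata or watermarking.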
Tech Firms Face Compliance Hurdles
For companies like Meta, Google, X, Reddit, and Scribd, these new rules mean tougher compliance and major operational changes. For major platforms, the fast takedown requirements call for significant investment in automated detection tools and round-the-clock human review. Non-compliance risks losing the legal protection of Section 79 of the IT Act, which would make platforms directly liable for user content. While Meta and Google are said to be improving compliance, others are struggling to integrate with systems like SAHYOG. X, formerly Twitter, was previously questioned by India's IT ministry over its Grok AI tool generating inappropriate content, showing platform-specific risks. India's strict AI rules align with a global trend of governments demanding more transparency and accountability from tech companies.
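The deadline arithmetic platforms must now track can be sketched as follows, using the windows described above (3 hours for illegal content after official notice, 2 hours for non-consensual sexual imagery). The category names are illustrative labels, not statutory terms.

```python
from datetime import datetime, timedelta, timezone

# Windows drawn from the amended IT Rules as described in this article;
# the dictionary keys are illustrative, not legal category names.
TAKEDOWN_WINDOWS = {
    "illegal_content": timedelta(hours=3),
    "non_consensual_sexual_imagery": timedelta(hours=2),
}

def takedown_deadline(notice_time: datetime, category: str) -> datetime:
    """Return the latest time by which a platform must act on a notice."""
    return notice_time + TAKEDOWN_WINDOWS[category]

notice = datetime(2026, 2, 1, 9, 0, tzinfo=timezone.utc)
print(takedown_deadline(notice, "non_consensual_sexual_imagery"))
# 2026-02-01 11:00:00+00:00
```

Even this trivial calculation highlights the operational shift: a notice received overnight still starts the clock, which is why round-the-clock review capacity becomes necessary.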
Concerns Over Strict AI Rules
The tougher regulatory environment creates considerable risks for online platforms. The very short takedown window of 2-3 hours poses practical difficulties: automated systems lack human judgment and may mistakenly remove content, potentially raising free speech concerns and damaging a platform's reputation. Critics worry these strict measures could lead to over-censorship, hindering creative expression and satire. They also point out that the SAHYOG portal's opaque operation could bypass court oversight, effectively allowing censorship through unilateral takedown orders. The broad definition of 'AI-generated content' might also sweep in harmless AI uses, extending the rules well beyond malicious deepfakes.
India's Stance on AI Content
India's rules, backed by the Gujarat High Court's recent action, show a clear position against the misuse of AI-generated content. The fast timelines and required transparency set a global standard, pushing platforms to adapt quickly. The success of these rules depends on how well platforms operate, their ability to balance compliance with user rights, and how detection technology improves. The combined effort of court oversight and government regulation shows an ongoing push to strengthen India's digital oversight against the growing challenges of AI-generated media.