THE SEAMLESS LINK
The proposed regulatory framework for Synthetically Generated Information (SGI) in India, building upon the IT Rules 2021, seeks to address the growing challenges of deepfakes and AI-generated content. While the intent to fortify user safety, traceability, and accountability is evident, the chosen regulatory path, heavily emphasizing intermediary liability, confronts a complex and fragmented AI value chain. This complexity breeds concerns that the practical implementation of these rules may disproportionately impact India's vibrant creator economy and its numerous micro, small, and medium enterprises (MSMEs) reliant on AI tools.
The AI Value Chain vs. Intermediary Liability
The traditional model of internet regulation, where distinct roles defined content creators and platform hosts, is rendered inadequate by generative AI. A single piece of AI-generated content can traverse a path involving foundation model developers, deployment specialists, individual creators leveraging tools, and multiple dissemination platforms. In this intricate ecosystem, no single entity possesses complete oversight of the content's lifecycle. The draft amendments, introducing regulations for SGI, attempt to anchor responsibility primarily with intermediaries. This approach, while providing a legal basis for labelling and accountability, risks oversimplifying a multi-faceted problem. Unlike the European Union's comprehensive, risk-based AI Act, which mandates transparency and labelling for generative AI outputs, or the U.S.'s decentralized, sector-specific approach relying on existing laws and voluntary commitments, India's focus on intermediary liability may struggle to map onto the distributed nature of AI development and deployment.
Stifling the Creator Economy and MSMEs
India's creator economy is a significant engine of digital inclusion, employment, and cultural expression. Projected to be worth approximately $3.9 billion by 2030, the sector supports millions of creators who increasingly depend on AI-enabled tools to raise content quality and cut production costs. MSMEs likewise recognize AI's potential for growth and productivity, with 91% believing AI should be democratically accessible, yet cost and skill gaps remain substantial barriers. The proposed SGI rules, by mandating user declarations, verification, and prominent labelling, coupled with the potential loss of safe harbour for non-compliance, could impose significant operational burdens. This might lead to excessive content removal or self-censorship, particularly for smaller creators and businesses operating on thin margins. The risk is that regulatory caution, driven by fear of liability, could inadvertently suppress legitimate creative expression and economic activity rather than precisely targeting harmful deepfakes and deceptive content.
Global Regulatory Contrast and India's Policy Evolution
Globally, AI governance is evolving rapidly. The EU AI Act, which takes effect in stages, categorizes AI systems by risk and imposes stringent transparency requirements, including machine-readable labelling for AI-generated content. The U.S., conversely, adopts a more fragmented strategy, leveraging existing consumer protection, civil rights, and sector-specific regulations while emphasizing voluntary risk management. India's own journey with digital regulation, including the foundational IT Rules 2021 and the 'Startup India' initiative, which has sought to streamline compliance and incentivize innovation, now faces the challenge of adapting to generative AI's distinctive characteristics. While past regulatory overhauls have aimed to foster growth, the broad strokes of the proposed SGI rules raise questions about whether they are sufficiently nuanced to avoid unintended consequences for a dynamic digital economy. The government's recent efforts to support deep tech startups, extending classification periods and increasing revenue caps for incentives, signal an awareness of the need for tailored policy, but the SGI rules appear to lean towards a more uniform, liability-driven approach.
The Forensic Bear Case
While the amendments aim to clarify obligations, the definition of SGI, which covers audio, visual, or audio-visual content that appears real, could still leave text-based AI outputs in a grey area. The mandatory "suo motu" duty for intermediaries to proactively identify and act on unlawful SGI, alongside stringent takedown timelines, places a heavy onus on platforms. Significant Social Media Intermediaries (SSMIs) face enhanced obligations, including user declarations and verification, with the potential loss of safe harbour protection serving as a potent deterrent against non-compliance. This heightened liability environment may incentivize over-compliance, leading to content moderation that sweeps in material beyond what is genuinely harmful. Expert analyses suggest that poorly calibrated enforcement can result in uneven application, over-restriction of lawful content, and a chilling effect on innovation. The absence of clearly defined technical measures, and of sufficient flexibility for nuanced contextual judgment, could amplify these risks.
Future Outlook
The effective governance of generative AI in India hinges on its ability to balance robust safeguards with the imperative to foster innovation and economic growth. As global regulatory frameworks mature, India has an opportunity to refine its approach, potentially integrating more risk-based considerations and ensuring that policies are adaptable to the rapid pace of technological change. The success of the proposed SGI regulations will likely depend on calibrated enforcement, clear guidelines for technological implementation, and a continuous dialogue with the digital economy's stakeholders to avoid inadvertently stifling the very ecosystem they aim to protect.
