Tech | 30th October 2025, 7:31 AM

India's Ministry of Electronics and Information Technology (MeitY) is proposing significant changes to the Information Technology (IT) Rules, 2021, mandating that all artificial intelligence (AI)-generated or synthesized content be clearly labeled. Under the proposed amendments, platforms will need to mark such content prominently, with labels covering at least 10% of the visual area, or the initial portion of audio content. Large social media intermediaries will also be required to implement technical systems that automatically detect and label such content.
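To make the 10% visual-area requirement concrete, the following is a minimal sketch of how a platform might verify that a label overlay on an image or video frame meets that threshold. The function name and interface are hypothetical illustrations, not anything prescribed by the proposed rules, which specify only the coverage fraction.

```python
def label_coverage_ok(img_w: int, img_h: int,
                      label_w: int, label_h: int,
                      min_fraction: float = 0.10) -> bool:
    """Check whether a rectangular label overlay covers at least
    `min_fraction` of the frame area (hypothetical compliance check
    modelled on the proposed 10% visual-area rule)."""
    if img_w <= 0 or img_h <= 0:
        raise ValueError("frame dimensions must be positive")
    coverage = (label_w * label_h) / (img_w * img_h)
    return coverage >= min_fraction

# A 1920x1080 frame with a 640x360 label covers ~11.1% of the area.
print(label_coverage_ok(1920, 1080, 640, 360))  # → True
```

A real implementation would also need to account for label legibility, placement, and duration on screen, none of which this area-only sketch captures.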
The move is driven by the rapid advancement and adoption of AI technologies, including sophisticated video generators like OpenAI's Sora and Google's Veo, which blur the line between reality and fiction. The government aims to mitigate the risks of hoaxes, scams, and reputational damage caused by AI-generated misinformation. The initiative also aligns India with international efforts to regulate AI content, such as those in the European Union and California.
Legal precedents, like cases involving actors Aishwarya Rai Bachchan and Hrithik Roshan concerning deepfakes, have also highlighted the need for such regulations. Major platforms like YouTube and Meta are already taking steps towards labeling AI-generated content.
However, the proposal has drawn criticism from organizations like the Internet Freedom Foundation (IFF). They argue that the definition of 'synthetically generated information' could be overly broad, potentially stifling creativity and amounting to 'compelled speech'. Concerns also exist about the technical feasibility of enforcement, the potential for malicious actors to evade the rules, and the cost of implementing detection tools. Some experts suggest accountability should lie more with the creators of foundational AI models than with the platforms that host the content.
Impact
This news will significantly impact the Indian technology and media sectors. It will necessitate compliance changes for social media platforms operating in India, affect content creators, and influence the adoption and development of AI technologies within the country. The rules aim to foster greater transparency but could also introduce compliance challenges.
Impact Rating: 8/10
Difficult Terms
Deepfakes: Highly realistic synthetic media, typically videos or images, created using AI to depict someone saying or doing something they never did.
Synthetically generated information: Content that has been created or modified by algorithms in a way that makes it appear authentic or true, especially when generated by AI.
Intermediaries: Online platforms or services that host, transmit, or link to third-party content, such as social media sites and search engines.
LLM (Large Language Model): A type of AI designed to understand, generate, and process human language. Examples include models developed by OpenAI, Google, and Anthropic.
Compelled speech: A legal concept referring to the requirement to express a particular viewpoint, which can infringe on freedom of speech.