India Court Orders Meta, Google to Tackle AI Deepfakes

Author: Aarav Shah | Published at:
Overview

India's Delhi High Court is set to order Meta and Google to remove AI-generated deepfakes and unauthorized merchandise using cricketer Gautam Gambhir's likeness. The order follows a ₹2.5 crore damages suit and signals increasing platform accountability for digital impersonation, which has surged since 2025. The move reflects India's stricter stance on AI-generated content and personality rights, with implications for the wider digital media landscape.

Celebrities Seek Protection from AI Impersonation

Cricketer Gautam Gambhir's pursuit of ₹2.5 crore in damages from Meta and Google for alleged misuse of his likeness highlights a growing wave of AI-fueled digital impersonation. Since late 2025, campaigns using AI face-swapping and voice-cloning tools have surged, targeting public figures. Gambhir's case, which involves unauthorized merchandise and deepfakes, is one of many in which celebrities have turned to the courts for protection. Figures such as Anil Kapoor, Amitabh Bachchan, and Sonakshi Sinha have sought similar legal recourse against AI-generated imitations and the commercial exploitation of their images.

India's Evolving AI Rules

This legal action unfolds against a backdrop of India's evolving regulatory framework for AI. Proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules in 2025 are poised to include "synthetically generated information" under explicit oversight, increasing due diligence demands for online platforms. Indian courts are increasingly recognizing personality rights, often derived from constitutional rights to dignity and privacy, even without a single codified statute. Precedents, such as a 2024 ruling concerning Anil Kapoor, have reinforced celebrities' control over their image and voice.

Global Risks and Platform Costs

The global rise of AI-generated content poses significant challenges. Synthetically generated material is projected to make up as much as 90% of online content by 2026, and deepfake-enabled fraud losses in the U.S. are estimated to reach $40 billion by 2027. These risks directly threaten the integrity of the digital commerce and advertising models on which companies like Meta and Google depend. Platforms often rely on passive takedown systems, which can lag behind the viral spread of deepfakes; this reactive approach tends toward minimal compliance rather than proactive content screening. The associated costs, including legal fees, potential fines, and investment in advanced AI detection technology, are considerable.

Platform Scrutiny and Future Impact

The sheer scale of these platforms, with Meta Platforms (META) valued at ~$1.50 trillion and Alphabet (GOOGL) at ~$3.51 trillion as of March 25, 2026, means the financial stakes for compliance and technological investment are immense. A precedent-setting ruling in India could broaden platform liability beyond user-generated content to cover the facilitation of AI-driven misinformation, potentially impacting advertising revenue. Unlike some regions where regulatory pressure is less intense, Meta and Google face growing scrutiny in India, where personality rights are being actively reinforced. As India's judiciary continues to interpret AI's role in content, and as regulations sharpen, platforms face increasing pressure to enhance moderation and verification. This could reshape global standards for policing and monetizing digital identity.

Disclaimer: This content is for informational purposes only and does not constitute financial or investment advice. Readers should consult a SEBI-registered advisor before making decisions. Investments are subject to market risks, and past performance does not guarantee future results. The publisher and authors are not liable for any losses. Accuracy and completeness are not guaranteed, and views expressed may not reflect the publication's editorial stance.