Who is legally responsible if an AI system generates defamatory or harmful content — the developer, the user, or the AI company?

Indian law does not yet have a dedicated statutory framework specifically addressing liability for AI-generated content, such as defamatory statements (e.g., false accusations harming reputation) or harmful outputs (e.g., misinformation leading to injury or discrimination). Instead, courts apply existing laws like the Indian Penal Code (IPC), 1860; the Information Technology (IT) Act, 2000; the Consumer Protection Act (CPA), 2019; and tort principles. The Digital Personal Data Protection Act, 2023 (DPDP Act) may indirectly apply if personal data misuse contributes to harm.

The Supreme Court and High Courts have not issued direct judgments on AI-specific defamation as of October 2025, but precedents on intermediary liability (e.g., Shreya Singhal v. Union of India, 2015) and on copyright authorship (e.g., recent Delhi High Court rulings, upheld by the Supreme Court, emphasizing that Indian law is human-centric) provide guidance. Liability is fact-specific, often shared, and hinges on control, knowledge, and intent. AI itself cannot be held liable because it lacks legal personality; only humans or entities (e.g., companies) can be held accountable.

Key Legal Frameworks

  • Defamation (IPC Sections 499–502; now Section 356 of the Bharatiya Nyaya Sanhita, 2023): Covers false statements harming reputation, with both civil and criminal remedies. Criminal defamation is punishable with up to 2 years' imprisonment, a fine, or both.
  • Harmful Content (IT Act Sections 67 and 69A; IT Rules, 2021): Addresses obscene, threatening, or misleading electronic content (Section 66A, which criminalized offensive messages, was struck down in Shreya Singhal). Under the IT Rules, 2021, intermediaries must remove unlawful material within 36 hours of receiving a court order or government notice.
  • Product/Service Liability (CPA, 2019, Sections 2(34), 84–87): Treats defective AI as a "product" or "service." Strict (no-fault) liability applies if harm results from a design or manufacturing defect or from inadequate warnings.
  • Torts (common-law defamation and negligence principles): Civil suits for damages caused by negligent or defamatory conduct.
  • Intermediary Safe Harbor (IT Act Section 79): AI providers (as platforms) are exempt from liability for user-generated content unless they have "actual knowledge" of unlawful material or fail to observe their due diligence obligations.

Who Bears Responsibility?

Liability depends on the AI's role (e.g., a generative tool like ChatGPT) and the nature of the harm. It is often distributed, but courts prioritize the party with the most control. Here's a breakdown:

User: primary liability for publication/use (high)

  • The user acts as the "publisher" if they disseminate the AI output (e.g., posting defamatory text on social media).
  • Under IPC Section 499, intent and negligence in verifying content matter. A user who prompts the AI to generate harmful material (e.g., deepfakes for harassment) can face criminal charges (e.g., IT Act Section 66E for privacy violations).
  • Example: In a hypothetical suit, a user who generates and shares AI-produced falsehoods about a rival could be sued for defamation, having "republished" the content.
  • Exposure is low if the content stays private and unshared, but ethical risks remain.

Developer (individual coder/programmer): liability for design flaws (moderate)

  • Liable under tort and CPA principles if negligent in curating training data (e.g., biased inputs leading to discriminatory output).
  • Rarely the sole target unless intent is proven (e.g., deliberately coding the system to produce libelous content).
  • Example: If an app's AI hallucinates facts because of poor algorithms, the developer could face negligence claims, though proving "fault" is challenging without evidence of inadequate human oversight.
  • Precedent: Aligns with the negligence principle in Donoghue v. Stevenson, as adopted in India.

AI Company (e.g., OpenAI or an Indian firm): liability for systemic defects (high, including vicarious liability)

  • Treated as the "manufacturer" under the CPA for defective products/services causing harm (strict liability; no need to prove negligence).
  • Loses safe harbor under IT Act Section 79 if it fails to act on complaints or enables foreseeable harm (e.g., not filtering deepfakes).
  • Liable if outputs infringe copyright (Copyright Act, 1957) or violate the DPDP Act (penalties of up to ₹250 crore for data misuse).
  • Example: In Anil Kapoor v. Simply Life India (Delhi HC, 2023), the court granted injunctive relief against AI-enabled misuse of personality rights; similar reasoning can extend to defamation.
  • The company bears vicarious liability for its employees' acts (State of Rajasthan v. Vidyawati, 1962).

Judicial Approach and Precedents

  • Human-Centric View: The Supreme Court has upheld that only humans/entities can be "authors" or liable (RAGHAV AI case, Delhi HC 2023, affirmed 2025)—AI isn't a legal person, so responsibility traces to humans/companies.
  • Intermediary Cases: In Google India Pvt. Ltd. v. Visakha Industries (2019), the Supreme Court ruled that platforms lose immunity if they ignore takedown notices for defamatory content. Applied to AI, companies must act promptly on complaints and exercise due diligence over outputs.
  • Product Liability: The CPA's strict regime (inspired by EU directives) holds AI firms accountable for "defective" systems even when those systems act autonomously (analogous to liability debates around self-driving car accidents, such as the Uber incident).
  • Challenges in Proof: Courts require evidence of "actual knowledge" or a defect (e.g., a forensic audit of AI logs). Black-box AI complicates this, but the CPA shifts the burden onto the provider.

Emerging Trends and Recommendations

  • Regulatory Gaps: The proposed Digital India Act (expected 2025–26) may introduce AI-specific rules, including risk classification and mandatory audits (per MeitY guidelines). NITI Aayog's "Responsible AI" framework urges shared liability.
  • Shared Models: Experts advocate "proportional liability"—e.g., 60% on company for design flaws, 40% on user for misuse—to balance innovation and accountability.
  • Practical Advice:
    • Users: Verify AI outputs; avoid publishing unedited content.
    • Developers/Companies: Implement safeguards (e.g., content filters, disclaimers); conduct bias audits; comply with IT Rules.
    • Victims: File under IPC/IT Act for quick takedowns; CPA for compensation.

This is a rapidly evolving area—consult a lawyer for case-specific advice. For updates, refer to MeitY or Supreme Court resources.