According to a report by The Times of India, the Indian government has introduced new IT rules covering AI-generated content such as deepfake videos, edited images, and synthetic audio.
What are the new rules?
AI content must be clearly labeled
If a photo, video, or audio is created or changed using AI, platforms must clearly mention that it is AI-generated.
Users should be able to tell at a glance that the content is not entirely real.
Platforms must act much faster
If the government or a court orders removal of illegal content, platforms now have only 3 hours to take it down.
Earlier, they had 36 hours.
Platforms affected
Major platforms such as Google, YouTube, and Instagram will have to comply with these rules.
Why this rule was made
The government wants to stop misuse of AI, especially:
Deepfake videos
Fake documents
Misleading or harmful content
Child abuse material
Responsibility of platforms
Social media and tech companies must:
Ask users to declare whether their content is AI-generated
Use tools to detect fake or synthetic content
Clearly warn users about misuse and penalties
When will this start?
The new rules come into effect on 20 February 2026.