As artificial intelligence makes it easier to create realistic but fake content, the government is moving closer to introducing rules that will require clear labelling of AI-generated material. The aim is to help users identify synthetic content and prevent it from being mistaken for real information.

IT Secretary S. Krishnan said the proposed rules are now in the final stages and will be notified after legal vetting is completed. Speaking at an industry event in New Delhi, he said mandatory labelling would allow people to judge content more carefully and reduce the risk of misinformation spreading online.

Krishnan said: “Labelling something as AI-generated content offers people the opportunity to examine it. You know that it is AI-generated and that it is not masquerading as the truth.”

Rules to apply to AI tools and social media platforms

The upcoming framework will place responsibility on two main groups: companies that build AI tools and the social media platforms where such content is shared. This includes providers of AI systems such as ChatGPT, Grok and Gemini, as well as large platforms like Facebook and YouTube. According to Krishnan, these companies are primarily large technology firms that already possess the technical capacity to implement labelling systems.
Since they control how content is created and distributed, the government believes they are best placed to ensure that AI-generated material is properly identified.

Draft IT rule changes under legal review

The government had first proposed amendments to the IT Rules in October, making it mandatory to label AI-generated or altered content and increasing the accountability of large digital platforms for detecting and flagging such material. Krishnan confirmed that these draft rules are currently undergoing legal checks and are close to final approval. Once cleared, the rules will become part of the existing IT framework rather than a separate new law focused only on artificial intelligence.
Deepfake misuse driving urgency

The IT Ministry has repeatedly flagged concerns over the rapid spread of deepfake videos, fake audio clips, and manipulated images on social media. Officials have warned that generative AI can create highly realistic content that may be used to spread false information, damage reputations, influence elections, or carry out financial scams. The ministry has described such content as capable of creating “convincing falsehoods”, which makes it harder for ordinary users to separate truth from fabrication, especially when content goes viral within minutes.

What kind of labels are being proposed

Under the draft amendments, companies may be required to:
1. add clear visual or audio markers to AI-generated content, and also embed technical metadata (see the sketch after this list)
2. show that content has been generated or modified using AI
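
Purely for illustration, here is a minimal sketch of what the metadata side of such a requirement could look like in practice. The draft rules do not prescribe any specific format; the field name "ai_generated" and the use of a PNG text chunk via Pillow are assumptions made for this example, not anything specified by the ministry.

```python
# Illustrative only: embedding a simple provenance flag in an image's
# metadata using Pillow. The "ai_generated" key is a hypothetical field,
# not one mandated by the draft rules.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (1920, 1080))  # stand-in for an AI-generated frame

meta = PngInfo()
meta.add_text("ai_generated", "true")           # machine-readable disclosure
meta.add_text("generator", "example-model-v1")  # hypothetical tool identifier

img.save("labelled.png", pnginfo=meta)

# Reading the flag back when the file is shared or re-uploaded
reloaded = Image.open("labelled.png")
print(reloaded.text.get("ai_generated"))  # "true"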
The idea is to make disclosures visible and difficult to remove when content is shared across platforms. The proposal suggests that labels should cover at least 10% of the screen in visual content or appear during the first 10% of an audio clip, ensuring that users notice the disclosure before engaging with the content.
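
Again as an illustration only, the two numeric thresholds described above reduce to simple arithmetic checks. The helper functions below and their names are assumptions for this sketch, not anything drawn from the draft text.

```python
# Illustrative checks for the draft thresholds described above: a visual
# label covering at least 10% of the frame, and an audio disclosure that
# begins within the first 10% of the clip. Names are hypothetical.

def visual_label_compliant(frame_w: int, frame_h: int,
                           label_w: int, label_h: int) -> bool:
    """True if the label covers at least 10% of the frame's area."""
    return label_w * label_h >= 0.10 * frame_w * frame_h

def audio_disclosure_compliant(clip_seconds: float,
                               disclosure_start_s: float) -> bool:
    """True if the disclosure starts within the first 10% of the clip."""
    return disclosure_start_s <= 0.10 * clip_seconds

# A 640x360 banner on a 1920x1080 frame covers about 11.1% of the screen
print(visual_label_compliant(1920, 1080, 640, 360))  # True
# A disclosure at the 5-second mark of a 60-second clip is in the first 10%
print(audio_disclosure_compliant(60.0, 5.0))         # True
```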
No separate AI law for now

On whether India needs a dedicated AI law, Krishnan said the government is not planning one immediately but has not ruled it out for the future. “We are not having it tomorrow, or in the next session of Parliament, but in the future we may need an Act,” he said.

For now, the government believes that updating existing digital laws is sufficient to handle current risks linked to AI-generated content, while keeping the option open for stronger legislation if new challenges emerge.

