AI Content Rules 2026: Labels mandatory on AI content; social media platforms must remove deepfake photos and videos within 3 hours

Labelling has been made mandatory for any photo, video or audio created using AI. Social media platforms will have to remove objectionable content within three hours of receiving a complaint. The new rules come into effect on February 20, 2026; the notification was issued on February 10.

PM said an “authenticity label” is necessary on content

A day before the rules take effect, on February 19, Prime Minister Narendra Modi suggested the label at the AI Summit. He said that just as food items carry nutrition labels, digital content should also carry labels, helping people identify what is real and what is artificial, i.e. created by AI. If metadata is tampered with, the post will be deleted.

  1. AI Label: A ‘digital stamp’ on videos

Just as food packets indicate whether the contents are ‘vegetarian’ or ‘non-vegetarian’, every AI-generated video, photo or audio will now carry a label. Suppose you create a video using AI in which a leader is giving a speech; “AI Generated” must be clearly written in a corner of the video.
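The rules describe a visible label rather than any specific technology. As a rough illustration only, the Python sketch below stamps an “AI Generated” caption onto an image using the Pillow library; the library choice, file names and styling are assumptions, not anything prescribed by the notification.

```python
# Illustrative sketch only: one way a platform might stamp a visible
# "AI Generated" label onto an image. Library, file names and styling
# are assumptions for demonstration.
from PIL import Image, ImageDraw

def add_ai_label(input_path: str, output_path: str, text: str = "AI Generated") -> None:
    """Draw a visible label in the bottom-right corner of an image."""
    img = Image.open(input_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Measure the text so the label sits inside the corner with a small margin.
    left, top, right, bottom = draw.textbbox((0, 0), text)
    w, h = right - left, bottom - top
    x, y = img.width - w - 10, img.height - h - 10
    # Dark backing box keeps the label readable on any background.
    draw.rectangle([x - 5, y - 5, x + w + 5, y + h + 5], fill=(0, 0, 0))
    draw.text((x, y), text, fill=(255, 255, 255))
    img.save(output_path)

# add_ai_label("speech_clip_frame.png", "speech_clip_frame_labeled.png")
```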

  2. Technical Marker: Digital DNA

Metadata can be thought of as a file’s ‘digital DNA’. It is not visible on the screen but is hidden in the file’s coding. It records the date the photo or video was created, which AI tool created it, and on which platform it was first uploaded. If someone commits a crime using AI, the police will be able to trace its real source through this ‘technical marker’.
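To picture what such a ‘technical marker’ could look like, here is a minimal sketch that embeds and then reads back a provenance record in a PNG file’s metadata. The field name ai_provenance and the JSON shape are assumptions for illustration; the rules do not name a particular format or standard.

```python
# Illustrative sketch only: embedding and reading a provenance record
# in a PNG file's metadata. The "ai_provenance" key and record fields
# are assumptions, not a mandated format.
import json
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def write_provenance(src: str, dst: str, tool: str, platform: str, created: str) -> None:
    """Store a machine-readable provenance record inside the image file."""
    record = json.dumps({"tool": tool, "platform": platform, "created": created})
    meta = PngInfo()
    meta.add_text("ai_provenance", record)
    Image.open(src).save(dst, pnginfo=meta)

def read_provenance(path: str) -> dict | None:
    """Return the embedded record, or None if it is missing (possibly stripped)."""
    raw = Image.open(path).text.get("ai_provenance")
    return json.loads(raw) if raw else None
```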

  3. Tamper Protection: Labels cannot be removed

Until now, people could remove watermarks by cropping or editing the corners of AI-generated photos to make them look real. The government has now made this illegal. Social media platforms will have to adopt technology that deletes content automatically if someone tries to remove its labels or metadata.
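As a conceptual sketch of how such an automatic takedown might be triggered, the snippet below reuses the provenance record from the previous example: if the record is missing or its fields no longer match a stored signature, the upload is rejected. The hash comparison is an assumed mechanism for illustration, not the method specified in the rules.

```python
# Illustrative sketch only: reject content whose provenance metadata
# has been stripped or altered. The hash-based integrity check is an
# assumption for demonstration.
import hashlib

def is_tampered(record: dict | None, expected_signature: str) -> bool:
    """Flag content whose provenance record is missing or no longer matches."""
    if record is None:  # metadata stripped entirely
        return True
    digest = hashlib.sha256(
        f"{record['tool']}|{record['platform']}|{record['created']}".encode()
    ).hexdigest()
    return digest != expected_signature  # altered fields change the digest

def handle_upload(record: dict | None, expected_signature: str) -> str:
    """Return the action a platform might take for this upload."""
    return "delete" if is_tampered(record, expected_signature) else "publish"
```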

Strict action on child pornography and deepfakes

Using AI to promote child pornography, obscenity, fraud or weapons-related information, or to impersonate anyone, will be treated as a serious crime.

3-hour deadline, down from 36 hours

With the new changes to the IT rules, social media companies will have far less time to act: the earlier window of 36 hours to remove illegal content has been cut to just 3 hours. If users provide false information, the platforms will be held responsible.

Whenever a user uploads something to social media, the platform will have to take a declaration from the user stating whether or not the content was created using AI. Companies will have to use tools that verify this claim. Any platform that allows AI content to be published without disclosure will be held responsible.
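One way to picture this declaration-plus-verification flow is sketched below: the user’s declaration is compared against an automated detector before publishing. The Upload structure, the review logic and the dummy detector are hypothetical; the rules do not prescribe any particular tool.

```python
# Illustrative sketch only: pairing the user's declaration with an
# automated check before publishing. The detector is a stand-in; real
# platforms would plug in their own classifiers.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Upload:
    content_id: str
    declared_ai: bool  # user's declaration collected at upload time

def review(upload: Upload, detector: Callable[[str], bool]) -> str:
    """Compare the declaration against an automated check and decide the outcome."""
    if detector(upload.content_id) and not upload.declared_ai:
        return "block: undisclosed AI content"
    if upload.declared_ai:
        return "publish with 'AI Generated' label"
    return "publish"

# Example with a dummy detector that treats everything as AI-generated:
print(review(Upload("clip-001", declared_ai=False), detector=lambda _id: True))
```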

Centre says this will make the internet more reliable

The Information Technology Ministry said the objective of this step is to create an “open, secure, reliable and accountable internet” and to address the risks posed by generative AI, such as misinformation, impersonation and election manipulation.

What are IT Amendment Rules, 2026?

These rules strengthen the IT Rules, 2021 to curb the spread of synthetically generated information (SGI) and the harm it causes online.

Why were these amendments needed?

AI has made it easy to create realistic-looking deepfakes. These rules have been brought in to curb the spread of misinformation, identity theft and non-consensual intimate imagery (NCII). They will apply across the country from February 20, 2026.
