AI-made fake videos and images to come under new controls
Summary: The government is preparing a major push against deepfakes; AI-generated content will have to carry watermarks
The Indian government is drafting new rules to curb fake videos and images made with AI. Under these rules, watermarks and labels will be mandatory on AI-generated content.
AI Watermark Regulations India: While Artificial Intelligence (AI) is making work easier, its misuse has created serious challenges for both the government and society. Fake videos and images, known as deepfakes, are being created with the help of AI. No longer limited to entertainment, they have become a means of social unrest, cyber crime and damage to personal reputation.
In view of this growing threat, the Government of India is preparing to take a major step toward identifying and controlling AI-generated content. The government plans to make watermarks and labels mandatory on all such content, so that genuine and fake material can be clearly distinguished.
Draft rules for AI content ready
According to a report, the Ministry of Electronics and Information Technology (MeitY) has prepared draft rules requiring watermarks on AI-generated content. The ministry believes that anonymous, untraceable AI content is fuelling a rapid rise in cyber crime.
According to the government, AI is being misused to create fake videos, photos and audio, which are then used for fraud, blackmail, spreading rumours and stirring social tension. It has therefore become necessary to be able to identify AI-created content up front.
What will change with watermark?
Once the new rules take effect, any video, image or audio created with AI will have to clearly state that it is AI-generated. This will let ordinary viewers understand that what they are seeing is not real.
The biggest advantages will be:
- Timely action can be taken against videos that disturb law and order
- Content that spreads fear or confusion in society can be stopped
- Child sexual abuse material and other objectionable content will be easier to identify
- People can be warned in advance
The government believes that a clear label on a video or image will reveal its true nature even before it goes viral.
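To make the labelling idea concrete, here is a minimal illustrative sketch of how a machine-readable AI-content label could work: a JSON label carrying an `ai_generated` flag and a SHA-256 hash that binds the label to the exact bytes of the file. This is purely an assumption for illustration; the article does not describe the actual scheme MeitY has drafted, and the function names and label fields here are hypothetical.

```python
import hashlib
import json

def make_ai_label(content: bytes, generator: str) -> str:
    """Build a machine-readable provenance label for AI-generated content.

    The SHA-256 digest binds the label to the exact bytes, so any edit to
    the content invalidates the label. Illustrative only; not the scheme
    proposed by MeitY.
    """
    return json.dumps({
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),
    }, sort_keys=True)

def verify_ai_label(content: bytes, label: str) -> bool:
    """Check that a label matches the content it claims to describe."""
    data = json.loads(label)
    return (data.get("ai_generated") is True
            and data.get("sha256") == hashlib.sha256(content).hexdigest())

# Example: label the bytes of a (stand-in) synthetic image
fake_image = b"\x89PNG...synthetic pixels..."
label = make_ai_label(fake_image, generator="example-model-v1")
print(verify_ai_label(fake_image, label))         # True: label matches
print(verify_ai_label(fake_image + b"x", label))  # False: content was edited
```

A design like this shows why labels help before content goes viral: a platform can verify the label automatically at upload time, without a human having to judge whether the media is real.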
Celebrity deepfake cases raise concern
Several cases of AI misuse have surfaced in recent months. Fake videos and images of some popular actresses went viral on social media, damaging their reputations. These cases have made it clear that distinguishing genuine from fake content is becoming very difficult for ordinary people.
AI technology has become so advanced that faces, voices and gestures all appear real. That is why the government is now moving toward concrete rules rather than mere warnings.
Grok AI controversy becomes latest example
Recently, another major controversy erupted over obscene AI-generated content. A large number of objectionable images were created and shared on social media using Elon Musk's AI tool 'Grok'. As the matter escalated, the government intervened and ordered the removal of such content.
This incident clearly showed that uncontrolled AI content can affect every section of society.
When will the new rules come into effect?
According to reports, the government is finalising the revised guidelines, and the new AI framework may be made public soon. Once it takes effect, AI platforms, developers and content creators will be responsible for complying with the rules.