India demands beta AIs secure government permission before going public

Not compulsory for now, but IT minister says that's coming after Google's Gemini said the quiet part out loud

India's Ministry of Electronics and Information Technology (MeitY) issued an advisory last Friday stipulating that AI technology still in development must acquire government permission before being released to the Indian public.

"The use of under-testing/unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated," states [PDF] the ministry's notice.

The document also outlines plans to use a "consent popup" mechanism to inform users of potential defects or errors produced by AI, and to inform users by labelling deepfakes with permanent unique metadata or other identifiers.

It also orders that all intermediaries or platforms must ensure any LLM or other AI model product does not permit bias or discrimination, or threaten the integrity of the electoral process.

Compliance is requested within 15 days of the advisory's issuance. Some reports have suggested that after complying and applying for permission to release a product, a developer could be asked to perform a demo for government officials or undergo a stress test – a process IT minister Rajeev Chandrasekhar said would provide more "rigor."

While the document issued late Friday is not legally binding – for now – it does signal the government's expectations and the future direction of regulation.

"We are doing it as an advisory today asking you (the AI platforms) to comply with it," Chandrasekhar reportedly explained, adding that at some point this stance will be encoded in legislation.

"Generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under testing," continued the minister, according to local media.

India has suffered a number of challenges and controversies regarding AI technologies. Google ran afoul of the government in late February when its AI, Gemini, responded positively to a query asking if prime minister Narendra Modi is a fascist.

In fairness to Gemini, it responded that Modi had been "accused of implementing policies some experts have characterized as fascist," and indeed that is the case. But lots of politicians, of lots of political stripes, get called "fascist" by their opponents and the commentariat – that doesn't make it true.

Nonetheless Chandrasekhar subsequently called out Google for violating sections of the country's IT Act.

Google reportedly responded that generative AI is only a tool and "may not always be reliable." The Chocolate Factory promised it is constantly working to improve.

Last November India's attention turned to the impact of deepfakes when an AI-generated image of actor Rashmika Mandanna went viral.

Once again, an advisory was issued – that one stating that social media platforms need to remove deepfakes within 36 hours after they're reported. ®
