The government has released an advisory requiring social media companies and other platforms to label under-trial AI models and to prevent the hosting of unlawful content, following the uproar that erupted a few days ago over the responses of Google’s AI platform to questions about Prime Minister Narendra Modi.
The Ministry of Electronics and Information Technology issued the advisory to intermediaries and platforms on March 1, warning of penal consequences for non-compliance.
“All intermediaries or platforms to ensure that use of Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) on or through its computer resource does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content,” the advisory stated.

“Non-compliance with provisions would result in penal consequences,” the advisory warned. It was issued within days of Google’s AI tool Gemini generating responses seen as disparaging Prime Minister Modi’s policies. The government reacted sharply to Gemini’s answers to questions concerning the prime minister, with Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, describing them as a breach of IT regulations.
Chandrasekhar said, “The episode of Google Gemini is very embarrassing, but saying that the platform was under trial and unreliable is certainly not an excuse to escape from prosecution.” He added, “I advise all platforms to openly disclose to the consumers and seek consent from them before deploying any under-trial, erroneous platforms on the Indian public internet. Nobody can escape accountability by apologising later. Every platform on the Indian internet is obliged to be safe and trusted.”

“Use of under-testing / unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with the explicit permission of the Government of India and be deployed only after appropriately labelling the possible and inherent fallibility or unreliability of the output generated,” the advisory added.
It also noted that a “‘consent popup’ mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated”.