Regulation of AI Technologies: Insights from the Chief Justice of India
On Monday, Chief Justice of India Bhushan R. Gavai voiced serious concerns about the potential misuse of artificial intelligence (AI) technologies.
During a public interest litigation (PIL) hearing, he noted that judges are well aware of how AI tools are being misused, particularly to generate and circulate morphed images targeting members of the judiciary.
The CJI stated emphatically that any initiative to regulate such technologies should come from the executive branch rather than the judiciary.
“We have encountered our morphed images as well,” the CJI remarked, underscoring the gravity of the issue while hearing the PIL, which called for a comprehensive legal or policy framework governing the use of generative AI (GenAI) in judicial and quasi-judicial functions.
He reiterated, “This constitutes a matter of policy that necessitates a decision from the executive.”
The bench, which also comprised Justice K. Vinod Chandran, was visibly reluctant to intervene, indicating that the governance of emerging technologies falls primarily within the domain of policy-making. Nevertheless, at the counsel’s request, the matter was adjourned for two weeks.
The PIL, filed by advocate Kartikeya Rawal through advocate-on-record Abhinav Shrivastava, seeks a direction to the Central government to enact legislation or frame a comprehensive policy ensuring a “regulated and uniform” application of GenAI in judicial settings.
Distinguishing GenAI from conventional AI, the plea highlighted its capacity to autonomously generate text, data, and reasoning, which creates the risk of hallucinations: instances where the system produces fictitious legal doctrines or erroneous case citations.
“The inherent opacity of GenAI, often characterized as a ‘black box’, introduces potential ambiguity within the legal framework,” the petition elucidated, warning that such outputs could yield spurious case laws, biased interpretations, and arbitrary reasoning, thereby infringing upon Article 14, which guarantees the right to equality.
The petitioner asserted that judicial systems are heavily reliant on precedent and transparent reasoning. The ‘black box’ nature of GenAI models implies that even their developers may not possess a full comprehension of how conclusions are derived, complicating oversight.

Additionally, the petition raised alarms regarding the likelihood of GenAI models, trained on real-world datasets, perpetuating or even exacerbating existing societal biases against marginalized groups.
It contended that in the absence of definitive standards concerning data neutrality and ownership, AI-enhanced judicial operations pose a risk to citizens’ right to information as enshrined in Article 19(1)(a).
The petition also warned about the increased susceptibility of AI-integrated systems to cyberattacks, especially in scenarios where court documents or procedures are incorporated into automated platforms.
Source: Hindustantimes.com