By Inderjit Badhwar
Courts are not merely places where disputes are resolved. They are institutions where reason, precedent, and constitutional morality converge. Every judgment is expected to be anchored in verified facts, settled law, and human deliberation. That is why the recent revelations around the misuse of Artificial Intelligence (AI) in legal proceedings are more than a cautionary tale—they are a warning shot.
The cover story this week examines a moment that should give the legal fraternity pause. In Omkara Assets Reconstruction Pvt Ltd vs Gstaad Hotels Bengaluru, the Supreme Court was confronted with a startling disclosure: over a hundred case citations placed before it simply did not exist. They were hallucinations—fabricated precedents generated by AI tools and submitted without verification. While the two-judge bench chose to proceed on merits, the episode exposed a dangerous vulnerability in modern legal practice.
AI is no longer a futuristic add-on to the justice system. It is already embedded in the daily workflows of lawyers, judges, court staff, and litigants. From drafting pleadings and summarising judgments to translating orders and managing case files, algorithms promise speed, efficiency, and scale. For a system burdened with pendency and delays, this technological assistance can feel like salvation.
But speed without scrutiny is perilous.
At its core, the justice system is built on trust—trust that citations are real, facts are accurate, and arguments are made in good faith. AI systems, however, are not repositories of truth. They are predictive engines designed to generate plausible responses, not verified realities. When such systems fabricate legal authorities, they do not merely make errors; they strike at the heart of judicial reasoning, which depends on precedent as its lifeblood.
The implications extend beyond one case or one courtroom. If unchecked, AI hallucinations risk normalising falsehoods within judicial records. They threaten to blur the line between researched argument and machine-generated fiction. Worse, they could undermine public confidence in the courts themselves—an institution whose legitimacy rests not on force, but on credibility.
This is why the Supreme Court’s response matters. By reconstituting its Artificial Intelligence Committee and releasing a White Paper on AI and the Judiciary, the Court has signalled that technology must be governed, not merely adopted. The emphasis on mandatory disclosure, advocate certification, human oversight, and in-house AI tools reflects a crucial understanding: AI must remain a servant of justice, never its substitute.
Internationally, similar concerns are being voiced. UNESCO’s Guidelines for the Use of AI Systems in Courts and Tribunals underline a global consensus that ethical guardrails are essential. Human rights, transparency, auditability, and accountability cannot be outsourced to algorithms. Judicial discretion—shaped by empathy, experience, and constitutional values—cannot be coded.
The danger is not that AI will replace judges. The danger is that it will quietly reshape legal culture, lowering thresholds of diligence while giving an illusion of competence. When lawyers rely on AI outputs without verification, when courts are flooded with machine-generated material, and when training lags behind adoption, the system risks becoming faster—but weaker.
Yet, this is not an argument against AI. Properly designed and responsibly used, AI can democratise access to justice, reduce delays, and assist judges overwhelmed by information overload. India’s initiatives—SUPACE, SUVAS, PANINI, and LegRAA—demonstrate that technology, when tailored to local legal ecosystems, can enhance judicial capacity without compromising integrity.
The real challenge lies in governance. Training judicial officers and lawyers, enforcing ethical standards, demanding transparency, and maintaining human control over decision-making are no longer optional. They are constitutional necessities.
As this cover story shows, we stand at an inflection point. The question is not whether AI will shape the future of justice—it is already doing so. The question is whether we will allow algorithms to dilute the discipline of law, or whether we will insist that technology operates within the moral and constitutional framework that defines justice itself.
In an age of automation, the most radical act may be restraint.
The post Justice In The Age Of Algorithms appeared first on India Legal.
