Post-Human Arbitration: Justice Beyond the Human Horizon

by Rishabh Gandhi*


In 2050, a complex commercial dispute is submitted to arbitration. There are no hearing rooms, no three-member tribunal, no solemn oaths. Instead, parties upload their claims and defences to a secure digital platform. Within minutes, an artificial intelligence (AI) system—trained on millions of past awards, procedural rules, and case law—renders a reasoned decision. No human intervenes. No human deliberates. And yet, the system outputs an award that mirrors the logic, language, and legal structure of a traditional ruling. Parties comply. Enforceability is not challenged. Efficiency is unparalleled. But was justice done? This article explores the emerging contours of post-human arbitration—a speculative but increasingly plausible phase in the evolution of dispute resolution in which human actors no longer occupy the centre.

From human-centric to machine-shaped justice

Historically, arbitration has been lauded for its flexibility, informality, and human responsiveness. Arbitrators were valued not merely for legal expertise, but for empathy, cultural fluency, and the capacity to read between the lines. But as AI systems become capable of drafting awards, summarising submissions, ranking evidence, and even simulating deliberative logic, the very modalities of adjudication are shifting (Hildebrandt, 2018). The rise of what the author has elsewhere termed Modus Machina—where arbitration adopts the logic of automation, privileging pattern recognition over context-sensitive judgment—signals a deeper structural evolution. What began as efficiency-enhancing assistance has become a foundational influence on how reasoning unfolds. Arbitration increasingly conforms to the grammar of machines (Yeung, 2018). At its extreme, this transition culminates in what might be called post-human arbitration: a condition where machines do not merely assist but supplant human adjudicators. Human roles become peripheral, reduced to reviewing machine outputs or managing procedural logistics. In such a regime, arbitral awards may still quote precedents, apply doctrine, and echo human style—but the conscience behind the decision is algorithmic (Pasquale, 2015).

The threshold of machine-made justice

This raises the spectre of the arbitral singularity—a hypothetical point at which machine-generated awards become indistinguishable from human ones in form, substance, and acceptability. This is not mere science fiction. As natural language generation tools like generative pre-trained transformer (GPT) models grow more sophisticated, the prospect of awards authored entirely by AI that satisfy legal standards becomes conceivable (Zalnieriute, Bennett Moses and Williams, 2019). In pilot studies and mock arbitrations, AI-generated rulings have already begun to approximate human decisions in both tone and outcome. The Arbitration Turing Test—a term coined by the author in previous research—proposes a future benchmark: if lawyers and judges cannot reliably distinguish between an award written by a machine and one authored by a seasoned arbitrator, has the machine passed the legitimacy threshold? Yet this metric is misleadingly thin. Imitation of human form is not equivalence in human function. An award that reads like justice may still lack the moral architecture that sustains justice: empathy, contextual reasoning, cultural understanding, and normative judgment. A well-written paragraph is not a fair hearing (Eubanks, 2018).

The problem of legitimacy without humanity

At the heart of the post-human arbitration dilemma lies a legitimacy paradox. Arbitration derives its legitimacy not just from procedural correctness, but from the perception of fairness, transparency, and human judgment. If parties believe that their disputes are being resolved by an inscrutable black box, even accurate decisions may lack moral or psychological acceptance (Barocas, Hardt and Narayanan, 2023). Consider a case involving indigenous land rights, where oral traditions, collective memory, and spiritual significance matter as much as documentary evidence. Can an AI system trained primarily on commercial contracts truly adjudicate such a matter with integrity? Can it weigh silence as meaningfully as speech? Can it discern the moral resonance of a pause, a gesture, or a generational wound? Moreover, post-human arbitration risks flattening diverse legal cultures into a homogeneous algorithmic template. Civil law systems emphasise codes; common law systems prize precedent. But indigenous, religious, and customary laws often hinge on relationships, rituals, and oral norms. An AI that averages these traditions risks erasing their distinctiveness in the name of efficiency (Yeung, 2018).

Post-human procedure or post-legitimacy?

We must then ask: if arbitration becomes fully automated, is it still arbitration? Or is it something else—a new procedural species that borrows the shell of arbitration but hollows out its normative core? Some argue that as long as outcomes are predictable, enforceable, and cost-effective, human involvement is dispensable. But this technocratic view misreads the essence of justice. Arbitration is not merely about getting answers—it is about how answers are reached. Process matters because dignity demands recognition, not just resolution (Hildebrandt, 2018). This is not a rejection of technology. On the contrary, AI has tremendous potential to improve consistency, reduce bias, and expand access to justice. But as we approach the post-human threshold, we must decide which functions can be mechanised and which must remain human. Not because humans are more efficient—but because they are accountable, empathetic, and capable of moral imagination.

Envisioning a hybrid future

The path forward is neither full automation nor romanticised resistance. A hybrid model is possible. Imagine a system where AI handles routine tasks—document sorting, case scheduling, evidence summarisation—while human arbitrators retain control over interpretation, moral reasoning, and final judgment. Such a model preserves efficiency without sacrificing legitimacy. Institutions must develop new procedural safeguards: mandatory disclosure of AI involvement, explainability protocols, opt-out rights, and cultural impact assessments. Arbitrators must be trained not just in law, but in algorithmic literacy. Parties must retain the right to demand a human hearing (Gandhi, forthcoming). In one future, arbitration becomes a high-speed highway paved with code, where human voices are muffled beneath machine logic. In another, it becomes a richer terrain—augmented by AI, but still governed by human conscience.

Conclusion: The soul of justice in a synthetic age

I have seen hearings hinge on a glance, a pause, a tone of voice that no dataset could capture. I have seen arbitrators hold space for stories too jagged for algorithms to process. In these moments, justice is not an output—it is a presence. As we stand on the threshold of post-human arbitration, the question is not just whether machines can adjudicate, but whether we will accept a form of justice that forgets what it means to feel, to err, to care.

Will we trade empathy for efficiency, nuance for neatness, conscience for code?

Or will we insist that justice remain a human art—even in an age of artificial minds?

The answer we give will shape not just the future of arbitration—but the future of justice itself.

References

Solon Barocas, Moritz Hardt and Arvind Narayanan, Fairness and Machine Learning: Limitations and Opportunities (fairmlbook.org, 2023).

Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (St. Martin’s Press, 2018).

Rishabh Gandhi, “The Ethics of AI-Driven Arbitration: Balancing Fairness, Transparency, and Accountability in the Digital Dispute Resolution Era” (forthcoming).

Mireille Hildebrandt, Law for Computer Scientists and Other Folk (Oxford University Press, 2018).

Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Harvard University Press, 2015).

Karen Yeung, “Algorithmic Regulation: A Critical Interrogation”, (2018) 12(4) Regulation & Governance 505-523.

Monika Zalnieriute, Lyria Bennett Moses and George Williams, “The Rule of Law and Automation of Government Decision-Making”, (2019) 82(3) Modern Law Review 425-455.


*Founder, Rishabh Gandhi & Advocates.
