Introduction
Algorithms increasingly monitor and influence citizens’ movements, transactions, and decisions in the name of development and governance. Traffic cameras at major crossroads form part of an e-challan network deliberately placed in a city’s busiest areas. These algorithmic cameras aim to reduce road fatalities and improve traffic flow by enforcing regulations automatically: violations such as not wearing a seat belt or running a red light are recorded, analysed, and penalised without human intervention. Through this quiet enforcement of driving habits, algorithmic governance is reshaping daily life in the city. Traffic enforcement, however, is only one facet of algorithmic governance.1 Facial recognition systems log office entries; AI-driven systems manage transactions, assess fraud risk, and adjust credit scores in real time; and advertisements on social media and billboards are tailored to past online behaviour. Ultimately, citizens’ entire digital footprints are analysed by various governmental organisations, which assess potential health and security risks, subtly shaping future interactions with State services.
This situation exemplifies algorithmic constitutionalism2, a key development in governance in which core ideals of the Indian Constitution, such as equality, privacy, and the rule of law, are re-engineered for today’s digital context. In this new world, algorithms are more than mere tools; they are architects of public life, altering the core of democratic government. The growing dependence on these systems, however, raises several important questions: What exactly does the term “algorithmic governance” mean, and what is its scope? How does algorithmic governance affect constitutional rights when its procedures are hidden behind opaque code? What role do the legislative and judicial branches play in resolving these issues? And what steps may be taken to improve justice and equality in algorithmic systems?
Navigating algorithmic governance and its challenges
Algorithms are not neutral tools; they are embedded within a governance system and reflect their developers’ intentions: social democrats use them to further social democracy, libertarians to expand personal freedom, and authoritarian governments to maintain control. This dynamic is part of the larger notion of algorithmic governmentality, which treats algorithms as instruments of governance that both shape and are shaped by governing processes. Algorithmic governmentality can be divided into two major themes: (a) the production of digitised knowledge through algorithms, which regulates the programmers and coders themselves; and (b) algorithms as a form of anticipatory governance. The former captures how algorithms shape knowledge creation and affect the individuals who build and use them. The latter, anticipatory governance, involves forecasting and steering future behaviours through algorithmic predictions and categorisations. The idea is that algorithms are not just tools of governance but also its subjects, reflecting a socio-technical dynamic in which human and non-human actors coexist in governance practices. This interpretation is consistent with actor-network theory, which sees all actors as networks and all networks as actors, implying that algorithms, as instruments of government, have agency and actively shape political rationalities and power relations.3
The emergence of artificial intelligence has further detached algorithmically generated knowledge from human experience. Artificial intelligence creates its own categories and expertise from internal variables and datasets, frequently without direct human supervision. This tendency results in “algocracy”.4 Anticipatory algorithmic governmentality is a clear example, visible in applications ranging from search engines and recommender systems to predictive policing. These systems anticipate and influence future behaviour by combining historical data with probabilistic computation. Predictive policing systems,5 for example, use historical data to forecast criminal activity, frequently reproducing and amplifying pre-existing biases. This type of governance not only predicts but also pre-empts action, producing new forms of governable subjects and entrenching socio-economic imbalances.
In India, digital divides and socio-economic inequities intensify the problem,6 posing particular challenges for judicial systems. One major difficulty is that AI models rest on datasets that may omit substantial portions of the population. In India, where over half of the population lacks internet access, especially women, rural communities, and Adivasis, critical demographic data is frequently missing or misinterpreted. For example, India’s mandated use of Aarogya Setu for COVID-19 contact tracing drew criticism for excluding hundreds of millions of people owing to access restrictions.7 Many AI applications in India have likewise been criticised for focusing on health issues affecting the upper middle class, such as heart disease and cancer, while ignoring serious health challenges faced by underprivileged groups, such as tuberculosis.8
Constitutional implications and judicial role in algorithmic decision-making
Algorithms have become an essential component of decision-making in modern public administration and law enforcement. From predictive policing models that identify crime hotspots to automated legal guidance, digital technologies are rapidly taking on roles previously held by human authorities. This transition from human judgment to data-driven decision-making marks a new governance paradigm known as algocracy, in which algorithms assume tasks traditionally performed by experts. While these tools promise greater accuracy and objectivity, they run the risk of reducing complex human experiences to simple data points, thereby missing subtleties in governance processes.

Central to this issue is algorithmic opacity, often called the “black box” problem, in which the decision-making processes of AI systems are neither clear nor understandable to those they affect.9 Citizens may experience such systems as opaque and unaccountable, making it difficult to understand, for example, why a traffic penalty has been imposed. This deprives individuals of the opportunity to question or grasp the decisions made by these systems, violating due process as guaranteed by Article 21 of the Indian Constitution10, which ensures that legal proceedings are fair, just, and not arbitrary.11 Furthermore, the ideals of justice and non-discrimination enshrined in Article 14 of the Indian Constitution12 are vulnerable to erosion by the inherent biases of algorithms.13 These biases often stem from the data used to train AI systems, which can reflect and exacerbate societal prejudices rather than neutralise them. For example, facial recognition technologies, commonly used in law enforcement and surveillance, have been shown to produce much higher error rates for persons of colour, disproportionately affecting underprivileged populations.14
In India, the regulation of AI and algorithmic governance is a shifting terrain with major gaps. The Digital Personal Data Protection Act, 202315 is a significant step towards comprehensive data protection. However, the regulatory environment for AI remains fragmented and inadequate. Existing laws are largely concerned with data protection rather than the broader implications of AI governance, and key challenges such as algorithmic bias, transparency, and accountability remain largely unaddressed. This underscores the need for a thorough rethinking of how law intersects with AI. Traditional legal frameworks designed for human-centred decision-making must evolve to handle the difficulties of algorithmic governance. This evolution requires more than minor tweaks or technological fixes; it demands a fundamental shift towards a bottom-up approach that incorporates feminist, decolonial, and anti-caste viewpoints.16 This involves examining who holds the data, how transparency is maintained, and what safeguards exist to prevent abuse of power.
Meanwhile, the judiciary can help bridge regulatory gaps and protect constitutional rights in the context of AI. Indian courts have long been proactive in resolving privacy and civil rights concerns. In K.S. Puttaswamy (Privacy-9J.) v. Union of India17, the Supreme Court recognised privacy as a fundamental right under Article 21 of the Indian Constitution, setting a precedent for personal data protection.18 The courts have the authority to mandate transparency in AI decision-making processes and to hold actors accountable for algorithmic bias. This includes scrutinising instances where AI applications result in discriminatory practices or breach privacy rights, underscoring the need for strong regulatory safeguards.
Promoting fairness in algorithmic governmentality
The challenges posed by algorithmic governmentality, such as violations of due process and of the principles of fairness and non-discrimination, can be addressed through proactive strategies grounded in legal principles. First, it is critical to promote the “no detriment” principle, which guarantees that algorithmic decisions do not harm individuals even while they may confer benefits.19 This approach seeks to reduce the unfair disadvantages that algorithms create while preserving their positive potential and protecting people’s rights and welfare. To put it into practice, authorities must set objective benchmarks against which the fairness of algorithms can be measured, and require adjustments to their code when they depart from those benchmarks. Although the idea is sound in theory, implementing it calls for advance planning, which may be achieved through dialogue and compromise among the parties concerned. Second, to mitigate the polarising effects of algorithmic governance, it is imperative to move towards universal ideals of equality and shared access, such as universal education.20 Education is essential to promoting the ethical use of algorithms and critical thinking about that use. By ensuring that the advantages of technology are shared equitably and by addressing the differentiation brought about by algorithms, which can fragment social experience and worsen inequality, this strategy fosters a more just society.
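To make the idea of an objective fairness benchmark concrete, the following is a minimal illustrative sketch, not a prescribed legal standard: it measures demographic parity, the gap in favourable-outcome rates between two groups subject to an algorithmic decision. The group labels, the loan-approval scenario, and the 0.1 tolerance are all hypothetical assumptions for illustration.

```python
# Hypothetical fairness benchmark of the kind the "no detriment"
# principle would require regulators to set. Measures demographic
# parity: the gap between groups in the rate of favourable outcomes.

def favourable_rate(decisions, groups, target):
    """Share of favourable (True) decisions received by one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == target]
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(decisions, groups, group_a, group_b):
    """Absolute difference in favourable-outcome rates between two groups."""
    return abs(favourable_rate(decisions, groups, group_a)
               - favourable_rate(decisions, groups, group_b))

# Toy audit: loan approvals (True = approved) for two groups, A and B.
decisions = [True, True, False, True, False, False, True, False]
groups    = ["A",  "A",  "A",   "A",  "B",   "B",   "B",  "B"]

gap = demographic_parity_gap(decisions, groups, "A", "B")
print(f"Demographic parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
if gap > 0.1:  # illustrative tolerance a regulator might adopt
    print("Benchmark failed: the algorithm should be reviewed and adjusted")
```

A real benchmark would of course involve contested choices — which fairness metric to use, which groups to compare, and what tolerance is acceptable — which is precisely why the text calls for dialogue among the parties concerned.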
Third, adopting theoretical frameworks such as liquid modernity and queer theory can help challenge constrictive algorithmic classifications. Queer theory, as a strand of outsider jurisprudence, contests the hegemony of the heteropatriarchal legal system and thus calls for close examination of how State institutions function in order to establish a just and equitable society.21 Liquid modernity, for its part, holds that although people have the ability to choose in a globalised society, that freedom carries its own difficulties.22 It therefore implies that the social structure needs to acknowledge the “diversity of the choice” and refrain from establishing some options as universal standards. By adopting these viewpoints, digital systems can become more inclusive, enabling people to assert their independence and resist being confined to rigid categories.
Challenging algorithmic governance ultimately requires cultivating a dedication to our common humanity. By critically examining algorithms and their decision-making procedures, we can ensure that these instruments uphold democratic principles and serve society ethically. This means not only questioning the role of algorithms in governance but also arguing for policies that guarantee technology supports, rather than undermines, the ideal of fairness.
Conclusion
Algorithms can improve the consistency and efficiency of decision-making, but they also raise serious problems for individual rights, justice, and transparency. Current legal frameworks must change to reflect these new digital realities in order to handle the complexity of algorithmic governance. This adaptation requires the creation of a technically and legally sound AI fairness code. Given India’s distinct socio-economic circumstances and diverse population, importing Western-centric AI fairness models may not suffice. AI technologies therefore need to be inclusive of all groups and tailored to local circumstances. This entails critically reassessing the basic presumptions of AI systems, integrating a variety of representative data sources, and engaging with local populations to understand their particular requirements. A more inclusive stance would spread the advantages of AI technologies more fairly across all sectors of society and better align them with the justice and equality inherent in the Indian Constitution. The ultimate objective should be a governance structure that protects each citizen’s constitutional rights while allowing algorithms to contribute constructively to society.
*3rd year law student, Rajiv Gandhi National University of Law, Punjab.
**4th year law student, Rajiv Gandhi National University of Law, Punjab. Author can be reached at: yagyaagarwal21095@rgnul.ac.in.
1. Yogima Seth Sharma, “NHAI Upgrades Advanced Traffic Management System”, (economictimes.indiatimes.com, 17-10-2023).
2. Oreste Pollicino & Giovanni De Gregorio, “Constitutional Law in the Algorithmic Society”, in Hans-W. Micklitz, et al. (Eds.), Constitutional Challenges in the Algorithmic Society (Cambridge University Press, 2021) pp. 3-24.
3. Baruch B. Schwarz, et al., “Actor-Network Theory as a New Direction in Research on Educational Dialogues”, (link.springer.com, 5-6-2024).
4. A. Aneesh, “Technologically Coded Authority: The Post-Industrial Decline in Bureaucratic Hierarchies” (web.stanford.edu).
5. Christophe Lazaro & Marco Rizzi, “Predictive Analytics and Governance: A New Sociotechnical Imaginary for Uncertain Futures”, (2022) 19(1) International Journal of Law in Context 70.
6. Apoorva Mahendru, et al., India Inequality Report 2022: Digital Divide, (Oxfam India, 5-12-2022).
7. Alex Akinbi, et al., “Contact Tracing Apps for the COVID-19 Pandemic: A Systematic Literature Review of Challenges and Future Directions for Neo-Liberal Societies”, (2021) 9 Health Information Science & Systems 18.
8. Claire Muñoz Parry & Urvashi Aneja, “Artificial Intelligence for Healthcare: Insights from India”, Centre for Universal Health & Asia-Pacific Programme (July 2020).
9. Bartosz Brożek, et al., “The Black Box Problem Revisited: Real and Imaginary Challenges for Automated Legal Decision Making”, (2024) 32 Artificial Intelligence & Law 427-440 (link.springer.com, 4-4-2023).
10. Constitution of India, Art. 21.
11. Maneka Gandhi v. Union of India, (1978) 1 SCC 248.
12. Constitution of India, Art. 14.
13. Ministry of Defence v. Babita Puniya, (2020) 7 SCC 469.
14. Chad Boutin, “NIST Study Evaluates Effects of Race, Age, Sex on Face Recognition Software”, (www.nist.gov, 18-5-2020).
15. Digital Personal Data Protection Act, 2023.
16. Richard A. Posner, “Legal Reasoning from the Top Down and from the Bottom Up: The Question of Unenumerated Constitutional Rights”, (1992) 59(1) University of Chicago Law Review 433.
17. K.S. Puttaswamy, (2017) 10 SCC 1.
19. Michael Keating, “What is Detriment?”, (centreonconstitutionalchange.ac.uk, 22-1-2015).
20. United Nations, Universal Declaration of Human Rights.
21. Damir Banović, “Queer Legal Theory” (papers.ssrn.com, 28-3-2022).
22. Nicholas Gane, “Zygmunt Bauman: Liquid Modernity and Beyond”, (2001) 44(3) Acta Sociologica 267.