Legal Accountability of Algorithmic Bias: Examining the Role of Law in Preventing Discriminatory AI Decisions

Authors

  • Maulana Fahmi Idris, Universitas Sains dan Teknologi Komputer
  • Methodius Kossay, Universitas Sains dan Teknologi Komputer

DOI:

https://doi.org/10.62951/ijls.v2i2.521

Keywords:

AI governance, Algorithmic bias, Indonesia, Legal accountability, Regulatory enforcement

Abstract

The increasing adoption of artificial intelligence (AI) in decision-making processes has raised significant concerns regarding algorithmic bias and legal accountability. This study examines the regulatory challenges and enforcement gaps in addressing AI bias, with a particular focus on Indonesia’s legal landscape. Through a comparative analysis of AI governance frameworks in the European Union, the United States, China, and Indonesia, the research identifies key deficiencies in Indonesia’s regulatory approach. Unlike the EU’s AI Act, which incorporates risk-based classification and strict compliance measures, Indonesia lacks a dedicated AI legal framework, resulting in limited enforcement mechanisms and unclear liability provisions. The findings highlight that transparency mandates alone are insufficient to mitigate algorithmic discrimination, as weak enforcement structures hinder effective regulatory oversight. Furthermore, the study challenges the notion that global AI regulatory harmonization is universally applicable, emphasizing the need for a context-sensitive hybrid model tailored to Indonesia’s socio-legal environment. The research suggests that Indonesia must adopt a comprehensive AI legal framework, strengthen regulatory institutions, and promote interdisciplinary collaboration between legal experts and AI developers. Future research should focus on empirical case studies, the development of context-specific AI accountability models, and the role of public engagement in mitigating AI bias. These efforts will be essential in shaping effective AI governance strategies that ensure fairness, transparency, and accountability in Indonesia’s digital transformation.




Published

2025-03-04

How to Cite

Maulana Fahmi Idris, & Methodius Kossay. (2025). Legal Accountability of Algorithmic Bias: Examining the Role of Law in Preventing Discriminatory AI Decisions. International Journal of Law and Society, 2(2), 244–256. https://doi.org/10.62951/ijls.v2i2.521