THE "ACCOUNTABILITY GAP" IN AUTONOMOUS SYSTEMS: RECONCILING LEGAL PERSONHOOD WITH ALGORITHMIC OPACITY
Abstract:
As Artificial Intelligence (AI) transitions from narrow task-optimization to autonomous decision-making in critical infrastructures—notably healthcare, finance, and law—it outpaces existing regulatory frameworks. This paper addresses the burgeoning "accountability gap," a socio-technical phenomenon in which the opacity of deep learning architectures (the "black-box" problem) complicates traditional legal doctrines of negligence and liability. While current discourse emphasizes bias and privacy, this study focuses on the intersection of algorithmic opacity and legal personhood. By synthesizing current jurisprudence with the technical constraints of eXplainable AI (XAI), the research evaluates whether granting AI "electronic personhood" is a viable solution or an ethical scapegoat for developers. Through a multidisciplinary analysis of case law and algorithmic auditing, we propose a "Hybrid Liability Framework" that balances innovation with human-centric safety. Our findings suggest that addressing the accountability gap requires a shift from retroactive litigation to proactive algorithmic governance, ensuring that AI autonomy remains tethered to human institutional responsibility.
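The "black-box" constraint that motivates XAI work such as LIME (Ribeiro et al., 2016) can be illustrated with a minimal local-surrogate sketch: query an opaque model around a single input, then fit a proximity-weighted linear model whose coefficients serve as local feature attributions. Everything here is hypothetical for illustration — `black_box` is a stand-in for an uninspectable model, and `local_surrogate` is a simplified variant of the LIME idea, not the cited paper's implementation.

```python
import numpy as np

# Hypothetical opaque "black-box" model: we may query predictions,
# but (by assumption) cannot inspect its internals.
def black_box(X):
    return np.tanh(2.0 * X[:, 0] - 0.5 * X[:, 1] ** 2 + X[:, 2])

def local_surrogate(f, x0, n_samples=5000, scale=0.1, seed=0):
    """LIME-style local explanation: fit a proximity-weighted linear
    model to the black box's behaviour in a neighbourhood of x0."""
    rng = np.random.default_rng(seed)
    X = x0 + rng.normal(0.0, scale, size=(n_samples, x0.size))
    y = f(X)
    # Proximity weights: perturbations closer to x0 count more.
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * scale ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([np.ones((n_samples, 1)), X - x0]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[1:]  # per-feature local attributions

x0 = np.array([0.2, 1.0, -0.3])
attributions = local_surrogate(black_box, x0)
print(attributions)  # signs/magnitudes indicate each feature's local influence
```

The surrogate recovers only local behaviour near `x0`; as the paper's discussion of XAI's technical constraints implies, such post-hoc attributions approximate rather than reveal the model's actual decision logic, which is precisely why they cannot, on their own, close the accountability gap.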
References:
Buolamwini, J., Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of Machine Learning Research, 81, 1-15. https://proceedings.mlr.press/v81/buolamwini18a.html
Hill, K. (2020). Wrongfully Accused by an Algorithm. The New York Times. https://www.nytimes.com/2020/06/24/technology/facial-recognition-arrest.html
Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G
Sandel, M. (2020). Ethical concerns mount as AI takes a bigger decision-making role. Harvard Gazette. https://news.harvard.edu/gazette/story/2020/10/ethical-concerns-mount-as-ai-takes-bigger-decision-making-role/
Selbst, A. D., et al. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency. https://dl.acm.org/doi/10.1145/3287560.3287598
European Union. (2016). General Data Protection Regulation (GDPR). http://data.europa.eu/eli/reg/2016/679/oj
European Commission. (2021). Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206
Ribeiro, M. T., Singh, S., Guestrin, C. (2016). "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. https://dl.acm.org/doi/10.1145/2939672.2939778
European Parliament. (2017). Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)). https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
Goodfellow, I., Shlens, J., Szegedy, C. (2015). Explaining and Harnessing Adversarial Examples. International Conference on Learning Representations. https://arxiv.org/abs/1412.6572
Sharkey, A. (2019). Autonomous weapons systems, killer robots, and human dignity. Ethics and Information Technology, 21, 75-87. https://link.springer.com/article/10.1007/s10676-018-9494-0
World Economic Forum. (2020). The Future of Jobs Report 2020. https://www.weforum.org/reports/the-future-of-jobs-report-2020
World Economic Forum. (2023). Jobs of Tomorrow: Mapping Opportunity in the New Economy. https://www.weforum.org/publications/jobs-of-tomorrow-mapping-opportunity-in-the-new-economy/
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
Partnership on AI. (n.d.). About Us. https://partnershiponai.org/about/
Ankushchk. (2025). AI can't be "ethical" on its own. X Post ID: 1945962464408903950.
Rufo, C. (2025). Been thinking about the ideological principles that guide AI models. X Post ID: 1943033546769338701.
Bayesianboy. (2025). This is junk science that could be highly harmful to vulnerable populations if ever deployed. X Post ID: 1944485596077912389.
Anthropic. (2023). Claude’s Constitution. https://www.anthropic.com/news/claudes-constitution
UNESCO. (2024). Thailand to host the Asia-Pacific’s first UNESCO Global Forum on the Ethics of AI in 2025. https://www.unesco.org/en/artificial-intelligence/recommendation-ethics
