Auditing Artificial Intelligence as a New Layer of Mediation: Introduction of a new black box to address another black box
The discourse of auditing artificial intelligence (AI) is emerging as a field of applied AI ethics that addresses problems of fairness, bias, and accountability in AI systems. The injunction to audit AI assumes and proposes the figure of an expert auditor to address the black box problem of AI. Against this, the text argues that the figure of an expert auditor installs another layer of mediation: it introduces a further black box (the auditor, auditing practices, and professional scepticism) to address the existing one. The text argues that it is important to radically interrogate the need for, and the efficacy of, an expert AI auditor to whom all users of an AI system delegate their representation. The introduction of audit cultures into the AI ecosystem risks foreclosing future debates on AI and its relationship with humans and other non-humans, and risks reducing the problem, once again, to a technical one.
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (CC BY-NC-ND 4.0).
(c) Cheshta Arora, Debarun Sarkar, 2023
Copyright
Cheshta Arora, Independent researcher
Cheshta Arora is an independent researcher, writer, and ethnographer based in India who studies contemporary socio-technical systems. She holds a PhD from the National Institute of Advanced Studies/Manipal Academy of Higher Education. Currently, she is a co-PI on an AI auditing project funded by the Notre Dame-IBM Tech Ethics Lab, University of Notre Dame (Project Website: https://audit4sg.org/).
Debarun Sarkar, Independent researcher
Debarun Sarkar is an independent researcher and writer based in India, and a PhD candidate at the Department of Sociology, University of Mumbai. He is currently a co-PI on an AI auditing project funded by the Notre Dame-IBM Technology Ethics Lab, University of Notre Dame (Project Website: https://audit4sg.org/).