Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability

Journal: Science and Engineering Ethics

Abstract

This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence (AI) technologies. It is assumed that only humans can be responsible agents; yet this assumption alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Alongside the well-known problem of many hands, the problem of "many things" is identified, and the temporal dimension is emphasized with respect to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding the agents of responsibility is linked to the other side of the responsibility relation: the addressees or "patients" of responsibility, who may demand reasons for actions and decisions made using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based not on agency but on patiency.

Authors

  • Mark Coeckelbergh
    Centre for Computing and Social Responsibility, Faculty of Technology, De Montfort University, Gateway House, Leicester, LE1 9BH, UK. mark.coeckelbergh@dmu.ac.uk.