Within artificial intelligence (AI), explainable AI (XAI), often overlapping with interpretable AI or explainable machine learning (XML), is a field of research that explores methods giving humans intellectual oversight over AI algorithms.[1][2] The main focus is on the reasoning behind the decisions or predictions made by AI algorithms,[3] to make them more understandable and transparent.[4] This addresses users' need to assess safety and to scrutinize automated decision-making in applications.[5] XAI counters the "black box" tendency of machine learning, in which even the AI's designers cannot explain why it arrived at a specific decision.[6][7]
XAI aims to help users of AI-powered systems perform more effectively by improving their understanding of how those systems reason.[8] XAI may be an implementation of the social right to explanation.[9] Even where no such legal right or regulatory requirement exists, XAI can improve the user experience of a product or service by helping end users trust that the AI is making good decisions.[10] XAI seeks to explain what has been done, what is being done, and what will be done next, and to reveal the information on which these actions are based.[11] This makes it possible to confirm existing knowledge, challenge existing knowledge, and generate new assumptions.[12]
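As a minimal sketch of one common post-hoc explanation technique (not one prescribed by the sources above), the following Python example uses permutation feature importance to estimate which inputs an otherwise opaque model relies on; the dataset, model, and parameter choices are illustrative assumptions.

```python
# Illustrative sketch: permutation feature importance as a simple XAI method.
# Shuffling one feature at a time and measuring the drop in accuracy reveals
# which inputs the model's decisions depend on most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Example data and a "black box" ensemble model (assumed for illustration).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permute each feature on held-out data and record the mean accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five features whose removal of information hurts the model most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```

An explanation of this kind surfaces which information a prediction is based on, but it describes the model's behavior rather than guaranteeing the reasoning is correct or fair.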
Edwards, Lilian; Veale, Michael (2017). "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For". Duke Law & Technology Review. 16: 18. SSRN 2972855.