Explaining Machine Learning: Opening the “black box”

Rapid progress in advanced machine learning techniques has led to an unprecedented surge of interest in increasingly complex artificial intelligence (AI) architectures, such as Deep Neural Networks (DNNs), due to their unparalleled performance on numerous challenging problems. Nevertheless, when such deep architectures are used in critical domains, such as biomedical applications or autonomous devices, a single incorrect decision could lead to catastrophic consequences. Although some efforts have been made to explain the decisions of DNNs, in the so-called explainable artificial intelligence (XAI) realm, there is still a long way to go to open the “black box.” In this blog post, we discuss the critical questions that explainable DNNs (XDNNs) should be able to answer in order to build the trust required for such applications.
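To make the idea of “opening the black box” concrete, here is a minimal sketch of one common XAI technique: gradient-based saliency, which scores each input feature by how much the model's output changes when that feature changes. Everything here (the tiny logistic model, its weights, and the input) is a hypothetical example, not taken from the post; real XDNN methods apply the same idea to far deeper networks.

```python
import math

def sigmoid(z):
    # Logistic activation, squashing any real value into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def predict(x, w, b):
    # A one-layer "network": y = sigmoid(w . x + b)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def saliency(x, w, b):
    # Input gradient dy/dx_i = y * (1 - y) * w_i for the logistic model:
    # its magnitude says how sensitive the prediction is to feature i.
    y = predict(x, w, b)
    return [y * (1.0 - y) * wi for wi in w]

# Hypothetical weights and input, chosen only for illustration
w = [2.0, -1.0, 0.0]
b = 0.0
x = [0.5, 0.5, 0.5]

s = saliency(x, w, b)
# Feature 0 has the largest saliency, feature 2 has none,
# which is one simple "explanation" of the model's decision.
print(s)
```

In a deep network the same gradient is obtained via backpropagation to the input layer; the limitations of such attribution maps are exactly the kind of issue the questions below are meant to probe.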
Some of the key questions we would like to answer are listed below:
Addressing these key questions could lead to a number of advances, such as: