Data and AI Learning Group


Explaining Machine Learning: Opening the “black box”

By Graciana Puentes posted Sun April 10, 2022 02:19 PM


Rapid progress in advanced machine learning techniques has driven an unprecedented surge of interest in increasingly complex artificial intelligence (AI) architectures, such as deep neural networks (DNNs), due to their unparalleled performance on many challenging problems. Nevertheless, when such deep architectures are used in critical domains, such as biomedical applications or autonomous devices, a single incorrect decision can have catastrophic consequences. Although some efforts have been made to explain the decisions of DNNs, in the so-called explainable artificial intelligence (XAI) realm, there is still a long way to go to open the "black box." In this blog, we discuss the critical questions that explainable DNNs (XDNNs) should be able to answer in order to build the trust required for such applications.

Some of the key questions we would like to answer are listed below: 

  • What is happening inside the DNN? 
  • What is the function of each hidden layer? 
  • How and when is a decision taken by the DNN? 
  • Why should we trust such decisions? 
  • What kind of metrics are required to build trust? 
  • Can a DNN be endowed with ethical judgement? 

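To make the first two questions concrete, a common starting point is simply to record each hidden layer's activations during a forward pass and inspect them. The tiny fully connected network below is a hypothetical illustration with arbitrarily chosen weights, not a method or model from any of the references:

```python
def relu(x):
    # Element-wise rectified linear unit.
    return [max(0.0, v) for v in x]

def dense(x, weights, biases):
    # One fully connected layer: y_j = sum_i x[i] * weights[i][j] + biases[j]
    return [sum(xi * w for xi, w in zip(x, col)) + b
            for col, b in zip(zip(*weights), biases)]

def forward_with_trace(x, layers):
    """Run a forward pass, keeping every intermediate activation."""
    trace = [x]
    for W, b in layers:
        x = relu(dense(x, W, b))
        trace.append(x)
    return trace

# Arbitrary 2-4-2 network, for illustration only.
layers = [
    ([[0.5, -0.2, 0.1, 0.7],
      [0.3, 0.8, -0.5, 0.2]], [0.0, 0.1, 0.0, -0.1]),
    ([[0.6, -0.4],
      [0.1, 0.9],
      [-0.3, 0.2],
      [0.5, 0.5]], [0.0, 0.0]),
]

trace = forward_with_trace([1.0, 2.0], layers)
for i, act in enumerate(trace):
    print(f"layer {i}: {[round(v, 3) for v in act]}")
```

Inspecting which hidden units stay at zero (dead under ReLU) or dominate the output for a given input is one of the simplest windows into "what is happening inside" a network; richer XAI techniques build on the same idea of exposing intermediate representations.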
Addressing these key questions could lead to a number of advances, such as: 

  • Validation of models 
  • Detection and correction of errors 
  • Improved accuracy and reliability 
  • New insights into hidden layers 
  • Building ethical AI 
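Model validation and error detection can be approached with model-agnostic XAI techniques such as permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The sketch below, with a made-up toy model and dataset, is a minimal illustration of that idea under stated assumptions, not code from any cited reference:

```python
import random

def permutation_importance(predict, X, y, n_repeats=10, seed=0):
    """Mean accuracy drop when each feature column is shuffled independently."""
    rng = random.Random(seed)

    def accuracy(rows):
        return sum(predict(r) == label for r, label in zip(rows, y)) / len(y)

    baseline = accuracy(X)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)  # break the link between feature j and the labels
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, col)]
            drops.append(baseline - accuracy(X_perm))
        importances.append(sum(drops) / n_repeats)
    return baseline, importances

# Toy "model": predicts 1 when feature 0 exceeds 0.5; feature 1 is ignored.
predict = lambda row: int(row[0] > 0.5)
X = [[0.1, 5.0], [0.9, 1.0], [0.4, 2.0], [0.8, 7.0]]
y = [0, 1, 0, 1]
baseline, imps = permutation_importance(predict, X, y)
print(baseline, imps)  # feature 0 matters, feature 1 does not
```

Because the technique treats the model purely as a predict function, it applies equally to a DNN: a feature whose shuffling leaves accuracy untouched is one the model ignores, which can either validate the model or expose an error in what it has learned.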

Selected References 

1. Edwards, Lilian; Veale, Michael (2017). "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For". Duke Law and Technology Review. 16: 18. SSRN 2972855.
2. Gunning, D.; Stefik, M.; Choi, J.; Miller, T.; Stumpf, S.; Yang, G.-Z. (2019). "XAI-Explainable artificial intelligence". Science Robotics. 4 (37): eaay7120. doi:10.1126/scirobotics.aay7120. ISSN 2470-9476.
3. Rieg, Thilo; Frick, Janek; Baumgartl, Hermann; Buettner, Ricardo (2020). "Demonstration of the potential of white-box machine learning approaches to gain insights from cardiovascular disease electrocardiograms". PLOS ONE. 15 (12): e0243615. Bibcode:2020PLoSO..1543615R.
4. Loyola-González, O. (2019). "Black-Box vs. White-Box: Understanding Their Advantages and Weaknesses From a Practical Point of View". IEEE Access. 7: 154096–154113.
5. Alizadeh, Fatemeh (2021). "I Don't Know, Is AI Also Used in Airbags?: An Empirical Study of Folk Concepts and People's Expectations of Current and Future Artificial Intelligence". doi:10.1515/icom-2021-0009. S2CID 233328352.
