Interpretable Machine Learning Book

By Michael Mansour posted Thu October 03, 2019 08:41 AM

Interpretable Machine Learning Book [Link]

Summary

Providing classifications straight from a black box model can erode decision makers' confidence in machine learning; in many cases it is important to explain our conclusions to them. Christoph Molnar assembled a book, Interpretable Machine Learning, that helps us do just that. It gathers model-agnostic methods for interpreting black box models into a single resource, something that had not previously existed. Local model interpretability, as discussed in the book, helps us answer questions like "why did the model come to this conclusion?" and "what effect did a certain feature have on a prediction?" The methods work for almost any model type, and he covers everything from rudimentary interpretation methods, such as feature importance and surrogate models, to more advanced prediction explainers like LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations). He also offers a broader discussion of why interpretation is a useful activity in the first place. Lastly, there is a companion package implemented in R that ties these methods together in a useful API, for which Christoph provides example data to learn with. The book is available as a web resource and is updated as new methods emerge in this rapidly evolving corner of machine learning.
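For readers who want to try a couple of these techniques without the book's R package, here is a minimal Python sketch, assuming scikit-learn is installed; the dataset, model, and parameters are illustrative choices, not examples from the book. It computes permutation feature importance for a black-box classifier and then fits a shallow decision tree as a global surrogate of that classifier's predictions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit an opaque "black box" model on an illustrative dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permutation feature importance: shuffle each feature and measure how much
# the test-set score drops; a larger drop means the model relies on that feature more.
imp = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, imp.importances_mean), key=lambda t: t[1], reverse=True)[:5]
for name, drop in top:
    print(f"{name}: mean accuracy drop {drop:.3f}")

# Global surrogate: fit a shallow, readable tree to the black box's own
# predictions, then inspect the tree as an approximate explanation.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))
print(export_text(surrogate, feature_names=list(X.columns)))

The surrogate tree only approximates the black box, so in practice one would also check how closely its predictions agree with the original model's before trusting the explanation.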

Image Courtesy of XKCD

Commentary

Interpretable machine learning is unfortunately not part of most curricula, especially for those coming to data science from other scientific domains. This underscores the importance of the Interpretable Machine Learning book to the field, but it also poses a larger challenge: finding ways to teach model interpretation methods that go beyond performance metrics, which students may be all too focused on while learning. Spot-checking a handful of classifications is simply not sufficient.

Even if interpretability is not critical for your use case, the methods in this book can help you choose a model that makes classifications for the right reasons. That arguably leads to models that generalize better to unseen data and may help prevent embarrassing misclassifications.


What Do You Think?

Do you have any stories of machine learning outputs going wrong that could have been addressed by one of the tools in the Interpretable Machine Learning Book? Share them so others may learn.


#GlobalAIandDataScience
#GlobalDataScience