
Finale Doshi-Velez: “A Roadmap for the Rigorous Science of Interpretability” | Talks at Google

With a growing interest in interpretability, there is an increasing need to characterize what exactly we mean by it and how to sensibly compare the interpretability of different approaches. In this talk, I'll start by discussing some research in interpretable machine learning from our group, and then broaden out to discuss what interpretability is and when it is needed. I'll argue that our current desire for "interpretability" is as vague as asking for "good predictions" — a desire that, while entirely reasonable, must be formalized into concrete needs such as high average test performance (perhaps held-out likelihood is a good metric) or some kind of robust performance (perhaps sensitivity or specificity are more appropriate metrics). The objective of this talk is to start a conversation to do the same for interpretability: I will suggest a taxonomy for interpretable models and their evaluation, and also highlight important open questions about the science of interpretability in machine learning.
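To make the analogy concrete, here is a minimal sketch (not from the talk; the synthetic data, logistic-regression model, and split sizes are illustrative placeholders) of how the vague goal of "good predictions" gets formalized into the specific metrics the abstract mentions: held-out likelihood for average test performance, and sensitivity/specificity for a more robustness-oriented need.

    # Minimal sketch: formalizing "good predictions" into concrete metrics.
    # All modeling choices below are placeholders for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import log_loss, confusion_matrix

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)

    # "Average test performance": held-out negative log-likelihood.
    held_out_nll = log_loss(y_te, model.predict_proba(X_te))

    # A more specific need: sensitivity and specificity on held-out data.
    tn, fp, fn, tp = confusion_matrix(y_te, model.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)   # true positive rate
    specificity = tn / (tn + fp)   # true negative rate

    print(f"held-out NLL: {held_out_nll:.3f}")
    print(f"sensitivity:  {sensitivity:.3f}, specificity: {specificity:.3f}")

The talk's point is that interpretability currently lacks an analogous menu of agreed-upon, task-appropriate formalizations and evaluations.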
