

Example of an explanation by LIME for a binary classification model (atheism/Christian). The words (features) highlighted in blue support atheism.

This post will cover the following topics:
1. What makes LIME a good model explainer?
2. How does LIME achieve model explainability?
3. A practical example of using LIME on a classification problem.

In order to build trust in a model, we run multiple cross-validations and perform hold-out set validation. These simulations give an aggregated view of model performance over unknown data. However, they do not help us understand why some of our predictions are correct while others are wrong, nor can we trace the model's decision path. In other words, we cannot understand what the model has learned or figure out its spurious conclusions. But what if I told you there exists a tool that explains your model's decision boundary in a human-understandable way? The name of this magical library is LIME.

What does LIME have to offer for model interpretability?
1. LIME (Local Interpretable Model-agnostic Explanations) is a novel explanation technique that explains the prediction of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
2. A method to select a representative set of predictions, together with their explanations, to make sure the model behaves consistently while replicating human logic. This representative set provides an intuitive global understanding of the model.

LIME explains a prediction so that even non-experts can compare models and improve an untrustworthy one through feature engineering.

An ideal model explainer should have the following desirable properties:
- Interpretable: It should provide a qualitative understanding of the relationship between the input variables and the response.
- Local fidelity: It might not be possible for an explanation to be completely faithful unless it is the complete description of the model itself. Having said that, it should be at least locally faithful, i.e. it must replicate the model's behavior in the vicinity of the instance being predicted.
- Model agnostic: The explainer should be able to explain any model and should not make any assumptions about the model while providing explanations.
- Global perspective: The explainer should explain a representative set of predictions to the user, so that the user has a global intuition of the model.
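As a concrete (and purely illustrative) picture of the validation routine mentioned above, the sketch below runs k-fold cross-validation plus a hold-out evaluation with scikit-learn. The dataset and model are assumptions chosen only to keep the example self-contained.

```python
# Minimal sketch: cross-validation and hold-out validation with scikit-learn.
# The breast-cancer dataset and random forest here are illustrative assumptions,
# not the model discussed later in this post.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_breast_cancer(return_X_y=True)

# Hold-out split: keep a test set the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=100, random_state=42)

# 5-fold cross-validation on the training data: an aggregated view of
# performance over unseen folds, but no explanation of any single prediction.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
print("CV accuracy: %.3f +/- %.3f" % (cv_scores.mean(), cv_scores.std()))

# Hold-out validation: again, only a single aggregate number.
model.fit(X_train, y_train)
print("Hold-out accuracy: %.3f" % model.score(X_test, y_test))
```

Both numbers summarise performance over many instances at once, which is exactly why they cannot tell us why any individual prediction went right or wrong.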

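To connect this back to the explanation shown in the figure caption above, here is a minimal sketch of how such an explanation could be generated with the lime package. The TF-IDF + Naive Bayes pipeline on the 20 Newsgroups atheism/Christian subset is an assumption made to keep the example runnable; it is not necessarily the exact model behind the figure, and the practical example later in this post may use a different setup.

```python
# Sketch: explaining a text classifier's prediction with LimeTextExplainer.
# Assumptions: the `lime` package is installed, and a simple TF-IDF + Naive Bayes
# pipeline is trained on the 20 Newsgroups atheism/Christian subset.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ["alt.atheism", "soc.religion.christian"]
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

# The pipeline maps raw text to class probabilities, which is
# exactly the classifier function LimeTextExplainer expects.
pipeline = make_pipeline(TfidfVectorizer(lowercase=False), MultinomialNB(alpha=0.01))
pipeline.fit(train.data, train.target)

class_names = ["atheism", "christian"]
explainer = LimeTextExplainer(class_names=class_names)

# Explain one test document: which words push the prediction towards
# atheism and which towards christian.
idx = 0
exp = explainer.explain_instance(
    test.data[idx], pipeline.predict_proba, num_features=6
)
print("Predicted class:", class_names[pipeline.predict([test.data[idx]])[0]])
print(exp.as_list())                   # [(word, weight), ...]; the sign shows which class a word supports
exp.save_to_file("explanation.html")   # renders highlighted words like those in the figure
```

The returned (word, weight) pairs are the per-instance feature contributions; rendered as HTML, they appear as highlighted words similar to the figure above.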