
The Intuition Behind the “LIME” Concept in AI & ML

by Sanjay Kumar, December 27th, 2022



LIME stands for “Local Interpretable Model-Agnostic Explanations”. It is a technique used by researchers in machine learning to explain a model's predictions and thereby build trust and confidence in the algorithm.


Now, an obvious doubt arises: we already have many evaluation metrics to measure the performance of models, whether they belong to the supervised or unsupervised category. So, why do we need LIME?


The problem with mathematical evaluation metrics is that they cannot explain the reasons behind a prediction in a simple, understandable way. LIME is not a replacement for traditional evaluation metrics; rather, it is a set of guidelines that adds interpretable information on top of the statistical metrics to increase trust in the model.


The intuition behind the “LIME” concept in AI & ML

Let us consider that we have built an image classification algorithm to predict whether a person is male or female when we provide a photo as input:


Image source: Illustrated by the author


The model will predict Image 1 as male and Image 2 as female, and we judge its performance using evaluation metrics like accuracy, precision, recall or F1 score.


Now, what is the reasoning that leads the model to predict the input as male or female? The evaluation metrics alone cannot tell us.


Seems confusing?


Let us have another example,


John is suffering from a fever. He consulted a doctor to check whether his fever is due to coronavirus or not. He explained the symptoms he is currently facing, such as sneezing, body pain, headache, etc. The doctor conducted some medical tests and told John, in a single word (Yes/No), that it was not a corona fever but just a usual one. Now, do you think John will be satisfied with the doctor's inference just because the doctor is an expert in the medical field?


Image by Max from Pixabay


No, he will be somewhat confused and not confident, because the response from the doctor looks like a naïve binary Yes/No answer.


The above case is similar to any machine learning model. Most models deployed in production environments are black boxes, i.e., the people who consume them as end users do not know the mathematical logic coded inside the model for making predictions. Acting on the predictions of such a model may result in losses even when the evaluation metrics look confidently positive.


Let us consider that the doctor says to John:


“John, I have conducted an antigen test to diagnose whether SARS-CoV-2 is present in your body or not. Luckily, it is not present. Additionally, I have also done serology tests to detect antibodies that indicate whether you have been infected with the virus or not. The result is negative. You are not affected by the coronavirus…Stay happy!!!”


Now, John will be very confident about the inference provided by the doctor, and his trust in the doctor has increased. He will no longer be confused about his health condition, whereas he was a little confused when the doctor gave him a naïve binary response. This is the actual difference between LIME and the traditional acceptance of a prediction. Instead of simply taking the prediction recommended by the model, LIME pushes us to interpret the explanation behind the model's reasoning.


In our image classification example, the user will be much more satisfied if the model states the reason behind the prediction instead of naively classifying the image as male or female, such as:

  • The person in Image 1 has a beard and moustache. He seems to have a hard skin tone and short hair. Hence the person should be a male (Accuracy – 90% and precision – 85%).
  • The person in Image 2 has no beard or moustache. She seems to have a soft skin tone and long hair. Hence the person should be a female (Accuracy – 90% and precision – 85%).


Now, the prediction seems more easily interpretable and acceptable as the model explains the reason behind the predictive inference.


This is nothing but LIME. Let us have a look at the intuition and interpretation behind LIME. LIME is the abbreviated form of:

  • Local
  • Interpretable
  • Model-Agnostic
  • Explanations


The “Explanation” is nothing but a clarification: a descriptive statement that briefly conveys the clear-cut and unbiased reasons behind the predictive inference.

Image by Mariana Anatoneag from Pixabay


To be model “agnostic”, LIME does not dig into the internal workings of the algorithm. Instead, to find out which parts of the interpretable input contribute to the prediction, we perturb the input around its neighbourhood and see how the model's predictions behave.


For example,

Let us consider that we are building a model to analyse the sentiment about a film, i.e., whether it is good or bad, based on the reviews floating around on the internet. Consider that a reviewer writes, "I loved the movie". LIME recommends perturbing the sentence into variants like "I loved", "I movie", "loved the movie", "I the movie", etc. and feeding them to the model. After a few such iterations, it becomes clear that the word "loved" is the most important word for measuring the reviewer's sentiment, while the rest of the words like "I", "movie", "the", etc. are not so important from the algorithm's perspective.
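
To make this concrete, here is a minimal, self-contained Python sketch of the perturbation game described above. The predict_positive_probability function is a hypothetical stand-in for a trained sentiment model; the real LIME library is shown a little later.

```python
# A toy illustration of the LIME-style perturbation idea (not the official library).
# predict_positive_probability is a hypothetical stand-in for a trained sentiment
# model that maps a piece of text to the probability of a positive review.

def predict_positive_probability(text: str) -> float:
    # Fake black-box model, used only so that the sketch runs end to end.
    return 0.9 if "loved" in text.lower() else 0.3

def word_importance(sentence: str) -> dict:
    """Estimate each word's importance by removing it and measuring
    how much the model's prediction changes."""
    words = sentence.split()
    baseline = predict_positive_probability(sentence)
    importance = {}
    for i, word in enumerate(words):
        perturbed = " ".join(words[:i] + words[i + 1:])  # e.g. "I the movie"
        importance[word] = abs(baseline - predict_positive_probability(perturbed))
    return importance

print(word_importance("I loved the movie"))
# "loved" gets by far the largest score; "I", "the" and "movie" barely matter.
```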

Image by Gerd Altmann from Pixabay


“Interpretable” refers to the characteristic of LIME that makes the reasons and explanations understandable to the user in a very lucid and easy manner. There should be no high-level mathematics or scientific concepts that make it difficult for a naïve user to understand the reason behind the model's predictions.



For example,

With word embeddings, for instance, a model's inferences are often explained through complex charts that are neither visually appealing nor easy to understand. LIME can present that kind of information in simple forms (like sentences and words in the English language).


The output of LIME is a list of explanations, reflecting the contribution of each feature to the prediction of a data point. This provides local interpretability, and it also allows us to determine which feature changes will have the most impact on the prediction.
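
In Python, this list can be obtained from the open-source lime package (one of the inbuilt packages mentioned later). The sketch below is illustrative rather than authoritative: the toy reviews, labels and class names are invented, and any classifier exposing class probabilities could replace the TF-IDF plus logistic regression pipeline.

```python
# Sketch using the open-source lime package (pip install lime scikit-learn).
# The toy reviews, labels and class names are invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

reviews = ["I loved the movie", "What a wonderful film", "I hated the movie",
           "Terrible acting and a boring plot", "Loved every minute of it",
           "Boring and terrible"]
labels = [1, 1, 0, 0, 1, 0]  # 1 = good review, 0 = bad review

# Black-box model: TF-IDF features + logistic regression.
black_box = make_pipeline(TfidfVectorizer(), LogisticRegression())
black_box.fit(reviews, labels)

explainer = LimeTextExplainer(class_names=["bad", "good"])
explanation = explainer.explain_instance(
    "I loved the movie",      # the instance we want explained
    black_box.predict_proba,  # function returning class probabilities
    num_features=4,           # how many words to include in the explanation
)

# A list of (word, weight) pairs: the local contribution of each word.
print(explanation.as_list())
```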

Image by Mariana Anatoneag from Pixabay


Let us see this example from a visual perspective-

Imagine that we are working on a binary classification problem.

Consider that the blue circle denotes the decision boundary (prediction area) of this model in 2 feature dimensions – X and Y.

Image source: Illustrated by the author (Fig 1)



The green diamond is a particular data point, and the red circles are perturbed data points (the ones we generated/tweaked). The size of each circle indicates its proximity to the original data point. The perturbed data points can be assigned priority weights according to this proximity.

Image source: Illustrated by the author (Fig 2)


Now, we have to find how the model's prediction changes for the perturbed inputs. For that, we train a simple surrogate model on the perturbed data points, using the black-box model's predictions on those points as the target and weighting each point by its proximity to the original instance.
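
A rough NumPy/scikit-learn sketch of these steps (perturb, weight by proximity, fit a simple surrogate on the black-box predictions) for the two-feature picture above might look as follows; the black_box_predict function, the kernel width and the coordinates of the green diamond are all invented for illustration.

```python
# A rough sketch of the local surrogate idea for the two-feature (X, Y) picture.
# black_box_predict is a stand-in for whatever complex classifier produced the
# blue decision boundary; it returns the probability of the positive class.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box_predict(points):
    # Hypothetical black box: probability highest inside a circle of radius 2.
    return 1.0 / (1.0 + np.exp(np.linalg.norm(points, axis=1) - 2.0))

instance = np.array([1.5, 0.5])  # the "green diamond"

# 1. Perturb the instance around its neighbourhood (the "red circles").
perturbed = instance + rng.normal(scale=1.0, size=(500, 2))

# 2. Weight each perturbed point by its proximity to the original instance.
distances = np.linalg.norm(perturbed - instance, axis=1)
kernel_width = 0.75
weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

# 3. Fit a simple, interpretable surrogate (a weighted linear model) to the
#    black box's predictions on the perturbed points.
surrogate = Ridge(alpha=1.0)
surrogate.fit(perturbed, black_box_predict(perturbed), sample_weight=weights)

print("local feature weights for (X, Y):", surrogate.coef_)
```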


Image source: Illustrated by the author



These explanations and findings may not hold globally, but they are acceptable for the local region around the green diamond.


Here is the standard step-by-step procedure-

Image source: Illustrated by the author


“Local” refers to local trust or local belief - i.e., we want the explanation to reflect the behaviour of the classifier "around" the instance being predicted. The fidelity measure (how well the interpretable model approximates the black box predictions) gives us an idea of how reliable the interpretable model is in explaining the black box predictions in the neighbourhood of the data point of interest.
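
As a hypothetical, self-contained illustration of such a fidelity measure, a proximity-weighted R-squared can be computed between the black-box predictions and the surrogate's predictions over the perturbed neighbourhood; all numbers below are made up.

```python
# Hypothetical illustration of a local fidelity score: a proximity-weighted R^2
# comparing the black-box predictions with the surrogate's predictions on the
# perturbed neighbourhood. All numbers below are made up.
import numpy as np

def local_fidelity(blackbox_preds, surrogate_preds, proximity_weights):
    """Weighted R^2: values close to 1.0 mean the surrogate mimics the
    black box faithfully in the neighbourhood of interest."""
    w = np.asarray(proximity_weights, dtype=float)
    y = np.asarray(blackbox_preds, dtype=float)
    y_hat = np.asarray(surrogate_preds, dtype=float)
    y_mean = np.average(y, weights=w)
    ss_res = np.sum(w * (y - y_hat) ** 2)
    ss_tot = np.sum(w * (y - y_mean) ** 2)
    return 1.0 - ss_res / ss_tot

print(local_fidelity([0.9, 0.8, 0.3, 0.2],      # black-box predictions
                     [0.85, 0.75, 0.35, 0.25],  # surrogate predictions
                     [1.0, 0.8, 0.6, 0.4]))     # proximity weights -> close to 1.0
```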

Image by Gerd Altmann from Pixabay


This is an example mentioned in the research paper. Here, the classifier predicts "Electric Guitar" even though the image contains an acoustic guitar. The explanation image shows the reason for its prediction. (For image classification problems, the library is still not able to explain the reasons in sentence format; currently, the reasons are presented as images. There are some libraries in Python and R which can explain the reasons in sentence format.)

Image source: Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin, “Why Should I Trust You?” - Explaining the Predictions of Any Classifier


This is another example from the research paper, where the model is built to predict whether a paragraph is about Christianity or Atheism based on the words used in it.

Image source: Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin, “Why Should I Trust You?” - Explaining the Predictions of Any Classifier


First of all, to see how this is implemented in LIME, we need to understand the difference between model-specific and model-agnostic interpretation tools.


Model-specific interpretation tools

Model-specific interpretation tools are specific to a single model or group of models.  These tools depend heavily on the working and capabilities of a specific model.


For example,

  • Linear regression depends upon parameters like slopes and coefficients to make predictions.
  • A decision tree uses a measure called entropy to find the best split at each node and thereby increase the predictive ability of the model.
  • Neural networks have millions of parameters connecting the neurons and making a prediction.
  • All of these parameters and techniques are specific to their model architecture. These rules, guidelines or parameters cannot be applied globally for making interpretations; they are called model-specific interpretations, as illustrated in the short sketch below.
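
For instance, here is a small scikit-learn illustration (with made-up data) of such model-specific handles; each attribute exists only because of the particular architecture of its model and does not carry over to other model families.

```python
# Model-specific interpretation handles (a small scikit-learn illustration with
# made-up data): each attribute below exists only because of the particular
# architecture of its model.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

X = np.array([[1, 4], [2, 3], [3, 8], [4, 1], [5, 9], [6, 2]])
y_reg = np.array([5.0, 6.5, 12.0, 7.5, 15.0, 9.0])
y_clf = np.array([0, 0, 1, 0, 1, 0])

# Linear regression exposes slopes/coefficients and an intercept.
lin = LinearRegression().fit(X, y_reg)
print("coefficients:", lin.coef_, "intercept:", lin.intercept_)

# A decision tree built with entropy-based splits exposes feature importances.
tree = DecisionTreeClassifier(criterion="entropy").fit(X, y_clf)
print("feature importances:", tree.feature_importances_)
```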


Model-agnostic interpretation tools

Model-agnostic tools can be used on any machine learning model, no matter how complicated. These methods usually work by analysing pairs of feature inputs and outputs. Model-specific interpretations tend to be more accurate because they have direct access to the internals of a complex, well-built architecture such as linear regression, a random forest or a neural network, whereas model-agnostic interpretations tend to be less accurate because they rely only on the patterns observed between inputs and outputs.


So how can we build model-agnostic interpretable models that don’t compromise on accuracy?

One of the solutions to this problem is the global surrogate method.


This method suggests using an additional model, called an "interpretable model" or "surrogate model", which can explain the predictions made by the black-box model.


For example,

If we want to build an ML model for predicting whether a student will pass the exam or not, then we will have 2 models:

  1. Black box model (To predict whether the student will pass the exam or not)
  2. Surrogate model (To interpret the approximate logic of the black box used for the decision making).


The black box model will be a complex algorithm and the surrogate model will be a simple algorithm.


For example,


We need to build a machine learning model to predict whether a student will pass the exam or not. We have hundreds of independent variables, so we built a neural network, which is highly complex and not easily interpretable. Now, after getting the results, how can we interpret which features and characteristics of the predictor variables resulted in the pass/fail of the students?

For that, we can build a simple decision tree, which is far less complicated than a neural network and easily interpretable. Using this decision tree, it is easy to find which factors mainly influenced the pass/fail outcome of the students in the examination, such as "Marks in the assignments", "Marks in the internal exams", "Attendance percentage", etc.


This is done by training the decision tree on the predictions of the black-box model. Once it approximates those predictions with good enough accuracy, we can use it to explain the neural network.

Here, the neural network is the black-box model and the decision tree is the surrogate model. Hence, we can draw conclusions about the black-box model by interpreting the surrogate model.
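
A hedged scikit-learn sketch of this workflow is shown below; the feature names and the synthetic data are invented purely to mirror the student example.

```python
# Global surrogate sketch (synthetic data, invented feature names).
# Black box: a neural network; surrogate: a shallow decision tree trained on
# the black box's own predictions rather than on the true labels.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
feature_names = ["assignment_marks", "internal_exam_marks", "attendance_pct"]

X = rng.uniform(0, 100, size=(500, 3))
y = ((0.4 * X[:, 0] + 0.4 * X[:, 1] + 0.2 * X[:, 2]) > 50).astype(int)  # pass/fail

# 1. Train the complex black-box model on the real labels.
black_box = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32, 16),
                                        max_iter=2000, random_state=0))
black_box.fit(X, y)

# 2. Train the simple surrogate on the black box's *predictions*, not the labels.
black_box_preds = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box_preds)

# 3. Check how faithfully the surrogate mimics the black box before trusting it.
print("fidelity:", accuracy_score(black_box_preds, surrogate.predict(X)))

# 4. Read the interpretable rules off the surrogate.
print(export_text(surrogate, feature_names=feature_names))
```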


The following image shows the steps to build a surrogate model-

Image source: Illustrated by the author


The above procedure is based on the assumption that as the complexity of a model increases, its accuracy increases and its interpretability decreases. Hence, it makes sense to make predictions using a complex machine learning algorithm to obtain highly accurate predictions, and then use the same inputs together with those predictions to train a simple machine learning model, so that the reasoning behind the predictions can easily be inferred from the simple model's less complicated architecture.


A disadvantage of the global surrogate method is that it can only explain the global features and rules behind the black-box model's predictions; it cannot interpret how a single prediction was made for a given observation.


For example,

If we are predicting the exam results (pass/fail) of thousands of students in a university, then a global surrogate method can show which factors globally influence the results, such as "Marks in internal exams", "Marks in projects", "Attendance percentage", etc.

But it cannot interpret the reason why “John (Roll number -198) belonging to MBA final year” was not able to clear the exam.


LIME's model-agnostic approach becomes a key attraction in exactly these kinds of scenarios, because it can explain individual predictions locally.


Advantages of LIME

  • We can change the black box model and still use the same local surrogate model because it does not have any heavy impact on interpretability.

    For example,

    In our above example (Fig 1 and Fig 2), let us consider that the blue-coloured decision boundary (binary classification problem) is created by Kernel SVM and we use a decision tree as the local surrogate model. We can change the black box model from Kernel SVM to Random forest and still use the same decision tree as a surrogate model for local interpretation. Sometimes, the local surrogate model will give much more information about the feature importance than the black box model which may be a helpful global inference.


  • LIME can be applied to any kind of data, such as raw text, images, data frames, etc.


  • Inbuilt packages are available in both R and Python, as shown in the short import sketch below.
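
For reference, the Python package is simply called lime (the R package of the same name offers a similar interface), and it ships explainers for the main data types mentioned above:

```python
# pip install lime
# The package ships explainers for the main data types mentioned above.
from lime.lime_text import LimeTextExplainer        # raw text
from lime.lime_tabular import LimeTabularExplainer  # data frames / tabular data
from lime.lime_image import LimeImageExplainer      # images
```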

Disadvantages of LIME

  • Simple perturbations might not be enough, while complex perturbations may introduce bias into the model's explanation. Hence, well-designed algorithms or techniques are required to introduce perturbations.


  • Although LIME is model-agnostic, the type of modifications that need to be performed on the data is use-case specific.

    For example, a model that predicts sepia-toned images to be retro cannot be explained by the presence or absence of superpixels.


  • Non-linearity in local regions of specific datasets makes the model difficult to interpret locally. Such models are not easily handled by LIME.

Conclusion

The "explainability" of machine learning outputs from complex black-box models is always a concern for researchers in the analytics field. To an extent, LIME can be considered a solution to this problem for many use cases. Many more libraries are being developed as part of the ongoing research and development in the analytics field, and LIME is a successful initial step in this learning journey. I hope you got a brief understanding of the fundamental principles of LIME from this article. For more in-depth information, you can refer to the official research paper mentioned in the reference section.

References