update eli5 post
---
title: Painless Explainability for NLP/Text Models with LIME and ELI5
type: post
description: An introduction to LIME model explainability in the context of NLP, and how to use the ELI5 library - a painless way to apply LIME local explainability to almost any model.
resources:
  - name: feature
    src: images/scrabble.jpg
date: 2022-01-13T07:47:11+00:00
url: /2022/01/13/painless-explainability-for-text-models-with-eli5
tags:
  - machine-learning
  - work
  - explainability
---

# Contents

- [Contents](#contents)
- [Introduction](#introduction)
- [Understanding LIME](#understanding-lime)
  - [Local](#local)
  - [Interpretable](#interpretable)
  - [Model-Agnostic](#model-agnostic)
  - [Explanation](#explanation)
- [Usage Examples](#usage-examples)
  - [ELI5 and Sci-kit Learn](#eli5-and-sci-kit-learn)
  - [ELI5 and Transformers/Huggingface](#eli5-and-transformershuggingface)
    - [Loading The Model](#loading-the-model)
    - [Defining the Interface with ELI5](#defining-the-interface-with-eli5)
    - [Getting an explanation](#getting-an-explanation)
  - [ELI5 and a Remotely Hosted Model / API](#eli5-and-a-remotely-hosted-model--api)

# Introduction

Explainability of machine learning models is a hot topic right now, particularly in deep learning where models are that bit harder to reason about and understand. These models are often called 'black boxes' because you put something in, you get something out, and you don't really know how that outcome was achieved. Being able to explain a machine learning model's decisions in terms of the features passed in is useful from a debugging standpoint (identifying features with weird weights), and with legislation like [GDPR's Right to an Explanation](https://www.privacy-regulation.eu/en/r71.htm) it is also becoming commercially important to be able to explain why models behave the way they do.

In this post I'll give a simplified overview of how LIME works (I may take some small technical liberties and use some contrived examples to demonstrate the mechanisms and phenomena involved, apologies), and then briefly show how LIME can be applied to a scikit-learn SVM-based sentiment model and to a huggingface/torch sentiment model.

{{<figure src="images/scrabble.jpg" caption="Understanding the individual contributions of words is useful when working with NLP classification models">}}

# Understanding LIME

LIME stands for **L**ocal **I**nterpretable **M**odel-agnostic **E**xplanations and is a technique proposed by [Ribeiro et al.](https://arxiv.org/abs/1602.04938) in 2016. The basic premise is that for a given input example (in an image classifier we're talking about 1 image, in a text classifier 1 unit of text such as a paragraph or a sentence, in a model trained on tabular data 1 row from that table), LIME can approximate how much of an effect each of the features extracted from the input has on the final output (how important is a cluster of pixels in an image? how important are specific words/phrases in a sentence? how important is each column in that row of numbers?).

For a given example, both contributing and negating features are highlighted (reasons for and against that decision).

{{<figure src="images/figure1.png" caption="Figure 1 from the [Ribeiro et al](https://arxiv.org/abs/1602.04938) paper giving an overview of how LIME works">}}

## Local

The local aspect of LIME is described in [the paper](https://arxiv.org/abs/1602.04938):

> ...Although it is often impossible for an explanation to be completely faithful unless it is the complete description of the model itself, for an explanation to be meaningful it must at least be locally faithful, i.e. it must correspond to how the model behaves in the vicinity of the instance being predicted...

This is a really important constraint of LIME: it offers excellent example-specific explanations that work well for pockets of similar data points, but these explanations can't necessarily be generalised to the whole of the model under examination. The authors of the paper illustrate this limitation in a diagram:

{{<figure src="images/figure3.png" caption="Figure 3 from the [Ribeiro et al](https://arxiv.org/abs/1602.04938) paper attempts to illustrate how LIME can offer explanations within a local neighbourhood of data samples">}}

This is especially important in tasks that are highly context dependent (like text classification). Here's a contrived example from a spam detection use case. Take the words "7 million usd" as in:

>Sir,
>
>I am a wealthy widow and if you help me I will pay you 7 million usd
>
>Best Regards

and also

>Kevin,
>
>the new term sheet from the investors is in, they're offering 7 million usd for 5% equity,
>
> Brian Smith <br/>
> Head of Mergers & Acquisitions

In the first example, the words "7 million usd" contribute to the suspicion that this is a scam in the presence of "wealthy widow" and "help me". In the second example the words "7 million usd" aren't as important; they're words that you'd probably expect in a legitimate email about an investment opportunity from your colleague in Mergers.

The point I'm trying to make is that it's very difficult to come up with good general rules about which words are important without any context (and indeed, if you can do that, then you probably don't need machine learning: you can just build a rule-based system that checks for the presence or absence of words on a list). The overall decision function of "spam or not spam" is much more complicated than "these words are good and these words are bad", but for a certain set of "spammy" examples we can certainly say which words are more spammy and which words are less spammy. This is analogous to the concepts at play in LIME too.

Therefore when we're using LIME, we should avoid saying things like "the model seems to consider the words 'million' and 'usd' spammy" and instead say things like "in cases similar to the widow email, it looks like the words 'million' and 'usd' contributed to the decision that this email was spam in the absence of any other redeeming words".

## Interpretable

Some machine learning models, like [linear models](https://scikit-learn.org/stable/modules/linear_model.html) and [Decision Trees](https://scikit-learn.org/stable/modules/tree.html), are inherently interpretable. For linear models we can measure the parameter coefficients (how much weight each feature carries when calculating the decision boundary). For decision trees we can look at how early a feature appears in the tree, since [information gain](https://en.wikipedia.org/wiki/Information_gain_in_decision_trees) places the features that tell us most about the final classification near the top of the tree, where they affect more data points.

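To make that concrete, here's a minimal sketch (with a tiny made-up dataset, not anything from the post) of reading per-word weights straight out of a fitted scikit-learn linear model:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# tiny, made-up dataset purely for illustration
texts = ["win a free prize now", "free money now",
         "meeting agenda attached", "see you at the meeting"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

# each vocabulary word gets exactly one coefficient:
# positive values push towards 'spam', negative values push away from it
for word, weight in zip(vec.get_feature_names_out(), clf.coef_[0]):
    print(f"{word:>10s} {weight:+.3f}")
```
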
LIME exploits these interpretable models in order to explain the local context around a given input example. We perturb (slightly change) the input example and use the black-box model under analysis to make predictions. As words are added or removed from the input, the output from the black-box model changes slightly (in the [again contrived] example below, removing the word 'love' from the movie review reduces the probability that the review is positive).

{{<figure src="images/perturbation.png" caption="LIME perturbs input examples by changing words around in order to understand the individual contributions of words to an outcome">}}

These perturbed inputs, together with the corresponding outputs from the 'black box' model that we're analysing, are then used as a training set for the local, interpretable model.

For text models, LIME uses [Bag-of-Words](https://en.wikipedia.org/wiki/Bag-of-words_model) (BoW) representations of the perturbed input as the features for the local model.

We can then use the interpretable information from the local model (parameter coefficients or a feature's position in the decision tree) to approximately interpret the effect that the different words have on the bigger model, since each word in the local BoW vocabulary will have an associated coefficient.

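Here's a rough sketch of that whole loop (perturb, predict, fit a local surrogate, read its coefficients), written by hand to illustrate the mechanism rather than to reproduce ELI5's actual implementation; `black_box_predict` stands in for whatever model is being explained:

```python
import random
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

def perturb(text: str, n_samples: int = 500):
    """Generate variants of the text with random words dropped."""
    words = text.split()
    samples = []
    for _ in range(n_samples):
        keep = [w for w in words if random.random() > 0.3]  # drop roughly 30% of words
        samples.append(" ".join(keep) if keep else words[0])
    return samples

def explain_locally(text, black_box_predict, target_class=1):
    samples = perturb(text)
    # black_box_predict returns a [n_samples, n_classes] probability matrix
    probs = black_box_predict(samples)[:, target_class]
    # fit a simple, interpretable surrogate on BoW features of the perturbed texts
    vec = CountVectorizer()
    X = vec.fit_transform(samples)
    surrogate = Ridge().fit(X, probs)
    # the surrogate's coefficients approximate each word's local contribution
    return sorted(zip(vec.get_feature_names_out(), surrogate.coef_),
                  key=lambda pair: -abs(pair[1]))
```

Real LIME also weights each perturbed sample by its similarity to the original text, but the overall shape of the procedure is the same.
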
## Model-Agnostic

LIME's model agnosticism is one of its most useful attributes. As long as you know how to encode the input data and your model can provide probability distributions over its outputs, you can produce local explanations for any type of model. This is because the explanation comes from the local model and the BoW features therein, rather than from the black-box model.

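Concretely, the only thing LIME (via ELI5) needs from your model is a prediction function shaped roughly like this; the dummy below is purely illustrative and the function name is my own:

```python
import numpy as np
from typing import List

def predict_proba_for_texts(texts: List[str]) -> np.ndarray:
    """The contract LIME needs: a batch of texts in, a
    [len(texts), n_classes] matrix of class probabilities out.
    This dummy just returns a uniform distribution over 2 classes;
    a real implementation would call your model here."""
    n_classes = 2
    return np.full((len(texts), n_classes), 1.0 / n_classes)
```

The `model_adapter` function in the transformers example later in this post is exactly this contract, implemented for a BERT model.
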
In the section below I've provided some examples of how to use ELI5 with different types of models.

## Explanation

As we saw at the beginning of the post, the explanations that LIME produces for NLP models are usually presented as a set of per-word weights: each word in the input is highlighted according to how strongly it contributed to, or detracted from, the predicted class.

# Usage Examples

## ELI5 and Sci-kit Learn

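As a starting point, here's a minimal sketch of wiring a scikit-learn text classification pipeline into ELI5's `TextExplainer`; the training texts, labels and SVM settings below are placeholders rather than a real dataset, so treat this as a shape to follow rather than a finished example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

from eli5.lime import TextExplainer

# placeholder training data; in practice you'd train on a real labelled corpus
train_texts = ["the food was great", "lovely service", "a wonderful evening",
               "we will definitely come back", "awful experience", "the food was cold",
               "rude staff", "we waited an hour for a table"]
train_labels = ["positive", "positive", "positive", "positive",
                "negative", "negative", "negative", "negative"]

# SVC(probability=True) gives us the predict_proba that LIME needs
pipe = make_pipeline(TfidfVectorizer(), SVC(probability=True))
pipe.fit(train_texts, train_labels)

# explain a single prediction: TextExplainer perturbs the text, calls
# pipe.predict_proba on the variants and fits its own local model
te = TextExplainer(random_state=42)
te.fit("the food was great but the service was awful", pipe.predict_proba)
te.explain_prediction(target_names=list(pipe.classes_))
```

Because the pipeline bundles the vectorizer and the classifier together, `pipe.predict_proba` accepts raw strings, which is exactly what `TextExplainer` needs.
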
## ELI5 and Transformers/Huggingface

[Transformers](https://huggingface.co/docs/transformers/index) is an open source library from HuggingFace that provides an easy-to-use wrapper around PyTorch and TensorFlow, specifically aimed at transformer-based NLP models like BERT, RoBERTa, etc. In order to use ELI5 with Transformers, we need Python 3, [transformers](https://huggingface.co/docs/transformers/index) and a recent version of [pytorch](https://pytorch.org/) installed. You will probably want to run this code in a [Jupyter Notebook](https://jupyter.org/) so that you can see the pretty graphical explanations. Of course you'll also need the [eli5](https://eli5.readthedocs.io/en/latest/autodocs/lime.html#eli5.lime.lime.TextExplainer) library installed too.

This example will work on a machine without a GPU, provided you aren't planning on training your transformer model from scratch. I am using [this sentiment model](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment), which evaluates the sentiment/rating of reviews on a scale from 1 to 5 in English, Dutch, German, French or Spanish.

### Loading The Model

The following snippet of code simply loads the model into memory and sets up the tokenizer, ready for use with new text examples.

```python
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
import pandas as pd
from typing import List

# this is the name of the model we want to evaluate on
# huggingface.com/models or alternatively you could train your own
MODEL = "nlptown/bert-base-multilingual-uncased-sentiment"

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
```

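As a quick sanity check (my own addition, not part of the original listing), you can run a single review through the freshly loaded model; the `softmax` call mirrors the one used in the adapter function below:

```python
# tokenize one example and run it through the model
encoded = tokenizer(["The food was amazing!"], return_tensors='pt')
output = model(**encoded)

# turn the raw logits into a probability for each star rating (1-5)
probs = output[0].softmax(1).detach().numpy()
print(dict(zip(model.config.id2label.values(), probs[0].round(3))))
```
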
### Defining the Interface with ELI5

This snippet of code defines the all-important `model_adapter` function, which we use to interface between PyTorch and ELI5.

ELI5 expects to be able to pass in a list of perturbed texts and get back a set of probability distributions (a matrix with the shape [NUM_EXAMPLES, NUM_CLASSES]).

In our function we first encode the text into a BERT-compatible input format using the [tokenizer](https://huggingface.co/transformers/main_classes/tokenizer.html). Then we pass the encoded input to the model and receive some predictions.

Finally we apply `softmax()`, which converts the raw *logits* generated by the model into the nice smooth probability distributions that LIME is expecting to see.

You may be wondering about the for loop and the batching: ELI5 tries to get results for 5000 samples at a time (by default), and while that might be fine for a smaller, less powerful model, with a transformer we can't fit all of those examples into memory at once. Therefore we split the samples into batches of 64 at a time so that we don't end up running out of RAM.

```python
def model_adapter(texts: List[str]):

    all_scores = []

    for i in range(0, len(texts), 64):

        batch = texts[i:i+64]

        # use bert encoder to tokenize text
        encoded_input = tokenizer(batch,
                                  return_tensors='pt',
                                  padding=True,
                                  truncation=True,
                                  max_length=model.config.max_position_embeddings-2)

        # run the model
        output = model(**encoded_input)
        # by default this model gives raw logits rather
        # than a nice smooth softmax so we apply it ourselves here
        scores = output[0].softmax(1).detach().numpy()

        all_scores.extend(scores)

    return np.array(all_scores)
```

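Before handing the adapter to ELI5, it's worth calling it directly on a couple of strings to confirm it returns the [NUM_EXAMPLES, NUM_CLASSES] matrix described above (a quick check of my own, not part of the original listing):

```python
probs = model_adapter(["This was wonderful", "This was terrible"])
print(probs.shape)        # expect (2, 5) for this 1-5 star sentiment model
print(probs.sum(axis=1))  # each row should sum to roughly 1.0
```
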
### Getting an explanation

The last piece of the puzzle is to actually run the model and get our explanation. First we initialize our explainer object, then we pass in the text that we'd like to get an explanation for. `n_samples` gives the number of perturbed examples that LIME should generate in order to train the local model (more samples should give a more faithful local explanation at the cost of more compute and a longer wait). `random_state` is simply a number used to seed Python's pseudo-random number generator, which LIME uses to randomly decide which samples to pick; setting the random state explicitly is a good habit to get into in order to preserve the reproducibility of your models.

```python
from eli5.lime import TextExplainer

te = TextExplainer(n_samples=5000, random_state=42)
te.fit("""The restaurant was amazing, the quality of their
food was exceptional. The waiters were so polite.""", model_adapter)
te.explain_prediction(target_names=list(model.config.id2label.values()))
```

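As an optional extra (my suggestion rather than a step from the post), `TextExplainer` keeps some quality metrics for its local surrogate model which give a rough sense of how much to trust the explanation; assuming a reasonably recent version of eli5, they can be inspected like this:

```python
# score is the local model's fit quality; mean_KL_divergence measures how far
# the surrogate's predictions drift from the black-box model's (lower is better)
print(te.metrics_)
```
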
## ELI5 and a Remotely Hosted Model / API