diff --git a/brainsteam/content/posts/2022/01/13-01-painless-explainability-for-text-models-with-eli5/index.md b/brainsteam/content/posts/2022/01/13-01-painless-explainability-for-text-models-with-eli5/index.md
index 35afd8d..9554763 100644
--- a/brainsteam/content/posts/2022/01/13-01-painless-explainability-for-text-models-with-eli5/index.md
+++ b/brainsteam/content/posts/2022/01/13-01-painless-explainability-for-text-models-with-eli5/index.md
@@ -43,16 +43,15 @@ For a given example both contributing and negating features are highlighted (rea
 
 The local aspect of LIME is described in [the paper](https://arxiv.org/abs/1602.04938):
 
-> ...Although it is often impossible for an explanation to be completely faithful unless it is the complete description of the model itself, for an explanation to be meaningful it must at least be locally faithful, i.e. it must correspond to how the model behaves inthe vicinity of the instance being predicted...
+> ...Although it is often impossible for an explanation to be completely faithful unless it is the complete description of the model itself, for an explanation to be meaningful it must at least be locally faithful, i.e. it must correspond to how the model behaves in the vicinity of the instance being predicted...
 > 
-
 This is a really important constraint of LIME: it offers excellent example-specific explanations that work well for pockets of similar data points but these explanations can't necessarily be generalised for the whole of the model under examination. The authors of the paper also attempt to illustrate this limitation in a diagram:
 
 {{ }}
 
-
+#### To Spam or not to spam: that is the question
 
 This is especially important in tasks that are highly context dependent (like text classification). Here's a contrived example of a spam detection use case. Take the words "7 million usd" as in:
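To make the locality point concrete, here is a minimal sketch of how ELI5's `TextExplainer` produces a LIME-style local explanation for a single "7 million usd" sentence. The scikit-learn pipeline, training sentences and labels below are invented purely for illustration and are not taken from the post.

```python
# Sketch only: a toy spam classifier plus a LIME-style local explanation
# via eli5. The sentences and labels below are contrived examples.
import eli5
from eli5.lime import TextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, made-up training set: 1 = spam, 0 = not spam
texts = [
    "You have won 7 million usd, claim your prize now",
    "Send your bank details to receive 7 million usd today",
    "Congratulations, free prize money is waiting for you",
    "The quarterly budget was approved at 7 million usd",
    "Our client agreed a 7 million usd acquisition price",
    "Meeting moved to 3pm, agenda attached",
]
labels = [1, 1, 1, 0, 0, 0]

pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
pipe.fit(texts, labels)

# TextExplainer perturbs this one sentence, queries the black-box
# pipeline's predict_proba on the perturbed samples, and fits a simple
# white-box model to them, so its weights are only claimed to be
# faithful in the vicinity of this particular example.
te = TextExplainer(random_state=42)
te.fit("Claim your 7 million usd prize now", pipe.predict_proba)
print(eli5.format_as_text(te.explain_prediction(target_names=["ham", "spam"])))
```

Repeating the last three lines with a benign sentence such as "The quarterly budget was approved at 7 million usd" would typically assign different weights to the very same tokens, which is exactly the local (rather than global) faithfulness described above.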