diff --git a/brainsteam/content/posts/2022/01/13-01-painless-explainability-for-text-models-with-eli5/index.md b/brainsteam/content/posts/2022/01/13-01-painless-explainability-for-text-models-with-eli5/index.md
index 5d793b0..35afd8d 100644
--- a/brainsteam/content/posts/2022/01/13-01-painless-explainability-for-text-models-with-eli5/index.md
+++ b/brainsteam/content/posts/2022/01/13-01-painless-explainability-for-text-models-with-eli5/index.md
@@ -104,7 +104,7 @@ In the section below I've provided some examples of how to use ELI5 with some di
 
 Explanations that are produced by LIME for NLP models are expressed in terms of which words/phrases were considered as the biggest contributing factors towards a class decision by the model.
 
-If you look at the results in Jupyter you'll get blue and green highlights over the text input showing the degree to which each word contributed (green) or reduced (red) the likelihood that the input example is from the class under the microscope. In the example below you can see that kidney stones and medication are keywords that the model has learned can be used to classify examples in this neighbourhood (remember these explanations don't apply globally) as medical and that the presence of these words detracts from the likelihood that the email is about religion or graphic design.
+If you look at the results in Jupyter you'll get red and green highlights over the text input showing the degree to which each word contributed (green) or reduced (red) the likelihood that the input example is from the class under the microscope. In the example below you can see that kidney stones and medication are keywords that the model has learned can be used to classify examples in this neighbourhood (remember these explanations don't apply globally) as medical and that the presence of these words detracts from the likelihood that the email is about religion or graphic design.
 
 {{
 }}
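
For context, the green/red word highlights described in the corrected paragraph come from ELI5's LIME-based `TextExplainer` rendered in a Jupyter notebook. Below is a minimal sketch of how such an explanation might be produced; the pipeline, newsgroup categories and example document are assumptions for illustration, not the post's exact code.

```python
# Minimal sketch (assumed setup): train a simple scikit-learn text classifier
# on a few 20 Newsgroups categories, then ask eli5's LIME-based TextExplainer
# to highlight which words push the prediction towards or away from a class.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

from eli5.lime import TextExplainer

categories = ['sci.med', 'soc.religion.christian', 'comp.graphics']
train = fetch_20newsgroups(subset='train', categories=categories)

# Any model exposing predict_proba can be treated as the black box to explain.
pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipe.fit(train.data, train.target)

# Hypothetical input document used only for this illustration.
doc = "I have been suffering from kidney stones and my doctor gave me new medication."

# TextExplainer perturbs the document, fits a local surrogate model around it
# and, in Jupyter, renders the green (contributing) / red (detracting) highlights.
te = TextExplainer(random_state=42)
te.fit(doc, pipe.predict_proba)
te.show_prediction(target_names=train.target_names)
```

Because the surrogate model is fitted only on perturbations of this one document, the resulting weights describe the local neighbourhood rather than the classifier's global behaviour, which is why the post stresses that the explanations don't apply globally.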