diff --git a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/assets/simple_classifier.xml b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/assets/simple_classifier.xml
index 9253ca9..940d0cb 100644
--- a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/assets/simple_classifier.xml
+++ b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/assets/simple_classifier.xml
@@ -1 +1 @@
-zVdZc9owEP41nmkfYIyFOR45Qo9MaTpkpvRRsYWtibBcWYDpr+9Klm8TkrTTFBiQPmkP7X6rNRZa7NMPAsfhF+4TZjm2n1poaTnOYOg4lvrY/jlDps4wAwJBfbOpBDb0FzGgbdAD9UlS2yg5Z5LGddDjUUQ8WcOwEPxU37bjrG41xgFpARsPszb6nfoyzNCJMy7xj4QGYW55MJpmK3ucbzYnSULs81MFQjcWWgjOZTbapwvCVPDyuGRyqwurhWOCRPI5Ar31dH+0v63RaTW5jZJ1GvPbntFyxOxgDmw5Iwb65uFARZDRINL46OdBOTr3wBoR5RxGgfrd5nIPIodyBFxSyupgnAOfEpXSkAii7EVnGdIogOEDkdoQrOEIftbU4wyrzQuVMseercEt+zMP1eq9wEdgBbbQCmYr7KkdX3c7df5EK1ZfnqQ80kHzHonf7/efdDouMZ0/ec5JIUmqQyT3DIABDBNQqfxGS7ec3fMYgB7kBc1PIZVkEyvH0PIEtQIYPxKxY5oTIfV9ApGeC36IfKJSpqR2lLEFZ1xou2jnqrcyIAV/JJWVkX4pCR7JCp69ijOAQfD9IoEGBS2hngnfEynOsMUIjAyRTSU7QzM/lXVR7AkrNTExGDalGBSaS7bCwBD2BeR1OsjbyBSJ/Jm6BWDmAXsS6tXzlgkQv3UJXA1K5dRux6FzTBCGJT3W1XdFwli44xQMFzFHTiPozWAm/CA8YqSq1d9UNL6iSGIRENlSpBNTHPv1uXJbuVq9275v5aurTCrpwkmcXfI7mqoi0XTfGPFBPs+aCDSav0L7VuQ6eD/5l7Qf/TntSUrltjL+oe6bvmtmy9RcP3pyNpPnlgo4ojl5jQxvVlLTej5d+5UlNZy8bUmNL/Zunx6b/UwVRo0jeQNXC71El8wMNjjDOG1393OlLWaaWt2yw+ZlN7yiQ5WWkG0vFioXTeN3PKE630/70NnDay5c7eL54455yEFzdXFQeBicmYU9tGkl3tnOG627eTP9T628Rd2h03dbt9oAdVTh2H3xtQbT8jE3Y3/5ZwHd/AY=
\ No newline at end of file
+5VhZb+IwEP41kXYfQMHOAY8cpd2tSg8qLX1apYmTWA1x1jEQ9tevnTjkpNATrdpWxfPZHtsz3xxCgeNlck6tyL8iDgoUoDqJAicKAD0NAEX8qc42QwbQyACPYkcuKoA5/oskqEp0hR0UVxYyQgKGoypokzBENqtgFqVkU13mkqB6amR5qAHMbStoor+ww/wM7QOzwC8Q9vz85J4xyGaWVr5YviT2LYdsShA8U+CYEsKy0TIZo0AYL7dLtm+6Z3Z3MYpCdsyGzmywXKu3M7iZ9i/DeJZE5LIjtaytYCUfrAAj4PpGfk9YMMBemOLGn5W46MjmpyFayHzkic9Fvu+R5lCO8CsJZVUwyoEfsXCpjygS54Vb5uPQ48NHxNKD+JwV8o8ZtklgicVj4TKgDmf8WupP4ovZe2qtOSssBU65NLVsseLadcX741Sx+GczTMLUaPYTcrrd7rOXjgos9R/b5qRgKElNxJYBB3p8GHOV4t5wohfSPYk40OF+gaONjxmaR+JicLLhscIxskbUDVJO+NhxELf0iJJV6CDhMrHLxUEwJgGh6bnQ1cWvOIBR8oRKM0b6I3aQkJXw7Gf3hjJbctcjyp9TgiR7zhFZIka3fElSDUkZyUCT8qaIC2BIzC/FRF9ilgxFb6e5YCsfSMK+gLyghbw1T6HQGYoswCWbsyfGdtVve82CnEpeaBql9Gq95dE5RlFgMbyuZpM2S8gTbgjmN9nZHJg1o5uDrl5VEpMVtZHcV47/mioIDqpiFvUQa6hKnbN7+lH+cs3r2xA9nM3Qb9Ix7i6uHubTlmQz/bb43vBZW6iUXGbFUZboXZyIQEkpP5fbe7mcFRKg7Xzc4HmL2/dS36jbroX7/Q+ifqsp34H6KMFsURo/iJzT1aU0SWQKSoWtFFpjozV+MlYeIsOpwsrQqv7U1Zqjjg0qE9aIUff4B4cU3Fu/Hbyu1zQRGBWO5EVcTHTiNGSGfAHQoqRZ4bel0phpalTMljP3X8PeVaniJKiq47HwRf3wGxLj1N/P36G1jleucLCS5y2PbHTgSCQOzBvCoZxY8lIttreW9Fr5rmemdy7nb8ppDepqIK8HpSA0W4LQ1PfH25uymtZg8/l/UiAaxfXUBULfmxhe2NjfvU8b/7X67DdxSTMOc6nX+0wymafpNri96HZRFkq7hFhsS6UP6FJkRsqq+KFwO1U3Awfv1M1otbZI0z+3m+l/WZ7pR/LspF2zXuOZ9lqeGfoBRa/mGReLb9Sy5cX3kvDsHw==
\ No newline at end of file
diff --git a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/assets/simple_classifier_with_generator.xml b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/assets/simple_classifier_with_generator.xml
new file mode 100644
index 0000000..db543b0
--- /dev/null
+++ b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/assets/simple_classifier_with_generator.xml
@@ -0,0 +1 @@
+5VhZb+IwEP41kXYfQMHOAY8cpd2tSg8qLX1apYmTWA1x1jEQ9tevnTjkpNATrdpWxfPZHtsz3xxCgeNlck6tyL8iDgoUoDqJAicKAD0NAEX8qc42QwbQyACPYkcuKoA5/oskqEp0hR0UVxYyQgKGoypokzBENqtgFqVkU13mkqB6amR5qAHMbStoor+ww/wM7QOzwC8Q9vz85J4xyGaWVr5YviT2LYdsShA8U+CYEsKy0TIZo0AYL7dLtm+6Z3Z3MYpCdsyGzmywXKu3M7iZ9i/DeJZE5LIjtaytYCUfrAAj4PpGfk9YMMBemOLGn5W46MjmpyFayHzkic9Fvu+R5lCO8CsJZVUwyoEfsXCpjygS54Vb5uPQ48NHxNKD+JwV8o8ZtklgicVj4TKgDmf8WupP4ovZe2qtOSssBU65NLVsseLadcX741Sx+GczTMLUaPYTcrrd7rOXjgos9R/b5qRgKElNxJYBB3p8GHOV4t5wohfSPYk40OF+gaONjxmaR+JicLLhscIxskbUDVJO+NhxELf0iJJV6CDhMrHLxUEwJgGh6bnQ1cWvOIBR8oRKM0b6I3aQkJXw7Gf3hjJbctcjyp9TgiR7zhFZIka3fElSDUkZyUCT8qaIC2BIzC/FRF9ilgxFb6e5YCsfSMK+gLyghbw1T6HQGYoswCWbsyfGdtVve82CnEpeaBql9Gq95dE5RlFgMbyuZpM2S8gTbgjmN9nZHJg1o5uDrl5VEpMVtZHcV47/mioIDqpiFvUQa6hKnbN7+lH+cs3r2xA9nM3Qb9Ix7i6uHubTlmQz/bb43vBZW6iUXGbFUZboXZyIQEkpP5fbe7mcFRKg7Xzc4HmL2/dS36jbroX7/Q+ifqsp34H6KMFsURo/iJzT1aU0SWQKSoWtFFpjozV+MlYeIsOpwsrQqv7U1Zqjjg0qE9aIUff4B4cU3Fu/Hbyu1zQRGBWO5EVcTHTiNGSGfAHQoqRZ4bel0phpalTMljP3X8PeVaniJKiq47HwRf3wGxLj1N/P36G1jleucLCS5y2PbHTgSCQOzBvCoZxY8lIttreW9Fr5rmemdy7nb8ppDepqIK8HpSA0W4LQ1PfH25uymtZg8/l/UiAaxfXUBULfmxhe2NjfvU8b/7X67DdxSTMOc6nX+0wymafpNri96HZRFkq7hFhsS6UP6FJkRsqq+KFwO1U3Awfv1M1otbZI0z+3m+l/WZ7pR/LspF2zXuOZ9lqeGfoBRa/mGReLb9Sy5cX3kvDsHw==
\ No newline at end of file
diff --git a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/simple_classifier_with_generator.png b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/simple_classifier_with_generator.png
new file mode 100644
index 0000000..0cd512a
Binary files /dev/null and b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/simple_classifier_with_generator.png differ
diff --git a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/yu_2019_figure_1.png b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/yu_2019_figure_1.png
new file mode 100644
index 0000000..6405ad1
Binary files /dev/null and b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/yu_2019_figure_1.png differ
diff --git a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/index.md b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/index.md
index 9565bd0..b051407 100644
--- a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/index.md
+++ b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/index.md
@@ -18,14 +18,13 @@ tags:
- python
- ai
- nlp
- - spacy
---
## Introduction
-The ability to understand and rationalise about automated decisions is becoming particularly important as more and more businesses adopt AI into their core processes. Particularly in light of legislation like GDPR requiring subjects of automated decisions to be given the right to an explanation as to why that decision was made. There have been a number of breakthroughs in explainable models in the last few years as academic teams in the machine learning space focus their attention on the why and the how.
+The ability to understand and reason about automated decisions is becoming increasingly important as more and more businesses adopt AI into their core processes. This requirement is particularly pertinent in light of legislation such as GDPR, which gives subjects of automated decisions the right to an explanation of why a decision was made. There have been a number of breakthroughs in explainable models in the last few years as academic teams in the machine learning space focus their attention on the *why* rather than the *how* of machine learning architecture.
## Recent Progress in Model Explainability
@@ -33,21 +32,55 @@ Significant breakthroughs in model explainability were seen in the likes of [LIM
[Transformer](https://ai.googleblog.com/2017/08/transformer-novel-neural-network.html)-based models like [BERT](https://github.com/google-research/bert), which use the concept of neural attention to learn contextual relationships between words, can also be interrogated by [visualising attention patterns inside the model](https://towardsdatascience.com/deconstructing-bert-part-2-visualizing-the-inner-workings-of-attention-60a16d86b5c1). However, these visualisations are still quite complex (especially for transformer-based models, which typically have multiple parallel attention mechanisms to examine) and do not provide a concise or intuitive rationalisation for model behaviour.
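The attention patterns these tools visualise are, at heart, just softmax-normalised similarity scores between token vectors. The toy sketch below (plain NumPy with made-up vectors — not code from any of the linked libraries) computes a single such attention matrix to show what each cell in those visualisations represents:

```python
import numpy as np

# Toy illustration: the matrices attention visualisers draw are softmax
# distributions over query-key similarities. Tokens and vectors are invented.
rng = np.random.default_rng(0)
tokens = ["the", "cat", "sat"]
d = 4                                    # tiny embedding size for the sketch
Q = rng.normal(size=(len(tokens), d))    # one query vector per token
K = rng.normal(size=(len(tokens), d))    # one key vector per token

scores = Q @ K.T / np.sqrt(d)            # scaled dot-product similarities
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax over each row

# Row i sums to 1 and says how strongly token i attends to every token.
for tok, row in zip(tokens, weights):
    print(tok, np.round(row, 2))
```

A real BERT model has many such matrices (one per head, per layer), which is exactly why inspecting them all quickly stops being an intuitive explanation.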
-## Rationalization of Neural Predictions
+So how can we generate a human-readable, intuitive justification or rationalization of our model's behaviour, and how can we do it without further data labelling? In the remainder of this post we will explore a couple of solutions.
-In 2016, [Lei, Barzilay and Jaakola](https://people.csail.mit.edu/taolei/papers/emnlp16_rationale.pdf) wrote about a new architecture for rationale extraction from NLP models. The aim was to generate a new model that could extract a "short and coherent" justification for why the model made a particular prediction.
+## Solution 1: Rationalization of Neural Predictions (Lei, Barzilay and Jaakkola, 2016)
-{{