diff --git a/.drone.yml b/.drone.yml
index 81bb142..1768a69 100644
--- a/.drone.yml
+++ b/.drone.yml
@@ -4,6 +4,9 @@ name: update_website
 steps:
   - name: hugo_build
     image: alombarte/hugo
+    when:
+      branch:
+        - main
     commands:
       - git submodule init
       - git submodule update
@@ -11,6 +14,9 @@ steps:
       - hugo
   - name: hugo_publish
     image: alpine:3.12.3
+    when:
+      branch:
+        - main
     environment:
       FTP_USERNAME:
         from_secret: FTP_USERNAME
diff --git a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/feature.jpg b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/feature.jpg
new file mode 100644
index 0000000..e76732d
Binary files /dev/null and b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/images/feature.jpg differ
diff --git a/brainsteam/content/posts/2021-01-02-nlp-model-rationale/index.md b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/index.md
new file mode 100644
index 0000000..17730ab
--- /dev/null
+++ b/brainsteam/content/posts/2021-01-02-nlp-model-rationale/index.md
@@ -0,0 +1,54 @@
+---
+title: Explain Yourself! Self-Rationalizing NLP Models
+author: James
+type: post
+draft: true
+resources:
+  - name: feature
+    src: images/feature.jpg
+date: 2021-01-02T16:47:02+00:00
+url: /2021/01/02/rationalizing-nlp-classifications/
+description: We examine a recent technique for enabling existing NLP models to provide human-readable rationales for why they made a decision.
+categories:
+  - PhD
+  - Academia
+  - Open Source
+tags:
+  - machine-learning
+  - python
+  - ai
+  - nlp
+  - spacy
+
+---
+
+The ability to understand and rationalise automated decisions is becoming increasingly important as more and more businesses adopt AI into their core processes, particularly in light of legislation like GDPR, which requires that the subjects of automated decisions be given the right to an explanation of why those decisions were made. There have been a number of breakthroughs in explainable models in the last few years as academic teams in the machine learning space focus their attention on the why and the how.
+
+Significant breakthroughs in model explainability were seen in the likes of [LIME](https://towardsdatascience.com/understanding-model-predictions-with-lime-a582fdff3a3b) and [SHAP](https://towardsdatascience.com/explain-your-model-with-the-shap-values-bc36aac4de3d), where local surrogate models, which are explainable but only for the small number of data samples under observation, are used to approximate the importance or contribution of each feature to a particular decision.
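+
+As a rough illustration of the local surrogate idea described above (and not of the self-rationalising technique this post is about), here is a minimal sketch using the `lime` package to explain a toy scikit-learn text classifier; the training sentences, class names and printed weights are invented purely for demonstration.
+
+```python
+from lime.lime_text import LimeTextExplainer
+from sklearn.feature_extraction.text import TfidfVectorizer
+from sklearn.linear_model import LogisticRegression
+from sklearn.pipeline import make_pipeline
+
+# Tiny, made-up training set purely for illustration
+texts = ["great film, loved it", "wonderful acting", "terrible plot", "boring and slow"]
+labels = [1, 1, 0, 0]  # 1 = positive, 0 = negative
+
+# The "black box" classifier we want to explain
+model = make_pipeline(TfidfVectorizer(), LogisticRegression())
+model.fit(texts, labels)
+
+# LIME fits a local surrogate model around a single example and reports
+# how much each word contributed towards the predicted class.
+explainer = LimeTextExplainer(class_names=["negative", "positive"])
+explanation = explainer.explain_instance(
+    "great acting but a boring plot",
+    model.predict_proba,  # maps a list of strings to class probabilities
+    num_features=5,
+)
+print(explanation.as_list())  # e.g. [("great", 0.11), ("boring", -0.09), ...]
+```
\ No newline at end of file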