Limit deployments to main

James Ravenscroft 2021-01-02 17:27:29 +00:00
parent e6aae355fc
commit 99b8dddf77
3 changed files with 33 additions and 0 deletions


@@ -4,6 +4,9 @@ name: update_website
 steps:
 - name: hugo_build
   image: alombarte/hugo
+  when:
+    branch:
+    - main
   commands:
   - git submodule init
   - git submodule update
@@ -11,6 +14,9 @@ steps:
   - hugo
 - name: hugo_publish
   image: alpine:3.12.3
+  when:
+    branch:
+    - main
   environment:
     FTP_USERNAME:
       from_secret: FTP_USERNAME

Binary image file added (157 KiB, not shown).


@@ -0,0 +1,27 @@
---
title: Explain Yourself! Self-Rationalizing NLP Models
author: James
type: post
draft: true
resources:
- name: feature
  src: images/feature.jpg
date: 2021-01-02T16:47:02+00:00
url: /2021/01/02/rationalizing-nlp-classifications/
description: We examine a recent technique for enabling existing NLP models to provide human-readable rationales of why they made a decision.
categories:
- PhD
- Academia
- Open Source
tags:
- machine-learning
- python
- ai
- nlp
- spacy
---
The ability to understand and rationalise automated decisions is becoming particularly important as more and more businesses adopt AI into their core processes, especially in light of legislation like GDPR, which gives the subjects of automated decisions the right to an explanation of why a decision was made. There have been a number of breakthroughs in explainable models over the last few years as academic teams in the machine learning space have focused their attention on the why and the how.
Significant breakthroughs in model explainability came with techniques like [LIME](https://towardsdatascience.com/understanding-model-predictions-with-lime-a582fdff3a3b) and [SHAP](https://towardsdatascience.com/explain-your-model-with-the-shap-values-bc36aac4de3d), which fit local surrogate models: simple, interpretable models that approximate the black-box model only in the neighbourhood of the small number of data samples under observation, and are then used to estimate how much each feature contributed to a particular decision.
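
To make the idea concrete, here is a minimal sketch of a local surrogate explanation using LIME's text explainer with a throwaway scikit-learn classifier. The dataset, pipeline, example sentence and category names are illustrative assumptions, not part of the post.

```python
# A minimal sketch of a LIME local surrogate explanation (illustrative only).
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Train a simple "black-box" classifier on two newsgroup categories.
categories = ["sci.med", "sci.space"]
train = fetch_20newsgroups(subset="train", categories=categories)
pipeline = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipeline.fit(train.data, train.target)

# LIME perturbs the input text, queries the classifier on the perturbed
# samples, and fits a weighted linear surrogate model that is only valid
# locally, around this single example.
explainer = LimeTextExplainer(class_names=categories)
example = "The crew reported the launch was delayed by engine trouble."
explanation = explainer.explain_instance(
    example, pipeline.predict_proba, num_features=6
)

# Each (word, weight) pair shows how strongly that token pushed the
# prediction towards or away from the predicted class.
print(explanation.as_list())
```

The weights returned here explain only this one prediction; a different input would produce a different surrogate model and a different set of feature contributions, which is exactly the "local" limitation described above.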