---
title: Do more than kick the tires of your NLP model
author: James
type: post
date: -001-11-30T00:00:00+00:00
draft: true
url: /?p=498
medium_post:
- 'O:11:"Medium_Post":11:{s:16:"author_image_url";N;s:10:"author_url";N;s:11:"byline_name";N;s:12:"byline_email";N;s:10:"cross_link";N;s:2:"id";N;s:21:"follower_notification";N;s:7:"license";N;s:14:"publication_id";N;s:6:"status";N;s:3:"url";N;}'
categories:
- Uncategorized
---
### _We’ve known for a while that ‘accuracy’ doesn’t tell you much about your machine learning models but now we have a better alternative!_
“So how accurate is it?” – a question that many data scientists like myself dread being asked by business stakeholders. It’s not that I fear I’ve done a bad job, but that evaluating model performance is complex and multi-faceted, and summarising it with a single number usually doesn’t do it justice. Accuracy can also be a communications hurdle – it is not an intuitive concept and it can lead to friction and misunderstanding if you’re not ‘in’ with the AI crowd. 50% accuracy from a model choosing between 1500 possible answers could be considered pretty good. 80% accuracy on a binary task where the data is split 90:10 across the two classes is meaningless – a naive baseline that always predicts the majority class scores 90% and beats the model.
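To make that last point concrete, here is a tiny illustrative sketch (the 90:10 split and the labels are made up for the example) showing that a baseline which always predicts the majority class already scores 90% accuracy:

```python
# Illustrative only: a binary task with a 90:10 class imbalance.
labels = ["negative"] * 90 + ["positive"] * 10

# A "model" that ignores its input and always predicts the majority class.
predictions = ["negative"] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
print(f"Majority-class baseline accuracy: {accuracy:.0%}")  # 90%
```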
I’ve written before about [how we can use finer-grained metrics like Recall, Precision and F1-score to evaluate machine learning models][1]. However, many of us in the AI/NLP community still feel that these metrics are too simplistic and do not adequately describe the characteristics of trained ML models. Unfortunately, we haven’t had many other options for evaluating model performance… until now, that is.
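As a quick refresher on what those finer-grained metrics look like in practice, here is a small sketch using scikit-learn’s `classification_report` (the labels and predictions below are invented purely to illustrate the call):

```python
from sklearn.metrics import classification_report

# Invented ground truth and predictions, purely to demonstrate the report.
y_true = ["positive", "negative", "negative", "positive", "negative", "negative"]
y_pred = ["positive", "negative", "positive", "negative", "negative", "negative"]

# Prints per-class precision, recall and F1-score, plus macro and weighted averages.
print(classification_report(y_true, y_pred))
```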
## CheckList – When machine learning met test automation
At the Annual Meeting of the Association for Computational Linguistics 2020 – a very popular academic conference on NLP – [Ribeiro et al. presented CheckList, a new method for evaluating NLP models][2], inspired by principles and techniques that software quality assurance (QA) specialists have been using for years.
The idea is that we should design and implement test cases for NLP models that reflect the tasks the model will be required to perform “in the wild”. Like software QA, these test cases should include tricky edge cases that may trip the model up, so that we understand its practical limitations.
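As a rough sketch of what such a behavioural test might look like in plain Python – the `predict` function below is a deliberately naive, hypothetical stand-in for a real sentiment model, not the authors’ CheckList library – consider a small battery of negation cases:

```python
# Hypothetical stand-in for your model's inference call; swap in your real model here.
# It uses a deliberately naive keyword rule so the example runs end to end.
def predict(text: str) -> str:
    return "negative" if "not" in text.lower() or "bad" in text.lower() else "positive"

# A simple behavioural test: negation cases the model really should get right.
negation_cases = [
    ("The food was not good.", "negative"),
    ("I don't think the service was bad.", "positive"),
    ("This film was not terrible at all.", "positive"),
]

def run_negation_test(cases):
    failures = []
    for text, expected in cases:
        got = predict(text)
        if got != expected:
            failures.append((text, expected, got))
    print(f"{len(failures)}/{len(cases)} negation cases failed")
    return failures

# With the naive keyword model above, two of the three cases fail.
run_negation_test(negation_cases)
```

Each failing case points at a concrete behaviour to investigate, rather than a single opaque accuracy figure.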
For example, we might train a named entity recognition model that
[1]: https://brainsteam.co.uk/2016/03/29/cognitive-quality-assurance-an-introduction/
[2]: https://www.aclweb.org/anthology/2020.acl-main.442.pdf