---
title: "Do more than ‘kick the tires’ of your NLP model"
author: James
type: post
date: -001-11-30T00:00:00+00:00
draft: true
url: /?p=498
medium_post:
categories:
---
We’ve known for a while that ‘accuracy’ doesn’t tell you much about your machine learning models, but now we have a better alternative!
“So how accurate is it?” – a question that many data scientists like myself dread being asked by business stakeholders. It’s not that I fear I’ve done a bad job; it’s that evaluating model performance is complex and multi-faceted, and summarising it with a single number usually doesn’t do it justice. Accuracy can also be a communication hurdle – it is not an intuitive concept, and it can lead to friction and misunderstanding if you’re not ‘in’ with the AI crowd. 50% accuracy from a model choosing between 1,500 possible answers could be considered pretty good. 80% accuracy on a binary task where the data is split 90:10 across the two classes is meaningless – a naive ‘model’ that always predicts the majority class would score 90%, beating the trained one.
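To make that concrete, here is a minimal sketch using scikit-learn’s `DummyClassifier` as the naive baseline – the 90:10 toy labels are illustrative, not taken from any real project:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score

# Toy labels with a 90:10 class split (900 negatives, 100 positives).
y = np.array([0] * 900 + [1] * 100)
X = np.zeros((len(y), 1))  # features don't matter to the dummy baseline

# A "model" that always predicts the majority class...
baseline = DummyClassifier(strategy="most_frequent").fit(X, y)

# ...already scores 90% accuracy, so a trained model at 80% is doing
# worse than this naive baseline.
print(f"Majority-class baseline accuracy: {accuracy_score(y, baseline.predict(X)):.2f}")
```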
I’ve written before about how we can use finer-grained metrics like Recall, Precision and F1-score to evaluate machine learning models. However, many of us in the AI/NLP community still feel that these metrics are too simplistic and do not adequately describe the characteristics of trained ML models. Unfortunately, we haven’t had many other options for evaluating model performance… until now, that is.
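As a quick illustration of those per-class metrics (the toy labels below are my own, not from the post), scikit-learn’s `classification_report` lays out precision, recall and F1 for each class:

```python
from sklearn.metrics import classification_report

# A classifier that labels almost everything as class 0.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]

# Overall accuracy is 90%, but the per-class breakdown shows recall for
# class 1 is only 50% – a weakness the headline number hides.
print(classification_report(y_true, y_pred, digits=2))
```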
CheckList – When machine learning met test automation
At the 2020 Annual Meeting of the Association for Computational Linguistics – one of the leading academic conferences on NLP – Ribeiro et al. presented CheckList, a new method for evaluating NLP models inspired by principles and techniques that software quality assurance (QA) specialists have been using for years.
The idea is that we should design and implement test cases for NLP models that reflect the tasks the model will be required to perform “in the wild”. Like software QA, these test cases should include tricky edge cases that may trip the model up, so that we understand its practical limitations.
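As a rough sketch of what that looks like in practice (my own toy example, not from the paper – `predict_sentiment` is a hypothetical stand-in for a real model’s inference call, and the CheckList library itself provides templating and perturbation tooling for generating such cases at scale):

```python
import pytest

def predict_sentiment(text: str) -> str:
    """Toy stand-in for a trained model: naive keyword matching,
    which the edge cases below are designed to expose."""
    negative_cues = ("not", "n't", "terrible")
    return "negative" if any(cue in text.lower() for cue in negative_cues) else "positive"

@pytest.mark.parametrize("text,expected", [
    ("The food was great.", "positive"),        # straightforward case
    ("The food was not great.", "negative"),    # negation
    ("The food wasn't terrible.", "positive"),  # double negation – this case fails and exposes a weakness
    ("Th food was graet.", "positive"),         # typos shouldn't flip the label
])
def test_model_handles_tricky_inputs(text, expected):
    assert predict_sentiment(text) == expected
```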
For example, we might train a named entity recognition model that