<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<meta http-equiv="X-UA-Compatible" content="IE=edge"><title>Cognitive Quality Assurance Pt 2: Performance Metrics - Brainsteam</title><meta name="viewport" content="width=device-width, initial-scale=1">
<meta itemprop="name" content="Cognitive Quality Assurance Pt 2: Performance Metrics">
<meta itemprop="description" content="EDIT: Hello readers, these articles are now 4 years old and many of the Watson services and APIs have moved or been changed. The concepts discussed in these articles are still relevant but I am working on 2nd editions of them.
Last time we discussed some good practices for collecting data and then splitting it into test and train in order to create a ground truth for your machine learning system."><meta itemprop="datePublished" content="2016-05-29T09:41:26&#43;00:00" />
<meta itemprop="dateModified" content="2016-05-29T09:41:26&#43;00:00" />
<meta itemprop="wordCount" content="2402">
<meta itemprop="keywords" content="cognitive,cqa,evaluation,learning,machine,rank,retrieval,retrieve,supervised,watson," /><meta property="og:title" content="Cognitive Quality Assurance Pt 2: Performance Metrics" />
<meta property="og:description" content="EDIT: Hello readers, these articles are now 4 years old and many of the Watson services and APIs have moved or been changed. The concepts discussed in these articles are still relevant but I am working on 2nd editions of them.
Last time we discussed some good practices for collecting data and then splitting it into test and train in order to create a ground truth for your machine learning system." />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://brainsteam.co.uk/2016/05/29/cognitive-quality-assurance-pt-2-performance-metrics/" /><meta property="article:section" content="posts" />
<meta property="article:published_time" content="2016-05-29T09:41:26&#43;00:00" />
<meta property="article:modified_time" content="2016-05-29T09:41:26&#43;00:00" />
<meta name="twitter:card" content="summary"/>
<meta name="twitter:title" content="Cognitive Quality Assurance Pt 2: Performance Metrics"/>
<meta name="twitter:description" content="EDIT: Hello readers, these articles are now 4 years old and many of the Watson services and APIs have moved or been changed. The concepts discussed in these articles are still relevant but I am working on 2nd editions of them.
Last time we discussed some good practices for collecting data and then splitting it into test and train in order to create a ground truth for your machine learning system."/>
<link href='https://fonts.googleapis.com/css?family=Playfair+Display:700' rel='stylesheet' type='text/css'>
<link rel="stylesheet" type="text/css" media="screen" href="https://brainsteam.co.uk/css/normalize.css" />
<link rel="stylesheet" type="text/css" media="screen" href="https://brainsteam.co.uk/css/main.css" />
<link id="dark-scheme" rel="stylesheet" type="text/css" href="https://brainsteam.co.uk/css/dark.css" />
<script src="https://brainsteam.co.uk/js/feather.min.js"></script>
<script src="https://brainsteam.co.uk/js/main.js"></script>
</head>
<body>
<div class="container wrapper">
<div class="header">
<div class="avatar">
<a href="https://brainsteam.co.uk/">
<img src="/images/avatar.png" alt="Brainsteam" />
</a>
</div>
<h1 class="site-title"><a href="https://brainsteam.co.uk/">Brainsteam</a></h1>
<div class="site-description"><p>The irregular mental expulsions of a PhD student and CTO of Filament, my views are my own and do not represent my employers in any way.</p><nav class="nav social">
<ul class="flat"><li><a href="https://twitter.com/jamesravey/" title="Twitter" rel="me"><i data-feather="twitter"></i></a></li><li><a href="https://github.com/ravenscroftj" title="Github" rel="me"><i data-feather="github"></i></a></li><li><a href="/index.xml" title="RSS" rel="me"><i data-feather="rss"></i></a></li></ul>
</nav></div>
<nav class="nav">
<ul class="flat">
<li>
<a href="/">Home</a>
</li>
<li>
<a href="/tags">Tags</a>
</li>
<li>
<a href="https://jamesravey.me">About Me</a>
</li>
</ul>
</nav>
</div>
<div class="post">
<div class="post-header">
<div class="meta">
<div class="date">
<span class="day">29</span>
<span class="rest">May 2016</span>
</div>
</div>
<div class="matter">
<h1 class="title">Cognitive Quality Assurance Pt 2: Performance Metrics</h1>
</div>
</div>
<div class="markdown">
<p><em><strong>EDIT: Hello readers, these articles are now 4 years old and many of the Watson services and APIs have moved or been changed. The concepts discussed in these articles are still relevant but I am working on 2nd editions of them.</strong></em></p>
<p><a href="https://brainsteam.co.uk/2016/03/29/cognitive-quality-assurance-an-introduction/">Last time</a> we discussed some good practices for collecting data and then splitting it into test and train in order to create a ground truth for your machine learning system. We then talked about calculating accuracy using test and blind data sets.</p>
<p>In this post we will talk about some more metrics you can do on your machine learning system including <strong>Precision</strong>, <strong>Recall</strong>, <strong>F-measure</strong> and <strong>confusion matrices.</strong> These metrics give you a much deeper level of insight into how your system is performing and provide hints at how you could improve performance too!</p>
<h2 id="a-recap-8211-accuracy-calculation">A recap Accuracy calculation</h2>
<p>This is the simplest calculation but perhaps the least interesting. We are just looking at the percentage of times the classifier got it right versus the percentage of times it failed. Simply:</p>
<ol>
<li>Sum up the number of results (count the rows).</li>
<li>Sum up the number of rows where the predicted label and the actual label match.</li>
<li>Calculate percentage accuracy: correct / total * 100.</li>
</ol>
<p>This tells you how good the classifier is in general, across all classes. It does not help you understand how that result is made up.</p>
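<p>To make the recipe concrete, here is a minimal sketch in Python (the data and variable names are invented for illustration and are not tied to any particular Watson API):</p>
<pre><code class="language-python"># Each result is an (actual_label, predicted_label) pair from your test set.
results = [
    ("Brian", "Brian"),
    ("Brian", "Steve"),
    ("Steve", "Steve"),
    ("Eliza", "Brian"),
]

total = len(results)                                  # 1. count the rows
correct = sum(1 for actual, predicted in results
              if actual == predicted)                 # 2. rows where the labels match
accuracy = correct / total * 100                      # 3. percentage accuracy

print(f"Accuracy: {accuracy:.1f}%")                   # 50.0% for this toy data
</code></pre>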
<h2 id="going-above-and-beyond-accuracy-why-is-it-important">Going above and beyond accuracy: why is it important?</h2>
<p><img loading="lazy" class="alignleft" src="https://i1.wp.com/openclipart.org/image/2400px/svg_to_png/13234/Anonymous-target-with-arrow.png?resize=268%2C250&#038;ssl=1" alt="target with arrow by Anonymous" width="268" height="250" data-recalc-dims="1" />Imagine that you are a hospital and it is critically important to be able to predict different types of cancer and how urgently they should be treated. Your classifier is 73% accurate overall but that does not tell you anything about its ability to predict any one type of cancer. What if the 27% of the answers it got wrong were the cancers that need urgent treatment? We wouldnt know!</p>
<p>This is exactly why we need to use measurements like precision, recall and f-measure as well as confusion matrices in order to understand what is really going on inside the classifier and which particular classes (if any) it is really struggling with.</p>
<h2 id="precision-recall-and-f-measure-and-confusion-matrices-grandma8217s-memory-game">Precision, Recall and F-measure and confusion matrices (Grandmas Memory Game)</h2>
<p><img loading="lazy" class="alignright" src="https://i2.wp.com/openclipart.org/image/2400px/svg_to_png/213139/Oma-.png?resize=264%2C391&#038;ssl=1" alt="Grandma's face by frankes" width="264" height="391" data-recalc-dims="1" />Precision, Recall and F-measure are incredibly useful for getting a deeper understanding of which classes the classifier is struggling with. They can be a little bit tricky to get your head around so lets use a metaphor about Grandmas memory.</p>
<p>Imagine Grandma has 24 grandchildren. As you can imagine, it is particularly difficult to remember all of their names. Thankfully, her 6 children &#8211; the grandchildren&#8217;s parents &#8211; each had 4 kids and named them after themselves. Her son Steve has 4 sons: Steve I, Steve II, Steve III and so on.</p>
<p>This makes things much easier for Grandma &#8211; she now only has to remember 6 names: Brian, Steve, Eliza, Diana, Nick and Reggie. The children do not like being called the wrong name so it is vitally important that she correctly classifies each child into the right name group when she sees them at the family reunion every Christmas.</p>
<p>I will now describe Precision, Recall, F-Measure and confusion matrices in terms of Grandma&#8217;s predicament.</p>
<h3 id="some-terminology">Some Terminology</h3>
<p>Before we get on to precision and recall, I need to introduce the concepts of true positive, false positive, true negative and false negative. Every time Grandma gets an answer wrong or right, we can talk about it in terms of these labels and this will also help us get to grips with precision and recall later.</p>
<p>These phrases are expressed in terms of each class: you have TP, FP, FN and TN for each class. In this case we can have TP, FP, FN and TN with respect to Brian, with respect to Steve, with respect to Eliza and so on.</p>
<p>This table shows how these four labels apply to the class &#8220;Brian&#8221;; you can create a similar table for each of the other classes.</p>
<table border="0" cellspacing="0">
<colgroup width="197"></colgroup> <colgroup span="2" width="85"></colgroup> <tr>
<td align="left" height="17">
</td>
<td align="left">
Brian
</td>
<td align="left">
Not Brian
</td>
</tr>
<tr>
<td align="left" height="17">
Grandma says “Brian”
</td>
<td align="left">
True Positive
</td>
<td align="left">
False Positive
</td>
</tr>
<tr>
<td align="left" height="17">
Grandma says &lt;not Brian&gt;
</td>
<td align="left">
False Negative
</td>
<td align="left">
True Negative
</td>
</tr>
</table>
<ul>
<li>If Grandma calls a Brian, Brian then we have a true positive (with respect to the Brian class) &#8211; the answer is true in both senses: Brian&#8217;s name is indeed Brian AND Grandma said Brian. Go Grandma!</li>
<li>If Grandma calls a Brian, Steve then we have a false negative (with respect to the Brian class). Brian&#8217;s name is Brian and Grandma said Steve. This is also a false positive with respect to the Steve class.</li>
<li>If Grandma calls a Steve, Brian then we have a false positive (with respect to the Brian class). Steve&#8217;s name is Steve but Grandma wrongly said Brian (i.e. positively identified him as a Brian).</li>
<li>If Grandma calls an Eliza, Eliza, or Steve, or Diana, or Nick, the result is the same: we have a true negative (with respect to the Brian class). Eliza,Eliza would obviously be a true positive with respect to the Eliza class but, because we are only interested in Brian and what is or isn&#8217;t Brian at this point, we are not measuring this.</li>
</ul>
<p>When you are recording results, it is helpful to store them in terms of each of these labels where applicable. For example:</p>
<p>Steve,Steve (TP Steve, TN everything else)</p>
<p>Brian,Steve (FN Brian, FP Steve)</p>
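<p>If you want to automate that bookkeeping, a rough sketch of the tallying logic (plain Python, with the true-negative counts left implicit as in the examples above) might look like this:</p>
<pre><code class="language-python">from collections import Counter

# (actual_name, grandma_guess) pairs, matching the examples above
results = [("Steve", "Steve"), ("Brian", "Steve")]

tp, fp, fn = Counter(), Counter(), Counter()

for actual, predicted in results:
    if actual == predicted:
        tp[actual] += 1        # Steve,Steve is a TP for Steve
    else:
        fn[actual] += 1        # Brian,Steve is an FN for Brian...
        fp[predicted] += 1     # ...and an FP for Steve

print(dict(tp), dict(fp), dict(fn))
# {'Steve': 1} {'Steve': 1} {'Brian': 1}
</code></pre>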
<h3 id="precision-and-recall">Precision and Recall</h3>
<p>Grandma is in the kitchen, pouring herself a Christmas sherry, when three Brians and two Steves come in to top up their eggnogs.</p>
<p>Grandma correctly classifies two of the Brians but slips up and calls one of them Eliza. She only gets one of the Steves right and calls the other one Brian.</p>
<p>In terms of TP,FP,TN,FN we can say the following (true negative is the least interesting for us):</p>
<table border="0" cellspacing="0">
<colgroup width="197"></colgroup> <colgroup span="3" width="85"></colgroup> <tr>
<td align="left" height="17">
</td>
<td align="left">
TP
</td>
<td align="left">
FP
</td>
<td align="left">
FN
</td>
</tr>
<tr>
<td align="left" height="17">
Brian
</td>
<td align="right">
2
</td>
<td align="right">
1
</td>
<td align="right">
1
</td>
</tr>
<tr>
<td align="left" height="17">
Eliza
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
</td>
</tr>
<tr>
<td align="left" height="17">
Steve
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
1
</td>
</tr>
</table>
<ul>
<li>She has correctly identified 2 people who are truly called Brian as Brian (2 TPs for Brian)</li>
<li>She has falsely named someone Eliza when their name is not Eliza (an FP for Eliza)</li>
<li>She has falsely given someone whose name is truly Steve another name (an FN for Steve)</li>
</ul>
<p><strong>True Positive, False Positive, True Negative and False negative are crucial to understand before you look at precision and recall so make sure you have fully understood this section before you move on.</strong></p>
<h4 id="precision">Precision</h4>
<p>Precision, like our TP/FP labels, is expressed in terms of each class or name. It is the number of true positive name guesses divided by the number of true positive plus false positive guesses.</p>
<p>Put another way, precision is how many times Grandma correctly guessed Brian versus how many times she called other people (like Steve) Brian.</p>
<p>For Grandma to be precise, she needs to be very good at correctly guessing Brians <strong>and also</strong> never call anyone else (Elizas and Steves) Brian.</p>
<p><em><strong>Important: If Grandma came to the conclusion that 70% of her grandchildren were named Brian and decided to just randomly say “Brian” most of the time, she could still achieve a high overall accuracy. However, her Precision with respect to Brian would be poor because of all the Steves and Elizas she was mis-labelling. This is why precision is important.</strong></em></p>
<table border="0" cellspacing="0">
<colgroup width="197"></colgroup> <colgroup span="4" width="85"></colgroup> <tr>
<td align="left" height="17">
</td>
<td align="left">
TP
</td>
<td align="left">
FP
</td>
<td align="left">
FN
</td>
<td align="left">
Precision
</td>
</tr>
<tr>
<td align="left" height="17">
Brian
</td>
<td align="right">
2
</td>
<td align="right">
1
</td>
<td align="right">
1
</td>
<td align="right">
66.6%
</td>
</tr>
<tr>
<td align="left" height="17">
Eliza
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
N/A
</td>
</tr>
<tr>
<td align="left" height="17">
Steve
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
100%
</td>
</tr>
</table>
<p>The results from this case are displayed above. As you can see, Grandma uses Brian to incorrectly label one of the Steves, so her precision for Brian is only 66.6%. Despite only getting one of the Steves correct, Grandma has 100% precision for Steve simply by never using that name incorrectly. We can&#8217;t calculate a meaningful value for Eliza because there were no true positive guesses for that name (0 / 1 is still zero).</p>
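<p>Continuing the hypothetical tally sketch from earlier, precision per class is just TP / (TP + FP). Note that with these numbers Eliza works out as 0 / 1 = 0%, which the tables simply record as N/A because she never earned a true positive:</p>
<pre><code class="language-python">def precision(tp, fp):
    """Precision = TP / (TP + FP); undefined if the name was never guessed at all."""
    if tp + fp == 0:
        return None
    return tp / (tp + fp)

# (TP, FP) tallies from the kitchen scene above
tallies = {"Brian": (2, 1), "Eliza": (0, 1), "Steve": (1, 0)}

for name, (tp, fp) in tallies.items():
    p = precision(tp, fp)
    print(name, "n/a" if p is None else f"{p:.1%}")
# Brian 66.7%   Eliza 0.0%   Steve 100.0%
</code></pre>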
<p>So what about false negatives? Surely it&#8217;s important to note how often Grandma calls Brians by other names? We&#8217;ll look at that now&#8230;</p>
<h4 id="recall">Recall</h4>
<p>Continuing the theme, Recall is also expressed in terms of each class. It is the number of true positive name guesses divided by the number of true positive plus false negative guesses.</p>
<p>Another way to look at it: given a population of Brians, how many does Grandma correctly identify and how many does she give another name (i.e. Eliza or Steve)?</p>
<p>This tells us how &#8220;confusing&#8221; Brian is as a class. If Recall is high then it&#8217;s likely that Brians all have a very distinctive feature that distinguishes them as Brians (maybe they all have the same nose). If Recall is low, maybe Brians are very varied in appearance and perhaps look a lot like Elizas or Steves (this presents a problem of its own &#8211; check out confusion matrices below for more on this).</p>
<table border="0" cellspacing="0">
<colgroup width="197"></colgroup> <colgroup span="4" width="85"></colgroup> <tr>
<td align="left" height="17">
</td>
<td align="left">
TP
</td>
<td align="left">
FP
</td>
<td align="left">
FN
</td>
<td align="left">
Recall
</td>
</tr>
<tr>
<td align="left" height="17">
Brian
</td>
<td align="right">
2
</td>
<td align="right">
1
</td>
<td align="right">
1
</td>
<td align="right">
66.6%
</td>
</tr>
<tr>
<td align="left" height="17">
Eliza
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
N/A
</td>
</tr>
<tr>
<td align="left" height="17">
Steve
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
50%
</td>
</tr>
</table>
<p>You can see that recall for Brian remains the same (of the 3 Brians Grandma saw, she only guessed incorrectly for one). Recall for Steve is 50% because Grandma guessed correctly for one Steve and incorrectly for the other. Again, Eliza can&#8217;t be calculated because we end up trying to divide zero by zero.</p>
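<p>The recall calculation is the mirror image of the precision sketch above &#8211; the denominator swaps the false positives for false negatives:</p>
<pre><code class="language-python">def recall(tp, fn):
    """Recall = TP / (TP + FN); undefined when nobody truly belongs to the class."""
    if tp + fn == 0:
        return None    # the Eliza case: zero divided by zero
    return tp / (tp + fn)

print(f"{recall(2, 1):.1%}")   # Brian: 66.7%
print(f"{recall(1, 1):.1%}")   # Steve: 50.0%
print(recall(0, 0))            # Eliza: None
</code></pre>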
<h4 id="f-measure">F-Measure</h4>
<p>F-measure is effectively a measurement of how accurate the classifier is per class once you factor in both precision and recall. This gives you a holistic view of your classifier&#8217;s performance on a particular class.</p>
<p>In terms of Grandma, F-measure gives us an aggregate metric of how good she is at dealing with Brians in terms of both precision AND recall.</p>
<p>It is very simple to calculate if you already have precision and recall:</p>
<p><img src="https://upload.wikimedia.org/math/9/9/1/991d55cc29b4867c88c6c22d438265f9.png" alt="F_1 = 2 \cdot \frac{\mathrm{precision} \cdot \mathrm{recall}}{\mathrm{precision} + \mathrm{recall}}"></p>
<p>Here are the F-Measure results for Brian, Steve and Eliza from above.</p>
<table border="0" cellspacing="0">
<colgroup width="197"></colgroup> <colgroup span="6" width="85"></colgroup> <tr>
<td align="left" height="17">
</td>
<td align="left">
TP
</td>
<td align="left">
FP
</td>
<td align="left">
FN
</td>
<td align="left">
Precision
</td>
<td align="left">
Recall
</td>
<td align="left">
F-measure
</td>
</tr>
<tr>
<td align="left" height="17">
Brian
</td>
<td align="right">
2
</td>
<td align="right">
1
</td>
<td align="right">
1
</td>
<td align="right">
66.6%
</td>
<td align="right">
66.6%
</td>
<td align="right">
66.6%
</td>
</tr>
<tr>
<td align="left" height="17">
Eliza
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
N/A
</td>
<td align="right">
N/A
</td>
<td align="right">
N/A
</td>
</tr>
<tr>
<td align="left" height="17">
Steve
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
1
</td>
<td align="right">
0.5
</td>
<td align="right">
0.6666666667
</td>
</tr>
</table>
<p>As you can see, the F-measure is the average (<a href="https://en.wikipedia.org/wiki/Harmonic_mean#Harmonic_mean_of_two_numbers">harmonic mean</a>) of the two values. This often gives you a good overview of both precision and recall, and it is dragged down dramatically when either of the contributing measurements is poor.</p>
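<p>In code the formula is a one-liner, assuming you already have precision and recall as fractions (these are the same hypothetical numbers as in the table above):</p>
<pre><code class="language-python">def f_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(f"{f_measure(2/3, 2/3):.1%}")   # Brian: 66.7%
print(f"{f_measure(1.0, 0.5):.1%}")   # Steve: 66.7% - dragged down by the poor recall
</code></pre>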
<h3 id="confusion-matrices">Confusion Matrices</h3>
<p>When a class has a particularly low Recall or Precision, the next question should be: why? Often you can improve a classifier&#8217;s performance by modifying the data or (if you have control of the classifier) the features you are training on.</p>
<p>For example, what if we find out that Brians look a lot like Elizas? We could add a new feature (Grandma could start using their voice pitch to determine their gender and their gender to inform her name choice) or we could update the data (maybe we could make all Brians wear a blue jumper and all Elizas wear a green jumper).</p>
<p>Before we go down that road, we need to understand where there is confusion between classes and where Grandma is doing well. This is where a confusion matrix helps.</p>
<p>A confusion matrix allows us to see which classes are being correctly predicted and which classes Grandma is struggling to predict and getting most confused about. Crucially, it also gives us insight into which classes she is confusing with each other. Here is an example of a confusion matrix for Grandma&#8217;s family.</p>
<table border="0" cellspacing="0">
<colgroup width="179"></colgroup> <colgroup span="7" width="85"></colgroup> <tr>
<td align="left" height="17">
</td>
<td align="left">
</td>
<td colspan="6" align="center" valign="middle">
<b>Predictions</b>
</td>
</tr>
<tr>
<td align="left" height="17">
</td>
<td align="left">
</td>
<td align="left">
Steve
</td>
<td align="left">
Brian
</td>
<td align="left">
Eliza
</td>
<td align="left">
Diana
</td>
<td align="left">
Nick
</td>
<td align="left">
Reggie
</td>
</tr>
<tr>
<td rowspan="6" align="center" valign="middle" height="102">
<b>Actual</b><br />
<b>Class</b></td>
<td align="left">
Steve
</td>
<td align="right">
<strong>4</strong>
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
</td></tr>
<tr>
<td align="left">
Brian
</td>
<td align="right">
1
</td>
<td align="right">
<strong>3</strong>
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
1
</td>
<td align="right">
1
</td>
</tr>
<tr>
<td align="left">
Eliza
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
<strong>5</strong>
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
</td>
</tr>
<tr>
<td align="left">
Diana
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
5
</td>
<td align="right">
<strong>1</strong>
</td>
<td align="right">
</td>
<td align="right">
</td>
</tr>
<tr>
<td align="left">
Nick
</td>
<td align="right">
1
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
<strong>5</strong>
</td>
<td align="right">
</td>
</tr>
<tr>
<td align="left">
Reggie
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
</td>
<td align="right">
<strong>6</strong>
</td>
</tr> </table>
<p>
OK, so let&#8217;s have a closer look at the above.
</p>
<p>
Reading across the rows left to right, these are the actual examples of each class &#8211; in this case there are 6 children with each name, so if you sum over a row you will find that it adds up to 6.
</p>
<p>
Reading down the columns top-to-bottom you will find the predictions &#8211; i.e. what Grandma thought each child&#8217;s name was. You will find that these columns may add up to more or less than 6 because Grandma may overfit for one particular name. In this case she seems to think that all her female grandchildren are called Eliza (she predicted 5/6 Elizas are called Eliza and 5/6 Dianas are also called Eliza).
</p>
<p>
Reading diagonally where I&#8217;ve shaded things in bold gives you the number of correctly predicted examples. In this case Reggie was 100% accurately predicted with 6/6 children called &#8220;Reggie&#8221; actually being predicted &#8220;Reggie&#8221;. Diana is the poorest performer with only 1/6 children being correctly identified. This can be explained as above with Grandma over-generalising and calling all female relatives &#8220;Eliza&#8221;.
</p>
<p>
<figure id="attachment_118" aria-describedby="caption-attachment-118" class="wp-caption alignright"><img loading="lazy" class="size-medium wp-image-118" src="https://i2.wp.com/brainsteam.co.uk/wp-content/uploads/2016/05/FEN-Ponytail-800px.png?resize=259%2C300&#038;ssl=1" alt="Steve sings for a Rush tribute band - his Geddy Lee is impeccable." width="259" height="300" srcset="https://i2.wp.com/brainsteam.co.uk/wp-content/uploads/2016/05/FEN-Ponytail-800px.png?resize=259%2C300&ssl=1 259w, https://i2.wp.com/brainsteam.co.uk/wp-content/uploads/2016/05/FEN-Ponytail-800px.png?w=690&ssl=1 690w" sizes="(max-width: 259px) 100vw, 259px" data-recalc-dims="1" /><figcaption id="caption-attachment-118" class="wp-caption-text">Steve sings for a Rush tribute band &#8211; his Geddy Lee is impeccable.</figcaption></figure>
</p>
<p>
Grandma seems to have gender nailed except in the case of one of the Steves (who in fairness does have a ponytail and can sing very high). She is best at predicting Reggies and struggles with Brians (perhaps Brians have the most diverse appearance and look a lot like their respective male cousins). She is also pretty good at Nicks and Steves.
</p>
<p>
Grandma is terrible at her female grandchildren&#8217;s names. If this were a machine learning problem, we would need to find a way to make it easier to tell Dianas and Elizas apart, through some kind of further feature extraction or weighting, or by gathering additional training data.
</p>
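<p>If you are building this yourself, a confusion matrix is just a nested tally over (actual, predicted) pairs. Here is a rough sketch with invented data; a library such as scikit-learn can also produce the matrix for you:</p>
<pre><code class="language-python">from collections import defaultdict

def confusion_matrix(results):
    """Build a {actual: {predicted: count}} tally from (actual, predicted) pairs."""
    matrix = defaultdict(lambda: defaultdict(int))
    for actual, predicted in results:
        matrix[actual][predicted] += 1
    return matrix

# Hypothetical guesses: two Dianas mistaken for Eliza, one Diana spotted correctly
guesses = [("Diana", "Eliza"), ("Diana", "Eliza"), ("Diana", "Diana"), ("Eliza", "Eliza")]
matrix = confusion_matrix(guesses)

print(matrix["Diana"]["Eliza"])   # 2 - the Diana/Eliza confusion we want to investigate
print(matrix["Diana"]["Diana"])   # 1 - the diagonal: correct predictions
</code></pre>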
<h2>
Conclusion
</h2>
<p>
Machine learning is definitely no walk in the park. There are a lot of intricacies involved in assessing the effectiveness of a classifier. Accuracy is a great start if, until now, you&#8217;ve been praying to the gods and carrying four-leaf clovers around with you to improve your cognitive system&#8217;s performance.
</p>
<p>
However, Precision, Recall, F-Measure and Confusion Matrices really give you the insight you need into which classes your system is struggling with and which classes confuse it the most.
</p>
<h4>
A Note for Document Retrieval (Watson Retrieve & Rank) Users
</h4>
<p>
This example is probably most directly relevant to those building classification systems (e.g. extracting intent from questions or detecting whether an image contains a particular company&#8217;s logo). However, all of this works for document retrieval use cases too. Consider a true positive to be when the first document returned for the query is the correct answer, and a false negative to be when the first document returned is the wrong answer.
</p>
<p>
There are also variants on this (Precision@N) that consider the top N retrieved answers. These tell you whether your system can return the correct answer somewhere in the top 1, 3, 5 or 10 answers, by simply counting a &#8220;True Positive&#8221; whenever the correct document turns up in the top N answers returned by the query.
</p>
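<p>A hypothetical sketch of the top-N idea: a query counts as a hit whenever the correct document appears anywhere in the first N results. The data structure here is made up for illustration and is not a specific Retrieve &amp; Rank response format:</p>
<pre><code class="language-python">def precision_at_n(queries, n):
    """Fraction of queries whose correct document appears in the top n results.

    `queries` is a list of (ranked_document_ids, correct_document_id) pairs.
    """
    hits = sum(1 for ranked, correct in queries if correct in ranked[:n])
    return hits / len(queries)

queries = [(["doc7", "doc2", "doc9"], "doc2"),   # correct answer ranked 2nd
           (["doc4", "doc1", "doc3"], "doc8")]   # correct answer missing entirely

print(precision_at_n(queries, 1))   # 0.0 - neither query has the right answer first
print(precision_at_n(queries, 3))   # 0.5 - one of the two queries hits within the top 3
</code></pre>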
<h3>
Finally&#8230;
</h3>
<p>
Overall I hope this tutorial has helped you to understand the ins and outs of machine learning evaluation.
</p>
<p>
Next time we will look at cross-validation techniques and how to assess small corpora where carving out a 30% chunk of the documents would seriously impact the learning. Stay tuned for more!
</p>
</div>
<div class="tags">
<ul class="flat">
<li><a href="/tags/cognitive">cognitive</a></li>
<li><a href="/tags/cqa">cqa</a></li>
<li><a href="/tags/evaluation">evaluation</a></li>
<li><a href="/tags/learning">learning</a></li>
<li><a href="/tags/machine">machine</a></li>
<li><a href="/tags/rank">rank</a></li>
<li><a href="/tags/retrieval">retrieval</a></li>
<li><a href="/tags/retrieve">retrieve</a></li>
<li><a href="/tags/supervised">supervised</a></li>
<li><a href="/tags/watson">watson</a></li>
</ul>
</div><div id="disqus_thread"></div>
<script type="text/javascript">
(function () {
if (window.location.hostname == "localhost")
return;
var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
var disqus_shortname = 'brainsteam';
dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
(document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
</script>
<noscript>Please enable JavaScript to view the comments powered by Disqus.</noscript>
<a href="http://disqus.com/" class="dsq-brlink">comments powered by <span class="logo-disqus">Disqus</span></a>
</div>
</div>
<div class="footer wrapper">
<nav class="nav">
<div>&copy; 2021 James Ravenscroft | <a href="https://github.com/knadh/hugo-ink">Ink</a> theme on <a href="https://gohugo.io">Hugo</a></div>
</nav>
</div>
<script type="application/javascript">
var doNotTrack = false;
if (!doNotTrack) {
window.ga=window.ga||function(){(ga.q=ga.q||[]).push(arguments)};ga.l=+new Date;
ga('create', 'UA-186263385-1', 'auto');
ga('send', 'pageview');
}
</script>
<script async src='https://www.google-analytics.com/analytics.js'></script>
<script>feather.replace()</script>
</body>
</html>