Add 'brainsteam/content/replies/2022/11/23/1669233264.md'

This commit is contained in:
ravenscroftj 2022-11-23 20:00:13 +00:00
parent b0f9fd185f
commit c88ee959f2
1 changed file with 72 additions and 0 deletions


@@ -0,0 +1,72 @@
---
date: '2022-11-23T19:54:24'
hypothesis-meta:
  created: '2022-11-23T19:54:24.332809+00:00'
  document:
    title:
    - 2210.07188.pdf
  flagged: false
  group: __world__
  hidden: false
  id: oTGKsmtoEe2RF0-NK45jew
  links:
    html: https://hypothes.is/a/oTGKsmtoEe2RF0-NK45jew
    incontext: https://hyp.is/oTGKsmtoEe2RF0-NK45jew/arxiv.org/pdf/2210.07188.pdf
    json: https://hypothes.is/api/annotations/oTGKsmtoEe2RF0-NK45jew
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - coreference
  - NLProc
  - data-annotation
  target:
  - selector:
    - end: 5934
      start: 5221
      type: TextPositionSelector
    - exact: Specifically, our work investigates the quality ofcrowdsourced coreference
        annotations when anno-tators are taught only simple coreference cases thatare
        treated uniformly across existing datasets (e.g.,pronouns). By providing only
        these simple cases,we are able to teach the annotators the concept ofcoreference,
        while allowing them to freely interpretcases treated differently across the
        existing datasets.This setup allows us to identify cases where ourannotators
        disagree among each other, but moreimportantly cases where they unanimously
        agreewith each other but disagree with the expert, thussuggesting cases that
        should be revisited by theresearch community when curating future unifiedannotation
        guidelines
      prefix: ficient payment-based platforms.
      suffix: .Our main contributions are:1. W
      type: TextQuoteSelector
    source: https://arxiv.org/pdf/2210.07188.pdf
  text: "The aim of the work is to examine a simplified subset of co-reference phenomena\
    \ which are generally treated the same across different existing datasets. \n\n\
    This makes spotting inter-annotator disagreement easier - presumably because for\
    \ simpler cases there are fewer modes of failure?\n\n"
  updated: '2022-11-23T19:54:24.332809+00:00'
  uri: https://arxiv.org/pdf/2210.07188.pdf
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- coreference
- NLProc
- data-annotation
- hypothesis
type: reply
url: /replies/2022/11/23/1669233264
---
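Aside: the `hypothesis-meta` block above is the raw Hypothesis annotation. The quote below is anchored by a `TextQuoteSelector` (the `exact` text plus short `prefix`/`suffix` context) alongside a `TextPositionSelector` (character offsets 5221 to 5934). As a rough sketch of the quote-selector idea only (my own illustration, not Hypothesis's actual anchoring code, which is more robust and handles fuzzy matching):

```python
def anchor(text: str, exact: str, prefix: str = "", suffix: str = "") -> int:
    """Return the start offset of `exact` in `text`, using the surrounding
    context to disambiguate repeated occurrences; -1 if nothing matches."""
    hit = text.find(prefix + exact + suffix)
    if hit != -1:
        return hit + len(prefix)  # step over the prefix to the quote itself
    return text.find(exact)       # fall back to the bare quote

# Toy usage mirroring the fields in the frontmatter above.
doc = "...ficient payment-based platforms.Specifically, our work investigates..."
start = anchor(doc, "Specifically, our work investigates",
               prefix="ficient payment-based platforms.")
```

The position selector gives a fast path when the document text is unchanged; the quote selector is what lets the highlight survive edits elsewhere in the text.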
<blockquote>Specifically, our work investigates the quality of crowdsourced coreference annotations when annotators are taught only simple coreference cases that are treated uniformly across existing datasets (e.g., pronouns). By providing only these simple cases, we are able to teach the annotators the concept of coreference, while allowing them to freely interpret cases treated differently across the existing datasets. This setup allows us to identify cases where our annotators disagree among each other, but more importantly cases where they unanimously agree with each other but disagree with the expert, thus suggesting cases that should be revisited by the research community when curating future unified annotation guidelines</blockquote>

The aim of the work is to examine a simplified subset of co-reference phenomena which are generally treated the same across different existing datasets.
This makes spotting inter-annotator disagreement easier, presumably because for simpler cases there are fewer modes of failure?
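To make that concrete, here is a minimal sketch of the two signals the quoted passage describes: items where crowd annotators disagree among themselves, and items where they unanimously agree with each other but contradict the expert. The item names and labels are invented purely for illustration; this is not code or data from the paper.

```python
from collections import Counter

# Toy annotation table: item -> (crowd labels, expert label). Labels are
# coreference decisions for a candidate mention pair (True = coreferent).
items = {
    "pronoun-easy": ([True, True, True], True),
    "appositive-hard": ([True, True, True], False),  # unanimous crowd vs. expert
    "genitive-noisy": ([True, False, True], True),   # crowd disagrees internally
}

for name, (crowd, expert) in items.items():
    counts = Counter(crowd)
    if len(counts) > 1:
        print(f"{name}: inter-annotator disagreement {dict(counts)}")
    elif crowd[0] != expert:
        # The interesting case: a consistent crowd intuition that contradicts
        # the expert guideline, suggesting the guideline may need revisiting.
        print(f"{name}: crowd unanimously says {crowd[0]}, expert says {expert}")
```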