---
date: '2022-11-23T19:54:24'
hypothesis-meta:
  created: '2022-11-23T19:54:24.332809+00:00'
  document:
    title:
      - 2210.07188.pdf
  flagged: false
  group: __world__
  hidden: false
  id: oTGKsmtoEe2RF0-NK45jew
  links:
    html: https://hypothes.is/a/oTGKsmtoEe2RF0-NK45jew
    incontext: https://hyp.is/oTGKsmtoEe2RF0-NK45jew/arxiv.org/pdf/2210.07188.pdf
    json: https://hypothes.is/api/annotations/oTGKsmtoEe2RF0-NK45jew
  permissions:
    admin:
      - acct:ravenscroftj@hypothes.is
    delete:
      - acct:ravenscroftj@hypothes.is
    read:
      - group:__world__
    update:
      - acct:ravenscroftj@hypothes.is
  tags:
    - coreference
    - NLProc
    - data-annotation
  target:
    - selector:
        - end: 5934
          start: 5221
          type: TextPositionSelector
        - exact: 'Specifically, our work investigates the quality ofcrowdsourced coreference annotations when anno-tators are taught only simple coreference cases thatare treated uniformly across existing datasets (e.g.,pronouns). By providing only these simple cases,we are able to teach the annotators the concept ofcoreference, while allowing them to freely interpretcases treated differently across the existing datasets.This setup allows us to identify cases where ourannotators disagree among each other, but moreimportantly cases where they unanimously agreewith each other but disagree with the expert, thussuggesting cases that should be revisited by theresearch community when curating future unifiedannotation guidelines'
          prefix: 'ficient payment-based platforms. '
          suffix: '.Our main contributions are:1. W'
          type: TextQuoteSelector
      source: https://arxiv.org/pdf/2210.07188.pdf
  text: 'The aim of the work is to examine a simplified subset of co-reference phenomena which are generally treated the same across different existing datasets. This makes spotting inter-annotator disagreement easier - presumably because for simpler cases there are fewer modes of failure?'
  updated: '2022-11-23T19:54:24.332809+00:00'
  uri: https://arxiv.org/pdf/2210.07188.pdf
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
  - coreference
  - NLProc
  - data-annotation
  - hypothesis
type: annotation
url: /annotation/2022/11/23/1669233264
---
> Specifically, our work investigates the quality of crowdsourced coreference annotations when annotators are taught only simple coreference cases that are treated uniformly across existing datasets (e.g., pronouns). By providing only these simple cases, we are able to teach the annotators the concept of coreference, while allowing them to freely interpret cases treated differently across the existing datasets. This setup allows us to identify cases where our annotators disagree among each other, but more importantly cases where they unanimously agree with each other but disagree with the expert, thus suggesting cases that should be revisited by the research community when curating future unified annotation guidelines
The aim of the work is to examine a simplified subset of co-reference phenomena which are generally treated the same across different existing datasets.

This makes spotting inter-annotator disagreement easier - presumably because for simpler cases there are fewer modes of failure?
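
To make that setup concrete, here is a minimal sketch (my own illustration, not code from the paper) of how one might surface the cases the authors highlight: items where the crowd annotators all give the same label but that label differs from the expert's. The data layout and label names are assumptions.

```python
from collections import Counter

def find_disputed_cases(crowd_labels, expert_labels):
    """Flag items where the crowd is unanimous but disagrees with the expert.

    crowd_labels: dict mapping item id -> list of labels from crowd annotators
    expert_labels: dict mapping item id -> the expert's label
    (hypothetical structures for illustration only)
    """
    disputed = []
    for item_id, labels in crowd_labels.items():
        counts = Counter(labels)
        # unanimous agreement among the crowd annotators...
        if len(counts) == 1:
            (crowd_label,) = counts
            # ...but at odds with the expert annotation
            if crowd_label != expert_labels.get(item_id):
                disputed.append((item_id, crowd_label, expert_labels.get(item_id)))
    return disputed


# Toy example: three crowd workers judge whether a mention pair corefers.
crowd = {
    "pair-1": ["coref", "coref", "coref"],
    "pair-2": ["coref", "not-coref", "coref"],
}
expert = {"pair-1": "not-coref", "pair-2": "coref"}

print(find_disputed_cases(crowd, expert))
# -> [('pair-1', 'coref', 'not-coref')]
```

Only "pair-1" is returned: the crowd agrees unanimously yet contradicts the expert, which is exactly the kind of case the paper suggests revisiting when drafting future unified annotation guidelines.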