---
date: '2022-11-23T19:50:16'
hypothesis-meta:
  created: '2022-11-23T19:50:16.484020+00:00'
  document:
    title:
    - 2210.07188.pdf
  flagged: false
  group: __world__
  hidden: false
  id: DXdcFmtoEe2_uNemAZII7w
  links:
    html: https://hypothes.is/a/DXdcFmtoEe2_uNemAZII7w
    incontext: https://hyp.is/DXdcFmtoEe2_uNemAZII7w/arxiv.org/pdf/2210.07188.pdf
    json: https://hypothes.is/api/annotations/DXdcFmtoEe2_uNemAZII7w
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - coreference
  - NLProc
  - data-annotation
  target:
  - selector:
    - end: 3539
      start: 3191
      type: TextPositionSelector
    - exact: 'owever, these datasets vary widelyin their definitions of coreference (expressed viaannotation guidelines), resulting in inconsistent an-notations both within and across domains and lan-guages. For instance, as shown in Figure 1, whileARRAU (Uryupina et al., 2019) treats generic pro-nouns as non-referring, OntoNotes chooses not tomark them at all'
      prefix: 'larly for “we”.et al., 2016a). H'
      suffix: '.It is thus unclear which guidel'
      type: TextQuoteSelector
    source: https://arxiv.org/pdf/2210.07188.pdf
  text: 'One of the big issues is that different co-reference datasets have significant differences in annotation guidelines even within the coreference family of tasks - I found this quite shocking as one might expect coreference to be fairly well defined as a task.'
  updated: '2022-11-23T19:54:31.023210+00:00'
  uri: https://arxiv.org/pdf/2210.07188.pdf
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- coreference
- NLProc
- data-annotation
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669233016
---
> However, these datasets vary widely in their definitions of coreference (expressed via annotation guidelines), resulting in inconsistent annotations both within and across domains and languages. For instance, as shown in Figure 1, while ARRAU (Uryupina et al., 2019) treats generic pronouns as non-referring, OntoNotes chooses not to mark them at all.
One of the big issues is that coreference datasets vary significantly in their annotation guidelines, even within the coreference family of tasks. I found this quite shocking, as one might expect coreference to be a fairly well-defined task.
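
The `hypothesis-meta` front matter above mirrors the record served by the public Hypothesis API (the `links.json` URL in the metadata). As a minimal sketch, assuming the third-party `requests` package is installed, the same record can be fetched and the fields used in this post pulled back out:

```python
# Minimal sketch: fetch the annotation record behind the hypothesis-meta
# front matter from the public Hypothesis API. Assumes `requests` is
# installed (pip install requests); public annotations need no API token.
import requests

# The `links.json` URL from the metadata above.
ANNOTATION_URL = "https://hypothes.is/api/annotations/DXdcFmtoEe2_uNemAZII7w"

resp = requests.get(ANNOTATION_URL, timeout=10)
resp.raise_for_status()
record = resp.json()

print(record["user_info"]["display_name"])  # James Ravenscroft
print(record["tags"])     # ['coreference', 'NLProc', 'data-annotation']
print(record["uri"])      # https://arxiv.org/pdf/2210.07188.pdf
print(record["text"])     # the annotation comment shown above
```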