---
date: '2022-11-23T19:50:16'
hypothesis-meta:
  created: '2022-11-23T19:50:16.484020+00:00'
  document:
    title:
    - 2210.07188.pdf
  flagged: false
  group: __world__
  hidden: false
  id: DXdcFmtoEe2_uNemAZII7w
  links:
    html: https://hypothes.is/a/DXdcFmtoEe2_uNemAZII7w
    incontext: https://hyp.is/DXdcFmtoEe2_uNemAZII7w/arxiv.org/pdf/2210.07188.pdf
    json: https://hypothes.is/api/annotations/DXdcFmtoEe2_uNemAZII7w
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - coreference
  - NLProc
  - data-annotation
  target:
  - selector:
    - end: 3539
      start: 3191
      type: TextPositionSelector
    - exact: owever, these datasets vary widelyin their definitions of coreference
        (expressed viaannotation guidelines), resulting in inconsistent an-notations
        both within and across domains and lan-guages. For instance, as shown in
        Figure 1, whileARRAU (Uryupina et al., 2019) treats generic pro-nouns as
        non-referring, OntoNotes chooses not tomark them at all
      prefix: "larly for \u201Cwe\u201D.et al., 2016a). H"
      suffix: .It is thus unclear which guidel
      type: TextQuoteSelector
    source: https://arxiv.org/pdf/2210.07188.pdf
  text: One of the big issues is that different co-reference datasets have significant
    differences in annotation guidelines even within the coreference family of
    tasks - I found this quite shocking as one might expect coreference to be
    fairly well defined as a task.
  updated: '2022-11-23T19:54:31.023210+00:00'
  uri: https://arxiv.org/pdf/2210.07188.pdf
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- coreference
- NLProc
- data-annotation
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669233016
---
<blockquote>However, these datasets vary widely in their definitions of coreference (expressed via annotation guidelines), resulting in inconsistent annotations both within and across domains and languages. For instance, as shown in Figure 1, while ARRAU (Uryupina et al., 2019) treats generic pronouns as non-referring, OntoNotes chooses not to mark them at all.</blockquote>One of the big issues is that different coreference datasets have significant differences in annotation guidelines, even within the coreference family of tasks. I found this quite shocking, as one might expect coreference to be a fairly well-defined task.
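
To make the kind of disagreement the authors describe concrete, here is a minimal Python sketch of how the same generic pronoun could end up annotated under the two guideline sets. The sentence, character offsets, and dictionary layout are invented for illustration; only the contrasting behaviours (ARRAU marks generic pronouns as non-referring, OntoNotes leaves them unmarked) come from the quoted passage.

```python
# Hypothetical sketch: the same sentence annotated under two different
# coreference guideline sets. Sentence and offsets are invented; the
# guideline behaviours follow the passage quoted above.

sentence = "You never know what you might find."

# ARRAU-style: generic pronouns are annotated, but flagged as non-referring.
arrau_mentions = [
    {"span": (0, 3), "text": "You", "referring": False},   # generic "you"
    {"span": (20, 23), "text": "you", "referring": False},
]

# OntoNotes-style: generic pronouns are simply not marked, so the
# sentence carries no mention annotations at all.
ontonotes_mentions = []

for name, mentions in [("ARRAU", arrau_mentions), ("OntoNotes", ontonotes_mentions)]:
    print(f"{name}: {len(mentions)} mention(s) -> {[m['text'] for m in mentions]}")
```

A model trained on one scheme and evaluated on the other gets penalised for every such divergence, even when its output is defensible under either set of guidelines, which is presumably why the paper argues the guidelines themselves need scrutiny.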