created: 2022-11-23T19:50:16.484020+00:00
flagged: false
group: __world__
hidden: false
id: DXdcFmtoEe2_uNemAZII7w
permissions (admin, delete, read, update): acct:ravenscroftj@hypothes.is
tags: coreference, NLProc, data-annotation
target:
  source: https://arxiv.org/pdf/2210.07188.pdf
  selector:
    - type: TextPositionSelector
      start: 3191
      end: 3539
    - type: TextQuoteSelector
      exact: "owever, these datasets vary widely in their definitions of coreference (expressed via annotation guidelines), resulting in inconsistent annotations both within and across domains and languages. For instance, as shown in Figure 1, while ARRAU (Uryupina et al., 2019) treats generic pronouns as non-referring, OntoNotes chooses not to mark them at all"
      prefix: "larly for “we”.et al., 2016a). H"
      suffix: ".It is thus unclear which guidel"
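The two selectors above follow the W3C Web Annotation model: the TextPositionSelector records character offsets into the document's extracted text, while the TextQuoteSelector stores the highlighted passage plus a little surrounding context so the highlight can be re-anchored if the extraction shifts. The sketch below shows one way a client might resolve such a pair; the resolve_selectors function, the extracted_text variable and the fallback order are illustrative assumptions, not Hypothesis's actual anchoring code.

```python
# Minimal sketch of re-anchoring an annotation from its selectors.
# `extracted_text` is assumed to be the plain text extracted from
# https://arxiv.org/pdf/2210.07188.pdf; the fallback strategy below is
# illustrative, not Hypothesis's real (fuzzier) anchoring implementation.

def resolve_selectors(extracted_text: str, start: int, end: int,
                      exact: str, prefix: str, suffix: str) -> str:
    """Return the annotated span, trying position offsets first and the
    quoted text second."""
    # 1. TextPositionSelector: plain character offsets into the text.
    candidate = extracted_text[start:end]
    if candidate == exact:
        return candidate

    # 2. TextQuoteSelector: look for the quote with its context, then
    #    for the quote alone, in case the offsets no longer line up.
    idx = extracted_text.find(prefix + exact + suffix)
    if idx != -1:
        return extracted_text[idx + len(prefix):idx + len(prefix) + len(exact)]

    idx = extracted_text.find(exact)
    if idx != -1:
        return extracted_text[idx:idx + len(exact)]

    raise ValueError("could not re-anchor the annotation in this text")
```

For the record above, this would be called with start=3191, end=3539 and the exact, prefix and suffix strings from the TextQuoteSelector; keeping both selectors makes the highlight robust when character offsets drift between extractions of the same PDF.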
|
|
One of the big issues is that coreference datasets vary significantly in their annotation guidelines, even within the same family of coreference tasks. I found this quite shocking, since one might expect coreference to be a fairly well-defined task.
updated: 2022-11-23T19:54:31.023210+00:00
uri: https://arxiv.org/pdf/2210.07188.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
  display_name: James Ravenscroft
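Since this is a standard Hypothesis annotation object, it should be retrievable from the public Hypothesis API using the id above. The sketch below assumes the documented GET /api/annotations/{id} endpoint and the requests library; the exact response shape may differ slightly from the fields shown.

```python
import requests

# Fetch the annotation above from the public Hypothesis API by its id.
# Public (__world__) annotations need no authentication token.
ANNOTATION_ID = "DXdcFmtoEe2_uNemAZII7w"
API_URL = f"https://api.hypothes.is/api/annotations/{ANNOTATION_ID}"

response = requests.get(API_URL, timeout=10)
response.raise_for_status()
annotation = response.json()

# Field names mirror the record above; exact keys may vary by API version.
print(annotation["user"])   # acct:ravenscroftj@hypothes.is
print(annotation["tags"])   # ['coreference', 'NLProc', 'data-annotation']
print(annotation["uri"])    # https://arxiv.org/pdf/2210.07188.pdf
print(annotation["text"])   # the commentary paragraph above
```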
|