Add 'brainsteam/content/replies/2022/11/23/1669234701.md'
continuous-integration/drone/push Build is passing
This commit is contained in:
parent c33e214a05
commit bd7a180ab0
@@ -0,0 +1,62 @@
---
date: '2022-11-23T20:18:21'
hypothesis-meta:
  created: '2022-11-23T20:18:21.503899+00:00'
  document:
    title:
    - 2210.07188.pdf
  flagged: false
  group: __world__
  hidden: false
  id: -dKc5GtrEe2QDyN0zg00rw
  links:
    html: https://hypothes.is/a/-dKc5GtrEe2QDyN0zg00rw
    incontext: https://hyp.is/-dKc5GtrEe2QDyN0zg00rw/arxiv.org/pdf/2210.07188.pdf
    json: https://hypothes.is/api/annotations/-dKc5GtrEe2QDyN0zg00rw
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - data-annotation
  - coreference
  - NLProc
  target:
  - selector:
    - end: 28783
      start: 28631
      type: TextPositionSelector
    - exact: 'Our annotators achieve thehighest precision with OntoNotes, suggesting
        thatmost of the entities identified by crowdworkers arecorrect for this dataset. '
      prefix: 'ntoNotes, GUM, Lit-Bank, ARRAU: '
      suffix: In terms of F1 scores, thedatase
      type: TextQuoteSelector
    source: https://arxiv.org/pdf/2210.07188.pdf
  text: interesting that the mention detection algorithm gives poor precision on OntoNotes
    and the annotators get high precision. Does this imply that there are a lot of
    invalid mentions in this data and the guidelines for ontonotes are correct to
    ignore generic pronouns without pronominals?
  updated: '2022-11-23T20:18:21.503899+00:00'
  uri: https://arxiv.org/pdf/2210.07188.pdf
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- data-annotation
- coreference
- NLProc
- hypothesis
type: reply
url: /replies/2022/11/23/1669234701
---

<blockquote>Our annotators achieve the highest precision with OntoNotes, suggesting that most of the entities identified by crowdworkers are correct for this dataset.</blockquote>

Interesting that the mention detection algorithm gives poor precision on OntoNotes while the annotators get high precision. Does this imply that there are a lot of invalid mentions in this data, and that the OntoNotes guidelines are correct to ignore generic pronouns without pronominals?