---
date: '2022-11-23T20:52:10'
hypothesis-meta:
  created: '2022-11-23T20:52:10.292273+00:00'
  document:
    title:
    - 2022.naacl-main.167.pdf
  flagged: false
  group: __world__
  hidden: false
  id: sxEWFGtwEe2_zFc3H2nb2Q
  links:
    html: https://hypothes.is/a/sxEWFGtwEe2_zFc3H2nb2Q
    incontext: https://hyp.is/sxEWFGtwEe2_zFc3H2nb2Q/aclanthology.org/2022.naacl-main.167.pdf
    json: https://hypothes.is/api/annotations/sxEWFGtwEe2_zFc3H2nb2Q
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - prompt-models
  - NLProc
  target:
  - selector:
    - end: 1663
      start: 1398
      type: TextPositionSelector
    - exact: "Insum, notwithstanding prompt-based models\u2019impressive improvement,\
        \ we find evidence ofserious limitations that question the degree towhich\
        \ such improvement is derived from mod-els understanding task instructions\
        \ in waysanalogous to humans\u2019 use of task instructions."
      prefix: 'ing prompts even at zero shots. '
      suffix: 1 IntroductionSuppose a human is
      type: TextQuoteSelector
    source: https://aclanthology.org/2022.naacl-main.167.pdf
  text: although prompts seem to help NLP models improve their performance, the authors
    find that this performance is still present even when prompts are deliberately
    misleading which is a bit weird
  updated: '2022-11-23T20:52:10.292273+00:00'
  uri: https://aclanthology.org/2022.naacl-main.167.pdf
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://aclanthology.org/2022.naacl-main.167.pdf
tags:
- prompt-models
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669236730
---
<blockquote>In sum, notwithstanding prompt-based models’ impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans’ use of task instructions.</blockquote>

Although prompts seem to help NLP models improve their performance, the authors find that much of this improvement persists even when the prompts are deliberately misleading, which is a bit weird.
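
To make that concrete, here is a minimal sketch of the kind of probe involved, not the authors' actual setup: the same entailment example is given to an instruction-tuned model under a sensible instruction and under a deliberately irrelevant one, and the answers are compared. The model choice (`google/flan-t5-small`) and both prompt templates are my own assumptions for illustration.

```python
# Minimal sketch (my own, not the paper's protocol): compare a model's answer
# to the same premise/hypothesis pair under an instructive prompt and a
# deliberately misleading one.
from transformers import pipeline

generate = pipeline("text2text-generation", model="google/flan-t5-small")

premise = "A man is playing a guitar on stage."
hypothesis = "A person is performing music."

prompts = {
    # Instruction that actually describes the entailment task.
    "instructive": f'Does "{premise}" imply that "{hypothesis}"? Answer yes or no.',
    # Irrelevant instruction wrapped around the same two sentences.
    "misleading": f'Is the weather nice today? "{premise}" "{hypothesis}" Answer yes or no.',
}

for name, prompt in prompts.items():
    answer = generate(prompt, max_new_tokens=5)[0]["generated_text"]
    print(f"{name:12s} -> {answer}")
```

If the paper's finding holds, the two conditions give the same answer far more often than you would expect if the model were really reading the instruction the way a person would.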