diff --git a/brainsteam/content/replies/2022/11/23/1669236730.md b/brainsteam/content/replies/2022/11/23/1669236730.md new file mode 100644 index 0000000..107e0f2 --- /dev/null +++ b/brainsteam/content/replies/2022/11/23/1669236730.md @@ -0,0 +1,61 @@ +--- +date: '2022-11-23T20:52:10' +hypothesis-meta: + created: '2022-11-23T20:52:10.292273+00:00' + document: + title: + - 2022.naacl-main.167.pdf + flagged: false + group: __world__ + hidden: false + id: sxEWFGtwEe2_zFc3H2nb2Q + links: + html: https://hypothes.is/a/sxEWFGtwEe2_zFc3H2nb2Q + incontext: https://hyp.is/sxEWFGtwEe2_zFc3H2nb2Q/aclanthology.org/2022.naacl-main.167.pdf + json: https://hypothes.is/api/annotations/sxEWFGtwEe2_zFc3H2nb2Q + permissions: + admin: + - acct:ravenscroftj@hypothes.is + delete: + - acct:ravenscroftj@hypothes.is + read: + - group:__world__ + update: + - acct:ravenscroftj@hypothes.is + tags: + - prompt-models + - NLProc + target: + - selector: + - end: 1663 + start: 1398 + type: TextPositionSelector + - exact: "Insum, notwithstanding prompt-based models\u2019impressive improvement,\ + \ we find evidence ofserious limitations that question the degree towhich\ + \ such improvement is derived from mod-els understanding task instructions\ + \ in waysanalogous to humans\u2019 use of task instructions." + prefix: 'ing prompts even at zero shots. ' + suffix: 1 IntroductionSuppose a human is + type: TextQuoteSelector + source: https://aclanthology.org/2022.naacl-main.167.pdf + text: although prompts seem to help NLP models improve their performance, the authors + find that this performance is still present even when prompts are deliberately + misleading which is a bit weird + updated: '2022-11-23T20:52:10.292273+00:00' + uri: https://aclanthology.org/2022.naacl-main.167.pdf + user: acct:ravenscroftj@hypothes.is + user_info: + display_name: James Ravenscroft +in-reply-to: https://aclanthology.org/2022.naacl-main.167.pdf +tags: +- prompt-models +- NLProc +- hypothesis +type: reply +url: /replies/2022/11/23/1669236730 + +--- + + + +
In sum, notwithstanding prompt-based models’ impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans’ use of task instructions.
Although prompts seem to help NLP models improve their performance, the authors find that this improvement is still present even when the prompts are deliberately misleading, which is a bit weird.