---
date: 2023-01-29T10:57:24
hypothesis-meta:
  created: 2023-01-29T10:57:24.658922+00:00
  document:
    title:
    - 2301.11305.pdf
  flagged: false
  group: __world__
  hidden: false
  id: tnNraJ_DEe2YBceDAVt0Uw
  links:
    html: https://hypothes.is/a/tnNraJ_DEe2YBceDAVt0Uw
    incontext: https://hyp.is/tnNraJ_DEe2YBceDAVt0Uw/arxiv.org/pdf/2301.11305.pdf
    json: https://hypothes.is/api/annotations/tnNraJ_DEe2YBceDAVt0Uw
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - chatgpt
  - detecting gpt
  target:
  - selector:
    - end: 22349
      start: 22098
      type: TextPositionSelector
    - exact: Empirically, we find predictive entropy to be positively cor-related with passage fake-ness more often that not; there-fore, this baseline uses high average entropy in the modelspredictive distribution as a signal that a passage is machine-generated.
      prefix: "tropy) predictive distributions. "
      suffix: While our main focus is on zero
      type: TextQuoteSelector
    source: https://arxiv.org/pdf/2301.11305.pdf
  text: this makes sense and aligns with the [gltr](http://gltr.io) - humans add more entropy to sentences by making unusual choices in vocabulary that a model would not.
  updated: 2023-01-29T10:57:24.658922+00:00
  uri: https://arxiv.org/pdf/2301.11305.pdf
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2301.11305.pdf
tags:
- chatgpt
- detecting gpt
- hypothesis
type: annotation
url: /annotations/2023/01/29/1674989844
---
> Empirically, we find predictive entropy to be positively correlated with passage fake-ness more often than not; therefore, this baseline uses high average entropy in the model's predictive distribution as a signal that a passage is machine-generated.
This makes sense and aligns with [GLTR](http://gltr.io): humans add more entropy to sentences by making unusual vocabulary choices that a model would not.
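
The entropy baseline described in the quote can be sketched in a few lines: score a passage by the average Shannon entropy of the model's per-token predictive distributions, with higher scores suggesting machine-generated text. This is an illustrative toy (the function name and the example distributions are mine, not from the paper; a real detector would take the distributions from a language model's softmax outputs):

```python
import math

def avg_predictive_entropy(token_distributions):
    """Average Shannon entropy (in nats) over per-token predictive
    distributions, each given as a list of probabilities summing to 1."""
    total = 0.0
    for probs in token_distributions:
        # Shannon entropy of one distribution: -sum(p * log p)
        total += -sum(p * math.log(p) for p in probs if p > 0)
    return total / len(token_distributions)

# Toy example: a confident (peaked) distribution has low entropy,
# while a flat distribution has the maximum entropy log(4) ≈ 1.386.
peaked = [[0.97, 0.01, 0.01, 0.01]]
flat = [[0.25, 0.25, 0.25, 0.25]]
```

Under the paper's baseline, passages whose average entropy exceeds some threshold would be flagged as machine-generated; GLTR visualises essentially the same per-token signal for human inspection.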