

---
date: '2022-12-19T14:04:52'
hypothesis-meta:
  created: '2022-12-19T14:04:52.852856+00:00'
  document:
    title:
      - My AI Safety Lecture for UT Effective Altruism
  flagged: false
  group: __world__
  hidden: false
  id: G_zRJH-mEe2Hz98VxKK5Gw
  links:
    html: https://hypothes.is/a/G_zRJH-mEe2Hz98VxKK5Gw
    incontext: https://hyp.is/G_zRJH-mEe2Hz98VxKK5Gw/scottaaronson.blog/?p=6823
    json: https://hypothes.is/api/annotations/G_zRJH-mEe2Hz98VxKK5Gw
  permissions:
    admin:
      - acct:ravenscroftj@hypothes.is
    delete:
      - acct:ravenscroftj@hypothes.is
    read:
      - group:__world__
    update:
      - acct:ravenscroftj@hypothes.is
  tags:
    - nlproc
  target:
    - selector:
        - endContainer: /div[2]/div[2]/div[2]/div[1]/p[36]
          endOffset: 642
          startContainer: /div[2]/div[2]/div[2]/div[1]/p[36]
          startOffset: 0
          type: RangeSelector
        - end: 13632
          start: 12990
          type: TextPositionSelector
        - exact: "Okay, but one thing that's been found empirically is that you take commonsense questions that are flubbed by GPT-2, let's say, and you try them on GPT-3, and very often now it gets them right. You take the things that the original GPT-3 flubbed, and you try them on the latest public model, which is sometimes called GPT-3.5 (incorporating an advance called InstructGPT), and again it often gets them right. So it's extremely risky right now to pin your case against AI on these sorts of examples! Very plausibly, just one more order of magnitude of scale is all it'll take to kick the ball in, and then you'll have to move the goal again."
          prefix: "Cheetahs are faster, right?"
          suffix: "A deeper objection is that t"
          type: TextQuoteSelector
      source: https://scottaaronson.blog/?p=6823
  text: the stochastic parrots argument could be defeated as models get bigger and more complex
  updated: '2022-12-19T14:04:52.852856+00:00'
  uri: https://scottaaronson.blog/?p=6823
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
  - nlproc
  - hypothesis
type: annotation
url: /annotations/2022/12/19/1671458692
---

> Okay, but one thing that's been found empirically is that you take commonsense questions that are flubbed by GPT-2, let's say, and you try them on GPT-3, and very often now it gets them right. You take the things that the original GPT-3 flubbed, and you try them on the latest public model, which is sometimes called GPT-3.5 (incorporating an advance called InstructGPT), and again it often gets them right. So it's extremely risky right now to pin your case against AI on these sorts of examples! Very plausibly, just one more order of magnitude of scale is all it'll take to kick the ball in, and then you'll have to move the goal again.
the stochastic parrots argument could be defeated as models get bigger and more complex