Add 'brainsteam/content/annotations/2022/12/19/1671458692.md'
continuous-integration/drone/push Build is passing
This commit is contained in:
parent 2acebe0350
commit ae767ae103
@@ -0,0 +1,78 @@
---
date: '2022-12-19T14:04:52'
hypothesis-meta:
  created: '2022-12-19T14:04:52.852856+00:00'
  document:
    title:
    - My AI Safety Lecture for UT Effective Altruism
  flagged: false
  group: __world__
  hidden: false
  id: G_zRJH-mEe2Hz98VxKK5Gw
  links:
    html: https://hypothes.is/a/G_zRJH-mEe2Hz98VxKK5Gw
    incontext: https://hyp.is/G_zRJH-mEe2Hz98VxKK5Gw/scottaaronson.blog/?p=6823
    json: https://hypothes.is/api/annotations/G_zRJH-mEe2Hz98VxKK5Gw
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - nlproc
  target:
  - selector:
    - endContainer: /div[2]/div[2]/div[2]/div[1]/p[36]
      endOffset: 642
      startContainer: /div[2]/div[2]/div[2]/div[1]/p[36]
      startOffset: 0
      type: RangeSelector
    - end: 13632
      start: 12990
      type: TextPositionSelector
    - exact: "Okay, but one thing that\u2019s been found empirically is that you take\
        \ commonsense questions that are flubbed by GPT-2, let\u2019s say, and you\
        \ try them on GPT-3, and very often now it gets them right. You take the\
        \ things that the original GPT-3 flubbed, and you try them on the latest public\
        \ model, which is sometimes called GPT-3.5 (incorporating an advance called\
        \ InstructGPT), and again it often gets them right. So it\u2019s extremely\
        \ risky right now to pin your case against AI on these sorts of examples!\
        \ Very plausibly, just one more order of magnitude of scale is all it\u2019\
        ll take to kick the ball in, and then you\u2019ll have to move the goal again."
      prefix: ' Cheetahs are faster, right?




        '
      suffix: '




        A deeper objection is that t'
      type: TextQuoteSelector
    source: https://scottaaronson.blog/?p=6823
  text: the stochastic parrots argument could be defeated as models get bigger and
    more complex
  updated: '2022-12-19T14:04:52.852856+00:00'
  uri: https://scottaaronson.blog/?p=6823
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671458692

---

<blockquote>Okay, but one thing that’s been found empirically is that you take commonsense questions that are flubbed by GPT-2, let’s say, and you try them on GPT-3, and very often now it gets them right. You take the things that the original GPT-3 flubbed, and you try them on the latest public model, which is sometimes called GPT-3.5 (incorporating an advance called InstructGPT), and again it often gets them right. So it’s extremely risky right now to pin your case against AI on these sorts of examples! Very plausibly, just one more order of magnitude of scale is all it’ll take to kick the ball in, and then you’ll have to move the goal again.</blockquote>the stochastic parrots argument could be defeated as models get bigger and more complex