2022-12-19T14:04:52.852856+00:00 |
title |
My AI Safety Lecture for UT Effective Altruism |
|
|
false |
__world__ |
false |
G_zRJH-mEe2Hz98VxKK5Gw |
|
admin |
delete |
read |
update |
acct:ravenscroftj@hypothes.is |
|
acct:ravenscroftj@hypothes.is |
|
|
acct:ravenscroftj@hypothes.is |
|
|
|
selector |
source |
endContainer |
endOffset |
startContainer |
startOffset |
type |
/div[2]/div[2]/div[2]/div[1]/p[36] |
642 |
/div[2]/div[2]/div[2]/div[1]/p[36] |
0 |
RangeSelector |
|
end |
start |
type |
13632 |
12990 |
TextPositionSelector |
|
exact |
prefix |
suffix |
type |
Okay, but one thing that’s been found empirically is that you take commonsense questions that are flubbed by GPT-2, let’s say, and you try them on GPT-3, and very often now it gets them right. You take the things that the original GPT-3 flubbed, and you try them on the latest public model, which is sometimes called GPT-3.5 (incorporating an advance called InstructGPT), and again it often gets them right. So it’s extremely risky right now to pin your case against AI on these sorts of examples! Very plausibly, just one more order of magnitude of scale is all it’ll take to kick the ball in, and then you’ll have to move the goal again. |
Cheetahs are faster, right?
|
A deeper objection is that t |
TextQuoteSelector |
|
|
https://scottaaronson.blog/?p=6823 |
|
|
The stochastic parrots argument could be defeated as models get bigger and more complex.
2022-12-19T14:04:52.852856+00:00 |
https://scottaaronson.blog/?p=6823 |
acct:ravenscroftj@hypothes.is |
display_name |
James Ravenscroft |
|