Hypothesis annotation
Created: 2022-12-19T14:09:11.863238+00:00
Title: My AI Safety Lecture for UT Effective Altruism
Annotation ID: tmH8RH-mEe27ArstPwKXEA
|
Quoted text:
"And famously, self-driving cars have taken a lot longer than many people expected a decade ago. This is partly because of regulatory barriers and public relations: even if a self-driving car actually crashes less than a human does, that’s still not good enough, because when it does crash the circumstances are too weird. So, the AI is actually held to a higher standard. But it’s also partly just that there was a long tail of really weird events. A deer crosses the road, or you have some crazy lighting conditions—such things are really hard to get right, and of course 99% isn’t good enough here."
|
|
Source: https://scottaaronson.blog/?p=6823
|
|
Comment: I think the emphasis is wrong here. The regulation is secondary; the long tail of weird events is the more important thing.
Annotator: James Ravenscroft (acct:ravenscroftj@hypothes.is)