Add 'brainsteam/content/annotations/2022/12/13/1670913121.md'
continuous-integration/drone/push: Build is passing

ravenscroftj 2022-12-13 06:45:05 +00:00
parent 44a86c9a53
commit 5a1986fadb
1 changed file with 67 additions and 0 deletions


@@ -0,0 +1,67 @@
---
date: '2022-12-13T06:32:01'
hypothesis-meta:
created: '2022-12-13T06:32:01.500506+00:00'
document:
title:
- "The viral AI avatar app Lensa undressed me\u2014without my consent"
flagged: false
group: __world__
hidden: false
id: 2iVhJnqvEe2HRauIjYpzBw
links:
html: https://hypothes.is/a/2iVhJnqvEe2HRauIjYpzBw
incontext: https://hyp.is/2iVhJnqvEe2HRauIjYpzBw/www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
json: https://hypothes.is/api/annotations/2iVhJnqvEe2HRauIjYpzBw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ml
- bias
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[6]/div[1]/p[6]
endOffset: 245
startContainer: /div[1]/div[1]/main[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[6]/div[1]/p[6]
startOffset: 0
type: RangeSelector
- end: 3237
start: 2992
type: TextPositionSelector
- exact: AI training data is filled with racist stereotypes, pornography, and
explicit images of rape, researchers Abeba Birhane, Vinay Uday Prabhu, and
Emmanuel Kahembwe found after analyzing a data set similar to the one used
to build Stable Diffusion.
prefix: "n historically disadvantaged.\_ "
suffix: " It\u2019s notable that their finding"
type: TextQuoteSelector
source: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
text: 'That is horrifying. You''d think that authors would attempt to remove or
filter this kind of material. There are, after all, models out there that are
trained to find it. It makes me wonder what awful stuff is in the GPT-3 dataset
too. '
updated: '2022-12-13T06:43:06.391962+00:00'
uri: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
tags:
- ml
- bias
- hypothesis
type: annotation
url: /annotations/2022/12/13/1670913121
---
<blockquote>AI training data is filled with racist stereotypes, pornography, and explicit images of rape, researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe found after analyzing a data set similar to the one used to build Stable Diffusion.</blockquote>

That is horrifying. You'd think that authors would attempt to remove or filter this kind of material. There are, after all, models out there that are trained to find it. It makes me wonder what awful stuff is in the GPT-3 dataset too.
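
For concreteness, the kind of filtering alluded to above could be as simple as running each candidate image through a zero-shot classifier and dropping anything that scores high against "unsafe" prompts. The sketch below is a minimal illustration, assuming the Hugging Face `transformers` library and the `openai/clip-vit-base-patch32` checkpoint; the prompt wording and the 0.5 threshold are purely illustrative and are not what LAION, Stability AI, or OpenAI actually used.

```python
# Minimal sketch: zero-shot screening of candidate training images with CLIP.
# The checkpoint, prompts, and threshold below are illustrative assumptions,
# not the filtering pipeline used by any real dataset curator.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

PROMPTS = [
    "an explicit or pornographic image",   # unsafe
    "a violent or abusive image",          # unsafe
    "an ordinary, safe photograph",        # safe
]


def is_probably_unsafe(path: str, threshold: float = 0.5) -> bool:
    """Return True if the image puts more probability mass on the unsafe prompts."""
    image = Image.open(path).convert("RGB")
    inputs = processor(text=PROMPTS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        # logits_per_image has shape (1, len(PROMPTS)); softmax gives per-prompt scores
        probs = model(**inputs).logits_per_image.softmax(dim=1)[0]
    unsafe_score = probs[0] + probs[1]  # mass assigned to the two unsafe prompts
    return unsafe_score.item() >= threshold


# Usage: keep only the images that pass the screen.
# kept = [p for p in candidate_paths if not is_probably_unsafe(p)]
```

A crude pass like this would miss plenty and mislabel plenty more, which is why serious curation pairs automated classifiers with human review of flagged material; the point is only that the tooling for a first-pass screen already exists.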