---
date: '2022-12-13T06:32:01'
hypothesis-meta:
  created: '2022-12-13T06:32:01.500506+00:00'
  document:
    title:
    - The viral AI avatar app Lensa undressed me—without my consent
  flagged: false
  group: __world__
  hidden: false
  id: 2iVhJnqvEe2HRauIjYpzBw
  links:
    html: https://hypothes.is/a/2iVhJnqvEe2HRauIjYpzBw
    incontext: https://hyp.is/2iVhJnqvEe2HRauIjYpzBw/www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
    json: https://hypothes.is/api/annotations/2iVhJnqvEe2HRauIjYpzBw
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - ml
  - bias
  target:
  - selector:
    - endContainer: /div[1]/div[1]/main[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[6]/div[1]/p[6]
      endOffset: 245
      startContainer: /div[1]/div[1]/main[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[6]/div[1]/p[6]
      startOffset: 0
      type: RangeSelector
    - end: 3237
      start: 2992
      type: TextPositionSelector
    - exact: AI training data is filled with racist stereotypes, pornography, and explicit images of rape, researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe found after analyzing a data set similar to the one used to build Stable Diffusion.
      prefix: 'n historically disadvantaged. '
      suffix: Its notable that their finding
      type: TextQuoteSelector
    source: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
  text: That is horrifying. You'd think that authors would attempt to remove or filter this kind of material. There are, after all models out there that are trained to find it. It makes me wonder what awful stuff is in the GPT-3 dataset too.
  updated: '2022-12-13T06:43:06.391962+00:00'
  uri: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
tags:
- ml
- bias
- hypothesis
type: annotation
url: /annotations/2022/12/13/1670913121
---
> AI training data is filled with racist stereotypes, pornography, and explicit images of rape, researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe found after analyzing a data set similar to the one used to build Stable Diffusion.
That is horrifying. You'd think that the authors would attempt to remove or filter this kind of material. There are, after all, models out there that are trained to find it. It makes me wonder what awful stuff is in the GPT-3 dataset too.