---
date: '2022-12-19T14:55:52'
hypothesis-meta:
  created: '2022-12-19T14:55:52.384335+00:00'
  document:
    title:
    - My AI Safety Lecture for UT Effective Altruism
  flagged: false
  group: __world__
  hidden: false
  id: O7YUan-tEe29vjfmuBFMKQ
  links:
    html: https://hypothes.is/a/O7YUan-tEe29vjfmuBFMKQ
    incontext: https://hyp.is/O7YUan-tEe29vjfmuBFMKQ/scottaaronson.blog/?p=6823
    json: https://hypothes.is/api/annotations/O7YUan-tEe29vjfmuBFMKQ
  permissions:
    admin:
    - acct:ravenscroftj@hypothes.is
    delete:
    - acct:ravenscroftj@hypothes.is
    read:
    - group:__world__
    update:
    - acct:ravenscroftj@hypothes.is
  tags:
  - explainability
  - nlproc
  target:
  - selector:
    - endContainer: /div[2]/div[2]/div[2]/div[1]/p[95]
      endOffset: 193
      startContainer: /div[2]/div[2]/div[2]/div[1]/p[95]
      startOffset: 0
      type: RangeSelector
    - end: 38138
      start: 37945
      type: TextPositionSelector
    - exact: So then to watermark, instead of selecting the next token randomly, the
        idea will be to select it pseudorandomly, using a cryptographic pseudorandom
        function, whose key is known only to OpenAI.
      prefix: 'of output tokens) each time.




        '
      suffix: "  That won\u2019t make any detectable"
      type: TextQuoteSelector
    source: https://scottaaronson.blog/?p=6823
  text: Watermarking by applying cryptographic pseudorandom functions to the model
    output instead of true random (true pseudo-random)
  updated: '2022-12-19T14:55:52.384335+00:00'
  uri: https://scottaaronson.blog/?p=6823
  user: acct:ravenscroftj@hypothes.is
  user_info:
    display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- explainability
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671461752

---



 <blockquote>So then to watermark, instead of selecting the next token randomly, the idea will be to select it pseudorandomly, using a cryptographic pseudorandom function, whose key is known only to OpenAI.</blockquote>

Watermarking works by driving token selection with a cryptographic pseudorandom function rather than an ordinary random (or default pseudo-random) draw. Because the function is keyed, the output looks statistically normal to everyone else, but the key holder can recompute the PRF values and detect the watermark.
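To make the idea concrete, here is a minimal sketch (my own illustration, not Aaronson's or OpenAI's actual scheme; the function names and the simple inverse-CDF sampling are assumptions). The next token is chosen by inverse-CDF sampling, but the "random" value in [0, 1) comes from an HMAC keyed on the recent context, so the same key and context always reproduce the same choice — which is exactly what a detector holding the key can exploit:

```python
import hmac
import hashlib

def prf_unit_interval(key: bytes, context: tuple) -> float:
    """Keyed PRF mapping the recent context to a value in [0, 1)."""
    digest = hmac.new(key, repr(context).encode(), hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def sample_token(probs: dict, key: bytes, context: tuple) -> str:
    """Pick the next token by inverse-CDF sampling, but drive the draw
    with the keyed PRF instead of a true random number."""
    r = prf_unit_interval(key, context)
    cumulative = 0.0
    for token, p in sorted(probs.items()):
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the tail

# Hypothetical example: without the key, the choice is indistinguishable
# from a random sample over the model's distribution.
key = b"secret-key-known-only-to-the-provider"
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.2}
token = sample_token(probs, key, ("the", "quick"))
```

A real deployment would be subtler (Aaronson's proposal biases selection rather than fully determining it, to keep output quality and diversity), but the core mechanism is the same: replace the sampler's randomness with a keyed, reproducible source.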