newtheme #4

Merged
ravenscroftj merged 11 commits from newtheme into main 2023-07-09 14:49:04 +01:00
363 changed files with 1779 additions and 8911 deletions

@@ -1,5 +1,8 @@
 name: Deploy Website
-on: [push]
+on:
+  push:
+    branches:
+      - main
 jobs:
   build:
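For context, the hunk above narrows the deploy trigger from every push to pushes on `main` only. A minimal workflow under the new trigger might look like the following sketch; the hunk truncates before the job body, so the runner and steps here are illustrative assumptions for a Hugo site with submodule themes, not the repository's actual steps:

```yaml
name: Deploy Website
on:
  push:
    branches:
      - main
jobs:
  build:
    runs-on: ubuntu-latest          # assumed runner; not shown in the hunk
    steps:
      - uses: actions/checkout@v3
        with:
          submodules: recursive     # themes are pulled in as git submodules
      - name: Build site
        run: hugo --minify          # hypothetical build step for a Hugo site
```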

.gitmodules vendored

@@ -1,3 +1,6 @@
 [submodule "brainsteam/themes/hugo-ink"]
 	path = brainsteam/themes/hugo-ink
 	url = https://git.jamesravey.me/ravenscroftj/hugo-ink.git
+[submodule "brainsteam/themes/Mainroad"]
+	path = brainsteam/themes/Mainroad
+	url = ssh://git@git.jamesravey.me:222/ravenscroftj/Mainroad.git
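The new `.gitmodules` entry above is what `git submodule add` writes. A network-free sketch of the equivalent commands, using a throwaway local repository as a stand-in for `ssh://git@git.jamesravey.me:222/ravenscroftj/Mainroad.git` (paths and identities here are illustrative):

```shell
set -eu
tmp=$(mktemp -d)

# Stand-in for the remote Mainroad theme repo (needs one commit to be cloneable).
git init -q "$tmp/Mainroad"
git -C "$tmp/Mainroad" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "theme"

# The site repository, mirroring the layout in the hunk above.
git init -q "$tmp/site"
cd "$tmp/site"

# protocol.file.allow=always permits cloning a local path as a submodule
# (blocked by default in recent git); not needed for a real ssh:// URL.
git -c protocol.file.allow=always \
    submodule add "$tmp/Mainroad" brainsteam/themes/Mainroad

cat .gitmodules
```

With a real remote, the same command with the ssh:// URL produces the `[submodule "brainsteam/themes/Mainroad"]` stanza shown in the diff.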

@@ -1,8 +1,9 @@
 baseURL = "https://brainsteam.co.uk/"
 languageCode = "en-us"
 title = "Brainsteam"
-theme='hugo-ink'
-paginate=5
+#theme='hugo-ink'
+theme='Mainroad'
+paginate=10
 disqusShortname = "brainsteam"
 copyright = "© James Ravenscroft"
@@ -19,6 +20,8 @@ webMentionAPIKey = "f61bf-RG1k4uZT3fVLDoIw"
 #googleAnalytics = "UA-186263385-1"
+post_meta = ["author", "date", "categories", "translations"] # Order of post meta information
 [outputs]
 home = ["HTML", "RSS", "JSON"]
@@ -28,17 +31,31 @@ webMentionAPIKey = "f61bf-RG1k4uZT3fVLDoIw"
 [markup.goldmark.renderer]
 unsafe= true
-[params]
-subtitle = "Digital Home of James Ravenscroft: CTO @ <a href=\"https://filament.ai\">Filament</a>, Machine Learning and NLP PhD (nerd)"
+[Params]
+authorbox= true
+subtitle = "Digital Home of James Ravenscroft Machine Learning and NLP specialist and software generalist"
 avatar = "/images/avatar_small.png"
 favicon = "/images/favicon.png"
-mainSections = ["post", "note"]
+mainSections = ["post","note","reply","like","repost","bookmark", "watch"]
+indieWebSections = ["note","reply","like","repost","bookmark", "watch"]
+[Author] # Used in authorbox
+name = "James Ravenscroft"
+bio = "James is an NLP and Machine Learning specialist and software generalist, currently CTO at Filament and previously an IBMer"
+avatar = "img/avatar.png"
+[Params.Logo]
+image = "/images/avatar_small.png"
+[Params.sidebar]
+home = "right" # Configure layout for home page
+list = "right" # Configure layout for list pages
+single = false # Configure layout for single pages
+# Enable widgets in given order
+widgets = ["search", "recent", "categories", "taglist", "social", "languages"]
 [[menu.main]]
 name = "Home"

@@ -1,63 +0,0 @@
---
date: '2022-11-19T15:44:56'
hypothesis-meta:
created: '2022-11-19T15:44:56.849529+00:00'
document:
title:
- What if your Index Page was Smart?
flagged: false
group: __world__
hidden: false
id: HkU5GGghEe25tT_HONaiig
links:
html: https://hypothes.is/a/HkU5GGghEe25tT_HONaiig
incontext: https://hyp.is/HkU5GGghEe25tT_HONaiig/www.swyx.io/smart-indexes
json: https://hypothes.is/api/annotations/HkU5GGghEe25tT_HONaiig
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- productivity
target:
- selector:
- endContainer: /div[1]/main[1]/article[1]/div[3]/ol[1]/li[4]
endOffset: 151
startContainer: /div[1]/main[1]/article[1]/div[3]/ol[1]/li[4]
startOffset: 0
type: RangeSelector
- end: 1506
start: 1355
type: TextPositionSelector
- exact: "Many people report writers block with blogs, particularly after a big\
\ successful post, because it\u2019s almost impossible to consistently pump\
\ out bangers."
prefix: 'erthought and extremely manual.
'
suffix: ' So people invent other formats '
type: TextQuoteSelector
source: https://www.swyx.io/smart-indexes
text: Certainly true, people go through peaks and troughs of productivity like [seasons](https://herbertlui.net/seasons/)
updated: '2022-11-19T15:44:56.849529+00:00'
uri: https://www.swyx.io/smart-indexes
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.swyx.io/smart-indexes
tags:
- productivity
- hypothesis
type: annotation
url: /annotations/2022/11/19/1668872696
---
<blockquote>Many people report writers block with blogs, particularly after a big successful post, because it's almost impossible to consistently pump out bangers.</blockquote>Certainly true, people go through peaks and troughs of productivity like [seasons](https://herbertlui.net/seasons/)

@@ -1,11 +0,0 @@
---
date: '2022-11-19T22:14:29'
in-reply-to: https://tomcritchlow.com/2018/02/23/small-b-blogging/
type: annotation
url: /annotations/2022/11/19/1668896069
---
<blockquote>But - as the overall network has grown exponentially the network topology has changed. Digg, Reddit, Hacker News etc all still exist but the audience you can reach with a “homepage” hit there has become much smaller relative to the overall size of the network. And getting a homepage hit there is harder than ever because the volume of content has increased exponentially</blockquote>A similar dynamic can now be observed in the mass migration from twitter to mastodon. People who were successful at using the big "homepage" of twitter are likely to be a bit thrown by the fediverse but it represents an opportunity to connect with a smaller but more specialised audience.

@@ -1,17 +0,0 @@
---
date: '2022-11-20T08:42:45.040182'
in-reply-to: https://www.zylstra.org/blog/2022/08/22036/
tags:
- hypothesis
- personal
- indieweb
type: annotation
url: /annotation/2022/11/20/1668933765
---
<blockquote>
Is it possible to annotate links in Hypothes.is that are in the Internet Archive? My browser bookmarklet for it doesn't work on such archived pages... in some cases this would be very useful to be able to do. For instance, Manfred Kuehn's blog was discontinued in 2018, and more recently removed entirely from Blogspot where it was hosted. The archived versions are the only current source for those blogpostings. This means there is no original page online anymore to gather the annotations around.
</blockquote>
This is a great point and use case - I often worry about content I care about and have spent time thinking about disappearing. I run my own archive using <a href="https://github.com/ArchiveBox/ArchiveBox">ArchiveBox</a>. I see that the Hypothes.is bookmarklet seems to work for archive.org but only in chrome. Also, it doesn't play nice with archivebox yet. I might have to see if I can get it working at some point.

@@ -1,67 +0,0 @@
---
date: '2022-11-20T09:06:40'
hypothesis-meta:
created: '2022-11-20T09:06:40.315328+00:00'
document:
title:
- 'Learn In Public: The fastest way to learn'
flagged: false
group: __world__
hidden: false
id: pTTRNmiyEe24oqsbyV-35A
links:
html: https://hypothes.is/a/pTTRNmiyEe24oqsbyV-35A
incontext: https://hyp.is/pTTRNmiyEe24oqsbyV-35A/www.swyx.io/learn-in-public
json: https://hypothes.is/api/annotations/pTTRNmiyEe24oqsbyV-35A
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- pkm
- productivity
target:
- selector:
- endContainer: /div[1]/main[1]/article[1]/div[3]/p[3]/hypothesis-highlight[3]
endOffset: 98
startContainer: /div[1]/main[1]/article[1]/div[3]/p[3]/hypothesis-highlight[1]
startOffset: 0
type: RangeSelector
- end: 1299
start: 1104
type: TextPositionSelector
- exact: "Whatever your thing is, make the thing you wish you had found when you\
\ were learning. Don\u2019t judge your results by \u201Cclaps\u201D or retweets\
\ or stars or upvotes - just talk to yourself from 3 months ago"
prefix: 'ns (people loooove cartoons!).
'
suffix: . I keep an almost-daily dev blo
type: TextQuoteSelector
source: https://www.swyx.io/learn-in-public
text: 'Completely agree, this is a great intrinsic metric to measure the success
of your work by. '
updated: '2022-11-20T09:06:40.315328+00:00'
uri: https://www.swyx.io/learn-in-public
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.swyx.io/learn-in-public
tags:
- pkm
- productivity
- hypothesis
type: annotation
url: /annotation/2022/11/20/1668935200
---
<blockquote>Whatever your thing is, make the thing you wish you had found when you were learning. Don't judge your results by “claps” or retweets or stars or upvotes - just talk to yourself from 3 months ago</blockquote>Completely agree, this is a great intrinsic metric to measure the success of your work by.

@@ -1,81 +0,0 @@
---
date: '2022-11-20T11:18:31'
hypothesis-meta:
created: '2022-11-20T11:18:31.041323+00:00'
document:
title:
- 'Data Engineering in 2022: ELT tools'
flagged: false
group: __world__
hidden: false
id: EF4wWGjFEe2zrM9D4rCx-g
links:
html: https://hypothes.is/a/EF4wWGjFEe2zrM9D4rCx-g
incontext: https://hyp.is/EF4wWGjFEe2zrM9D4rCx-g/rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
json: https://hypothes.is/api/annotations/EF4wWGjFEe2zrM9D4rCx-g
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- data-engineering
- data-science
- ELT
target:
- selector:
- endContainer: /main[1]/article[1]/div[3]/ul[1]/li[1]/div[2]/p[1]
endOffset: 383
startContainer: /main[1]/article[1]/div[3]/ul[1]/li[1]/div[2]/p[1]
startOffset: 0
type: RangeSelector
- end: 2093
start: 1710
type: TextPositionSelector
- exact: "Working with the raw data has lots of benefits, since at the point of\
\ ingest you don\u2019t know all of the possible uses for the data. If you\
\ rationalise that data down to just the set of fields and/or aggregate it\
\ up to fit just a specific use case then you lose the fidelity of the data\
\ that could be useful elsewhere. This is one of the premises and benefits\
\ of a data lake done well."
prefix: 'keep it at a manageable size.
'
suffix: '
Of course, despite what the'
type: TextQuoteSelector
source: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
text: absolutely right - there's also a data provenance angle here - it is useful
to be able to point to a data point that is 5 or 6 transformations from the raw
input and be able to say "yes I know exactly where this came from, here are all
the steps that came before"
updated: '2022-11-20T11:18:31.041323+00:00'
uri: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
tags:
- data-engineering
- data-science
- ELT
- hypothesis
type: annotation
url: /annotation/2022/11/20/1668943111
---
<blockquote>Working with the raw data has lots of benefits, since at the point of ingest you don't know all of the possible uses for the data. If you rationalise that data down to just the set of fields and/or aggregate it up to fit just a specific use case then you lose the fidelity of the data that could be useful elsewhere. This is one of the premises and benefits of a data lake done well.</blockquote>absolutely right - there's also a data provenance angle here - it is useful to be able to point to a data point that is 5 or 6 transformations from the raw input and be able to say "yes I know exactly where this came from, here are all the steps that came before"

@@ -1,69 +0,0 @@
---
date: '2022-11-20T11:20:16'
hypothesis-meta:
created: '2022-11-20T11:20:16.520474+00:00'
document:
title:
- 'Data Engineering in 2022: ELT tools'
flagged: false
group: __world__
hidden: false
id: Tz7phGjFEe2Jr7uQLlnFiw
links:
html: https://hypothes.is/a/Tz7phGjFEe2Jr7uQLlnFiw
incontext: https://hyp.is/Tz7phGjFEe2Jr7uQLlnFiw/rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
json: https://hypothes.is/api/annotations/Tz7phGjFEe2Jr7uQLlnFiw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- data-science
target:
- selector:
- endContainer: /main[1]/article[1]/div[3]/ul[1]/li[1]/div[3]/blockquote[1]/div[1]/p[1]/em[2]
endOffset: 96
startContainer: /main[1]/article[1]/div[3]/ul[1]/li[1]/div[3]/blockquote[1]/div[1]/p[1]/em[1]
startOffset: 0
type: RangeSelector
- end: 2293
start: 2098
type: TextPositionSelector
- exact: "Of course, despite what the \"data is the new oil\" vendors told you\
\ back in the day, you can\u2019t just chuck raw data in and assume that magic\
\ will happen on it, but that\u2019s a rant for another day ;-)"
prefix: 's of a data lake done well.
'
suffix: "\n\n\n\n\n\nThe second shift\u2014which is "
type: TextQuoteSelector
source: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
text: Love this analogy - imagine chucking some crude into a black box and hoping
for ethanol at the other end. Then, when you end up with diesel you have no idea
what happened.
updated: '2022-11-20T11:20:16.520474+00:00'
uri: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
tags:
- data-science
- hypothesis
type: annotation
url: /annotation/2022/11/20/1668943216
---
<blockquote>Of course, despite what the "data is the new oil" vendors told you back in the day, you can't just chuck raw data in and assume that magic will happen on it, but that's a rant for another day ;-)</blockquote>Love this analogy - imagine chucking some crude into a black box and hoping for ethanol at the other end. Then, when you end up with diesel you have no idea what happened.

@@ -1,87 +0,0 @@
---
date: '2022-11-20T11:35:46'
hypothesis-meta:
created: '2022-11-20T11:35:46.410564+00:00'
document:
title:
- 'Data Engineering in 2022: ELT tools'
flagged: false
group: __world__
hidden: false
id: eYCrpGjHEe2hEkur1Ic5ww
links:
html: https://hypothes.is/a/eYCrpGjHEe2hEkur1Ic5ww
incontext: https://hyp.is/eYCrpGjHEe2hEkur1Ic5ww/rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
json: https://hypothes.is/api/annotations/eYCrpGjHEe2hEkur1Ic5ww
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ELT
- data-engineering
target:
- selector:
- endContainer: /main[1]/article[1]/div[5]/div[1]/div[4]/p[1]
endOffset: 521
startContainer: /main[1]/article[1]/div[5]/div[1]/div[4]/p[1]
startOffset: 0
type: RangeSelector
- end: 4166
start: 3645
type: TextPositionSelector
- exact: "It took me a while to grok where dbt comes in the stack but now that\
\ I (think) I have it, it makes a lot of sense. I can also see why, with my\
\ background, I had trouble doing so. Just as Apache Kafka isn\u2019t easily\
\ explained as simply another database, another message queue, etc, dbt isn\u2019\
t just another Informatica, another Oracle Data Integrator. It\u2019s not\
\ about ETL or ELT - it\u2019s about T alone. With that understood, things\
\ slot into place. This isn\u2019t just my take on it either - dbt themselves\
\ call it out on their blog:"
prefix: "t could fail\u2026but not for now.\n\n\n"
suffix: '
dbt is the T in ELT
'
type: TextQuoteSelector
source: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
text: Also - just because their "pricing" page caught me off guard and their website
isn't that clear (until you click through to the technical docs) - I thought it's
worth calling out that DBT appears to be an open-core platform. They have a SaaS
offering and also an open source python command-line tool - it seems that these
articles are about the latter
updated: '2022-11-20T11:35:46.410564+00:00'
uri: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://rmoff.net/2022/11/08/data-engineering-in-2022-elt-tools/
tags:
- ELT
- data-engineering
- hypothesis
type: annotation
url: /annotation/2022/11/20/1668944146
---
<blockquote>It took me a while to grok where dbt comes in the stack but now that I (think) I have it, it makes a lot of sense. I can also see why, with my background, I had trouble doing so. Just as Apache Kafka isn't easily explained as simply another database, another message queue, etc, dbt isn't just another Informatica, another Oracle Data Integrator. It's not about ETL or ELT - it's about T alone. With that understood, things slot into place. This isn't just my take on it either - dbt themselves call it out on their blog:</blockquote>Also - just because their "pricing" page caught me off guard and their website isn't that clear (until you click through to the technical docs) - I thought it's worth calling out that DBT appears to be an open-core platform. They have a SaaS offering and also an open source python command-line tool - it seems that these articles are about the latter

@@ -1,74 +0,0 @@
---
date: '2022-11-20T16:47:28'
hypothesis-meta:
created: '2022-11-20T16:47:28.055472+00:00'
document:
title:
- Rest in motion
flagged: false
group: __world__
hidden: false
id: BJBM9mjzEe25nGNVhh66wA
links:
html: https://hypothes.is/a/BJBM9mjzEe25nGNVhh66wA
incontext: https://hyp.is/BJBM9mjzEe25nGNVhh66wA/mindingourway.com/rest-in-motion/
json: https://hypothes.is/api/annotations/BJBM9mjzEe25nGNVhh66wA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- productivity
- mental health
- resting
target:
- selector:
- endContainer: /main[1]/article[1]/section[1]/p[4]
endOffset: 394
startContainer: /main[1]/article[1]/section[1]/p[4]
startOffset: 4
type: RangeSelector
- end: 1546
start: 1156
type: TextPositionSelector
- exact: the work that needs to be done is not a finite list of tasks, it is a
neverending stream. Clothes are always getting worn down, food is always getting
eaten, code is always in motion. The goal is not to finish all the work before
you; for that is impossible. The goal is simply to move through the work.
Instead of struggling to reach the end of the stream, simply focus on moving
along it.
prefix: 'est state and wear me down.
But '
suffix: '
Advertisements and media often'
type: TextQuoteSelector
source: https://mindingourway.com/rest-in-motion/
text: 'This is true and worth remembering. It is very easy to fall into the mindset
of "I''ll rest when I''m finished" '
updated: '2022-11-20T16:48:53.423785+00:00'
uri: https://mindingourway.com/rest-in-motion/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://mindingourway.com/rest-in-motion/
tags:
- productivity
- mental health
- resting
- hypothesis
type: annotation
url: /annotation/2022/11/20/1668962848
---
<blockquote>the work that needs to be done is not a finite list of tasks, it is a neverending stream. Clothes are always getting worn down, food is always getting eaten, code is always in motion. The goal is not to finish all the work before you; for that is impossible. The goal is simply to move through the work. Instead of struggling to reach the end of the stream, simply focus on moving along it.</blockquote>This is true and worth remembering. It is very easy to fall into the mindset of "I'll rest when I'm finished"

@@ -1,74 +0,0 @@
---
date: '2022-11-20T16:53:00'
hypothesis-meta:
created: '2022-11-20T16:53:00.027245+00:00'
document:
title:
- Rest in motion
flagged: false
group: __world__
hidden: false
id: ynPUBmjzEe2xBVPY9eauIQ
links:
html: https://hypothes.is/a/ynPUBmjzEe2xBVPY9eauIQ
incontext: https://hyp.is/ynPUBmjzEe2xBVPY9eauIQ/mindingourway.com/rest-in-motion/
json: https://hypothes.is/api/annotations/ynPUBmjzEe2xBVPY9eauIQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- mental health
- productivity
- resting
target:
- selector:
- endContainer: /main[1]/article[1]/section[1]/p[7]
endOffset: 463
startContainer: /main[1]/article[1]/section[1]/p[7]
startOffset: 0
type: RangeSelector
- end: 2547
start: 2084
type: TextPositionSelector
- exact: 'The actual reward state is not one where you''re lazing around doing
nothing. It''s one where you''re keeping busy, where you''re doing things
that stimulate you, and where you''re resting only a fraction of the time.
The preferred ground state is not one where you have no activity to partake
in, it''s one where you''re managing the streams of activity precisely, and
moving through them at the right pace: not too fast, but also not too slow.
For that would be boring'
prefix: 'ctive state, not a passive one.
'
suffix: '.
And yet, most people have this'
type: TextQuoteSelector
source: https://mindingourway.com/rest-in-motion/
text: Doing nothing at all is boring. When we "rest" we are actually just doing
activities that we find interesting rather than those we find dull or stressful.
updated: '2022-11-20T16:53:00.027245+00:00'
uri: https://mindingourway.com/rest-in-motion/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://mindingourway.com/rest-in-motion/
tags:
- mental health
- productivity
- resting
- hypothesis
type: annotation
url: /annotation/2022/11/20/1668963180
---
<blockquote>The actual reward state is not one where you're lazing around doing nothing. It's one where you're keeping busy, where you're doing things that stimulate you, and where you're resting only a fraction of the time. The preferred ground state is not one where you have no activity to partake in, it's one where you're managing the streams of activity precisely, and moving through them at the right pace: not too fast, but also not too slow. For that would be boring</blockquote>Doing nothing at all is boring. When we "rest" we are actually just doing activities that we find interesting rather than those we find dull or stressful.

@@ -1,62 +0,0 @@
---
date: '2022-11-21T06:28:39'
hypothesis-meta:
created: '2022-11-21T06:28:39.144038+00:00'
document:
title:
- 8 Years on the Road
flagged: false
group: __world__
hidden: false
id: vG24tGllEe20EGNfsOhnSQ
links:
html: https://hypothes.is/a/vG24tGllEe20EGNfsOhnSQ
incontext: https://hyp.is/vG24tGllEe20EGNfsOhnSQ/tomcritchlow.com/2022/11/10/8-years-on-the-road/
json: https://hypothes.is/api/annotations/vG24tGllEe20EGNfsOhnSQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- self-employed
target:
- selector:
- endContainer: /div[1]/div[3]/p[1]/em[1]
endOffset: 168
startContainer: /div[1]/div[3]/p[1]/em[1]
startOffset: 0
type: RangeSelector
- end: 1842
start: 1674
type: TextPositionSelector
- exact: Being self-employed feels a bit like being on an extended road trip.
Untethered and free, but lonely and unsupported too. Ultimate freedoms combined
with shallow roots.
prefix: "mpass\n \n \n \n \n \n \n\n\n "
suffix: ' Every year I write a recap, aro'
type: TextQuoteSelector
source: https://tomcritchlow.com/2022/11/10/8-years-on-the-road/
text: That's a super insightful take on the self employment thing that people probably
don't consider that much when deciding whether to take the leap
updated: '2022-11-21T06:28:39.144038+00:00'
uri: https://tomcritchlow.com/2022/11/10/8-years-on-the-road/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://tomcritchlow.com/2022/11/10/8-years-on-the-road/
tags:
- self-employed
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669012119
---
<blockquote>Being self-employed feels a bit like being on an extended road trip. Untethered and free, but lonely and unsupported too. Ultimate freedoms combined with shallow roots.</blockquote>That's a super insightful take on the self employment thing that people probably don't consider that much when deciding whether to take the leap

@@ -1,76 +0,0 @@
---
date: '2022-11-21T06:31:05'
hypothesis-meta:
created: '2022-11-21T06:31:05.094140+00:00'
document:
title:
- 8 Years on the Road
flagged: false
group: __world__
hidden: false
id: E2tl0GlmEe2cZXOxi8VhuA
links:
html: https://hypothes.is/a/E2tl0GlmEe2cZXOxi8VhuA
incontext: https://hyp.is/E2tl0GlmEe2cZXOxi8VhuA/tomcritchlow.com/2022/11/10/8-years-on-the-road/
json: https://hypothes.is/api/annotations/E2tl0GlmEe2cZXOxi8VhuA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- self-employed
- consulting
target:
- selector:
- endContainer: /div[1]/div[3]/p[22]
endOffset: 562
startContainer: /div[1]/div[3]/p[22]
startOffset: 0
type: RangeSelector
- end: 8573
start: 8011
type: TextPositionSelector
- exact: "I\u2019ve been using this phrase \u201Cthe next most useful thing\u201D\
\ as a guiding light for my consulting work - I\u2019m obsessed with being\
\ useful not just right. I\u2019ve always rejected the fancy presentation\
\ in favor of the next most useful thing, and I simply took my eye off the\
\ ball with this one. I\u2019m not even sure the client views this project\
\ as a real disappointment, there was still some value in it, but I\u2019\
m mad at myself personally for this one. A good reminder not to take your\
\ eye off the ball. And to push your clients beyond what they tell you the\
\ right answer is."
prefix: ' hiring their marketing team).
'
suffix: '
Anyway, while consulting work '
type: TextQuoteSelector
source: https://tomcritchlow.com/2022/11/10/8-years-on-the-road/
text: The customer is not always right (just in matters of taste). Part of consultancy
is providing stewardship and pushing back, just like any role I guess
updated: '2022-11-21T06:31:05.094140+00:00'
uri: https://tomcritchlow.com/2022/11/10/8-years-on-the-road/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://tomcritchlow.com/2022/11/10/8-years-on-the-road/
tags:
- self-employed
- consulting
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669012265
---
<blockquote>I've been using this phrase “the next most useful thing” as a guiding light for my consulting work - I'm obsessed with being useful not just right. I've always rejected the fancy presentation in favor of the next most useful thing, and I simply took my eye off the ball with this one. I'm not even sure the client views this project as a real disappointment, there was still some value in it, but I'm mad at myself personally for this one. A good reminder not to take your eye off the ball. And to push your clients beyond what they tell you the right answer is.</blockquote>The customer is not always right (just in matters of taste). Part of consultancy is providing stewardship and pushing back, just like any role I guess

@@ -1,79 +0,0 @@
---
date: '2022-11-21T06:37:23'
hypothesis-meta:
created: '2022-11-21T06:37:23.130029+00:00'
document:
title:
- Generating Agency Through Blogging
flagged: false
group: __world__
hidden: false
id: 9L5WKGlmEe2_Xs-Alhi35w
links:
html: https://hypothes.is/a/9L5WKGlmEe2_Xs-Alhi35w
incontext: https://hyp.is/9L5WKGlmEe2_Xs-Alhi35w/tomcritchlow.com/2022/08/29/blogging-agency/
json: https://hypothes.is/api/annotations/9L5WKGlmEe2_Xs-Alhi35w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- monetization
- self-employed
target:
- selector:
- endContainer: /div[1]/div[2]/p[6]
endOffset: 136
startContainer: /div[1]/div[2]/p[5]
startOffset: 0
type: RangeSelector
- end: 2446
start: 1954
type: TextPositionSelector
- exact: "I only know a handful of people directly making money from blogging\
\ (via ads, subscriptions etc) but I know many more who:\n\n\n Got a better\
\ career because of blogging (new job, better pay etc)\n Negotiated better\
\ contracts (e.g. with a publisher or platform) because they had \u201Can\
\ audience\u201D\n Sold their own courses / ebooks / books / merchandise\
\ / music\n\n\nBlogging is this kind of engine that opens up economic opportunity\
\ and advantage. Being visible in the networked economy has real value."
prefix: "unities than those that don\u2019t.\n\n"
suffix: '
Blogging as social opportunity'
type: TextQuoteSelector
source: https://tomcritchlow.com/2022/08/29/blogging-agency/
text: 'Making money from blogging isn''t just about selling ads or subscriptions
a direct thing. It can be indirect too. Eg selling courses or books. '
updated: '2022-11-21T06:37:23.130029+00:00'
uri: https://tomcritchlow.com/2022/08/29/blogging-agency/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://tomcritchlow.com/2022/08/29/blogging-agency/
tags:
- monetization
- self-employed
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669012643
---
<blockquote>I only know a handful of people directly making money from blogging (via ads, subscriptions etc) but I know many more who:
Got a better career because of blogging (new job, better pay etc)
Negotiated better contracts (e.g. with a publisher or platform) because they had “an audience”
Sold their own courses / ebooks / books / merchandise / music
Blogging is this kind of engine that opens up economic opportunity and advantage. Being visible in the networked economy has real value.</blockquote>Making money from blogging isn't just about selling ads or subscriptions a direct thing. It can be indirect too. Eg selling courses or books.

@@ -1,64 +0,0 @@
---
date: '2022-11-21T06:42:45'
hypothesis-meta:
created: '2022-11-21T06:42:45.359084+00:00'
document:
title:
- Using GPT-3 to augment human intelligence
flagged: false
group: __world__
hidden: false
id: tMpexmlnEe2wMitrya3q6Q
links:
html: https://hypothes.is/a/tMpexmlnEe2wMitrya3q6Q
incontext: https://hyp.is/tMpexmlnEe2wMitrya3q6Q/escapingflatland.substack.com/p/gpt-3
json: https://hypothes.is/api/annotations/tMpexmlnEe2wMitrya3q6Q
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- learn-in-public
- productivity
target:
- selector:
- endContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[5]/div[1]/div[1]/p[1]/span[2]
endOffset: 1
startContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[5]/div[1]/div[1]/p[1]/span[1]
startOffset: 0
type: RangeSelector
- end: 4556
start: 4425
type: TextPositionSelector
- exact: A blog post is a very long and complex search query to find fascinating
people and make them route interesting stuff to your inbox.
prefix: comCopy linkTwitterFacebookEmail
suffix: It is like summoning an alien in
type: TextQuoteSelector
source: https://escapingflatland.substack.com/p/gpt-3
text: This is a really cool take on blogging. By writing about interesting people
and stuff you are increasing your chances of meeting someone cool and indeed [increasing
your luck](https://github.com/readme/guides/publishing-your-work )
updated: '2022-11-21T06:43:14.669327+00:00'
uri: https://escapingflatland.substack.com/p/gpt-3
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://escapingflatland.substack.com/p/gpt-3
tags:
- learn-in-public
- productivity
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669012965
---
<blockquote>A blog post is a very long and complex search query to find fascinating people and make them route interesting stuff to your inbox.</blockquote>This is a really cool take on blogging. By writing about interesting people and stuff you are increasing your chances of meeting someone cool and indeed [increasing your luck](https://github.com/readme/guides/publishing-your-work )


@ -1,63 +0,0 @@
---
date: '2022-11-21T12:59:21'
hypothesis-meta:
created: '2022-11-21T12:59:21.592621+00:00'
document:
title:
- "\U0001F52E Azeem's commentary: On the generative wave (Part 1)"
flagged: false
group: __world__
hidden: false
id: UTYjsGmcEe2lSwP-tU-8Wg
links:
html: https://hypothes.is/a/UTYjsGmcEe2lSwP-tU-8Wg
incontext: https://hyp.is/UTYjsGmcEe2lSwP-tU-8Wg/www.exponentialview.co/p/azeems-commentary-on-the-generative?utm_medium=email
json: https://hypothes.is/api/annotations/UTYjsGmcEe2lSwP-tU-8Wg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- academic-search
target:
- selector:
- endContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[6]/span[2]
endOffset: 20
startContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[6]/span[1]
startOffset: 0
type: RangeSelector
- end: 5808
start: 5462
type: TextPositionSelector
- exact: "Elicit is really impressive. It searches academic papers, providing\
\ summary abstracts as well as structured analyses of papers. For example,\
\ it tries to identify the outcomes analysed in the paper or the conflicts\
\ of interest of the authors, as well as easily tracks citations. (See a similar\
\ search on \u201Ctechnology transitions\u201D. Log in required.)"
prefix: " transitions\u201D. Log in required.)"
suffix: 'But I have nerdy research needs '
type: TextQuoteSelector
source: https://www.exponentialview.co/p/azeems-commentary-on-the-generative?utm_medium=email
text: https://elicit.org/ - another academic search engine
updated: '2022-11-21T12:59:21.592621+00:00'
uri: https://www.exponentialview.co/p/azeems-commentary-on-the-generative?utm_medium=email
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.exponentialview.co/p/azeems-commentary-on-the-generative?utm_medium=email
tags:
- academic-search
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669035561
---
<blockquote>Elicit is really impressive. It searches academic papers, providing summary abstracts as well as structured analyses of papers. For example, it tries to identify the outcomes analysed in the paper or the conflicts of interest of the authors, as well as easily tracks citations. (See a similar search on “technology transitions”. Log in required.)</blockquote>https://elicit.org/ - another academic search engine


@ -1,66 +0,0 @@
---
date: '2022-11-21T13:02:06'
hypothesis-meta:
created: '2022-11-21T13:02:06.220445+00:00'
document:
title:
- "\U0001F52E Azeem's commentary: On the generative wave (Part 1)"
flagged: false
group: __world__
hidden: false
id: s1Ab0mmcEe2nEJ8RSwgyfA
links:
html: https://hypothes.is/a/s1Ab0mmcEe2nEJ8RSwgyfA
incontext: https://hyp.is/s1Ab0mmcEe2nEJ8RSwgyfA/www.exponentialview.co/p/azeems-commentary-on-the-generative?utm_medium=email
json: https://hypothes.is/api/annotations/s1Ab0mmcEe2nEJ8RSwgyfA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- generative models
- machine learning
- ml explainability
target:
- selector:
- endContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[21]/span[3]
endOffset: 254
startContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[21]/span[3]
startOffset: 121
type: RangeSelector
- end: 9218
start: 9085
type: TextPositionSelector
- exact: "\u201CThe metaphor is that the machine understands what I\u2019m saying\
\ and so I\u2019m going to interpret the machine\u2019s responses in that\
\ context.\u201D"
prefix: 'ng the context of the research. '
suffix: Meta has since pulled Galactica.
type: TextQuoteSelector
source: https://www.exponentialview.co/p/azeems-commentary-on-the-generative?utm_medium=email
text: Interesting metaphor for why humans are happy to trust outputs from generative
models
updated: '2022-11-21T13:02:06.220445+00:00'
uri: https://www.exponentialview.co/p/azeems-commentary-on-the-generative?utm_medium=email
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.exponentialview.co/p/azeems-commentary-on-the-generative?utm_medium=email
tags:
- generative models
- machine learning
- ml explainability
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669035726
---
<blockquote>“The metaphor is that the machine understands what I’m saying and so I’m going to interpret the machine’s responses in that context.”</blockquote>Interesting metaphor for why humans are happy to trust outputs from generative models


@ -1,62 +0,0 @@
---
date: '2022-11-21T20:07:22'
hypothesis-meta:
created: '2022-11-21T20:07:22.691275+00:00'
document:
title:
- IEEEtran-7.pdf
flagged: false
group: __world__
hidden: false
id: HE4vnmnYEe2__KMWJ8Dgcg
links:
html: https://hypothes.is/a/HE4vnmnYEe2__KMWJ8Dgcg
incontext: https://hyp.is/HE4vnmnYEe2__KMWJ8Dgcg/www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
json: https://hypothes.is/api/annotations/HE4vnmnYEe2__KMWJ8Dgcg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- NLProc
- multi-task learning
- topic modelling
target:
- selector:
- end: 5426
start: 5187
type: TextPositionSelector
- exact: . However, such a framework is not applicablehere since the learned latent
topic representations in topicmodels can not be shared directly with word
or sentencerepresentations learned in classifiers, due to their differentinherent
meanings
prefix: n task-relevant rep-resentations
suffix: .We instead propose a new MTL fr
type: TextQuoteSelector
source: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
text: Latent word vectors and topic models learn different and entirely unrelated
representations
updated: '2022-11-21T20:07:22.691275+00:00'
uri: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
tags:
- NLProc
- multi-task learning
- topic modelling
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669061242
---
<blockquote>However, such a framework is not applicable here since the learned latent topic representations in topic models cannot be shared directly with word or sentence representations learned in classifiers, due to their different inherent meanings</blockquote>Latent word vectors and topic models learn different and entirely unrelated representations


@ -1,63 +0,0 @@
---
date: '2022-11-21T20:09:49'
hypothesis-meta:
created: '2022-11-21T20:09:49.369906+00:00'
document:
title:
- IEEEtran-7.pdf
flagged: false
group: __world__
hidden: false
id: c71vQmnYEe2a7ffZp-667A
links:
html: https://hypothes.is/a/c71vQmnYEe2a7ffZp-667A
incontext: https://hyp.is/c71vQmnYEe2a7ffZp-667A/www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
json: https://hypothes.is/api/annotations/c71vQmnYEe2a7ffZp-667A
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- multi-task learning
- topic modelling
- NLProc
target:
- selector:
- end: 8039
start: 7778
type: TextPositionSelector
- exact: "e argue that mutual learningwould benefit sentiment classification since\
\ it enriches theinformation required for the training of the sentiment clas-sifier\
\ (e.g., when the word \u201Cincredible\u201D is used to describe\u201Cacting\u201D\
\ or \u201Cmovie\u201D, the polarity should be positive)"
prefix: "thewords \u201Cacting\u201D and \u201Cmovie\u201D. W"
suffix: . At thesame time, mutual learni
type: TextQuoteSelector
source: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
text: By training a topic model that has "similar" weights to the word vector model
the sentiment task can also be improved (as per the example "incredible" should
be positive when used to describe "acting" or "movie" in this context
updated: '2022-11-21T20:09:49.369906+00:00'
uri: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
tags:
- multi-task learning
- topic modelling
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669061389
---
<blockquote>[W]e argue that mutual learning would benefit sentiment classification since it enriches the information required for the training of the sentiment classifier (e.g., when the word “incredible” is used to describe “acting” or “movie”, the polarity should be positive)</blockquote>By training a topic model that has "similar" weights to the word vector model, the sentiment task can also be improved (as per the example, "incredible" should be positive when used to describe "acting" or "movie" in this context)


@ -1,73 +0,0 @@
---
date: '2022-11-21T20:13:05'
hypothesis-meta:
created: '2022-11-21T20:13:05.556810+00:00'
document:
title:
- IEEEtran-7.pdf
flagged: false
group: __world__
hidden: false
id: 6KsbkmnYEe2Y3g9fobLUFA
links:
html: https://hypothes.is/a/6KsbkmnYEe2Y3g9fobLUFA
incontext: https://hyp.is/6KsbkmnYEe2Y3g9fobLUFA/www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
json: https://hypothes.is/api/annotations/6KsbkmnYEe2Y3g9fobLUFA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- NLProc
- topic modelling
- neural networks
target:
- selector:
- end: 11125
start: 10591
type: TextPositionSelector
- exact: n recent years, the neural network based topic modelshave been proposed
for many NLP tasks, such as infor-mation retrieval [11], aspect extraction
[12] and sentimentclassification [13]. The basic idea is to construct a neuralnetwork
which aims to approximate the topic-word distri-bution in probabilistic topic
models. Additional constraints,such as incorporating prior distribution [14],
enforcing di-versity among topics [15] or encouraging topic sparsity [16],have
been explored for neural topic model learning andproved effective.
prefix: ' word embeddings[8], [9], [10].I'
suffix: ' However, most of these algorith'
type: TextQuoteSelector
source: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
text: "Neural topic models are often trained to mimic the behaviours of probabilistic\
\ topic models - I should come back and look at some of the works:\n\n* R. Das,\
\ M. Zaheer, and C. Dyer, \u201CGaussian LDA for topic models with word embeddings,\u201D\
\ \n* P. Xie, J. Zhu, and E. P. Xing, \u201CDiversity-promoting bayesian learning\
\ of latent variable models,\u201D\n * M. Peng, Q. Xie, H. Wang, Y. Zhang, X.\
\ Zhang, J. Huang, and G. Tian, \u201CNeural sparse topical coding,\u201D"
updated: '2022-11-21T20:13:05.556810+00:00'
uri: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.researchgate.net/profile/Lin-Gui-5/publication/342058196_Multi-Task_Learning_with_Mutual_Learning_for_Joint_Sentiment_Classification_and_Topic_Detection/links/5f96fd48458515b7cf9f3abd/Multi-Task-Learning-with-Mutual-Learning-for-Joint-Sentiment-Classification-and-Topic-Detection.pdf
tags:
- NLProc
- topic modelling
- neural networks
- hypothesis
type: annotation
url: /annotation/2022/11/21/1669061585
---
<blockquote>[I]n recent years, the neural network based topic models have been proposed for many NLP tasks, such as information retrieval [11], aspect extraction [12] and sentiment classification [13]. The basic idea is to construct a neural network which aims to approximate the topic-word distribution in probabilistic topic models. Additional constraints, such as incorporating prior distribution [14], enforcing diversity among topics [15] or encouraging topic sparsity [16], have been explored for neural topic model learning and proved effective.</blockquote>Neural topic models are often trained to mimic the behaviours of probabilistic topic models - I should come back and look at some of the works:
* R. Das, M. Zaheer, and C. Dyer, “Gaussian LDA for topic models with word embeddings,”
* P. Xie, J. Zhu, and E. P. Xing, “Diversity-promoting bayesian learning of latent variable models,”
* M. Peng, Q. Xie, H. Wang, Y. Zhang, X. Zhang, J. Huang, and G. Tian, “Neural sparse topical coding,”
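As a toy illustration of the "approximate the topic-word distribution" idea (a sketch of my own, not code from the paper), a neural decoder can score learned topic vectors against learned word vectors and softmax the result into a per-topic distribution over the vocabulary:

```python
import math
import random

random.seed(0)
n_topics, vocab_size, dim = 4, 50, 16   # toy sizes, not from the paper

def rand_matrix(rows, cols):
    """Stand-in for learned embeddings: a rows x cols Gaussian matrix."""
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

topic_emb = rand_matrix(n_topics, dim)   # one vector per topic
word_emb = rand_matrix(vocab_size, dim)  # one vector per vocabulary word

def softmax(scores):
    peak = max(scores)
    exps = [math.exp(s - peak) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# beta[k] approximates p(word | topic k): dot-product affinities between
# topic k and every word, normalised into a probability distribution
beta = [softmax([sum(t * w for t, w in zip(topic, word)) for word in word_emb])
        for topic in topic_emb]
```

In a real neural topic model the embeddings would be trained so that each row of `beta` matches the topic-word distribution of a probabilistic model; here they are random, so only the shape of the construction is meaningful.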


@ -1,64 +0,0 @@
---
date: '2022-11-23T10:27:40'
hypothesis-meta:
created: '2022-11-23T10:27:40.587505+00:00'
document:
title:
- How to architect the perfect Data Warehouse
flagged: false
group: __world__
hidden: false
id: dWHZzGsZEe25P7NsRpHeAg
links:
html: https://hypothes.is/a/dWHZzGsZEe25P7NsRpHeAg
incontext: https://hyp.is/dWHZzGsZEe25P7NsRpHeAg/scribe.rip/how-to-architect-the-perfect-data-warehouse-b3af2e01342e
json: https://hypothes.is/api/annotations/dWHZzGsZEe25P7NsRpHeAg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ELT
- data-engineering
target:
- selector:
- endContainer: /article[1]/section[1]/p[7]
endOffset: 216
startContainer: /article[1]/section[1]/p[7]
startOffset: 0
type: RangeSelector
- end: 1816
start: 1600
type: TextPositionSelector
- exact: "One example could be putting all files into an Amazon S3 bucket. It\u2019\
s versatile, cheap and integrates with many technologies. If you are using\
\ Redshift for your data warehouse, it has great integration with that too."
prefix: loading into the data warehouse.
suffix: StagingThe staging area is the b
type: TextQuoteSelector
source: https://scribe.rip/how-to-architect-the-perfect-data-warehouse-b3af2e01342e
text: Essentially the raw data needs to be vaguely homogenised and put into a single
place
updated: '2022-11-23T10:27:40.587505+00:00'
uri: https://scribe.rip/how-to-architect-the-perfect-data-warehouse-b3af2e01342e
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scribe.rip/how-to-architect-the-perfect-data-warehouse-b3af2e01342e
tags:
- ELT
- data-engineering
- hypothesis
type: annotation
url: /annotations/2022/11/23/1669199260
---
<blockquote>One example could be putting all files into an Amazon S3 bucket. It’s versatile, cheap and integrates with many technologies. If you are using Redshift for your data warehouse, it has great integration with that too.</blockquote>Essentially the raw data needs to be vaguely homogenised and put into a single place
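A minimal sketch of that "single place" idea, using a local directory as a stand-in for the S3 bucket (the source-system/load-date layout is my own assumption, not something the article prescribes):

```python
import shutil
from datetime import date
from pathlib import Path

def stage_raw_file(src: str, staging_root: str, source_system: str) -> Path:
    """Copy a raw extract into one staging area, partitioned by source
    system and load date, without transforming its contents."""
    src_path = Path(src)
    dest_dir = Path(staging_root) / source_system / date.today().isoformat()
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / src_path.name
    shutil.copy2(src_path, dest)  # byte-for-byte copy, no homogenising yet
    return dest
```

Swapping the `Path` operations for S3 puts against a bucket prefix gives the same layout in the cloud.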


@ -1,64 +0,0 @@
---
date: '2022-11-23T19:48:10'
hypothesis-meta:
created: '2022-11-23T19:48:10.551681+00:00'
document:
title:
- 2210.07188.pdf
flagged: false
group: __world__
hidden: false
id: wmnTPmtnEe2grmOa8_XKwA
links:
html: https://hypothes.is/a/wmnTPmtnEe2grmOa8_XKwA
incontext: https://hyp.is/wmnTPmtnEe2grmOa8_XKwA/arxiv.org/pdf/2210.07188.pdf
json: https://hypothes.is/api/annotations/wmnTPmtnEe2grmOa8_XKwA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- coreference
- NLProc
- data-annotation
target:
- selector:
- end: 1153
start: 772
type: TextPositionSelector
- exact: ' this work, we developa crowdsourcing-friendly coreference annota-tion
methodology, ezCoref, consisting of anannotation tool and an interactive tutorial.
Weuse ezCoref to re-annotate 240 passages fromseven existing English coreference
datasets(spanning fiction, news, and multiple other do-mains) while teaching
annotators only casesthat are treated similarly across these datasets'
prefix: rs with vari-ous backgrounds. In
suffix: .1Surprisingly, we find that rea
type: TextQuoteSelector
source: https://arxiv.org/pdf/2210.07188.pdf
text: this paper describes a new efficient coreference annotation tool which simplifies
co-reference annotation. They use their tool to re-annotate passages from widely
used coreference datasets.
updated: '2022-11-23T19:48:10.551681+00:00'
uri: https://arxiv.org/pdf/2210.07188.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- coreference
- NLProc
- data-annotation
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669232890
---
<blockquote>[In] this work, we develop a crowdsourcing-friendly coreference annotation methodology, ezCoref, consisting of an annotation tool and an interactive tutorial. We use ezCoref to re-annotate 240 passages from seven existing English coreference datasets (spanning fiction, news, and multiple other domains) while teaching annotators only cases that are treated similarly across these datasets</blockquote>This paper describes a new efficient coreference annotation tool which simplifies co-reference annotation. They use their tool to re-annotate passages from widely used coreference datasets.


@ -1,65 +0,0 @@
---
date: '2022-11-23T19:50:16'
hypothesis-meta:
created: '2022-11-23T19:50:16.484020+00:00'
document:
title:
- 2210.07188.pdf
flagged: false
group: __world__
hidden: false
id: DXdcFmtoEe2_uNemAZII7w
links:
html: https://hypothes.is/a/DXdcFmtoEe2_uNemAZII7w
incontext: https://hyp.is/DXdcFmtoEe2_uNemAZII7w/arxiv.org/pdf/2210.07188.pdf
json: https://hypothes.is/api/annotations/DXdcFmtoEe2_uNemAZII7w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- coreference
- NLProc
- data-annotation
target:
- selector:
- end: 3539
start: 3191
type: TextPositionSelector
- exact: owever, these datasets vary widelyin their definitions of coreference
(expressed viaannotation guidelines), resulting in inconsistent an-notations
both within and across domains and lan-guages. For instance, as shown in Figure
1, whileARRAU (Uryupina et al., 2019) treats generic pro-nouns as non-referring,
OntoNotes chooses not tomark them at all
prefix: "larly for \u201Cwe\u201D.et al., 2016a). H"
suffix: .It is thus unclear which guidel
type: TextQuoteSelector
source: https://arxiv.org/pdf/2210.07188.pdf
text: One of the big issues is that different co-reference datasets have significant
differences in annotation guidelines even within the coreference family of tasks
- I found this quite shocking as one might expect coreference to be fairly well
defined as a task.
updated: '2022-11-23T19:54:31.023210+00:00'
uri: https://arxiv.org/pdf/2210.07188.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- coreference
- NLProc
- data-annotation
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669233016
---
<blockquote>[H]owever, these datasets vary widely in their definitions of coreference (expressed via annotation guidelines), resulting in inconsistent annotations both within and across domains and languages. For instance, as shown in Figure 1, while ARRAU (Uryupina et al., 2019) treats generic pronouns as non-referring, OntoNotes chooses not to mark them at all</blockquote>One of the big issues is that different co-reference datasets have significant differences in annotation guidelines even within the coreference family of tasks - I found this quite shocking as one might expect coreference to be fairly well defined as a task.


@ -1,72 +0,0 @@
---
date: '2022-11-23T19:54:24'
hypothesis-meta:
created: '2022-11-23T19:54:24.332809+00:00'
document:
title:
- 2210.07188.pdf
flagged: false
group: __world__
hidden: false
id: oTGKsmtoEe2RF0-NK45jew
links:
html: https://hypothes.is/a/oTGKsmtoEe2RF0-NK45jew
incontext: https://hyp.is/oTGKsmtoEe2RF0-NK45jew/arxiv.org/pdf/2210.07188.pdf
json: https://hypothes.is/api/annotations/oTGKsmtoEe2RF0-NK45jew
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- coreference
- NLProc
- data-annotation
target:
- selector:
- end: 5934
start: 5221
type: TextPositionSelector
- exact: Specifically, our work investigates the quality ofcrowdsourced coreference
annotations when anno-tators are taught only simple coreference cases thatare
treated uniformly across existing datasets (e.g.,pronouns). By providing only
these simple cases,we are able to teach the annotators the concept ofcoreference,
while allowing them to freely interpretcases treated differently across the
existing datasets.This setup allows us to identify cases where ourannotators
disagree among each other, but moreimportantly cases where they unanimously
agreewith each other but disagree with the expert, thussuggesting cases that
should be revisited by theresearch community when curating future unifiedannotation
guidelines
prefix: ficient payment-based platforms.
suffix: .Our main contributions are:1. W
type: TextQuoteSelector
source: https://arxiv.org/pdf/2210.07188.pdf
text: "The aim of the work is to examine a simplified subset of co-reference phenomena\
\ which are generally treated the same across different existing datasets. \n\n\
This makes spotting inter-annotator disagreement easier - presumably because for\
\ simpler cases there are fewer modes of failure?\n\n"
updated: '2022-11-23T19:54:24.332809+00:00'
uri: https://arxiv.org/pdf/2210.07188.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- coreference
- NLProc
- data-annotation
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669233264
---
<blockquote>Specifically, our work investigates the quality of crowdsourced coreference annotations when annotators are taught only simple coreference cases that are treated uniformly across existing datasets (e.g., pronouns). By providing only these simple cases, we are able to teach the annotators the concept of coreference, while allowing them to freely interpret cases treated differently across the existing datasets. This setup allows us to identify cases where our annotators disagree among each other, but more importantly cases where they unanimously agree with each other but disagree with the expert, thus suggesting cases that should be revisited by the research community when curating future unified annotation guidelines</blockquote>The aim of the work is to examine a simplified subset of co-reference phenomena which are generally treated the same across different existing datasets.
This makes spotting inter-annotator disagreement easier - presumably because for simpler cases there are fewer modes of failure?


@ -1,68 +0,0 @@
---
date: '2022-11-23T19:56:25'
hypothesis-meta:
created: '2022-11-23T19:56:25.933796+00:00'
document:
title:
- 2210.07188.pdf
flagged: false
group: __world__
hidden: false
id: 6ayOcmtoEe2a2efc7jjUHQ
links:
html: https://hypothes.is/a/6ayOcmtoEe2a2efc7jjUHQ
incontext: https://hyp.is/6ayOcmtoEe2a2efc7jjUHQ/arxiv.org/pdf/2210.07188.pdf
json: https://hypothes.is/api/annotations/6ayOcmtoEe2a2efc7jjUHQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- coreference
- data-annotation
- NLProc
target:
- selector:
- end: 13153
start: 12521
type: TextPositionSelector
- exact: "Annotation structure: Two annotation ap-proaches are prominent in the\
\ literature: (1) a localpairwise approach, annotators are shown a pairof\
\ mentions and asked whether they refer to thesame entity (Hladk\xE1 et al.,\
\ 2009; Chamberlain et al.,2016a; Li et al., 2020; Ravenscroft et al., 2021),which\
\ is time-consuming; or (2) a cluster-basedapproach (Reiter, 2018; Oberle,\
\ 2018; Bornsteinet al., 2020), in which annotators group all men-tions of\
\ the same entity into a single cluster. InezCoref we use the latter approach,\
\ which can befaster but requires the UI to support more complexactions for\
\ creating and editing cluster structures."
prefix: n detection and AMT Integration.
suffix: 'User interface: We spent two yea'
type: TextQuoteSelector
source: https://arxiv.org/pdf/2210.07188.pdf
text: ezCoref presents clusters of coreferences all at the same time - this is a
nice efficient way to do annotation versus pairwise annotation (like we did for
CD^2CR)
updated: '2022-11-23T19:56:25.933796+00:00'
uri: https://arxiv.org/pdf/2210.07188.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- coreference
- data-annotation
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669233385
---
<blockquote>Annotation structure: Two annotation approaches are prominent in the literature: (1) a local pairwise approach, annotators are shown a pair of mentions and asked whether they refer to the same entity (Hladká et al., 2009; Chamberlain et al., 2016a; Li et al., 2020; Ravenscroft et al., 2021), which is time-consuming; or (2) a cluster-based approach (Reiter, 2018; Oberle, 2018; Bornstein et al., 2020), in which annotators group all mentions of the same entity into a single cluster. In ezCoref we use the latter approach, which can be faster but requires the UI to support more complex actions for creating and editing cluster structures.</blockquote>ezCoref presents clusters of coreferences all at the same time - this is a nice efficient way to do annotation versus pairwise annotation (like we did for CD^2CR)
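The speed difference is easy to see by expanding a cluster back into the pairwise judgments it encodes (a quick sketch of my own, not from the paper): an n-mention entity needs n(n-1)/2 pairwise decisions but only around n grouping actions.

```python
from itertools import combinations

def pairwise_links(clusters):
    """Expand cluster annotations into the coreferent mention pairs that a
    pairwise annotation scheme would elicit one judgment at a time."""
    return {frozenset(pair)
            for cluster in clusters
            for pair in combinations(sorted(cluster), 2)}
```

A single ten-mention cluster already encodes 45 pairwise links, so the cluster-based UI saves a lot of clicks at the cost of more complex grouping interactions.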


@ -1,62 +0,0 @@
---
date: '2022-11-23T20:01:42'
hypothesis-meta:
created: '2022-11-23T20:01:42.722732+00:00'
document:
title:
- 2210.07188.pdf
flagged: false
group: __world__
hidden: false
id: pn7SimtpEe2Rlr_TC_SBhg
links:
html: https://hypothes.is/a/pn7SimtpEe2Rlr_TC_SBhg
incontext: https://hyp.is/pn7SimtpEe2Rlr_TC_SBhg/arxiv.org/pdf/2210.07188.pdf
json: https://hypothes.is/api/annotations/pn7SimtpEe2Rlr_TC_SBhg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- coreference
- data-annotation
- NLProc
target:
- selector:
- end: 20125
start: 19879
type: TextPositionSelector
- exact: 'Procedure: We first launch an annotation tutorial(paid $4.50) and recruit
the annotators on the AMTplatform.9 At the end of the tutorial, each annotatoris
asked to annotate a short passage (around 150words). Only annotators with
a B3 score (Bagga'
prefix: asure inter-annotator agreement.
suffix: 8The PreCo dataset is interestin
type: TextQuoteSelector
source: https://arxiv.org/pdf/2210.07188.pdf
text: Annotators are asked to complete a quality control exercise and only annotators
who achieve a B3 score of 0.9 or higher are invited to do more annotation
updated: '2022-11-23T20:01:42.722732+00:00'
uri: https://arxiv.org/pdf/2210.07188.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- coreference
- data-annotation
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669233702
---
<blockquote>Procedure: We first launch an annotation tutorial (paid $4.50) and recruit the annotators on the AMT platform. At the end of the tutorial, each annotator is asked to annotate a short passage (around 150 words). Only annotators with a B3 score (Bagga</blockquote>Annotators are asked to complete a quality control exercise and only annotators who achieve a B3 score of 0.9 or higher are invited to do more annotation
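B3 (B-cubed) scores each mention by the overlap between the system cluster and the gold cluster containing it, then averages over mentions. A minimal sketch (the function name and interface are mine, not the paper's):

```python
def b_cubed(gold_clusters, system_clusters):
    """Return B-cubed (precision, recall, F1) for two clusterings of the
    same mentions, each given as a list of sets of mention identifiers."""
    gold_of = {m: frozenset(c) for c in gold_clusters for m in c}
    sys_of = {m: frozenset(c) for c in system_clusters for m in c}
    mentions = list(gold_of)
    # per-mention precision: fraction of the system cluster that is correct
    precision = sum(len(gold_of[m] & sys_of[m]) / len(sys_of[m])
                    for m in mentions) / len(mentions)
    # per-mention recall: fraction of the gold cluster that was recovered
    recall = sum(len(gold_of[m] & sys_of[m]) / len(gold_of[m])
                 for m in mentions) / len(mentions)
    return precision, recall, 2 * precision * recall / (precision + recall)
```

Under this scoring, merging two gold clusters into one hurts precision but leaves recall at 1.0, which is why a 0.9 threshold is a meaningful bar for annotator quality.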


@ -1,62 +0,0 @@
---
date: '2022-11-23T20:12:31'
hypothesis-meta:
created: '2022-11-23T20:12:31.341810+00:00'
document:
title:
- 2210.07188.pdf
flagged: false
group: __world__
hidden: false
id: KRvuAmtrEe26TOOrc3o_zA
links:
html: https://hypothes.is/a/KRvuAmtrEe26TOOrc3o_zA
incontext: https://hyp.is/KRvuAmtrEe26TOOrc3o_zA/arxiv.org/pdf/2210.07188.pdf
json: https://hypothes.is/api/annotations/KRvuAmtrEe26TOOrc3o_zA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- data-annotation
- coreference
- NLProc
target:
- selector:
- end: 26459
start: 26292
type: TextPositionSelector
- exact: 'an algorithm with high precision on LitBank orOntoNotes would miss a
huge percentage of rele-vant mentions and entities on other datasets (con-straining
our analysis) '
prefix: re mentions of differentlengths.
suffix: and when annotating newtexts and
type: TextQuoteSelector
source: https://arxiv.org/pdf/2210.07188.pdf
text: these datasets have the most limited/constrained definitions for co-reference
and what should be marked up so it makes sense that precision is poor in these
datasets
updated: '2022-11-23T20:12:31.341810+00:00'
uri: https://arxiv.org/pdf/2210.07188.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- data-annotation
- coreference
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669234351
---
<blockquote>an algorithm with high precision on LitBank or OntoNotes would miss a huge percentage of relevant mentions and entities on other datasets (constraining our analysis)</blockquote>These datasets have the most limited/constrained definitions for co-reference and what should be marked up, so it makes sense that precision is poor in these datasets

View File

@ -1,62 +0,0 @@
---
date: '2022-11-23T20:18:21'
hypothesis-meta:
created: '2022-11-23T20:18:21.503899+00:00'
document:
title:
- 2210.07188.pdf
flagged: false
group: __world__
hidden: false
id: -dKc5GtrEe2QDyN0zg00rw
links:
html: https://hypothes.is/a/-dKc5GtrEe2QDyN0zg00rw
incontext: https://hyp.is/-dKc5GtrEe2QDyN0zg00rw/arxiv.org/pdf/2210.07188.pdf
json: https://hypothes.is/api/annotations/-dKc5GtrEe2QDyN0zg00rw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- data-annotation
- coreference
- NLProc
target:
- selector:
- end: 28783
start: 28631
type: TextPositionSelector
- exact: 'Our annotators achieve thehighest precision with OntoNotes, suggesting
thatmost of the entities identified by crowdworkers arecorrect for this dataset. '
prefix: 'ntoNotes, GUM, Lit-Bank, ARRAU: '
suffix: In terms of F1 scores, thedatase
type: TextQuoteSelector
source: https://arxiv.org/pdf/2210.07188.pdf
text: interesting that the mention detection algorithm gives poor precision on OntoNotes
and the annotators get high precision. Does this imply that there are a lot of
invalid mentions in this data and the guidelines for ontonotes are correct to
ignore generic pronouns without pronominals?
updated: '2022-11-23T20:18:21.503899+00:00'
uri: https://arxiv.org/pdf/2210.07188.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2210.07188.pdf
tags:
- data-annotation
- coreference
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669234701
---
<blockquote>Our annotators achieve the highest precision with OntoNotes, suggesting that most of the entities identified by crowdworkers are correct for this dataset. </blockquote>Interesting that the mention detection algorithm gives poor precision on OntoNotes while the annotators get high precision. Does this imply that there are a lot of invalid mentions in this data and that the OntoNotes guidelines are correct to ignore generic pronouns without pronominals?

View File

@ -1,72 +0,0 @@
---
date: '2022-11-23T20:47:05'
hypothesis-meta:
created: '2022-11-23T20:47:05.414293+00:00'
document:
title:
- 'Towards Automatic Curation of Antibiotic Resistance Genes via Statement Extraction
from Scientific Papers: A Benchmark Dataset and Models'
flagged: false
group: __world__
hidden: false
id: _Vj2omtvEe2z-rfNY4eZiw
links:
html: https://hypothes.is/a/_Vj2omtvEe2z-rfNY4eZiw
incontext: https://hyp.is/_Vj2omtvEe2z-rfNY4eZiw/aclanthology.org/2022.bionlp-1.40.pdf
json: https://hypothes.is/api/annotations/_Vj2omtvEe2z-rfNY4eZiw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- prompt-models
- NLProc
target:
- selector:
- end: 1532
start: 444
type: TextPositionSelector
- exact: "Antibiotic resistance has become a growingworldwide concern as new resistance\
\ mech-anisms are emerging and spreading globally,and thus detecting and collecting\
\ the cause\u2013 Antibiotic Resistance Genes (ARGs), havebeen more critical\
\ than ever. In this work,we aim to automate the curation of ARGs byextracting\
\ ARG-related assertive statementsfrom scientific papers. To support the researchtowards\
\ this direction, we build SCIARG, anew benchmark dataset containing 2,000\
\ man-ually annotated statements as the evaluationset and 12,516 silver-standard\
\ training state-ments that are automatically created from sci-entific papers\
\ by a set of rules. To set upthe baseline performance on SCIARG, weexploit\
\ three state-of-the-art neural architec-tures based on pre-trained language\
\ modelsand prompt tuning, and further ensemble themto attain the highest\
\ 77.0% F-score. To the bestof our knowledge, we are the first to leveragenatural\
\ language processing techniques to cu-rate all validated ARGs from scientific\
\ papers.Both the code and data are publicly availableat https://github.com/VT-NLP/SciARG."
prefix: g,clb21565,lifuh}@vt.eduAbstract
suffix: 1 IntroductionAntibiotic resista
type: TextQuoteSelector
source: https://aclanthology.org/2022.bionlp-1.40.pdf
text: The authors use prompt training on LLMs to build a classifier that can identify
statements that describe whether or not micro-organisms have antibiotic resistant
genes in scientific papers.
updated: '2022-11-23T20:47:05.414293+00:00'
uri: https://aclanthology.org/2022.bionlp-1.40.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://aclanthology.org/2022.bionlp-1.40.pdf
tags:
- prompt-models
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669236425
---
<blockquote>Antibiotic resistance has become a growing worldwide concern as new resistance mechanisms are emerging and spreading globally, and thus detecting and collecting the cause – Antibiotic Resistance Genes (ARGs), have been more critical than ever. In this work, we aim to automate the curation of ARGs by extracting ARG-related assertive statements from scientific papers. To support the research towards this direction, we build SCIARG, a new benchmark dataset containing 2,000 manually annotated statements as the evaluation set and 12,516 silver-standard training statements that are automatically created from scientific papers by a set of rules. To set up the baseline performance on SCIARG, we exploit three state-of-the-art neural architectures based on pre-trained language models and prompt tuning, and further ensemble them to attain the highest 77.0% F-score. To the best of our knowledge, we are the first to leverage natural language processing techniques to curate all validated ARGs from scientific papers. Both the code and data are publicly available at https://github.com/VT-NLP/SciARG.</blockquote>The authors use prompt training on LLMs to build a classifier that can identify statements that describe whether or not micro-organisms have antibiotic resistant genes in scientific papers.

View File

@ -1,64 +0,0 @@
---
date: '2022-11-23T20:50:17'
hypothesis-meta:
created: '2022-11-23T20:50:17.668925+00:00'
document:
title:
- 2022.naacl-main.167.pdf
flagged: false
group: __world__
hidden: false
id: b_EbpGtwEe2m8tfhSKM2EQ
links:
html: https://hypothes.is/a/b_EbpGtwEe2m8tfhSKM2EQ
incontext: https://hyp.is/b_EbpGtwEe2m8tfhSKM2EQ/aclanthology.org/2022.naacl-main.167.pdf
json: https://hypothes.is/api/annotations/b_EbpGtwEe2m8tfhSKM2EQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- prompt-models
- NLProc
target:
- selector:
- end: 2221
start: 1677
type: TextPositionSelector
- exact: "Suppose a human is given two sentences: \u201CNoweapons of mass destruction\
\ found in Iraq yet.\u201Dand \u201CWeapons of mass destruction found in Iraq.\u201D\
They are then asked to respond 0 or 1 and receive areward if they are correct.\
\ In this setup, they wouldlikely need a large number of trials and errors\
\ be-fore figuring out what they are really being re-warded to do. This setup\
\ is akin to the pretrain-and-fine-tune setup which has dominated NLP in recentyears,\
\ in which models are asked to classify a sen-tence representation (e.g.,\
\ a CLS token) into some"
prefix: task instructions.1 Introduction
suffix: "\u2217Unabridged version available on"
type: TextQuoteSelector
source: https://aclanthology.org/2022.naacl-main.167.pdf
text: This is a really excellent illustration of the difference in paradigm between
"normal" text model fine tuning and prompt-based modelling
updated: '2022-11-23T20:50:17.668925+00:00'
uri: https://aclanthology.org/2022.naacl-main.167.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://aclanthology.org/2022.naacl-main.167.pdf
tags:
- prompt-models
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669236617
---
<blockquote>Suppose a human is given two sentences: “No weapons of mass destruction found in Iraq yet.” and “Weapons of mass destruction found in Iraq.” They are then asked to respond 0 or 1 and receive a reward if they are correct. In this setup, they would likely need a large number of trials and errors before figuring out what they are really being rewarded to do. This setup is akin to the pretrain-and-fine-tune setup which has dominated NLP in recent years, in which models are asked to classify a sentence representation (e.g., a CLS token) into some</blockquote>This is a really excellent illustration of the difference in paradigm between "normal" text model fine tuning and prompt-based modelling.

View File

@ -1,61 +0,0 @@
---
date: '2022-11-23T20:52:10'
hypothesis-meta:
created: '2022-11-23T20:52:10.292273+00:00'
document:
title:
- 2022.naacl-main.167.pdf
flagged: false
group: __world__
hidden: false
id: sxEWFGtwEe2_zFc3H2nb2Q
links:
html: https://hypothes.is/a/sxEWFGtwEe2_zFc3H2nb2Q
incontext: https://hyp.is/sxEWFGtwEe2_zFc3H2nb2Q/aclanthology.org/2022.naacl-main.167.pdf
json: https://hypothes.is/api/annotations/sxEWFGtwEe2_zFc3H2nb2Q
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- prompt-models
- NLProc
target:
- selector:
- end: 1663
start: 1398
type: TextPositionSelector
- exact: "Insum, notwithstanding prompt-based models\u2019impressive improvement,\
\ we find evidence ofserious limitations that question the degree towhich\
\ such improvement is derived from mod-els understanding task instructions\
\ in waysanalogous to humans\u2019 use of task instructions."
prefix: 'ing prompts even at zero shots. '
suffix: 1 IntroductionSuppose a human is
type: TextQuoteSelector
source: https://aclanthology.org/2022.naacl-main.167.pdf
text: although prompts seem to help NLP models improve their performance, the authors
find that this performance is still present even when prompts are deliberately
misleading which is a bit weird
updated: '2022-11-23T20:52:10.292273+00:00'
uri: https://aclanthology.org/2022.naacl-main.167.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://aclanthology.org/2022.naacl-main.167.pdf
tags:
- prompt-models
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669236730
---
<blockquote>In sum, notwithstanding prompt-based models’ impressive improvement, we find evidence of serious limitations that question the degree to which such improvement is derived from models understanding task instructions in ways analogous to humans’ use of task instructions.</blockquote>Although prompts seem to help NLP models improve their performance, the authors find that this performance is still present even when prompts are deliberately misleading, which is a bit weird.

View File

@ -1,68 +0,0 @@
---
date: '2022-11-23T20:55:44'
hypothesis-meta:
created: '2022-11-23T20:55:44.414977+00:00'
document:
title:
- 2022.naacl-main.167.pdf
flagged: false
group: __world__
hidden: false
id: MrGLumtxEe21b1OADBLmyg
links:
html: https://hypothes.is/a/MrGLumtxEe21b1OADBLmyg
incontext: https://hyp.is/MrGLumtxEe21b1OADBLmyg/aclanthology.org/2022.naacl-main.167.pdf
json: https://hypothes.is/api/annotations/MrGLumtxEe21b1OADBLmyg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- prompt-models
- NLProc
target:
- selector:
- end: 20146
start: 19539
type: TextPositionSelector
- exact: Misleading Templates There is no consistent re-lation between the performance
of models trainedwith templates that are moderately misleading (e.g.{premise}
Can that be paraphrasedas "{hypothesis}"?) vs. templates that areextremely
misleading (e.g., {premise} Isthis a sports news? {hypothesis}).T0 (both 3B
and 11B) perform better givenmisleading-moderate (Figure 3), ALBERT andT5
3B perform better given misleading-extreme(Appendices E and G.4), whereas
T5 11B andGPT-3 perform comparably on both sets (Figure 2;also see Table 2
for a summary of statisticalsignificances.) Despite a lack of pattern between
prefix: structiveand misleading-extreme.
suffix: 4 8 16 32 64 128 2560.50.550.60.
type: TextQuoteSelector
source: https://aclanthology.org/2022.naacl-main.167.pdf
text: "Their misleading templates really are misleading \n\n{premise} Can that be\
\ paraphrased as \"{hypothesis}\" \n\n{premise} Is this a sports news? {hypothesis}"
updated: '2022-11-23T20:55:44.414977+00:00'
uri: https://aclanthology.org/2022.naacl-main.167.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://aclanthology.org/2022.naacl-main.167.pdf
tags:
- prompt-models
- NLProc
- hypothesis
type: annotation
url: /annotation/2022/11/23/1669236944
---
<blockquote>Misleading Templates There is no consistent relation between the performance of models trained with templates that are moderately misleading (e.g. {premise} Can that be paraphrased as "{hypothesis}"?) vs. templates that are extremely misleading (e.g., {premise} Is this a sports news? {hypothesis}). T0 (both 3B and 11B) perform better given misleading-moderate (Figure 3), ALBERT and T5 3B perform better given misleading-extreme (Appendices E and G.4), whereas T5 11B and GPT-3 perform comparably on both sets (Figure 2; also see Table 2 for a summary of statistical significances.) Despite a lack of pattern between</blockquote>Their misleading templates really are misleading:
{premise} Can that be paraphrased as "{hypothesis}"
{premise} Is this a sports news? {hypothesis}

View File

@ -1,70 +0,0 @@
---
date: '2022-11-25T21:24:12'
hypothesis-meta:
created: '2022-11-25T21:24:12.642368+00:00'
document:
title:
- The Pattern Language of Project Xanadu
flagged: false
group: __world__
hidden: false
id: gcBATG0HEe2JruvqpY0-Jg
links:
html: https://hypothes.is/a/gcBATG0HEe2JruvqpY0-Jg
incontext: https://hyp.is/gcBATG0HEe2JruvqpY0-Jg/maggieappleton.com/xanadu-patterns
json: https://hypothes.is/api/annotations/gcBATG0HEe2JruvqpY0-Jg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- design patterns
target:
- selector:
- endContainer: /div[1]/container[1]/main[1]/article[1]/p[14]
endOffset: 535
startContainer: /div[1]/container[1]/main[1]/article[1]/p[14]
startOffset: 0
type: RangeSelector
- end: 4993
start: 4458
type: TextPositionSelector
- exact: For example, the design pattern A Place to Wait asks that we create comfortable
accommodation and ambient activity whenever someone needs to wait; benches,
cafes, reading rooms, miniature playgrounds, three-reel slot machines (if
we happen to be in the Las Vegas airport). This solves the problem of huddles
of people awkwardly hovering in liminal space; near doorways, taking up sidewalks,
anxiously waiting for delayed flights or dental operations or immigration
investigations without anything to distract them from uncertain fates.
prefix: 'ly) taken on a life of its own.
'
suffix: '
Others like Light on Two Sides '
type: TextQuoteSelector
source: https://maggieappleton.com/xanadu-patterns
text: 'Amazing to think how ubiquitous waiting rooms are and how we take them for
granted '
updated: '2022-11-25T21:24:12.642368+00:00'
uri: https://maggieappleton.com/xanadu-patterns
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://maggieappleton.com/xanadu-patterns
tags:
- design patterns
- hypothesis
type: annotation
url: /annotations/2022/11/25/1669411452
---
<blockquote>For example, the design pattern A Place to Wait asks that we create comfortable accommodation and ambient activity whenever someone needs to wait; benches, cafes, reading rooms, miniature playgrounds, three-reel slot machines (if we happen to be in the Las Vegas airport). This solves the problem of huddles of people awkwardly hovering in liminal space; near doorways, taking up sidewalks, anxiously waiting for delayed flights or dental operations or immigration investigations without anything to distract them from uncertain fates.</blockquote>Amazing to think how ubiquitous waiting rooms are and how we take them for granted

View File

@ -1,74 +0,0 @@
---
date: '2022-11-25T22:11:43'
hypothesis-meta:
created: '2022-11-25T22:11:43.927502+00:00'
document:
title:
- "Mastodon Timeline Fatigue While AP and Governance Energy \u2013 Interdependent\
\ Thoughts"
flagged: false
group: __world__
hidden: false
id: JTwsbG0OEe2GoW-5VPl3tA
links:
html: https://hypothes.is/a/JTwsbG0OEe2GoW-5VPl3tA
incontext: https://hyp.is/JTwsbG0OEe2GoW-5VPl3tA/www.zylstra.org/blog/2022/11/mastodon-timeline-fatigue-while-ap-and-governance-energy/
json: https://hypothes.is/api/annotations/JTwsbG0OEe2GoW-5VPl3tA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- Activitypub
- Indieweb
target:
- selector:
- endContainer: /div[1]/div[1]/section[1]/main[1]/article[1]/div[2]/p[5]
endOffset: 444
startContainer: /div[1]/div[1]/section[1]/main[1]/article[1]/div[2]/p[5]
startOffset: 0
type: RangeSelector
- end: 3603
start: 3159
type: TextPositionSelector
- exact: "First, to experiment personally with AP itself, and if possible with\
\ the less known Activities that AP could support, e.g. travel and check-ins.\
\ This as an extension of my personal site in areas that WordPress, OPML and\
\ RSS currently can\u2019t provide to me. This increases my own agency, by\
\ adding affordances to my site. This in time may mean I won\u2019t be hosting\
\ or self-hosting my personal Mastodon instance. (See my current fediverse\
\ activities)"
prefix: 'than I was before this started.
'
suffix: '
Second, to volunteer for govern'
type: TextQuoteSelector
source: https://www.zylstra.org/blog/2022/11/mastodon-timeline-fatigue-while-ap-and-governance-energy/
text: 'Interesting for me to explore and understand too. How does AP compare to
micropub which can be used for similar purposes? As far as I can tell it is much
more heavyweight '
updated: '2022-11-25T22:11:43.927502+00:00'
uri: https://www.zylstra.org/blog/2022/11/mastodon-timeline-fatigue-while-ap-and-governance-energy/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.zylstra.org/blog/2022/11/mastodon-timeline-fatigue-while-ap-and-governance-energy/
tags:
- Activitypub
- Indieweb
- hypothesis
type: annotation
url: /annotations/2022/11/25/1669414303
---
<blockquote>First, to experiment personally with AP itself, and if possible with the less known Activities that AP could support, e.g. travel and check-ins. This as an extension of my personal site in areas that WordPress, OPML and RSS currently can’t provide to me. This increases my own agency, by adding affordances to my site. This in time may mean I won’t be hosting or self-hosting my personal Mastodon instance. (See my current fediverse activities)</blockquote>Interesting for me to explore and understand too. How does AP compare to micropub, which can be used for similar purposes? As far as I can tell it is much more heavyweight.

View File

@ -1,63 +0,0 @@
---
date: '2022-11-26T09:20:24'
hypothesis-meta:
created: '2022-11-26T09:20:24.106910+00:00'
document:
title:
- The IndieWeb for Everyone
flagged: false
group: __world__
hidden: false
id: jrrh0G1rEe2SI2OlykvzjQ
links:
html: https://hypothes.is/a/jrrh0G1rEe2SI2OlykvzjQ
incontext: https://hyp.is/jrrh0G1rEe2SI2OlykvzjQ/mxb.dev/blog/the-indieweb-for-everyone/
json: https://hypothes.is/api/annotations/jrrh0G1rEe2SI2OlykvzjQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- indieweb
- micropub
target:
- selector:
- endContainer: /div[2]/main[1]/article[1]/div[1]/p[22]
endOffset: 147
startContainer: /div[2]/main[1]/article[1]/div[1]/p[22]
startOffset: 0
type: RangeSelector
- end: 4298
start: 4151
type: TextPositionSelector
- exact: I love the IndieWeb and its tools, but it has always bothered me that
at some point they basically require you to have a webdevelopment background.
prefix: higher its barrier for adoption.
suffix: How many of your non-tech friend
type: TextQuoteSelector
source: https://mxb.dev/blog/the-indieweb-for-everyone/
text: Yeah this is definitely a concern and a major barrier for adoption at the
moment.
updated: '2022-11-26T09:20:24.106910+00:00'
uri: https://mxb.dev/blog/the-indieweb-for-everyone/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://mxb.dev/blog/the-indieweb-for-everyone/
tags:
- indieweb
- micropub
- hypothesis
type: annotation
url: /annotations/2022/11/26/1669454424
---
<blockquote>I love the IndieWeb and its tools, but it has always bothered me that at some point they basically require you to have a webdevelopment background.</blockquote>Yeah this is definitely a concern and a major barrier for adoption at the moment.

View File

@ -1,66 +0,0 @@
---
date: '2022-11-26T09:22:48'
hypothesis-meta:
created: '2022-11-26T09:22:48.255100+00:00'
document:
title:
- "Mastodon Timeline Fatigue While AP and Governance Energy \u2013 Interdependent\
\ Thoughts"
flagged: false
group: __world__
hidden: false
id: 5JwdbG1rEe2T86uE40VZ0Q
links:
html: https://hypothes.is/a/5JwdbG1rEe2T86uE40VZ0Q
incontext: https://hyp.is/JTwsbG0OEe2GoW-5VPl3tA/www.zylstra.org/blog/2022/11/mastodon-timeline-fatigue-while-ap-and-governance-energy/
json: https://hypothes.is/api/annotations/5JwdbG1rEe2T86uE40VZ0Q
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
references:
- JTwsbG0OEe2GoW-5VPl3tA
- z7Qrlm1nEe2yDGOpn1MgTQ
tags:
- activitypub
- indieweb
target:
- source: https://www.zylstra.org/blog/2022/11/mastodon-timeline-fatigue-while-ap-and-governance-energy/
text: "Yeah totally! In my opinion AP is a really nicely designed protocol that\
\ does what it is designed to do very well. I've been playing with indieweb tech\
\ (micropub/sub/webmentions etc) to get a similar suite of behaviour but I guess\
\ these technologies are even less widely supported than AP and of course there\
\ are more moving parts to configure. Essentially, given that [you basically have\
\ to be a web developer to use them](https://hyp.is/jrrh0G1rEe2SI2OlykvzjQ/mxb.dev/blog/the-indieweb-for-everyone/),\
\ they're probably not gonna see mass adoption outside of webdev/programmer circles\
\ I think (maybe \"mass adoption\" isn't what we want but it'd be nice to see\
\ more niche bloggers with other interests getting involved I guess). \n\nTo be\
\ honest I am just curious to see what an AP implementation would look like for\
\ my current web setup and how it would compare in terms of a) code complexity\
\ and b) performance/compute intensity - might make for a fun weekend project\
\ and blog post!"
updated: '2022-11-26T09:29:33.929864+00:00'
uri: https://www.zylstra.org/blog/2022/11/mastodon-timeline-fatigue-while-ap-and-governance-energy/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.zylstra.org/blog/2022/11/mastodon-timeline-fatigue-while-ap-and-governance-energy/
tags:
- activitypub
- indieweb
- hypothesis
type: annotation
url: /annotations/2022/11/26/1669454568
---
Yeah totally! In my opinion AP is a really nicely designed protocol that does what it is designed to do very well. I've been playing with indieweb tech (micropub/sub/webmentions etc) to get a similar suite of behaviour but I guess these technologies are even less widely supported than AP and of course there are more moving parts to configure. Essentially, given that [you basically have to be a web developer to use them](https://hyp.is/jrrh0G1rEe2SI2OlykvzjQ/mxb.dev/blog/the-indieweb-for-everyone/), they're probably not gonna see mass adoption outside of webdev/programmer circles I think (maybe "mass adoption" isn't what we want but it'd be nice to see more niche bloggers with other interests getting involved I guess).
To be honest I am just curious to see what an AP implementation would look like for my current web setup and how it would compare in terms of a) code complexity and b) performance/compute intensity - might make for a fun weekend project and blog post!

View File

@ -1,65 +0,0 @@
---
date: '2022-11-26T20:14:15'
hypothesis-meta:
created: '2022-11-26T20:14:15.917077+00:00'
document:
title:
- 'Learn In Public: The fastest way to learn'
flagged: false
group: __world__
hidden: false
id: 5rDQdG3GEe2P6Ifsw_uZ3g
links:
html: https://hypothes.is/a/5rDQdG3GEe2P6Ifsw_uZ3g
incontext: https://hyp.is/5rDQdG3GEe2P6Ifsw_uZ3g/www.swyx.io/learn-in-public
json: https://hypothes.is/api/annotations/5rDQdG3GEe2P6Ifsw_uZ3g
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- pkm
- learning-in-public
target:
- selector:
- endContainer: /div[1]/main[1]/article[1]/div[3]/p[2]
endOffset: 296
startContainer: /div[1]/main[1]/article[1]/div[3]/p[2]/strong[1]
startOffset: 0
type: RangeSelector
- end: 741
start: 704
type: TextPositionSelector
- exact: 'a habit of creating learning exhaust:'
prefix: 'le. What you do here is to have '
suffix: '
Write blogs and tutorials and '
type: TextQuoteSelector
source: https://www.swyx.io/learn-in-public
text: not sure I love the metaphor but I can definitely see the advantages of leaving
your learnings "out there" for others to see and benefit from
updated: '2022-11-26T20:14:15.917077+00:00'
uri: https://www.swyx.io/learn-in-public
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.swyx.io/learn-in-public
tags:
- pkm
- learning-in-public
- hypothesis
type: annotation
url: /annotations/2022/11/26/1669493655
---
<blockquote>a habit of creating learning exhaust:</blockquote>not sure I love the metaphor but I can definitely see the advantages of leaving your learnings "out there" for others to see and benefit from

View File

@ -1,53 +0,0 @@
---
date: '2022-11-26T21:44:24'
hypothesis-meta:
created: '2022-11-26T21:44:24.512171+00:00'
document:
title:
- "10 Thoughts After 100 Annotations in Hypothes.is \u2013 Interdependent Thoughts"
flagged: false
group: __world__
hidden: false
id: fnDuTG3TEe2QFZ9V6pBC5w
links:
html: https://hypothes.is/a/fnDuTG3TEe2QFZ9V6pBC5w
incontext: https://hyp.is/LZ9t5DSBEe23Vb_TYZMNlg/www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
json: https://hypothes.is/api/annotations/fnDuTG3TEe2QFZ9V6pBC5w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
references:
- LZ9t5DSBEe23Vb_TYZMNlg
tags:
- pkm
- learning-in-public
target:
- source: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
text: I've started using [archivebox](https://archivebox.io/) to take copies of
pages I want to archive. The h plugin works well with it but you lose the social
side of things when you annotate docs in your personal archive. It's even less
likely you'll encounter others there!
updated: '2022-11-26T21:44:24.512171+00:00'
uri: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
tags:
- pkm
- learning-in-public
- hypothesis
type: annotation
url: /annotations/2022/11/26/1669499064
---
I've started using [archivebox](https://archivebox.io/) to take copies of pages I want to archive. The h plugin works well with it but you lose the social side of things when you annotate docs in your personal archive. It's even less likely you'll encounter others there!

View File

@ -1,72 +0,0 @@
---
date: '2022-11-26T21:46:51'
hypothesis-meta:
created: '2022-11-26T21:46:51.818278+00:00'
document:
title:
- "10 Thoughts After 100 Annotations in Hypothes.is \u2013 Interdependent Thoughts"
flagged: false
group: __world__
hidden: false
id: 1kMnKm3TEe2VL_9cyVUnUg
links:
html: https://hypothes.is/a/1kMnKm3TEe2VL_9cyVUnUg
incontext: https://hyp.is/1kMnKm3TEe2VL_9cyVUnUg/www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
json: https://hypothes.is/api/annotations/1kMnKm3TEe2VL_9cyVUnUg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ideas
- pkm
- learning-in-public
target:
- selector:
- endContainer: /div[1]/div[1]/section[1]/main[1]/article[1]/div[2]/ol[1]/li[8]
endOffset: 415
startContainer: /div[1]/div[1]/section[1]/main[1]/article[1]/div[2]/ol[1]/li[8]
startOffset: 0
type: RangeSelector
- end: 7296
start: 6881
type: TextPositionSelector
- exact: "In the same category of integrating h. into my pkm workflows, falls\
\ the interaction between h. and Zotero, especially now that Zotero has its\
\ own storage of annotations of PDFs in my library. It might be of interest\
\ to be able to share those annotations, for a more complete overview of what\
\ I\u2019m annotating. Either directly from Zotero, or by way of my notes\
\ in Obsidian (Zotero annotatins end up there in the end)"
prefix: " I pull in to my local system. \n"
suffix: '
These first 100 annotations I m'
type: TextQuoteSelector
source: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
text: I've been thinking about this exact same flow. Given that I'm mostly annotating
scientific papers I got from open access journals I was wondering whether there
might be some way to syndicate my zotero annotations back to h via a script.
updated: '2022-11-26T21:46:51.818278+00:00'
uri: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
tags:
- ideas
- pkm
- learning-in-public
- hypothesis
type: annotation
url: /annotations/2022/11/26/1669499211
---
<blockquote>In the same category of integrating h. into my pkm workflows, falls the interaction between h. and Zotero, especially now that Zotero has its own storage of annotations of PDFs in my library. It might be of interest to be able to share those annotations, for a more complete overview of what I’m annotating. Either directly from Zotero, or by way of my notes in Obsidian (Zotero annotations end up there in the end)</blockquote>I've been thinking about this exact same flow. Given that I'm mostly annotating scientific papers I got from open access journals, I was wondering whether there might be some way to syndicate my Zotero annotations back to h via a script.
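The syndication idea above could be sketched against the Hypothes.is annotation-creation endpoint (`POST /api/annotations`, authenticated with a developer API token). This is a minimal, untested sketch: the Zotero-side field names (`annotationComment`, `annotationText`) are assumptions about Zotero's annotation data model, not a verified integration.

```python
import json
import urllib.request

API_ROOT = "https://api.hypothes.is/api"


def to_hypothesis_payload(zotero_note: dict, source_uri: str) -> dict:
    """Map a Zotero-style annotation dict onto a Hypothes.is annotation body.

    Field names on the Zotero side are assumed, not verified.
    """
    return {
        "uri": source_uri,
        "text": zotero_note.get("annotationComment", ""),
        "tags": zotero_note.get("tags", []),
        "target": [
            {
                "source": source_uri,
                "selector": [
                    {
                        "type": "TextQuoteSelector",
                        "exact": zotero_note.get("annotationText", ""),
                    }
                ],
            }
        ],
    }


def post_annotation(payload: dict, api_token: str) -> None:
    """Send one annotation to Hypothes.is (requires a developer API token)."""
    req = urllib.request.Request(
        f"{API_ROOT}/annotations",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_token}",
            "Content-Type": "application/json",
        },
    )
    urllib.request.urlopen(req)
```

A real script would also need to de-duplicate (e.g. by checking for an existing annotation on the same quote) so re-runs don't create copies.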

View File

@ -1,69 +0,0 @@
---
date: '2022-11-26T22:27:31'
hypothesis-meta:
created: '2022-11-26T22:27:31.224344+00:00'
document:
title:
- "10 Thoughts After 100 Annotations in Hypothes.is \u2013 Interdependent Thoughts"
flagged: false
group: __world__
hidden: false
id: hEcxuG3ZEe2TbIMGmu3CHQ
links:
html: https://hypothes.is/a/hEcxuG3ZEe2TbIMGmu3CHQ
incontext: https://hyp.is/hEcxuG3ZEe2TbIMGmu3CHQ/www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
json: https://hypothes.is/api/annotations/hEcxuG3ZEe2TbIMGmu3CHQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- pkm
- learning-in-public
target:
- selector:
- endContainer: /div[1]/div[1]/section[1]/main[1]/article[1]/div[2]/ol[1]/li[5]
endOffset: 386
startContainer: /div[1]/div[1]/section[1]/main[1]/article[1]/div[2]/ol[1]/li[5]
startOffset: 0
type: RangeSelector
- end: 4892
start: 4506
type: TextPositionSelector
- exact: "Annotations are the first step of getting useful insights into my notes.\
\ This makes it a prerequisite to be able to capture annotations in my note\
\ making tool Obsidian, otherwise Hypothes.is is just another silo you\u2019\
re wasting time on. Luckily h. isn\u2019t meant as a silo and has an API.\
\ Using the API and the Hypothes.is-to-Obsidian plugin all my annotations\
\ are available to me locally. "
prefix: 'be interested to hear about it.
'
suffix: 'However, what I do locally with '
type: TextQuoteSelector
source: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
text: This is key - exporting annotations via the API to either public commonplace
books (Chris A Style) or to a private knowledge store seems to be pretty common.
updated: '2022-11-26T22:27:31.224344+00:00'
uri: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.zylstra.org/blog/2022/09/10-thoughts-after-100-annotations-in-hypothes-is/
tags:
- pkm
- learning-in-public
- hypothesis
type: annotation
url: /annotations/2022/11/26/1669501651
---
<blockquote>Annotations are the first step of getting useful insights into my notes. This makes it a prerequisite to be able to capture annotations in my note making tool Obsidian, otherwise Hypothes.is is just another silo you're wasting time on. Luckily h. isn't meant as a silo and has an API. Using the API and the Hypothes.is-to-Obsidian plugin all my annotations are available to me locally.</blockquote>This is key - exporting annotations via the API to either public commonplace books (Chris A Style) or to a private knowledge store seems to be pretty common.
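The pull direction is easy to sketch too: the public Hypothes.is search API returns a user's annotations as JSON `rows`, which can then be rendered into commonplace-book style notes. A minimal sketch (the markdown shape below is my own choice, not what the Hypothes.is-to-Obsidian plugin produces):

```python
import json
import urllib.parse
import urllib.request

SEARCH_URL = "https://api.hypothes.is/api/search"

def fetch_annotations(user, limit=200):
    """Fetch public annotations for a user, e.g. 'acct:ravenscroftj@hypothes.is'."""
    query = urllib.parse.urlencode({"user": user, "limit": limit})
    with urllib.request.urlopen(f"{SEARCH_URL}?{query}") as resp:
        return json.load(resp)["rows"]

def to_markdown(row):
    """Render one annotation row as blockquote + comment + source link."""
    quote = " ".join(
        sel["exact"]
        for target in row.get("target", [])
        for sel in target.get("selector", [])
        if sel.get("type") == "TextQuoteSelector"
    )
    return f"> {quote}\n\n{row.get('text', '')}\n\n<{row['uri']}>\n"
```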

View File

@ -1,66 +0,0 @@
---
date: '2022-11-27T07:31:23'
hypothesis-meta:
created: '2022-11-27T07:31:23.014876+00:00'
document:
title:
- Choosing Nim out of a crowded market for systems programming languages - Nim
forum
flagged: false
group: __world__
hidden: false
id: flHUzm4lEe202OvHzorTWg
links:
html: https://hypothes.is/a/flHUzm4lEe202OvHzorTWg
incontext: https://hyp.is/flHUzm4lEe202OvHzorTWg/forum.nim-lang.org/t/9655
json: https://hypothes.is/api/annotations/flHUzm4lEe202OvHzorTWg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- tobuy
- books
- nimlang
target:
- selector:
- endContainer: /div[1]/section[1]/div[2]/div[1]/div[2]/div[2]/div[1]/div[1]/p[4]/span[1]
endOffset: 44
startContainer: /div[1]/section[1]/div[2]/div[1]/div[2]/div[2]/div[1]/div[1]/p[4]/span[1]
startOffset: 26
type: RangeSelector
- end: 1910
start: 1892
type: TextPositionSelector
- exact: Nim in Action book
prefix: 'ions.
I purchased the excellent '
suffix: ' when it first came out and wrot'
type: TextQuoteSelector
source: https://forum.nim-lang.org/t/9655
text: 'todo: procure this'
updated: '2022-11-27T07:33:28.384089+00:00'
uri: https://forum.nim-lang.org/t/9655
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://forum.nim-lang.org/t/9655
tags:
- tobuy
- books
- nimlang
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669534283
---
<blockquote>Nim in Action book</blockquote>todo: procure this

View File

@ -1,65 +0,0 @@
---
date: '2022-11-27T07:33:19'
hypothesis-meta:
created: '2022-11-27T07:33:19.517608+00:00'
document:
title:
- Choosing Nim out of a crowded market for systems programming languages - Nim
forum
flagged: false
group: __world__
hidden: false
id: w8IOjm4lEe2SEfPz5HdULg
links:
html: https://hypothes.is/a/w8IOjm4lEe2SEfPz5HdULg
incontext: https://hyp.is/w8IOjm4lEe2SEfPz5HdULg/forum.nim-lang.org/t/9655
json: https://hypothes.is/api/annotations/w8IOjm4lEe2SEfPz5HdULg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- nimlang
target:
- selector:
- endContainer: /div[1]/section[1]/div[2]/div[1]/div[2]/div[2]/div[1]/div[1]/p[5]/span[1]
endOffset: 459
startContainer: /div[1]/section[1]/div[2]/div[1]/div[2]/div[2]/div[1]/div[1]/p[5]/span[1]
startOffset: 245
type: RangeSelector
- end: 3114
start: 2900
type: TextPositionSelector
- exact: This isn't a highly scientific post full of esoteric details and language
feature matrices. It's about making the best choice for what I can be the
most productive in for my target market and product requirements.
prefix: 'ners up: Rust, Zig, and Dart. '
suffix: '
Some specific nits to get out o'
type: TextQuoteSelector
source: https://forum.nim-lang.org/t/9655
text: this post is more about the author's needs and requirements. It does not attempt
to be objective
updated: '2022-11-27T07:33:19.517608+00:00'
uri: https://forum.nim-lang.org/t/9655
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://forum.nim-lang.org/t/9655
tags:
- nimlang
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669534399
---
<blockquote>This isn't a highly scientific post full of esoteric details and language feature matrices. It's about making the best choice for what I can be the most productive in for my target market and product requirements.</blockquote>this post is more about the author's needs and requirements. It does not attempt to be objective

View File

@ -1,67 +0,0 @@
---
date: '2022-11-27T08:56:00'
hypothesis-meta:
created: '2022-11-27T08:56:00.887754+00:00'
document:
title:
- Josh Braun (@josh@sciences.social)
flagged: false
group: __world__
hidden: false
id: UP1vDm4xEe20MPtCpjCH3Q
links:
html: https://hypothes.is/a/UP1vDm4xEe20MPtCpjCH3Q
incontext: https://hyp.is/UP1vDm4xEe20MPtCpjCH3Q/sciences.social/@josh/109410562571794917
json: https://hypothes.is/api/annotations/UP1vDm4xEe20MPtCpjCH3Q
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- indieweb
- federation
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/div[2]/div[2]/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[2]/div[1]/p[1]
endOffset: 404
startContainer: /div[1]/div[1]/div[1]/div[2]/div[2]/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[2]/div[1]/p[1]
startOffset: 0
type: RangeSelector
- end: 1303
start: 899
type: TextPositionSelector
- exact: Matthew Hindman, in his book "The Internet Trap" <http://assets.press.princeton.edu/chapters/s13236.pdf>,
notes that most research on the internet has focused on its supposedly decentralized
nature, leaving us with little language to really grapple with the concentrated,
oligopolistic state of today's online economy, where the vast majority of
attention and revenue accrue to a tiny number of companies
prefix: "hread\u2026 1+ 18hJosh Braun @josh"
suffix: . 1/ 1 18hJosh Braun @joshThi
type: TextQuoteSelector
source: https://sciences.social/@josh/109410562571794917
text: This is a really nice summary - "the internet" is still talked about as if
it is still 1999 whereas in reality today's internet can be equated to "where
I consume services from FAANG" for most people
updated: '2022-11-27T08:56:00.887754+00:00'
uri: https://sciences.social/@josh/109410562571794917
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://sciences.social/@josh/109410562571794917
tags:
- indieweb
- federation
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669539360
---
<blockquote>Matthew Hindman, in his book "The Internet Trap" <http://assets.press.princeton.edu/chapters/s13236.pdf>, notes that most research on the internet has focused on its supposedly decentralized nature, leaving us with little language to really grapple with the concentrated, oligopolistic state of today's online economy, where the vast majority of attention and revenue accrue to a tiny number of companies</blockquote>This is a really nice summary - "the internet" is still talked about as if it is still 1999 whereas in reality today's internet can be equated to "where I consume services from FAANG" for most people

View File

@ -1,48 +0,0 @@
---
date: '2022-11-27T08:56:45'
hypothesis-meta:
created: '2022-11-27T08:56:45.491930+00:00'
document:
title:
- Josh Braun (@josh@sciences.social)
flagged: false
group: __world__
hidden: false
id: a4ojgG4xEe2SJ8N3_bOYVQ
links:
html: https://hypothes.is/a/a4ojgG4xEe2SJ8N3_bOYVQ
incontext: https://hyp.is/UP1vDm4xEe20MPtCpjCH3Q/sciences.social/@josh/109410562571794917
json: https://hypothes.is/api/annotations/a4ojgG4xEe2SJ8N3_bOYVQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
references:
- UP1vDm4xEe20MPtCpjCH3Q
tags:
- toread
target:
- source: https://sciences.social/@josh/109410562571794917
text: 'To read: The Internet Trap http://assets.press.princeton.edu/chapters/s13236.pdf'
updated: '2022-11-27T08:56:45.491930+00:00'
uri: https://sciences.social/@josh/109410562571794917
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://sciences.social/@josh/109410562571794917
tags:
- toread
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669539405
---
To read: The Internet Trap http://assets.press.princeton.edu/chapters/s13236.pdf

View File

@ -1,58 +0,0 @@
---
date: '2022-11-27T12:34:59'
hypothesis-meta:
created: '2022-11-27T12:34:59.316135+00:00'
document:
title:
- Kings+College+Report.pdf
flagged: false
group: __world__
hidden: false
id: 6BPsGm5PEe2stsftlk5_Lw
links:
html: https://hypothes.is/a/6BPsGm5PEe2stsftlk5_Lw
incontext: https://hyp.is/6BPsGm5PEe2stsftlk5_Lw/rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
json: https://hypothes.is/api/annotations/6BPsGm5PEe2stsftlk5_Lw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- scientometrics
- comprehensive impact
target:
- selector:
- end: 17993
start: 17828
type: TextPositionSelector
- exact: "The term \u2018impact\u2019 is currently used widely in research, especially\
\ with the inclusion ofnon-academic impact as part of the latest Research\
\ Excellence Framework (REF)"
prefix: ' the public, and final outcomes.'
suffix: .aa REF 2014 is a process for as
type: TextQuoteSelector
source: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
text: RF use similar definition of impact to that of REF
updated: '2022-11-27T12:34:59.316135+00:00'
uri: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
tags:
- scientometrics
- comprehensive impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669552499
---
<blockquote>The term 'impact' is currently used widely in research, especially with the inclusion of non-academic impact as part of the latest Research Excellence Framework (REF)</blockquote>RF uses a similar definition of impact to that of REF

View File

@ -1,60 +0,0 @@
---
date: '2022-11-27T12:46:13'
hypothesis-meta:
created: '2022-11-27T12:46:13.123111+00:00'
document:
title:
- 'Mapping the impact: Exploring the payback of arthritis research'
flagged: false
group: __world__
hidden: false
id: ebKibm5REe2ShxtULMIpFw
links:
html: https://hypothes.is/a/ebKibm5REe2ShxtULMIpFw
incontext: https://hyp.is/ebKibm5REe2ShxtULMIpFw/www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
json: https://hypothes.is/api/annotations/ebKibm5REe2ShxtULMIpFw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- scientometrics
- academic impact
target:
- selector:
- end: 46769
start: 46609
type: TextPositionSelector
- exact: Much broad and shallow evaluation is based onbibliometrics (examining
the quality of researchpublications) to assess the amount and quality ofknowledge
produced
prefix: ' from discovery to application).'
suffix: ". For example, David King\u2019sThe S"
type: TextQuoteSelector
source: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
text: here the authors are discussing the fact that a lot of analysis/evaluation
of research is done via bibliometrics (citation-based impact metrics) and they
consider this kind of evaluation to be "broad and shallow"
updated: '2022-11-27T12:46:13.123111+00:00'
uri: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
tags:
- scientometrics
- academic impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669553173
---
<blockquote>Much broad and shallow evaluation is based on bibliometrics (examining the quality of research publications) to assess the amount and quality of knowledge produced</blockquote>here the authors are discussing the fact that a lot of analysis/evaluation of research is done via bibliometrics (citation-based impact metrics) and they consider this kind of evaluation to be "broad and shallow"

View File

@ -1,58 +0,0 @@
---
date: '2022-11-27T12:47:21'
hypothesis-meta:
created: '2022-11-27T12:47:21.490579+00:00'
document:
title:
- 'Mapping the impact: Exploring the payback of arthritis research'
flagged: false
group: __world__
hidden: false
id: onHYKG5REe2spIueHnmPLQ
links:
html: https://hypothes.is/a/onHYKG5REe2spIueHnmPLQ
incontext: https://hyp.is/onHYKG5REe2spIueHnmPLQ/www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
json: https://hypothes.is/api/annotations/onHYKG5REe2spIueHnmPLQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- scientometrics
- academic impact
target:
- selector:
- end: 47058
start: 46908
type: TextPositionSelector
- exact: 'However, knowledge production isnormally only an intermediate aim: the
ultimateobjective of most medical research is to improvehealth and prosperity. '
prefix: 'h that of other majoreconomies. '
suffix: Another approach to broad19 OPSI
type: TextQuoteSelector
source: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
text: Exactly! Measuring citation counts doesn't help us understand whether research
actually helped people
updated: '2022-11-27T12:47:21.490579+00:00'
uri: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
tags:
- scientometrics
- academic impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669553241
---
<blockquote>However, knowledge production is normally only an intermediate aim: the ultimate objective of most medical research is to improve health and prosperity.</blockquote>Exactly! Measuring citation counts doesn't help us understand whether research actually helped people

View File

@ -1,61 +0,0 @@
---
date: '2022-11-27T12:49:28'
hypothesis-meta:
created: '2022-11-27T12:49:28.655770+00:00'
document:
title:
- 'Mapping the impact: Exploring the payback of arthritis research'
flagged: false
group: __world__
hidden: false
id: 7jxitG5REe23GnN5P7ET5w
links:
html: https://hypothes.is/a/7jxitG5REe23GnN5P7ET5w
incontext: https://hyp.is/7jxitG5REe23GnN5P7ET5w/www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
json: https://hypothes.is/api/annotations/7jxitG5REe23GnN5P7ET5w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- scientometrics
- comprehensive impact
target:
- selector:
- end: 43054
start: 42808
type: TextPositionSelector
- exact: "look at the economicimpact of research \u2013 taking an area of research(often\
\ cardiovascular disease), calculating thetotal investment in research and\
\ comparing it tothe total payback in terms of monetarised healthbenefit and\
\ other economic effects. "
prefix: 'actand shallow evaluation is to '
suffix: An early high-profile study in t
type: TextQuoteSelector
source: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
text: Interesting to see that the authors considers these macro level economic indicators
"broad and shallow" but it does make sense. Ideally we want to understand individual
contributions of works to economic impact.
updated: '2022-11-27T12:49:28.655770+00:00'
uri: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.rand.org/content/dam/rand/pubs/monographs/2009/RAND_MG862.pdf
tags:
- scientometrics
- comprehensive impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669553368
---
<blockquote>look at the economic impact of research – taking an area of research (often cardiovascular disease), calculating the total investment in research and comparing it to the total payback in terms of monetarised health benefit and other economic effects.</blockquote>Interesting to see that the authors consider these macro level economic indicators "broad and shallow" but it does make sense. Ideally we want to understand the individual contributions of works to economic impact.

View File

@ -1,62 +0,0 @@
---
date: '2022-11-27T12:52:57'
hypothesis-meta:
created: '2022-11-27T12:52:57.384544+00:00'
document:
title:
- Kings+College+Report.pdf
flagged: false
group: __world__
hidden: false
id: aqWy9m5SEe2UlLdfYMDduw
links:
html: https://hypothes.is/a/aqWy9m5SEe2UlLdfYMDduw
incontext: https://hyp.is/aqWy9m5SEe2UlLdfYMDduw/rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
json: https://hypothes.is/api/annotations/aqWy9m5SEe2UlLdfYMDduw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- scientometrics
- comprehensive impact
target:
- selector:
- end: 21000
start: 20587
type: TextPositionSelector
- exact: "Research outputs (and outcomes and impact) are gathered through a \u2018\
questionset\u2019 developed by funding institutions through a consultative\
\ process. This set of16 questions contains 175 sub-questions as illustrated\
\ in Figure 3 (the full set ofquestions are available in Annex A). A researcher,\
\ or one of their delegates, can add,edit and delete entries, and crucially,\
\ attribute entries to research grants and awards"
prefix: utcomes and impact of research11
suffix: '.This collation and attribution '
type: TextQuoteSelector
source: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
text: RF allows researchers to input fine-grained information about the research
that they have done and this information is passed back to the funding bodies.
updated: '2022-11-27T12:52:57.384544+00:00'
uri: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
tags:
- scientometrics
- comprehensive impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669553577
---
<blockquote>Research outputs (and outcomes and impact) are gathered through a 'question set' developed by funding institutions through a consultative process. This set of 16 questions contains 175 sub-questions as illustrated in Figure 3 (the full set of questions are available in Annex A). A researcher, or one of their delegates, can add, edit and delete entries, and crucially, attribute entries to research grants and awards</blockquote>RF allows researchers to input fine-grained information about the research that they have done and this information is passed back to the funding bodies.

View File

@ -1,68 +0,0 @@
---
date: '2022-11-27T12:59:03'
hypothesis-meta:
created: '2022-11-27T12:59:03.290348+00:00'
document:
title:
- Kings+College+Report.pdf
flagged: false
group: __world__
hidden: false
id: RL6rHm5TEe23HrM6ODp8xw
links:
html: https://hypothes.is/a/RL6rHm5TEe23HrM6ODp8xw
incontext: https://hyp.is/RL6rHm5TEe23HrM6ODp8xw/rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
json: https://hypothes.is/api/annotations/RL6rHm5TEe23HrM6ODp8xw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- scientometrics
- comprehensive impact
- research funders
target:
- selector:
- end: 34475
start: 33860
type: TextPositionSelector
- exact: "Research funders and providers are having to compete with other public\
\ services, and,as such, must be able to advocate the need for funding of\
\ research. Leaders within thesector must have compelling arguments to \u2018\
make the case\u2019 for research. For example,the Research Councils each publish\
\ an annual impact report which describe the waysin which they are maximising\
\ the impacts of their investments. These reports includeillustrations of\
\ how their research and training has made a contribution to the economyand\
\ society.10 The analysis of Researchfish and other similar data can support\
\ thedevelopment of these cases"
prefix: pact might be evidenced.Advocacy
suffix: .AccountabilityRelated to advoca
type: TextQuoteSelector
source: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
text: For research councils, being able to illustrate how their research impacts
the economy and society helps them to compete for and justify their continued
funding.
updated: '2022-11-27T12:59:03.290348+00:00'
uri: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: http://rf-downloads.s3.amazonaws.com/Kings+College+Report.pdf
tags:
- scientometrics
- comprehensive impact
- research funders
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669553943
---
<blockquote>Research funders and providers are having to compete with other public services, and, as such, must be able to advocate the need for funding of research. Leaders within the sector must have compelling arguments to 'make the case' for research. For example, the Research Councils each publish an annual impact report which describe the ways in which they are maximising the impacts of their investments. These reports include illustrations of how their research and training has made a contribution to the economy and society. The analysis of Researchfish and other similar data can support the development of these cases</blockquote>For research councils, being able to illustrate how their research impacts the economy and society helps them to compete for and justify their continued funding.

View File

@ -1,57 +0,0 @@
---
date: '2022-11-27T13:06:01'
hypothesis-meta:
created: '2022-11-27T13:06:01.886391+00:00'
document:
title:
- Analysis_of_REF_impact.pdf
flagged: false
group: __world__
hidden: false
id: PkA9Qm5UEe2Lp3fXlfJ5qQ
links:
html: https://hypothes.is/a/PkA9Qm5UEe2Lp3fXlfJ5qQ
incontext: https://hyp.is/PkA9Qm5UEe2Lp3fXlfJ5qQ/webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
json: https://hypothes.is/api/annotations/PkA9Qm5UEe2Lp3fXlfJ5qQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- comprehensive impact
target:
- selector:
- end: 30890
start: 30725
type: TextPositionSelector
- exact: "any effect on, change or benefit to the economy, society,culture, public\
\ policy or services, health, the environment or quality of life, beyondacademia\u2019\
\ (REF, 2011)."
prefix: "ng bodies.Impact is defined as \u2018"
suffix: ' An impact case study is a short'
type: TextQuoteSelector
source: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
text: the REF definition of impact as it pertains to comprehensive impact (and as
opposed to academic impact)
updated: '2022-11-27T13:06:01.886391+00:00'
uri: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
tags:
- comprehensive impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669554361
---
<blockquote>any effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia (REF, 2011).</blockquote>the REF definition of impact as it pertains to comprehensive impact (and as opposed to academic impact)

View File

@ -1,48 +0,0 @@
---
date: '2022-11-27T13:08:56'
hypothesis-meta:
created: '2022-11-27T13:08:56.843673+00:00'
document:
title:
- Analysis_of_REF_impact.pdf
flagged: false
group: __world__
hidden: false
id: podh-m5UEe2XXZ_9q_5DcA
links:
html: https://hypothes.is/a/podh-m5UEe2XXZ_9q_5DcA
incontext: https://hyp.is/PkA9Qm5UEe2Lp3fXlfJ5qQ/webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
json: https://hypothes.is/api/annotations/podh-m5UEe2XXZ_9q_5DcA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
references:
- PkA9Qm5UEe2Lp3fXlfJ5qQ
tags:
- comprehensive impact
target:
- source: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
text: there's actually a typo here - it is "an" not "any" in the [original document](https://www.ref.ac.uk/2014/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf)
updated: '2022-11-27T13:08:56.843673+00:00'
uri: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
tags:
- comprehensive impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669554536
---
there's actually a typo here - it is "an" not "any" in the [original document](https://www.ref.ac.uk/2014/media/ref/content/pub/assessmentframeworkandguidanceonsubmissions/GOS%20including%20addendum.pdf)

View File

@ -1,59 +0,0 @@
---
date: '2022-11-27T13:14:43'
hypothesis-meta:
created: '2022-11-27T13:14:43.604240+00:00'
document:
title:
- Analysis_of_REF_impact.pdf
flagged: false
group: __world__
hidden: false
id: dTjBdG5VEe2sq0cgSuwbjw
links:
html: https://hypothes.is/a/dTjBdG5VEe2sq0cgSuwbjw
incontext: https://hyp.is/dTjBdG5VEe2sq0cgSuwbjw/webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
json: https://hypothes.is/api/annotations/dTjBdG5VEe2sq0cgSuwbjw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- lda
- comprehensive impact
target:
- selector:
- end: 39662
start: 39458
type: TextPositionSelector
- exact: Topic modelling was used to determine common topics across the wholecorpus.
Sixty-five topics were found (of which 60 were used) using theApache Mallet
Toolkit Latent Dirichlet Allocation (LDA) algorithm.
prefix: s to answer specific challenges.
suffix: 12Topics are based on the freque
type: TextQuoteSelector
source: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
text: The authors used LDA with k=60 across full text case studies. The Apache Mallet
implementation was used.
updated: '2022-11-27T13:14:43.604240+00:00'
uri: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
tags:
- lda
- comprehensive impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669554883
---
<blockquote>Topic modelling was used to determine common topics across the whole corpus. Sixty-five topics were found (of which 60 were used) using the Apache Mallet Toolkit Latent Dirichlet Allocation (LDA) algorithm.</blockquote>The authors used LDA with k=60 across full text case studies. The Apache Mallet implementation was used.
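As a reminder of what the Mallet toolkit is doing under the hood, LDA inference can be sketched as a toy collapsed Gibbs sampler. Purely illustrative: this is not Mallet's implementation, and the toy corpus, priors and iteration count below are made up:

```python
import random

def lda_gibbs(docs, k, vocab_size, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA.

    docs: list of documents, each a list of integer word ids.
    Returns the topic-word count matrix (k rows, vocab_size columns)."""
    rng = random.Random(seed)
    z = [[rng.randrange(k) for _ in doc] for doc in docs]  # topic of each token
    ndk = [[0] * k for _ in docs]                # document-topic counts
    nkw = [[0] * vocab_size for _ in range(k)]   # topic-word counts
    nk = [0] * k                                 # tokens assigned to each topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            t = z[d][i]
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]
                # remove this token's current assignment from the counts
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                # resample: p(topic) proportional to (ndk+alpha)*(nkw+beta)/(nk+V*beta)
                weights = [
                    (ndk[d][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + vocab_size * beta)
                    for j in range(k)
                ]
                r = rng.random() * sum(weights)
                for j, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        t = j
                        break
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return nkw
```

With k=60 and full-text case studies in place of the toy word ids, this is conceptually what the REF analysis did; Mallet adds things like hyperparameter optimisation and far better scaling.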

View File

@ -1,62 +0,0 @@
---
date: '2022-11-27T13:17:16'
hypothesis-meta:
created: '2022-11-27T13:17:16.223069+00:00'
document:
title:
- Analysis_of_REF_impact.pdf
flagged: false
group: __world__
hidden: false
id: 0DBd7m5VEe2h3X_s5iCY9A
links:
html: https://hypothes.is/a/0DBd7m5VEe2h3X_s5iCY9A
incontext: https://hyp.is/0DBd7m5VEe2h3X_s5iCY9A/webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
json: https://hypothes.is/api/annotations/0DBd7m5VEe2h3X_s5iCY9A
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- comprehensive impact
- bag of words
target:
- selector:
- end: 43177
start: 42849
type: TextPositionSelector
- exact: 'With the benefit of hindsight, our analysis would have been much easierif
the case studies had greater structure and used standardized definitions.
Giventhat the case studies spanned a 20-year period, organization names have
changed inthat time and keyword searches were not sophisticated enough to
capture some keyinformation. '
prefix: 'he case studies werestructured. '
suffix: For example, a drop-down list of
type: TextQuoteSelector
source: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
text: I found similar in my [2017 work](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0173152).
I'd guess that modern vector-based analyses and entity linking approaches could
help a lot with reconciling these issues now.
updated: '2022-11-27T13:17:16.223069+00:00'
uri: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
tags:
- comprehensive impact
- bag of words
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669555036
---
<blockquote>With the benefit of hindsight, our analysis would have been much easier if the case studies had greater structure and used standardized definitions. Given that the case studies spanned a 20-year period, organization names have changed in that time and keyword searches were not sophisticated enough to capture some key information.</blockquote>I found similar in my [2017 work](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0173152). I'd guess that modern vector-based analyses and entity linking approaches could help a lot with reconciling these issues now.

View File

@ -1,60 +0,0 @@
---
date: '2022-11-27T13:23:58'
hypothesis-meta:
created: '2022-11-27T13:23:58.799954+00:00'
document:
title:
- Analysis_of_REF_impact.pdf
flagged: false
group: __world__
hidden: false
id: wCWRSG5WEe2stD8QQgklZw
links:
html: https://hypothes.is/a/wCWRSG5WEe2stD8QQgklZw
incontext: https://hyp.is/wCWRSG5WEe2stD8QQgklZw/webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
json: https://hypothes.is/api/annotations/wCWRSG5WEe2stD8QQgklZw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- research funders
- comprehensive impact
target:
- selector:
- end: 88026
start: 87757
type: TextPositionSelector
- exact: while there aregroups potentially benefiting from the case studies relating
to their field of research (egwriters benefiting from studies in Panel D,
engineers benefiting from studies in PanelB), there are mentions of these
potential beneficiaries across all the panels
prefix: 'is text-mining exercise is that '
suffix: '. Althoughthis would have to be '
type: TextQuoteSelector
source: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
text: The beneficiaries of research named by REF impact case studies are heterogeneous
across all UOAs
updated: '2022-11-27T13:23:58.799954+00:00'
uri: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://webarchive.nationalarchives.gov.uk/ukgwa/20170712131025mp_/http://www.hefce.ac.uk/media/HEFCE,2014/Content/Pubs/Independentresearch/2015/Analysis,of,REF,impact/Analysis_of_REF_impact.pdf
tags:
- research funders
- comprehensive impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669555438
---
<blockquote>while there are groups potentially benefiting from the case studies relating to their field of research (eg writers benefiting from studies in Panel D, engineers benefiting from studies in Panel B), there are mentions of these potential beneficiaries across all the panels</blockquote>The beneficiaries of research named by REF impact case studies are heterogeneous across all UOAs

View File

@ -1,55 +0,0 @@
---
date: '2022-11-27T13:29:00'
hypothesis-meta:
created: '2022-11-27T13:29:00.711577+00:00'
document:
title:
- 'Governing by narratives: REF impact case studies and restrictive storytelling
in performance measure'
flagged: false
group: __world__
hidden: false
id: dBiS9G5XEe24dF_i73WudA
links:
html: https://hypothes.is/a/dBiS9G5XEe24dF_i73WudA
incontext: https://hyp.is/dBiS9G5XEe24dF_i73WudA/www.tandfonline.com/doi/pdf/10.1080/03075079.2021.1978965?needAccess=true
json: https://hypothes.is/api/annotations/dBiS9G5XEe24dF_i73WudA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- comprehensive impact
target:
- selector:
- end: 6834
start: 6767
type: TextPositionSelector
- exact: ' RANDreport that had been commissioned by HEFCE (Grant et al. 2010)'
prefix: e were considered, informed by a
suffix: '. The report recommended thatan '
type: TextQuoteSelector
source: https://www.tandfonline.com/doi/pdf/10.1080/03075079.2021.1978965?needAccess=true
text: interesting ties here between REF and ResearchFish - both came out of RAND
updated: '2022-11-27T13:29:00.711577+00:00'
uri: https://www.tandfonline.com/doi/pdf/10.1080/03075079.2021.1978965?needAccess=true
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.tandfonline.com/doi/pdf/10.1080/03075079.2021.1978965?needAccess=true
tags:
- comprehensive impact
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669555740
---
<blockquote>RAND report that had been commissioned by HEFCE (Grant et al. 2010)</blockquote>interesting ties here between REF and ResearchFish - both came out of RAND

View File

@ -1,60 +0,0 @@
---
date: '2022-11-27T13:30:51'
hypothesis-meta:
created: '2022-11-27T13:30:51.847672+00:00'
document:
title:
- 'Governing by narratives: REF impact case studies and restrictive storytelling
in performance measure'
flagged: false
group: __world__
hidden: false
id: tlbyjG5XEe2O-DP1BH1rig
links:
html: https://hypothes.is/a/tlbyjG5XEe2O-DP1BH1rig
incontext: https://hyp.is/tlbyjG5XEe2O-DP1BH1rig/www.tandfonline.com/doi/pdf/10.1080/03075079.2021.1978965?needAccess=true
json: https://hypothes.is/api/annotations/tlbyjG5XEe2O-DP1BH1rig
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- comprehensive impact
- researchers
target:
- selector:
- end: 10868
start: 10652
type: TextPositionSelector
- exact: Unsur-prisingly, therefore, existing research documents various ways
in which REF impact has becomeembedded within university governance, including
via the broadening of career progression criteria(Bandola-Gill 2019)
prefix: '018; Espeland and Sauder 2007). '
suffix: ', changes to internal managerial'
type: TextQuoteSelector
source: https://www.tandfonline.com/doi/pdf/10.1080/03075079.2021.1978965?needAccess=true
text: REF has become embedded within university governance - including career progression
criteria (for researchers presumably)
updated: '2022-11-27T13:30:51.847672+00:00'
uri: https://www.tandfonline.com/doi/pdf/10.1080/03075079.2021.1978965?needAccess=true
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.tandfonline.com/doi/pdf/10.1080/03075079.2021.1978965?needAccess=true
tags:
- comprehensive impact
- researchers
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669555851
---
<blockquote>Unsurprisingly, therefore, existing research documents various ways in which REF impact has become embedded within university governance, including via the broadening of career progression criteria (Bandola-Gill 2019)</blockquote>REF has become embedded within university governance - including career progression criteria (for researchers presumably)

View File

@ -1,63 +0,0 @@
---
date: '2022-11-27T13:34:08'
hypothesis-meta:
created: '2022-11-27T13:34:08.086172+00:00'
document:
title:
- OP-SCIP190035 895..905
flagged: false
group: __world__
hidden: false
id: K04A5G5YEe2Wv4uzp_WNKQ
links:
html: https://hypothes.is/a/K04A5G5YEe2Wv4uzp_WNKQ
incontext: https://hyp.is/K04A5G5YEe2Wv4uzp_WNKQ/viduketha.nsf.gov.lk:8585/FJDB_NSF/Science_and_Public_Policy/Vol.46(6)-2019/scz037.pdf
json: https://hypothes.is/api/annotations/K04A5G5YEe2Wv4uzp_WNKQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- comprehensive impact
- researchers
target:
- selector:
- end: 3492
start: 3029
type: TextPositionSelector
- exact: ' increasing body of research analytically exploresthe consequences of
the research impact agenda on academic work,including the risks posed to research
quality (Chubb and Reed2018), prioritising of short-term impacts rather than
more concep-tual impacts (Greenhalgh and Fahy 2015; Meagher and Martin2017),
ethical risks (Smith and Stewart 2017), and a focus on indi-vidual academics
rather than on the broader context of research-based policy change (Dunlop
2018)'
prefix: ge production(Phillips 2010). An
suffix: .The sources of tension embedded
type: TextQuoteSelector
source: http://viduketha.nsf.gov.lk:8585/FJDB_NSF/Science_and_Public_Policy/Vol.46(6)-2019/scz037.pdf
text: Lots of papers write about the effect that the UK's focus on comprehensive
impact affects the quality of research and individual researchers
updated: '2022-11-27T13:34:08.086172+00:00'
uri: http://viduketha.nsf.gov.lk:8585/FJDB_NSF/Science_and_Public_Policy/Vol.46(6)-2019/scz037.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: http://viduketha.nsf.gov.lk:8585/FJDB_NSF/Science_and_Public_Policy/Vol.46(6)-2019/scz037.pdf
tags:
- comprehensive impact
- researchers
- hypothesis
type: annotation
url: /annotations/2022/11/27/1669556048
---
<blockquote>increasing body of research analytically explores the consequences of the research impact agenda on academic work, including the risks posed to research quality (Chubb and Reed 2018), prioritising of short-term impacts rather than more conceptual impacts (Greenhalgh and Fahy 2015; Meagher and Martin 2017), ethical risks (Smith and Stewart 2017), and a focus on individual academics rather than on the broader context of research-based policy change (Dunlop 2018)</blockquote>Many papers discuss how the UK's focus on comprehensive impact affects the quality of research and individual researchers

View File

@ -1,61 +0,0 @@
---
date: '2022-11-28T11:31:57'
hypothesis-meta:
created: '2022-11-28T11:31:57.626263+00:00'
document:
title:
- Countering the Effects of Lead Bias in News Summarization via Multi-Stage Training
and Auxiliary Losses
flagged: false
group: __world__
hidden: false
id: RG4WUG8QEe2-tZ8v6nNOJA
links:
html: https://hypothes.is/a/RG4WUG8QEe2-tZ8v6nNOJA
incontext: https://hyp.is/RG4WUG8QEe2-tZ8v6nNOJA/aclanthology.org/D19-1620.pdf
json: https://hypothes.is/api/annotations/RG4WUG8QEe2-tZ8v6nNOJA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- NLProc
- summarization
- bandit
- rl
target:
- selector:
- end: 7922
start: 7890
type: TextPositionSelector
- exact: BanditSum a hierarchical bi-LSTM
prefix: S uses a CNN+bi-GRU encoder, and
suffix: ". RNES\u2019s de-coder is auto-regres"
type: TextQuoteSelector
source: https://aclanthology.org/D19-1620.pdf
text: Banditsum uses bi-directional LSTM encoding. It generates sentence-level representations
updated: '2022-11-28T11:34:57.447988+00:00'
uri: https://aclanthology.org/D19-1620.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://aclanthology.org/D19-1620.pdf
tags:
- NLProc
- summarization
- bandit
- rl
- hypothesis
type: annotation
url: /annotations/2022/11/28/1669635117
---
<blockquote>BanditSum a hierarchical bi-LSTM</blockquote>Banditsum uses bi-directional LSTM encoding. It generates sentence-level representations

View File

@ -1,62 +0,0 @@
---
date: '2022-11-28T11:34:45'
hypothesis-meta:
created: '2022-11-28T11:34:45.963292+00:00'
document:
title:
- 1809.09672.pdf
flagged: false
group: __world__
hidden: false
id: qMPVfG8QEe2WJWufCDu9ww
links:
html: https://hypothes.is/a/qMPVfG8QEe2WJWufCDu9ww
incontext: https://hyp.is/qMPVfG8QEe2WJWufCDu9ww/arxiv.org/pdf/1809.09672.pdf
json: https://hypothes.is/api/annotations/qMPVfG8QEe2WJWufCDu9ww
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- rl
- bandit
- nlproc
- summarization
target:
- selector:
- end: 10089
start: 9945
type: TextPositionSelector
- exact: andit is a decision-making formal-ization in which an agent repeatedly
chooses oneof several actions, and receives a reward based onthis choice.
prefix: dient reinforcementlearning. A b
suffix: " The agent\u2019s goal is to quickly "
type: TextQuoteSelector
source: https://arxiv.org/pdf/1809.09672.pdf
text: 'Definition for contextual bandit: an agent that repeatedly choses one of
several actions and receives a reward based on this choice.'
updated: '2022-11-28T11:34:45.963292+00:00'
uri: https://arxiv.org/pdf/1809.09672.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/1809.09672.pdf
tags:
- rl
- bandit
- nlproc
- summarization
- hypothesis
type: annotation
url: /annotations/2022/11/28/1669635285
---
<blockquote>A bandit is a decision-making formalization in which an agent repeatedly chooses one of several actions, and receives a reward based on this choice.</blockquote>Definition for contextual bandit: an agent that repeatedly chooses one of several actions and receives a reward based on this choice.

View File

@ -1,64 +0,0 @@
---
date: '2022-11-28T11:37:23'
hypothesis-meta:
created: '2022-11-28T11:37:23.032429+00:00'
document:
title:
- 1809.09672.pdf
flagged: false
group: __world__
hidden: false
id: BmIgdm8REe2-umvTlBFiag
links:
html: https://hypothes.is/a/BmIgdm8REe2-umvTlBFiag
incontext: https://hyp.is/BmIgdm8REe2-umvTlBFiag/arxiv.org/pdf/1809.09672.pdf
json: https://hypothes.is/api/annotations/BmIgdm8REe2-umvTlBFiag
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- rl
- bandit
- NLProc
- summarization
target:
- selector:
- end: 10812
start: 10640
type: TextPositionSelector
- exact: "Extractive summarization may be regarded as acontextual bandit as follows.\
\ Each document is acontext, and each ordered subset of a document\u2019ssentences\
\ is a different action"
prefix: h ev-ery episode has length one.
suffix: . Formally, assumethat each cont
type: TextQuoteSelector
source: https://arxiv.org/pdf/1809.09672.pdf
text: We can represent extractive summarization as a bandit problem by treating
the document as the context and possible reorderings of sentences as actions an
agent could take
updated: '2022-11-28T11:37:23.032429+00:00'
uri: https://arxiv.org/pdf/1809.09672.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/1809.09672.pdf
tags:
- rl
- bandit
- NLProc
- summarization
- hypothesis
type: annotation
url: /annotations/2022/11/28/1669635443
---
<blockquote>Extractive summarization may be regarded as a contextual bandit as follows. Each document is a context, and each ordered subset of a document's sentences is a different action</blockquote>We can represent extractive summarization as a bandit problem by treating the document as the context and possible reorderings of sentences as actions an agent could take

View File

@ -1,68 +0,0 @@
---
date: '2022-12-01T22:20:26'
hypothesis-meta:
created: '2022-12-01T22:20:26.080261+00:00'
document:
title:
- "It\u2019s True: The Typical Car Is Parked 95 Percent of the Time"
flagged: false
group: __world__
hidden: false
id: WvwqSHHGEe2XoPtDtpjTlw
links:
html: https://hypothes.is/a/WvwqSHHGEe2XoPtDtpjTlw
incontext: https://hyp.is/WvwqSHHGEe2XoPtDtpjTlw/usa.streetsblog.org/2016/03/10/its-true-the-typical-car-is-parked-95-percent-of-the-time/
json: https://hypothes.is/api/annotations/WvwqSHHGEe2XoPtDtpjTlw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- solar punk
target:
- selector:
- endContainer: /div[1]/div[1]/div[4]/div[1]/main[1]/article[1]/div[1]/div[3]/blockquote[1]/p[3]
endOffset: 329
startContainer: /div[1]/div[1]/div[4]/div[1]/main[1]/article[1]/div[1]/div[3]/blockquote[1]/p[3]
startOffset: 0
type: RangeSelector
- end: 5030
start: 4701
type: TextPositionSelector
- exact: "\u201C\u2026 there are about 25 billion car trips per year, and with\
\ some 27 million cars, this suggests an average of just under 18 trips per\
\ car every week. Since the duration of the average car trip is about 20 minutes,\
\ the typical car is only on the move for 6 hours in the week: for the remaining\
\ 162 hours it is stationary \u2013 parked.\u201D"
prefix: 'Travel Survey (NTS) (see p.23):
'
suffix: '
Since there are 168 hours in a '
type: TextQuoteSelector
source: https://usa.streetsblog.org/2016/03/10/its-true-the-typical-car-is-parked-95-percent-of-the-time/
text: 'This may be napkin maths but this is pretty shocking to think about. There
must be a better way! '
updated: '2022-12-01T22:20:26.080261+00:00'
uri: https://usa.streetsblog.org/2016/03/10/its-true-the-typical-car-is-parked-95-percent-of-the-time/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://usa.streetsblog.org/2016/03/10/its-true-the-typical-car-is-parked-95-percent-of-the-time/
tags:
- solar punk
- hypothesis
type: annotation
url: /annotations/2022/12/01/1669933226
---
<blockquote>“… there are about 25 billion car trips per year, and with some 27 million cars, this suggests an average of just under 18 trips per car every week. Since the duration of the average car trip is about 20 minutes, the typical car is only on the move for 6 hours in the week: for the remaining 162 hours it is stationary – parked.”</blockquote>This may be napkin maths but this is pretty shocking to think about. There must be a better way!

View File

@ -1,68 +0,0 @@
---
date: '2022-12-04T16:29:05'
hypothesis-meta:
created: '2022-12-04T16:29:05.263170+00:00'
document:
title:
- Exploring vs. exploiting - Herbert Lui
flagged: false
group: __world__
hidden: false
id: xQywjnPwEe2lk_tZfYP65Q
links:
html: https://hypothes.is/a/xQywjnPwEe2lk_tZfYP65Q
incontext: https://hyp.is/xQywjnPwEe2lk_tZfYP65Q/herbertlui.net/exploring-vs-exploiting/
json: https://hypothes.is/api/annotations/xQywjnPwEe2lk_tZfYP65Q
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- pkm
- tools for thought
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/main[1]/article[1]/div[1]/div[1]/p[6]
endOffset: 319
startContainer: /div[1]/div[1]/div[1]/main[1]/article[1]/div[1]/div[1]/p[6]
startOffset: 0
type: RangeSelector
- end: 2272
start: 1953
type: TextPositionSelector
- exact: "It\u2019s always worth gathering information, nurturing other projects,\
\ and putting together some backup plans. You\u2019ll need to define what\
\ success means to you for each of them, because you won\u2019t make overnight\
\ progress; instead, you\u2019re best served picking projects that you can\
\ learn critical lessons from, even if you fail"
prefix: "even better than their Plan A.\u201D\n"
suffix: ".\nEven if you\u2019re focused and mak"
type: TextQuoteSelector
source: https://herbertlui.net/exploring-vs-exploiting/
text: It's interesting because this way of thinking is eminently compatible with
the zettelkasten way of thinking e.g. don't necessarily set out with a hypothesis
in mind that you're trying to prove but rather explore until something interesting
emerges.
updated: '2022-12-04T16:29:05.263170+00:00'
uri: https://herbertlui.net/exploring-vs-exploiting/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://herbertlui.net/exploring-vs-exploiting/
tags:
- pkm
- tools for thought
- hypothesis
type: annotation
url: /annotations/2022/12/04/1670171345
---
<blockquote>It's always worth gathering information, nurturing other projects, and putting together some backup plans. You'll need to define what success means to you for each of them, because you won't make overnight progress; instead, you're best served picking projects that you can learn critical lessons from, even if you fail</blockquote>It's interesting because this way of thinking is eminently compatible with the zettelkasten way of thinking e.g. don't necessarily set out with a hypothesis in mind that you're trying to prove but rather explore until something interesting emerges.

View File

@ -1,66 +0,0 @@
---
date: '2022-12-04T20:14:02'
hypothesis-meta:
created: '2022-12-04T20:14:02.815622+00:00'
document:
title:
- Hyperbolic Distance Discounting
flagged: false
group: __world__
hidden: false
id: MjfCdHQQEe2XA6-Y-PXOtA
links:
html: https://hypothes.is/a/MjfCdHQQEe2XA6-Y-PXOtA
incontext: https://hyp.is/MjfCdHQQEe2XA6-Y-PXOtA/www.atvbt.com/hyperbolic/
json: https://hypothes.is/api/annotations/MjfCdHQQEe2XA6-Y-PXOtA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- psychology
- delayed gratification
- behaviour
target:
- selector:
- endContainer: /div[1]/main[1]/article[1]/div[1]/p[5]
endOffset: 292
startContainer: /div[1]/main[1]/article[1]/div[1]/p[5]
startOffset: 0
type: RangeSelector
- end: 1911
start: 1619
type: TextPositionSelector
- exact: 'You may have heard of hyperbolic discounting from behavioral economics:
people will generally disproportionally, i.e. hyperbolically, discount the
value of something the farther off it is. The average person judges $15 now
as equivalent to $30 in 3-months (an annual rate of return of 277%!).'
prefix: on center.Hyperbolic Discounting
suffix: "This excessive time-based or \u201Cte"
type: TextQuoteSelector
source: https://www.atvbt.com/hyperbolic/
text: this is fascinating and must relate to delayed gratification
updated: '2022-12-04T20:14:02.815622+00:00'
uri: https://www.atvbt.com/hyperbolic/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.atvbt.com/hyperbolic/
tags:
- psychology
- delayed gratification
- behaviour
- hypothesis
type: annotation
url: /annotations/2022/12/04/1670184842
---
<blockquote>You may have heard of hyperbolic discounting from behavioral economics: people will generally disproportionally, i.e. hyperbolically, discount the value of something the farther off it is. The average person judges $15 now as equivalent to $30 in 3-months (an annual rate of return of 277%!).</blockquote>this is fascinating and must relate to delayed gratification

View File

@ -1,67 +0,0 @@
---
date: '2022-12-04T20:15:19'
hypothesis-meta:
created: '2022-12-04T20:15:19.784065+00:00'
document:
title:
- Hyperbolic Distance Discounting
flagged: false
group: __world__
hidden: false
id: YBFyOnQQEe2WiKdsj1LCZg
links:
html: https://hypothes.is/a/YBFyOnQQEe2WiKdsj1LCZg
incontext: https://hyp.is/YBFyOnQQEe2WiKdsj1LCZg/www.atvbt.com/hyperbolic/
json: https://hypothes.is/api/annotations/YBFyOnQQEe2WiKdsj1LCZg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- psychology
- delayed gratification
- behaviour
target:
- selector:
- endContainer: /div[1]/main[1]/article[1]/div[1]/p[14]
endOffset: 278
startContainer: /div[1]/main[1]/article[1]/div[1]/p[14]
startOffset: 0
type: RangeSelector
- end: 4013
start: 3735
type: TextPositionSelector
- exact: "Of course, the closest you can get is having the activity available\
\ in your own living space, but as unused home treadmills and exercise bikes\
\ demonstrate, this has its pitfalls. There could be something about a thing\
\ always being available that means there\u2019s never any urgency."
prefix: ay (and maybe worth paying for).
suffix: I think the ideal is to plan a r
type: TextQuoteSelector
source: https://www.atvbt.com/hyperbolic/
text: There seems to be a minimum at which hyperbolic discounting stops working
because things are too easy to access
updated: '2022-12-04T20:15:19.784065+00:00'
uri: https://www.atvbt.com/hyperbolic/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.atvbt.com/hyperbolic/
tags:
- psychology
- delayed gratification
- behaviour
- hypothesis
type: annotation
url: /annotations/2022/12/04/1670184919
---
<blockquote>Of course, the closest you can get is having the activity available in your own living space, but as unused home treadmills and exercise bikes demonstrate, this has its pitfalls. There could be something about a thing always being available that means there's never any urgency.</blockquote>There seems to be a minimum at which hyperbolic discounting stops working because things are too easy to access

View File

@ -1,74 +0,0 @@
---
date: '2022-12-04T20:26:10'
hypothesis-meta:
created: '2022-12-04T20:26:10.856094+00:00'
document:
title:
- Language builds culture - Herbert Lui
flagged: false
group: __world__
hidden: false
id: 5DIcYnQREe2NVTOF9GGXvA
links:
html: https://hypothes.is/a/5DIcYnQREe2NVTOF9GGXvA
incontext: https://hyp.is/5DIcYnQREe2NVTOF9GGXvA/herbertlui.net/language-builds-culture/
json: https://hypothes.is/api/annotations/5DIcYnQREe2NVTOF9GGXvA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- linguistics
- behaviour
- learning-in-public
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/main[1]/article[1]/div[1]/div[1]/p[2]
endOffset: 278
startContainer: /div[1]/div[1]/div[1]/main[1]/article[1]/div[1]/div[1]/p[2]
startOffset: 0
type: RangeSelector
- end: 867
start: 589
type: TextPositionSelector
- exact: "Whether you want to call them mottos, memes, or manifestos, words can\
\ be the building blocks of how we think and transmit ideas. You can also\
\ gauge how well someone is grasping your concepts\u2014or at least making\
\ an effort to\u2014by the language they\u2019re responding to you with as\
\ well."
prefix: "falls, and favorable outcomes.\u201D\n"
suffix: '
Posted in Contentions, Life. '
type: TextQuoteSelector
source: https://herbertlui.net/language-builds-culture/
text: You can use the way that a person responds to your concepts as a metric for
how well they understand you. If they don't understand chances are they will retreat
back to jargon to try to hide the fact that they're struggling. If they're getting
on well they might have an insightful way to extend your metaphor
updated: '2022-12-04T20:26:10.856094+00:00'
uri: https://herbertlui.net/language-builds-culture/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://herbertlui.net/language-builds-culture/
tags:
- linguistics
- behaviour
- learning-in-public
- hypothesis
type: annotation
url: /annotations/2022/12/04/1670185570
---
<blockquote>Whether you want to call them mottos, memes, or manifestos, words can be the building blocks of how we think and transmit ideas. You can also gauge how well someone is grasping your concepts—or at least making an effort to—by the language they're responding to you with as well.</blockquote>You can use the way that a person responds to your concepts as a metric for how well they understand you. If they don't understand, chances are they will retreat back to jargon to try to hide the fact that they're struggling. If they're getting on well, they might have an insightful way to extend your metaphor

View File

@ -1,64 +0,0 @@
---
date: '2022-12-06T06:41:27'
hypothesis-meta:
created: '2022-12-06T06:41:27.851505+00:00'
document:
title:
- Ron DeSantis' Quiet Relationship with Amazon
flagged: false
group: __world__
hidden: false
id: AsgtBHUxEe2ilAfmS4q53w
links:
html: https://hypothes.is/a/AsgtBHUxEe2ilAfmS4q53w
incontext: https://hyp.is/AsgtBHUxEe2ilAfmS4q53w/mattstoller.substack.com/p/ron-desantis-quiet-relationship-with
json: https://hypothes.is/api/annotations/AsgtBHUxEe2ilAfmS4q53w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- capitalism
target:
- selector:
- endContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[12]/span[2]
endOffset: 141
startContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[12]/span[1]
startOffset: 0
type: RangeSelector
- end: 9023
start: 8736
type: TextPositionSelector
- exact: "Amazon is hated on the right as a bulwark of progressivism. For instance,\
\ to pick a random example, GOP icon Tucker Carlson recently characterized\
\ the firm\u2019s behavior as \u2018modern-day book burning.\u2019 And you\
\ can find an endless number of right-wing critiques. Conservatives distrust\
\ Amazon."
prefix: ne his relationship with Amazon.
suffix: An association with the tech gia
type: TextQuoteSelector
source: https://mattstoller.substack.com/p/ron-desantis-quiet-relationship-with
text: 'That is really interesting. Amazon is not exactly renowned as an m upholder
of progressive values by the left either. '
updated: '2022-12-06T06:41:27.851505+00:00'
uri: https://mattstoller.substack.com/p/ron-desantis-quiet-relationship-with
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://mattstoller.substack.com/p/ron-desantis-quiet-relationship-with
tags:
- capitalism
- hypothesis
type: annotation
url: /annotations/2022/12/06/1670308887
---
<blockquote>Amazon is hated on the right as a bulwark of progressivism. For instance, to pick a random example, GOP icon Tucker Carlson recently characterized the firm's behavior as 'modern-day book burning.' And you can find an endless number of right-wing critiques. Conservatives distrust Amazon.</blockquote>That is really interesting. Amazon is not exactly renowned as an upholder of progressive values by the left either.

View File

@ -1,66 +0,0 @@
---
date: '2022-12-07T11:55:42'
hypothesis-meta:
created: '2022-12-07T11:55:42.527155+00:00'
document:
title:
- 2203.15556.pdf
flagged: false
group: __world__
hidden: false
id: E3TX9nYmEe2IOgdyjyKG9w
links:
html: https://hypothes.is/a/E3TX9nYmEe2IOgdyjyKG9w
incontext: https://hyp.is/E3TX9nYmEe2IOgdyjyKG9w/arxiv.org/pdf/2203.15556.pdf
json: https://hypothes.is/api/annotations/E3TX9nYmEe2IOgdyjyKG9w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- nlproc
- efficient ml
target:
- selector:
- end: 1689
start: 1063
type: TextPositionSelector
- exact: "We test this hypothesis by training a predicted compute-optimal model,\
\ Chinchilla, that uses the same compute budget as Gopher but with 70B parameters\
\ and4\xD7 more more data. Chinchilla uniformly and significantly outperforms\
\ Gopher (280B), GPT-3 (175B),Jurassic-1 (178B), and Megatron-Turing NLG (530B)\
\ on a large range of downstream evaluation tasks.This also means that Chinchilla\
\ uses substantially less compute for fine-tuning and inference, greatlyfacilitating\
\ downstream usage. As a highlight, Chinchilla reaches a state-of-the-art\
\ average accuracy of67.5% on the MMLU benchmark, greater than a 7% improvement\
\ over Gopher"
prefix: ' tokens should also be doubled. '
suffix: .1. IntroductionRecently a serie
type: TextQuoteSelector
source: https://arxiv.org/pdf/2203.15556.pdf
text: By using more data on a smaller language model the authors were able to achieve
better performance than with the larger models - this reduces the cost of using
the model for inference.
updated: '2022-12-07T11:55:42.527155+00:00'
uri: https://arxiv.org/pdf/2203.15556.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2203.15556.pdf
tags:
- nlproc
- efficient ml
- hypothesis
type: annotation
url: /annotations/2022/12/07/1670414142
---
<blockquote>We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher</blockquote>By using more data on a smaller language model the authors were able to achieve better performance than with the larger models - this reduces the cost of using the model for inference.
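The compute-optimal trade-off the annotation describes can be sketched numerically. This is a hedged illustration, not the paper's actual fitting procedure: it assumes the common approximation C ≈ 6ND for training FLOPs and the roughly 20-tokens-per-parameter ratio implied by Chinchilla, and the budget figure is indicative.

```python
def compute_optimal(c_flops, tokens_per_param=20.0):
    """Split a training FLOPs budget C into parameters N and tokens D.

    Uses the approximation C = 6 * N * D with D = tokens_per_param * N,
    so N = sqrt(C / (6 * tokens_per_param)).
    """
    n_params = (c_flops / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens


# Roughly the Gopher/Chinchilla training budget (~5.76e23 FLOPs; assumed here).
n, d = compute_optimal(5.76e23)
print(f"params ~ {n / 1e9:.0f}B, tokens ~ {d / 1e12:.1f}T")
```

Under these assumptions, the same budget that trained the 280B-parameter Gopher is better spent on a roughly 70B-parameter model trained on roughly 1.4T tokens, which is exactly the shape of Chinchilla.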


@ -1,62 +0,0 @@
---
date: '2022-12-10T23:29:56'
hypothesis-meta:
created: '2022-12-10T23:29:56.562311+00:00'
document:
title:
- AI's Jurassic Park moment
flagged: false
group: __world__
hidden: false
id: jnOjknjiEe2uiysybnY9lA
links:
html: https://hypothes.is/a/jnOjknjiEe2uiysybnY9lA
incontext: https://hyp.is/jnOjknjiEe2uiysybnY9lA/garymarcus.substack.com/p/ais-jurassic-park-moment
json: https://hypothes.is/api/annotations/jnOjknjiEe2uiysybnY9lA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- nlproc
- LLMs
target:
- selector:
- endContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[12]/span[1]
endOffset: 228
startContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[12]/span[1]
startOffset: 170
type: RangeSelector
- end: 5114
start: 5056
type: TextPositionSelector
- exact: 'anyone skilled in the art can now replicate their recipe. '
prefix: ' described what was being done; '
suffix: '(Indeed Stability.AI is already '
type: TextQuoteSelector
source: https://garymarcus.substack.com/p/ais-jurassic-park-moment
text: 'Well, anyone skilled enough who has $500k for the GPU bill and access to,
and the means to store, the corpus... So corporations, I guess... Yay! '
updated: '2022-12-10T23:29:56.562311+00:00'
uri: https://garymarcus.substack.com/p/ais-jurassic-park-moment
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://garymarcus.substack.com/p/ais-jurassic-park-moment
tags:
- nlproc
- LLMs
- hypothesis
type: annotation
url: /annotations/2022/12/10/1670714996
---
<blockquote>anyone skilled in the art can now replicate their recipe. </blockquote>Well, anyone skilled enough who has $500k for the GPU bill and access to, and the means to store, the corpus... So corporations, I guess... Yay!


@ -1,69 +0,0 @@
---
date: '2022-12-10T23:33:16'
hypothesis-meta:
created: '2022-12-10T23:33:16.013137+00:00'
document:
title:
- AI's Jurassic Park moment
flagged: false
group: __world__
hidden: false
id: BV5ojnjjEe2dH2uIOtj19g
links:
html: https://hypothes.is/a/BV5ojnjjEe2dH2uIOtj19g
incontext: https://hyp.is/BV5ojnjjEe2dH2uIOtj19g/garymarcus.substack.com/p/ais-jurassic-park-moment
json: https://hypothes.is/api/annotations/BV5ojnjjEe2dH2uIOtj19g
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- nlproc
- Policy
- LLMs
target:
- selector:
- endContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[22]
endOffset: 400
startContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[22]
startOffset: 8
type: RangeSelector
- end: 8951
start: 8559
type: TextPositionSelector
- exact: "every country is going to need to reconsider its policies on misinformation.\
\ It\u2019s one thing for the occasional lie to slip through; it\u2019s another\
\ for us all to swim in a veritable ocean of lies. In time, though it would\
\ not be a popular decision, we may have to begin to treat misinformation\
\ as we do libel, making it actionable if it is created with sufficient malice\
\ and sufficient volume. "
prefix: "ds for a user\u2019s removal.Second, "
suffix: Third, provenance is more import
type: TextQuoteSelector
source: https://garymarcus.substack.com/p/ais-jurassic-park-moment
text: 'What to do then when our government reps are already happy to perpetuate
"culture wars" and empty talking points? '
updated: '2022-12-10T23:33:16.013137+00:00'
uri: https://garymarcus.substack.com/p/ais-jurassic-park-moment
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://garymarcus.substack.com/p/ais-jurassic-park-moment
tags:
- nlproc
- Policy
- LLMs
- hypothesis
type: annotation
url: /annotations/2022/12/10/1670715196
---
<blockquote>every country is going to need to reconsider its policies on misinformation. It's one thing for the occasional lie to slip through; it's another for us all to swim in a veritable ocean of lies. In time, though it would not be a popular decision, we may have to begin to treat misinformation as we do libel, making it actionable if it is created with sufficient malice and sufficient volume. </blockquote>What to do then when our government reps are already happy to perpetuate "culture wars" and empty talking points?


@ -1,67 +0,0 @@
---
date: '2022-12-11T09:05:49'
hypothesis-meta:
created: '2022-12-11T09:05:49.918372+00:00'
document:
title:
- What if failure is the plan? | danah boyd | apophenia
flagged: false
group: __world__
hidden: false
id: Ado6HHkzEe2rBk_IxADl3w
links:
html: https://hypothes.is/a/Ado6HHkzEe2rBk_IxADl3w
incontext: https://hyp.is/Ado6HHkzEe2rBk_IxADl3w/www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
json: https://hypothes.is/api/annotations/Ado6HHkzEe2rBk_IxADl3w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- psychology
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/div[1]/article[1]/div[1]/div[1]/p[25]
endOffset: 738
startContainer: /div[1]/div[1]/div[1]/div[1]/article[1]/div[1]/div[1]/p[25]
startOffset: 646
type: RangeSelector
- end: 12327
start: 12235
type: TextPositionSelector
- exact: "Perceptions of failure don\u2019t always lead to shared ideas of how\
\ to learn from these lessons."
prefix: " it should\u2019ve been done better. "
suffix: '
The partisan and geopolitica'
type: TextQuoteSelector
source: https://www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
text: 'Really good insight that I hadn''t really considered before. If normally
opposing parties reach the same end goal then nobody wants to think about why,
we''d rather just take the win. '
updated: '2022-12-11T09:05:49.918372+00:00'
uri: https://www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
tags:
- psychology
- hypothesis
type: annotation
url: /annotations/2022/12/11/1670749549
---
<blockquote>Perceptions of failure don't always lead to shared ideas of how to learn from these lessons.</blockquote>Really good insight that I hadn't really considered before. If normally opposing parties reach the same end goal then nobody wants to think about why, we'd rather just take the win.


@ -1,65 +0,0 @@
---
date: '2022-12-11T09:27:05'
hypothesis-meta:
created: '2022-12-11T09:27:05.220993+00:00'
document:
title:
- What if failure is the plan? | danah boyd | apophenia
flagged: false
group: __world__
hidden: false
id: -f33_nk1Ee2NpfvtJYAnCQ
links:
html: https://hypothes.is/a/-f33_nk1Ee2NpfvtJYAnCQ
incontext: https://hyp.is/-f33_nk1Ee2NpfvtJYAnCQ/www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
json: https://hypothes.is/api/annotations/-f33_nk1Ee2NpfvtJYAnCQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- capitalism
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/div[1]/article[1]/div[1]/div[1]/p[33]
endOffset: 563
startContainer: /div[1]/div[1]/div[1]/div[1]/article[1]/div[1]/div[1]/p[33]
startOffset: 142
type: RangeSelector
- end: 16164
start: 15743
type: TextPositionSelector
- exact: "Throughout the 80s and 90s, private equity firms and hedge funds gobbled\
\ up local news enterprises to extract their real estate. They didn\u2019\
t give a shit about journalism; they just wanted prime real estate that they\
\ could develop. And news organizations had it in the form of buildings in\
\ the middle of town. So financiers squeezed the news orgs until there was\
\ no money to be squeezed and then they hung them out to dry."
prefix: 'st or Google drives me bonkers. '
suffix: ' There was no configuration in w'
type: TextQuoteSelector
source: https://www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
text: Wild that driving functional organisations into the ground could just be the
cost of doing business
updated: '2022-12-11T09:27:05.220993+00:00'
uri: https://www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.zephoria.org/thoughts/archives/2022/12/05/what-if-failure-is-the-plan.html
tags:
- capitalism
- hypothesis
type: annotation
url: /annotations/2022/12/11/1670750825
---
<blockquote>Throughout the 80s and 90s, private equity firms and hedge funds gobbled up local news enterprises to extract their real estate. They didn't give a shit about journalism; they just wanted prime real estate that they could develop. And news organizations had it in the form of buildings in the middle of town. So financiers squeezed the news orgs until there was no money to be squeezed and then they hung them out to dry.</blockquote>Wild that driving functional organisations into the ground could just be the cost of doing business.


@ -1,67 +0,0 @@
---
date: '2022-12-13T06:32:01'
hypothesis-meta:
created: '2022-12-13T06:32:01.500506+00:00'
document:
title:
- "The viral AI avatar app Lensa undressed me\u2014without my consent"
flagged: false
group: __world__
hidden: false
id: 2iVhJnqvEe2HRauIjYpzBw
links:
html: https://hypothes.is/a/2iVhJnqvEe2HRauIjYpzBw
incontext: https://hyp.is/2iVhJnqvEe2HRauIjYpzBw/www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
json: https://hypothes.is/api/annotations/2iVhJnqvEe2HRauIjYpzBw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ml
- bias
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[6]/div[1]/p[6]
endOffset: 245
startContainer: /div[1]/div[1]/main[1]/div[1]/div[2]/div[1]/div[1]/div[1]/div[2]/div[1]/div[1]/div[6]/div[1]/p[6]
startOffset: 0
type: RangeSelector
- end: 3237
start: 2992
type: TextPositionSelector
- exact: AI training data is filled with racist stereotypes, pornography, and
explicit images of rape, researchers Abeba Birhane, Vinay Uday Prabhu, and
Emmanuel Kahembwe found after analyzing a data set similar to the one used
to build Stable Diffusion.
prefix: "n historically disadvantaged.\_ "
suffix: " It\u2019s notable that their finding"
type: TextQuoteSelector
source: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
text: 'That is horrifying. You''d think that authors would attempt to remove or
filter this kind of material. There are, after all, models out there that are
trained to find it. It makes me wonder what awful stuff is in the GPT-3 dataset
too. '
updated: '2022-12-13T06:43:06.391962+00:00'
uri: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.technologyreview.com/2022/12/12/1064751/the-viral-ai-avatar-app-lensa-undressed-me-without-my-consent/
tags:
- ml
- bias
- hypothesis
type: annotation
url: /annotations/2022/12/13/1670913121
---
<blockquote>AI training data is filled with racist stereotypes, pornography, and explicit images of rape, researchers Abeba Birhane, Vinay Uday Prabhu, and Emmanuel Kahembwe found after analyzing a data set similar to the one used to build Stable Diffusion.</blockquote>That is horrifying. You'd think that authors would attempt to remove or filter this kind of material. There are, after all, models out there that are trained to find it. It makes me wonder what awful stuff is in the GPT-3 dataset too.


@ -1,81 +0,0 @@
---
date: '2022-12-13T08:23:35'
hypothesis-meta:
created: '2022-12-13T08:23:35.919113+00:00'
document:
title:
- "Skill and self-knowledge: empirical refutation of the dual-burden account of\
\ the Dunning\u2013Kruger effect | Royal Society Open Science"
flagged: false
group: __world__
hidden: false
id: cFHoSnq_Ee2D6xvNIG1bgw
links:
html: https://hypothes.is/a/cFHoSnq_Ee2D6xvNIG1bgw
incontext: https://hyp.is/cFHoSnq_Ee2D6xvNIG1bgw/royalsocietypublishing.org/doi/10.1098/rsos.191727
json: https://hypothes.is/api/annotations/cFHoSnq_Ee2D6xvNIG1bgw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- psychology
target:
- selector:
- endContainer: /div[3]/div[1]/main[1]/div[2]/div[1]/div[1]/article[1]/div[1]/div[1]/div[1]/div[2]/div[4]/div[1]/div[1]/p[1]
endOffset: 1466
startContainer: /div[3]/div[1]/main[1]/div[2]/div[1]/div[1]/article[1]/div[1]/div[1]/div[1]/div[2]/div[4]/div[1]/div[1]/p[1]
startOffset: 0
type: RangeSelector
- end: 11913
start: 10447
type: TextPositionSelector
- exact: "For many intellectual tasks, the people with the least skill overestimate\
\ themselves the most, a pattern popularly known as the Dunning\u2013Kruger\
\ effect (DKE). The dominant account of this effect depends on the idea that\
\ assessing the quality of one's performance (metacognition) requires the\
\ same mental resources as task performance itself (cognition). Unskilled\
\ people are said to suffer a dual burden: they lack the cognitive resources\
\ to perform well, and this deprives them of metacognitive insight into their\
\ failings. In this Registered Report, we applied recently developed methods\
\ for the measurement of metacognition to a matrix reasoning task, to test\
\ the dual-burden account. Metacognitive sensitivity (information exploited\
\ by metacognition) tracked performance closely, so less information was exploited\
\ by the metacognitive judgements of poor performers; but metacognitive efficiency\
\ (quality of metacognitive processing itself) was unrelated to performance.\
\ Metacognitive bias (overall tendency towards high or low confidence) was\
\ positively associated with performance, so poor performers were appropriately\
\ less confident\u2014not more confident\u2014than good performers. Crucially,\
\ these metacognitive factors did not cause the DKE pattern, which was driven\
\ overwhelmingly by performance scores. These results refute the dual-burden\
\ account and suggest that the classic DKE is a statistical regression artefact\
\ that tells us nothing much about metacognition."
prefix: "t\n \n \n \n\nAbstract"
suffix: '1. Introduction1.1. Skill and '
type: TextQuoteSelector
source: https://royalsocietypublishing.org/doi/10.1098/rsos.191727
text: The Dunning-Kruger effect (DKE) seems to be a statistical regression artefact
that doesn't actually explain whether people who are good at a task are able to
estimate their own abilities at the task
updated: '2022-12-13T08:23:35.919113+00:00'
uri: https://royalsocietypublishing.org/doi/10.1098/rsos.191727
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://royalsocietypublishing.org/doi/10.1098/rsos.191727
tags:
- psychology
- hypothesis
type: annotation
url: /annotations/2022/12/13/1670919815
---
<blockquote>For many intellectual tasks, the people with the least skill overestimate themselves the most, a pattern popularly known as the Dunning–Kruger effect (DKE). The dominant account of this effect depends on the idea that assessing the quality of one's performance (metacognition) requires the same mental resources as task performance itself (cognition). Unskilled people are said to suffer a dual burden: they lack the cognitive resources to perform well, and this deprives them of metacognitive insight into their failings. In this Registered Report, we applied recently developed methods for the measurement of metacognition to a matrix reasoning task, to test the dual-burden account. Metacognitive sensitivity (information exploited by metacognition) tracked performance closely, so less information was exploited by the metacognitive judgements of poor performers; but metacognitive efficiency (quality of metacognitive processing itself) was unrelated to performance. Metacognitive bias (overall tendency towards high or low confidence) was positively associated with performance, so poor performers were appropriately less confident—not more confident—than good performers. Crucially, these metacognitive factors did not cause the DKE pattern, which was driven overwhelmingly by performance scores. These results refute the dual-burden account and suggest that the classic DKE is a statistical regression artefact that tells us nothing much about metacognition.</blockquote>The Dunning-Kruger effect (DKE) seems to be a statistical regression artefact that doesn't actually explain whether people who are good at a task are able to estimate their own abilities at the task
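The regression-artefact claim is easy to see in a toy simulation. This is a hedged sketch, not the paper's analysis: the skill distribution, test noise, and self-assessment noise are all invented Gaussian parameters, chosen only to show that sorting people by a noisy score manufactures the classic DKE quartile pattern without any metacognitive deficit.

```python
import random

random.seed(0)

# True skill, a noisy test score, and a noisy but *unbiased* self-estimate.
people = []
for _ in range(20_000):
    skill = random.gauss(50, 10)
    score = skill + random.gauss(0, 10)      # measured performance
    estimate = skill + random.gauss(0, 10)   # self-assessment, no systematic bias
    people.append((score, estimate))

# Sort by measured score and compare quartile means, as DKE studies do.
people.sort()
q = len(people) // 4
for name, chunk in zip(["bottom", "second", "third", "top"],
                       [people[i * q:(i + 1) * q] for i in range(4)]):
    mean_score = sum(s for s, _ in chunk) / q
    mean_est = sum(e for _, e in chunk) / q
    print(f"{name:>6} quartile: score {mean_score:5.1f}, self-estimate {mean_est:5.1f}")
```

Because the self-estimate regresses toward the mean relative to the score it was sorted on, the bottom quartile appears to "overestimate" and the top quartile to "underestimate", purely as a statistical artefact.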


@ -1,72 +0,0 @@
---
date: '2022-12-14T16:54:30'
hypothesis-meta:
created: '2022-12-14T16:54:30.584705+00:00'
document:
title:
- the new networked norm
flagged: false
group: __world__
hidden: false
id: -lRXknvPEe28bXvva9iHbg
links:
html: https://hypothes.is/a/-lRXknvPEe28bXvva9iHbg
incontext: https://hyp.is/-lRXknvPEe28bXvva9iHbg/jarche.com/2022/12/gpt-3-through-a-glass-darkly/
json: https://hypothes.is/api/annotations/-lRXknvPEe28bXvva9iHbg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- prompt-models
- nlproc
- productivity
- self-employed
- capitalism
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/div[1]/article[1]/section[1]/p[7]
endOffset: 218
startContainer: /div[1]/div[1]/div[1]/div[1]/article[1]/section[1]/p[7]
startOffset: 0
type: RangeSelector
- end: 4287
start: 4069
type: TextPositionSelector
- exact: If my interpretation of the Retrieval quadrant is correct, it will become
much more difficult to be an average, or even above average, writer. Only
the best will flourish. Perhaps we will see a rise in neo-generalists.
prefix: 'mpson, The Atlantic, 2022-12-01
'
suffix: ' If you are early in your career'
type: TextQuoteSelector
source: https://jarche.com/2022/12/gpt-3-through-a-glass-darkly/
text: This is probably true of average or poor software engineers given that GPT-3
can produce pretty reasonable code snippets
updated: '2022-12-14T16:54:30.584705+00:00'
uri: https://jarche.com/2022/12/gpt-3-through-a-glass-darkly/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://jarche.com/2022/12/gpt-3-through-a-glass-darkly/
tags:
- prompt-models
- nlproc
- productivity
- self-employed
- capitalism
- hypothesis
type: annotation
url: /annotations/2022/12/14/1671036870
---
<blockquote>If my interpretation of the Retrieval quadrant is correct, it will become much more difficult to be an average, or even above average, writer. Only the best will flourish. Perhaps we will see a rise in neo-generalists.</blockquote>This is probably true of average or poor software engineers given that GPT-3 can produce pretty reasonable code snippets


@ -1,78 +0,0 @@
---
date: '2022-12-19T14:04:52'
hypothesis-meta:
created: '2022-12-19T14:04:52.852856+00:00'
document:
title:
- My AI Safety Lecture for UT Effective Altruism
flagged: false
group: __world__
hidden: false
id: G_zRJH-mEe2Hz98VxKK5Gw
links:
html: https://hypothes.is/a/G_zRJH-mEe2Hz98VxKK5Gw
incontext: https://hyp.is/G_zRJH-mEe2Hz98VxKK5Gw/scottaaronson.blog/?p=6823
json: https://hypothes.is/api/annotations/G_zRJH-mEe2Hz98VxKK5Gw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- nlproc
target:
- selector:
- endContainer: /div[2]/div[2]/div[2]/div[1]/p[36]
endOffset: 642
startContainer: /div[2]/div[2]/div[2]/div[1]/p[36]
startOffset: 0
type: RangeSelector
- end: 13632
start: 12990
type: TextPositionSelector
- exact: "Okay, but one thing that\u2019s been found empirically is that you take\
\ commonsense questions that are flubbed by GPT-2, let\u2019s say, and you\
\ try them on GPT-3, and very often now it gets them right. You take the\
\ things that the original GPT-3 flubbed, and you try them on the latest public\
\ model, which is sometimes called GPT-3.5 (incorporating an advance called\
\ InstructGPT), and again it often gets them right. So it\u2019s extremely\
\ risky right now to pin your case against AI on these sorts of examples!\
\ Very plausibly, just one more order of magnitude of scale is all it\u2019\
ll take to kick the ball in, and then you\u2019ll have to move the goal again."
prefix: ' Cheetahs are faster, right?
'
suffix: '
A deeper objection is that t'
type: TextQuoteSelector
source: https://scottaaronson.blog/?p=6823
text: the stochastic parrots argument could be defeated as models get bigger and
more complex
updated: '2022-12-19T14:04:52.852856+00:00'
uri: https://scottaaronson.blog/?p=6823
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671458692
---
<blockquote>Okay, but one thing that's been found empirically is that you take commonsense questions that are flubbed by GPT-2, let's say, and you try them on GPT-3, and very often now it gets them right. You take the things that the original GPT-3 flubbed, and you try them on the latest public model, which is sometimes called GPT-3.5 (incorporating an advance called InstructGPT), and again it often gets them right. So it's extremely risky right now to pin your case against AI on these sorts of examples! Very plausibly, just one more order of magnitude of scale is all it'll take to kick the ball in, and then you'll have to move the goal again.</blockquote>the stochastic parrots argument could be defeated as models get bigger and more complex


@ -1,73 +0,0 @@
---
date: '2022-12-19T14:09:11'
hypothesis-meta:
created: '2022-12-19T14:09:11.863238+00:00'
document:
title:
- My AI Safety Lecture for UT Effective Altruism
flagged: false
group: __world__
hidden: false
id: tmH8RH-mEe27ArstPwKXEA
links:
html: https://hypothes.is/a/tmH8RH-mEe27ArstPwKXEA
incontext: https://hyp.is/tmH8RH-mEe27ArstPwKXEA/scottaaronson.blog/?p=6823
json: https://hypothes.is/api/annotations/tmH8RH-mEe27ArstPwKXEA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- nlproc
target:
- selector:
- endContainer: /div[2]/div[2]/div[2]/div[1]/p[43]
endOffset: 779
startContainer: /div[2]/div[2]/div[2]/div[1]/p[43]
startOffset: 174
type: RangeSelector
- end: 16443
start: 15838
type: TextPositionSelector
- exact: " And famously, self-driving cars have taken a lot longer than many people\
\ expected a decade ago. This is partly because of regulatory barriers and\
\ public relations: even if a self-driving car actually crashes less than\
\ a human does, that\u2019s still not good enough, because when it does crash\
\ the circumstances are too weird. So, the AI is actually held to a higher\
\ standard. But it\u2019s also partly just that there was a long tail of\
\ really weird events. A deer crosses the road, or you have some crazy lighting\
\ conditions\u2014such things are really hard to get right, and of course\
\ 99% isn\u2019t good enough here."
prefix: ' the last jobs to be automated. '
suffix: '
We can maybe fuzzily see ahe'
type: TextQuoteSelector
source: https://scottaaronson.blog/?p=6823
text: I think the emphasis is wrong here. The regulation is secondary. The long
tail of weird events is the more important thing.
updated: '2022-12-19T14:09:11.863238+00:00'
uri: https://scottaaronson.blog/?p=6823
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671458951
---
<blockquote> And famously, self-driving cars have taken a lot longer than many people expected a decade ago. This is partly because of regulatory barriers and public relations: even if a self-driving car actually crashes less than a human does, that's still not good enough, because when it does crash the circumstances are too weird. So, the AI is actually held to a higher standard. But it's also partly just that there was a long tail of really weird events. A deer crosses the road, or you have some crazy lighting conditions—such things are really hard to get right, and of course 99% isn't good enough here.</blockquote>I think the emphasis is wrong here. The regulation is secondary. The long tail of weird events is the more important thing.


@ -1,62 +0,0 @@
---
date: '2022-12-19T14:20:33'
hypothesis-meta:
created: '2022-12-19T14:20:33.068063+00:00'
document:
title:
- My AI Safety Lecture for UT Effective Altruism
flagged: false
group: __world__
hidden: false
id: TGVxKn-oEe2vUGtB_ufnbw
links:
html: https://hypothes.is/a/TGVxKn-oEe2vUGtB_ufnbw
incontext: https://hyp.is/TGVxKn-oEe2vUGtB_ufnbw/scottaaronson.blog/?p=6823
json: https://hypothes.is/api/annotations/TGVxKn-oEe2vUGtB_ufnbw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ai
- nlproc
target:
- selector:
- endContainer: /div[2]/div[2]/div[2]/div[1]/p[49]
endOffset: 48
startContainer: /div[2]/div[2]/div[2]/div[1]/p[49]
startOffset: 33
type: RangeSelector
- end: 19549
start: 19534
type: TextPositionSelector
- exact: " \u201CAI alignment\u201D"
prefix: t the other end of the spectrum,
suffix: ' is where you believe that reall'
type: TextQuoteSelector
source: https://scottaaronson.blog/?p=6823
text: AI Alignment is the Terminator situation. This versus AI Ethics, which is
more the concern around current models being racist etc.
updated: '2022-12-19T14:20:33.068063+00:00'
uri: https://scottaaronson.blog/?p=6823
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- ai
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671459633
---
<blockquote> “AI alignment”</blockquote>AI Alignment is the Terminator situation. This versus AI Ethics, which is more the concern around current models being racist etc.


@ -1,78 +0,0 @@
---
date: '2022-12-19T14:46:26'
hypothesis-meta:
created: '2022-12-19T14:46:26.361697+00:00'
document:
title:
- My AI Safety Lecture for UT Effective Altruism
flagged: false
group: __world__
hidden: false
id: 6k0-pn-rEe20ccNOEgwbaQ
links:
html: https://hypothes.is/a/6k0-pn-rEe20ccNOEgwbaQ
incontext: https://hyp.is/6k0-pn-rEe20ccNOEgwbaQ/scottaaronson.blog/?p=6823
json: https://hypothes.is/api/annotations/6k0-pn-rEe20ccNOEgwbaQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- nlproc
- explainability
target:
- selector:
- endContainer: /div[2]/div[2]/div[2]/div[1]/p[68]
endOffset: 803
startContainer: /div[2]/div[2]/div[2]/div[1]/p[68]
startOffset: 0
type: RangeSelector
- end: 27975
start: 27172
type: TextPositionSelector
- exact: "(3) A third direction, and I would say maybe the most popular one in\
\ AI alignment research right now, is called interpretability. This is also\
\ a major direction in mainstream machine learning research, so there\u2019\
s a big point of intersection there. The idea of interpretability is, why\
\ don\u2019t we exploit the fact that we actually have complete access to\
\ the code of the AI\u2014or if it\u2019s a neural net, complete access to\
\ its parameters? So we can look inside of it. We can do the AI analogue\
\ of neuroscience. Except, unlike an fMRI machine, which gives you only an\
\ extremely crude snapshot of what a brain is doing, we can see exactly what\
\ every neuron in a neural net is doing at every point in time. If we don\u2019\
t exploit that, then aren\u2019t we trying to make AI safe with our hands\
\ tied behind our backs?"
prefix: ' take over the world, right?
'
suffix: "\n\n\n\nSo we should look inside\u2014but"
type: TextQuoteSelector
source: https://scottaaronson.blog/?p=6823
text: Interesting metaphor - it is a bit like MRI for neural networks but actually
more accurate/powerful
updated: '2022-12-19T14:46:26.361697+00:00'
uri: https://scottaaronson.blog/?p=6823
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- nlproc
- explainability
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671461186
---
<blockquote>(3) A third direction, and I would say maybe the most popular one in AI alignment research right now, is called interpretability. This is also a major direction in mainstream machine learning research, so there's a big point of intersection there. The idea of interpretability is, why don't we exploit the fact that we actually have complete access to the code of the AI—or if it's a neural net, complete access to its parameters? So we can look inside of it. We can do the AI analogue of neuroscience. Except, unlike an fMRI machine, which gives you only an extremely crude snapshot of what a brain is doing, we can see exactly what every neuron in a neural net is doing at every point in time. If we don't exploit that, then aren't we trying to make AI safe with our hands tied behind our backs?</blockquote>Interesting metaphor - it is a bit like MRI for neural networks but actually more accurate/powerful


@ -1,68 +0,0 @@
---
date: '2022-12-19T14:50:09'
hypothesis-meta:
created: '2022-12-19T14:50:09.008193+00:00'
document:
title:
- My AI Safety Lecture for UT Effective Altruism
flagged: false
group: __world__
hidden: false
id: bvVepH-sEe2uPgfvTF7V-w
links:
html: https://hypothes.is/a/bvVepH-sEe2uPgfvTF7V-w
incontext: https://hyp.is/bvVepH-sEe2uPgfvTF7V-w/scottaaronson.blog/?p=6823
json: https://hypothes.is/api/annotations/bvVepH-sEe2uPgfvTF7V-w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- explainability
- nlproc
target:
- selector:
- endContainer: /div[2]/div[2]/div[2]/div[1]/p[72]
endOffset: 437
startContainer: /div[2]/div[2]/div[2]/div[1]/p[72]
startOffset: 10
type: RangeSelector
- end: 29171
start: 28744
type: TextPositionSelector
- exact: " Eventually GPT will say, \u201Coh, I know what game we\u2019re playing!\
\ it\u2019s the \u2018give false answers\u2019 game!\u201D And it will then\
\ continue playing that game and give you more false answers. What the new\
\ paper shows is that, in such cases, one can actually look at the inner layers\
\ of the neural net and find where it has an internal representation of what\
\ was the true answer, which then gets overridden once you get to the output\
\ layer."
prefix: "Does 2+2=4? No.\u201D\n\n\n\n\nand so on."
suffix: "\n\n\n\nTo be clear, there\u2019s no know"
type: TextQuoteSelector
source: https://scottaaronson.blog/?p=6823
text: this is fascinating - GPT learns the true answer to a question but will ignore
it and let the user override this in later layers of the model
updated: '2022-12-19T14:50:09.008193+00:00'
uri: https://scottaaronson.blog/?p=6823
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- explainability
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671461409
---
<blockquote> Eventually GPT will say, “oh, I know what game we’re playing! it’s the ‘give false answers’ game!” And it will then continue playing that game and give you more false answers. What the new paper shows is that, in such cases, one can actually look at the inner layers of the neural net and find where it has an internal representation of what was the true answer, which then gets overridden once you get to the output layer.</blockquote>this is fascinating - GPT learns the true answer to a question but will ignore it and let the user override this in later layers of the model
@ -1,69 +0,0 @@
---
date: '2022-12-19T14:55:52'
hypothesis-meta:
created: '2022-12-19T14:55:52.384335+00:00'
document:
title:
- My AI Safety Lecture for UT Effective Altruism
flagged: false
group: __world__
hidden: false
id: O7YUan-tEe29vjfmuBFMKQ
links:
html: https://hypothes.is/a/O7YUan-tEe29vjfmuBFMKQ
incontext: https://hyp.is/O7YUan-tEe29vjfmuBFMKQ/scottaaronson.blog/?p=6823
json: https://hypothes.is/api/annotations/O7YUan-tEe29vjfmuBFMKQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- explainability
- nlproc
target:
- selector:
- endContainer: /div[2]/div[2]/div[2]/div[1]/p[95]
endOffset: 193
startContainer: /div[2]/div[2]/div[2]/div[1]/p[95]
startOffset: 0
type: RangeSelector
- end: 38138
start: 37945
type: TextPositionSelector
- exact: So then to watermark, instead of selecting the next token randomly, the
idea will be to select it pseudorandomly, using a cryptographic pseudorandom
function, whose key is known only to OpenAI.
prefix: 'of output tokens) each time.
'
suffix: " That won\u2019t make any detectable"
type: TextQuoteSelector
source: https://scottaaronson.blog/?p=6823
text: Watermarking by applying cryptographic pseudorandom functions to the model
output instead of truly random sampling
updated: '2022-12-19T14:55:52.384335+00:00'
uri: https://scottaaronson.blog/?p=6823
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- explainability
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671461752
---
<blockquote>So then to watermark, instead of selecting the next token randomly, the idea will be to select it pseudorandomly, using a cryptographic pseudorandom function, whose key is known only to OpenAI.</blockquote>Watermarking by applying cryptographic pseudorandom functions to the model output instead of truly random sampling
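The idea in the quote can be sketched in a few lines. This is a toy illustration, not OpenAI's actual scheme: the key, the PRF construction, and the way candidates are scored are all assumptions for the sake of the example.

```python
import hmac
import hashlib

# Assumption: a secret key held only by the model provider.
SECRET_KEY = b"provider-secret-key"

def prf(ngram, candidate):
    """Keyed pseudorandom function: maps an (n-gram context, candidate token)
    pair to a deterministic value in [0, 1)."""
    message = " ".join(ngram + (candidate,)).encode("utf-8")
    digest = hmac.new(SECRET_KEY, message, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def pick_next_token(ngram, candidates):
    """Select the next token pseudorandomly: deterministic given the key,
    but indistinguishable from random sampling without it."""
    return max(candidates, key=lambda tok: prf(ngram, tok))
```

Anyone holding the key can later recompute the PRF over a text's n-grams and check whether the chosen tokens score suspiciously high; without the key the choices look random.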
@ -1,77 +0,0 @@
---
date: '2022-12-19T14:57:08'
hypothesis-meta:
created: '2022-12-19T14:57:08.575784+00:00'
document:
title:
- My AI Safety Lecture for UT Effective Altruism
flagged: false
group: __world__
hidden: false
id: aQ51un-tEe29v2MBjEX6Xw
links:
html: https://hypothes.is/a/aQ51un-tEe29v2MBjEX6Xw
incontext: https://hyp.is/aQ51un-tEe29v2MBjEX6Xw/scottaaronson.blog/?p=6823
json: https://hypothes.is/api/annotations/aQ51un-tEe29v2MBjEX6Xw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- explainability
- nlproc
target:
- selector:
- endContainer: /div[2]/div[2]/div[2]/div[1]/p[99]
endOffset: 386
startContainer: /div[2]/div[2]/div[2]/div[1]/p[99]
startOffset: 0
type: RangeSelector
- end: 40910
start: 40524
type: TextPositionSelector
- exact: "Anyway, we actually have a working prototype of the watermarking scheme,\
\ built by OpenAI engineer Hendrik Kirchner. It seems to work pretty well\u2014\
empirically, a few hundred tokens seem to be enough to get a reasonable signal\
\ that yes, this text came from GPT. In principle, you could even take a\
\ long text and isolate which parts probably came from GPT and which parts\
\ probably didn\u2019t."
prefix: 'irst hundred prime numbers).
'
suffix: '
Now, this can all be defeate'
type: TextQuoteSelector
source: https://scottaaronson.blog/?p=6823
text: Scott's team has already developed a prototype watermarking scheme at OpenAI
and it works pretty well
updated: '2022-12-19T14:57:08.575784+00:00'
uri: https://scottaaronson.blog/?p=6823
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- explainability
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671461828
---
<blockquote>Anyway, we actually have a working prototype of the watermarking scheme, built by OpenAI engineer Hendrik Kirchner. It seems to work pretty well—empirically, a few hundred tokens seem to be enough to get a reasonable signal that yes, this text came from GPT. In principle, you could even take a long text and isolate which parts probably came from GPT and which parts probably didn’t.</blockquote>Scott's team has already developed a prototype watermarking scheme at OpenAI and it works pretty well
@ -1,71 +0,0 @@
---
date: '2022-12-19T14:58:05'
hypothesis-meta:
created: '2022-12-19T14:58:05.006973+00:00'
document:
title:
- My AI Safety Lecture for UT Effective Altruism
flagged: false
group: __world__
hidden: false
id: iqqNRH-tEe2fKTMGgQumvA
links:
html: https://hypothes.is/a/iqqNRH-tEe2fKTMGgQumvA
incontext: https://hyp.is/iqqNRH-tEe2fKTMGgQumvA/scottaaronson.blog/?p=6823
json: https://hypothes.is/api/annotations/iqqNRH-tEe2fKTMGgQumvA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- explainability
- nlproc
target:
- selector:
- endContainer: /div[2]/div[2]/div[2]/div[1]/p[100]
endOffset: 429
startContainer: /div[2]/div[2]/div[2]/div[1]/p[100]
startOffset: 0
type: RangeSelector
- end: 41343
start: 40914
type: TextPositionSelector
- exact: "Now, this can all be defeated with enough effort. For example, if you\
\ used another AI to paraphrase GPT\u2019s output\u2014well okay, we\u2019\
re not going to be able to detect that. On the other hand, if you just insert\
\ or delete a few words here and there, or rearrange the order of some sentences,\
\ the watermarking signal will still be there. Because it depends only on\
\ a sum over n-grams, it\u2019s robust against those sorts of interventions."
prefix: "which parts probably didn\u2019t.\n\n\n\n"
suffix: '
The hope is that this can be'
type: TextQuoteSelector
source: https://scottaaronson.blog/?p=6823
text: this mechanism can be defeated by paraphrasing the output with another model
updated: '2022-12-19T14:58:05.006973+00:00'
uri: https://scottaaronson.blog/?p=6823
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://scottaaronson.blog/?p=6823
tags:
- explainability
- nlproc
- hypothesis
type: annotation
url: /annotations/2022/12/19/1671461885
---
<blockquote>Now, this can all be defeated with enough effort. For example, if you used another AI to paraphrase GPT’s output—well okay, we’re not going to be able to detect that. On the other hand, if you just insert or delete a few words here and there, or rearrange the order of some sentences, the watermarking signal will still be there. Because it depends only on a sum over n-grams, it’s robust against those sorts of interventions.</blockquote>this mechanism can be defeated by paraphrasing the output with another model
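The robustness claim can be illustrated with a toy detector: if the score is an average over n-gram PRF values, then deleting or inserting a few words only disturbs the handful of n-grams that overlap the edit. This is a hedged sketch, not the real scheme; the key, the PRF, and the n-gram size are assumptions.

```python
import hmac
import hashlib

KEY = b"provider-secret-key"  # assumption: provider-held watermarking key

def prf(context, token):
    """Keyed PRF mapping (context n-gram, token) to a value in [0, 1)."""
    message = " ".join(context + (token,)).encode("utf-8")
    digest = hmac.new(KEY, message, hashlib.sha256).digest()
    return int.from_bytes(digest[:8], "big") / 2**64

def watermark_score(tokens, n=3):
    """Mean PRF value over all n-grams in the text. Watermarked text, whose
    tokens were chosen to maximise the PRF, scores high; a small edit only
    changes the terms whose n-grams touch the edited position."""
    grams = [(tuple(tokens[i:i + n - 1]), tokens[i + n - 1])
             for i in range(len(tokens) - n + 1)]
    if not grams:
        return 0.0
    return sum(prf(ctx, tok) for ctx, tok in grams) / len(grams)
```

Because each n-gram contributes independently, rearranging or deleting a few words degrades the score gracefully rather than destroying it, which is the robustness property the quote describes.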
@ -1,68 +0,0 @@
---
date: '2022-12-24T17:14:54'
hypothesis-meta:
created: '2022-12-24T17:14:54.010952+00:00'
document:
title:
- "TSS #050: Growing Your Audience in 2023"
flagged: false
group: __world__
hidden: false
id: e6OUxoOuEe2xiNehYwfZHw
links:
html: https://hypothes.is/a/e6OUxoOuEe2xiNehYwfZHw
incontext: https://hyp.is/e6OUxoOuEe2xiNehYwfZHw/www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
json: https://hypothes.is/api/annotations/e6OUxoOuEe2xiNehYwfZHw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- writing
- solopreneur
target:
- selector:
- endContainer: /table[1]/tbody[1]/tr[1]/td[1]/div[1]/div[2]/table[1]/tbody[1]/tr[2]/td[1]/table[1]/tbody[1]/tr[1]/td[3]/table[1]/tbody[1]/tr[1]/td[1]/p[16]/span[1]
endOffset: 183
startContainer: /table[1]/tbody[1]/tr[1]/td[1]/div[1]/div[2]/table[1]/tbody[1]/tr[2]/td[1]/table[1]/tbody[1]/tr[1]/td[3]/table[1]/tbody[1]/tr[1]/td[1]/p[16]/span[1]
startOffset: 0
type: RangeSelector
- end: 4162
start: 3979
type: TextPositionSelector
- exact: My goal with my content is to make it so recognizable that you would
know it was me even if it didn't have my name on it. The same style. The same
thought process. The same character.
prefix: 'lays very well on social media.
'
suffix: '
So work on becoming familiar. M'
type: TextQuoteSelector
source: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
text: building a recognisable tone of voice can help with repeat visitors
updated: '2022-12-24T17:14:54.010952+00:00'
uri: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
tags:
- writing
- solopreneur
- hypothesis
type: annotation
url: /annotations/2022/12/24/1671902094
---
<blockquote>My goal with my content is to make it so recognizable that you would know it was me even if it didn't have my name on it. The same style. The same thought process. The same character.</blockquote>building a recognisable tone of voice can help with repeat visitors
@ -1,68 +0,0 @@
---
date: '2022-12-24T17:16:23'
hypothesis-meta:
created: '2022-12-24T17:16:23.873352+00:00'
document:
title:
- "TSS #050: Growing Your Audience in 2023"
flagged: false
group: __world__
hidden: false
id: sTMi5oOuEe20vWv9syGtAA
links:
html: https://hypothes.is/a/sTMi5oOuEe20vWv9syGtAA
incontext: https://hyp.is/sTMi5oOuEe20vWv9syGtAA/www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
json: https://hypothes.is/api/annotations/sTMi5oOuEe20vWv9syGtAA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- writing
- solopreneur
- learning-in-public
target:
- selector:
- endContainer: /table[1]/tbody[1]/tr[1]/td[1]/div[1]/div[2]/table[1]/tbody[1]/tr[2]/td[1]/table[1]/tbody[1]/tr[1]/td[3]/table[1]/tbody[1]/tr[1]/td[1]/p[21]/span[1]
endOffset: 105
startContainer: /table[1]/tbody[1]/tr[1]/td[1]/div[1]/div[2]/table[1]/tbody[1]/tr[2]/td[1]/table[1]/tbody[1]/tr[1]/td[3]/table[1]/tbody[1]/tr[1]/td[1]/p[21]/span[1]
startOffset: 0
type: RangeSelector
- end: 4706
start: 4601
type: TextPositionSelector
- exact: "Don\u2019t try to convince everyone that what you say, feel, think,\
\ or have done is better than everyone else."
prefix: " Don\u2019t be better. Be different.\n"
suffix: '
Instead, come at your audience '
type: TextQuoteSelector
source: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
text: This is pretty normal for those of us who are academically inclined so it
shouldn't be too much of a stretch - after all a lot of the time what we're doing
is thinking about other people's works critically
updated: '2022-12-24T17:16:23.873352+00:00'
uri: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
tags:
- writing
- solopreneur
- learning-in-public
- hypothesis
type: annotation
url: /annotations/2022/12/24/1671902183
---
<blockquote>Don’t try to convince everyone that what you say, feel, think, or have done is better than everyone else.</blockquote>This is pretty normal for those of us who are academically inclined so it shouldn't be too much of a stretch - after all a lot of the time what we're doing is thinking about other people's works critically
@ -1,68 +0,0 @@
---
date: '2022-12-24T17:17:25'
hypothesis-meta:
created: '2022-12-24T17:17:25.549800+00:00'
document:
title:
- "TSS #050: Growing Your Audience in 2023"
flagged: false
group: __world__
hidden: false
id: 1fZ_aoOuEe2-498cRmzXxg
links:
html: https://hypothes.is/a/1fZ_aoOuEe2-498cRmzXxg
incontext: https://hyp.is/1fZ_aoOuEe2-498cRmzXxg/www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
json: https://hypothes.is/api/annotations/1fZ_aoOuEe2-498cRmzXxg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- solopreneur
- learning-in-public
target:
- selector:
- endContainer: /table[1]/tbody[1]/tr[1]/td[1]/div[1]/div[2]/table[1]/tbody[1]/tr[2]/td[1]/table[1]/tbody[1]/tr[1]/td[3]/table[1]/tbody[1]/tr[1]/td[1]/p[29]/span[1]
endOffset: 202
startContainer: /table[1]/tbody[1]/tr[1]/td[1]/div[1]/div[2]/table[1]/tbody[1]/tr[2]/td[1]/table[1]/tbody[1]/tr[1]/td[3]/table[1]/tbody[1]/tr[1]/td[1]/p[29]/span[1]
startOffset: 0
type: RangeSelector
- end: 5901
start: 5699
type: TextPositionSelector
- exact: "My goal was simply to scale this ladder over time. I worked the list\
\ 5 people at a time,\_starting at the bottom. I engaged relentlessly with\
\ those accounts until they noticed me and began engaging back."
prefix: 'lowers up to 100,000 followers.
'
suffix: '
I used that engagement to grow '
type: TextQuoteSelector
source: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
text: Interesting approach and these people are going to be great candidates for
picking up new knowledge and self-learning from too!
updated: '2022-12-24T17:17:25.549800+00:00'
uri: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.justinwelsh.me/e/BAh7BjoWZW1haWxfZGVsaXZlcnlfaWRsKwhE9WlwAgA=--836e2ece0a095e4d01929e8555a80a8a654627b4?skip_click_tracking=true
tags:
- solopreneur
- learning-in-public
- hypothesis
type: annotation
url: /annotations/2022/12/24/1671902245
---
<blockquote>My goal was simply to scale this ladder over time. I worked the list 5 people at a time, starting at the bottom. I engaged relentlessly with those accounts until they noticed me and began engaging back.</blockquote>Interesting approach and these people are going to be great candidates for picking up new knowledge and self-learning from too!
@ -1,64 +0,0 @@
---
date: '2022-12-31T18:39:18'
hypothesis-meta:
created: '2022-12-31T18:39:18.043992+00:00'
document:
title:
- "Don\u2019t Just Set Goals. Build Systems"
flagged: false
group: __world__
hidden: false
id: bv2yzok6Ee22tF9qOXweaQ
links:
html: https://hypothes.is/a/bv2yzok6Ee22tF9qOXweaQ
incontext: https://hyp.is/bv2yzok6Ee22tF9qOXweaQ/medium.com/swlh/dont-just-set-goals-build-systems-8158ac541df
json: https://hypothes.is/api/annotations/bv2yzok6Ee22tF9qOXweaQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- Productivity
- pkm
target:
- selector:
- endContainer: /div[1]/div[1]/div[3]/div[2]/div[1]/main[1]/div[1]/div[3]/div[1]/div[1]/article[1]/div[1]/div[2]/section[1]/div[1]/div[2]/p[28]
endOffset: 141
startContainer: /div[1]/div[1]/div[3]/div[2]/div[1]/main[1]/div[1]/div[3]/div[1]/div[1]/article[1]/div[1]/div[2]/section[1]/div[1]/div[2]/p[27]
startOffset: 0
type: RangeSelector
- end: 4516
start: 4304
type: TextPositionSelector
- exact: "Positive fantasies allow you to indulge in the desired future mentally\u2026\
You can taste the sensations of what it\u2019s like to achieve your goal in\
\ the present \u2014 this depletes your energy to pursue your desired future."
prefix: ng your goals is by fantasizing.
suffix: "You\u2019re also not alert to the obs"
type: TextQuoteSelector
source: https://medium.com/swlh/dont-just-set-goals-build-systems-8158ac541df
text: 'It''s easy to get caught up fantasising about what you could achieve rather
than actually taking action to achieve it. '
updated: '2022-12-31T18:39:18.043992+00:00'
uri: https://medium.com/swlh/dont-just-set-goals-build-systems-8158ac541df
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://medium.com/swlh/dont-just-set-goals-build-systems-8158ac541df
tags:
- Productivity
- pkm
- hypothesis
type: annotation
url: /annotations/2022/12/31/1672511958
---
<blockquote>Positive fantasies allow you to indulge in the desired future mentally…You can taste the sensations of what it’s like to achieve your goal in the present — this depletes your energy to pursue your desired future.</blockquote>It's easy to get caught up fantasising about what you could achieve rather than actually taking action to achieve it.
@ -1,63 +0,0 @@
---
date: '2022-12-31T18:41:15'
hypothesis-meta:
created: '2022-12-31T18:41:15.494522+00:00'
document:
title:
- "Don\u2019t Just Set Goals. Build Systems"
flagged: false
group: __world__
hidden: false
id: tPg4MIk6Ee2E9QfeyL1ksQ
links:
html: https://hypothes.is/a/tPg4MIk6Ee2E9QfeyL1ksQ
incontext: https://hyp.is/tPg4MIk6Ee2E9QfeyL1ksQ/medium.com/swlh/dont-just-set-goals-build-systems-8158ac541df
json: https://hypothes.is/api/annotations/tPg4MIk6Ee2E9QfeyL1ksQ
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- productivity
- psychology
target:
- selector:
- endContainer: /div[1]/div[1]/div[3]/div[2]/div[1]/main[1]/div[1]/div[3]/div[1]/div[1]/article[1]/div[1]/div[2]/section[1]/div[1]/div[2]/p[44]
endOffset: 123
startContainer: /div[1]/div[1]/div[3]/div[2]/div[1]/main[1]/div[1]/div[3]/div[1]/div[1]/article[1]/div[1]/div[2]/section[1]/div[1]/div[2]/p[44]
startOffset: 0
type: RangeSelector
- end: 5742
start: 5619
type: TextPositionSelector
- exact: Happiness is pushed to some later date in the future while your present
self battles with the misery of the current moment.
prefix: opting the goal-first mentality.
suffix: 'The reason it occurs is simple: '
type: TextQuoteSelector
source: https://medium.com/swlh/dont-just-set-goals-build-systems-8158ac541df
text: 'Journey before destination, don''t get caught up in the future, you''ll miss
the now. Instead, [rest in motion](https://mindingourway.com/rest-in-motion/) '
updated: '2022-12-31T18:41:15.494522+00:00'
uri: https://medium.com/swlh/dont-just-set-goals-build-systems-8158ac541df
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://medium.com/swlh/dont-just-set-goals-build-systems-8158ac541df
tags:
- productivity
- psychology
- hypothesis
type: annotation
url: /annotations/2022/12/31/1672512075
---
<blockquote>Happiness is pushed to some later date in the future while your present self battles with the misery of the current moment.</blockquote>Journey before destination, don't get caught up in the future, you'll miss the now. Instead, [rest in motion](https://mindingourway.com/rest-in-motion/)
@ -1,76 +0,0 @@
---
date: '2023-01-18T06:44:57'
hypothesis-meta:
created: '2023-01-18T06:44:57.024539+00:00'
document:
title:
- How to process reading annotations into evergreen notes
flagged: false
group: __world__
hidden: false
id: nz1iOpb7Ee2ZZtczxJmosw
links:
html: https://hypothes.is/a/nz1iOpb7Ee2ZZtczxJmosw
incontext: https://hyp.is/nz1iOpb7Ee2ZZtczxJmosw/notes.andymatuschak.org/How_to_process_reading_annotations_into_evergreen_notes
json: https://hypothes.is/api/annotations/nz1iOpb7Ee2ZZtczxJmosw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- pkm
- Tools For Thought
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/p[2]
endOffset: 376
startContainer: /div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/p[2]
startOffset: 129
type: RangeSelector
- end: 736
start: 489
type: TextPositionSelector
- exact: You need to take a step back and form a picture of the overall structure
of the ideas. Concretely, you might do that by clustering your scraps into
piles and observing the structure that emerges. Or you might sketch a mind
map or a visual outline.
prefix: ', so what are the key concepts? '
suffix: ' The structure you observe does '
type: TextQuoteSelector
source: https://notes.andymatuschak.org/How_to_process_reading_annotations_into_evergreen_notes
text: 'Andy suggests taking a step back and clustering annotations into piles or
using a mind map or visualisations to identify common themes.
I wonder if this is a bit overkill for the number of notes I tend to take or a
sign that I''m not taking enough notes?
What tools are out there that could integrate with my stack and help me do this?'
updated: '2023-01-18T06:44:57.024539+00:00'
uri: https://notes.andymatuschak.org/How_to_process_reading_annotations_into_evergreen_notes
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://notes.andymatuschak.org/How_to_process_reading_annotations_into_evergreen_notes
tags:
- pkm
- Tools For Thought
- hypothesis
type: annotation
url: /annotations/2023/01/18/1674024297
---
<blockquote>You need to take a step back and form a picture of the overall structure of the ideas. Concretely, you might do that by clustering your scraps into piles and observing the structure that emerges. Or you might sketch a mind map or a visual outline.</blockquote>Andy suggests taking a step back and clustering annotations into piles or using a mind map or visualisations to identify common themes.
I wonder if this is a bit overkill for the number of notes I tend to take or a sign that I'm not taking enough notes?
What tools are out there that could integrate with my stack and help me do this?
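One low-tech way to start the "clustering into piles" step with annotations like these: group them by tag and see which piles grow largest. A sketch only; the annotation shape mirrors the frontmatter used on this site, and grouping purely by tag is an assumed simplification of what Andy describes.

```python
from collections import defaultdict

def pile_by_tag(annotations):
    """Cluster annotations into piles, one pile per tag. An annotation with
    several tags lands in several piles, which is fine for spotting themes."""
    piles = defaultdict(list)
    for note in annotations:
        for tag in note.get("tags", []):
            piles[tag].append(note["text"])
    # Biggest piles first: these are the emerging themes worth a note of their own.
    return dict(sorted(piles.items(), key=lambda kv: -len(kv[1])))

# Hypothetical example annotations in the frontmatter's shape.
notes = [
    {"tags": ["pkm", "Tools For Thought"], "text": "cluster scraps into piles"},
    {"tags": ["pkm", "Zettelkasten"], "text": "adapting Christian Tietze's process"},
    {"tags": ["writing"], "text": "build a recognisable tone of voice"},
]
piles = pile_by_tag(notes)
```

Even this crude grouping surfaces the dominant theme (here, `pkm`), which is the kind of emergent structure the clustering step is meant to reveal.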
@ -1,64 +0,0 @@
---
date: '2023-01-18T06:46:08'
hypothesis-meta:
created: '2023-01-18T06:46:08.209473+00:00'
document:
title:
- How to process reading annotations into evergreen notes
flagged: false
group: __world__
hidden: false
id: yabyepb7Ee2dfE-INbM_6Q
links:
html: https://hypothes.is/a/yabyepb7Ee2dfE-INbM_6Q
incontext: https://hyp.is/yabyepb7Ee2dfE-INbM_6Q/notes.andymatuschak.org/How_to_process_reading_annotations_into_evergreen_notes
json: https://hypothes.is/api/annotations/yabyepb7Ee2dfE-INbM_6Q
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- pkm
- Zettelkasten
- Tools For Thought
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/p[3]
endOffset: 189
startContainer: /div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/div[1]/p[3]
startOffset: 100
type: RangeSelector
- end: 1085
start: 996
type: TextPositionSelector
- exact: "Here I\u2019ve summarized Christian Tietze\u2019s process, which I\u2019\
m presently adopting / adapting:"
prefix: 'rative process of note-writing. '
suffix: Write a broad note which capture
type: TextQuoteSelector
source: https://notes.andymatuschak.org/How_to_process_reading_annotations_into_evergreen_notes
text: Andy is adapting the approach of Zettelkasten writer Christian Tietze
updated: '2023-01-18T06:47:33.827850+00:00'
uri: https://notes.andymatuschak.org/How_to_process_reading_annotations_into_evergreen_notes
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://notes.andymatuschak.org/How_to_process_reading_annotations_into_evergreen_notes
tags:
- pkm
- Zettelkasten
- Tools For Thought
- hypothesis
type: annotation
url: /annotations/2023/01/18/1674024368
---
<blockquote>Here I’ve summarized Christian Tietze’s process, which I’m presently adopting / adapting:</blockquote>Andy is adapting the approach of Zettelkasten writer Christian Tietze
@ -1,77 +0,0 @@
---
date: '2023-01-22T07:31:55'
hypothesis-meta:
created: '2023-01-22T07:31:55.232729+00:00'
document:
title:
- Who Owns the Generative AI Platform? | Andreessen Horowitz
flagged: false
group: __world__
hidden: false
id: 2K_fLpomEe2ZVWufTaYTPg
links:
html: https://hypothes.is/a/2K_fLpomEe2ZVWufTaYTPg
incontext: https://hyp.is/2K_fLpomEe2ZVWufTaYTPg/a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
json: https://hypothes.is/api/annotations/2K_fLpomEe2ZVWufTaYTPg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- AI
- generative ai
- startups
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[5]/span[4]
endOffset: 104
startContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[5]/span[1]
startOffset: 0
type: RangeSelector
- end: 10213
start: 9651
type: TextPositionSelector
- exact: "Over the last year, we\u2019ve met with dozens of startup founders and\
\ operators in large companies who deal directly with generative AI. We\u2019\
ve observed that infrastructure vendors are likely the biggest winners in\
\ this market so far, capturing the majority of dollars flowing through the\
\ stack. Application companies are growing topline revenues very quickly but\
\ often struggle with retention, product differentiation, and gross margins.\
\ And most model providers, though responsible for the very existence of this\
\ market, haven\u2019t yet achieved large commercial scale."
prefix: ' this market will value accrue?
'
suffix: '
In other words, the companies c'
type: TextQuoteSelector
source: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
text: Infrastructure vendors are laughing all the way to the bank because companies
are dumping millions on GPUs. Meanwhile, the people building apps on top of these
models are struggling. We've seen this sort of gold-rush before and infrastructure
providers are selling the shovels.
updated: '2023-01-22T07:31:55.232729+00:00'
uri: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
tags:
- AI
- generative ai
- startups
- hypothesis
type: annotation
url: /annotations/2023/01/22/1674372715
---
<blockquote>Over the last year, we’ve met with dozens of startup founders and operators in large companies who deal directly with generative AI. We’ve observed that infrastructure vendors are likely the biggest winners in this market so far, capturing the majority of dollars flowing through the stack. Application companies are growing topline revenues very quickly but often struggle with retention, product differentiation, and gross margins. And most model providers, though responsible for the very existence of this market, haven’t yet achieved large commercial scale.</blockquote>Infrastructure vendors are laughing all the way to the bank because companies are dumping millions on GPUs. Meanwhile, the people building apps on top of these models are struggling. We've seen this sort of gold-rush before and infrastructure providers are selling the shovels.
@ -1,66 +0,0 @@
---
date: '2023-01-22T10:52:04'
hypothesis-meta:
created: '2023-01-22T10:52:04.322820+00:00'
document:
title:
- Who Owns the Generative AI Platform? | Andreessen Horowitz
flagged: false
group: __world__
hidden: false
id: zqgktppCEe2mGhczyiwYLg
links:
html: https://hypothes.is/a/zqgktppCEe2mGhczyiwYLg
incontext: https://hyp.is/zqgktppCEe2mGhczyiwYLg/a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
json: https://hypothes.is/api/annotations/zqgktppCEe2mGhczyiwYLg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- mlops
- llmops
- ai
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[11]/span[1]
endOffset: 417
startContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[11]/span[1]
startOffset: 282
type: RangeSelector
- end: 12204
start: 12069
type: TextPositionSelector
- exact: "We\u2019re also not going deep here on MLops or LLMops tooling, which\
\ is not yet highly standardized and will be addressed in a future post."
prefix: 'ations that have been released. '
suffix: '
The first wave of generative AI'
type: TextQuoteSelector
source: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
text: first mention of LLMops I've seen in the wild
updated: '2023-01-22T10:52:04.322820+00:00'
uri: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
tags:
- mlops
- llmops
- ai
- hypothesis
type: annotation
url: /annotations/2023/01/22/1674384724
---
<blockquote>We’re also not going deep here on MLops or LLMops tooling, which is not yet highly standardized and will be addressed in a future post.</blockquote>first mention of LLMops I've seen in the wild
@ -1,64 +0,0 @@
---
date: '2023-01-22T10:55:48'
hypothesis-meta:
created: '2023-01-22T10:55:48.838124+00:00'
document:
title:
- Who Owns the Generative AI Platform? | Andreessen Horowitz
flagged: false
group: __world__
hidden: false
id: VHeccJpDEe2aL7PP1S7d_w
links:
html: https://hypothes.is/a/VHeccJpDEe2aL7PP1S7d_w
incontext: https://hyp.is/VHeccJpDEe2aL7PP1S7d_w/a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
json: https://hypothes.is/api/annotations/VHeccJpDEe2aL7PP1S7d_w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ai
- generative ai
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[16]/span[1]
endOffset: 659
startContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[16]/span[1]
startOffset: 453
type: RangeSelector
- end: 14059
start: 13853
type: TextPositionSelector
- exact: "Many apps are also relatively undifferentiated, since they rely on similar\
\ underlying AI models and haven\u2019t discovered obvious network effects,\
\ or data/workflows, that are hard for competitors to duplicate."
prefix: 'nd retention start to tail off. '
suffix: "\nSo, it\u2019s not yet obvious that s"
type: TextQuoteSelector
source: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
text: Companies that rely on underlying AI models without adding value via model
improvements are going to find that they have no moat.
updated: '2023-01-22T10:55:48.838124+00:00'
uri: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
tags:
- ai
- generative ai
- hypothesis
type: annotation
url: /annotations/2023/01/22/1674384948
---
<blockquote>Many apps are also relatively undifferentiated, since they rely on similar underlying AI models and haven't discovered obvious network effects, or data/workflows, that are hard for competitors to duplicate.</blockquote>Companies that rely on underlying AI models without adding value via model improvements are going to find that they have no moat.


@@ -1,75 +0,0 @@
---
date: '2023-01-22T10:57:34'
hypothesis-meta:
created: '2023-01-22T10:57:34.532045+00:00'
document:
title:
- Who Owns the Generative AI Platform? | Andreessen Horowitz
flagged: false
group: __world__
hidden: false
id: k3jJlJpDEe2r9LtfV5j0MA
links:
html: https://hypothes.is/a/k3jJlJpDEe2r9LtfV5j0MA
incontext: https://hyp.is/k3jJlJpDEe2r9LtfV5j0MA/a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
json: https://hypothes.is/api/annotations/k3jJlJpDEe2r9LtfV5j0MA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ai
- generative ai
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/ul[2]/li[1]/span[2]
endOffset: 238
startContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/ul[2]/li[1]/b[1]
startOffset: 0
type: RangeSelector
- end: 15074
start: 14604
type: TextPositionSelector
- exact: "Vertical integration (\u201Cmodel + app\u201D). Consuming AI models\
\ as a service allows app developers to iterate quickly with a small team\
\ and swap model providers as technology advances. On the flip side, some\
\ devs argue that the product is the model, and that training from scratch\
\ is the only way to create defensibility \u2014 i.e. by continually re-training\
\ on proprietary product data. But it comes at the cost of much higher capital\
\ requirements and a less nimble product team."
prefix: 'tive AI app companies include:
'
suffix: '
Building features vs. apps. Gen'
type: TextQuoteSelector
source: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
text: There's definitely a middle ground of taking an open source model that is
suitably mature and fine-tuning it for a specific use case. You could start without
a moat and build one over time through collecting use data (similar to network
effect)
updated: '2023-01-22T10:57:34.532045+00:00'
uri: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
tags:
- ai
- generative ai
- hypothesis
type: annotation
url: /annotations/2023/01/22/1674385054
---
<blockquote>Vertical integration (“model + app”). Consuming AI models as a service allows app developers to iterate quickly with a small team and swap model providers as technology advances. On the flip side, some devs argue that the product is the model, and that training from scratch is the only way to create defensibility — i.e. by continually re-training on proprietary product data. But it comes at the cost of much higher capital requirements and a less nimble product team.</blockquote>There's definitely a middle ground of taking an open source model that is suitably mature and fine-tuning it for a specific use case. You could start without a moat and build one over time through collecting use data (similar to network effect)


@@ -1,67 +0,0 @@
---
date: '2023-01-22T11:00:43'
hypothesis-meta:
created: '2023-01-22T11:00:43.211118+00:00'
document:
title:
- Who Owns the Generative AI Platform? | Andreessen Horowitz
flagged: false
group: __world__
hidden: false
id: A-7m9JpEEe2JryNca-mUVg
links:
html: https://hypothes.is/a/A-7m9JpEEe2JryNca-mUVg
incontext: https://hyp.is/A-7m9JpEEe2JryNca-mUVg/a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
json: https://hypothes.is/api/annotations/A-7m9JpEEe2JryNca-mUVg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- generative ai
- AI
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[21]/span[3]
endOffset: 1
startContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[21]/span[1]
startOffset: 363
type: RangeSelector
- end: 16984
start: 16813
type: TextPositionSelector
- exact: In natural language models, OpenAI dominates with GPT-3/3.5 and ChatGPT.
But relatively few killer apps built on OpenAI exist so far, and prices have
already dropped once.
prefix: 'a core tenet of their business. '
suffix: '
This may be just a temporary ph'
type: TextQuoteSelector
source: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
text: OpenAI have already dropped prices on their GPT-3/3.5 models and relatively
few apps have emerged. This could be because companies are reluctant to build
their core offering around a third party API
updated: '2023-01-22T11:00:43.211118+00:00'
uri: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
tags:
- generative ai
- AI
- hypothesis
type: annotation
url: /annotations/2023/01/22/1674385243
---
<blockquote>In natural language models, OpenAI dominates with GPT-3/3.5 and ChatGPT. But relatively few killer apps built on OpenAI exist so far, and prices have already dropped once.</blockquote>OpenAI have already dropped prices on their GPT-3/3.5 models and relatively few apps have emerged. This could be because companies are reluctant to build their core offering around a third party API


@@ -1,73 +0,0 @@
---
date: '2023-01-22T11:02:54'
hypothesis-meta:
created: '2023-01-22T11:02:54.339397+00:00'
document:
title:
- Who Owns the Generative AI Platform? | Andreessen Horowitz
flagged: false
group: __world__
hidden: false
id: UhZ6LJpEEe2fsBs2mQHXSA
links:
html: https://hypothes.is/a/UhZ6LJpEEe2fsBs2mQHXSA
incontext: https://hyp.is/UhZ6LJpEEe2fsBs2mQHXSA/a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
json: https://hypothes.is/api/annotations/UhZ6LJpEEe2fsBs2mQHXSA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- generative ai
- AI
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/ul[3]/li[1]/span[1]
endOffset: 389
startContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/ul[3]/li[1]/b[1]
startOffset: 0
type: RangeSelector
- end: 19180
start: 18774
type: TextPositionSelector
- exact: "Commoditization. There\u2019s a common belief that AI models will converge\
\ in performance over time. Talking to app developers, it\u2019s clear that\
\ hasn\u2019t happened yet, with strong leaders in both text and image models.\
\ Their advantages are based not on unique model architectures, but on high\
\ capital requirements, proprietary product interaction data, and scarce AI\
\ talent. Will this serve as a durable advantage?"
prefix: 'stions facing model providers:
'
suffix: '
Graduation risk. Relying on mod'
type: TextQuoteSelector
source: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
text: All current generation models have more-or-less the same architecture and
training regimes. Differentiation is in the training data and the number of hyper-parameters
that the company can afford to scale to.
updated: '2023-01-22T11:02:54.339397+00:00'
uri: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
tags:
- generative ai
- AI
- hypothesis
type: annotation
url: /annotations/2023/01/22/1674385374
---
<blockquote>Commoditization. There's a common belief that AI models will converge in performance over time. Talking to app developers, it's clear that hasn't happened yet, with strong leaders in both text and image models. Their advantages are based not on unique model architectures, but on high capital requirements, proprietary product interaction data, and scarce AI talent. Will this serve as a durable advantage?</blockquote>All current generation models have more-or-less the same architecture and training regimes. Differentiation is in the training data and the number of hyper-parameters that the company can afford to scale to.


@@ -1,77 +0,0 @@
---
date: '2023-01-22T11:07:18'
hypothesis-meta:
created: '2023-01-22T11:07:18.838647+00:00'
document:
title:
- Who Owns the Generative AI Platform? | Andreessen Horowitz
flagged: false
group: __world__
hidden: false
id: 771i6ppEEe2RxNtz0udwZw
links:
html: https://hypothes.is/a/771i6ppEEe2RxNtz0udwZw
incontext: https://hyp.is/771i6ppEEe2RxNtz0udwZw/a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
json: https://hypothes.is/api/annotations/771i6ppEEe2RxNtz0udwZw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- ai
- generative ai
- gpu
target:
- selector:
- endContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[35]/span[2]
endOffset: 111
startContainer: /div[1]/div[1]/main[1]/div[1]/div[1]/article[1]/main[1]/div[1]/div[1]/div[1]/div[1]/p[35]/span[1]
startOffset: 0
type: RangeSelector
- end: 23838
start: 23155
type: TextPositionSelector
- exact: Other hardware options do exist, including Google Tensor Processing Units
(TPUs); AMD Instinct GPUs; AWS Inferentia and Trainium chips; and AI accelerators
from startups like Cerebras, Sambanova, and Graphcore. Intel, late to the
game, is also entering the market with their high-end Habana chips and Ponte
Vecchio GPUs. But so far, few of these new chips have taken significant market
share. The two exceptions to watch are Google, whose TPUs have gained traction
in the Stable Diffusion community and in some large GCP deals, and TSMC, who
is believed to manufacture all of the chips listed here, including Nvidia
GPUs (Intel uses a mix of its own fabs and TSMC to make its chips).
prefix: ' top AI chip startups combined.
'
suffix: '
Infrastructure is, in other wor'
type: TextQuoteSelector
source: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
text: Look at market share for tensorflow and pytorch which both offer first-class
nvidia support and likely spells out the story. If you are getting in to AI you
go learn one of those frameworks and they tell you to install CUDA
updated: '2023-01-22T11:07:18.838647+00:00'
uri: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://a16z.com/2023/01/19/who-owns-the-generative-ai-platform/
tags:
- ai
- generative ai
- gpu
- hypothesis
type: annotation
url: /annotations/2023/01/22/1674385638
---
<blockquote>Other hardware options do exist, including Google Tensor Processing Units (TPUs); AMD Instinct GPUs; AWS Inferentia and Trainium chips; and AI accelerators from startups like Cerebras, Sambanova, and Graphcore. Intel, late to the game, is also entering the market with their high-end Habana chips and Ponte Vecchio GPUs. But so far, few of these new chips have taken significant market share. The two exceptions to watch are Google, whose TPUs have gained traction in the Stable Diffusion community and in some large GCP deals, and TSMC, who is believed to manufacture all of the chips listed here, including Nvidia GPUs (Intel uses a mix of its own fabs and TSMC to make its chips).</blockquote>Look at market share for TensorFlow and PyTorch, which both offer first-class Nvidia support, and that likely spells out the story. If you are getting into AI you go learn one of those frameworks and they tell you to install CUDA.


@@ -1,56 +0,0 @@
---
date: '2023-01-29T10:28:35'
hypothesis-meta:
created: '2023-01-29T10:28:35.193967+00:00'
document:
title:
- 2301.11305.pdf
flagged: false
group: __world__
hidden: false
id: r54Kmp-_Ee2ki69_6avEdA
links:
html: https://hypothes.is/a/r54Kmp-_Ee2ki69_6avEdA
incontext: https://hyp.is/r54Kmp-_Ee2ki69_6avEdA/arxiv.org/pdf/2301.11305.pdf
json: https://hypothes.is/api/annotations/r54Kmp-_Ee2ki69_6avEdA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- chatgpt
- detecting gpt
target:
- selector:
- end: 1440
start: 1365
type: TextPositionSelector
- exact: See ericmitchell.ai/detectgptfor code, data, and other project information.
prefix: 'e to 0.95 AUROC for Detect-GPT. '
suffix: 1. IntroductionLarge language mo
type: TextQuoteSelector
source: https://arxiv.org/pdf/2301.11305.pdf
text: Code and data available at https://ericmitchell.ai/detectgpt
updated: '2023-01-29T10:28:35.193967+00:00'
uri: https://arxiv.org/pdf/2301.11305.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2301.11305.pdf
tags:
- chatgpt
- detecting gpt
- hypothesis
type: annotation
url: /annotations/2023/01/29/1674988115
---
<blockquote>See ericmitchell.ai/detectgpt for code, data, and other project information.</blockquote>Code and data available at https://ericmitchell.ai/detectgpt


@@ -1,61 +0,0 @@
---
date: '2023-01-29T10:35:56'
hypothesis-meta:
created: '2023-01-29T10:35:56.649264+00:00'
document:
title:
- 2301.11305.pdf
flagged: false
group: __world__
hidden: false
id: tr0lTp_AEe2k81d5ilJ0Xw
links:
html: https://hypothes.is/a/tr0lTp_AEe2k81d5ilJ0Xw
incontext: https://hyp.is/tr0lTp_AEe2k81d5ilJ0Xw/arxiv.org/pdf/2301.11305.pdf
json: https://hypothes.is/api/annotations/tr0lTp_AEe2k81d5ilJ0Xw
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- chatgpt
- detecting gpt
target:
- selector:
- end: 1096
start: 756
type: TextPositionSelector
- exact: his approach, which we call DetectGPT,does not require training a separate
classifier, col-lecting a dataset of real or generated passages, orexplicitly
watermarking generated text. It usesonly log probabilities computed by the
model ofinterest and random perturbations of the passagefrom another generic
pre-trained language model(e.g, T5)
prefix: ' is generated from a givenLLM. T'
suffix: . We find DetectGPT is more disc
type: TextQuoteSelector
source: https://arxiv.org/pdf/2301.11305.pdf
text: The novelty of this approach is that it is cheap to set up as long as you
have the log probabilities generated by the model of interest.
updated: '2023-01-29T10:35:56.649264+00:00'
uri: https://arxiv.org/pdf/2301.11305.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2301.11305.pdf
tags:
- chatgpt
- detecting gpt
- hypothesis
type: annotation
url: /annotations/2023/01/29/1674988556
---
<blockquote>This approach, which we call DetectGPT, does not require training a separate classifier, collecting a dataset of real or generated passages, or explicitly watermarking generated text. It uses only log probabilities computed by the model of interest and random perturbations of the passage from another generic pre-trained language model (e.g., T5)</blockquote>The novelty of this approach is that it is cheap to set up as long as you have the log probabilities generated by the model of interest.
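The annotated passage describes the core DetectGPT idea: machine-generated text tends to sit near a local maximum of the generating model's log-probability, so random perturbations (produced by a mask-filling model such as T5) lower its log-probability more than they would for human-written text. A minimal sketch of that scoring idea follows; `log_prob` and `perturb` are assumed stand-in callables supplied by the reader, not the paper's actual API.

```python
def detectgpt_score(passage, log_prob, perturb, n_perturbations=20):
    """Approximate the DetectGPT curvature score for a passage.

    log_prob: callable returning the log-probability of a text under the
        model of interest (hypothetical stand-in for a real model call).
    perturb: callable returning a randomly perturbed variant of the text,
        e.g. via a T5-style mask-fill model (also a stand-in).
    Higher scores suggest the passage was machine-generated.
    """
    original_lp = log_prob(passage)
    perturbed_lps = [log_prob(perturb(passage)) for _ in range(n_perturbations)]
    # For machine-generated text the perturbations should drop the
    # log-probability noticeably; for human text the drop is smaller.
    return original_lp - sum(perturbed_lps) / len(perturbed_lps)
```

In practice a threshold on this score (chosen on held-out data) separates the two classes; the paper's actual method also normalises by the standard deviation of the perturbed log-probabilities.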

Some files were not shown because too many files have changed in this diff.