Merge branch 'main' of ssh://git.jamesravey.me:222/ravenscroftj/brainsteam.co.uk

This commit is contained in:
James Ravenscroft 2022-12-08 21:32:16 +00:00
commit a403486025
9 changed files with 592 additions and 0 deletions


@@ -0,0 +1,68 @@
---
date: '2022-12-04T16:29:05'
hypothesis-meta:
created: '2022-12-04T16:29:05.263170+00:00'
document:
title:
- Exploring vs. exploiting - Herbert Lui
flagged: false
group: __world__
hidden: false
id: xQywjnPwEe2lk_tZfYP65Q
links:
html: https://hypothes.is/a/xQywjnPwEe2lk_tZfYP65Q
incontext: https://hyp.is/xQywjnPwEe2lk_tZfYP65Q/herbertlui.net/exploring-vs-exploiting/
json: https://hypothes.is/api/annotations/xQywjnPwEe2lk_tZfYP65Q
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- pkm
- tools for thought
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/main[1]/article[1]/div[1]/div[1]/p[6]
endOffset: 319
startContainer: /div[1]/div[1]/div[1]/main[1]/article[1]/div[1]/div[1]/p[6]
startOffset: 0
type: RangeSelector
- end: 2272
start: 1953
type: TextPositionSelector
- exact: "It\u2019s always worth gathering information, nurturing other projects,\
\ and putting together some backup plans. You\u2019ll need to define what\
\ success means to you for each of them, because you won\u2019t make overnight\
\ progress; instead, you\u2019re best served picking projects that you can\
\ learn critical lessons from, even if you fail"
prefix: "even better than their Plan A.\u201D\n"
suffix: ".\nEven if you\u2019re focused and mak"
type: TextQuoteSelector
source: https://herbertlui.net/exploring-vs-exploiting/
text: It's interesting because this way of thinking is eminently compatible with
the zettelkasten approach, e.g. don't necessarily set out with a hypothesis in
mind that you're trying to prove, but rather explore until something interesting
emerges.
updated: '2022-12-04T16:29:05.263170+00:00'
uri: https://herbertlui.net/exploring-vs-exploiting/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://herbertlui.net/exploring-vs-exploiting/
tags:
- pkm
- tools for thought
- hypothesis
type: annotation
url: /annotations/2022/12/04/1670171345
---
<blockquote>It's always worth gathering information, nurturing other projects, and putting together some backup plans. You'll need to define what success means to you for each of them, because you won't make overnight progress; instead, you're best served picking projects that you can learn critical lessons from, even if you fail</blockquote>It's interesting because this way of thinking is eminently compatible with the zettelkasten approach, e.g. don't necessarily set out with a hypothesis in mind that you're trying to prove, but rather explore until something interesting emerges.


@@ -0,0 +1,66 @@
---
date: '2022-12-04T20:14:02'
hypothesis-meta:
created: '2022-12-04T20:14:02.815622+00:00'
document:
title:
- Hyperbolic Distance Discounting
flagged: false
group: __world__
hidden: false
id: MjfCdHQQEe2XA6-Y-PXOtA
links:
html: https://hypothes.is/a/MjfCdHQQEe2XA6-Y-PXOtA
incontext: https://hyp.is/MjfCdHQQEe2XA6-Y-PXOtA/www.atvbt.com/hyperbolic/
json: https://hypothes.is/api/annotations/MjfCdHQQEe2XA6-Y-PXOtA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- psychology
- delayed gratification
- behaviour
target:
- selector:
- endContainer: /div[1]/main[1]/article[1]/div[1]/p[5]
endOffset: 292
startContainer: /div[1]/main[1]/article[1]/div[1]/p[5]
startOffset: 0
type: RangeSelector
- end: 1911
start: 1619
type: TextPositionSelector
- exact: 'You may have heard of hyperbolic discounting from behavioral economics:
people will generally disproportionally, i.e. hyperbolically, discount the
value of something the farther off it is. The average person judges $15 now
as equivalent to $30 in 3-months (an annual rate of return of 277%!).'
prefix: on center.Hyperbolic Discounting
suffix: "This excessive time-based or \u201Cte"
type: TextQuoteSelector
source: https://www.atvbt.com/hyperbolic/
text: This is fascinating and must relate to delayed gratification.
updated: '2022-12-04T20:14:02.815622+00:00'
uri: https://www.atvbt.com/hyperbolic/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.atvbt.com/hyperbolic/
tags:
- psychology
- delayed gratification
- behaviour
- hypothesis
type: annotation
url: /annotations/2022/12/04/1670184842
---
<blockquote>You may have heard of hyperbolic discounting from behavioral economics: people will generally disproportionally, i.e. hyperbolically, discount the value of something the farther off it is. The average person judges $15 now as equivalent to $30 in 3-months (an annual rate of return of 277%!).</blockquote>This is fascinating and must relate to delayed gratification.
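
The quoted figures are easy to sanity-check. A minimal sketch in Python; treating the 277% as a continuously compounded rate, and the k = 4 fit for the hyperbolic curve, are my assumptions rather than anything stated in the article:

```python
import math

# $15 now is judged equivalent to $30 in 3 months, i.e. a discount factor of
# 0.5 over 0.25 years. The article's 277% figure matches the continuously
# compounded annual rate: solve 15 = 30 * exp(-r * 0.25) for r.
r = math.log(30 / 15) / 0.25
print(f"implied annual rate: {r:.0%}")  # -> 277%

# The same choice also fits the standard one-parameter hyperbolic discount
# curve V = A / (1 + k * D), with impatience parameter k = 4 (my fitted value).
def hyperbolic_value(amount: float, delay_years: float, k: float = 4.0) -> float:
    """Present value of `amount` received after `delay_years`."""
    return amount / (1 + k * delay_years)

print(hyperbolic_value(30, 0.25))  # -> 15.0
```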


@@ -0,0 +1,67 @@
---
date: '2022-12-04T20:15:19'
hypothesis-meta:
created: '2022-12-04T20:15:19.784065+00:00'
document:
title:
- Hyperbolic Distance Discounting
flagged: false
group: __world__
hidden: false
id: YBFyOnQQEe2WiKdsj1LCZg
links:
html: https://hypothes.is/a/YBFyOnQQEe2WiKdsj1LCZg
incontext: https://hyp.is/YBFyOnQQEe2WiKdsj1LCZg/www.atvbt.com/hyperbolic/
json: https://hypothes.is/api/annotations/YBFyOnQQEe2WiKdsj1LCZg
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- psychology
- delayed gratification
- behaviour
target:
- selector:
- endContainer: /div[1]/main[1]/article[1]/div[1]/p[14]
endOffset: 278
startContainer: /div[1]/main[1]/article[1]/div[1]/p[14]
startOffset: 0
type: RangeSelector
- end: 4013
start: 3735
type: TextPositionSelector
- exact: "Of course, the closest you can get is having the activity available\
\ in your own living space, but as unused home treadmills and exercise bikes\
\ demonstrate, this has its pitfalls. There could be something about a thing\
\ always being available that means there\u2019s never any urgency."
prefix: ay (and maybe worth paying for).
suffix: I think the ideal is to plan a r
type: TextQuoteSelector
source: https://www.atvbt.com/hyperbolic/
text: There seems to be a minimum distance below which hyperbolic discounting stops
working because things are too easy to access.
updated: '2022-12-04T20:15:19.784065+00:00'
uri: https://www.atvbt.com/hyperbolic/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://www.atvbt.com/hyperbolic/
tags:
- psychology
- delayed gratification
- behaviour
- hypothesis
type: annotation
url: /annotations/2022/12/04/1670184919
---
<blockquote>Of course, the closest you can get is having the activity available in your own living space, but as unused home treadmills and exercise bikes demonstrate, this has its pitfalls. There could be something about a thing always being available that means there's never any urgency.</blockquote>There seems to be a minimum distance below which hyperbolic discounting stops working because things are too easy to access.


@@ -0,0 +1,74 @@
---
date: '2022-12-04T20:26:10'
hypothesis-meta:
created: '2022-12-04T20:26:10.856094+00:00'
document:
title:
- Language builds culture - Herbert Lui
flagged: false
group: __world__
hidden: false
id: 5DIcYnQREe2NVTOF9GGXvA
links:
html: https://hypothes.is/a/5DIcYnQREe2NVTOF9GGXvA
incontext: https://hyp.is/5DIcYnQREe2NVTOF9GGXvA/herbertlui.net/language-builds-culture/
json: https://hypothes.is/api/annotations/5DIcYnQREe2NVTOF9GGXvA
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- linguistics
- behaviour
- learning-in-public
target:
- selector:
- endContainer: /div[1]/div[1]/div[1]/main[1]/article[1]/div[1]/div[1]/p[2]
endOffset: 278
startContainer: /div[1]/div[1]/div[1]/main[1]/article[1]/div[1]/div[1]/p[2]
startOffset: 0
type: RangeSelector
- end: 867
start: 589
type: TextPositionSelector
- exact: "Whether you want to call them mottos, memes, or manifestos, words can\
\ be the building blocks of how we think and transmit ideas. You can also\
\ gauge how well someone is grasping your concepts\u2014or at least making\
\ an effort to\u2014by the language they\u2019re responding to you with as\
\ well."
prefix: "falls, and favorable outcomes.\u201D\n"
suffix: '
Posted in Contentions, Life. '
type: TextQuoteSelector
source: https://herbertlui.net/language-builds-culture/
text: You can use the way that a person responds to your concepts as a metric for
how well they understand you. If they don't understand, chances are they will
retreat into jargon to try to hide the fact that they're struggling. If they're
getting on well, they might have an insightful way to extend your metaphor.
updated: '2022-12-04T20:26:10.856094+00:00'
uri: https://herbertlui.net/language-builds-culture/
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://herbertlui.net/language-builds-culture/
tags:
- linguistics
- behaviour
- learning-in-public
- hypothesis
type: annotation
url: /annotations/2022/12/04/1670185570
---
<blockquote>Whether you want to call them mottos, memes, or manifestos, words can be the building blocks of how we think and transmit ideas. You can also gauge how well someone is grasping your concepts—or at least making an effort to—by the language they're responding to you with as well.</blockquote>You can use the way that a person responds to your concepts as a metric for how well they understand you. If they don't understand, chances are they will retreat into jargon to try to hide the fact that they're struggling. If they're getting on well, they might have an insightful way to extend your metaphor.


@@ -0,0 +1,64 @@
---
date: '2022-12-06T06:41:27'
hypothesis-meta:
created: '2022-12-06T06:41:27.851505+00:00'
document:
title:
- Ron DeSantis' Quiet Relationship with Amazon
flagged: false
group: __world__
hidden: false
id: AsgtBHUxEe2ilAfmS4q53w
links:
html: https://hypothes.is/a/AsgtBHUxEe2ilAfmS4q53w
incontext: https://hyp.is/AsgtBHUxEe2ilAfmS4q53w/mattstoller.substack.com/p/ron-desantis-quiet-relationship-with
json: https://hypothes.is/api/annotations/AsgtBHUxEe2ilAfmS4q53w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- capitalism
target:
- selector:
- endContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[12]/span[2]
endOffset: 141
startContainer: /div[1]/div[1]/div[2]/div[1]/div[1]/div[1]/article[1]/div[4]/div[1]/div[1]/p[12]/span[1]
startOffset: 0
type: RangeSelector
- end: 9023
start: 8736
type: TextPositionSelector
- exact: "Amazon is hated on the right as a bulwark of progressivism. For instance,\
\ to pick a random example, GOP icon Tucker Carlson recently characterized\
\ the firm\u2019s behavior as \u2018modern-day book burning.\u2019 And you\
\ can find an endless number of right-wing critiques. Conservatives distrust\
\ Amazon."
prefix: ne his relationship with Amazon.
suffix: An association with the tech gia
type: TextQuoteSelector
source: https://mattstoller.substack.com/p/ron-desantis-quiet-relationship-with
text: 'That is really interesting. Amazon is not exactly renowned as an upholder
of progressive values by the left either.'
updated: '2022-12-06T06:41:27.851505+00:00'
uri: https://mattstoller.substack.com/p/ron-desantis-quiet-relationship-with
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://mattstoller.substack.com/p/ron-desantis-quiet-relationship-with
tags:
- capitalism
- hypothesis
type: annotation
url: /annotations/2022/12/06/1670308887
---
<blockquote>Amazon is hated on the right as a bulwark of progressivism. For instance, to pick a random example, GOP icon Tucker Carlson recently characterized the firm's behavior as 'modern-day book burning.' And you can find an endless number of right-wing critiques. Conservatives distrust Amazon.</blockquote>That is really interesting. Amazon is not exactly renowned as an upholder of progressive values by the left either.


@@ -0,0 +1,66 @@
---
date: '2022-12-07T11:55:42'
hypothesis-meta:
created: '2022-12-07T11:55:42.527155+00:00'
document:
title:
- 2203.15556.pdf
flagged: false
group: __world__
hidden: false
id: E3TX9nYmEe2IOgdyjyKG9w
links:
html: https://hypothes.is/a/E3TX9nYmEe2IOgdyjyKG9w
incontext: https://hyp.is/E3TX9nYmEe2IOgdyjyKG9w/arxiv.org/pdf/2203.15556.pdf
json: https://hypothes.is/api/annotations/E3TX9nYmEe2IOgdyjyKG9w
permissions:
admin:
- acct:ravenscroftj@hypothes.is
delete:
- acct:ravenscroftj@hypothes.is
read:
- group:__world__
update:
- acct:ravenscroftj@hypothes.is
tags:
- nlproc
- efficient ml
target:
- selector:
- end: 1689
start: 1063
type: TextPositionSelector
- exact: "We test this hypothesis by training a predicted compute-optimal model,\
\ Chinchilla, that uses the same compute budget as Gopher but with 70B parameters\
\ and4\xD7 more more data. Chinchilla uniformly and significantly outperforms\
\ Gopher (280B), GPT-3 (175B),Jurassic-1 (178B), and Megatron-Turing NLG (530B)\
\ on a large range of downstream evaluation tasks.This also means that Chinchilla\
\ uses substantially less compute for fine-tuning and inference, greatlyfacilitating\
\ downstream usage. As a highlight, Chinchilla reaches a state-of-the-art\
\ average accuracy of67.5% on the MMLU benchmark, greater than a 7% improvement\
\ over Gopher"
prefix: ' tokens should also be doubled. '
suffix: .1. IntroductionRecently a serie
type: TextQuoteSelector
source: https://arxiv.org/pdf/2203.15556.pdf
text: By using more data on a smaller language model, the authors were able to achieve
better performance than with the larger models - this reduces the cost of using
the model for inference.
updated: '2022-12-07T11:55:42.527155+00:00'
uri: https://arxiv.org/pdf/2203.15556.pdf
user: acct:ravenscroftj@hypothes.is
user_info:
display_name: James Ravenscroft
in-reply-to: https://arxiv.org/pdf/2203.15556.pdf
tags:
- nlproc
- efficient ml
- hypothesis
type: annotation
url: /annotations/2022/12/07/1670414142
---
<blockquote>We test this hypothesis by training a predicted compute-optimal model, Chinchilla, that uses the same compute budget as Gopher but with 70B parameters and 4× more data. Chinchilla uniformly and significantly outperforms Gopher (280B), GPT-3 (175B), Jurassic-1 (178B), and Megatron-Turing NLG (530B) on a large range of downstream evaluation tasks. This also means that Chinchilla uses substantially less compute for fine-tuning and inference, greatly facilitating downstream usage. As a highlight, Chinchilla reaches a state-of-the-art average accuracy of 67.5% on the MMLU benchmark, greater than a 7% improvement over Gopher</blockquote>By using more data on a smaller language model, the authors were able to achieve better performance than with the larger models - this reduces the cost of using the model for inference.
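
For context on the numbers in that highlight, here is a rough sketch of the scaling heuristic commonly quoted from this paper. The C ≈ 6·N·D training-cost model and the roughly-20-tokens-per-parameter ratio are widely cited simplifications of the paper's result, not values taken from the annotation itself:

```python
# Compute-optimal heuristic: for a fixed training budget C ~ 6 * N * D FLOPs,
# scale parameter count N and training tokens D together, at roughly
# 20 tokens per parameter (the often-quoted Chinchilla rule of thumb).
TOKENS_PER_PARAM = 20  # approximate ratio, not an exact constant

def compute_optimal_split(compute_flops: float) -> tuple[float, float]:
    """Return an (N, D) split for a budget, using C = 6 * N * D
    and D = TOKENS_PER_PARAM * N."""
    n_params = (compute_flops / (6 * TOKENS_PER_PARAM)) ** 0.5
    n_tokens = TOKENS_PER_PARAM * n_params
    return n_params, n_tokens

# Chinchilla itself: 70B parameters on 1.4T tokens (~20 tokens/param),
# versus Gopher's 280B parameters on 300B tokens (~1 token/param).
n, d = compute_optimal_split(6 * 70e9 * 1.4e12)
print(f"params ~{n:.2e}, tokens ~{d:.2e}")  # -> ~7.00e+10, ~1.40e+12
```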


@@ -0,0 +1,11 @@
---
date: '2022-12-06T23:51:30.829006'
like-of: https://snarfed.org/2022-12-03_bridgy-fed-updates
tags:
- indieweb
title: Bridgy Fed updates | snarfed.org
type: like
url: /likes/2022/12/06/bridgy-fed-updates-snarfed-org1670370690
---


@@ -0,0 +1,27 @@
---
date: '2022-12-06T10:15:08.312352'
mp-syndicate-to:
- https://brid.gy/publish/mastodon
- https://brid.gy/publish/twitter
tags:
- nlp
title: Some Nuanced Thoughts on ChatGPT
description: I'm sure everyone's had enough of ChatGPT hot takes by now. Here's a more balanced view from an NLP specialist
type: post
url: /posts/2022/12/06/some-nuanced-thoughts-on-chatgpt1670321708
---
[ChatGPT](https://chat.openai.com/chat) is a really impressive tech demo and it shows us the power of large language models. However, it's important to remember that ChatGPT is a machine learning model and, like any AI, it's only as good as the data it's trained on. This means that it's prone to making errors, and humans still need to validate the answers it produces. I fully expect that any executives wringing their hands with glee about "cutting resource" and making redundancies are going to have a real shock when they realise that they still need those people to supervise the model and verify its outputs. So maybe our relationship with coding changes, and the quality and speed with which we can build systems increases. However, would you ask GPT to "generate the control code for a pacemaker" and trust that device to help your own Grandma, or would you prefer a team of medical systems engineers with 20+ years of experience to review that code first?
Secondly, the company may be called OpenAI but GPT-3 is not open (sure, they released [their scientific papers](https://arxiv.org/abs/2005.14165), but the trained model is locked away behind a paywall and you'd need ££££ to train your own from scratch by reproducing the paper). I'm expecting some competition between OpenAI, Google, Meta, Amazon et al, but ultimately, if your entire business model and IP is GPT + some postprocessing, (i) you are at the beck and call of the pricing strategies the companies in this space set and (ii) your business has no moat. By all means use these models, but make sure you have something defensible and unique in there, and a backup plan for changing provider too. Incidentally, given that the interface here is chat-based, I suspect that vendor lock-in will be less of a thing - just send your prompts to a different endpoint, as the sketch below illustrates!
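As a toy illustration of that last point: because the interface is just "text in, text out", switching providers can be little more than changing a URL and payload shape. Every URL, payload field, and response shape below is hypothetical, made up for illustration:

```python
import requests  # assumes the `requests` library is installed

# Hypothetical provider registry: (endpoint URL, name of the prompt field).
PROVIDERS = {
    "provider-a": ("https://api.provider-a.example/v1/complete", "prompt"),
    "provider-b": ("https://api.provider-b.example/v1/chat", "input"),
}

def complete(provider: str, prompt: str) -> str:
    """Send the same prompt to whichever vendor is configured."""
    url, field = PROVIDERS[provider]
    resp = requests.post(url, json={field: prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response shape

# Swapping vendors is then a one-argument change:
# complete("provider-a", "Summarise this post")
# complete("provider-b", "Summarise this post")
```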
So you might be thinking, "Oh well, you're an NLP specialist, you're bound to be against this tech; after all, you're out of a job" - well, no, not at all. I'm really pleased to see this progress - it's the sort of tech I dreamed of as a kid, and the amazing thing is that the #transformer models it's built on didn't even exist 7 years ago when I started my #PhD. There are still plenty of unsolved challenges that will keep me occupied (some of which I've just described) and I'm looking forward to getting stuck in!
Also, I even used ChatGPT to generate parts of this post - can you spot them?
<a href="https://brid.gy/publish/mastodon"></a>
<a href="https://brid.gy/publish/twitter"></a>


@@ -13402,6 +13402,155 @@
"content": null, "content": null,
"published": null "published": null
} }
},
{
"id": 1574601,
"source": "https:\/\/brid.gy\/like\/mastodon\/@jamesravey@fosstodon.org\/109450295865691859\/109456662404166206",
"target": "https:\/\/brainsteam.co.uk\/notes\/2022\/12\/03\/1670077694\/",
"activity": {
"type": "like"
},
"verified_date": "2022-12-04T18:39:32.899592",
"data": {
"author": {
"type": "card",
"name": "Daniel Duma",
"photo": "https:\/\/webmention.io\/avatar\/cdn.fosstodon.org\/a2cd4477ed7cd6498cfed52161a798e3aa2be0d00233ffb02019e51520ff10ba.jpg",
"url": "https:\/\/sigmoid.social\/@drdan"
},
"content": null,
"published": null
}
}
],
"\/2022\/12\/04\/joplin-hypothesis\/": [
{
"id": 1574561,
"source": "https:\/\/brid.gy\/repost\/mastodon\/@jamesravey@fosstodon.org\/109456308623863660\/109416049723899233",
"target": "https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/",
"activity": {
"type": "repost"
},
"verified_date": "2022-12-04T17:03:31.393232",
"data": {
"author": {
"type": "card",
"name": "Ton Zijlstra",
"photo": "https:\/\/webmention.io\/avatar\/cdn.fosstodon.org\/0706b808472286983f1d1689753864916548d81cf0cfd33728d74788c3a69a4e.jpg",
"url": "https:\/\/m.tzyl.eu\/@ton"
},
"content": null,
"published": null
}
},
{
"source": "https:\/\/brid.gy\/like\/mastodon\/@jamesravey@fosstodon.org\/109456308623863660\/109416049723899233",
"verified": true,
"verified_date": "2022-12-04T17:03:35+00:00",
"id": 1574560,
"private": false,
"data": {
"author": {
"name": "Ton Zijlstra",
"url": "https:\/\/m.tzyl.eu\/@ton",
"photo": "https:\/\/webmention.io\/avatar\/cdn.fosstodon.org\/0706b808472286983f1d1689753864916548d81cf0cfd33728d74788c3a69a4e.jpg"
},
"url": "https:\/\/fosstodon.org\/@jamesravey\/109456308623863660#favorited-by-109416049723899233",
"name": null,
"content": null,
"published": null,
"published_ts": null
},
"activity": {
"type": "like",
"sentence": "Ton Zijlstra liked a post https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/",
"sentence_html": "<a href=\"https:\/\/m.tzyl.eu\/@ton\">Ton Zijlstra<\/a> liked a post <a href=\"https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/\">https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/<\/a>"
},
"target": "https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/"
},
{
"source": "https:\/\/brid.gy\/like\/mastodon\/@jamesravey@fosstodon.org\/109456308623863660\/109456662404166206",
"verified": true,
"verified_date": "2022-12-04T18:39:36+00:00",
"id": 1574602,
"private": false,
"data": {
"author": {
"name": "Daniel Duma",
"url": "https:\/\/sigmoid.social\/@drdan",
"photo": "https:\/\/webmention.io\/avatar\/cdn.fosstodon.org\/a2cd4477ed7cd6498cfed52161a798e3aa2be0d00233ffb02019e51520ff10ba.jpg"
},
"url": "https:\/\/fosstodon.org\/@jamesravey\/109456308623863660#favorited-by-109456662404166206",
"name": null,
"content": null,
"published": null,
"published_ts": null
},
"activity": {
"type": "like",
"sentence": "Daniel Duma liked a post https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/",
"sentence_html": "<a href=\"https:\/\/sigmoid.social\/@drdan\">Daniel Duma<\/a> liked a post <a href=\"https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/\">https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/<\/a>"
},
"target": "https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/"
},
{
"id": 1574626,
"source": "https:\/\/brid.gy\/like\/mastodon\/@jamesravey@fosstodon.org\/109456308623863660\/109253482134385876",
"target": "https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/",
"activity": {
"type": "like"
},
"verified_date": "2022-12-04T20:10:47.989239",
"data": {
"author": {
"type": "card",
"name": "James Sutton",
"photo": "https:\/\/webmention.io\/avatar\/cdn.fosstodon.org\/e965cbd56b7dc097db583014c68c7fff3e61fba6f82dedca575bb7bb1d24463f.jpg",
"url": "https:\/\/mastodon.social\/@jpwsutton"
},
"content": null,
"published": null
}
},
{
"id": 1574689,
"source": "https:\/\/brid.gy\/like\/mastodon\/@jamesravey@fosstodon.org\/109456308623863660\/4703",
"target": "https:\/\/brainsteam.co.uk\/2022\/12\/04\/joplin-hypothesis\/",
"activity": {
"type": "like"
},
"verified_date": "2022-12-05T00:36:58.760727",
"data": {
"author": {
"type": "card",
"name": "Chris Aldrich",
"photo": "https:\/\/webmention.io\/avatar\/cdn.fosstodon.org\/4d8126a10a38f448a36824118fd3143ee85658d0ef9251e025537efd24197813.jpg",
"url": "https:\/\/mastodon.social\/@chrisaldrich"
},
"content": null,
"published": null
}
}
],
"\/posts\/2022\/12\/06\/some-nuanced-thoughts-on-chatgpt1670321708\/": [
{
"id": 1575800,
"source": "https:\/\/brid.gy\/like\/mastodon\/@jamesravey@fosstodon.org\/109466246414872687\/109253482134385876",
"target": "https:\/\/brainsteam.co.uk\/posts\/2022\/12\/06\/some-nuanced-thoughts-on-chatgpt1670321708\/",
"activity": {
"type": "like"
},
"verified_date": "2022-12-06T12:44:13.401793",
"data": {
"author": {
"type": "card",
"name": "James Sutton",
"photo": "https:\/\/webmention.io\/avatar\/cdn.fosstodon.org\/e965cbd56b7dc097db583014c68c7fff3e61fba6f82dedca575bb7bb1d24463f.jpg",
"url": "https:\/\/mastodon.social\/@jpwsutton"
},
"content": null,
"published": null
}
}
]
}