
---
categories:
  - AI and Machine Learning
  - Philosophy and Thinking
date: 2024-01-25 09:33:27
draft: false
tags:
  - AI
  - ethics
  - nlp
title: Supporting the Underdog when it comes to Automated Decision Making and Power Dynamics
type: posts
---

I enjoyed this recent blog post by acclaimed technologist Terence Eden proposing a thought experiment about the ethics of open sourcing a hypothetical LLM classifier trained on benefits sanction appeal letters.

Eden, himself a huge open source advocate, argues quite compellingly that such a model should be kept closed, both to prevent leakage of potentially confidential information from the training data and to stop bad actors from probing the model in order to abuse it.

However, as some of the post's commenters point out, there is a bigger question at play here: where is it appropriate to use this kind of tech at all?

One of the key issues in my mind is the end user's treatment and the power dynamic at play here. If you're making life-and-death decisions (tw suicide) about people who have few resources to challenge those decisions, then you should have appropriate systems in place to make sure that decisions are fair, explainable and rational. You must provide mechanisms that allow the party with everything to lose in this situation to understand what is happening and why. Finally, there must always be an adequate escape hatch: a mechanism for recourse when the computer gets it wrong.

Whether we're talking about AI in the context of "draw a line of best fit through these points on a graph" or about fancy language models with billions of parameters, my view is that it should always be an augmentative technology rather than a replacement for human intelligence. Wherever it is deployed, AI should be helping a human being to do their job more effectively rather than letting them fall asleep at the wheel. From what I know about Terence, I'd go out on a limb and assume he feels the same way; perhaps all of this is implicit in his thought experiment.

However, this all comes at a time when, here in the UK, we've had a recent reminder of what happens when the computer says no and no option for recourse is provided. So I felt it was worthwhile to fill in these gaps.