<divclass="wp-block-indieblocks-reply"><divclass="u-in-reply-to h-cite"><p><i>In reply to <aclass="u-url p-name"href="https://arstechnica.com/information-technology/2024/01/ai-poisoning-could-turn-open-models-into-destructive-sleeper-agents-says-anthropic/">AI poisoning could turn open models into destructive “sleeper agents,” says Anthropic</a>.</i></p></div></div>
<!-- /wp:indieblocks/reply -->
<!-- wp:paragraph -->
<p>This is definitely a bit of a hot take from Ars Technica on the recent Anthropic paper about sleeper agents. The article concludes with "...this means that an open source LLM could potentially become a security liability..." but neglects to mention two key things:</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>1) this attack vector isn't just for "open source LLMs" but for any LLM trained on publically scraped data. We're in the dark on the specifics but we know with some certainty that GPT and Claude are "really really big" transformer-decoders and the secret sauce is the scale and the mix of training data. That means they're just as susceptible to attack as any other LLM with this architecture when trained on scraped data.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>2) This isn't a new problem, its an extension of the "let's train on everything we could scrape without proper moderation and hope that we can fine-tune the bad stuff away" mentality. It's a problem that persists in any model, closed or open, which has been trained in this way.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>One thing I know for sure as a machine learning practitioner: performance discrepancies aside, I can probe, test and fine-tune open model weights to my heart's content. With a model behind an API I have a lot less scope to explore and probe and I have to trust, at face value, the promises of the model providers who are being embarassed by moderation fails on a weekly basis (like <ahref="https://gizmodo.com/ai-chevy-dealership-chatgpt-bot-customer-service-fail-1851111825">here</a> and <ahref="https://www.bbc.co.uk/news/technology-68025677">here</a>). I know which I'd prefer...</p>