---
bookmark-of: https://www.oligo.security/blog/more-models-more-probllms
title: "More Models, More ProbLLMs"
date: 2024-10-31T17:53:32.552674
mp-syndicate-to:
  - https://brid.gy/publish/mastodon
post_meta:
  - date
tags:
  - ai
  - cybersecurity
type: bookmarks
url: /bookmarks/2024/10/31/1730397212
---

Oligo's research team recently uncovered 6 vulnerabilities in Ollama, one of the leading open-source frameworks for running AI models. Four of the flaws received CVEs and were patched in a recent version, while two were disputed by the application's maintainers, making them "shadow vulnerabilities".

This work provides some concrete evidence that hosting public-facing Ollama instances is a bad idea. It's great to see that some of the vulnerabilities have already been fixed, but it also tracks that exposing an API that can burn a bunch of GPU time to the public internet might let baddies take advantage of your systems. If you (or your organisation) are planning to use Ollama for model hosting, I recommend running it behind litellm rather than exposing the Ollama API directly.
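
For anyone wondering what that looks like in practice, here's a minimal sketch of a LiteLLM proxy config sitting in front of a local Ollama instance and requiring an API key. The model name, address and key below are placeholders I've picked for illustration, so swap them for your own setup.

```yaml
# config.yaml - minimal LiteLLM proxy sketch fronting a local Ollama instance
model_list:
  - model_name: llama3                    # name clients will request (placeholder)
    litellm_params:
      model: ollama/llama3                # route requests to the local Ollama model
      api_base: http://127.0.0.1:11434    # keep Ollama bound to localhost only

general_settings:
  master_key: sk-change-me                # clients must present this key; replace it
```

You'd then start the proxy with something like `litellm --config config.yaml --port 4000`, keep Ollama itself listening only on localhost, and point your clients at the proxy instead of at Ollama.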