---
bookmark-of: https://www.oligo.security/blog/more-models-more-probllms
date: '2024-10-31T17:53:32.552674'
mp-syndicate-to:
- https://brid.gy/publish/mastodon
post_meta:
- date
tags:
- ai
- cybersecurity
type: bookmarks
url: /bookmarks/2024/10/31/1730397212
---

> Oligo’s research team recently uncovered 6 vulnerabilities in Ollama, one of the leading open-source frameworks for running AI models. Four of the flaws received CVEs and were patched in a recent version, while two were disputed by the application’s maintainers, making them shadow vulnerabilities.
This work provides concrete evidence that hosting public-facing Ollama instances is a bad idea. It's great to see that some of the vulnerabilities have already been fixed, but it also tracks that exposing an API that's allowed to consume a bunch of GPU time to the public might let baddies take advantage of your systems. If you (or your organisation) are planning to use Ollama for model hosting, I recommend [running it behind LiteLLM](https://brainsteam.co.uk/2024/07/08/ditch-that-chatgpt-subscription-moving-to-pay-as-you-go-ai-usage-with-open-web-ui/).
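As a rough sketch of what that looks like: LiteLLM's proxy takes a YAML config that maps a public model name onto a local Ollama backend, so Ollama itself never needs to be reachable from outside. The model name `llama3` and the port are assumptions (11434 is Ollama's default); adjust to whatever you've actually pulled.

```yaml
# Minimal LiteLLM proxy config (started with: litellm --config config.yaml)
# Assumes Ollama is bound to localhost only and a model called "llama3"
# has been pulled -- both are illustrative, not prescriptive.
model_list:
  - model_name: llama3            # the name clients request via the proxy
    litellm_params:
      model: ollama/llama3        # route to the local Ollama backend
      api_base: http://localhost:11434
```

The proxy then exposes an OpenAI-compatible endpoint that you can put API keys and rate limits in front of, while Ollama stays off the public internet.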
<a href="https://brid.gy/publish/mastodon"></a>