# Fuzzing Ollama
Ollama vendors its own copy of llama.cpp's ggml library. Vendor-specific patches may introduce unique bugs not present in upstream llama.cpp.
## Why Fuzz Ollama Separately?
- Ollama is one of the most widely used local LLM runtimes
- It bundles a vendored fork of llama.cpp with custom patches
- Security bugs in Ollama's fork affect a very large install base
- Patches applied during vendoring may not have been tested upstream
## Setup
```bash
# Clone Ollama source (pinned to the version from VERSIONS.env)
make -C targets/ollama clone OLLAMA_SRC=~/src/ollama

# Build the instrumented vendored llama.cpp library
make -C targets/ollama build-fuzz OLLAMA_SRC=~/src/ollama

# Build the harness against Ollama's vendored ggml
make -C targets/ollama libfuzzer OLLAMA_SRC=~/src/ollama
```
> **Locating vendored ggml:** Ollama vendors llama.cpp under `llama/` in its source tree. The Makefile automatically points to the correct ggml headers and sources.
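If the Makefile's auto-detection ever fails (for example, after Ollama reorganizes its vendored tree), you can locate the ggml sources yourself. This is a minimal sketch, assuming the vendored copy still ships a `ggml.h` header; the helper name is ours, not part of the project:

```python
import os

def find_vendored_ggml(ollama_root):
    """Walk an Ollama checkout and return every directory holding ggml.h.

    The header name and tree layout are assumptions; adjust them to match
    the pinned Ollama version if the vendored copy is arranged differently.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(ollama_root):
        if "ggml.h" in filenames:
            hits.append(dirpath)
    return sorted(hits)
```

Running it against your `OLLAMA_SRC` checkout shows which include path the build should use.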
## Running
```bash
# Generate corpus if not already done
make generate

# Run campaign
make -C targets/ollama run OLLAMA_SRC=~/src/ollama
```
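If `make generate` is unavailable, a hand-rolled seed can bootstrap the corpus. This sketch writes the smallest plausible GGUF v3 file (magic, version, zero tensors, zero metadata pairs); the output path is an assumption about your corpus layout, and a real parser may still reject the file, but it exercises the header-parsing path:

```python
import struct

def write_min_gguf(path):
    """Write a minimal GGUF v3 header as a fuzzing seed.

    Layout per the GGUF spec: 4-byte magic, uint32 version,
    uint64 tensor_count, uint64 metadata_kv_count (little-endian).
    """
    with open(path, "wb") as f:
        f.write(b"GGUF")               # magic
        f.write(struct.pack("<I", 3))  # version 3
        f.write(struct.pack("<Q", 0))  # tensor_count
        f.write(struct.pack("<Q", 0))  # metadata_kv_count
```

Drop the resulting file into the campaign's corpus directory alongside the generated seeds.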
## Comparing Results
Crashes found in Ollama's vendored ggml should be cross-checked against upstream llama.cpp:
- Run the same reproducer against upstream llama.cpp's harness
- If it crashes upstream too, report it to the llama.cpp maintainers
- If it is Ollama-only, report it to Ollama's security team
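The triage above can be scripted. A minimal sketch, assuming the harness binary paths are supplied by you (they are not defined by this project) and that, as with libFuzzer binaries, a crash shows up as a non-zero exit code:

```python
import subprocess

def classify_crash(repro_path, upstream_cmd, ollama_cmd, timeout=60):
    """Run one reproducer through both harnesses and say where to report.

    upstream_cmd / ollama_cmd are argv lists for the two harness binaries;
    the reproducer path is appended as the final argument.
    """
    def crashes(cmd):
        result = subprocess.run(cmd + [repro_path],
                                capture_output=True, timeout=timeout)
        return result.returncode != 0  # non-zero exit = crash

    upstream, ollama = crashes(upstream_cmd), crashes(ollama_cmd)
    if upstream and ollama:
        return "report-upstream"   # bug lives in llama.cpp itself
    if ollama:
        return "report-ollama"     # Ollama-only, likely a vendoring patch
    if upstream:
        return "upstream-only"     # not reproducible in Ollama's fork
    return "no-crash"
```

Run it over a directory of crash artifacts to batch-sort reproducers before filing reports.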
> **Ollama Security Reporting:** Ollama accepts vulnerability reports through its GitHub Security Advisories page.