# crucible

The main Crucible CLI orchestrates fuzzing campaigns, corpus management, and crash triage.
## Usage

## Commands
### run

Start a fuzzing campaign using a compiled harness.

```sh
crucible run \
    --harness ./crucible-libfuzzer \
    --corpus ./corpus \
    --output ./crashes \
    --jobs 8 \
    --timeout 30s \
    --max-len 10485760
```
| Flag | Default | Description |
|---|---|---|
| `--harness` | (required) | Path to compiled fuzz harness binary |
| `--corpus` | `./corpus` | Seed corpus directory |
| `--output` | `./crashes` | Crash output directory |
| `--jobs` | 4 | Number of parallel fuzzing jobs |
| `--timeout` | 30s | Per-testcase timeout |
| `--max-len` | 10485760 | Maximum input length in bytes (10 MB) |
| `--engine` | libfuzzer | Fuzzing engine to use (`libfuzzer` or `afl`) |
| `--dry-run` | false | Print the configuration and exit without running |
The harness is invoked as a libFuzzer binary with the appropriate flags.
### generate

Generate mutated GGUF files for use with external fuzzers.

```sh
# Default: generate built-in structural seeds + mutations
crucible generate \
    --output ./corpus/generated \
    --seed 42 \
    --count 100

# Custom: mutate seeds from an existing corpus directory
crucible generate \
    --corpus ./my-seeds \
    --output ./corpus/generated \
    --count 200
```
| Flag | Default | Description |
|---|---|---|
| `--corpus` | `./corpus` | Input seed corpus directory. When explicitly set, seeds are loaded from this directory instead of using built-in structural seeds. |
| `--output` | `./corpus/generated` | Output directory for generated files |
| `--seed` | 0 | Random seed (0 = time-based) |
| `--count` | 100 | Number of mutated files to generate |
When `--corpus` is not specified, the command produces built-in structural seeds (covering all parser paths) plus mutated variants. When `--corpus` points to a directory of `.gguf` files, those files are loaded and used as the mutation base instead.
### status

Show campaign statistics.

Displays the number of crashes found in `./crashes`.

### minimize

Minimize the corpus by removing duplicate seeds.

Deduplicates by SHA-256 hash of the serialized GGUF content.
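A minimal sketch of content-hash deduplication, assuming raw file bytes as the hash input (the real tool hashes the re-serialized GGUF content, and the function name here is hypothetical):

```python
import hashlib
from pathlib import Path

def dedup_corpus(corpus_dir: str) -> list:
    """Keep one file per unique SHA-256 content hash; return the duplicates."""
    seen = {}
    duplicates = []
    for path in sorted(Path(corpus_dir).iterdir()):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            duplicates.append(path)  # same content as an earlier seed
        else:
            seen[digest] = path
    return duplicates
```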
### triage

Process crash outputs and generate reports.

```sh
crucible triage \
    --crashes ./crashes \
    --output ./reports \
    --harness ./crucible-libfuzzer \
    --replay-timeout 60s \
    --target gguf
```
| Flag | Default | Description |
|---|---|---|
| `--crashes` | `./crashes` | Crash directory |
| `--output` | `./reports` | Report output directory |
| `--harness` | (none) | Path to harness binary (for replaying crash reproducers) |
| `--replay-timeout` | 30s | Timeout per harness replay execution |
| `--replay-env` | (none) | Extra env vars for replay (`KEY=VALUE`, repeatable) |
| `--target` | (auto) | Target surface for reports (`gguf`, `rpc`, `grammar`, etc.); auto-detected from `--harness` if empty |
| `--minimize` | false | Minimize crash reproducers before triaging (requires `--harness`) |
| `--sarif` | (none) | Write SARIF 2.1.0 output to this file path |
For each unique crash (deduplicated by stack hash), the tool:

- Classifies the bug type
- Estimates a CVSS score
- Generates a Markdown report with a CVE submission template
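Stack-hash keying can be sketched as below, assuming ASan-style frame lines. The frame regex and 16-character truncation are illustrative choices, not Crucible's exact format.

```python
import hashlib
import re

def stack_hash(sanitizer_log: str, top_n: int = 5) -> str:
    """Hash the top N stack frames of a sanitizer report (illustrative sketch).

    Frame lines look like: "    #0 0x55d3 in gguf_read_header gguf.c:120".
    Only the function names are hashed, so randomized load addresses do not
    split identical crashes into different buckets.
    """
    frames = re.findall(r"#\d+ 0x[0-9a-f]+ in (\S+)", sanitizer_log)
    return hashlib.sha256("|".join(frames[:top_n]).encode()).hexdigest()[:16]
```

Two logs that crash through the same call chain map to the same bucket even when their addresses differ.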
### report

A full alias for `triage`: it accepts the same flags and delegates to the same implementation.

```sh
crucible report \
    --crashes ./crashes \
    --output ./reports \
    --harness ./crucible-libfuzzer \
    --replay-timeout 60s
```
| Flag | Default | Description |
|---|---|---|
| `--crashes` | `./crashes` | Crash directory |
| `--output` | `./reports` | Report output directory |
| `--harness` | (none) | Path to harness binary (for replaying crash reproducers) |
| `--replay-timeout` | 30s | Timeout per harness replay execution |
| `--replay-env` | (none) | Extra env vars for replay (`KEY=VALUE`, repeatable) |
| `--target` | (auto) | Target surface for reports; auto-detected from `--harness` if empty |
| `--minimize` | false | Minimize crash reproducers before triaging (requires `--harness`) |
| `--sarif` | (none) | Write SARIF 2.1.0 output to this file path |
### triage dedup

Deduplicate a crash directory by stack hash or content fingerprint.

```sh
# Full mode: replay each crash through the harness, group by stack hash
crucible triage dedup \
    --crashes ./crashes \
    --harness ./crucible-libfuzzer \
    --delete

# Fast mode: group by file size + content hash (no harness required)
crucible triage dedup \
    --crashes ./crashes \
    --fast \
    --delete
```
| Flag | Default | Description |
|---|---|---|
| `--crashes` | `./crashes` | Path to crash directory |
| `--harness` | (none) | Path to harness binary (required for full mode) |
| `--replay-timeout` | 30s | Timeout per harness replay execution |
| `--fast` | false | Use fast content-hash mode (no harness required) |
| `--delete` | false | Delete duplicate files (default: dry-run reporting only) |
| `--output` | (none) | Move duplicates to this directory instead of deleting (implies `--delete`) |
Full mode replays each crash file through the harness to capture sanitizer output, computes a stack hash from the top 5 frames, and keeps the smallest reproducer per unique hash. It requires a harness binary.

Fast mode groups files by (file size, SHA-256 of the first 64 bytes) without replaying. It is much faster but may over-deduplicate crashes that happen to share a prefix.

By default, the command runs in dry-run mode: it reports what would be removed without deleting anything. Pass `--delete` to actually remove duplicates.
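Fast-mode grouping amounts to keying files on size plus a prefix hash. A sketch, with function names that are mine rather than the CLI's internals:

```python
import hashlib
from pathlib import Path

def fast_key(path: Path) -> tuple:
    """Fast-mode fingerprint: (file size, SHA-256 of the first 64 bytes)."""
    with path.open("rb") as f:
        head = f.read(64)
    return (path.stat().st_size, hashlib.sha256(head).hexdigest())

def find_duplicates(crash_dir: str) -> list:
    """Group crash files by fingerprint; keep one per group, report the rest."""
    groups = {}
    for path in sorted(Path(crash_dir).iterdir()):
        if path.is_file():
            groups.setdefault(fast_key(path), []).append(path)
    dupes = []
    for paths in groups.values():
        dupes.extend(paths[1:])  # keep one file per fingerprint
    return dupes
```

The over-deduplication caveat is visible here: two genuinely different crashes of equal size that share their first 64 bytes collapse into one group.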
### coverage report

Generate an HTML coverage report from corpus replay.

```sh
crucible coverage report \
    --harness ./crucible-cov \
    --corpus ./corpus/generated \
    --output ./coverage-report
```
| Flag | Default | Description |
|---|---|---|
| `--harness` | (required) | Path to instrumented harness binary |
| `--corpus` | `./corpus` | Path to corpus directory to replay |
| `--output` | `./coverage-report` | Output directory for HTML report |
| `--source-dir` | (none) | Source directory for path mapping (optional) |
The harness must be built with `-fprofile-instr-generate -fcoverage-mapping` (without `-fsanitize=fuzzer`). Each corpus file is replayed through the harness to collect `.profraw` data, which is merged via `llvm-profdata` and rendered to HTML via `llvm-cov show`.

The corpus directory is traversed recursively; nested subdirectories (e.g. `corpus/generated/arch/`) are included automatically. Non-seed files such as dictionaries (`.dict`), logs, and scripts are skipped and reported in the log output.

After generating the HTML report, a per-file coverage summary is printed to stdout showing line and function coverage percentages, sorted by coverage (worst-covered files first).

Requires `llvm-profdata` and `llvm-cov` in `PATH`.