# Benchmarking

## What We Measure

All benchmarks use `llama-bench` (part of llama.cpp) running inside toolbox containers. Two test types:

| Metric | Meaning | Test Params |
|--------|---------|-------------|
| `pp` (prompt processing) | How fast the model ingests input tokens | Default: 512 tokens |
| `tg` (token generation) | How fast the model produces output tokens | Default: 128 tokens |

Results are in tokens/second (t/s). Higher is better.

## Test Parameters

### Standard Test

```
-ngl 99 -mmp 0 -fa 1 -r 5
```

- `-ngl 99` — all layers on GPU
- `-mmp 0` — disable memory mapping (llama-bench's equivalent of `--no-mmap`)
- `-fa 1` — flash attention enabled
- `-r 5` — 5 repetitions for statistical confidence
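
Putting it together, a full standard-test run looks roughly like this (the container and model names here are illustrative — substitute any backend from the table below and a model you actually have under `data/models/`):

```bash
# Standard pp512/tg128 test inside the RADV Vulkan toolbox
# (model filename is an example -- use whatever GGUF you have locally)
toolbox run --container llama-vulkan-radv -- \
  llama-bench -m data/models/Qwen3-4B-Q4_K_M.gguf \
  -ngl 99 -mmp 0 -fa 1 -r 5
```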

### Long-Context Test

```
-ngl 99 -mmp 0 -fa 1 -p 2048 -n 32 -d 32768 -ub SIZE -r 3
```

- `-p 2048` — 2048 prompt tokens
- `-n 32` — generate 32 tokens
- `-d 32768` — run the test at a 32K-token context depth
- `-ub SIZE` — micro-batch size (512 for Vulkan, 2048 for ROCm)
- `-r 3` — 3 repetitions (long-context tests are slow)
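
A concrete long-context run, with the micro-batch size picked to match the backend (container and model names again illustrative):

```bash
# Long-context test in a ROCm toolbox: 2048-token prompt, 32 generated
# tokens, at 32K context depth, with the ROCm-sized micro-batch
# (model filename is an example)
toolbox run --container llama-rocm-6.4.4 -- \
  llama-bench -m data/models/Qwen3-14B-Q4_K_M.gguf \
  -ngl 99 -mmp 0 -fa 1 -p 2048 -n 32 -d 32768 -ub 2048 -r 3
```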

The `-fa 1`, `-mmp 0`, and `-ngl 99` flags are mandatory on Strix Halo to avoid crashes.

## Available Backends

| Backend Container | Technology | Notes |
|-------------------|------------|-------|
| llama-vulkan-radv | Mesa RADV Vulkan | Most stable, recommended default |
| llama-vulkan-amdvlk | AMDVLK Vulkan | Fastest when it works, 2 GB buffer limit |
| llama-rocm-6.4.4 | ROCm 6.4.4 HIP | Proven stable |
| llama-rocm-7.2 | ROCm 7.2 HIP | Latest, compiler fixes applied |

Containers are from `kyuz0/amd-strix-halo-toolboxes`. Set them up with `make benchmark-setup`.
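
After setup, you can confirm the containers are present (a quick check, assuming the standard `toolbox` CLI):

```bash
# List toolbox containers and filter for the llama.cpp benchmark ones
toolbox list --containers | grep llama-
```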

## Workflow

```bash
# 1. Setup (one-time)
make benchmark-setup

# 2. Capture baseline (before optimization)
make benchmark-baseline

# 3. After optimizing, run again
make benchmark              # or: bin/benchmark run --tag post-opt

# 4. Compare
make benchmark-compare BEFORE=data/baselines/20260325-120000 AFTER=data/benchmarks/post-opt-20260326-100000
```

## Result Format

Each run produces a directory under `data/baselines/` or `data/benchmarks/`:

```
TIMESTAMP/
  system-state.json    # Full system audit snapshot
  summary.json         # Parsed results (model, backend, test, t/s)
  metrics.csv          # GPU/CPU metrics during the run
  *.log                # Raw llama-bench output per backend+model+test
```
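
To eyeball results from the command line, the parsed `summary.json` can be queried with `jq`. This sketch assumes the fields match the schema noted above; the exact key names in `summary.json` may differ, so adjust to the real file:

```bash
# Print backend/model/test/throughput as tab-separated rows
# (key names "backend", "model", "test", "tps" are assumptions --
#  check the actual summary.json in your run directory)
jq -r '.[] | [.backend, .model, .test, .tps] | @tsv' \
  data/benchmarks/post-opt-20260326-100000/summary.json
```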

## Comparison Output

```
Backend     | Model     | Test  | Before  | After   | Delta
vulkan-radv | qwen3-4b  | pp512 | 548 t/s | 612 t/s | +11.7%
vulkan-radv | qwen3-4b  | tg128 | 13.9    | 15.2    | +9.4%
```

Configuration changes between runs (VRAM, GTT, kernel params, tuned profile) are shown if `system-state.json` differs.
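
The delta column is just relative change. To sanity-check a number by hand:

```bash
# Recompute the first row's delta: (after - before) / before * 100
awk 'BEGIN { before = 548; after = 612; printf "%+.1f%%\n", (after - before) / before * 100 }'
# prints +11.7%
```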

## Test Models

| Size | Model | File | Disk | Use Case |
|------|-------|------|------|----------|
| Small | Qwen3-4B | Q4_K_M.gguf | ~3 GB | Quick smoke tests |
| Medium | Qwen3-14B | Q4_K_M.gguf | ~9 GB | Standard benchmarks |
| Large | Qwen3-32B | Q4_K_M.gguf | ~20 GB | Memory pressure tests |

Place models in `data/models/`. The VRAM estimator from the toolboxes project (`gguf-vram-estimator.py`) can help plan which models fit.
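
A rough usage sketch (the script ships with the `kyuz0/amd-strix-halo-toolboxes` repo; its exact CLI flags are not documented here, so check its `--help`):

```bash
# Estimate memory needs for a model before benchmarking it
# (invocation is an assumption -- consult the script's --help for real flags)
python3 gguf-vram-estimator.py data/models/Qwen3-32B-Q4_K_M.gguf
```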