
# Benchmarking
## What We Measure
All benchmarks use [llama-bench](https://github.com/ggml-org/llama.cpp) (part of llama.cpp) running inside toolbox containers. Two test types:

| Metric | Meaning | Test Params |
|--------|---------|-------------|
| **pp** (prompt processing) | How fast the model ingests input tokens | Default: 512 tokens |
| **tg** (token generation) | How fast the model produces output tokens | Default: 128 tokens |
Results are in **tokens/second (t/s)**. Higher is better.
## Test Parameters
### Standard Test
```
-ngl 99 -mmp 0 -fa 1 -r 5
```
- `-ngl 99` — all layers on GPU
- `-mmp 0` — disable memory mapping (`--no-mmap`)
- `-fa 1` — flash attention enabled
- `-r 5` — 5 repetitions for statistical confidence
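
Put together, a standard run against a single backend looks roughly like this (container name from the table below, model path illustrative; the Workflow targets below drive runs like this). The `pp512`/`tg128` tests come from llama-bench's defaults of `-p 512 -n 128`:

```bash
# Sketch of a standard test run; adjust container and model path to taste
toolbox run -c llama-vulkan-radv llama-bench \
  -m data/models/Qwen3-14B-Q4_K_M.gguf \
  -ngl 99 -mmp 0 -fa 1 -r 5
```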
### Long-Context Test
```
-ngl 99 -mmp 0 -fa 1 -p 2048 -n 32 -d 32768 -ub SIZE -r 3
```
- `-p 2048` — 2048 prompt tokens
- `-n 32` — generate 32 tokens
- `-d 32768` — test at a context depth of 32K tokens
- `-ub SIZE` — micro-batch size (512 for Vulkan, 2048 for ROCm)
- `-r 3` — 3 repetitions (long-context tests are slow)

The `-ngl 99 -mmp 0 -fa 1` flags are **mandatory** on Strix Halo to avoid crashes; both test types above include them.
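
A concrete long-context invocation on a ROCm backend might look like this (container and model path illustrative):

```bash
# -ub 2048 per the ROCm recommendation above; Vulkan backends use -ub 512
toolbox run -c llama-rocm-7.2 llama-bench \
  -m data/models/Qwen3-14B-Q4_K_M.gguf \
  -ngl 99 -mmp 0 -fa 1 -p 2048 -n 32 -d 32768 -ub 2048 -r 3
```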
## Available Backends
| Backend (container) | Runtime | API | Notes |
|---------------------|---------|-----|-------|
| `llama-vulkan-radv` | Mesa RADV | Vulkan | Most stable, recommended default |
| `llama-vulkan-amdvlk` | AMDVLK | Vulkan | Fastest when it works, 2 GB buffer limit |
| `llama-rocm-6.4.4` | ROCm 6.4.4 | HIP | Proven stable |
| `llama-rocm-7.2` | ROCm 7.2 | HIP | Latest, compiler fixes applied |
| `llama-rocm7-nightlies` | ROCm 7 nightly | HIP | Experimental/development builds |
Containers are from [kyuz0/amd-strix-halo-toolboxes](https://github.com/kyuz0/amd-strix-halo-toolboxes). Set up with `make benchmark-setup`.
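
To sweep the same test across every backend, a simple loop over the container names works (model path illustrative):

```bash
for c in llama-vulkan-radv llama-vulkan-amdvlk llama-rocm-6.4.4 \
         llama-rocm-7.2 llama-rocm7-nightlies; do
  echo "=== $c ==="
  toolbox run -c "$c" llama-bench \
    -m data/models/Qwen3-4B-Q4_K_M.gguf -ngl 99 -mmp 0 -fa 1 -r 5
done
```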
## Workflow
```bash
# 1. Setup (one-time)
make benchmark-setup

# 2. Capture baseline (before optimization)
make benchmark-baseline

# 3. After optimizing, run again
make benchmark   # or: bin/benchmark run --tag post-opt

# 4. Compare
make benchmark-compare BEFORE=data/baselines/20260325-120000 AFTER=data/benchmarks/post-opt-20260326-100000
```
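
Run directories are timestamped, so listing them is the quickest way to find the `BEFORE`/`AFTER` paths for the compare step:

```bash
ls -1d data/baselines/*/ data/benchmarks/*/
```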
## Result Format
Each run produces a directory under `data/baselines/` or `data/benchmarks/`:
```
TIMESTAMP/
  system-state.json    # Full system audit snapshot
  summary.json         # Parsed results (model, backend, test, t/s)
  metrics.csv          # GPU/CPU metrics during the run
  *.log                # Raw llama-bench output per backend+model+test
```
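
To eyeball results without the compare tool, `summary.json` can be queried directly. A sketch, assuming it is a JSON array with the fields listed above (the actual key names may differ):

```bash
# Key names are assumptions based on the field list above; adjust to the
# real summary.json schema
jq -r '.[] | "\(.backend)\t\(.model)\t\(.test)\t\(.["t/s"])"' \
  data/benchmarks/post-opt-20260326-100000/summary.json
```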
### Comparison Output
```
Backend     | Model    | Test  | Before   | After    | Delta
vulkan-radv | qwen3-4b | pp512 | 548 t/s  | 612 t/s  | +11.7%
vulkan-radv | qwen3-4b | tg128 | 13.9 t/s | 15.2 t/s | +9.4%
```
Configuration changes between runs (VRAM, GTT, kernel params, tuned profile) are shown if `system-state.json` differs.
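
The delta is plain relative change. For the `pp512` row above:

```bash
# (612 - 548) / 548 * 100 = +11.7%
awk 'BEGIN { before = 548; after = 612; printf "%+.1f%%\n", (after - before) / before * 100 }'
```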
## Recommended Test Models
| Size | Model | File | Disk | Use Case |
|------|-------|------|------|----------|
| Small | Qwen3-4B | Q4_K_M.gguf | ~3 GB | Quick smoke tests |
| Medium | Qwen3-14B | Q4_K_M.gguf | ~9 GB | Standard benchmarks |
| Large | Qwen3-32B | Q4_K_M.gguf | ~20 GB | Memory pressure tests |
Place models in `data/models/`. The VRAM estimator from the [toolboxes project](https://github.com/kyuz0/amd-strix-halo-toolboxes) (`gguf-vram-estimator.py`) can help plan which models fit.
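
A sketch of running the estimator against a local model (invocation is an assumption; check the script's `--help` or the toolboxes README for actual options):

```bash
# Hypothetical usage: pass the GGUF file to estimate memory requirements
python3 gguf-vram-estimator.py data/models/Qwen3-32B-Q4_K_M.gguf
```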