# Benchmarking

## What We Measure

All benchmarks use [llama-bench](https://github.com/ggml-org/llama.cpp) (part of llama.cpp) running inside toolbox containers. Two test types:

| Metric | Meaning | Test Params |
|--------|---------|-------------|
| **pp** (prompt processing) | How fast the model ingests input tokens | Default: 512 tokens |
| **tg** (token generation) | How fast the model produces output tokens | Default: 128 tokens |

Results are in **tokens/second (t/s)**. Higher is better.

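For reference, llama-bench prints one row per test. A typical result table (columns abridged, numbers purely illustrative) looks like this:

```
| model           | backend | test  |           t/s |
| --------------- | ------- | ----- | ------------: |
| qwen3 4B Q4_K_M | Vulkan  | pp512 | 548.21 ± 2.10 |
| qwen3 4B Q4_K_M | Vulkan  | tg128 |  13.94 ± 0.08 |
```

The `±` figure is the standard deviation across repetitions (`-r`).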
## Test Parameters

### Standard Test

```
-ngl 99 -mmp 0 -fa 1 -r 5
```

- `-ngl 99` — all layers on GPU
- `-mmp 0` — disable memory mapping (`--no-mmap`)
- `-fa 1` — flash attention enabled
- `-r 5` — 5 repetitions for statistical confidence

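Put together, a standard run looks like the following sketch. The model path is a placeholder for whichever GGUF you are testing:

```shell
# Standard benchmark: all layers on GPU, no mmap, flash attention, 5 reps.
# The model path below is a placeholder.
llama-bench -m data/models/Qwen3-14B-Q4_K_M.gguf \
  -ngl 99 -mmp 0 -fa 1 -r 5
```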
### Long-Context Test

```
-ngl 99 -mmp 0 -fa 1 -p 2048 -n 32 -d 32768 -ub SIZE -r 3
```

- `-p 2048` — 2048 prompt tokens
- `-n 32` — generate 32 tokens
- `-d 32768` — 32K context window
- `-ub SIZE` — micro-batch size (512 for Vulkan, 2048 for ROCm)
- `-r 3` — 3 repetitions (long-context tests are slow)


The `-fa 1 -mmp 0 -ngl 99` flags are **mandatory** on Strix Halo to avoid crashes (`-fa 1` = flash attention, `-mmp 0` = no memory mapping, `-ngl 99` = all layers on GPU).

## Available Backends

| Backend | Container | Technology | Notes |
|---------|-----------|------------|-------|
| `llama-vulkan-radv` | Mesa RADV | Vulkan | Most stable, recommended default |
| `llama-vulkan-amdvlk` | AMDVLK | Vulkan | Fastest when it works, 2GB buffer limit |
| `llama-rocm-6.4.4` | ROCm 6.4.4 | HIP | Proven stable |
| `llama-rocm-7.2.1` | ROCm 7.2.1 | HIP | Current stable (kernel 6.18.4+ patch) |
| `llama-rocm-7.2` | ROCm 7.2 | HIP | Deprecated — use 7.2.1 |
| `llama-rocm7-nightlies` | ROCm 7 nightly | HIP | Experimental/development builds |

Containers are from [kyuz0/amd-strix-halo-toolboxes](https://github.com/kyuz0/amd-strix-halo-toolboxes). Set up with `make benchmark-setup`.

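Since each backend lives in its own toolbox container, a manual run is wrapped in `toolbox run`. A minimal sketch, assuming the container names above and a placeholder model path:

```shell
# Run the standard test inside the RADV Vulkan toolbox container.
# Container name from the table above; model path is a placeholder.
toolbox run --container llama-vulkan-radv \
  llama-bench -m data/models/Qwen3-14B-Q4_K_M.gguf -ngl 99 -mmp 0 -fa 1 -r 5
```

The `bin/benchmark` wrapper used by the Makefile targets handles this per-backend wrapping for you.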
## Workflow

```bash
# 1. Setup (one-time)
make benchmark-setup

# 2. Capture baseline (before optimization)
make benchmark-baseline

# 3. After optimizing, run again
make benchmark   # or: bin/benchmark run --tag post-opt

# 4. Compare
make benchmark-compare BEFORE=data/baselines/20260325-120000 AFTER=data/benchmarks/post-opt-20260326-100000
```

## Result Format

Each run produces a directory under `data/baselines/` or `data/benchmarks/`:

```
TIMESTAMP/
  system-state.json   # Full system audit snapshot
  summary.json        # Parsed results (model, backend, test, t/s)
  metrics.csv         # GPU/CPU metrics during the run
  *.log               # Raw llama-bench output per backend+model+test
```

### Comparison Output

```
Backend     | Model    | Test  | Before   | After    | Delta
vulkan-radv | qwen3-4b | pp512 | 548 t/s  | 612 t/s  | +11.7%
vulkan-radv | qwen3-4b | tg128 | 13.9 t/s | 15.2 t/s | +9.4%
```

Configuration changes between runs (VRAM, GTT, kernel params, tuned profile) are shown if `system-state.json` differs.

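The Delta column is plain percentage change, (after − before) / before. A minimal sketch of the same arithmetic in shell, using the pp512 numbers from the sample above:

```shell
# Percent delta between before/after throughput (values from the sample row).
before=548
after=612
awk -v b="$before" -v a="$after" 'BEGIN { printf "%+.1f%%\n", (a - b) / b * 100 }'
# prints +11.7%
```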
## Recommended Test Models

| Size | Model | File | Disk | Use Case |
|------|-------|------|------|----------|
| Small | Qwen3-4B | Q4_K_M.gguf | ~3 GB | Quick smoke tests |
| Medium | Qwen3-14B | Q4_K_M.gguf | ~9 GB | Standard benchmarks |
| Large | Qwen3-32B | Q4_K_M.gguf | ~20 GB | Memory pressure tests |

Place models in `data/models/`. The VRAM estimator from the [toolboxes project](https://github.com/kyuz0/amd-strix-halo-toolboxes) (`gguf-vram-estimator.py`) can help plan which models fit.