# External References
Single source of truth for all external links used across this project.
## AMD Official
- ROCm Strix Halo Optimization Guide — BIOS, kernel params, GTT/TTM configuration
- ROCm System Optimization Index — General ROCm tuning
- ROCm Installation Guide (Linux) — Package installation
- AMD SMI Documentation — GPU monitoring API
- ROCm GitHub — Source and issue tracker
## Strix Halo Toolboxes (Donato Capitella)
A comprehensive community resource for Strix Halo LLM optimization.
- strix-halo-toolboxes.com — Documentation, benchmarks, guides
- GitHub: kyuz0/amd-strix-halo-toolboxes — Container images, benchmark scripts, VRAM estimator
- Benchmark Results Viewer — Interactive performance charts
## Community
- Strix Halo Wiki — AI Capabilities — Community benchmarks, model compatibility
- Level1Techs Forum — HP G1a Guide — Laptop-specific configuration
- Framework Community — GPU Performance Tests — Framework Desktop results
- LLM Tracker — Strix Halo — Centralized performance database
## Other Strix Halo Repos
- pablo-ross/strix-halo-gmktec-evo-x2 — GMKtec EVO X2 optimization
- kyuz0/amd-strix-halo-llm-finetuning — Fine-tuning guides (Gemma-3, Qwen-3)
## Monitoring Tools
- amdgpu_top — Detailed AMD GPU monitor with TUI, GUI, and JSON output modes
- nvtop — Cross-vendor GPU monitor
- btop — System resource monitor
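Before diving into any of these monitors, it can help to verify which are already installed. A minimal sketch (assuming the binaries are named as in their upstream projects, which matches most distro packages):

```shell
# Report which monitoring tools from the list above are on PATH.
check_tools() {
  for tool in "$@"; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool: installed"
    else
      echo "$tool: not found"
    fi
  done
}

check_tools amdgpu_top nvtop btop
```

Missing tools are typically available via the distro package manager or the projects' GitHub releases.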
## LLM Inference
- llama.cpp — LLM inference engine (Vulkan + ROCm)
- ollama — LLM runtime with model management
- vLLM — High-throughput serving
- llama-benchy — Multi-backend LLM benchmarking
## AMD GPU Profiling
- Radeon GPU Profiler (RGP) — Hardware-level Vulkan/HIP profiling
- Radeon GPU Analyzer (RGA) — Offline shader/kernel analysis