feat: add Qwen3.5 model catalog and agentic evaluation framework
Models:
- configs/models.conf: catalog with Qwen3.5-35B-A3B (MoE, top pick), Qwen3.5-27B (dense), Qwen3-Coder-30B-A3B (agentic/coding)
- Updated benchmark setup to show catalog with download status
- docs/model-recommendations.md: memory planning, quantization guide

Agentic evaluation:
- scripts/agentic/setup.sh: installs inspect-ai, evalplus, bigcodebench in a Python venv
- scripts/agentic/run-eval.sh: runs evaluations against a local LLM server (ollama or llama.cpp). Suites: quick (HumanEval+IFEval), code (EvalPlus+BigCodeBench), tooluse (BFCL), full (all)
- bin/agentic: dispatcher with help
- docs/agentic-benchmarks.md: methodology, framework comparison, model recommendations for agentic use

Updated: Makefile (6 new targets), README, CLAUDE.md, docs/references.md

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -41,8 +41,14 @@ make verify # 9-point optimization checklist
bin/audit --json | python3 -m json.tool # Verify JSON output is valid
```
## Agentic Evaluation
Scripts in `scripts/agentic/` with dispatcher at `bin/agentic`. Uses a Python venv at `data/venv/`. Eval frameworks: inspect-ai (all-in-one), evalplus (HumanEval+/MBPP+), bigcodebench. All target an OpenAI-compatible endpoint (ollama or llama.cpp server). Model catalog at `configs/models.conf`.
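A minimal sketch of composing a run against a local endpoint. The `--base-url` flag and argument order are assumptions about the `bin/agentic` interface, not confirmed here; the suite names come from `scripts/agentic/run-eval.sh`:

```shell
# Compose an eval invocation against a local OpenAI-compatible server.
# NOTE: --base-url is a hypothetical flag; consult bin/agentic's help output.
SUITE="quick"                          # one of: quick | code | tooluse | full
BASE_URL="http://localhost:11434/v1"   # ollama's default OpenAI-compatible endpoint
CMD="bin/agentic $SUITE --base-url $BASE_URL"
echo "$CMD"
```

For a llama.cpp server, the base URL would typically be `http://localhost:8080/v1` instead.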
## External Resources
All external links are centralized in [docs/references.md](docs/references.md). Key ones:
- AMD ROCm Strix Halo guide (kernel params, GTT configuration)
- Donato Capitella toolboxes (container images, benchmarks, VRAM estimator)
- Qwen3.5 model family (GGUF quants by Unsloth)
- Agentic eval frameworks (Inspect AI, EvalPlus, BFCL, BigCodeBench)