CLAUDE.md
Claude Code context for Syndarix - AI-Powered Software Consulting Agency.
Built on PragmaStack. See AGENTS.md for base template context.
Syndarix Project Context
Vision
Syndarix is an autonomous platform that orchestrates specialized AI agents to deliver complete software solutions with minimal human intervention. It acts as a virtual consulting agency with AI agents playing roles like Product Owner, Architect, Engineers, QA, etc.
Repository
- URL: https://gitea.pragmazest.com/cardosofelipe/syndarix
- Issue Tracker: Gitea Issues (primary)
- CI/CD: Gitea Actions
Core Concepts
Agent Types & Instances:
- Agent Type = Template (base model, failover, expertise, personality)
- Agent Instance = Spawned from type, assigned to project
- Multiple instances of same type can work together
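To make the type/instance split concrete, here is a minimal Python sketch; the class and field names (`AgentType`, `AgentInstance`, etc.) are illustrative assumptions, not the actual Syndarix data model.

```python
# Hypothetical sketch only - the real Syndarix models may differ.
from dataclasses import dataclass, field


@dataclass
class AgentType:
    """Template: defines what an agent is, independent of any project."""
    name: str                      # e.g. "Architect"
    base_model: str                # primary LLM backing this agent type
    failover_model: str            # used if the base model is unavailable
    expertise: list[str] = field(default_factory=list)
    personality: str = ""


@dataclass
class AgentInstance:
    """Spawned from a type and assigned to a single project."""
    agent_type: AgentType
    project_id: str
    instance_id: str


# Multiple instances of the same type can work on the same project:
architect = AgentType(name="Architect", base_model="model-a", failover_model="model-b")
workers = [AgentInstance(architect, "proj-123", f"arch-{i}") for i in range(2)]
```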
Project Workflow:
- Requirements discovery with Product Owner agent
- Architecture spike (PO + BA + Architect brainstorm)
- Implementation planning and backlog creation
- Autonomous sprint execution with checkpoints
- Demo and client feedback
Autonomy Levels:
- FULL_CONTROL: Approve every action
- MILESTONE: Approve at sprint boundaries
- AUTONOMOUS: Approve only major decisions
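Purely as an illustration (the real approval logic is not defined in this document), these levels could map to an enum plus a checkpoint predicate:

```python
# Illustrative sketch - the actual Syndarix enum and approval logic may differ.
from enum import Enum


class AutonomyLevel(Enum):
    FULL_CONTROL = "full_control"   # every agent action needs approval
    MILESTONE = "milestone"         # approval at sprint boundaries
    AUTONOMOUS = "autonomous"       # approval only for major decisions


def requires_approval(level: AutonomyLevel, is_sprint_boundary: bool, is_major_decision: bool) -> bool:
    """Decide whether a human checkpoint is needed for the current action."""
    if level is AutonomyLevel.FULL_CONTROL:
        return True
    if level is AutonomyLevel.MILESTONE:
        return is_sprint_boundary or is_major_decision
    return is_major_decision  # AUTONOMOUS
```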
MCP-First Architecture: All integrations via Model Context Protocol servers with explicit scoping:
# All tools take project_id for scoping
search_knowledge(project_id="proj-123", query="auth flow")
create_issue(project_id="proj-123", title="Add login")
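A sketch of how such a project-scoped tool could be registered, assuming the MCP Python SDK's `FastMCP` server API; the knowledge-base lookup is a placeholder stub, not Syndarix code:

```python
# Sketch only - assumes the MCP Python SDK (FastMCP); search_kb() is a stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("syndarix-knowledge")


def search_kb(project_id: str, query: str) -> list[str]:
    """Placeholder for the real (planned pgvector-backed) knowledge lookup."""
    return [f"[{project_id}] no results for: {query}"]


@mcp.tool()
def search_knowledge(project_id: str, query: str) -> list[str]:
    """Search the knowledge base; project_id is mandatory for scoping."""
    if not project_id:
        raise ValueError("project_id is required - all tools are project-scoped")
    return search_kb(project_id=project_id, query=query)
```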
Syndarix-Specific Directories
docs/
├── requirements/ # Requirements documents
├── architecture/ # Architecture documentation
├── adrs/ # Architecture Decision Records
└── spikes/ # Spike research documents
Current Phase
Architecture Spikes - Validating key decisions before implementation.
Key Extensions to Add (from PragmaStack base)
- Celery + Redis for agent job queue
- WebSocket/SSE for real-time updates
- pgvector for RAG knowledge base
- MCP server integration layer
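None of these extensions exist yet; as a rough sketch of the first item only, a Celery app backed by Redis for agent jobs might be wired up like this (module, broker URL, and task names are hypothetical):

```python
# Hypothetical sketch of the planned Celery + Redis agent job queue.
from celery import Celery

celery_app = Celery(
    "syndarix",
    broker="redis://localhost:6379/0",   # assumed local Redis broker
    backend="redis://localhost:6379/1",  # assumed result backend
)


@celery_app.task(name="agents.run_agent_step")
def run_agent_step(agent_instance_id: str, project_id: str) -> dict:
    """Execute one unit of work for an agent instance (placeholder logic)."""
    # A real implementation would load the agent, call the model, persist results.
    return {"agent_instance_id": agent_instance_id, "project_id": project_id, "status": "done"}
```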
PragmaStack Development Guidelines
The following guidelines are inherited from PragmaStack and remain applicable.
Claude Code-Specific Guidance
Critical User Preferences
File Operations - NEVER Use Heredoc/Cat Append
ALWAYS use the Read/Write/Edit tools instead of `cat >> file << EOF` commands.
Heredoc and append commands trigger manual approval dialogs and disrupt the workflow.
# WRONG ❌
cat >> file.txt << EOF
content
EOF
# CORRECT ✅ - Use Read, then Write tools
Work Style
- User prefers autonomous operation without frequent interruptions
- Ask for batch permissions upfront for long work sessions
- Work independently, document decisions clearly
- Only use emojis if the user explicitly requests it
When Working with This Stack
Dependency Management:
- Backend uses uv (modern Python package manager), not pip
- Always use the `uv run` prefix: `IS_TEST=True uv run pytest`
- Or use Makefile commands: `make test`, `make install-dev`
- Add dependencies: `uv add <package>` or `uv add --dev <package>`
Database Migrations:
- Use the `migrate.py` helper script, not Alembic directly
- Generate + apply: `python migrate.py auto "message"`
- Never commit migrations without testing them first
- Check current state: `python migrate.py current`
Frontend API Client Generation:
- Run `npm run generate:api` after backend schema changes
- Client is auto-generated from the OpenAPI spec
- Located in `frontend/src/lib/api/generated/`
- NEVER manually edit generated files
Testing Commands:
- Backend unit/integration: `IS_TEST=True uv run pytest` (always prefix with `IS_TEST=True`)
- Backend E2E (requires Docker): `make test-e2e`
- Frontend unit: `npm test`
- Frontend E2E: `npm run test:e2e`
- Use `make test` or `make test-cov` in backend for convenience
Backend E2E Testing (requires Docker):
- Install deps: `make install-e2e`
- Run all E2E tests: `make test-e2e`
- Run schema tests only: `make test-e2e-schema`
- Run all tests: `make test-all` (unit + E2E)
- Uses Testcontainers (real PostgreSQL) + Schemathesis (OpenAPI contract testing)
- Markers: `@pytest.mark.e2e`, `@pytest.mark.postgres`, `@pytest.mark.schemathesis` (see the example after this list)
- See `backend/docs/E2E_TESTING.md` for the complete guide
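As a hypothetical illustration of marker usage (the endpoint path and `e2e_client` fixture are assumptions, not taken from the real suite):

```python
# Hypothetical example of marker usage; run via `make test-e2e`.
import pytest


@pytest.mark.e2e
@pytest.mark.postgres
@pytest.mark.asyncio
async def test_health_endpoint_against_real_postgres(e2e_client):  # fixture name assumed
    response = await e2e_client.get("/api/v1/health")  # illustrative endpoint
    assert response.status_code == 200
```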
🔴 CRITICAL: Auth Store Dependency Injection Pattern
ALWAYS use useAuth() from AuthContext, NEVER import useAuthStore directly!
// ❌ WRONG - Bypasses dependency injection
import { useAuthStore } from '@/lib/stores/authStore';
const { user, isAuthenticated } = useAuthStore();
// ✅ CORRECT - Uses dependency injection
import { useAuth } from '@/lib/auth/AuthContext';
const { user, isAuthenticated } = useAuth();
Why This Matters:
- E2E tests inject mock stores via `window.__TEST_AUTH_STORE__`
- Unit tests inject via `<AuthProvider store={mockStore}>`
- Direct `useAuthStore` imports bypass this injection → tests fail
- ESLint will catch violations (added Nov 2025)
Exceptions:
- `AuthContext.tsx` - DI boundary, legitimately needs the real store
- `client.ts` - Non-React context, uses dynamic import + `__TEST_AUTH_STORE__` check
E2E Test Best Practices
When writing or fixing Playwright tests:
Navigation Pattern:
// ✅ CORRECT - Use Promise.all for Next.js Link clicks
await Promise.all([
page.waitForURL('/target', { timeout: 10000 }),
link.click()
]);
Selectors:
- Use ID-based selectors for validation errors: `#email-error`
- Error IDs use dashes, not underscores: `#new-password-error`
- Target `.border-destructive[role="alert"]` to avoid Next.js route announcer conflicts
- Avoid generic `[role="alert"]`, which matches multiple elements
URL Assertions:
// ✅ Use regex to handle query params
await expect(page).toHaveURL(/\/auth\/login/);
// ❌ Don't use exact strings (fails with query params)
await expect(page).toHaveURL('/auth/login');
Configuration:
- Uses 12 workers in non-CI mode (`playwright.config.ts`)
- Reduces to 2 workers in CI for stability
- Tests are designed to be non-flaky with proper waits
Important Implementation Details
Authentication Testing:
- Backend fixtures in `tests/conftest.py`:
  - `async_test_db`: Fresh SQLite per test
  - `async_test_user` / `async_test_superuser`: Pre-created users
  - `user_token` / `superuser_token`: Access tokens for API calls
- Always use `@pytest.mark.asyncio` for async tests
- Use `@pytest_asyncio.fixture` for async fixtures (see the sketch below)
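A self-contained sketch of that async fixture/test pattern (the fixture here is a toy, not one of the `conftest.py` fixtures listed above):

```python
# Illustrative only - mirrors the async fixture/test pattern described above.
import asyncio
import pytest
import pytest_asyncio


@pytest_asyncio.fixture
async def sample_value():
    await asyncio.sleep(0)   # placeholder async setup
    yield 42                 # what the test receives
    await asyncio.sleep(0)   # placeholder async teardown


@pytest.mark.asyncio
async def test_async_fixture_usage(sample_value):
    assert sample_value == 42
```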
Database Testing:
# Mock database exceptions correctly
import pytest
from unittest.mock import patch, AsyncMock
from sqlalchemy.exc import OperationalError

async def mock_commit():
    raise OperationalError("Connection lost", {}, Exception())

with patch.object(session, 'commit', side_effect=mock_commit):
    with patch.object(session, 'rollback', new_callable=AsyncMock) as mock_rollback:
        with pytest.raises(OperationalError):
            await crud_method(session, obj_in=data)
        mock_rollback.assert_called_once()
Frontend Component Development:
- Follow design system docs in `frontend/docs/design-system/`
- Read `08-ai-guidelines.md` for AI code generation rules
- Use parent-controlled spacing (see `04-spacing-philosophy.md`)
- WCAG AA compliance required (see `07-accessibility.md`)
Security Considerations:
- Backend has comprehensive security tests (JWT attacks, session hijacking)
- Never skip security headers in production
- Rate limiting is configured in route decorators: `@limiter.limit("10/minute")` (see the sketch after this list)
- Session revocation is database-backed, not just JWT expiry
Common Workflows Guidance
When Adding a New Feature:
- Start with backend schema and CRUD
- Implement API route with proper authorization
- Write backend tests (aim for >90% coverage)
- Generate the frontend API client: `npm run generate:api`
- Implement frontend components
- Write frontend unit tests
- Add E2E tests for critical flows
- Update relevant documentation
When Fixing Tests:
- Backend: Check test database isolation and async fixture usage
- Frontend unit: Verify mocking of `useAuth()`, not `useAuthStore`
- E2E: Use the `Promise.all()` pattern and regex URL assertions
When Debugging:
- Backend: Check that the `IS_TEST=True` environment variable is set
- Frontend: Run `npm run type-check` first
- E2E: Use `npm run test:e2e:debug` for step-by-step debugging
- Check logs: Backend has detailed error logging
Demo Mode (Frontend-Only Showcase):
- Enable: `echo "NEXT_PUBLIC_DEMO_MODE=true" > frontend/.env.local`
- Uses MSW (Mock Service Worker) to intercept API calls in the browser
- Zero backend required - perfect for Vercel deployments
- Fully Automated: MSW handlers are auto-generated from the OpenAPI spec
- Run `npm run generate:api` → updates both the API client AND MSW handlers - no manual synchronization needed
- Demo credentials (any password ≥8 chars works):
  - User: `demo@example.com` / `DemoPass123`
  - Admin: `admin@example.com` / `AdminPass123`
- Safe: MSW never runs during tests (Jest or Playwright)
- Coverage: Mock files excluded from linting and coverage
- Documentation: `frontend/docs/DEMO_MODE.md` for the complete guide
Tool Usage Preferences
Prefer specialized tools over bash:
- Use Read/Write/Edit tools for file operations
- Never use `cat`, `echo >`, or heredoc for file manipulation
- Use the Task tool with `subagent_type=Explore` for codebase exploration
- Use the Grep tool for code search, not bash `grep`
When to use parallel tool calls:
- Independent git commands: `git status`, `git diff`, `git log`
- Running multiple test suites simultaneously
- Independent validation steps
Custom Skills
No Claude Code Skills installed yet. To create one, invoke the built-in "skill-creator" skill.
Potential skill ideas for this project:
- API endpoint generator workflow (schema → CRUD → route → tests → frontend client)
- Component generator with design system compliance
- Database migration troubleshooting helper
- Test coverage analyzer and improvement suggester
- E2E test generator for new features
Additional Resources
Comprehensive Documentation:
- AGENTS.md - Framework-agnostic AI assistant context
- README.md - User-facing project overview
- `backend/docs/` - Backend architecture, coding standards, common pitfalls
- `frontend/docs/design-system/` - Complete design system guide
API Documentation (when running):
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
- OpenAPI JSON: http://localhost:8000/api/v1/openapi.json
Testing Documentation:
- Backend tests: `backend/tests/` (97% coverage)
- Frontend E2E: `frontend/e2e/README.md`
- Design system: `frontend/docs/design-system/08-ai-guidelines.md`
For project architecture, development commands, and general context, see AGENTS.md.