# Compare commits

`dev` ... `5594655fba` (6 commits)

| SHA1 |
|---|
| `5594655fba` |
| `ebd307cab4` |
| `6e3cdebbfb` |
| `a6a336b66e` |
| `9901dc7f51` |
| `ac64d9505e` |

## CLAUDE.md (67 lines changed)
@@ -1,8 +1,71 @@
 # CLAUDE.md

-Claude Code context for FastAPI + Next.js Full-Stack Template.
+Claude Code context for **Syndarix** - AI-Powered Software Consulting Agency.

-**See [AGENTS.md](./AGENTS.md) for project context, architecture, and development commands.**
+**Built on PragmaStack.** See [AGENTS.md](./AGENTS.md) for base template context.
+
+---
+
+## Syndarix Project Context
+
+### Vision
+
+Syndarix is an autonomous platform that orchestrates specialized AI agents to deliver complete software solutions with minimal human intervention. It acts as a virtual consulting agency with AI agents playing roles like Product Owner, Architect, Engineers, QA, etc.
+
+### Repository
+
+- **URL:** https://gitea.pragmazest.com/cardosofelipe/syndarix
+- **Issue Tracker:** Gitea Issues (primary)
+- **CI/CD:** Gitea Actions
+
+### Core Concepts
+
+**Agent Types & Instances:**
+
+- Agent Type = Template (base model, failover, expertise, personality)
+- Agent Instance = Spawned from type, assigned to project
+- Multiple instances of same type can work together
+
+**Project Workflow:**
+
+1. Requirements discovery with Product Owner agent
+2. Architecture spike (PO + BA + Architect brainstorm)
+3. Implementation planning and backlog creation
+4. Autonomous sprint execution with checkpoints
+5. Demo and client feedback
+
+**Autonomy Levels:**
+
+- `FULL_CONTROL`: Approve every action
+- `MILESTONE`: Approve sprint boundaries
+- `AUTONOMOUS`: Only major decisions
+
+**MCP-First Architecture:**
+
+All integrations via Model Context Protocol servers with explicit scoping:
+
+```python
+# All tools take project_id for scoping
+search_knowledge(project_id="proj-123", query="auth flow")
+create_issue(project_id="proj-123", title="Add login")
+```
+
+### Syndarix-Specific Directories
+
+```
+docs/
+├── requirements/   # Requirements documents
+├── architecture/   # Architecture documentation
+├── adrs/           # Architecture Decision Records
+└── spikes/         # Spike research documents
+```
+
+### Current Phase
+
+**Architecture Spikes** - Validating key decisions before implementation.
+
+### Key Extensions to Add (from PragmaStack base)
+
+- Celery + Redis for agent job queue
+- WebSocket/SSE for real-time updates
+- pgvector for RAG knowledge base
+- MCP server integration layer
+
+---
+
+## PragmaStack Development Guidelines
+
+*The following guidelines are inherited from PragmaStack and remain applicable.*
+
 ## Claude Code-Specific Guidance
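The project-scoped tool convention shown in the CLAUDE.md snippet above can be sketched in plain Python. This is a minimal illustration, not the actual Syndarix implementation: the `project_scoped` decorator and the in-memory stores are hypothetical stand-ins for the real MCP backends.

```python
from functools import wraps

# Hypothetical stand-ins for the real MCP backends; names are illustrative.
PROJECTS = {"proj-123"}
KNOWLEDGE = {"proj-123": ["auth flow uses JWT dual tokens"]}
ISSUES = {"proj-123": []}

def project_scoped(tool):
    """Reject any tool call that does not name a known project."""
    @wraps(tool)
    def wrapper(*, project_id, **kwargs):
        if project_id not in PROJECTS:
            raise ValueError(f"unknown project: {project_id}")
        return tool(project_id=project_id, **kwargs)
    return wrapper

@project_scoped
def search_knowledge(*, project_id, query):
    # Naive keyword match standing in for real RAG retrieval.
    terms = query.lower().split()
    return [d for d in KNOWLEDGE[project_id] if any(t in d for t in terms)]

@project_scoped
def create_issue(*, project_id, title):
    ISSUES[project_id].append(title)
    return {"project_id": project_id, "title": title}

print(search_knowledge(project_id="proj-123", query="auth flow"))
```

The point of the pattern is that scoping is enforced once, at the tool boundary, rather than re-checked inside every tool body.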
## README.md (724 lines changed)

@@ -1,659 +1,175 @@
-# <img src="frontend/public/logo.svg" alt="PragmaStack" width="32" height="32" style="vertical-align: middle" /> PragmaStack
+# Syndarix

-> **The Pragmatic Full-Stack Template. Production-ready, security-first, and opinionated.**
+> **Your AI-Powered Software Consulting Agency**
+>
+> An autonomous platform that orchestrates specialized AI agents to deliver complete software solutions with minimal human intervention.

-[](./backend/tests)
-[](./frontend/tests)
-[](./frontend/e2e)
+[](https://gitea.pragmazest.com/cardosofelipe/fast-next-template)
 [](./LICENSE)
-[](./CONTRIBUTING.md)

 ---

-## Why PragmaStack?
+## Vision

-Building a modern full-stack application often leads to "analysis paralysis" or "boilerplate fatigue". You spend weeks setting up authentication, testing, and linting before writing a single line of business logic.
+Syndarix transforms the software development lifecycle by providing a **virtual consulting team** of AI agents that collaboratively plan, design, implement, test, and deliver complete software solutions.

-**PragmaStack cuts through the noise.**
+**The Problem:** Even with AI coding assistants, developers spend as much time managing AI as doing the work themselves. Context switching, babysitting, and knowledge fragmentation limit productivity.

-We provide a **pragmatic**, opinionated foundation that prioritizes:
+**The Solution:** A structured, autonomous agency where specialized AI agents handle different roles (Product Owner, Architect, Engineers, QA, etc.) with proper workflows, reviews, and quality gates.

-- **Speed**: Ship features, not config files.
-- **Robustness**: Security and testing are not optional.
-- **Clarity**: Code that is easy to read and maintain.

-Whether you're building a SaaS, an internal tool, or a side project, PragmaStack gives you a solid starting point without the bloat.

 ---
-## ✨ Features
+## Key Features

-### 🔐 **Authentication & Security**
+### Multi-Agent Orchestration

-- JWT-based authentication with access + refresh tokens
-- **OAuth/Social Login** (Google, GitHub) with PKCE support
-- **OAuth 2.0 Authorization Server** (MCP-ready) for third-party integrations
-- Session management with device tracking and revocation
-- Password reset flow (email integration ready)
-- Secure password hashing (bcrypt)
-- CSRF protection, rate limiting, and security headers
-- Comprehensive security tests (JWT algorithm attacks, session hijacking, privilege escalation)
+- Configurable agent **types** with base model, failover, expertise, and personality
+- Spawn multiple **instances** from the same type (e.g., Dave, Ellis, Kate as Software Developers)
+- Agent-to-agent communication and collaboration
+- Per-instance customization with domain-specific knowledge

-### 🔌 **OAuth Provider Mode (MCP Integration)**
+### Complete SDLC Support

-Full OAuth 2.0 Authorization Server for Model Context Protocol (MCP) and third-party clients:
-- **RFC 7636**: Authorization Code Flow with PKCE (S256 only)
-- **RFC 8414**: Server metadata discovery at `/.well-known/oauth-authorization-server`
-- **RFC 7662**: Token introspection endpoint
-- **RFC 7009**: Token revocation endpoint
-- **JWT access tokens**: Self-contained, configurable lifetime
-- **Opaque refresh tokens**: Secure rotation, database-backed revocation
-- **Consent management**: Users can review and revoke app permissions
-- **Client management**: Admin endpoints for registering OAuth clients
-- **Scopes**: `openid`, `profile`, `email`, `read:users`, `write:users`, `admin`
+- **Requirements Discovery** → **Architecture Spike** → **Implementation Planning**
+- **Sprint Management** with automated ceremonies
+- **Issue Tracking** with Epic/Story/Task hierarchy
+- **Git Integration** with proper branch/PR workflows
+- **CI/CD Pipelines** with automated testing

-### 👥 **Multi-Tenancy & Organizations**
+### Configurable Autonomy

-- Full organization system with role-based access control (Owner, Admin, Member)
-- Invite/remove members, manage permissions
-- Organization-scoped data access
-- User can belong to multiple organizations
+- From `FULL_CONTROL` (approve everything) to `AUTONOMOUS` (only major milestones)
+- Client can intervene at any point
+- Transparent progress visibility

-### 🛠️ **Admin Panel**
+### MCP-First Architecture

-- Complete user management (CRUD, activate/deactivate, bulk operations)
-- Organization management (create, edit, delete, member management)
-- Session monitoring across all users
-- Real-time statistics dashboard
-- Admin-only routes with proper authorization
+- All integrations via **Model Context Protocol (MCP)** servers
+- Unified Knowledge Base with project/agent scoping
+- Git providers (Gitea, GitHub, GitLab) via MCP
+- Extensible through custom MCP tools

-### 🎨 **Modern Frontend**
+### Project Complexity Wizard

-- Next.js 16 with App Router and React 19
-- **PragmaStack Design System** built on shadcn/ui + TailwindCSS
-- Pre-configured theme with dark mode support (coming soon)
-- Responsive, accessible components (WCAG AA compliant)
-- Rich marketing landing page with animated components
-- Live component showcase and documentation at `/dev`
+- **Script** → Minimal process, no repo needed
+- **Simple** → Single sprint, basic backlog
+- **Medium/Complex** → Full AGILE workflow with multiple sprints
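The three autonomy levels added under Configurable Autonomy imply an approval gate somewhere in the orchestrator. A minimal sketch of how such a gate might look; the action names and the exact gating policy here are assumptions, not taken from the Syndarix codebase:

```python
from enum import Enum

class Autonomy(Enum):
    FULL_CONTROL = "FULL_CONTROL"  # approve every action
    MILESTONE = "MILESTONE"        # approve sprint boundaries
    AUTONOMOUS = "AUTONOMOUS"      # only major decisions

# Hypothetical action categories.
MILESTONE_ACTIONS = {"sprint_start", "sprint_end"}
MAJOR_ACTIONS = {"architecture_change", "scope_change"}

def needs_client_approval(level: Autonomy, action: str) -> bool:
    """Decide whether the client must sign off before an agent proceeds."""
    if level is Autonomy.FULL_CONTROL:
        return True
    if level is Autonomy.MILESTONE:
        return action in MILESTONE_ACTIONS | MAJOR_ACTIONS
    return action in MAJOR_ACTIONS  # AUTONOMOUS

print(needs_client_approval(Autonomy.MILESTONE, "commit_code"))  # False
```

Under this reading, "the client can intervene at any point" is orthogonal to the gate: the level only controls which actions *block* on approval.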

-### 🌍 **Internationalization (i18n)**
-- Built-in multi-language support with next-intl v4
-- Locale-based routing (`/en/*`, `/it/*`)
-- Seamless language switching with LocaleSwitcher component
-- SEO-friendly URLs and metadata per locale
-- Translation files for English and Italian (easily extensible)
-- Type-safe translations throughout the app

-### 🎯 **Content & UX Features**
-- **Toast notifications** with Sonner for elegant user feedback
-- **Smooth animations** powered by Framer Motion
-- **Markdown rendering** with syntax highlighting (GitHub Flavored Markdown)
-- **Charts and visualizations** ready with Recharts
-- **SEO optimization** with dynamic sitemap and robots.txt generation
-- **Session tracking UI** with device information and revocation controls

-### 🧪 **Comprehensive Testing**
-- **Backend Testing**: ~97% unit test coverage
-  - Unit, integration, and security tests
-  - Async database testing with SQLAlchemy
-  - API endpoint testing with fixtures
-  - Security vulnerability tests (JWT attacks, session hijacking, privilege escalation)
-- **Frontend Unit Tests**: ~97% coverage with Jest
-  - Component testing
-  - Hook testing
-  - Utility function testing
-- **End-to-End Tests**: Playwright with zero flaky tests
-  - Complete user flows (auth, navigation, settings)
-  - Parallel execution for speed
-  - Visual regression testing ready

-### 📚 **Developer Experience**
-- Auto-generated TypeScript API client from OpenAPI spec
-- Interactive API documentation (Swagger + ReDoc)
-- Database migrations with Alembic helper script
-- Hot reload in development for both frontend and backend
-- Comprehensive code documentation and design system docs
-- Live component playground at `/dev` with code examples
-- Docker support for easy deployment
-- VSCode workspace settings included

-### 📊 **Ready for Production**
-- Docker + docker-compose setup
-- Environment-based configuration
-- Database connection pooling
-- Error handling and logging
-- Health check endpoints
-- Production security headers
-- Rate limiting on sensitive endpoints
-- SEO optimization with dynamic sitemaps and robots.txt
-- Multi-language SEO with locale-specific metadata
-- Performance monitoring and bundle analysis

 ---
-## 📸 Screenshots
+## Technology Stack

-<details>
-<summary>Click to view screenshots</summary>
+Built on [PragmaStack](https://gitea.pragmazest.com/cardosofelipe/fast-next-template):

-### Landing Page
-### Authentication
-### Admin Dashboard
-### Design System
-</details>
+| Component | Technology |
+|-----------|------------|
+| Backend | FastAPI 0.115+ (Python 3.11+) |
+| Frontend | Next.js 16 (React 19) |
+| Database | PostgreSQL 15+ with pgvector |
+| ORM | SQLAlchemy 2.0 |
+| State Management | Zustand + TanStack Query |
+| UI | shadcn/ui + Tailwind 4 |
+| Auth | JWT dual-token + OAuth 2.0 |
+| Testing | pytest + Jest + Playwright |
+
+### Syndarix Extensions
+
+| Component | Technology |
+|-----------|------------|
+| Task Queue | Celery + Redis |
+| Real-time | FastAPI WebSocket / SSE |
+| Vector DB | pgvector (PostgreSQL extension) |
+| MCP SDK | Anthropic MCP SDK |

 ---
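The planned Task Queue extension (Celery + Redis) hands agent jobs to background workers. A broker-free, stdlib-only stand-in for that pattern, kept deliberately tiny; the job shape and task name are hypothetical:

```python
import queue
import threading

# Stand-in for the planned Celery + Redis agent job queue: jobs are
# enqueued with their project scope and picked up by a worker thread.
jobs: "queue.Queue[dict]" = queue.Queue()
results: list = []

def worker() -> None:
    while True:
        job = jobs.get()
        if job is None:  # sentinel: shut the worker down
            break
        # A real worker would dispatch to an agent instance here.
        results.append({"project_id": job["project_id"],
                        "status": "done",
                        "task": job["task"]})

t = threading.Thread(target=worker)
t.start()
jobs.put({"project_id": "proj-123", "task": "run_architecture_spike"})
jobs.put(None)
t.join()
print(results)
```

With Celery the enqueue side would be a `delay()` call and Redis would replace the in-process `Queue`, but the scoping and completion-reporting shape stays the same.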

-## 🎭 Demo Mode
+## Project Status

-**Try the frontend without a backend!** Perfect for:
+**Phase:** Architecture & Planning

-- **Free deployment** on Vercel (no backend costs)
-- **Portfolio showcasing** with live demos
-- **Client presentations** without infrastructure setup
+See [docs/requirements/](./docs/requirements/) for the comprehensive requirements document.
+
+### Current Milestones
+
+- [x] Fork PragmaStack as foundation
+- [x] Create requirements document
+- [ ] Execute architecture spikes
+- [ ] Create ADRs for key decisions
+- [ ] Begin MVP implementation
+
+---
+
+## Documentation
+
+- [Requirements Document](./docs/requirements/SYNDARIX_REQUIREMENTS.md)
+- [Architecture Decisions](./docs/adrs/) (coming soon)
+- [Spike Research](./docs/spikes/) (coming soon)
+- [Architecture Overview](./docs/architecture/) (coming soon)
+
+---
+
+## Getting Started
+
+### Prerequisites
+
+- Docker & Docker Compose
+- Node.js 20+
+- Python 3.11+
+- PostgreSQL 15+ (or use Docker)
 ### Quick Start

-```bash
-cd frontend
-echo "NEXT_PUBLIC_DEMO_MODE=true" > .env.local
-npm run dev
-```

-**Demo Credentials:**
-- Regular user: `demo@example.com` / `DemoPass123`
-- Admin user: `admin@example.com` / `AdminPass123`

-Demo mode uses [Mock Service Worker (MSW)](https://mswjs.io/) to intercept API calls in the browser. Your code remains unchanged - the same components work with both real and mocked backends.

-**Key Features:**
-- ✅ Zero backend required
-- ✅ All features functional (auth, admin, stats)
-- ✅ Realistic network delays and errors
-- ✅ Does NOT interfere with tests (97%+ coverage maintained)
-- ✅ One-line toggle: `NEXT_PUBLIC_DEMO_MODE=true`

-📖 **[Complete Demo Mode Documentation](./frontend/docs/DEMO_MODE.md)**

----

-## 🚀 Tech Stack

-### Backend
-- **[FastAPI](https://fastapi.tiangolo.com/)** - Modern async Python web framework
-- **[SQLAlchemy 2.0](https://www.sqlalchemy.org/)** - Powerful ORM with async support
-- **[PostgreSQL](https://www.postgresql.org/)** - Robust relational database
-- **[Alembic](https://alembic.sqlalchemy.org/)** - Database migrations
-- **[Pydantic v2](https://docs.pydantic.dev/)** - Data validation with type hints
-- **[pytest](https://pytest.org/)** - Testing framework with async support

-### Frontend
-- **[Next.js 16](https://nextjs.org/)** - React framework with App Router
-- **[React 19](https://react.dev/)** - UI library
-- **[TypeScript](https://www.typescriptlang.org/)** - Type-safe JavaScript
-- **[TailwindCSS](https://tailwindcss.com/)** - Utility-first CSS framework
-- **[shadcn/ui](https://ui.shadcn.com/)** - Beautiful, accessible component library
-- **[next-intl](https://next-intl.dev/)** - Internationalization (i18n) with type safety
-- **[TanStack Query](https://tanstack.com/query)** - Powerful data fetching/caching
-- **[Zustand](https://zustand-demo.pmnd.rs/)** - Lightweight state management
-- **[Framer Motion](https://www.framer.com/motion/)** - Production-ready animation library
-- **[Sonner](https://sonner.emilkowal.ski/)** - Beautiful toast notifications
-- **[Recharts](https://recharts.org/)** - Composable charting library
-- **[React Markdown](https://github.com/remarkjs/react-markdown)** - Markdown rendering with GFM support
-- **[Playwright](https://playwright.dev/)** - End-to-end testing

-### DevOps
-- **[Docker](https://www.docker.com/)** - Containerization
-- **[docker-compose](https://docs.docker.com/compose/)** - Multi-container orchestration
-- **GitHub Actions** (coming soon) - CI/CD pipelines

----

-## 📋 Prerequisites

-- **Docker & Docker Compose** (recommended) - [Install Docker](https://docs.docker.com/get-docker/)
-- **OR manually:**
-  - Python 3.12+
-  - Node.js 18+ (Node 20+ recommended)
-  - PostgreSQL 15+

----

-## 🏃 Quick Start (Docker)

-The fastest way to get started is with Docker:

 ```bash
 # Clone the repository
-git clone https://github.com/cardosofelipe/pragma-stack.git
-cd fast-next-template
+git clone https://gitea.pragmazest.com/cardosofelipe/syndarix.git
+cd syndarix

-# Copy environment file
+# Copy environment template
 cp .env.template .env

-# Start all services (backend, frontend, database)
-docker-compose up
+# Start development environment
+docker-compose -f docker-compose.dev.yml up -d

-# In another terminal, run database migrations
-docker-compose exec backend alembic upgrade head
+# Run database migrations
+make migrate

-# Create first superuser (optional)
-docker-compose exec backend python -c "from app.init_db import init_db; import asyncio; asyncio.run(init_db())"
+# Start the development servers
+make dev
-```
-**That's it! 🎉**

-- Frontend: http://localhost:3000
-- Backend API: http://localhost:8000
-- API Docs: http://localhost:8000/docs

-Default superuser credentials:
-- Email: `admin@example.com`
-- Password: `admin123`

-**⚠️ Change these immediately in production!**

----

-## 🛠️ Manual Setup (Development)

-### Backend Setup

-```bash
-cd backend
-
-# Create virtual environment
-python -m venv .venv
-source .venv/bin/activate  # On Windows: .venv\Scripts\activate
-
-# Install dependencies
-pip install -r requirements.txt
-
-# Setup environment
-cp .env.example .env
-# Edit .env with your database credentials
-
-# Run migrations
-alembic upgrade head
-
-# Initialize database with first superuser
-python -c "from app.init_db import init_db; import asyncio; asyncio.run(init_db())"
-
-# Start development server
-uvicorn app.main:app --reload --host 0.0.0.0 --port 8000
-```

-### Frontend Setup

-```bash
-cd frontend
-
-# Install dependencies
-npm install
-
-# Setup environment
-cp .env.local.example .env.local
-# Edit .env.local with your backend URL
-
-# Generate API client
-npm run generate:api
-
-# Start development server
-npm run dev
-```

-Visit http://localhost:3000 to see your app!

----

-## 📂 Project Structure

-```
-├── backend/                # FastAPI backend
-│   ├── app/
-│   │   ├── api/            # API routes and dependencies
-│   │   ├── core/           # Core functionality (auth, config, database)
-│   │   ├── crud/           # Database operations
-│   │   ├── models/         # SQLAlchemy models
-│   │   ├── schemas/        # Pydantic schemas
-│   │   ├── services/       # Business logic
-│   │   └── utils/          # Utilities
-│   ├── tests/              # Backend tests (97% coverage)
-│   ├── alembic/            # Database migrations
-│   └── docs/               # Backend documentation
-│
-├── frontend/               # Next.js frontend
-│   ├── src/
-│   │   ├── app/            # Next.js App Router pages
-│   │   ├── components/     # React components
-│   │   ├── lib/            # Libraries and utilities
-│   │   │   ├── api/        # API client (auto-generated)
-│   │   │   └── stores/     # Zustand stores
-│   │   └── hooks/          # Custom React hooks
-│   ├── e2e/                # Playwright E2E tests
-│   ├── tests/              # Unit tests (Jest)
-│   └── docs/               # Frontend documentation
-│       └── design-system/  # Comprehensive design system docs
-│
-├── docker-compose.yml      # Docker orchestration
-├── docker-compose.dev.yml  # Development with hot reload
-└── README.md               # You are here!
 ```

 ---
-## 🧪 Testing
+## Architecture Overview

-This template takes testing seriously with comprehensive coverage across all layers:

-### Backend Unit & Integration Tests

-**High coverage (~97%)** across all critical paths including security-focused tests.

-```bash
-cd backend
-
-# Run all tests
-IS_TEST=True pytest
-
-# Run with coverage report
-IS_TEST=True pytest --cov=app --cov-report=term-missing
-
-# Run specific test file
-IS_TEST=True pytest tests/api/test_auth.py -v
-
-# Generate HTML coverage report
-IS_TEST=True pytest --cov=app --cov-report=html
-open htmlcov/index.html
-```

-**Test types:**
-- **Unit tests**: CRUD operations, utilities, business logic
-- **Integration tests**: API endpoints with database
-- **Security tests**: JWT algorithm attacks, session hijacking, privilege escalation
-- **Error handling tests**: Database failures, validation errors

-### Frontend Unit Tests

-**High coverage (~97%)** with Jest and React Testing Library.

-```bash
-cd frontend
-
-# Run unit tests
-npm test
-
-# Run with coverage
-npm run test:coverage
-
-# Watch mode
-npm run test:watch
-```

-**Test types:**
-- Component rendering and interactions
-- Custom hooks behavior
-- State management
-- Utility functions
-- API integration mocks

-### End-to-End Tests

-**Zero flaky tests** with Playwright covering complete user journeys.

-```bash
-cd frontend
-
-# Run E2E tests
-npm run test:e2e
-
-# Run E2E tests in UI mode (recommended for development)
-npm run test:e2e:ui
-
-# Run specific test file
-npx playwright test auth-login.spec.ts
-
-# Generate test report
-npx playwright show-report
-```

-**Test coverage:**
-- Complete authentication flows
-- Navigation and routing
-- Form submissions and validation
-- Settings and profile management
-- Session management
-- Admin panel workflows (in progress)

----

-## 🤖 AI-Friendly Documentation

-This project includes comprehensive documentation designed for AI coding assistants:

-- **[AGENTS.md](./AGENTS.md)** - Framework-agnostic AI assistant context for PragmaStack
-- **[CLAUDE.md](./CLAUDE.md)** - Claude Code-specific guidance

-These files provide AI assistants with the **PragmaStack** architecture, patterns, and best practices.

----

-## 🗄️ Database Migrations

-The template uses Alembic for database migrations:

-```bash
-cd backend
-
-# Generate migration from model changes
-python migrate.py generate "description of changes"
-
-# Apply migrations
-python migrate.py apply
-
-# Or do both in one command
-python migrate.py auto "description"
-
-# View migration history
-python migrate.py list
-
-# Check current revision
-python migrate.py current
-```
+
+```
++====================================================================+
+|                           SYNDARIX CORE                            |
++====================================================================+
+| +-------------------+ +-----------------+ +-----------------+     |
+| | Agent Orchestrator| | Project Manager | | Workflow Engine |     |
+| +-------------------+ +-----------------+ +-----------------+     |
++====================================================================+
+                                 |
+                                 v
++====================================================================+
+|                      MCP ORCHESTRATION LAYER                       |
+|  All integrations via unified MCP servers with project scoping     |
++====================================================================+
+                                 |
+        +------------------------+------------------------+
+        |                        |                        |
+ +----v----+ +----v----+ +----v----+ +----v----+ +----v----+
+ |   LLM   | |   Git   | |Knowledge| |  File   | |  Code   |
+ |Providers| |   MCP   | |Base MCP | |Sys. MCP | |Analysis |
+ +---------+ +---------+ +---------+ +---------+ +---------+
+```

 ---
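The Knowledge Base MCP in the layer diagram above is backed by pgvector, so retrieval is embedding-similarity ranking restricted to one project's documents. A dependency-free sketch of that ranking step using toy 3-dimensional "embeddings"; a real deployment would store vectors in PostgreSQL and rank with pgvector's distance operators instead, and the document texts here are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy project-scoped knowledge base: (project_id, text, embedding).
DOCS = [
    ("proj-123", "auth flow uses JWT dual tokens", [0.9, 0.1, 0.0]),
    ("proj-123", "frontend uses Next.js app router", [0.1, 0.9, 0.0]),
    ("proj-456", "another project's private notes", [0.9, 0.1, 0.0]),
]

def search_knowledge(project_id, query_embedding, top_k=1):
    # Scope first, then rank: other projects' documents never leak in.
    scoped = [(text, emb) for pid, text, emb in DOCS if pid == project_id]
    ranked = sorted(scoped, key=lambda d: cosine(d[1], query_embedding),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

print(search_knowledge("proj-123", [1.0, 0.0, 0.0]))
```

Filtering by `project_id` before ranking is what the "project scoping" in the diagram buys: isolation is structural, not a post-hoc filter on ranked results.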

-## 📖 Documentation
+## Contributing

-### AI Assistant Documentation
+See [CONTRIBUTING.md](./CONTRIBUTING.md) for guidelines.

-- **[AGENTS.md](./AGENTS.md)** - Framework-agnostic AI coding assistant context
-- **[CLAUDE.md](./CLAUDE.md)** - Claude Code-specific guidance and preferences

-### Backend Documentation

-- **[ARCHITECTURE.md](./backend/docs/ARCHITECTURE.md)** - System architecture and design patterns
-- **[CODING_STANDARDS.md](./backend/docs/CODING_STANDARDS.md)** - Code quality standards
-- **[COMMON_PITFALLS.md](./backend/docs/COMMON_PITFALLS.md)** - Common mistakes to avoid
-- **[FEATURE_EXAMPLE.md](./backend/docs/FEATURE_EXAMPLE.md)** - Step-by-step feature guide

-### Frontend Documentation

-- **[PragmaStack Design System](./frontend/docs/design-system/)** - Complete design system guide
-  - Quick start, foundations (colors, typography, spacing)
-  - Component library guide
-  - Layout patterns, spacing philosophy
-  - Forms, accessibility, AI guidelines
-- **[E2E Testing Guide](./frontend/e2e/README.md)** - E2E testing setup and best practices

-### API Documentation

-When the backend is running:
-- **Swagger UI**: http://localhost:8000/docs
-- **ReDoc**: http://localhost:8000/redoc
-- **OpenAPI JSON**: http://localhost:8000/api/v1/openapi.json

 ---

-## 🚢 Deployment
+## License

-### Docker Production Deployment
+MIT License - see [LICENSE](./LICENSE) for details.

-```bash
-# Build and start all services
-docker-compose up -d
-
-# Run migrations
-docker-compose exec backend alembic upgrade head
-
-# View logs
-docker-compose logs -f
-
-# Stop services
-docker-compose down
-```

-### Production Checklist

-- [ ] Change default superuser credentials
-- [ ] Set strong `SECRET_KEY` in backend `.env`
-- [ ] Configure production database (PostgreSQL)
-- [ ] Set `ENVIRONMENT=production` in backend
-- [ ] Configure CORS origins for your domain
-- [ ] Setup SSL/TLS certificates
-- [ ] Configure email service for password resets
-- [ ] Setup monitoring and logging
-- [ ] Configure backup strategy
-- [ ] Review and adjust rate limits
-- [ ] Test security headers

 ---
-## 🛣️ Roadmap & Status
+## Acknowledgments

-### ✅ Completed
+- Built on [PragmaStack](https://gitea.pragmazest.com/cardosofelipe/fast-next-template)
+- Powered by Claude and the Anthropic API

-- [x] Authentication system (JWT, refresh tokens, session management, OAuth)
-- [x] User management (CRUD, profile, password change)
-- [x] Organization system with RBAC (Owner, Admin, Member)
-- [x] Admin panel (users, organizations, sessions, statistics)
-- [x] **Internationalization (i18n)** with next-intl (English + Italian)
-- [x] Backend testing infrastructure (~97% coverage)
-- [x] Frontend unit testing infrastructure (~97% coverage)
-- [x] Frontend E2E testing (Playwright, zero flaky tests)
-- [x] Design system documentation
-- [x] **Marketing landing page** with animated components
-- [x] **`/dev` documentation portal** with live component examples
-- [x] **Toast notifications** system (Sonner)
-- [x] **Charts and visualizations** (Recharts)
-- [x] **Animation system** (Framer Motion)
-- [x] **Markdown rendering** with syntax highlighting
-- [x] **SEO optimization** (sitemap, robots.txt, locale-aware metadata)
-- [x] Database migrations with helper script
-- [x] Docker deployment
-- [x] API documentation (OpenAPI/Swagger)

-### 🚧 In Progress
-- [ ] Email integration (templates ready, SMTP pending)

-### 🔮 Planned
-- [ ] GitHub Actions CI/CD pipelines
-- [ ] Dynamic test coverage badges from CI
-- [ ] E2E test coverage reporting
-- [ ] OAuth token encryption at rest (security hardening)
-- [ ] Additional languages (Spanish, French, German, etc.)
-- [ ] SSO/SAML authentication
-- [ ] Real-time notifications with WebSockets
-- [ ] Webhook system
-- [ ] File upload/storage (S3-compatible)
-- [ ] Audit logging system
-- [ ] API versioning example

----

-## 🤝 Contributing

-Contributions are welcome! Whether you're fixing bugs, improving documentation, or proposing new features, we'd love your help.

-### How to Contribute

-1. **Fork the repository**
-2. **Create a feature branch** (`git checkout -b feature/amazing-feature`)
-3. **Make your changes**
-   - Follow existing code style
-   - Add tests for new features
-   - Update documentation as needed
-4. **Run tests** to ensure everything works
-5. **Commit your changes** (`git commit -m 'Add amazing feature'`)
-6. **Push to your branch** (`git push origin feature/amazing-feature`)
-7. **Open a Pull Request**

-### Development Guidelines

-- Write tests for new features (aim for >90% coverage)
-- Follow the existing architecture patterns
-- Update documentation when adding features
-- Keep commits atomic and well-described
-- Be respectful and constructive in discussions

-### Reporting Issues

-Found a bug? Have a suggestion? [Open an issue](https://github.com/cardosofelipe/pragma-stack/issues)!

-Please include:
-- Clear description of the issue/suggestion
-- Steps to reproduce (for bugs)
-- Expected vs. actual behavior
|
|
||||||
- Environment details (OS, Python/Node version, etc.)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 📄 License
|
|
||||||
|
|
||||||
This project is licensed under the **MIT License** - see the [LICENSE](./LICENSE) file for details.
|
|
||||||
|
|
||||||
**TL;DR**: You can use this template for any purpose, commercial or non-commercial. Attribution is appreciated but not required!
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 🙏 Acknowledgments
|
|
||||||
|
|
||||||
This template is built on the shoulders of giants:
|
|
||||||
|
|
||||||
- [FastAPI](https://fastapi.tiangolo.com/) by Sebastián Ramírez
|
|
||||||
- [Next.js](https://nextjs.org/) by Vercel
|
|
||||||
- [shadcn/ui](https://ui.shadcn.com/) by shadcn
|
|
||||||
- [TanStack Query](https://tanstack.com/query) by Tanner Linsley
|
|
||||||
- [Playwright](https://playwright.dev/) by Microsoft
|
|
||||||
- And countless other open-source projects that make modern development possible
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## 💬 Questions?
|
|
||||||
|
|
||||||
- **Documentation**: Check the `/docs` folders in backend and frontend
|
|
||||||
- **Issues**: [GitHub Issues](https://github.com/cardosofelipe/pragma-stack/issues)
|
|
||||||
- **Discussions**: [GitHub Discussions](https://github.com/cardosofelipe/pragma-stack/discussions)
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
## ⭐ Star This Repo
|
|
||||||
|
|
||||||
If this template saves you time, consider giving it a star! It helps others discover the project and motivates continued development.
|
|
||||||
|
|
||||||
**Happy coding! 🚀**
|
|
||||||
|
|
||||||
---
|
|
||||||
|
|
||||||
<div align="center">
|
|
||||||
Made with ❤️ by a developer who got tired of rebuilding the same boilerplate
|
|
||||||
</div>
|
|
||||||
|
@@ -1,6 +1,6 @@
-# PragmaStack Backend API
+# Syndarix Backend API
 
-> The pragmatic, production-ready FastAPI backend for PragmaStack.
+> The pragmatic, production-ready FastAPI backend for Syndarix.
 
 ## Overview
@@ -5,7 +5,7 @@ from pydantic_settings import BaseSettings
 
 
 class Settings(BaseSettings):
-    PROJECT_NAME: str = "PragmaStack"
+    PROJECT_NAME: str = "Syndarix"
     VERSION: str = "1.0.0"
     API_V1_STR: str = "/api/v1"
docs/adrs/.gitkeep — new file (0 lines)

docs/adrs/ADR-001-mcp-integration-architecture.md — new file (134 lines)
@@ -0,0 +1,134 @@
# ADR-001: MCP Integration Architecture

**Status:** Accepted
**Date:** 2025-12-29
**Deciders:** Architecture Team
**Related Spikes:** SPIKE-001

---

## Context

Syndarix requires integration with multiple external services (LLM providers, Git, issue trackers, file systems, CI/CD). The Model Context Protocol (MCP) was identified as the standard for tool integration in AI applications. We need to decide on:

1. The MCP framework to use
2. Server deployment pattern (singleton vs per-project)
3. Scoping mechanism for multi-project/multi-agent access

## Decision Drivers

- **Simplicity:** Minimize operational complexity
- **Resource Efficiency:** Avoid spawning redundant processes
- **Consistency:** Unified interface across all integrations
- **Scalability:** Support 10+ concurrent projects
- **Maintainability:** Easy to add new MCP servers

## Considered Options

### Option 1: Per-Project MCP Servers

Spawn dedicated MCP server instances for each project.

**Pros:**
- Complete isolation between projects
- Simple access control (project owns server)

**Cons:**
- Resource heavy (7 servers × N projects)
- Complex orchestration
- Difficult to share cross-project resources

### Option 2: Unified Singleton MCP Servers (Selected)

Single instance of each MCP server type, with explicit project/agent scoping.

**Pros:**
- Resource efficient (7 total servers)
- Simpler deployment
- Enables cross-project learning (if desired)
- Consistent management

**Cons:**
- Requires explicit scoping in all tools
- Shared state requires careful design

### Option 3: Hybrid (MCP Proxy)

Single proxy that routes to per-project backends.

**Pros:**
- Balance of isolation and efficiency

**Cons:**
- Added complexity
- Routing overhead

## Decision

**Adopt Option 2: Unified Singleton MCP Servers with explicit scoping.**

All MCP servers are deployed as singletons. Every tool accepts `project_id` and `agent_id` parameters for:
- Access control validation
- Audit logging
- Context filtering

## Implementation

### MCP Server Registry

| Server | Port | Purpose |
|--------|------|---------|
| LLM Gateway | 9001 | Route LLM requests with failover |
| Git MCP | 9002 | Git operations across providers |
| Knowledge Base MCP | 9003 | RAG and document search |
| Issues MCP | 9004 | Issue tracking operations |
| File System MCP | 9005 | Workspace file operations |
| Code Analysis MCP | 9006 | Static analysis, linting |
| CI/CD MCP | 9007 | Pipeline operations |

### Framework Selection

Use **FastMCP 2.0** for all MCP server implementations:
- Decorator-based tool registration
- Built-in async support
- Compatible with SSE transport
- Type-safe with Pydantic

### Tool Signature Pattern

```python
@mcp.tool()
def tool_name(
    project_id: str,  # Required: project scope
    agent_id: str,    # Required: calling agent
    # ... tool-specific params
) -> Result:
    validate_access(agent_id, project_id)
    log_tool_usage(agent_id, project_id, "tool_name")
    # ... implementation
```
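The `validate_access` and `log_tool_usage` helpers in the pattern above are referenced but not defined in this ADR. A minimal in-memory sketch of their contract (hypothetical names and data structures; in practice the assignment table and audit log would live in PostgreSQL):

```python
# Sketch of the scoping helpers every MCP tool calls first.
# The dict/list stand in for database tables, purely for illustration.

AGENT_ASSIGNMENTS: dict[str, set[str]] = {
    "agent-po-1": {"proj-alpha"},
    "agent-eng-1": {"proj-alpha", "proj-beta"},
}
AUDIT_LOG: list[dict] = []


def validate_access(agent_id: str, project_id: str) -> None:
    """Raise if the agent instance is not assigned to the project."""
    if project_id not in AGENT_ASSIGNMENTS.get(agent_id, set()):
        raise PermissionError(f"{agent_id} is not assigned to {project_id}")


def log_tool_usage(agent_id: str, project_id: str, tool_name: str) -> None:
    """Append an audit record; every scoped tool call is logged."""
    AUDIT_LOG.append(
        {"agent_id": agent_id, "project_id": project_id, "tool": tool_name}
    )
```

Because both helpers take the same `(agent_id, project_id)` pair that every tool signature requires, access control and auditing stay uniform across all seven singleton servers.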

## Consequences

### Positive
- Single deployment per MCP type simplifies operations
- Consistent interface across all tools
- Easy to add monitoring/logging centrally
- Cross-project analytics possible

### Negative
- All tools must include scoping parameters
- Shared state requires careful design
- Single point of failure per MCP type (mitigated by multiple instances)

### Neutral
- Requires MCP client manager in FastAPI backend
- Authentication handled internally (service tokens for v1)

## Compliance

This decision aligns with:
- FR-802: MCP-first architecture requirement
- NFR-201: Horizontal scalability requirement
- NFR-602: Centralized logging requirement

---

*This ADR supersedes any previous decisions regarding MCP architecture.*
docs/adrs/ADR-002-realtime-communication.md — new file (160 lines)
@@ -0,0 +1,160 @@
# ADR-002: Real-time Communication Architecture

**Status:** Accepted
**Date:** 2025-12-29
**Deciders:** Architecture Team
**Related Spikes:** SPIKE-003

---

## Context

Syndarix requires real-time communication for:
- Agent activity streams
- Project progress updates
- Build/pipeline status
- Client approval requests
- Issue change notifications
- Interactive chat with agents

We need to decide between WebSocket and Server-Sent Events (SSE) for real-time data delivery.

## Decision Drivers

- **Simplicity:** Minimize implementation complexity
- **Reliability:** Built-in reconnection handling
- **Scalability:** Support 200+ concurrent connections
- **Compatibility:** Work through proxies and load balancers
- **Use Case Fit:** Match communication patterns

## Considered Options

### Option 1: WebSocket Only

Use WebSocket for all real-time communication.

**Pros:**
- Bidirectional communication
- Single protocol to manage
- Well-supported in FastAPI

**Cons:**
- Manual reconnection logic required
- More complex through proxies
- Overkill for server-to-client streams

### Option 2: SSE Only

Use Server-Sent Events for all real-time communication.

**Pros:**
- Built-in automatic reconnection
- Native HTTP (proxy-friendly)
- Simpler implementation

**Cons:**
- Unidirectional only
- Browser connection limits per domain

### Option 3: SSE Primary + WebSocket for Chat (Selected)

Use SSE for server-to-client events, WebSocket for bidirectional chat.

**Pros:**
- Best tool for each use case
- SSE simplicity for 90% of needs
- WebSocket only where truly needed

**Cons:**
- Two protocols to manage

## Decision

**Adopt Option 3: SSE as primary transport, WebSocket for interactive chat.**

### SSE Use Cases (90%)
- Agent activity streams
- Project progress updates
- Build/pipeline status
- Approval request notifications
- Issue change notifications

### WebSocket Use Cases (10%)
- Interactive chat with agents
- Real-time debugging sessions
- Future collaboration features

## Implementation

### Event Bus with Redis Pub/Sub

```
FastAPI Backend ──publish──> Redis Pub/Sub ──subscribe──> SSE Endpoints
                                  │
                                  └──> Other Backend Instances
```
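The fan-out above can be approximated in-process for illustration. This is a hedched single-process sketch, not the production bus: `InMemoryEventBus` and `demo` are hypothetical, and a real deployment uses Redis channels so every backend instance sees each publish:

```python
import asyncio
from collections import defaultdict


class InMemoryEventBus:
    """Single-process stand-in for the Redis pub/sub event bus."""

    def __init__(self) -> None:
        # One list of subscriber queues per channel name.
        self._channels: dict[str, list[asyncio.Queue]] = defaultdict(list)

    def publish(self, channel: str, event: dict) -> None:
        # Fan the event out to every subscriber of this channel.
        for queue in self._channels[channel]:
            queue.put_nowait(event)

    def subscribe(self, channel: str) -> asyncio.Queue:
        queue: asyncio.Queue = asyncio.Queue()
        self._channels[channel].append(queue)
        return queue


async def demo() -> dict:
    bus = InMemoryEventBus()
    sub = bus.subscribe("project:proj-alpha")
    bus.publish("project:proj-alpha", {"type": "agent_started", "agent": "eng-1"})
    return await sub.get()
```

Channel names follow the `project:{project_id}` convention used by the SSE endpoint pattern below, so each browser connection only receives events for its own project.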

### SSE Endpoint Pattern

```python
import asyncio

from fastapi import Request
from fastapi.responses import StreamingResponse

# `router` and `event_bus` are module-level objects (an APIRouter and the
# Redis-backed event bus) assumed by this pattern.


@router.get("/projects/{project_id}/events")
async def project_events(project_id: str, request: Request):
    async def event_generator():
        subscriber = await event_bus.subscribe(f"project:{project_id}")
        try:
            while not await request.is_disconnected():
                try:
                    event = await asyncio.wait_for(
                        subscriber.get_event(), timeout=30.0
                    )
                except asyncio.TimeoutError:
                    # Nothing for 30 s: emit an SSE comment as a keep-alive
                    # instead of letting TimeoutError kill the stream.
                    yield ": keep-alive\n\n"
                    continue
                yield f"event: {event.type}\ndata: {event.json()}\n\n"
        finally:
            await subscriber.unsubscribe()

    return StreamingResponse(
        event_generator(),
        media_type="text/event-stream",
    )
```

### Event Types

| Category | Event Types |
|----------|-------------|
| Agent | `agent_started`, `agent_activity`, `agent_completed`, `agent_error` |
| Project | `issue_created`, `issue_updated`, `issue_closed` |
| Git | `branch_created`, `commit_pushed`, `pr_created`, `pr_merged` |
| Workflow | `approval_required`, `sprint_started`, `sprint_completed` |
| Pipeline | `pipeline_started`, `pipeline_completed`, `pipeline_failed` |

### Client Implementation

- Single SSE connection per project
- Event multiplexing through event types
- Exponential backoff on reconnection
- Native `EventSource` API with automatic reconnect
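The native `EventSource` API retries at a fixed (or server-suggested) interval, so the exponential backoff called for above needs a custom schedule in the client. A sketch of one possible policy — full jitter, capped at 30 s; the parameters are illustrative, not specified by this ADR:

```python
import random


def backoff_delays(
    base: float = 1.0, cap: float = 30.0, attempts: int = 6
) -> list[float]:
    """Exponential backoff with full jitter for SSE reconnection.

    The delay before attempt n is drawn uniformly from
    [0, min(cap, base * 2**n)], which avoids thundering-herd
    reconnects after a backend restart.
    """
    delays = []
    for n in range(attempts):
        ceiling = min(cap, base * (2**n))
        delays.append(random.uniform(0, ceiling))
    return delays
```

The same schedule applies to the WebSocket chat connection, which (unlike SSE) has no built-in reconnect at all.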

## Consequences

### Positive
- Simpler implementation for server-to-client streams
- Automatic reconnection reduces client complexity
- Works through all HTTP proxies
- Reduced server resource usage vs WebSocket

### Negative
- Two protocols to maintain
- WebSocket requires manual reconnect logic
- SSE limited to ~6 connections per domain (HTTP/1.1)

### Mitigation
- Use HTTP/2 where possible (higher connection limits)
- Multiplex all project events on single connection
- WebSocket only for interactive chat sessions

## Compliance

This decision aligns with:
- FR-105: Real-time agent activity monitoring
- NFR-102: 200+ concurrent connections requirement
- NFR-501: Responsive UI updates

---

*This ADR supersedes any previous decisions regarding real-time communication.*
docs/adrs/ADR-003-background-task-architecture.md — new file (179 lines)
@@ -0,0 +1,179 @@
# ADR-003: Background Task Architecture

**Status:** Accepted
**Date:** 2025-12-29
**Deciders:** Architecture Team
**Related Spikes:** SPIKE-004

---

## Context

Syndarix requires background task processing for:
- Agent actions (LLM calls, code generation)
- Git operations (clone, commit, push, PR creation)
- External synchronization (issue sync with Gitea/GitHub/GitLab)
- CI/CD pipeline triggers
- Long-running workflows (sprints, story implementation)

These tasks are too slow for synchronous API responses and need proper queuing, retry, and monitoring.

## Decision Drivers

- **Reliability:** Tasks must complete even if workers restart
- **Visibility:** Progress tracking for long-running operations
- **Scalability:** Handle concurrent agent operations
- **Rate Limiting:** Respect LLM API rate limits
- **Async Compatibility:** Work with async FastAPI

## Considered Options

### Option 1: FastAPI BackgroundTasks

Use FastAPI's built-in background tasks.

**Pros:**
- Simple, no additional infrastructure
- Direct async integration

**Cons:**
- No persistence (lost on restart)
- No retry mechanism
- No distributed workers

### Option 2: Celery + Redis (Selected)

Use Celery for task queue with Redis as broker/backend.

**Pros:**
- Mature, battle-tested
- Persistent task queue
- Built-in retry with backoff
- Distributed workers
- Task chaining and workflows
- Monitoring with Flower

**Cons:**
- Additional infrastructure
- Sync-only task execution (bridge needed for async)

### Option 3: Dramatiq + Redis

Use Dramatiq as a simpler Celery alternative.

**Pros:**
- Simpler API than Celery
- Good async support

**Cons:**
- Less mature ecosystem
- Fewer monitoring tools

### Option 4: ARQ (Async Redis Queue)

Use ARQ for native async task processing.

**Pros:**
- Native async
- Simple API

**Cons:**
- Less feature-rich
- Smaller community

## Decision

**Adopt Option 2: Celery + Redis.**

Celery provides the reliability, monitoring, and ecosystem maturity needed for production workloads. Redis serves as both broker and result backend.

## Implementation

### Queue Architecture

```
┌─────────────────────────────────────────────────┐
│          Redis (Broker + Backend)               │
├─────────────┬─────────────┬─────────────────────┤
│ agent_queue │  git_queue  │     sync_queue      │
│ (prefetch=1)│ (prefetch=4)│    (prefetch=4)     │
└──────┬──────┴──────┬──────┴──────────┬──────────┘
       │             │                 │
       ▼             ▼                 ▼
  ┌─────────┐   ┌─────────┐      ┌─────────┐
  │  Agent  │   │   Git   │      │  Sync   │
  │ Workers │   │ Workers │      │ Workers │
  └─────────┘   └─────────┘      └─────────┘
```

### Queue Configuration

| Queue | Prefetch | Concurrency | Purpose |
|-------|----------|-------------|---------|
| `agent_queue` | 1 | 4 | LLM-based tasks (rate limited) |
| `git_queue` | 4 | 8 | Git operations |
| `sync_queue` | 4 | 4 | External sync |
| `cicd_queue` | 4 | 4 | Pipeline operations |
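In Celery, the queue split above is typically expressed as routing configuration. A hypothetical fragment — the task module names (`tasks.agent.*` etc.) are assumptions for illustration; only the queue names and prefetch values come from this ADR:

```python
# Hypothetical Celery routing matching the queue table above.
task_routes = {
    "tasks.agent.*": {"queue": "agent_queue"},
    "tasks.git.*": {"queue": "git_queue"},
    "tasks.sync.*": {"queue": "sync_queue"},
    "tasks.cicd.*": {"queue": "cicd_queue"},
}

# Per-queue prefetch from the table; applied when launching each worker
# pool, e.g. `celery -A app worker -Q agent_queue --prefetch-multiplier 1`.
prefetch = {"agent_queue": 1, "git_queue": 4, "sync_queue": 4, "cicd_queue": 4}
```

Keeping `agent_queue` at prefetch 1 means each worker reserves a single LLM task at a time, which is what makes rate limiting of provider APIs tractable.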

### Task Patterns

**Progress Reporting:**
```python
@celery_app.task(bind=True)
def implement_story(self, story_id: str, agent_id: str, project_id: str):
    for i, step in enumerate(steps):
        self.update_state(
            state="PROGRESS",
            meta={"current": i + 1, "total": len(steps)}
        )
        # Publish SSE event for real-time UI update
        event_bus.publish(f"project:{project_id}", {
            "type": "agent_progress",
            "step": i + 1,
            "total": len(steps)
        })
        execute_step(step)
```

**Task Chaining:**
```python
workflow = chain(
    analyze_requirements.s(story_id),
    design_solution.s(),
    implement_code.s(),
    run_tests.s(),
    create_pr.s()
)
```

### Monitoring

- **Flower:** Web UI for task monitoring (port 5555)
- **Prometheus:** Metrics export for alerting
- **Dead Letter Queue:** Failed tasks for investigation

## Consequences

### Positive
- Reliable task execution with persistence
- Automatic retry with exponential backoff
- Progress tracking for long operations
- Distributed workers for scalability
- Rich monitoring and debugging tools

### Negative
- Additional infrastructure (Redis, workers)
- Celery is synchronous (event-loop bridge for async calls)
- Learning curve for task patterns

### Mitigation
- Use existing Redis instance (already needed for SSE)
- Wrap async calls with `asyncio.run()` or `sync_to_async`
- Document common task patterns
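The `asyncio.run()` bridge mentioned in the mitigation list can be sketched as follows; `run_agent_step` is a stand-in coroutine for the real async service calls (LLM requests, MCP tools), not an actual Syndarix function:

```python
import asyncio


async def run_agent_step(prompt: str) -> str:
    """Stand-in for an async service call (e.g. an LLM request)."""
    await asyncio.sleep(0)  # yield control, as real I/O would
    return f"handled: {prompt}"


def agent_task(prompt: str) -> str:
    """Body of a synchronous Celery task bridging into async code.

    asyncio.run() creates a fresh event loop per invocation, which is
    safe inside a Celery worker process because no loop is running there.
    """
    return asyncio.run(run_agent_step(prompt))
```

The key constraint is that the bridge lives in the worker, not in FastAPI: the web process already runs an event loop, so it enqueues tasks instead of calling `asyncio.run()` itself.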

## Compliance

This decision aligns with:
- FR-304: Long-running implementation workflow
- NFR-102: 500+ background jobs per minute
- NFR-402: Task reliability and fault tolerance

---

*This ADR supersedes any previous decisions regarding background task processing.*
docs/adrs/ADR-004-llm-provider-abstraction.md — new file (189 lines)
@@ -0,0 +1,189 @@
# ADR-004: LLM Provider Abstraction

**Status:** Accepted
**Date:** 2025-12-29
**Deciders:** Architecture Team
**Related Spikes:** SPIKE-005

---

## Context

Syndarix agents require access to large language models (LLMs) from multiple providers:
- **Anthropic** (Claude) - Primary provider
- **OpenAI** (GPT-4) - Fallback provider
- **Local models** (Ollama/Llama) - Cost optimization, privacy

We need a unified abstraction layer that provides:
- Consistent API across providers
- Automatic failover on errors
- Usage tracking and cost management
- Rate limiting compliance

## Decision Drivers

- **Reliability:** Automatic failover on provider outages
- **Cost Control:** Track and limit API spending
- **Flexibility:** Easy to add/swap providers
- **Consistency:** Single interface for all agents
- **Async Support:** Compatible with async FastAPI

## Considered Options

### Option 1: Direct Provider SDKs

Use Anthropic and OpenAI SDKs directly with custom abstraction.

**Pros:**
- Full control over implementation
- No external dependencies

**Cons:**
- Significant development effort
- Must maintain failover logic
- Must track token costs manually

### Option 2: LiteLLM (Selected)

Use LiteLLM as unified abstraction layer.

**Pros:**
- Unified API for 100+ providers
- Built-in failover and routing
- Automatic token counting
- Cost tracking built-in
- Redis caching support
- Active community

**Cons:**
- External dependency
- May lag behind provider SDK updates

### Option 3: LangChain

Use LangChain's LLM abstraction.

**Pros:**
- Large ecosystem
- Many integrations

**Cons:**
- Heavy dependency
- Overkill for just LLM abstraction
- Complexity overhead

## Decision

**Adopt Option 2: LiteLLM for unified LLM provider abstraction.**

LiteLLM provides the reliability, monitoring, and multi-provider support needed with minimal overhead.

## Implementation

### Model Groups

| Group Name | Use Case | Primary Model | Fallback |
|------------|----------|---------------|----------|
| `high-reasoning` | Complex analysis, architecture | Claude 3.5 Sonnet | GPT-4 Turbo |
| `fast-response` | Quick tasks, simple queries | Claude 3 Haiku | GPT-4o Mini |
| `cost-optimized` | High-volume, non-critical | Local Llama 3 | Claude 3 Haiku |

### Failover Chain

```
Claude 3.5 Sonnet (Anthropic)
        │
        ▼ (on failure)
GPT-4 Turbo (OpenAI)
        │
        ▼ (on failure)
Llama 3 (Ollama/Local)
        │
        ▼ (on failure)
Error with retry
```

### LLM Gateway Service

```python
class LLMGateway:
    def __init__(self):
        self.router = Router(
            model_list=model_list,
            fallbacks=[
                {"high-reasoning": ["high-reasoning", "local-fallback"]},
            ],
            routing_strategy="latency-based-routing",
            num_retries=3,
        )

    async def complete(
        self,
        agent_id: str,
        project_id: str,
        messages: list[dict],
        model_preference: str = "high-reasoning",
    ) -> dict:
        response = await self.router.acompletion(
            model=model_preference,
            messages=messages,
        )
        await self._track_usage(agent_id, project_id, response)
        return response
```

### Cost Tracking

| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|-------|----------------------|------------------------|
| Claude 3.5 Sonnet | $3.00 | $15.00 |
| Claude 3 Haiku | $0.25 | $1.25 |
| GPT-4 Turbo | $10.00 | $30.00 |
| GPT-4o Mini | $0.15 | $0.60 |
| Ollama (local) | $0.00 | $0.00 |
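The arithmetic implied by the table is straightforward; a standalone sketch (the model keys are illustrative shorthand, and LiteLLM's built-in cost tracking would normally do this for `_track_usage`):

```python
# Per-million-token prices (input, output) in USD, from the table above.
PRICES = {
    "claude-3-5-sonnet": (3.00, 15.00),
    "claude-3-haiku": (0.25, 1.25),
    "gpt-4-turbo": (10.00, 30.00),
    "gpt-4o-mini": (0.15, 0.60),
    "ollama-local": (0.00, 0.00),
}


def request_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of a single LLM call, computed from token counts."""
    in_price, out_price = PRICES[model]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
```

Summing this per `(agent_id, project_id)` is what enables the per-project budgets the decision drivers call for.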

### Agent Type Mapping

| Agent Type | Model Preference | Rationale |
|------------|------------------|-----------|
| Product Owner | high-reasoning | Complex requirements analysis |
| Software Architect | high-reasoning | Architecture decisions |
| Software Engineer | high-reasoning | Code generation |
| QA Engineer | fast-response | Test case generation |
| DevOps Engineer | fast-response | Config generation |
| Project Manager | fast-response | Status updates |

### Caching Strategy

- **Redis-backed cache** for repeated queries
- **TTL:** 1 hour for general queries
- **Skip cache:** For context-dependent generation
- **Cache key:** Hash of (model, messages, temperature)
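The cache-key scheme in the last bullet can be sketched as follows (an illustration of the keying, not the production code — the actual cache lives in Redis):

```python
import hashlib
import json


def llm_cache_key(model: str, messages: list[dict], temperature: float) -> str:
    """Deterministic cache key over (model, messages, temperature).

    json.dumps with sort_keys gives a canonical serialization (keys are
    sorted recursively), so semantically identical requests hash to the
    same key regardless of dict insertion order.
    """
    payload = json.dumps(
        {"model": model, "messages": messages, "temperature": temperature},
        sort_keys=True,
        separators=(",", ":"),
    )
    return "llm:" + hashlib.sha256(payload.encode("utf-8")).hexdigest()
```

Because temperature is part of the key, deterministic (`temperature=0`) and sampled generations never share cache entries.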

## Consequences

### Positive
- Single interface for all LLM operations
- Automatic failover improves reliability
- Built-in cost tracking and budgeting
- Easy to add new providers
- Caching reduces API costs

### Negative
- Dependency on LiteLLM library
- May lag behind provider SDK features
- Additional abstraction layer

### Mitigation
- Pin LiteLLM version, test before upgrades
- Direct SDK access available if needed
- Monitor LiteLLM updates for breaking changes

## Compliance

This decision aligns with:
- FR-101: Agent type model configuration
- NFR-103: Agent response time targets
- NFR-402: Failover requirements
- TR-001: LLM API unavailability mitigation

---

*This ADR supersedes any previous decisions regarding LLM integration.*
docs/adrs/ADR-005-tech-stack-selection.md — new file (156 lines)
@@ -0,0 +1,156 @@
|
|||||||
|
# ADR-005: Technology Stack Selection
|
||||||
|
|
||||||
|
**Status:** Accepted
|
||||||
|
**Date:** 2025-12-29
|
||||||
|
**Deciders:** Architecture Team
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Context
|
||||||
|
|
||||||
|
Syndarix needs a robust, modern technology stack that can support:
|
||||||
|
- Multi-agent orchestration with real-time communication
|
||||||
|
- Full-stack web application with API backend
|
||||||
|
- Background task processing for long-running operations
|
||||||
|
- Vector search for RAG (Retrieval-Augmented Generation)
|
||||||
|
- Multiple external integrations via MCP
|
||||||
|
|
||||||
|
The decision was made to build upon **PragmaStack** as the foundation, extending it with Syndarix-specific components.
|
||||||
|
|
||||||
|
## Decision Drivers
|
||||||
|
|
||||||
|
- **Productivity:** Rapid development with modern frameworks
|
||||||
|
- **Type Safety:** Minimize runtime errors
|
||||||
|
- **Async Performance:** Handle concurrent agent operations
|
||||||
|
- **Ecosystem:** Rich library support
|
||||||
|
- **Familiarity:** Team expertise with selected technologies
|
||||||
|
- **Production-Ready:** Proven technologies for production workloads
|
||||||
|
|
||||||
|
## Decision
|
||||||
|
|
||||||
|
**Adopt PragmaStack as foundation with Syndarix-specific extensions.**
|
||||||
|
|
||||||
|
### Core Stack (from PragmaStack)
|
||||||
|
|
||||||
|
| Layer | Technology | Version | Rationale |
|
||||||
|
|-------|------------|---------|-----------|
|
||||||
|
| **Backend** | FastAPI | 0.115+ | Async, OpenAPI, type hints |
|
||||||
|
| **Backend Language** | Python | 3.11+ | Type hints, async/await, ecosystem |
|
||||||
|
| **Frontend** | Next.js | 16 | React 19, server components, App Router |
|
||||||
|
| **Frontend Language** | TypeScript | 5.0+ | Type safety, IDE support |
|
||||||
|
| **Database** | PostgreSQL | 15+ | Robust, extensible, pgvector |
|
||||||
|
| **ORM** | SQLAlchemy | 2.0+ | Async support, type hints |
|
||||||
|
| **Validation** | Pydantic | 2.0+ | Data validation, serialization |
|
||||||
|
| **State Management** | Zustand | 4.0+ | Simple, performant |
|
||||||
|
| **Data Fetching** | TanStack Query | 5.0+ | Caching, invalidation |
|
||||||
|
| **UI Components** | shadcn/ui | Latest | Accessible, customizable |
|
||||||
|
| **CSS** | Tailwind CSS | 4.0+ | Utility-first, fast styling |
|
||||||
|
| **Auth** | JWT | - | Dual-token (access + refresh) |
|
||||||
|
|
||||||
|
### Syndarix Extensions

| Component | Technology | Version | Purpose |
|-----------|------------|---------|---------|
| **Task Queue** | Celery | 5.3+ | Background job processing |
| **Message Broker** | Redis | 7.0+ | Celery broker, caching, pub/sub |
| **Vector Store** | pgvector | Latest | Embeddings for RAG |
| **MCP Framework** | FastMCP | 2.0+ | MCP server development |
| **LLM Abstraction** | LiteLLM | Latest | Multi-provider LLM access |
| **Real-time** | SSE + WebSocket | - | Event streaming, chat |

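The LLM abstraction layer exists mainly for primary-to-failover model routing across providers. The pattern can be sketched without any particular library — the function and model names below are illustrative assumptions, not the project's actual gateway API:

```python
import asyncio
from typing import Awaitable, Callable

async def complete_with_failover(
    prompt: str,
    providers: list[tuple[str, Callable[[str], Awaitable[str]]]],
) -> tuple[str, str]:
    """Try each (model_name, completion_call) pair in order;
    return the first successful (model_name, text) result."""
    errors: list[str] = []
    for model, call in providers:
        try:
            return model, await call(prompt)
        except Exception as exc:  # a real gateway would filter retryable errors
            errors.append(f"{model}: {exc}")
    raise RuntimeError("all providers failed: " + "; ".join(errors))
```

In practice a library such as LiteLLM handles this routing; the sketch only shows the ordering contract an agent's `base_model`/`failover_model` pair relies on.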
### Testing Stack

| Type | Technology | Version | Purpose |
|------|------------|---------|---------|
| **Backend Unit** | pytest | 8.0+ | Python testing |
| **Backend Async** | pytest-asyncio | - | Async test support |
| **Backend Coverage** | coverage.py | - | Code coverage |
| **Frontend Unit** | Jest | 29+ | React testing |
| **Frontend Components** | React Testing Library | - | Component testing |
| **E2E** | Playwright | 1.40+ | Browser automation |

### DevOps Stack

| Component | Technology | Version | Purpose |
|-----------|------------|---------|---------|
| **Containerization** | Docker | 24+ | Application packaging |
| **Orchestration** | Docker Compose | - | Local development |
| **CI/CD** | Gitea Actions | - | Automated pipelines |
| **Database Migrations** | Alembic | - | Schema versioning |

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                      Frontend (Next.js 16)                      │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │   Pages     │  │ Components  │  │   Stores    │              │
│  │ (App Router)│  │ (shadcn/ui) │  │  (Zustand)  │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
└────────────────────────────┬────────────────────────────────────┘
                             │ REST + SSE + WebSocket
                             ▼
┌─────────────────────────────────────────────────────────────────┐
│                     Backend (FastAPI 0.115+)                    │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │    API      │  │  Services   │  │    CRUD     │              │
│  │   Routes    │  │   Layer     │  │   Layer     │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │ LLM Gateway │  │ MCP Client  │  │  Event Bus  │              │
│  │  (LiteLLM)  │  │   Manager   │  │   (Redis)   │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
└────────────────────────────┬────────────────────────────────────┘
                             │
        ┌────────────────────┼────────────────────┐
        ▼                    ▼                    ▼
┌───────────────┐  ┌───────────────┐  ┌───────────────────────────┐
│  PostgreSQL   │  │     Redis     │  │        MCP Servers        │
│  + pgvector   │  │ (Cache/Queue) │  │ (LLM, Git, KB, Issues...) │
└───────────────┘  └───────────────┘  └───────────────────────────┘
        │
        ▼
┌───────────────┐
│    Celery     │
│    Workers    │
└───────────────┘
```

## Consequences

### Positive

- Proven, production-ready stack
- Strong typing throughout (Python + TypeScript)
- Excellent async performance
- Rich ecosystem for extensions
- Team familiarity reduces learning curve

### Negative

- Python GIL limits CPU-bound concurrency (mitigated by Celery)
- Multiple languages (Python + TypeScript) to maintain
- PostgreSQL requires management (vs serverless options)

### Neutral

- PragmaStack provides a solid foundation but may include unused features
- Stack is opinionated, limiting some technology choices

## Version Pinning Strategy

| Component | Strategy | Rationale |
|-----------|----------|-----------|
| Python | 3.11+ (specific minor) | Stability |
| Node.js | 20 LTS | Long-term support |
| FastAPI | 0.115+ | Latest stable |
| Next.js | 16 | Current major |
| PostgreSQL | 15+ | Required for features |

## Compliance

This decision aligns with:

- NFR-601: Code quality standards (TypeScript, type hints)
- NFR-603: Docker containerization requirement
- TC-001 through TC-006: Technical constraints

---

*This ADR establishes the foundational technology choices for Syndarix.*

260
docs/adrs/ADR-006-agent-orchestration.md
Normal file
@@ -0,0 +1,260 @@
# ADR-006: Agent Orchestration Architecture

**Status:** Accepted
**Date:** 2025-12-29
**Deciders:** Architecture Team
**Related Spikes:** SPIKE-002

---

## Context

Syndarix requires an agent orchestration system that can:

- Define reusable agent types with specific capabilities
- Spawn multiple instances of the same type with unique identities
- Manage agent state, context, and conversation history
- Route messages between agents
- Handle agent failover and recovery
- Track resource usage per agent

## Decision Drivers

- **Flexibility:** Support diverse agent roles and capabilities
- **Scalability:** Handle 50+ concurrent agent instances
- **Isolation:** Each instance maintains separate state
- **Observability:** Full visibility into agent activities
- **Reliability:** Graceful handling of failures

## Decision

**Adopt a Type-Instance pattern** where:

- **Agent Types** define templates (model, expertise, personality)
- **Agent Instances** are spawned from types with unique identities
- **Agent Orchestrator** manages lifecycle and communication

## Architecture

### Agent Type Definition

```python
class AgentType(Base):
    id = Column(UUID, primary_key=True)
    name = Column(String(50), unique=True)     # "Software Engineer"
    role = Column(Enum(AgentRole))             # ENGINEER
    base_model = Column(String(100))           # "claude-3-5-sonnet-20241022"
    failover_model = Column(String(100))       # "gpt-4-turbo"
    expertise = Column(ARRAY(String))          # ["python", "fastapi", "testing"]
    personality = Column(JSONB)                # {"style": "detailed", "tone": "professional"}
    system_prompt = Column(Text)               # Base system prompt template
    capabilities = Column(ARRAY(String))       # ["code_generation", "code_review"]
    is_active = Column(Boolean, default=True)
```

### Agent Instance Definition

```python
class AgentInstance(Base):
    id = Column(UUID, primary_key=True)
    name = Column(String(50))                  # "Dave"
    agent_type_id = Column(UUID, ForeignKey("agent_types.id"))  # FK targets assumed from model names
    project_id = Column(UUID, ForeignKey("projects.id"))
    status = Column(Enum(InstanceStatus))      # ACTIVE, IDLE, TERMINATED
    context = Column(JSONB)                    # Current working context
    conversation_id = Column(UUID)             # Active conversation
    rag_collection_id = Column(String)         # Domain knowledge collection
    token_usage = Column(JSONB)                # {"prompt": 0, "completion": 0}
    last_active_at = Column(DateTime)
    created_at = Column(DateTime)
    terminated_at = Column(DateTime)
```

### Orchestrator Service

```python
class AgentOrchestrator:
    """Central service for agent lifecycle management."""

    async def spawn_agent(
        self,
        agent_type_id: UUID,
        project_id: UUID,
        name: str,
        domain_knowledge: list[str] | None = None,
    ) -> AgentInstance:
        """Spawn a new agent instance from a type definition."""
        agent_type = await self.get_agent_type(agent_type_id)

        instance = AgentInstance(
            name=name,
            agent_type_id=agent_type_id,
            project_id=project_id,
            status=InstanceStatus.ACTIVE,
            context={"initialized_at": datetime.utcnow().isoformat()},
        )

        # Initialize RAG collection if domain knowledge provided
        if domain_knowledge:
            instance.rag_collection_id = await self._init_rag_collection(
                instance.id, domain_knowledge
            )

        self.db.add(instance)  # Session.add() is synchronous, even on AsyncSession
        await self.db.commit()

        # Publish spawn event
        await self.event_bus.publish(f"project:{project_id}", {
            "type": "agent_spawned",
            "agent_id": str(instance.id),
            "name": name,
            "role": agent_type.role.value,
        })

        return instance

    async def terminate_agent(self, instance_id: UUID) -> None:
        """Terminate an agent instance and release resources."""
        instance = await self.get_instance(instance_id)
        instance.status = InstanceStatus.TERMINATED
        instance.terminated_at = datetime.utcnow()

        # Clean up RAG collection
        if instance.rag_collection_id:
            await self._cleanup_rag_collection(instance.rag_collection_id)

        await self.db.commit()

    async def send_message(
        self,
        from_id: UUID,
        to_id: UUID,
        message: AgentMessage,
    ) -> None:
        """Route a message from one agent to another."""
        # Validate both agents exist and are active
        sender = await self.get_instance(from_id)
        recipient = await self.get_instance(to_id)

        # Persist message
        await self.message_store.save(message)

        # If recipient is idle, trigger action
        if recipient.status == InstanceStatus.IDLE:
            await self._trigger_agent_action(recipient.id, message)

        # Publish for real-time tracking
        await self.event_bus.publish(f"project:{sender.project_id}", {
            "type": "agent_message",
            "from": str(from_id),
            "to": str(to_id),
            "preview": message.content[:100],
        })

    async def broadcast(
        self,
        from_id: UUID,
        target_role: AgentRole,
        message: AgentMessage,
    ) -> None:
        """Broadcast a message to all agents of a specific role."""
        sender = await self.get_instance(from_id)
        recipients = await self.get_instances_by_role(
            sender.project_id, target_role
        )

        for recipient in recipients:
            await self.send_message(from_id, recipient.id, message)
```

### Agent Execution Pattern

```python
class AgentRunner:
    """Executes agent actions using the LLM."""

    def __init__(self, instance: AgentInstance, llm_gateway: LLMGateway):
        self.instance = instance
        self.llm = llm_gateway

    async def execute(self, action: str, context: dict) -> dict:
        """Execute an action using the agent's configured model."""
        agent_type = await self.get_agent_type(self.instance.agent_type_id)

        # Build messages with system prompt and context
        messages = [
            {"role": "system", "content": self._build_system_prompt(agent_type)},
            *self._get_conversation_history(),
            {"role": "user", "content": self._build_action_prompt(action, context)},
        ]

        # Add RAG context if available
        if self.instance.rag_collection_id:
            rag_context = await self._query_rag(action, context)
            messages.insert(1, {
                "role": "system",
                "content": f"Relevant context:\n{rag_context}",
            })

        # Execute with failover
        response = await self.llm.complete(
            agent_id=str(self.instance.id),
            project_id=str(self.instance.project_id),
            messages=messages,
            model_preference=self._get_model_preference(agent_type),
        )

        # Update instance context
        self.instance.context = {
            **self.instance.context,
            "last_action": action,
            "last_response_at": datetime.utcnow().isoformat(),
        }

        return response
```

### Agent Roles

| Role | Instances | Primary Capabilities |
|------|-----------|----------------------|
| Product Owner | 1 | requirements, prioritization, client_communication |
| Project Manager | 1 | planning, tracking, coordination |
| Business Analyst | 1 | analysis, documentation, process_modeling |
| Software Architect | 1 | design, architecture_decisions, tech_selection |
| Software Engineer | 1-5 | code_generation, code_review, testing |
| UI/UX Designer | 1 | design, wireframes, accessibility |
| QA Engineer | 1-2 | test_planning, test_automation, bug_reporting |
| DevOps Engineer | 1 | cicd, infrastructure, deployment |
| AI/ML Engineer | 1 | ml_development, model_training, mlops |
| Security Expert | 1 | security_review, vulnerability_assessment |

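The per-role instance bounds in the table lend themselves to a small composition check at spawn time. A minimal sketch — the role keys and the three sample bounds are taken from the table, everything else is illustrative:

```python
# Allowed (min, max) instance counts per role, from the Agent Roles table.
ROLE_LIMITS: dict[str, tuple[int, int]] = {
    "PRODUCT_OWNER": (1, 1),
    "ENGINEER": (1, 5),
    "QA": (1, 2),
}

def validate_team(composition: dict[str, int]) -> list[str]:
    """Return a list of violations for a proposed team composition."""
    errors = []
    for role, (lo, hi) in ROLE_LIMITS.items():
        count = composition.get(role, 0)
        if not lo <= count <= hi:
            errors.append(f"{role}: {count} instances (allowed {lo}-{hi})")
    return errors
```

An orchestrator could run such a check before spawning a sprint team and refuse compositions that return a non-empty violation list.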
## Consequences

### Positive

- Clear separation between type definition and instance runtime
- Multiple instances share type configuration (DRY)
- Easy to add new agent roles
- Full observability through events
- Graceful failure handling with model failover

### Negative

- Complexity in managing the instance lifecycle
- State synchronization across instances
- Memory overhead for context storage

### Mitigation

- Context archival for long-running instances
- Periodic cleanup of terminated instances
- State compression for large contexts

## Compliance

This decision aligns with:

- FR-101: Agent type configuration
- FR-102: Agent instance spawning
- FR-103: Agent domain knowledge (RAG)
- FR-104: Inter-agent communication
- FR-105: Agent activity monitoring

---

*This ADR establishes the agent orchestration architecture for Syndarix.*

0
docs/architecture/.gitkeep
Normal file
680
docs/architecture/ARCHITECTURE_DEEP_ANALYSIS.md
Normal file
@@ -0,0 +1,680 @@
# Syndarix Architecture Deep Analysis

**Version:** 1.0
**Date:** 2025-12-29
**Status:** Draft - Architectural Thinking

---

## Executive Summary

This document captures deep architectural thinking about Syndarix beyond the immediate spikes. It addresses complex challenges that arise when building a truly autonomous multi-agent system and proposes solutions based on first principles.

---

## 1. Agent Memory and Context Management

### The Challenge

Agents in Syndarix may work on projects for weeks or months. LLM context windows are finite (128K-200K tokens), but project context grows without bound. How do we maintain coherent agent "memory" over time?

### Analysis

**Context Window Constraints:**

| Model | Context Window | Practical Limit (with tools) |
|-------|----------------|------------------------------|
| Claude 3.5 Sonnet | 200K tokens | ~150K usable |
| GPT-4 Turbo | 128K tokens | ~100K usable |
| Llama 3 (70B) | 8K-128K tokens | ~80K usable |

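The "practical limit" column reflects reserving headroom for tool schemas and the model's response. A rough estimator of that relationship — the reserve values here are illustrative assumptions, not measured overheads:

```python
def usable_context(window_tokens: int,
                   tool_overhead: int = 5_000,
                   response_reserve_frac: float = 0.2) -> int:
    """Estimate tokens available for prompt content after reserving
    a fixed budget for tool schemas and a fraction for the response."""
    reserve = tool_overhead + int(window_tokens * response_reserve_frac)
    return max(window_tokens - reserve, 0)
```

With these assumed reserves, a 200K window yields roughly 155K usable tokens and a 128K window roughly 97K, in line with the table's approximations.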
**Memory Types Needed:**

1. **Working Memory** - Current task context (fits in the context window)
2. **Short-term Memory** - Recent conversation history (RAG-retrievable)
3. **Long-term Memory** - Project knowledge, past decisions (RAG + summarization)
4. **Episodic Memory** - Specific past events/mistakes to learn from

### Proposed Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                      Agent Memory System                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐           │
│  │   Working    │  │  Short-term  │  │  Long-term   │           │
│  │   Memory     │  │   Memory     │  │   Memory     │           │
│  │  (Context)   │  │   (Redis)    │  │  (pgvector)  │           │
│  └──────┬───────┘  └──────┬───────┘  └──────┬───────┘           │
│         │                 │                 │                   │
│         └─────────────────┼─────────────────┘                   │
│                           │                                     │
│                           ▼                                     │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                   Context Assembler                      │   │
│  │                                                          │   │
│  │  1. System prompt (agent personality, role)              │   │
│  │  2. Project context (from long-term memory)              │   │
│  │  3. Task context (current issue, requirements)           │   │
│  │  4. Relevant history (from short-term memory)            │   │
│  │  5. User message                                         │   │
│  │                                                          │   │
│  │  Total: Fit within context window limits                 │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Context Compression Strategy:**

```python
class ContextManager:
    """Manages agent context to fit within LLM limits."""

    MAX_CONTEXT_TOKENS = 100_000  # Leave room for the response

    async def build_context(
        self,
        agent: AgentInstance,
        task: Task,
        user_message: str,
    ) -> list[Message]:
        # Fixed costs
        system_prompt = self._get_system_prompt(agent)  # ~2K tokens
        task_context = self._get_task_context(task)     # ~1K tokens

        # Variable budget
        remaining = self.MAX_CONTEXT_TOKENS - token_count(
            system_prompt, task_context, user_message
        )

        # Allocate the remaining budget across memory tiers
        long_term = await self._query_long_term(agent, task, budget=remaining * 0.4)
        short_term = await self._get_short_term(agent, budget=remaining * 0.4)
        episodic = await self._get_relevant_episodes(agent, task, budget=remaining * 0.2)

        return self._assemble_messages(
            system_prompt, task_context, long_term, short_term, episodic, user_message
        )
```

**Conversation Summarization:**

- After every N turns (e.g., 10), summarize the conversation and archive it
- Use a smaller/cheaper model for summarization
- Store summaries in pgvector for semantic retrieval

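The summarize-every-N-turns policy above can be sketched as a small compaction step — the `summarize` callable stands in for the cheaper model and the function names are illustrative assumptions:

```python
SUMMARIZE_EVERY = 10  # turns between summarization passes

def should_summarize(turns_since_last_summary: int) -> bool:
    """Trigger a summarization pass once N new turns have accumulated."""
    return turns_since_last_summary >= SUMMARIZE_EVERY

def compact_history(history: list[str], summarize) -> list[str]:
    """Replace accumulated turns with a single summary entry, keeping
    the most recent turn verbatim for conversational continuity."""
    if not should_summarize(len(history)):
        return history
    summary = summarize(history[:-1])  # delegate to a cheaper model in practice
    return [f"[summary] {summary}", history[-1]]
```

The archived summary text would additionally be embedded and stored in pgvector so later turns can retrieve it semantically.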
### Recommendation

Implement a **tiered memory system** with automatic context compression and semantic retrieval. Use Redis for hot short-term memory, pgvector for cold long-term memory, and automatic summarization to prevent context overflow.

---

## 2. Cross-Project Knowledge Sharing

### The Challenge

Each project has isolated knowledge, but agents could benefit from cross-project learnings:

- Common patterns (authentication, testing, CI/CD)
- Technology expertise (e.g., how to configure Kubernetes)
- Anti-patterns (what didn't work before)

### Analysis

**Privacy Considerations:**

- Client data must remain isolated (contractual, legal)
- Technical patterns are generally shareable
- Need a clear data classification

**Knowledge Categories:**

| Category | Scope | Examples |
|----------|-------|----------|
| **Client Data** | Project-only | Requirements, business logic, code |
| **Technical Patterns** | Global | Best practices, configurations |
| **Agent Learnings** | Global | What approaches worked/failed |
| **Anti-patterns** | Global | Common mistakes to avoid |

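The scope column maps naturally onto a small classification gate that filters what may leave a project. A sketch — category keys mirror the table, but the names and dict shape are illustrative assumptions:

```python
from enum import Enum

class Scope(Enum):
    PROJECT_ONLY = "project-only"
    GLOBAL = "global"

# Category → scope, mirroring the Knowledge Categories table.
CATEGORY_SCOPE = {
    "client_data": Scope.PROJECT_ONLY,
    "technical_pattern": Scope.GLOBAL,
    "agent_learning": Scope.GLOBAL,
    "anti_pattern": Scope.GLOBAL,
}

def shareable(learnings: list[dict]) -> list[dict]:
    """Keep only learnings whose category is globally shareable;
    unknown categories are treated as project-only by default."""
    return [
        item for item in learnings
        if CATEGORY_SCOPE.get(item["category"]) is Scope.GLOBAL
    ]
```

Defaulting unknown categories to project-only keeps the gate fail-closed, which matches the isolation-by-default stance below.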
### Proposed Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                        Knowledge Graph                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │                    GLOBAL KNOWLEDGE                     │    │
│  │  ┌─────────────┐ ┌──────────────┐ ┌─────────────┐       │    │
│  │  │  Patterns   │ │ Anti-patterns│ │  Expertise  │       │    │
│  │  │  Library    │ │   Library    │ │    Index    │       │    │
│  │  └─────────────┘ └──────────────┘ └─────────────┘       │    │
│  └─────────────────────────────────────────────────────────┘    │
│                            ▲                                    │
│                            │ Curated extraction                 │
│                            │                                    │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐              │
│  │  Project A  │  │  Project B  │  │  Project C  │              │
│  │  Knowledge  │  │  Knowledge  │  │  Knowledge  │              │
│  │  (Isolated) │  │  (Isolated) │  │  (Isolated) │              │
│  └─────────────┘  └─────────────┘  └─────────────┘              │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Knowledge Extraction Pipeline:**

```python
class KnowledgeExtractor:
    """Extracts shareable learnings from project work."""

    async def extract_learnings(self, project_id: str) -> list[Learning]:
        """
        Run periodically or after sprints to extract learnings.
        Human review is required before promoting to global.
        """
        # Get completed work
        completed_issues = await self.get_completed_issues(project_id)

        # Extract patterns using the LLM
        patterns = await self.llm.extract_patterns(
            completed_issues,
            categories=["architecture", "testing", "deployment", "security"],
        )

        # Classify privacy
        for pattern in patterns:
            pattern.privacy_level = await self.llm.classify_privacy(pattern)

        # Return only shareable patterns for review
        return [p for p in patterns if p.privacy_level == "public"]
```

### Recommendation

Implement **privacy-aware knowledge extraction** with a human review gate. Project knowledge stays isolated by default; only explicitly approved patterns flow to global knowledge.

---

## 3. Agent Specialization vs Generalization Trade-offs

### The Challenge

Should each agent type be highly specialized (depth) or have overlapping capabilities (breadth)?

### Analysis

**Specialization Benefits:**

- Deeper expertise in a domain
- Cleaner system prompts
- Less confusion about responsibilities
- Easier to optimize prompts per role

**Generalization Benefits:**

- Fewer agent types to maintain
- Smoother handoffs (shared context)
- More flexible team composition
- Graceful degradation if an agent is unavailable

**Current Agent Types (10):**

| Role | Primary Domain | Potential Overlap |
|------|----------------|-------------------|
| Product Owner | Requirements | Business Analyst |
| Business Analyst | Documentation | Product Owner |
| Project Manager | Planning | Product Owner |
| Software Architect | Design | Senior Engineer |
| Software Engineer | Coding | Architect, QA |
| UI/UX Designer | Interface | Frontend Engineer |
| QA Engineer | Testing | Software Engineer |
| DevOps Engineer | Infrastructure | Senior Engineer |
| AI/ML Engineer | ML/AI | Software Engineer |
| Security Expert | Security | All |

### Proposed Approach: Layered Specialization

```
┌─────────────────────────────────────────────────────────────────┐
│                    Agent Capability Layers                      │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Layer 3: Role-Specific Expertise                               │
│  ┌─────────┐  ┌──────────┐  ┌──────────┐  ┌─────────┐           │
│  │ Product │  │ Architect│  │ Engineer │  │   QA    │           │
│  │  Owner  │  │          │  │          │  │         │           │
│  └────┬────┘  └────┬─────┘  └────┬─────┘  └────┬────┘           │
│       │            │             │             │                │
│  Layer 2: Shared Professional Skills                            │
│  ┌──────────────────────────────────────────────────────┐       │
│  │ Technical Communication | Code Understanding | Git   │       │
│  │ Documentation | Research | Problem Decomposition     │       │
│  └──────────────────────────────────────────────────────┘       │
│       │                                                         │
│  Layer 1: Foundation Model Capabilities                         │
│  ┌──────────────────────────────────────────────────────┐       │
│  │ Reasoning | Analysis | Writing | Coding (LLM Base)   │       │
│  └──────────────────────────────────────────────────────┘       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Capability Inheritance:**

```python
class AgentTypeBuilder:
    """Builds agent types with layered capabilities."""

    BASE_CAPABILITIES = [
        "reasoning", "analysis", "writing", "coding_assist",
    ]

    PROFESSIONAL_SKILLS = [
        "technical_communication", "code_understanding",
        "git_operations", "documentation", "research",
    ]

    ROLE_SPECIFIC = {
        "ENGINEER": ["code_generation", "code_review", "testing", "debugging"],
        "ARCHITECT": ["system_design", "adr_writing", "tech_selection"],
        "QA": ["test_planning", "test_automation", "bug_reporting"],
        # ...
    }

    def build_capabilities(self, role: AgentRole) -> list[str]:
        return (
            self.BASE_CAPABILITIES +
            self.PROFESSIONAL_SKILLS +
            self.ROLE_SPECIFIC[role]
        )
```

### Recommendation

Adopt **layered specialization** where all agents share foundational and professional capabilities, with role-specific expertise on top. This enables smooth collaboration while maintaining clear responsibilities.

---

## 4. Human-Agent Collaboration Model

### The Challenge

Beyond approval gates, how do humans effectively collaborate with autonomous agents during active work?

### Interaction Patterns

| Pattern | Use Case | Frequency |
|---------|----------|-----------|
| **Approval** | Confirm before action | Per checkpoint |
| **Guidance** | Steer direction | On-demand |
| **Override** | Correct a mistake | Rare |
| **Pair Working** | Work together | Optional |
| **Review** | Evaluate output | Post-completion |

### Proposed Collaboration Interface

```
┌─────────────────────────────────────────────────────────────────┐
│              Human-Agent Collaboration Dashboard                │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                     Activity Stream                      │   │
│  │  ──────────────────────────────────────────────────────  │   │
│  │  [10:23] Dave (Engineer) is implementing login API       │   │
│  │  [10:24] Dave created auth/service.py                    │   │
│  │  [10:25] Dave is writing unit tests                      │   │
│  │  [LIVE] Dave: "I'm adding JWT validation. Using HS256..."│   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                    Intervention Panel                    │   │
│  │                                                          │   │
│  │  [💬 Chat]  [⏸️ Pause]  [↩️ Undo Last]  [📝 Guide]        │   │
│  │                                                          │   │
│  │  Quick Guidance:                                         │   │
│  │  ┌─────────────────────────────────────────────────┐     │   │
│  │  │ "Use RS256 instead of HS256 for JWT signing"    │     │   │
│  │  │                                      [Send] 📤  │     │   │
│  │  └─────────────────────────────────────────────────┘     │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Intervention API:**

```python
@router.post("/agents/{agent_id}/intervene")
async def intervene(
    agent_id: UUID,
    intervention: InterventionRequest,
    current_user: User = Depends(get_current_user),
):
    """Allow a human to intervene in agent work."""
    match intervention.type:
        case "pause":
            await orchestrator.pause_agent(agent_id)
        case "resume":
            await orchestrator.resume_agent(agent_id)
        case "guide":
            await orchestrator.send_guidance(agent_id, intervention.message)
        case "undo":
            await orchestrator.undo_last_action(agent_id)
        case "override":
            await orchestrator.override_decision(agent_id, intervention.decision)
```

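The `InterventionRequest` body consumed by this endpoint can be as small as the sketch below; the field names follow the attribute accesses in the handler, but the exact schema is an assumption:

```python
from dataclasses import dataclass
from typing import Literal, Optional

@dataclass
class InterventionRequest:
    """Payload for a human intervention in agent work."""
    type: Literal["pause", "resume", "guide", "undo", "override"]
    message: Optional[str] = None   # free-text guidance (used by "guide")
    decision: Optional[str] = None  # replacement decision (used by "override")
```

In the FastAPI route this would be a Pydantic model rather than a dataclass, so the `type` literal is validated at request time.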
### Recommendation

Build a **real-time collaboration dashboard** with intervention capabilities. Humans should be able to observe, guide, pause, and correct agents without stopping the entire workflow.

---

## 5. Testing Strategy for Autonomous AI Systems

### The Challenge

Traditional testing (unit, integration, E2E) doesn't capture autonomous agent behavior. How do we ensure quality?

### Testing Pyramid for AI Agents

```
                 ▲
                ╱ ╲
               ╱   ╲
              ╱ E2E ╲           Agent Scenarios
             ╱ Agent ╲          (Full workflows)
            ╱─────────╲
           ╱Integration╲        Tool + LLM Integration
          ╱ (with mocks)╲       (Deterministic responses)
         ╱───────────────╲
        ╱   Unit Tests    ╲     Orchestrator, Services
       ╱  (no LLM needed)  ╲    (Pure logic)
      ╱─────────────────────╲
     ╱    Prompt Testing     ╲  System prompt evaluation
    ╱      (LLM evals)        ╲ (Quality metrics)
   ╱───────────────────────────╲
```

### Test Categories

**1. Prompt Testing (Eval Framework):**

```python
class PromptEvaluator:
    """Evaluate system prompt quality."""

    TEST_CASES = [
        EvalCase(
            name="requirement_extraction",
            input="Client wants a mobile app for food delivery",
            expected_behaviors=[
                "asks clarifying questions",
                "identifies stakeholders",
                "considers non-functional requirements",
            ],
        ),
        EvalCase(
            name="code_review_thoroughness",
            input="Review this PR: [vulnerable SQL code]",
            expected_behaviors=[
                "identifies SQL injection",
                "suggests parameterized queries",
                "mentions security best practices",
            ],
        ),
    ]

    async def evaluate(self, agent_type: AgentType) -> EvalReport:
        results = []
        for case in self.TEST_CASES:
            response = await self.llm.complete(
                system=agent_type.system_prompt,
                user=case.input,
            )
            score = await self.judge_response(response, case.expected_behaviors)
            results.append(score)
        return EvalReport(results)
```

**2. Integration Testing (Mock LLM):**

```python
@pytest.fixture
def mock_llm():
    """Deterministic LLM responses for integration tests."""
    responses = {
        "analyze requirements": "...",
        "generate code": "def hello(): return 'world'",
        "review code": "LGTM",
    }
    return MockLLM(responses)


async def test_story_implementation_workflow(mock_llm):
    """Test the full workflow with predictable responses."""
    orchestrator = AgentOrchestrator(llm=mock_llm)

    result = await orchestrator.execute_workflow(
        workflow="implement_story",
        inputs={"story_id": "TEST-123"},
    )

    assert result.status == "completed"
    assert "hello" in result.artifacts["code"]
```

**3. Agent Scenario Testing:**

```python
class AgentScenarioTest:
    """End-to-end agent behavior testing."""

    @scenario("engineer_handles_bug_report")
    async def test_bug_resolution(self):
        """Engineer agent should fix bugs correctly."""
        # Setup
        project = await create_test_project()
        engineer = await spawn_agent("engineer", project)

        # Act
        bug = await create_issue(
            project,
            title="Login button not working",
            type="bug",
        )
        result = await engineer.handle(bug)

        # Assert
        assert result.pr_created
        assert result.tests_pass
        assert "button" in result.changes_summary.lower()
```

### Recommendation

Implement a **multi-layer testing strategy** with prompt evals, deterministic integration tests, and scenario-based agent testing. Use LLM-as-judge for evaluating open-ended responses.

---

## 6. Rollback and Recovery

### The Challenge

Autonomous agents will make mistakes. How do we recover gracefully?

### Error Categories

| Category | Example | Recovery Strategy |
|----------|---------|-------------------|
| **Reversible** | Wrong code generated | Revert commit, regenerate |
| **Partially Reversible** | Merged bad PR | Revert PR, fix, re-merge |
| **Non-reversible** | Deployed to production | Forward-fix or rollback deploy |
| **External Side Effects** | Email sent to client | Apology + correction |
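These categories become machine-checkable if each action type is tagged at registration time. A sketch, with illustrative action names that are not from the source:

```python
from enum import Enum


class Reversibility(Enum):
    REVERSIBLE = "reversible"
    PARTIAL = "partially_reversible"
    NON_REVERSIBLE = "non_reversible"
    EXTERNAL = "external_side_effect"


# Hypothetical registry mapping action types to recovery categories.
ACTION_REVERSIBILITY = {
    "commit": Reversibility.REVERSIBLE,
    "merge_pr": Reversibility.PARTIAL,
    "deploy_production": Reversibility.NON_REVERSIBLE,
    "send_email": Reversibility.EXTERNAL,
}


def classify(action_type: str) -> Reversibility:
    # Default to the most conservative category for unknown action types.
    return ACTION_REVERSIBILITY.get(action_type, Reversibility.NON_REVERSIBLE)
```

Defaulting unknown actions to non-reversible forces new tool integrations to opt in to automatic rollback explicitly.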

### Recovery Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                         Recovery System                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                        Action Log                         │  │
│  │  ┌─────────────────────────────────────────────────────┐  │  │
│  │  │ Action ID | Agent | Type      | Reversible | State  │  │  │
│  │  ├─────────────────────────────────────────────────────┤  │  │
│  │  │ a-001     | Dave  | commit    | Yes     | completed │  │  │
│  │  │ a-002     | Dave  | push      | Yes     | completed │  │  │
│  │  │ a-003     | Dave  | create_pr | Yes     | completed │  │  │
│  │  │ a-004     | Kate  | merge_pr  | Partial | completed │  │  │
│  │  └─────────────────────────────────────────────────────┘  │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │                      Rollback Engine                      │  │
│  │                                                           │  │
│  │  rollback_to(action_id) -> Reverses all actions after     │  │
│  │  undo_action(action_id) -> Reverses single action         │  │
│  │  compensate(action_id)  -> Creates compensating action    │  │
│  │                                                           │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Action Logging:**

```python
class ActionLog:
    """Immutable log of all agent actions for recovery."""

    async def record(
        self,
        agent_id: UUID,
        action_type: str,
        inputs: dict,
        outputs: dict,
        reversible: bool,
        reverse_action: str | None = None,
    ) -> ActionRecord:
        record = ActionRecord(
            id=uuid4(),
            agent_id=agent_id,
            action_type=action_type,
            inputs=inputs,
            outputs=outputs,
            reversible=reversible,
            reverse_action=reverse_action,
            timestamp=datetime.now(timezone.utc),  # utcnow() is deprecated
        )
        await self.db.add(record)
        return record

    async def rollback_to(self, action_id: UUID) -> RollbackResult:
        """Roll back all actions recorded after the given action."""
        actions = await self.get_actions_after(action_id)

        results = []
        for action in reversed(actions):
            if action.reversible:
                result = await self._execute_reverse(action)
                results.append(result)
            else:
                results.append(RollbackSkipped(action, reason="non-reversible"))

        return RollbackResult(results)
```

**Compensation Pattern:**

```python
class CompensationEngine:
    """Handles compensating actions for non-reversible operations."""

    COMPENSATIONS = {
        "email_sent": "send_correction_email",
        "deployment": "rollback_deployment",
        "external_api_call": "create_reversal_request",
    }

    async def compensate(self, action: ActionRecord) -> CompensationResult:
        if action.action_type in self.COMPENSATIONS:
            compensation = self.COMPENSATIONS[action.action_type]
            return await self._execute_compensation(compensation, action)
        else:
            return CompensationResult(
                status="manual_required",
                message=f"No automatic compensation for {action.action_type}",
            )
```

### Recommendation

Implement **comprehensive action logging** with rollback capabilities. Define compensation strategies for non-reversible actions. Enable point-in-time recovery for project state.

---

## 7. Security Considerations for Autonomous Agents

### Threat Model

| Threat | Risk | Mitigation |
|--------|------|------------|
| Agent executes malicious code | High | Sandboxed execution, code review gates |
| Agent exfiltrates data | High | Network isolation, output filtering |
| Prompt injection via user input | Medium | Input sanitization, prompt hardening |
| Agent credential abuse | Medium | Least-privilege tokens, short TTL |
| Agent collusion | Low | Independent agent instances, monitoring |
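The prompt-injection mitigation can start as a heuristic scan over user input before it reaches an agent. A sketch, with illustrative patterns only; real detection needs far more than regexes:

```python
import re

# Illustrative patterns, not an exhaustive or authoritative list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"you are now",
    r"reveal (the |your )?system prompt",
]


def looks_like_injection(text: str) -> bool:
    """Flag input that matches common instruction-override phrasings."""
    lowered = text.lower()
    return any(re.search(pattern, lowered) for pattern in INJECTION_PATTERNS)
```

A flagged input would then be quarantined for human review rather than silently dropped, so false positives stay visible.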

### Security Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                         Security Layers                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Layer 4: Output Filtering                                      │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ - Code scan before commit                                 │  │
│  │ - Secrets detection                                       │  │
│  │ - Policy compliance check                                 │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  Layer 3: Action Authorization                                  │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ - Role-based permissions                                  │  │
│  │ - Project scope enforcement                               │  │
│  │ - Sensitive action approval                               │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  Layer 2: Input Sanitization                                    │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ - Prompt injection detection                              │  │
│  │ - Content filtering                                       │  │
│  │ - Schema validation                                       │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
│  Layer 1: Infrastructure Isolation                              │
│  ┌───────────────────────────────────────────────────────────┐  │
│  │ - Container sandboxing                                    │  │
│  │ - Network segmentation                                    │  │
│  │ - File system restrictions                                │  │
│  └───────────────────────────────────────────────────────────┘  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

### Recommendation

Implement **defense-in-depth** with multiple security layers. Assume agents can be compromised and design for containment.

---

## Summary of Recommendations

| Area | Recommendation | Priority |
|------|----------------|----------|
| Memory | Tiered memory with context compression | High |
| Knowledge | Privacy-aware extraction with human gate | Medium |
| Specialization | Layered capabilities with role-specific top layer | Medium |
| Collaboration | Real-time dashboard with intervention | High |
| Testing | Multi-layer with prompt evals | High |
| Recovery | Action logging with rollback engine | High |
| Security | Defense-in-depth, assume compromise | High |

---

## Next Steps

1. **Validate with spike research** - Update based on spike findings
2. **Create detailed ADRs** - For memory, recovery, security
3. **Prototype critical paths** - Memory system, rollback engine
4. **Security review** - External audit before production

---

*This document captures architectural thinking to guide implementation. It should be updated as spikes complete and the design evolves.*

487 docs/architecture/ARCHITECTURE_OVERVIEW.md (new file)
@@ -0,0 +1,487 @@

# Syndarix Architecture Overview

**Version:** 1.0
**Date:** 2025-12-29
**Status:** Draft

---

## Table of Contents

1. [Executive Summary](#1-executive-summary)
2. [System Context](#2-system-context)
3. [High-Level Architecture](#3-high-level-architecture)
4. [Core Components](#4-core-components)
5. [Data Architecture](#5-data-architecture)
6. [Integration Architecture](#6-integration-architecture)
7. [Security Architecture](#7-security-architecture)
8. [Deployment Architecture](#8-deployment-architecture)
9. [Cross-Cutting Concerns](#9-cross-cutting-concerns)
10. [Architecture Decisions](#10-architecture-decisions)

---

## 1. Executive Summary

Syndarix is an AI-powered software consulting agency platform that orchestrates specialized AI agents to deliver complete software solutions autonomously. This document describes the technical architecture that enables:

- **Multi-Agent Orchestration:** 10 specialized agent roles collaborating on projects
- **MCP-First Integration:** All external tools via Model Context Protocol
- **Real-time Visibility:** SSE-based event streaming for progress tracking
- **Autonomous Workflows:** Configurable autonomy levels, from full control to fully autonomous
- **Full Artifact Delivery:** Code, documentation, tests, and ADRs

### Architecture Principles

1. **MCP-First:** All integrations through unified MCP servers
2. **Event-Driven:** Async communication via Redis Pub/Sub
3. **Type-Safe:** Full typing in Python and TypeScript
4. **Stateless Services:** Horizontal scaling through stateless design
5. **Explicit Scoping:** All operations scoped to project/agent

## 2. System Context

### Context Diagram

```
┌─────────────────────────────────────────────────────────────────────────────┐
│                               EXTERNAL ACTORS                               │
├─────────────────────────────────────────────────────────────────────────────┤
│                                                                             │
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐      │
│  │   Client    │   │    Admin    │   │  LLM APIs   │   │  Git Hosts  │      │
│  │   (Human)   │   │   (Human)   │   │ (Anthropic) │   │   (Gitea)   │      │
│  └──────┬──────┘   └──────┬──────┘   └──────┬──────┘   └──────┬──────┘      │
│         │                 │                 │                 │             │
└─────────│─────────────────│─────────────────│─────────────────│─────────────┘
          │                 │                 │                 │
          │ Web UI          │ Admin UI        │ API             │ API
          │ SSE             │                 │                 │
          ▼                 ▼                 ▼                 ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                                                                             │
│                              SYNDARIX PLATFORM                              │
│                                                                             │
│  ┌─────────────────────────────────────────────────────────────────────┐    │
│  │                         Agent Orchestration                         │    │
│  │  ┌────────┐  ┌────────┐  ┌────────┐  ┌────────┐  ┌────────┐         │    │
│  │  │   PO   │  │   PM   │  │  Arch  │  │  Eng   │  │   QA   │  ...    │    │
│  │  └────────┘  └────────┘  └────────┘  └────────┘  └────────┘         │    │
│  └─────────────────────────────────────────────────────────────────────┘    │
│                                                                             │
└─────────────────────────────────────────────────────────────────────────────┘
          │                 │                 │
          │ Storage         │ Events          │ Tasks
          ▼                 ▼                 ▼
┌─────────────────────────────────────────────────────────────────────────────┐
│                                INFRASTRUCTURE                               │
├─────────────────────────────────────────────────────────────────────────────┤
│  ┌─────────────┐   ┌─────────────┐   ┌─────────────┐   ┌─────────────┐      │
│  │ PostgreSQL  │   │    Redis    │   │   Celery    │   │ MCP Servers │      │
│  │ + pgvector  │   │   Pub/Sub   │   │   Workers   │   │  (7 types)  │      │
│  └─────────────┘   └─────────────┘   └─────────────┘   └─────────────┘      │
└─────────────────────────────────────────────────────────────────────────────┘
```

### Key Actors

| Actor | Type | Interaction |
|-------|------|-------------|
| Client | Human | Web UI, approvals, feedback |
| Admin | Human | Configuration, monitoring |
| LLM Providers | External | Claude, GPT-4, local models |
| Git Hosts | External | Gitea, GitHub, GitLab |
| CI/CD Systems | External | Gitea Actions, etc. |

---

## 3. High-Level Architecture

### Layered Architecture

```
┌───────────────────────────────────────────────────────────────────┐
│                        PRESENTATION LAYER                         │
│  ┌─────────────────────────────────────────────────────────────┐  │
│  │                     Next.js 16 Frontend                     │  │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐     │  │
│  │  │Dashboard │  │ Projects │  │  Agents  │  │  Issues  │     │  │
│  │  └──────────┘  └──────────┘  └──────────┘  └──────────┘     │  │
│  └─────────────────────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────────────────────┘
                                │
                                │ REST + SSE + WebSocket
                                ▼
┌───────────────────────────────────────────────────────────────────┐
│                         APPLICATION LAYER                         │
│  ┌─────────────────────────────────────────────────────────────┐  │
│  │                       FastAPI Backend                       │  │
│  │  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐     │  │
│  │  │   Auth   │  │   API    │  │ Services │  │  Events  │     │  │
│  │  └──────────┘  └──────────┘  └──────────┘  └──────────┘     │  │
│  └─────────────────────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌───────────────────────────────────────────────────────────────────┐
│                        ORCHESTRATION LAYER                        │
│  ┌─────────────────────────────────────────────────────────────┐  │
│  │  ┌───────────────┐  ┌───────────────┐  ┌───────────────┐    │  │
│  │  │     Agent     │  │   Workflow    │  │    Project    │    │  │
│  │  │ Orchestrator  │  │    Engine     │  │    Manager    │    │  │
│  │  └───────────────┘  └───────────────┘  └───────────────┘    │  │
│  └─────────────────────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌───────────────────────────────────────────────────────────────────┐
│                         INTEGRATION LAYER                         │
│  ┌─────────────────────────────────────────────────────────────┐  │
│  │                     MCP Client Manager                      │  │
│  │   Connects to: LLM, Git, KB, Issues, FS, Code, CI/CD MCPs   │  │
│  └─────────────────────────────────────────────────────────────┘  │
└───────────────────────────────────────────────────────────────────┘
                                │
                                ▼
┌───────────────────────────────────────────────────────────────────┐
│                            DATA LAYER                             │
│  ┌──────────────┐   ┌──────────────┐   ┌──────────────┐           │
│  │  PostgreSQL  │   │    Redis     │   │  File Store  │           │
│  │  + pgvector  │   │              │   │              │           │
│  └──────────────┘   └──────────────┘   └──────────────┘           │
└───────────────────────────────────────────────────────────────────┘
```

---

## 4. Core Components

### 4.1 Agent Orchestrator

**Purpose:** Manages agent lifecycle, spawning, communication, and coordination.

**Responsibilities:**
- Spawn agent instances from type definitions
- Route messages between agents
- Manage agent context and memory
- Handle agent failover
- Track resource usage

**Key Patterns:**
- Type-Instance pattern (types define templates, instances are runtime)
- Message routing with priority queues
- Context compression for long-running agents
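Context compression can start as a simple budget-based trim that keeps the newest messages. A sketch, using a character budget as a stand-in for a real token budget (function name and policy are illustrative, not from the source):

```python
def compress_context(messages: list[str], budget: int) -> list[str]:
    """Keep the most recent messages that fit the budget, oldest dropped first."""
    kept: list[str] = []
    used = 0
    # Walk from newest to oldest, stopping once the budget is exhausted.
    for msg in reversed(messages):
        if used + len(msg) > budget:
            break
        kept.append(msg)
        used += len(msg)
    return list(reversed(kept))  # restore chronological order
```

A production version would summarize the dropped prefix into a synthetic message rather than discard it outright.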

See: [ADR-006: Agent Orchestration](../adrs/ADR-006-agent-orchestration.md)

### 4.2 Workflow Engine

**Purpose:** Orchestrates multi-step workflows and agent collaboration.

**Responsibilities:**
- Execute workflow templates (requirements discovery, sprint, etc.)
- Track workflow state and progress
- Handle branching and conditions
- Manage approval gates

**Workflow Types:**
- Requirements Discovery
- Architecture Spike
- Sprint Planning
- Implementation
- Sprint Demo

### 4.3 Project Manager (Component)

**Purpose:** Manages project lifecycle, configuration, and state.

**Responsibilities:**
- Create and configure projects
- Manage complexity levels
- Track project status
- Generate reports

### 4.4 LLM Gateway

**Purpose:** Unified LLM access with failover and cost tracking.

**Implementation:** LiteLLM-based router with:
- Multiple model groups (high-reasoning, fast-response)
- Automatic failover chain
- Per-agent token tracking
- Redis-backed caching
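Independent of LiteLLM's own router, the failover chain reduces to trying providers in priority order and falling through on errors. A minimal sketch, where the provider callables are illustrative stand-ins for real client calls:

```python
from collections.abc import Callable


def complete_with_failover(
    providers: list[Callable[[str], str]],
    prompt: str,
) -> str:
    """Try each provider in priority order; raise only if all of them fail."""
    last_error: Exception | None = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:  # real code would catch provider errors only
            last_error = err
    raise RuntimeError("all providers failed") from last_error
```

Keeping the chain a plain ordered list makes the "high-reasoning" vs. "fast-response" model groups just two differently ordered lists over the same providers.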

See: [ADR-004: LLM Provider Abstraction](../adrs/ADR-004-llm-provider-abstraction.md)

### 4.5 MCP Client Manager

**Purpose:** Connects to all MCP servers and routes tool calls.

**Implementation:**
- SSE connections to 7 MCP server types
- Automatic reconnection
- Request/response correlation
- Scoped tool calls with project_id/agent_id

See: [ADR-001: MCP Integration Architecture](../adrs/ADR-001-mcp-integration-architecture.md)

### 4.6 Event Bus

**Purpose:** Real-time event distribution using Redis Pub/Sub.

**Channels:**
- `project:{project_id}` - Project-scoped events
- `agent:{agent_id}` - Agent-specific events
- `system` - System-wide announcements

See: [ADR-002: Real-time Communication](../adrs/ADR-002-realtime-communication.md)

---

## 5. Data Architecture

### 5.1 Entity Model

```
┌─────────────┐       ┌─────────────┐       ┌─────────────┐
│    User     │──1:N──│   Project   │──1:N──│   Sprint    │
└─────────────┘       └─────────────┘       └─────────────┘
                             │ 1:N                │ 1:N
                  ┌──────────┼──────────┐         │
                  │          │          │         │
           ┌──────┴──────┐ ┌─┴───────┐  │   ┌─────┴─────┐
           │AgentInstance│ │Repository│ └───│   Issue   │
           └─────────────┘ └─────────┘      └───────────┘
                  │ 1:N         │ 1:N             │ 1:N
           ┌──────┴──────┐ ┌────┴──────┐   ┌──────┴──────┐
           │   Message   │ │PullRequest│   │IssueComment │
           └─────────────┘ └───────────┘   └─────────────┘
```

### 5.2 Key Entities

| Entity | Purpose | Key Fields |
|--------|---------|------------|
| User | Human users | email, auth |
| Project | Work containers | name, complexity, autonomy_level |
| AgentType | Agent templates | base_model, expertise, system_prompt |
| AgentInstance | Running agents | name, project_id, context |
| Issue | Work items | type, status, external_tracker_fields |
| Sprint | Time-boxed iterations | goal, velocity |
| Repository | Git repos | provider, clone_url |
| KnowledgeDocument | RAG documents | content, embedding_id |

### 5.3 Vector Storage

**pgvector** extension for:
- Document embeddings (RAG)
- Semantic search across knowledge base
- Agent context similarity
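pgvector's similarity operators reduce to standard cosine similarity; the underlying computation in pure Python:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Note: pgvector's `<=>` operator returns cosine *distance*, i.e. 1 - similarity.
```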

---

## 6. Integration Architecture

### 6.1 MCP Server Registry

| Server | Port | Purpose | Priority Providers |
|--------|------|---------|-------------------|
| LLM Gateway | 9001 | LLM routing | Anthropic, OpenAI, Ollama |
| Git MCP | 9002 | Git operations | Gitea, GitHub, GitLab |
| Knowledge Base | 9003 | RAG search | pgvector |
| Issues MCP | 9004 | Issue tracking | Gitea, GitHub, GitLab |
| File System | 9005 | Workspace files | Local FS |
| Code Analysis | 9006 | Static analysis | Ruff, ESLint |
| CI/CD MCP | 9007 | Pipelines | Gitea Actions |

### 6.2 External Integration Diagram

```
┌─────────────────────────────────────────────────────────────────┐
│                        Syndarix Backend                         │
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                    MCP Client Manager                    │   │
│  │                                                          │   │
│  │  ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐  │   │
│  │  │  LLM   │ │  Git   │ │   KB   │ │ Issues │ │ CI/CD  │  │   │
│  │  │ Client │ │ Client │ │ Client │ │ Client │ │ Client │  │   │
│  │  └───┬────┘ └───┬────┘ └───┬────┘ └───┬────┘ └───┬────┘  │   │
│  └──────│──────────│──────────│──────────│──────────│──────┘   │
└─────────│──────────│──────────│──────────│──────────│──────────┘
          │          │          │          │          │
          │ SSE      │ SSE      │ SSE      │ SSE      │ SSE
          ▼          ▼          ▼          ▼          ▼
     ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
     │  LLM   │ │  Git   │ │   KB   │ │ Issues │ │ CI/CD  │
     │  MCP   │ │  MCP   │ │  MCP   │ │  MCP   │ │  MCP   │
     │ Server │ │ Server │ │ Server │ │ Server │ │ Server │
     └───┬────┘ └───┬────┘ └───┬────┘ └───┬────┘ └───┬────┘
         │          │          │          │          │
         ▼          ▼          ▼          ▼          ▼
    ┌─────────┐ ┌────────┐ ┌────────┐ ┌────────┐ ┌────────┐
    │Anthropic│ │ Gitea  │ │pgvector│ │ Gitea  │ │ Gitea  │
    │ OpenAI  │ │ GitHub │ │        │ │ Issues │ │Actions │
    │ Ollama  │ │ GitLab │ │        │ │        │ │        │
    └─────────┘ └────────┘ └────────┘ └────────┘ └────────┘
```

---

## 7. Security Architecture

### 7.1 Authentication

- **JWT Dual-Token:** Access token (15 min) + Refresh token (7 days)
- **OAuth 2.0 Provider:** For MCP client authentication
- **Service Tokens:** Internal service-to-service auth

### 7.2 Authorization

- **RBAC:** Role-based access control
- **Project Scoping:** All operations scoped to projects
- **Agent Permissions:** Agents operate within project scope

### 7.3 Data Protection

- **TLS 1.3:** All external communications
- **Encryption at Rest:** Database encryption
- **Secrets Management:** Environment-based, never in code

---

## 8. Deployment Architecture

### 8.1 Container Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                         Docker Compose                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐         │
│  │ Frontend │  │ Backend  │  │ Workers  │  │  Flower  │         │
│  │ (Next.js)│  │ (FastAPI)│  │ (Celery) │  │(Monitor) │         │
│  │  :3000   │  │  :8000   │  │          │  │  :5555   │         │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘         │
│                                                                 │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐  ┌──────────┐         │
│  │ LLM MCP  │  │ Git MCP  │  │  KB MCP  │  │Issues MCP│         │
│  │  :9001   │  │  :9002   │  │  :9003   │  │  :9004   │         │
│  └──────────┘  └──────────┘  └──────────┘  └──────────┘         │
│                                                                 │
│  ┌──────────┐  ┌──────────┐  ┌──────────┐                       │
│  │  FS MCP  │  │ Code MCP │  │CI/CD MCP │                       │
│  │  :9005   │  │  :9006   │  │  :9007   │                       │
│  └──────────┘  └──────────┘  └──────────┘                       │
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                      Infrastructure                      │   │
│  │        ┌──────────┐          ┌──────────┐                │   │
│  │        │PostgreSQL│          │  Redis   │                │   │
│  │        │  :5432   │          │  :6379   │                │   │
│  │        └──────────┘          └──────────┘                │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

### 8.2 Scaling Strategy

| Component | Scaling | Strategy |
|-----------|---------|----------|
| Frontend | Horizontal | Stateless, behind LB |
| Backend | Horizontal | Stateless, behind LB |
| Celery Workers | Horizontal | Queue-based routing |
| MCP Servers | Horizontal | Stateless singletons |
| PostgreSQL | Vertical + Read Replicas | Primary/replica |
| Redis | Cluster | Sentinel or Cluster mode |

---

## 9. Cross-Cutting Concerns

### 9.1 Logging

- **Format:** Structured JSON
- **Correlation:** Request IDs across services
- **Levels:** DEBUG, INFO, WARNING, ERROR, CRITICAL

### 9.2 Monitoring

- **Metrics:** Prometheus-compatible export
- **Traces:** OpenTelemetry (future)
- **Dashboards:** Grafana (optional)

### 9.3 Error Handling

- **Agent Errors:** Logged, published via SSE
- **Task Failures:** Celery retry with backoff
- **Integration Errors:** Circuit breaker pattern
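The circuit breaker for integration errors can be sketched in a few lines (class name and thresholds are illustrative, not from the source):

```python
import time


class CircuitBreaker:
    """Open the circuit after N consecutive failures; allow a probe after a cooldown."""

    def __init__(self, threshold: int = 3, cooldown: float = 30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: let one probe call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit
        return result
```

Wrapping each MCP client's calls in a breaker keeps one failing external service from stalling the whole agent workflow.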

---

## 10. Architecture Decisions

### Summary of ADRs

| ADR | Title | Status |
|-----|-------|--------|
| [ADR-001](../adrs/ADR-001-mcp-integration-architecture.md) | MCP Integration Architecture | Accepted |
| [ADR-002](../adrs/ADR-002-realtime-communication.md) | Real-time Communication | Accepted |
| [ADR-003](../adrs/ADR-003-background-task-architecture.md) | Background Task Architecture | Accepted |
| [ADR-004](../adrs/ADR-004-llm-provider-abstraction.md) | LLM Provider Abstraction | Accepted |
| [ADR-005](../adrs/ADR-005-tech-stack-selection.md) | Tech Stack Selection | Accepted |
| [ADR-006](../adrs/ADR-006-agent-orchestration.md) | Agent Orchestration | Accepted |

### Key Decisions Summary

1. **Unified Singleton MCP Servers** with project/agent scoping
2. **SSE for real-time events**, WebSocket only for chat
3. **Celery + Redis** for background tasks
4. **LiteLLM** for unified LLM abstraction with failover
5. **PragmaStack** as foundation with Syndarix extensions
6. **Type-Instance pattern** for agent orchestration
---
|
||||||
|
|
||||||
|
## Appendix A: Technology Stack Quick Reference

| Layer | Technology |
|-------|------------|
| Frontend | Next.js 16, React 19, TypeScript, Tailwind, shadcn/ui |
| Backend | FastAPI, Python 3.11+, SQLAlchemy 2.0, Pydantic 2.0 |
| Database | PostgreSQL 15+ with pgvector |
| Cache/Queue | Redis 7.0+ |
| Task Queue | Celery 5.3+ |
| MCP | FastMCP 2.0 |
| LLM | LiteLLM (Claude, GPT-4, Ollama) |
| Testing | pytest, Jest, Playwright |
| Container | Docker, Docker Compose |

---
## Appendix B: Port Reference

| Service | Port |
|---------|------|
| Frontend | 3000 |
| Backend | 8000 |
| PostgreSQL | 5432 |
| Redis | 6379 |
| Flower | 5555 |
| LLM MCP | 9001 |
| Git MCP | 9002 |
| KB MCP | 9003 |
| Issues MCP | 9004 |
| FS MCP | 9005 |
| Code MCP | 9006 |
| CI/CD MCP | 9007 |

---

*This document provides the comprehensive architecture overview for Syndarix. For detailed decisions, see the individual ADRs.*
---

**New file:** `docs/architecture/IMPLEMENTATION_ROADMAP.md` (339 lines)
# Syndarix Implementation Roadmap

**Version:** 1.0
**Date:** 2025-12-29
**Status:** Draft

---
## Executive Summary

This roadmap outlines the phased implementation approach for Syndarix, prioritizing foundational infrastructure before advanced features. Each phase builds upon the previous, with clear milestones and deliverables.

---
## Phase 0: Foundation (Weeks 1-2)

**Goal:** Establish development infrastructure and basic platform

### 0.1 Repository Setup

- [x] Fork PragmaStack to Syndarix
- [x] Create spike backlog in Gitea
- [x] Complete architecture documentation
- [ ] Rebrand codebase (Issue #13 - in progress)
- [ ] Configure CI/CD pipelines
- [ ] Set up development environment documentation

### 0.2 Core Infrastructure

- [ ] Configure Redis for cache + pub/sub
- [ ] Set up Celery worker infrastructure
- [ ] Configure pgvector extension
- [ ] Create MCP server directory structure
- [ ] Set up Docker Compose for local development

### Deliverables

- Fully branded Syndarix repository
- Working local development environment
- CI/CD pipeline running tests

---
## Phase 1: Core Platform (Weeks 3-6)

**Goal:** Basic project and agent management without LLM integration

### 1.1 Data Model

- [ ] Create Project entity and CRUD
- [ ] Create AgentType entity and CRUD
- [ ] Create AgentInstance entity and CRUD
- [ ] Create Issue entity with external tracker fields
- [ ] Create Sprint entity and CRUD
- [ ] Database migrations with Alembic

### 1.2 API Layer

- [ ] Project management endpoints
- [ ] Agent type configuration endpoints
- [ ] Agent instance management endpoints
- [ ] Issue CRUD endpoints
- [ ] Sprint management endpoints

### 1.3 Real-time Infrastructure

- [ ] Implement EventBus with Redis Pub/Sub
- [ ] Create SSE endpoint for project events
- [ ] Implement event types enum
- [ ] Add keepalive mechanism
- [ ] Client-side SSE handling

### 1.4 Frontend Foundation

- [ ] Project dashboard page
- [ ] Agent configuration UI
- [ ] Issue list and detail views
- [ ] Real-time activity feed component
- [ ] Basic navigation and layout

### Deliverables

- CRUD operations for all core entities
- Real-time event streaming working
- Basic admin UI for configuration

---
## Phase 2: MCP Integration (Weeks 7-10)

**Goal:** Build MCP servers for external integrations

### 2.1 MCP Client Infrastructure

- [ ] Create MCPClientManager class
- [ ] Implement server registry
- [ ] Add connection management with reconnection
- [ ] Create tool call routing

### 2.2 LLM Gateway MCP (Priority 1)

- [ ] Create FastMCP server structure
- [ ] Implement LiteLLM integration
- [ ] Add model group routing
- [ ] Implement failover chain
- [ ] Add cost tracking callbacks
- [ ] Create token usage logging
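The failover chain above can be sketched in plain Python (illustrative only; in the real gateway LiteLLM's routing would play this role, and the provider callables here are stand-ins):

```python
class ProviderError(Exception):
    """Raised when a provider cannot serve the request."""

def call_with_failover(prompt, providers):
    """Try each provider in order; return the first successful completion.
    `providers` is an ordered list of (name, callable) pairs."""
    errors = {}
    for name, complete in providers:
        try:
            return name, complete(prompt)
        except ProviderError as exc:
            # Record the failure and fall through to the next provider
            errors[name] = str(exc)
    raise ProviderError(f"all providers failed: {errors}")

# Stand-in providers for illustration
def flaky(prompt):
    raise ProviderError("rate limited")

def local(prompt):
    return f"echo: {prompt}"

used, answer = call_with_failover("hello", [("claude", flaky), ("ollama", local)])
print(used, answer)  # ollama echo: hello
```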
### 2.3 Knowledge Base MCP (Priority 2)

- [ ] Create pgvector schema for embeddings
- [ ] Implement document ingestion pipeline
- [ ] Create chunking strategies (code, markdown, text)
- [ ] Implement semantic search
- [ ] Add hybrid search (vector + keyword)
- [ ] Per-project collection isolation
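A naive sketch of the markdown chunking strategy mentioned above (assumed behavior, not the actual pipeline: split on headings, fall back to paragraphs for oversized sections):

```python
def chunk_markdown(text: str, max_chars: int = 400) -> list[str]:
    """Split markdown into chunks: one per heading-delimited section,
    falling back to paragraph splits when a section is too large."""
    sections, current = [], []
    for line in text.splitlines():
        # Start a new section at each heading
        if line.startswith("#") and current:
            sections.append("\n".join(current))
            current = []
        current.append(line)
    if current:
        sections.append("\n".join(current))

    chunks = []
    for section in sections:
        if len(section) <= max_chars:
            chunks.append(section)
        else:
            # Oversized section: emit its paragraphs individually
            for para in section.split("\n\n"):
                if para.strip():
                    chunks.append(para)
    return chunks

doc = "# Intro\nShort intro.\n# Usage\nStep one.\n\nStep two."
print(chunk_markdown(doc))
```

Real chunkers would also track overlap and token counts; this only shows the heading-first splitting idea.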
### 2.4 Git MCP (Priority 3)

- [ ] Create Git operations wrapper
- [ ] Implement clone, commit, push operations
- [ ] Add branch management
- [ ] Create PR operations
- [ ] Add Gitea API integration
- [ ] Implement GitHub/GitLab adapters

### 2.5 Issues MCP (Priority 4)

- [ ] Create issue sync service
- [ ] Implement Gitea issue operations
- [ ] Add GitHub issue adapter
- [ ] Add GitLab issue adapter
- [ ] Implement bi-directional sync
- [ ] Create conflict resolution logic

### Deliverables

- 4 working MCP servers
- LLM calls routed through gateway
- RAG search functional
- Git operations working
- Issue sync with external trackers

---
## Phase 3: Agent Orchestration (Weeks 11-14)

**Goal:** Enable agents to perform autonomous work

### 3.1 Agent Runner

- [ ] Create AgentRunner class
- [ ] Implement context assembly
- [ ] Add memory management (short-term, long-term)
- [ ] Implement action execution
- [ ] Add tool call handling
- [ ] Create agent error handling

### 3.2 Agent Orchestrator

- [ ] Implement spawn_agent method
- [ ] Create terminate_agent method
- [ ] Implement send_message routing
- [ ] Add broadcast functionality
- [ ] Create agent status tracking
- [ ] Implement agent recovery
### 3.3 Inter-Agent Communication

- [ ] Define message format schema
- [ ] Implement message persistence
- [ ] Create message routing logic
- [ ] Add @mention parsing
- [ ] Implement priority queues
- [ ] Add conversation threading
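The @mention parsing item above could start as simply as this (the handle syntax `@name` with letters, digits, hyphens, and underscores is an assumption, not the settled message schema):

```python
import re

# Assumed handle syntax: @ followed by word characters or hyphens
MENTION_RE = re.compile(r"@([A-Za-z0-9_-]+)")

def parse_mentions(body: str) -> list[str]:
    """Extract @mentioned agent handles from a message body."""
    return MENTION_RE.findall(body)

msg = "@architect please review; cc @qa-agent"
print(parse_mentions(msg))  # ['architect', 'qa-agent']
```

Routing logic would then look the returned handles up against active agent instances on the project.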
### 3.4 Background Task Integration

- [ ] Create Celery task wrappers
- [ ] Implement progress reporting
- [ ] Add task chaining for workflows
- [ ] Create agent queue routing
- [ ] Implement task retry logic
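For the retry logic, the delay schedule is the interesting part; this computes the capped exponential backoff shape that Celery's `retry_backoff`/`retry_backoff_max` options produce (without jitter) - a sketch, not the task wrapper itself:

```python
def backoff_schedule(retries: int, base: float = 1.0, cap: float = 60.0) -> list[float]:
    """Exponential backoff delays (seconds) for successive retries,
    doubling each attempt and capped at `cap`."""
    return [min(base * (2 ** attempt), cap) for attempt in range(retries)]

print(backoff_schedule(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
```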
### Deliverables

- Agents can be spawned and communicate
- Agents can call MCP tools
- Background tasks for long operations
- Agent activity visible in real-time

---
## Phase 4: Workflow Engine (Weeks 15-18)

**Goal:** Implement structured workflows for software delivery

### 4.1 State Machine Foundation

- [ ] Create workflow state machine base
- [ ] Implement state persistence
- [ ] Add transition validation
- [ ] Create state history logging
- [ ] Implement compensation patterns
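The state machine base with transition validation and history could start like this (state names and the transition table are illustrative assumptions, not the final workflow model):

```python
class WorkflowStateMachine:
    """Minimal state machine: validates transitions against an
    allow-list and records state history."""
    TRANSITIONS = {
        "planning": {"in_progress"},
        "in_progress": {"review", "blocked"},
        "blocked": {"in_progress"},
        "review": {"done", "in_progress"},
        "done": set(),
    }

    def __init__(self, state: str = "planning"):
        self.state = state
        self.history = [state]

    def transition(self, target: str) -> None:
        # Reject any transition not in the allow-list for the current state
        if target not in self.TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {target}")
        self.state = target
        self.history.append(target)

wf = WorkflowStateMachine()
wf.transition("in_progress")
wf.transition("review")
print(wf.state, wf.history)
```

Persistence and compensation would layer on top: the history list is what gets written to the database, and compensation walks it backwards.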
### 4.2 Core Workflows

- [ ] Requirements Discovery workflow
- [ ] Architecture Spike workflow
- [ ] Sprint Planning workflow
- [ ] Story Implementation workflow
- [ ] Sprint Demo workflow

### 4.3 Approval Gates

- [ ] Create approval checkpoint system
- [ ] Implement approval UI components
- [ ] Add notification triggers
- [ ] Create timeout handling
- [ ] Implement escalation logic
### 4.4 Autonomy Levels

- [ ] Implement FULL_CONTROL mode
- [ ] Implement MILESTONE mode
- [ ] Implement AUTONOMOUS mode
- [ ] Create autonomy configuration UI
- [ ] Add per-action approval overrides
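A compact sketch of how the three autonomy levels could gate approvals (the action names and `MILESTONE_ACTIONS` set are illustrative assumptions):

```python
from enum import Enum

class AutonomyLevel(str, Enum):
    FULL_CONTROL = "full_control"  # approve every action
    MILESTONE = "milestone"        # approve sprint boundaries only
    AUTONOMOUS = "autonomous"      # no routine approvals

# Illustrative set of sprint-boundary actions
MILESTONE_ACTIONS = {"sprint_start", "sprint_demo", "release"}

def requires_approval(level: AutonomyLevel, action: str) -> bool:
    """Decide whether an action must pass a human approval gate."""
    if level is AutonomyLevel.FULL_CONTROL:
        return True
    if level is AutonomyLevel.MILESTONE:
        return action in MILESTONE_ACTIONS
    return False

print(requires_approval(AutonomyLevel.MILESTONE, "commit_code"))  # False
print(requires_approval(AutonomyLevel.MILESTONE, "release"))      # True
```

Per-action overrides would then be a project-level map consulted before this default rule.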
### Deliverables

- Structured workflows executing
- Approval gates working
- Autonomy levels configurable
- Full sprint cycle possible

---
## Phase 5: Advanced Features (Weeks 19-22)

**Goal:** Polish and production readiness

### 5.1 Cost Management

- [ ] Real-time cost tracking dashboard
- [ ] Budget configuration per project
- [ ] Alert threshold system
- [ ] Cost optimization recommendations
- [ ] Historical cost analytics

### 5.2 Audit & Compliance

- [ ] Comprehensive action logging
- [ ] Audit trail viewer UI
- [ ] Export functionality
- [ ] Retention policy implementation
- [ ] Compliance report generation

### 5.3 Human-Agent Collaboration

- [ ] Live activity dashboard
- [ ] Intervention panel (pause, guide, undo)
- [ ] Agent chat interface
- [ ] Context inspector
- [ ] Decision explainer

### 5.4 Additional MCP Servers

- [ ] File System MCP
- [ ] Code Analysis MCP
- [ ] CI/CD MCP

### Deliverables

- Production-ready system
- Full observability
- Cost controls active
- Audit compliance

---
## Phase 6: Polish & Launch (Weeks 23-24)

**Goal:** Production deployment

### 6.1 Performance Optimization

- [ ] Load testing
- [ ] Query optimization
- [ ] Caching optimization
- [ ] Memory profiling

### 6.2 Security Hardening

- [ ] Security audit
- [ ] Penetration testing
- [ ] Secrets management
- [ ] Rate limiting tuning

### 6.3 Documentation

- [ ] User documentation
- [ ] API documentation
- [ ] Deployment guide
- [ ] Runbook

### 6.4 Deployment

- [ ] Production environment setup
- [ ] Monitoring & alerting
- [ ] Backup & recovery
- [ ] Launch checklist

---
## Risk Register

| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| LLM API outages | High | Medium | Multi-provider failover |
| Cost overruns | High | Medium | Budget enforcement, local models |
| Agent hallucinations | High | Medium | Approval gates, code review |
| Performance bottlenecks | Medium | Medium | Load testing, caching |
| Integration failures | Medium | Low | Contract testing, mocks |

---
## Success Metrics

| Metric | Target | Measurement |
|--------|--------|-------------|
| Agent task success rate | >90% | Completed tasks / total tasks |
| Response time (P95) | <2s | API latency |
| Cost per project | <$50/sprint | LLM + compute costs |
| Time to first commit | <1 hour | From requirements to PR |
| Client satisfaction | >4/5 | Post-sprint survey |

---
## Dependencies

```
Phase 0 ─────▶ Phase 1 ─────▶ Phase 2 ─────▶ Phase 3 ─────▶ Phase 4 ─────▶ Phase 5 ─────▶ Phase 6
Foundation     Core Platform  MCP Integration Agent Orch     Workflows      Advanced       Launch
                                              │
                                              │
                                              Depends on:
                                              - LLM Gateway
                                              - Knowledge Base
                                              - Real-time events
```

---
## Resource Requirements

### Development Team

- 1 Backend Engineer (Python/FastAPI)
- 1 Frontend Engineer (React/Next.js)
- 0.5 DevOps Engineer
- 0.25 Product Manager

### Infrastructure

- PostgreSQL (managed or self-hosted)
- Redis (managed or self-hosted)
- Celery workers (2-4 instances)
- MCP servers (7 containers)
- API server (2+ instances)
- Frontend (static hosting or SSR)

### External Services

- Anthropic API (primary LLM)
- OpenAI API (fallback)
- Ollama (local models, optional)
- Gitea/GitHub/GitLab (issue tracking)

---

*This roadmap will be refined as spikes complete and requirements evolve.*
---

**New files:**

- `docs/requirements/.gitkeep`
- `docs/requirements/SYNDARIX_REQUIREMENTS.md` (2405 lines; diff suppressed)
- `docs/spikes/.gitkeep`
- `docs/spikes/SPIKE-001-mcp-integration-pattern.md` (288 lines)
# SPIKE-001: MCP Integration Pattern

**Status:** Completed
**Date:** 2025-12-29
**Author:** Architecture Team
**Related Issue:** #1

---
## Objective

Research the optimal pattern for integrating Model Context Protocol (MCP) servers with the FastAPI backend, focusing on unified singleton servers with project/agent scoping.

## Research Questions

1. What is the recommended MCP SDK for Python/FastAPI?
2. How should we structure unified MCP servers vs per-project servers?
3. What is the best pattern for project/agent scoping in MCP tools?
4. How do we handle authentication between Syndarix and MCP servers?
## Findings

### 1. FastMCP 2.0 - Recommended Framework

**FastMCP** is a high-level, Pythonic framework for building MCP servers that significantly reduces boilerplate compared to the low-level MCP SDK.

**Key Features:**
- Decorator-based tool registration (`@mcp.tool()`)
- Built-in context management for resources and prompts
- Support for server-sent events (SSE) and stdio transports
- Type-safe with Pydantic model support
- Async-first design compatible with FastAPI

**Installation:**
```bash
pip install fastmcp
```

**Basic Example:**
```python
from fastmcp import FastMCP

mcp = FastMCP("syndarix-knowledge-base")

@mcp.tool()
def search_knowledge(
    project_id: str,
    query: str,
    scope: str = "project"
) -> list[dict]:
    """Search the knowledge base with project scoping."""
    results: list[dict] = []  # Placeholder - real implementation queries the KB
    return results

@mcp.resource("project://{project_id}/config")
def get_project_config(project_id: str) -> dict:
    """Get project configuration."""
    config: dict = {}  # Placeholder - real implementation loads project config
    return config
```
### 2. Unified Singleton Pattern (Recommended)

**Decision:** Use unified singleton MCP servers instead of per-project servers.

**Architecture:**
```
┌─────────────────────────────────────────────────────────┐
│                    Syndarix Backend                     │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────┐      │
│  │   Agent 1   │  │   Agent 2   │  │   Agent 3   │      │
│  │ (project A) │  │ (project A) │  │ (project B) │      │
│  └──────┬──────┘  └──────┬──────┘  └──────┬──────┘      │
│         │                │                │             │
│         └────────────────┼────────────────┘             │
│                          │                              │
│                          ▼                              │
│  ┌─────────────────────────────────────────────────┐    │
│  │             MCP Client (Singleton)              │    │
│  │    Maintains connections to all MCP servers     │    │
│  └─────────────────────────────────────────────────┘    │
└──────────────────────────┬──────────────────────────────┘
                           │
           ┌───────────────┼───────────────┐
           │               │               │
           ▼               ▼               ▼
    ┌────────────┐  ┌────────────┐  ┌────────────┐
    │  Git MCP   │  │   KB MCP   │  │  LLM MCP   │
    │ (Singleton)│  │ (Singleton)│  │ (Singleton)│
    └────────────┘  └────────────┘  └────────────┘
```
**Why Singleton:**
- Resource efficiency (one process per MCP type)
- Shared connection pools
- Centralized logging and monitoring
- Simpler deployment (7 services vs N×7)
- Cross-project learning possible (if needed)

**Scoping Pattern:**
```python
from typing import Literal

@mcp.tool()
def search_knowledge(
    project_id: str,  # Required - scopes to project
    agent_id: str,    # Required - identifies calling agent
    query: str,
    scope: Literal["project", "global"] = "project"
) -> SearchResults:
    """
    All tools accept project_id and agent_id for:
    - Access control validation
    - Audit logging
    - Context filtering
    """
    # Validate agent has access to project
    validate_access(agent_id, project_id)

    # Log the access
    log_tool_usage(agent_id, project_id, "search_knowledge")

    # Perform scoped search
    if scope == "project":
        return search_project_kb(project_id, query)
    else:
        return search_global_kb(query)
```
### 3. MCP Server Registry Architecture

```python
# mcp/registry.py
from dataclasses import dataclass
from typing import Dict

@dataclass
class MCPServerConfig:
    name: str
    port: int
    transport: str  # "sse" or "stdio"
    enabled: bool = True

MCP_SERVERS: Dict[str, MCPServerConfig] = {
    "llm_gateway": MCPServerConfig("llm-gateway", 9001, "sse"),
    "git": MCPServerConfig("git-mcp", 9002, "sse"),
    "knowledge_base": MCPServerConfig("kb-mcp", 9003, "sse"),
    "issues": MCPServerConfig("issues-mcp", 9004, "sse"),
    "file_system": MCPServerConfig("fs-mcp", 9005, "sse"),
    "code_analysis": MCPServerConfig("code-mcp", 9006, "sse"),
    "cicd": MCPServerConfig("cicd-mcp", 9007, "sse"),
}
```
### 4. Authentication Pattern

**MCP OAuth 2.0 Integration:**
```python
from fastmcp import FastMCP
from fastmcp.auth import OAuth2Bearer

mcp = FastMCP(
    "syndarix-mcp",
    auth=OAuth2Bearer(
        token_url="https://syndarix.local/oauth/token",
        scopes=["mcp:read", "mcp:write"]
    )
)
```

**Internal Service Auth (Recommended for v1):**
```python
# For internal deployment, use service tokens
@mcp.tool()
def create_issue(
    service_token: str,  # Validated internally
    project_id: str,
    title: str,
    body: str
) -> Issue:
    validate_service_token(service_token)
    # ... implementation
```
### 5. FastAPI Integration Pattern

```python
# app/mcp/client.py
from typing import Any

from mcp import ClientSession
from mcp.client.sse import sse_client

from app.mcp.registry import MCP_SERVERS, MCPServerConfig

class MCPClientManager:
    def __init__(self):
        self._sessions: dict[str, ClientSession] = {}

    async def connect_all(self):
        """Connect to all configured MCP servers."""
        for name, config in MCP_SERVERS.items():
            if config.enabled:
                session = await self._connect_server(config)
                self._sessions[name] = session

    async def _connect_server(self, config: MCPServerConfig) -> ClientSession:
        """Open an SSE transport (via sse_client) to one server; elided here."""
        ...

    async def call_tool(
        self,
        server: str,
        tool_name: str,
        arguments: dict
    ) -> Any:
        """Call a tool on a specific MCP server."""
        session = self._sessions[server]
        result = await session.call_tool(tool_name, arguments)
        return result.content

# Usage in FastAPI
mcp_client = MCPClientManager()

@app.on_event("startup")
async def startup():
    await mcp_client.connect_all()

@app.post("/api/v1/knowledge/search")
async def search_knowledge(request: SearchRequest):
    result = await mcp_client.call_tool(
        "knowledge_base",
        "search_knowledge",
        {
            "project_id": request.project_id,
            "agent_id": request.agent_id,
            "query": request.query
        }
    )
    return result
```
## Recommendations

### Immediate Actions

1. **Use FastMCP 2.0** for all MCP server implementations
2. **Implement unified singleton pattern** with explicit scoping
3. **Use SSE transport** for MCP server connections
4. **Service tokens** for internal auth (v1), OAuth 2.0 for future

### MCP Server Priority

1. **LLM Gateway** - Critical for agent operation
2. **Knowledge Base** - Required for RAG functionality
3. **Git MCP** - Required for code delivery
4. **Issues MCP** - Required for project management
5. **File System** - Required for workspace operations
6. **Code Analysis** - Enhances code quality
7. **CI/CD** - Automates deployments

### Code Organization

```
syndarix/
├── backend/
│   └── app/
│       └── mcp/
│           ├── __init__.py
│           ├── client.py      # MCP client manager
│           ├── registry.py    # Server configurations
│           └── schemas.py     # Tool argument schemas
└── mcp_servers/
    ├── llm_gateway/
    │   ├── __init__.py
    │   ├── server.py
    │   └── tools.py
    ├── knowledge_base/
    ├── git/
    ├── issues/
    ├── file_system/
    ├── code_analysis/
    └── cicd/
```
## References

- [FastMCP Documentation](https://gofastmcp.com)
- [MCP Protocol Specification](https://spec.modelcontextprotocol.io)
- [MCP Python SDK](https://github.com/modelcontextprotocol/python-sdk)

## Decision

**Adopt FastMCP 2.0** with unified singleton servers and explicit project/agent scoping for all MCP integrations.

---

*Spike completed. Findings will inform ADR-001: MCP Integration Architecture.*
---

**New files:**

- `docs/spikes/SPIKE-002-agent-orchestration-pattern.md` (1326 lines; diff suppressed)
- `docs/spikes/SPIKE-003-realtime-updates.md` (338 lines)
# SPIKE-003: Real-time Updates Architecture

**Status:** Completed
**Date:** 2025-12-29
**Author:** Architecture Team
**Related Issue:** #3

---
## Objective

Evaluate WebSocket vs Server-Sent Events (SSE) for real-time updates in Syndarix, focusing on agent activity streams, progress updates, and client notifications.

## Research Questions

1. What are the trade-offs between WebSocket and SSE?
2. Which pattern best fits Syndarix's use cases?
3. How do we handle reconnection and reliability?
4. What is the FastAPI implementation approach?
## Findings

### 1. Use Case Analysis

| Use Case | Direction | Frequency | Latency Req |
|----------|-----------|-----------|-------------|
| Agent activity feed | Server → Client | High | Low |
| Sprint progress | Server → Client | Medium | Low |
| Build status | Server → Client | Low | Medium |
| Client approval requests | Server → Client | Low | High |
| Client messages | Client → Server | Low | Medium |
| Issue updates | Server → Client | Medium | Low |

**Key Insight:** 90%+ of real-time communication is **server-to-client** (unidirectional).
### 2. Technology Comparison

| Feature | Server-Sent Events (SSE) | WebSocket |
|---------|-------------------------|-----------|
| Direction | Unidirectional (server → client) | Bidirectional |
| Protocol | HTTP/1.1 or HTTP/2 | Custom (ws://) |
| Reconnection | Built-in automatic | Manual implementation |
| Connection limits | Limited per domain | Similar limits |
| Browser support | Excellent | Excellent |
| Through proxies | Native HTTP | May require config |
| Complexity | Simple | More complex |
| FastAPI support | Native | Native |
### 3. Recommendation: SSE for Primary, WebSocket for Chat

**SSE (Recommended for 90% of use cases):**
- Agent activity streams
- Progress updates
- Build/pipeline status
- Issue change notifications
- Approval request alerts

**WebSocket (For bidirectional needs):**
- Live chat with agents
- Interactive debugging sessions
- Real-time collaboration (future)
### 4. FastAPI SSE Implementation

```python
# app/api/v1/events.py
import asyncio
import json

from fastapi import APIRouter, Depends, Request
from fastapi.responses import StreamingResponse

from app.core.config import settings        # project-specific settings module
from app.api.deps import get_current_user   # project-specific auth dependency
from app.models.user import User            # project-specific user model
from app.services.events import EventBus

router = APIRouter()

@router.get("/projects/{project_id}/events")
async def project_events(
    project_id: str,
    request: Request,
    current_user: User = Depends(get_current_user)
):
    """Stream real-time events for a project."""

    async def event_generator():
        event_bus = EventBus(settings.REDIS_URL)
        subscriber = await event_bus.subscribe(f"project:{project_id}")

        try:
            while True:
                # Check if client disconnected
                if await request.is_disconnected():
                    break

                # Wait for next event (with timeout for keepalive)
                try:
                    event = await asyncio.wait_for(
                        subscriber.get_event(),
                        timeout=30.0
                    )
                    yield f"event: {event.type}\ndata: {json.dumps(event.__dict__)}\n\n"
                except asyncio.TimeoutError:
                    # Send keepalive comment
                    yield ": keepalive\n\n"
        finally:
            await subscriber.unsubscribe()

    return StreamingResponse(
        event_generator(),
        media_type="text/event-stream",
        headers={
            "Cache-Control": "no-cache",
            "Connection": "keep-alive",
            "X-Accel-Buffering": "no",  # Disable nginx buffering
        }
    )
```
### 5. Event Bus Architecture with Redis

```python
# app/services/events.py
import json
from dataclasses import dataclass

import redis.asyncio as redis

@dataclass
class Event:
    type: str
    data: dict
    project_id: str
    agent_id: str | None = None
    timestamp: float | None = None

class EventBus:
    def __init__(self, redis_url: str):
        self.redis = redis.from_url(redis_url)
        self.pubsub = self.redis.pubsub()

    async def publish(self, channel: str, event: Event):
        """Publish an event to a channel."""
        await self.redis.publish(
            channel,
            json.dumps(event.__dict__)
        )

    async def subscribe(self, channel: str) -> "Subscriber":
        """Subscribe to a channel."""
        await self.pubsub.subscribe(channel)
        return Subscriber(self.pubsub, channel)

class Subscriber:
    def __init__(self, pubsub, channel: str):
        self.pubsub = pubsub
        self.channel = channel

    async def get_event(self) -> Event:
        """Get the next event (blocking)."""
        while True:
            message = await self.pubsub.get_message(
                ignore_subscribe_messages=True,
                timeout=1.0
            )
            if message and message["type"] == "message":
                data = json.loads(message["data"])
                return Event(**data)

    async def unsubscribe(self):
        await self.pubsub.unsubscribe(self.channel)
```
### 6. Client-Side Implementation

```typescript
// frontend/lib/events.ts
// Named ProjectEventStream to avoid shadowing the browser's built-in EventSource
class ProjectEventStream {
  private eventSource: EventSource | null = null;
  private reconnectDelay = 1000;
  private maxReconnectDelay = 30000;

  connect(projectId: string, onEvent: (event: ProjectEvent) => void) {
    const url = `/api/v1/projects/${projectId}/events`;

    this.eventSource = new EventSource(url, {
      withCredentials: true
    });

    this.eventSource.onopen = () => {
      console.log('SSE connected');
      this.reconnectDelay = 1000; // Reset on success
    };

    this.eventSource.addEventListener('agent_activity', (e) => {
      onEvent({ type: 'agent_activity', data: JSON.parse(e.data) });
    });

    this.eventSource.addEventListener('issue_update', (e) => {
      onEvent({ type: 'issue_update', data: JSON.parse(e.data) });
    });

    this.eventSource.addEventListener('approval_required', (e) => {
      onEvent({ type: 'approval_required', data: JSON.parse(e.data) });
    });

    this.eventSource.onerror = () => {
      this.eventSource?.close();
      // Exponential backoff reconnect
      setTimeout(() => this.connect(projectId, onEvent), this.reconnectDelay);
      this.reconnectDelay = Math.min(
        this.reconnectDelay * 2,
        this.maxReconnectDelay
      );
    };
  }

  disconnect() {
    this.eventSource?.close();
    this.eventSource = null;
  }
}
```
### 7. Event Types
|
||||||
|
|
||||||
|
```python
|
||||||
|
# app/schemas/events.py
|
||||||
|
from enum import Enum
|
||||||
|
from pydantic import BaseModel
|
||||||
|
from datetime import datetime
|
||||||
|
|
||||||
|
class EventType(str, Enum):
|
||||||
|
# Agent Events
|
||||||
|
AGENT_STARTED = "agent_started"
|
||||||
|
AGENT_ACTIVITY = "agent_activity"
|
||||||
|
AGENT_COMPLETED = "agent_completed"
|
||||||
|
AGENT_ERROR = "agent_error"
|
||||||
|
|
||||||
|
# Project Events
|
||||||
|
ISSUE_CREATED = "issue_created"
|
||||||
|
ISSUE_UPDATED = "issue_updated"
|
||||||
|
ISSUE_CLOSED = "issue_closed"
|
||||||
|
|
||||||
|
# Git Events
|
||||||
|
BRANCH_CREATED = "branch_created"
|
||||||
|
COMMIT_PUSHED = "commit_pushed"
|
||||||
|
PR_CREATED = "pr_created"
|
||||||
|
PR_MERGED = "pr_merged"
|
||||||
|
|
||||||
|
# Workflow Events
|
||||||
|
APPROVAL_REQUIRED = "approval_required"
|
||||||
|
SPRINT_STARTED = "sprint_started"
|
||||||
|
SPRINT_COMPLETED = "sprint_completed"
|
||||||
|
|
||||||
|
# Pipeline Events
|
||||||
|
PIPELINE_STARTED = "pipeline_started"
|
||||||
|
PIPELINE_COMPLETED = "pipeline_completed"
|
||||||
|
PIPELINE_FAILED = "pipeline_failed"
|
||||||
|
|
||||||
|
class ProjectEvent(BaseModel):
|
||||||
|
id: str
|
||||||
|
type: EventType
|
||||||
|
project_id: str
|
||||||
|
agent_id: str | None
|
||||||
|
data: dict
|
||||||
|
timestamp: datetime
|
||||||
|
```
|
||||||
|
|
||||||
|
### 8. WebSocket for Chat (Secondary)
|
||||||
|
|
||||||
|
```python
|
||||||
|
# app/api/v1/chat.py
|
||||||
|
from fastapi import WebSocket, WebSocketDisconnect
|
||||||
|
from app.services.agent_chat import AgentChatService
|
||||||
|
|
||||||
|
@router.websocket("/projects/{project_id}/agents/{agent_id}/chat")
|
||||||
|
async def agent_chat(
|
||||||
|
websocket: WebSocket,
|
||||||
|
project_id: str,
|
||||||
|
agent_id: str
|
||||||
|
):
|
||||||
|
"""Bidirectional chat with an agent."""
|
||||||
|
await websocket.accept()
|
||||||
|
|
||||||
|
chat_service = AgentChatService(project_id, agent_id)
|
||||||
|
|
||||||
|
try:
|
||||||
|
while True:
|
||||||
|
# Receive message from client
|
||||||
|
message = await websocket.receive_json()
|
||||||
|
|
||||||
|
# Stream response from agent
|
||||||
|
async for chunk in chat_service.get_response(message):
|
||||||
|
await websocket.send_json({
|
||||||
|
"type": "chunk",
|
||||||
|
"content": chunk
|
||||||
|
})
|
||||||
|
|
||||||
|
await websocket.send_json({"type": "done"})
|
||||||
|
except WebSocketDisconnect:
|
||||||
|
pass
|
||||||
|
```
|
||||||
|
|
||||||
|
## Performance Considerations
|
||||||
|
|
||||||
|
### Connection Limits
|
||||||
|
- Browser limit: ~6 connections per domain (HTTP/1.1)
|
||||||
|
- Recommendation: Use single SSE connection per project, multiplex events
|
||||||
|
|
||||||
|
### Scalability
|
||||||
|
- Redis Pub/Sub handles cross-instance event distribution
|
||||||
|
- Consider Redis Streams for message persistence (audit/replay)
|
||||||
|
|
||||||
|
### Keepalive
|
||||||
|
- Send comment every 30 seconds to prevent timeout
|
||||||
|
- Client reconnects automatically on disconnect
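The keepalive rule above can be sketched as a small SSE generator; the names here are illustrative, not from the codebase:

```python
# Minimal SSE keepalive sketch: when no event arrives within the keepalive
# window, emit an SSE comment frame. EventSource clients ignore comment lines,
# but they keep proxies and load balancers from closing the idle connection.
import asyncio

KEEPALIVE = ": keepalive\n\n"  # SSE comment frame


def sse_frame(event_type: str, data: str) -> str:
    """Format a named SSE event frame."""
    return f"event: {event_type}\ndata: {data}\n\n"


async def event_stream(queue: asyncio.Queue, keepalive_seconds: float = 30.0):
    """Yield SSE frames from a queue, emitting keepalive comments when idle."""
    while True:
        try:
            event_type, data = await asyncio.wait_for(
                queue.get(), timeout=keepalive_seconds
            )
        except asyncio.TimeoutError:
            yield KEEPALIVE
            continue
        yield sse_frame(event_type, data)
```

Wired into FastAPI, a generator like this would back the `StreamingResponse` with `media_type="text/event-stream"`.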

## Recommendations

1. **Use SSE for all server-to-client events** (simpler, auto-reconnect)
2. **Use WebSocket only for interactive chat** with agents
3. **Redis Pub/Sub for event distribution** across instances
4. **Single SSE connection per project** with event multiplexing
5. **Exponential backoff** for client reconnection
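The backoff policy in recommendation 5 (start at 1 s, double on each failure, cap at 30 s, matching the client above) produces this delay schedule:

```python
# Reconnect delay schedule: start at 1000 ms, double on every failed
# attempt, and cap at 30000 ms, mirroring the client-side sketch.
def backoff_delays(initial_ms: int = 1000, max_ms: int = 30000, attempts: int = 6) -> list[int]:
    delays, delay = [], initial_ms
    for _ in range(attempts):
        delays.append(delay)
        delay = min(delay * 2, max_ms)
    return delays
```

In practice a random jitter term is often added so that many clients do not reconnect in lockstep after an outage.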

## References

- [FastAPI SSE](https://fastapi.tiangolo.com/advanced/custom-response/#streamingresponse)
- [MDN EventSource](https://developer.mozilla.org/en-US/docs/Web/API/EventSource)
- [Redis Pub/Sub](https://redis.io/topics/pubsub)

## Decision

**Adopt SSE as the primary real-time transport** with WebSocket reserved for bidirectional chat. Use Redis Pub/Sub for event distribution.

---

*Spike completed. Findings will inform ADR-002: Real-time Communication Architecture.*

*New file: docs/spikes/SPIKE-004-celery-redis-integration.md (420 lines)*

# SPIKE-004: Celery + Redis Integration

**Status:** Completed
**Date:** 2025-12-29
**Author:** Architecture Team
**Related Issue:** #4

---

## Objective

Research best practices for integrating Celery with FastAPI for background task processing, focusing on agent orchestration, long-running workflows, and task monitoring.

## Research Questions

1. How to properly integrate Celery with async FastAPI?
2. What is the optimal task queue architecture for Syndarix?
3. How to handle long-running agent tasks?
4. What monitoring and visibility patterns should we use?

## Findings

### 1. Celery + FastAPI Integration Pattern

**Challenge:** Celery is synchronous, FastAPI is async.

**Solution:** Use `celery.result.AsyncResult` with async polling or callbacks.

```python
# app/core/celery.py
from celery import Celery

from app.core.config import settings

celery_app = Celery(
    "syndarix",
    broker=settings.REDIS_URL,
    backend=settings.REDIS_URL,
    include=[
        "app.tasks.agent_tasks",
        "app.tasks.git_tasks",
        "app.tasks.sync_tasks",
    ],
)

celery_app.conf.update(
    task_serializer="json",
    accept_content=["json"],
    result_serializer="json",
    timezone="UTC",
    enable_utc=True,
    task_track_started=True,
    task_time_limit=3600,  # 1 hour max
    task_soft_time_limit=3300,  # 55 min soft limit
    worker_prefetch_multiplier=1,  # One task at a time for LLM tasks
    task_acks_late=True,  # Acknowledge after completion
    task_reject_on_worker_lost=True,  # Retry if worker dies
)
```

### 2. Task Queue Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                        FastAPI Backend                          │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐       │
│   │  API Layer  │     │  Services   │     │   Events    │       │
│   └──────┬──────┘     └──────┬──────┘     └──────┬──────┘       │
│          │                   │                   │              │
│          └───────────────────┼───────────────────┘              │
│                              │                                  │
│                              ▼                                  │
│             ┌────────────────────────────────┐                  │
│             │        Task Dispatcher         │                  │
│             │      (Celery send_task)        │                  │
│             └────────────────┬───────────────┘                  │
└──────────────────────────────┼──────────────────────────────────┘
                               │
                               ▼
┌──────────────────────────────────────────────────────────────────┐
│                    Redis (Broker + Backend)                      │
│   ┌──────────────┐    ┌──────────────┐    ┌──────────────┐       │
│   │ agent_queue  │    │  git_queue   │    │  sync_queue  │       │
│   │  (priority)  │    │              │    │              │       │
│   └──────────────┘    └──────────────┘    └──────────────┘       │
└──────────────────────────────┬───────────────────────────────────┘
                               │
               ┌───────────────┼───────────────┐
               │               │               │
               ▼               ▼               ▼
        ┌────────────┐  ┌────────────┐  ┌────────────┐
        │   Worker   │  │   Worker   │  │   Worker   │
        │  (agents)  │  │   (git)    │  │   (sync)   │
        │ prefetch=1 │  │ prefetch=4 │  │ prefetch=4 │
        └────────────┘  └────────────┘  └────────────┘
```
|
||||||
|
|
||||||
|
### 3. Queue Configuration
|
||||||
|
|
||||||
|
```python
|
||||||
|
# app/core/celery.py
|
||||||
|
celery_app.conf.task_queues = [
|
||||||
|
Queue("agent_queue", routing_key="agent.#"),
|
||||||
|
Queue("git_queue", routing_key="git.#"),
|
||||||
|
Queue("sync_queue", routing_key="sync.#"),
|
||||||
|
Queue("cicd_queue", routing_key="cicd.#"),
|
||||||
|
]
|
||||||
|
|
||||||
|
celery_app.conf.task_routes = {
|
||||||
|
"app.tasks.agent_tasks.*": {"queue": "agent_queue"},
|
||||||
|
"app.tasks.git_tasks.*": {"queue": "git_queue"},
|
||||||
|
"app.tasks.sync_tasks.*": {"queue": "sync_queue"},
|
||||||
|
"app.tasks.cicd_tasks.*": {"queue": "cicd_queue"},
|
||||||
|
}
|
||||||
|
```

### 4. Agent Task Implementation

```python
# app/tasks/agent_tasks.py
from celery import Task

from app.core.celery import celery_app
from app.services.agent_runner import AgentRunner
from app.services.events import EventBus


class AgentTask(Task):
    """Base class for agent tasks with retry and monitoring."""

    autoretry_for = (ConnectionError, TimeoutError)
    retry_backoff = True
    retry_backoff_max = 600
    retry_jitter = True
    max_retries = 3

    def on_failure(self, exc, task_id, args, kwargs, einfo):
        """Handle task failure."""
        project_id = kwargs.get("project_id")
        agent_id = kwargs.get("agent_id")
        # Workers are synchronous: this assumes a sync publish variant of EventBus
        EventBus().publish(f"project:{project_id}", {
            "type": "agent_error",
            "agent_id": agent_id,
            "error": str(exc),
        })


@celery_app.task(bind=True, base=AgentTask)
def run_agent_action(
    self,
    agent_id: str,
    project_id: str,
    action: str,
    context: dict,
) -> dict:
    """
    Execute an agent action as a background task.

    Args:
        agent_id: The agent instance ID
        project_id: The project context
        action: The action to perform
        context: Action-specific context

    Returns:
        Action result dictionary
    """
    runner = AgentRunner(agent_id, project_id)

    # Update task state for monitoring
    self.update_state(
        state="RUNNING",
        meta={"agent_id": agent_id, "action": action},
    )

    # Publish start event
    EventBus().publish(f"project:{project_id}", {
        "type": "agent_started",
        "agent_id": agent_id,
        "action": action,
        "task_id": self.request.id,
    })

    try:
        result = runner.execute(action, context)

        # Publish completion event
        EventBus().publish(f"project:{project_id}", {
            "type": "agent_completed",
            "agent_id": agent_id,
            "action": action,
            "result_summary": result.get("summary"),
        })

        return result
    except Exception:
        # Will trigger on_failure
        raise
```

### 5. Long-Running Task Patterns

**Progress Reporting:**
```python
@celery_app.task(bind=True)
def implement_story(self, story_id: str, agent_id: str, project_id: str):
    """Implement a user story with progress reporting."""

    steps = [
        ("analyzing", "Analyzing requirements"),
        ("designing", "Designing solution"),
        ("implementing", "Writing code"),
        ("testing", "Running tests"),
        ("documenting", "Updating documentation"),
    ]

    for i, (state, description) in enumerate(steps):
        self.update_state(
            state="PROGRESS",
            meta={
                "current": i + 1,
                "total": len(steps),
                "status": description,
            },
        )

        # Do the actual work
        execute_step(state, story_id, agent_id)

        # Publish progress event
        EventBus().publish(f"project:{project_id}", {
            "type": "agent_progress",
            "agent_id": agent_id,
            "step": i + 1,
            "total": len(steps),
            "description": description,
        })

    return {"status": "completed", "story_id": story_id}
```

**Task Chaining:**
```python
from celery import chain, group

# Sequential workflow
workflow = chain(
    analyze_requirements.s(story_id),
    design_solution.s(),
    implement_code.s(),
    run_tests.s(),
    create_pr.s(),
)

# Parallel execution
parallel_tests = group(
    run_unit_tests.s(project_id),
    run_integration_tests.s(project_id),
    run_linting.s(project_id),
)
```

### 6. FastAPI Integration

```python
# app/api/v1/agents.py
from celery.result import AsyncResult
from fastapi import APIRouter

from app.tasks.agent_tasks import run_agent_action

router = APIRouter()


@router.post("/agents/{agent_id}/actions")
async def trigger_agent_action(
    agent_id: str,
    action: AgentActionRequest,
):
    """Trigger an agent action as a background task."""

    # Dispatch to Celery (no FastAPI BackgroundTasks needed)
    task = run_agent_action.delay(
        agent_id=agent_id,
        project_id=action.project_id,
        action=action.action,
        context=action.context,
    )

    return {
        "task_id": task.id,
        "status": "queued",
    }


@router.get("/tasks/{task_id}")
async def get_task_status(task_id: str):
    """Get the status of a background task."""

    result = AsyncResult(task_id)

    if result.state == "PENDING":
        return {"status": "pending"}
    elif result.state == "RUNNING":
        return {"status": "running", **result.info}
    elif result.state == "PROGRESS":
        return {"status": "progress", **result.info}
    elif result.state == "SUCCESS":
        return {"status": "completed", "result": result.result}
    elif result.state == "FAILURE":
        return {"status": "failed", "error": str(result.result)}

    return {"status": result.state}
```
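The branch ladder in `get_task_status` is a pure mapping from Celery state to response payload; factored into a hypothetical helper it can be unit-tested without a broker:

```python
# Hypothetical pure mapping of a Celery task state (plus its info/result)
# into the API response shape used by get_task_status.
def task_status_payload(state: str, info=None, result=None) -> dict:
    if state == "PENDING":
        return {"status": "pending"}
    if state in ("RUNNING", "PROGRESS"):
        return {"status": state.lower(), **(info or {})}
    if state == "SUCCESS":
        return {"status": "completed", "result": result}
    if state == "FAILURE":
        # Celery stores the raised exception as the result on failure
        return {"status": "failed", "error": str(result)}
    return {"status": state}
```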

### 7. Worker Configuration

```bash
# Run different workers for different queues

# Agent worker (4 processes, each fetching one LLM task at a time)
celery -A app.core.celery worker \
    -Q agent_queue \
    -c 4 \
    --prefetch-multiplier=1 \
    -n agent_worker@%h

# Git worker (can handle multiple concurrent tasks)
celery -A app.core.celery worker \
    -Q git_queue \
    -c 8 \
    --prefetch-multiplier=4 \
    -n git_worker@%h

# Sync worker
celery -A app.core.celery worker \
    -Q sync_queue \
    -c 4 \
    --prefetch-multiplier=4 \
    -n sync_worker@%h
```

### 8. Monitoring with Flower

```yaml
# docker-compose.yml
services:
  flower:
    image: mher/flower:latest
    command: celery flower --broker=redis://redis:6379/0
    ports:
      - "5555:5555"
    environment:
      - CELERY_BROKER_URL=redis://redis:6379/0
      - FLOWER_BASIC_AUTH=admin:password
```

### 9. Task Scheduling (Celery Beat)

```python
# app/core/celery.py
from celery.schedules import crontab

celery_app.conf.beat_schedule = {
    # Sync issues every minute
    "sync-external-issues": {
        "task": "app.tasks.sync_tasks.sync_all_issues",
        "schedule": 60.0,
    },
    # Health check every 5 minutes
    "agent-health-check": {
        "task": "app.tasks.agent_tasks.health_check_all_agents",
        "schedule": 300.0,
    },
    # Daily cleanup at midnight
    "cleanup-old-tasks": {
        "task": "app.tasks.maintenance.cleanup_old_tasks",
        "schedule": crontab(hour=0, minute=0),
    },
}
```

## Best Practices

1. **One task per LLM call** - Avoid rate limiting issues
2. **Progress reporting** - Update state for long-running tasks
3. **Idempotent tasks** - Handle retries gracefully
4. **Separate queues** - Isolate slow tasks from fast ones
5. **Task result expiry** - Set `result_expires` to avoid Redis bloat
6. **Soft time limits** - Allow graceful shutdown before hard kill
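Best practice 3 (idempotent tasks) can be sketched with a result-keyed guard; the dict below stands in for a Redis or database result store, and all names are illustrative:

```python
# Idempotency sketch: derive a stable key from the task name and arguments,
# and skip work whose result is already recorded, so a Celery redelivery
# (acks_late + worker loss) does not repeat side effects.
import hashlib
import json

_results: dict[str, dict] = {}  # stand-in for a Redis/DB result store


def idempotency_key(task_name: str, kwargs: dict) -> str:
    payload = json.dumps({"task": task_name, "kwargs": kwargs}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()


def run_once(task_name: str, kwargs: dict, work) -> dict:
    key = idempotency_key(task_name, kwargs)
    if key not in _results:
        _results[key] = work(**kwargs)
    return _results[key]
```

Canonical JSON (`sort_keys=True`) makes the key stable under dict ordering, which matters when retries rebuild the kwargs.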

## Recommendations

1. **Use Celery for all long-running operations**
   - Agent actions
   - Git operations
   - External sync
   - CI/CD triggers

2. **Use Redis as both broker and backend**
   - Simplifies infrastructure
   - Fast enough for our scale

3. **Configure separate queues**
   - `agent_queue` with prefetch=1
   - `git_queue` with prefetch=4
   - `sync_queue` with prefetch=4

4. **Implement proper monitoring**
   - Flower for web UI
   - Prometheus metrics export
   - Dead letter queue for failed tasks

## References

- [Celery Documentation](https://docs.celeryq.dev/)
- [FastAPI Background Tasks](https://fastapi.tiangolo.com/tutorial/background-tasks/)
- [Celery Best Practices](https://docs.celeryq.dev/en/stable/userguide/tasks.html#tips-and-best-practices)

## Decision

**Adopt Celery + Redis** for all background task processing with queue-based routing and progress reporting via Redis Pub/Sub events.

---

*Spike completed. Findings will inform ADR-003: Background Task Architecture.*

*New file: docs/spikes/SPIKE-005-llm-provider-abstraction.md (516 lines)*

# SPIKE-005: LLM Provider Abstraction

**Status:** Completed
**Date:** 2025-12-29
**Author:** Architecture Team
**Related Issue:** #5

---

## Objective

Research the best approach for unified LLM provider abstraction with support for multiple providers, automatic failover, and cost tracking.

## Research Questions

1. What libraries exist for unified LLM access?
2. How to implement automatic failover between providers?
3. How to track token usage and costs per agent/project?
4. What caching strategies can reduce API costs?

## Findings

### 1. LiteLLM - Recommended Solution

**LiteLLM** provides a unified interface to 100+ LLM providers using the OpenAI SDK format.

**Key Features:**
- Unified API across providers (Anthropic, OpenAI, local, etc.)
- Built-in failover and load balancing
- Token counting and cost tracking
- Streaming support
- Async support
- Caching with Redis

**Installation:**
```bash
pip install litellm
```

### 2. Basic Usage

```python
import os

import litellm
from litellm import completion, acompletion

# Configure providers
litellm.api_key = os.getenv("ANTHROPIC_API_KEY")
litellm.set_verbose = True  # For debugging

# Synchronous call
response = completion(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Hello!"}],
)

# Async call (for FastAPI)
response = await acompletion(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

### 3. Model Naming Convention

LiteLLM routes on provider-prefixed model names (well-known Anthropic and OpenAI models can also be passed unprefixed):

| Provider | Model Format |
|----------|--------------|
| Anthropic | `claude-3-5-sonnet-20241022` |
| OpenAI | `gpt-4-turbo` |
| Azure OpenAI | `azure/deployment-name` |
| Ollama | `ollama/llama3` |
| Together AI | `together_ai/togethercomputer/llama-2-70b` |

### 4. Failover Configuration

```python
import os

from litellm import Router

# Define model list with fallbacks
model_list = [
    {
        "model_name": "primary-agent",
        "litellm_params": {
            "model": "claude-3-5-sonnet-20241022",
            "api_key": os.getenv("ANTHROPIC_API_KEY"),
        },
        "model_info": {"id": 1},
    },
    {
        "model_name": "primary-agent",  # Same name = fallback
        "litellm_params": {
            "model": "gpt-4-turbo",
            "api_key": os.getenv("OPENAI_API_KEY"),
        },
        "model_info": {"id": 2},
    },
    {
        "model_name": "primary-agent",
        "litellm_params": {
            "model": "ollama/llama3",
            "api_base": "http://localhost:11434",
        },
        "model_info": {"id": 3},
    },
]

# Initialize router with failover
router = Router(
    model_list=model_list,
    fallbacks=[
        {"primary-agent": ["primary-agent"]},  # Try all models with same name
    ],
    routing_strategy="simple-shuffle",  # or "latency-based-routing"
    num_retries=3,
    retry_after=5,  # seconds
    timeout=60,
)

# Use router
response = await router.acompletion(
    model="primary-agent",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

### 5. Syndarix LLM Gateway Architecture

```python
# app/services/llm_gateway.py
from litellm import Router

from app.core.config import settings
from app.services.cost_tracker import CostTracker
from app.services.events import EventBus


class LLMGateway:
    """Unified LLM gateway with failover and cost tracking."""

    def __init__(self):
        self.router = self._build_router()
        self.cost_tracker = CostTracker()
        self.event_bus = EventBus()

    def _build_router(self) -> Router:
        """Build LiteLLM router from configuration."""
        model_list = []

        # Add Anthropic models
        if settings.ANTHROPIC_API_KEY:
            model_list.extend([
                {
                    "model_name": "high-reasoning",
                    "litellm_params": {
                        "model": "claude-3-5-sonnet-20241022",
                        "api_key": settings.ANTHROPIC_API_KEY,
                    },
                },
                {
                    "model_name": "fast-response",
                    "litellm_params": {
                        "model": "claude-3-haiku-20240307",
                        "api_key": settings.ANTHROPIC_API_KEY,
                    },
                },
            ])

        # Add OpenAI fallbacks
        if settings.OPENAI_API_KEY:
            model_list.extend([
                {
                    "model_name": "high-reasoning",
                    "litellm_params": {
                        "model": "gpt-4-turbo",
                        "api_key": settings.OPENAI_API_KEY,
                    },
                },
                {
                    "model_name": "fast-response",
                    "litellm_params": {
                        "model": "gpt-4o-mini",
                        "api_key": settings.OPENAI_API_KEY,
                    },
                },
            ])

        # Add local models (Ollama)
        if settings.OLLAMA_URL:
            model_list.append({
                "model_name": "local-fallback",
                "litellm_params": {
                    "model": "ollama/llama3",
                    "api_base": settings.OLLAMA_URL,
                },
            })

        return Router(
            model_list=model_list,
            fallbacks=[
                {"high-reasoning": ["high-reasoning", "local-fallback"]},
                {"fast-response": ["fast-response", "local-fallback"]},
            ],
            routing_strategy="latency-based-routing",
            num_retries=3,
            timeout=120,
        )

    async def complete(
        self,
        agent_id: str,
        project_id: str,
        messages: list[dict],
        model_preference: str = "high-reasoning",
        stream: bool = False,
        **kwargs,
    ) -> dict:
        """
        Generate a completion with automatic failover and cost tracking.

        Args:
            agent_id: The calling agent's ID
            project_id: The project context
            messages: Chat messages
            model_preference: "high-reasoning" or "fast-response"
            stream: Whether to stream the response
            **kwargs: Additional LiteLLM parameters

        Returns:
            Completion response dictionary
        """
        try:
            if stream:
                return self._stream_completion(
                    agent_id, project_id, messages, model_preference, **kwargs
                )

            response = await self.router.acompletion(
                model=model_preference,
                messages=messages,
                **kwargs,
            )

            # Track usage
            await self._track_usage(
                agent_id=agent_id,
                project_id=project_id,
                model=response.model,
                usage=response.usage,
            )

            return {
                "content": response.choices[0].message.content,
                "model": response.model,
                "usage": {
                    "prompt_tokens": response.usage.prompt_tokens,
                    "completion_tokens": response.usage.completion_tokens,
                    "total_tokens": response.usage.total_tokens,
                },
            }

        except Exception as e:
            # Publish error event
            await self.event_bus.publish(f"project:{project_id}", {
                "type": "llm_error",
                "agent_id": agent_id,
                "error": str(e),
            })
            raise

    async def _stream_completion(
        self,
        agent_id: str,
        project_id: str,
        messages: list[dict],
        model_preference: str,
        **kwargs,
    ):
        """Stream a completion response."""
        response = await self.router.acompletion(
            model=model_preference,
            messages=messages,
            stream=True,
            **kwargs,
        )

        async for chunk in response:
            if chunk.choices[0].delta.content:
                yield chunk.choices[0].delta.content

    async def _track_usage(
        self,
        agent_id: str,
        project_id: str,
        model: str,
        usage,
    ):
        """Track token usage and costs."""
        await self.cost_tracker.record_usage(
            agent_id=agent_id,
            project_id=project_id,
            model=model,
            prompt_tokens=usage.prompt_tokens,
            completion_tokens=usage.completion_tokens,
        )
```

### 6. Cost Tracking

```python
# app/services/cost_tracker.py
from datetime import datetime

from sqlalchemy.ext.asyncio import AsyncSession

from app.models.usage import TokenUsage

# Cost per 1M tokens in USD (approximate)
MODEL_COSTS = {
    "claude-3-5-sonnet-20241022": {"input": 3.00, "output": 15.00},
    "claude-3-haiku-20240307": {"input": 0.25, "output": 1.25},
    "gpt-4-turbo": {"input": 10.00, "output": 30.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
    "ollama/llama3": {"input": 0.00, "output": 0.00},  # Local
}


class CostTracker:
    def __init__(self, db: AsyncSession):
        self.db = db

    async def record_usage(
        self,
        agent_id: str,
        project_id: str,
        model: str,
        prompt_tokens: int,
        completion_tokens: int,
    ):
        """Record token usage and calculate cost."""
        costs = MODEL_COSTS.get(model, {"input": 0, "output": 0})

        input_cost = (prompt_tokens / 1_000_000) * costs["input"]
        output_cost = (completion_tokens / 1_000_000) * costs["output"]
        total_cost = input_cost + output_cost

        usage = TokenUsage(
            agent_id=agent_id,
            project_id=project_id,
            model=model,
            prompt_tokens=prompt_tokens,
            completion_tokens=completion_tokens,
            total_tokens=prompt_tokens + completion_tokens,
            cost_usd=total_cost,
            timestamp=datetime.utcnow(),
        )

        self.db.add(usage)
        await self.db.commit()

    async def get_project_usage(
        self,
        project_id: str,
        start_date: datetime | None = None,
        end_date: datetime | None = None,
    ) -> dict:
        """Get usage summary for a project."""
        # Query aggregated usage
        ...

    async def check_budget(
        self,
        project_id: str,
        budget_limit: float,
    ) -> bool:
        """Check if project is within budget."""
        usage = await self.get_project_usage(project_id)
        return usage["total_cost_usd"] < budget_limit
```
|
||||||
|
|
||||||
|
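The per-call arithmetic inside `record_usage` can be checked in isolation. A minimal standalone sketch (the helper name `estimate_cost` is illustrative, not part of the service):

```python
# Standalone sketch of the cost arithmetic used by CostTracker.record_usage.
# MODEL_COSTS mirrors the per-1M-token price table above.
MODEL_COSTS = {
    "claude-3-5-sonnet-20241022": {"input": 3.00, "output": 15.00},
}


def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one call; unknown models cost 0 (e.g. local)."""
    costs = MODEL_COSTS.get(model, {"input": 0.0, "output": 0.0})
    input_cost = (prompt_tokens / 1_000_000) * costs["input"]
    output_cost = (completion_tokens / 1_000_000) * costs["output"]
    return input_cost + output_cost


# 1M prompt tokens at $3.00/1M plus 100k completion tokens at $15.00/1M ≈ $4.50
cost = estimate_cost("claude-3-5-sonnet-20241022", 1_000_000, 100_000)
```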
### 7. Caching with Redis

```python
import litellm
from litellm import Cache

# Configure Redis cache
litellm.cache = Cache(
    type="redis",
    host=settings.REDIS_HOST,
    port=settings.REDIS_PORT,
    password=settings.REDIS_PASSWORD,
)

# Enable caching
litellm.enable_cache()

# Cached completions (same input = cached response)
response = await litellm.acompletion(
    model="claude-3-5-sonnet-20241022",
    messages=[{"role": "user", "content": "What is 2+2?"}],
    cache={"ttl": 3600},  # Cache for 1 hour
)
```

### 8. Agent Type Model Mapping

```python
# app/models/agent_type.py
from enum import Enum

from sqlalchemy import Column, Enum as SQLEnum, Float, Integer, String, Text
from sqlalchemy.dialects.postgresql import UUID

from app.db.base import Base


class ModelPreference(str, Enum):
    HIGH_REASONING = "high-reasoning"
    FAST_RESPONSE = "fast-response"
    COST_OPTIMIZED = "cost-optimized"


class AgentType(Base):
    __tablename__ = "agent_types"

    id = Column(UUID(as_uuid=True), primary_key=True)
    name = Column(String(50), unique=True)
    role = Column(String(50))

    # LLM configuration
    model_preference = Column(
        SQLEnum(ModelPreference),
        default=ModelPreference.HIGH_REASONING,
    )
    max_tokens = Column(Integer, default=4096)
    temperature = Column(Float, default=0.7)

    # System prompt
    system_prompt = Column(Text)


# Mapping agent types to models
AGENT_MODEL_MAPPING = {
    "Product Owner": ModelPreference.HIGH_REASONING,
    "Project Manager": ModelPreference.FAST_RESPONSE,
    "Business Analyst": ModelPreference.HIGH_REASONING,
    "Software Architect": ModelPreference.HIGH_REASONING,
    "Software Engineer": ModelPreference.HIGH_REASONING,
    "UI/UX Designer": ModelPreference.HIGH_REASONING,
    "QA Engineer": ModelPreference.FAST_RESPONSE,
    "DevOps Engineer": ModelPreference.FAST_RESPONSE,
    "AI/ML Engineer": ModelPreference.HIGH_REASONING,
    "Security Expert": ModelPreference.HIGH_REASONING,
}
```

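Resolving an agent role to a concrete model is then a two-step lookup: role → preference group → model. A minimal sketch with plain strings in place of the SQLAlchemy enum; the group-to-model defaults here are assumptions for illustration, not part of the spike:

```python
# Hypothetical default model per preference group (illustrative values).
GROUP_DEFAULT_MODEL = {
    "high-reasoning": "claude-3-5-sonnet-20241022",
    "fast-response": "claude-3-haiku-20240307",
    "cost-optimized": "gpt-4o-mini",
}

# Subset of AGENT_MODEL_MAPPING above, with enum values as plain strings.
AGENT_MODEL_MAPPING = {
    "Product Owner": "high-reasoning",
    "QA Engineer": "fast-response",
}


def resolve_model(role: str) -> str:
    """Map an agent role to a concrete model via its preference group."""
    preference = AGENT_MODEL_MAPPING.get(role, "high-reasoning")
    return GROUP_DEFAULT_MODEL[preference]
```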
## Rate Limiting Strategy

```python
import asyncio

from litellm import Router

# Configure rate limits per model
router = Router(
    model_list=model_list,
    redis_host=settings.REDIS_HOST,
    redis_port=settings.REDIS_PORT,
    routing_strategy="usage-based-routing",  # Route based on rate limits
)


# Custom rate limiter: each acquired slot is released 60 seconds later,
# giving a rolling requests-per-minute cap.
class RateLimiter:
    def __init__(self, requests_per_minute: int = 60):
        self.rpm = requests_per_minute
        self.semaphore = asyncio.Semaphore(requests_per_minute)
        self._pending: set[asyncio.Task] = set()

    async def acquire(self) -> None:
        await self.semaphore.acquire()
        # Release after 60 seconds. Keep a reference to the task so it
        # is not garbage-collected before it runs.
        task = asyncio.create_task(self._release_after(60))
        self._pending.add(task)
        task.add_done_callback(self._pending.discard)

    async def _release_after(self, seconds: int) -> None:
        await asyncio.sleep(seconds)
        self.semaphore.release()
```

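The semaphore approach approximates a rolling window but needs background release tasks. An alternative with no tasks at all is a token bucket that refills continuously; a minimal synchronous sketch (class name and API are illustrative):

```python
import time


class TokenBucket:
    """Allow at most `rate` requests per `per` seconds, refilling continuously."""

    def __init__(self, rate: int, per: float = 60.0):
        self.capacity = rate
        self.tokens = float(rate)          # start full
        self.fill_rate = rate / per        # tokens added per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise deny the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.fill_rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A caller would check `bucket.allow()` before dispatching an LLM request and back off (or queue) when it returns `False`.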
## Recommendations

1. **Use LiteLLM as the unified abstraction layer**
   - Simplifies multi-provider support
   - Built-in failover and retry
   - Consistent API across providers

2. **Configure model groups by use case**
   - `high-reasoning`: Complex analysis, architecture decisions
   - `fast-response`: Quick tasks, simple queries
   - `cost-optimized`: Non-critical, high-volume tasks

3. **Implement automatic failover chain**
   - Primary: Claude 3.5 Sonnet
   - Fallback 1: GPT-4 Turbo
   - Fallback 2: Local Llama 3 (if available)

4. **Track all usage and costs**
   - Per agent, per project
   - Set budget alerts
   - Generate usage reports

5. **Cache frequently repeated queries**
   - Use Redis-backed cache
   - Cache embeddings for RAG
   - Cache deterministic transformations

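The failover chain in recommendation 3 can be expressed declaratively in the shape LiteLLM's Router accepts for its `fallbacks` parameter. A sketch of the data shape only (whether `ollama/llama3` is reachable depends on the deployment):

```python
# LiteLLM-style fallback chain: if the primary model errors, each fallback
# is tried in order. The mapping shape follows the Router `fallbacks` param.
fallbacks = [
    {"claude-3-5-sonnet-20241022": ["gpt-4-turbo", "ollama/llama3"]},
]


def fallback_chain(model: str, fallbacks: list[dict]) -> list[str]:
    """Flatten the configured chain for a given primary model (helper is illustrative)."""
    for entry in fallbacks:
        if model in entry:
            return [model, *entry[model]]
    return [model]
```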
## References

- [LiteLLM Documentation](https://docs.litellm.ai/)
- [LiteLLM Router](https://docs.litellm.ai/docs/routing)
- [Anthropic Rate Limits](https://docs.anthropic.com/en/api/rate-limits)

## Decision

**Adopt LiteLLM** as the unified LLM abstraction layer with automatic failover, usage-based routing, and Redis-backed caching.

---

*Spike completed. Findings will inform ADR-004: LLM Provider Integration Architecture.*

New files (diff suppressed because it is too large):

- `docs/spikes/SPIKE-006-knowledge-base-pgvector.md` (1259 lines)
- `docs/spikes/SPIKE-007-agent-communication-protocol.md` (1496 lines)
- `docs/spikes/SPIKE-008-workflow-state-machine.md` (1513 lines)
- `docs/spikes/SPIKE-009-issue-synchronization.md` (1494 lines)
- `docs/spikes/SPIKE-010-cost-tracking.md` (1821 lines)
- `docs/spikes/SPIKE-011-audit-logging.md` (1064 lines)
- `docs/spikes/SPIKE-012-client-approval-flow.md` (1662 lines)
@@ -1,4 +1,4 @@
-# PragmaStack - Frontend
+# Syndarix - Frontend
 
 Production-ready Next.js 16 frontend with TypeScript, authentication, admin panel, and internationalization.
@@ -273,7 +273,7 @@ NEXT_PUBLIC_DEMO_MODE=true npm run dev
 **1. Fork Repository**
 
 ```bash
-gh repo fork your-repo/fast-next-template
+git clone https://gitea.pragmazest.com/cardosofelipe/syndarix.git
 ```
 
 **2. Connect to Vercel**
@@ -1,6 +1,6 @@
 # Internationalization (i18n) Guide
 
-This document describes the internationalization implementation in the PragmaStack.
+This document describes the internationalization implementation in Syndarix.
 
 ## Overview
 
@@ -4,10 +4,10 @@
 
 ## Logo
 
-The **PragmaStack** logo represents the core values of the project: structure, speed, and clarity.
+The **Syndarix** logo represents the core values of the project: structure, speed, and clarity.
 
 <div align="center">
-  <img src="../../public/logo.svg" alt="PragmaStack Logo" width="300" />
+  <img src="../../public/logo.svg" alt="Syndarix Logo" width="300" />
   <p><em>The Stack: Geometric layers representing the full-stack architecture.</em></p>
 </div>
 
@@ -16,7 +16,7 @@ The **PragmaStack** logo represents the core values of the project: structure, s
 For smaller contexts (favicons, headers), we use the simplified icon:
 
 <div align="center">
-  <img src="../../public/logo-icon.svg" alt="PragmaStack Icon" width="64" />
+  <img src="../../public/logo-icon.svg" alt="Syndarix Icon" width="64" />
 </div>
 
 For now, we use the **Lucide React** icon set for all iconography. Icons should be used sparingly and meaningfully to enhance understanding, not just for decoration.
@@ -1,6 +1,6 @@
 # Branding Guidelines
 
-Welcome to the **PragmaStack** branding guidelines. This section defines who we are, how we speak, and how we look.
+Welcome to the **Syndarix** branding guidelines. This section defines who we are, how we speak, and how we look.
 
 ## Contents
 

@@ -1,6 +1,6 @@
 # Quick Start Guide
 
-Get up and running with the PragmaStack design system immediately. This guide covers the essential patterns you need to build 80% of interfaces.
+Get up and running with the Syndarix design system immediately. This guide covers the essential patterns you need to build 80% of interfaces.
 
 ---
 

@@ -1,6 +1,6 @@
 # AI Code Generation Guidelines
 
-**For AI Assistants**: This document contains strict rules for generating code in the PragmaStack project. Follow these rules to ensure generated code matches the design system perfectly.
+**For AI Assistants**: This document contains strict rules for generating code in the Syndarix project. Follow these rules to ensure generated code matches the design system perfectly.
 
 ---
 

@@ -1,6 +1,6 @@
 # Quick Reference
 
-**Bookmark this page** for instant lookups of colors, spacing, typography, components, and common patterns. Your go-to cheat sheet for the PragmaStack design system.
+**Bookmark this page** for instant lookups of colors, spacing, typography, components, and common patterns. Your go-to cheat sheet for the Syndarix design system.
 
 ---
 

@@ -1,6 +1,6 @@
 # Design System Documentation
 
-**PragmaStack Design System** - A comprehensive guide to building consistent, accessible, and beautiful user interfaces.
+**Syndarix Design System** - A comprehensive guide to building consistent, accessible, and beautiful user interfaces.
 
 ---
 
@@ -14,7 +14,7 @@ test.describe('Homepage - Desktop Navigation', () => {
 
   test('should display header with logo and navigation', async ({ page }) => {
     // Logo should be visible
-    await expect(page.getByRole('link', { name: /PragmaStack/i })).toBeVisible();
+    await expect(page.getByRole('link', { name: /Syndarix/i })).toBeVisible();
 
     // Desktop navigation links should be visible (use locator to find within header)
     const header = page.locator('header').first();
@@ -23,8 +23,8 @@ test.describe('Homepage - Desktop Navigation', () => {
   });
 
   test('should display GitHub link with star badge', async ({ page }) => {
-    // Find GitHub link by checking for one that has github.com in href
-    const githubLink = page.locator('a[href*="github.com"]').first();
+    // Find GitHub link by checking for one that has gitea.pragmazest.com in href
+    const githubLink = page.locator('a[href*="gitea.pragmazest.com"]').first();
     await expect(githubLink).toBeVisible();
     await expect(githubLink).toHaveAttribute('target', '_blank');
   });
@@ -120,7 +120,7 @@ test.describe('Homepage - Hero Section', () => {
   test('should navigate to GitHub when clicking View on GitHub', async ({ page }) => {
     const githubLink = page.getByRole('link', { name: /View on GitHub/i }).first();
     await expect(githubLink).toBeVisible();
-    await expect(githubLink).toHaveAttribute('href', expect.stringContaining('github.com'));
+    await expect(githubLink).toHaveAttribute('href', expect.stringContaining('gitea.pragmazest.com'));
   });
 
   test('should navigate to components when clicking Explore Components', async ({ page }) => {
@@ -250,7 +250,7 @@ test.describe('Homepage - Feature Sections', () => {
   });
 
   test('should display philosophy section', async ({ page }) => {
-    await expect(page.getByRole('heading', { name: /Why PragmaStack/i })).toBeVisible();
+    await expect(page.getByRole('heading', { name: /Why Syndarix/i })).toBeVisible();
     await expect(page.getByText(/MIT licensed/i).first()).toBeVisible();
   });
 });
@@ -264,7 +264,7 @@ test.describe('Homepage - Footer', () => {
     // Scroll to footer
     await page.locator('footer').scrollIntoViewIfNeeded();
 
-    await expect(page.getByText(/PragmaStack. MIT Licensed/i)).toBeVisible();
+    await expect(page.getByText(/Syndarix. MIT Licensed/i)).toBeVisible();
   });
 });
 
@@ -285,7 +285,7 @@ test.describe('Homepage - Accessibility', () => {
   });
 
   test('should have accessible links with proper attributes', async ({ page }) => {
-    const githubLink = page.locator('a[href*="github.com"]').first();
+    const githubLink = page.locator('a[href*="gitea.pragmazest.com"]').first();
     await expect(githubLink).toHaveAttribute('target', '_blank');
     await expect(githubLink).toHaveAttribute('rel', 'noopener noreferrer');
   });
@@ -7,42 +7,42 @@
|
|||||||
* - Please do NOT modify this file.
|
* - Please do NOT modify this file.
|
||||||
*/
|
*/
|
||||||
|
|
||||||
const PACKAGE_VERSION = '2.12.3';
|
const PACKAGE_VERSION = '2.12.3'
|
||||||
const INTEGRITY_CHECKSUM = '4db4a41e972cec1b64cc569c66952d82';
|
const INTEGRITY_CHECKSUM = '4db4a41e972cec1b64cc569c66952d82'
|
||||||
const IS_MOCKED_RESPONSE = Symbol('isMockedResponse');
|
const IS_MOCKED_RESPONSE = Symbol('isMockedResponse')
|
||||||
const activeClientIds = new Set();
|
const activeClientIds = new Set()
|
||||||
|
|
||||||
addEventListener('install', function () {
|
addEventListener('install', function () {
|
||||||
self.skipWaiting();
|
self.skipWaiting()
|
||||||
});
|
})
|
||||||
|
|
||||||
addEventListener('activate', function (event) {
|
addEventListener('activate', function (event) {
|
||||||
event.waitUntil(self.clients.claim());
|
event.waitUntil(self.clients.claim())
|
||||||
});
|
})
|
||||||
|
|
||||||
addEventListener('message', async function (event) {
|
addEventListener('message', async function (event) {
|
||||||
const clientId = Reflect.get(event.source || {}, 'id');
|
const clientId = Reflect.get(event.source || {}, 'id')
|
||||||
|
|
||||||
if (!clientId || !self.clients) {
|
if (!clientId || !self.clients) {
|
||||||
return;
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
const client = await self.clients.get(clientId);
|
const client = await self.clients.get(clientId)
|
||||||
|
|
||||||
if (!client) {
|
if (!client) {
|
||||||
return;
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
const allClients = await self.clients.matchAll({
|
const allClients = await self.clients.matchAll({
|
||||||
type: 'window',
|
type: 'window',
|
||||||
});
|
})
|
||||||
|
|
||||||
switch (event.data) {
|
switch (event.data) {
|
||||||
case 'KEEPALIVE_REQUEST': {
|
case 'KEEPALIVE_REQUEST': {
|
||||||
sendToClient(client, {
|
sendToClient(client, {
|
||||||
type: 'KEEPALIVE_RESPONSE',
|
type: 'KEEPALIVE_RESPONSE',
|
||||||
});
|
})
|
||||||
break;
|
break
|
||||||
}
|
}
|
||||||
|
|
||||||
case 'INTEGRITY_CHECK_REQUEST': {
|
case 'INTEGRITY_CHECK_REQUEST': {
|
||||||
@@ -52,12 +52,12 @@ addEventListener('message', async function (event) {
|
|||||||
packageVersion: PACKAGE_VERSION,
|
packageVersion: PACKAGE_VERSION,
|
||||||
checksum: INTEGRITY_CHECKSUM,
|
checksum: INTEGRITY_CHECKSUM,
|
||||||
},
|
},
|
||||||
});
|
})
|
||||||
break;
|
break
|
||||||
}
|
}
|
||||||
|
|
||||||
case 'MOCK_ACTIVATE': {
|
case 'MOCK_ACTIVATE': {
|
||||||
activeClientIds.add(clientId);
|
activeClientIds.add(clientId)
|
||||||
|
|
||||||
sendToClient(client, {
|
sendToClient(client, {
|
||||||
type: 'MOCKING_ENABLED',
|
type: 'MOCKING_ENABLED',
|
||||||
@@ -67,51 +67,54 @@ addEventListener('message', async function (event) {
|
|||||||
frameType: client.frameType,
|
frameType: client.frameType,
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
});
|
})
|
||||||
break;
|
break
|
||||||
}
|
}
|
||||||
|
|
||||||
case 'CLIENT_CLOSED': {
|
case 'CLIENT_CLOSED': {
|
||||||
activeClientIds.delete(clientId);
|
activeClientIds.delete(clientId)
|
||||||
|
|
||||||
const remainingClients = allClients.filter((client) => {
|
const remainingClients = allClients.filter((client) => {
|
||||||
return client.id !== clientId;
|
return client.id !== clientId
|
||||||
});
|
})
|
||||||
|
|
||||||
// Unregister itself when there are no more clients
|
// Unregister itself when there are no more clients
|
||||||
if (remainingClients.length === 0) {
|
if (remainingClients.length === 0) {
|
||||||
self.registration.unregister();
|
self.registration.unregister()
|
||||||
}
|
}
|
||||||
|
|
||||||
break;
|
break
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
});
|
})
|
||||||
|
|
||||||
addEventListener('fetch', function (event) {
|
addEventListener('fetch', function (event) {
|
||||||
const requestInterceptedAt = Date.now();
|
const requestInterceptedAt = Date.now()
|
||||||
|
|
||||||
// Bypass navigation requests.
|
// Bypass navigation requests.
|
||||||
if (event.request.mode === 'navigate') {
|
if (event.request.mode === 'navigate') {
|
||||||
return;
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
// Opening the DevTools triggers the "only-if-cached" request
|
// Opening the DevTools triggers the "only-if-cached" request
|
||||||
// that cannot be handled by the worker. Bypass such requests.
|
// that cannot be handled by the worker. Bypass such requests.
|
||||||
if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') {
|
if (
|
||||||
return;
|
event.request.cache === 'only-if-cached' &&
|
||||||
|
event.request.mode !== 'same-origin'
|
||||||
|
) {
|
||||||
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
// Bypass all requests when there are no active clients.
|
// Bypass all requests when there are no active clients.
|
||||||
// Prevents the self-unregistered worked from handling requests
|
// Prevents the self-unregistered worked from handling requests
|
||||||
// after it's been terminated (still remains active until the next reload).
|
// after it's been terminated (still remains active until the next reload).
|
||||||
if (activeClientIds.size === 0) {
|
if (activeClientIds.size === 0) {
|
||||||
return;
|
return
|
||||||
}
|
}
|
||||||
|
|
||||||
const requestId = crypto.randomUUID();
|
const requestId = crypto.randomUUID()
|
||||||
event.respondWith(handleRequest(event, requestId, requestInterceptedAt));
|
event.respondWith(handleRequest(event, requestId, requestInterceptedAt))
|
||||||
});
|
})
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* @param {FetchEvent} event
|
* @param {FetchEvent} event
|
||||||
@@ -119,18 +122,23 @@ addEventListener('fetch', function (event) {
|
|||||||
* @param {number} requestInterceptedAt
|
* @param {number} requestInterceptedAt
|
||||||
*/
|
*/
|
||||||
async function handleRequest(event, requestId, requestInterceptedAt) {
|
async function handleRequest(event, requestId, requestInterceptedAt) {
|
||||||
const client = await resolveMainClient(event);
|
const client = await resolveMainClient(event)
|
||||||
const requestCloneForEvents = event.request.clone();
|
const requestCloneForEvents = event.request.clone()
|
||||||
const response = await getResponse(event, client, requestId, requestInterceptedAt);
|
const response = await getResponse(
|
||||||
|
event,
|
||||||
|
client,
|
||||||
|
requestId,
|
||||||
|
requestInterceptedAt,
|
||||||
|
)
|
||||||
|
|
||||||
// Send back the response clone for the "response:*" life-cycle events.
|
// Send back the response clone for the "response:*" life-cycle events.
|
||||||
// Ensure MSW is active and ready to handle the message, otherwise
|
// Ensure MSW is active and ready to handle the message, otherwise
|
||||||
// this message will pend indefinitely.
|
// this message will pend indefinitely.
|
||||||
if (client && activeClientIds.has(client.id)) {
|
if (client && activeClientIds.has(client.id)) {
|
||||||
const serializedRequest = await serializeRequest(requestCloneForEvents);
|
const serializedRequest = await serializeRequest(requestCloneForEvents)
|
||||||
|
|
||||||
// Clone the response so both the client and the library could consume it.
|
// Clone the response so both the client and the library could consume it.
|
||||||
const responseClone = response.clone();
|
const responseClone = response.clone()
|
||||||
|
|
||||||
sendToClient(
|
sendToClient(
|
||||||
client,
|
client,
|
||||||
@@ -151,11 +159,11 @@ async function handleRequest(event, requestId, requestInterceptedAt) {
|
|||||||
},
|
},
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
responseClone.body ? [serializedRequest.body, responseClone.body] : []
|
responseClone.body ? [serializedRequest.body, responseClone.body] : [],
|
||||||
);
|
)
|
||||||
}
|
}
|
||||||
|
|
||||||
return response;
|
return response
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@@ -167,30 +175,30 @@ async function handleRequest(event, requestId, requestInterceptedAt) {
|
|||||||
* @returns {Promise<Client | undefined>}
|
* @returns {Promise<Client | undefined>}
|
||||||
*/
|
*/
|
||||||
async function resolveMainClient(event) {
|
async function resolveMainClient(event) {
|
||||||
const client = await self.clients.get(event.clientId);
|
const client = await self.clients.get(event.clientId)
|
||||||
|
|
||||||
if (activeClientIds.has(event.clientId)) {
|
if (activeClientIds.has(event.clientId)) {
|
||||||
return client;
|
return client
|
||||||
}
|
}
|
||||||
|
|
||||||
if (client?.frameType === 'top-level') {
|
if (client?.frameType === 'top-level') {
|
||||||
return client;
|
return client
|
||||||
}
|
}
|
||||||
|
|
||||||
const allClients = await self.clients.matchAll({
|
const allClients = await self.clients.matchAll({
|
||||||
type: 'window',
|
type: 'window',
|
||||||
});
|
})
|
||||||
|
|
||||||
return allClients
|
return allClients
|
||||||
.filter((client) => {
|
.filter((client) => {
|
||||||
// Get only those clients that are currently visible.
|
// Get only those clients that are currently visible.
|
||||||
return client.visibilityState === 'visible';
|
return client.visibilityState === 'visible'
|
||||||
})
|
})
|
||||||
.find((client) => {
|
.find((client) => {
|
||||||
// Find the client ID that's recorded in the
|
// Find the client ID that's recorded in the
|
||||||
// set of clients that have registered the worker.
|
// set of clients that have registered the worker.
|
||||||
return activeClientIds.has(client.id);
|
return activeClientIds.has(client.id)
|
||||||
});
|
})
|
||||||
}
|
}
|
||||||
|
|
||||||
/**
|
/**
|
||||||
@@ -203,34 +211,36 @@ async function resolveMainClient(event) {
|
|||||||
async function getResponse(event, client, requestId, requestInterceptedAt) {
|
async function getResponse(event, client, requestId, requestInterceptedAt) {
|
||||||
// Clone the request because it might've been already used
|
// Clone the request because it might've been already used
|
||||||
// (i.e. its body has been read and sent to the client).
|
// (i.e. its body has been read and sent to the client).
|
||||||
const requestClone = event.request.clone();
|
const requestClone = event.request.clone()
|
||||||
|
|
||||||
function passthrough() {
|
function passthrough() {
|
||||||
// Cast the request headers to a new Headers instance
|
// Cast the request headers to a new Headers instance
|
||||||
// so the headers can be manipulated with.
|
// so the headers can be manipulated with.
|
||||||
const headers = new Headers(requestClone.headers);
|
const headers = new Headers(requestClone.headers)
|
||||||
|
|
||||||
// Remove the "accept" header value that marked this request as passthrough.
|
// Remove the "accept" header value that marked this request as passthrough.
|
||||||
// This prevents request alteration and also keeps it compliant with the
|
// This prevents request alteration and also keeps it compliant with the
|
||||||
// user-defined CORS policies.
|
// user-defined CORS policies.
|
||||||
const acceptHeader = headers.get('accept');
|
const acceptHeader = headers.get('accept')
|
||||||
if (acceptHeader) {
|
if (acceptHeader) {
|
||||||
const values = acceptHeader.split(',').map((value) => value.trim());
|
const values = acceptHeader.split(',').map((value) => value.trim())
|
||||||
const filteredValues = values.filter((value) => value !== 'msw/passthrough');
|
const filteredValues = values.filter(
|
||||||
|
(value) => value !== 'msw/passthrough',
|
||||||
|
)
|
||||||
|
|
||||||
if (filteredValues.length > 0) {
|
if (filteredValues.length > 0) {
|
||||||
headers.set('accept', filteredValues.join(', '));
|
headers.set('accept', filteredValues.join(', '))
|
||||||
} else {
|
} else {
|
||||||
headers.delete('accept');
|
headers.delete('accept')
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
return fetch(requestClone, { headers });
|
return fetch(requestClone, { headers })
|
||||||
}
|
}
|
||||||
|
|
||||||
// Bypass mocking when the client is not active.
|
// Bypass mocking when the client is not active.
|
||||||
if (!client) {
|
if (!client) {
|
||||||
return passthrough();
|
return passthrough()
|
||||||
}
|
}
|
||||||
|
|
||||||
// Bypass initial page load requests (i.e. static assets).
|
// Bypass initial page load requests (i.e. static assets).
|
||||||
@@ -238,11 +248,11 @@ async function getResponse(event, client, requestId, requestInterceptedAt) {
|
|||||||
// means that MSW hasn't dispatched the "MOCK_ACTIVATE" event yet
|
// means that MSW hasn't dispatched the "MOCK_ACTIVATE" event yet
|
||||||
// and is not ready to handle requests.
|
// and is not ready to handle requests.
|
||||||
if (!activeClientIds.has(client.id)) {
|
if (!activeClientIds.has(client.id)) {
|
||||||
return passthrough();
|
return passthrough()
|
||||||
}
|
}
|
||||||
|
|
||||||
// Notify the client that a request has been intercepted.
|
// Notify the client that a request has been intercepted.
|
||||||
const serializedRequest = await serializeRequest(event.request);
|
const serializedRequest = await serializeRequest(event.request)
|
||||||
const clientMessage = await sendToClient(
|
const clientMessage = await sendToClient(
|
||||||
client,
|
client,
|
||||||
{
|
{
|
||||||
@@ -253,20 +263,20 @@ async function getResponse(event, client, requestId, requestInterceptedAt) {
|
|||||||
...serializedRequest,
|
...serializedRequest,
|
||||||
},
|
},
|
||||||
},
|
},
|
||||||
[serializedRequest.body]
|
[serializedRequest.body],
|
||||||
);
|
)
|
||||||
|
|
||||||
switch (clientMessage.type) {
|
switch (clientMessage.type) {
|
||||||
case 'MOCK_RESPONSE': {
|
```diff
     case 'MOCK_RESPONSE': {
-      return respondWithMock(clientMessage.data);
+      return respondWithMock(clientMessage.data)
     }

     case 'PASSTHROUGH': {
-      return passthrough();
+      return passthrough()
     }
   }

-  return passthrough();
+  return passthrough()
 }

 /**
@@ -277,18 +287,21 @@ async function getResponse(event, client, requestId, requestInterceptedAt) {
  */
 function sendToClient(client, message, transferrables = []) {
   return new Promise((resolve, reject) => {
-    const channel = new MessageChannel();
+    const channel = new MessageChannel()

     channel.port1.onmessage = (event) => {
       if (event.data && event.data.error) {
-        return reject(event.data.error);
+        return reject(event.data.error)
       }

-      resolve(event.data);
-    };
+      resolve(event.data)
+    }

-    client.postMessage(message, [channel.port2, ...transferrables.filter(Boolean)]);
-  });
+    client.postMessage(message, [
+      channel.port2,
+      ...transferrables.filter(Boolean),
+    ])
+  })
 }

 /**
@@ -301,17 +314,17 @@ function respondWithMock(response) {
   // instance will have status code set to 0. Since it's not possible to create
   // a Response instance with status code 0, handle that use-case separately.
   if (response.status === 0) {
-    return Response.error();
+    return Response.error()
   }

-  const mockedResponse = new Response(response.body, response);
+  const mockedResponse = new Response(response.body, response)

   Reflect.defineProperty(mockedResponse, IS_MOCKED_RESPONSE, {
     value: true,
     enumerable: true,
-  });
+  })

-  return mockedResponse;
+  return mockedResponse
 }

 /**
@@ -332,5 +345,5 @@ async function serializeRequest(request) {
     referrerPolicy: request.referrerPolicy,
     body: await request.arrayBuffer(),
     keepalive: request.keepalive,
-  };
+  }
 }
```
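The `sendToClient` hunk above only drops semicolons, but the pattern it touches is the worker's whole client-messaging protocol: every message ships with a dedicated `MessageChannel` reply port, so the worker can await each response on its own Promise without multiplexing replies on a shared listener. A minimal sketch, runnable in Node (where `MessageChannel` is global); `fakeClient` is a hypothetical stand-in for the real `WindowClient`, and the `close()` call is added only so the sketch's event loop can exit:

```javascript
// Round-trip sketch of the worker's sendToClient pattern:
// port2 travels with the message, the receiver replies on it,
// and the sender resolves from port1.
function sendToClient(client, message, transferrables = []) {
  return new Promise((resolve, reject) => {
    const channel = new MessageChannel()

    channel.port1.onmessage = (event) => {
      channel.port1.close() // sketch-only: let the process exit
      if (event.data && event.data.error) {
        return reject(event.data.error)
      }
      resolve(event.data)
    }

    client.postMessage(message, [channel.port2, ...transferrables.filter(Boolean)])
  })
}

// Illustrative stand-in "client" that echoes back on the reply port it receives.
const fakeClient = {
  postMessage(message, [replyPort]) {
    replyPort.postMessage({ type: 'ACK', echoed: message.type })
  },
}

sendToClient(fakeClient, { type: 'MOCK_ACTIVATE' }).then((reply) => {
  console.log(reply.echoed) // prints "MOCK_ACTIVATE"
})
```

Because each call creates a fresh channel, concurrent requests from the worker to the page cannot cross wires, which is why the diff's multi-line `postMessage` call transfers `channel.port2` alongside any other transferrables.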
```diff
@@ -10,7 +10,7 @@ import { Footer } from '@/components/layout/Footer';

 export const metadata: Metadata = {
   title: {
-    template: '%s | PragmaStack',
+    template: '%s | Syndarix',
     default: 'Dashboard',
   },
 };
@@ -12,7 +12,7 @@ import { AdminSidebar, Breadcrumbs } from '@/components/admin';

 export const metadata: Metadata = {
   title: {
-    template: '%s | Admin | PragmaStack',
+    template: '%s | Admin | Syndarix',
     default: 'Admin Dashboard',
   },
 };
@@ -26,8 +26,8 @@ import { Badge } from '@/components/ui/badge';
 import { Separator } from '@/components/ui/separator';

 export const metadata: Metadata = {
-  title: 'Demo Tour | PragmaStack',
-  description: 'Try all features with demo credentials - comprehensive guide to the PragmaStack',
+  title: 'Demo Tour | Syndarix',
+  description: 'Try all features with demo credentials - comprehensive guide to the Syndarix',
 };

 const demoCategories = [
@@ -120,7 +120,7 @@ export default function DocsHub() {
         <h2 className="text-4xl font-bold tracking-tight mb-4">Design System Documentation</h2>
         <p className="text-lg text-muted-foreground mb-8">
           Comprehensive guides, best practices, and references for building consistent,
-          accessible, and maintainable user interfaces with the PragmaStack design system.
+          accessible, and maintainable user interfaces with the Syndarix design system.
         </p>
         <div className="flex flex-wrap gap-3 justify-center">
           <Link href="/dev/docs/design-system/00-quick-start">
@@ -14,7 +14,7 @@ import { Badge } from '@/components/ui/badge';
 import { Separator } from '@/components/ui/separator';

 export const metadata: Metadata = {
-  title: 'Design System Hub | PragmaStack',
+  title: 'Design System Hub | Syndarix',
   description:
     'Interactive design system demonstrations with live examples - explore components, layouts, spacing, and forms built with shadcn/ui and Tailwind CSS',
 };
@@ -90,7 +90,7 @@ export default function DesignSystemHub() {
           </div>
           <p className="text-lg text-muted-foreground">
             Interactive demonstrations, live examples, and comprehensive documentation for the
-            PragmaStack design system. Built with shadcn/ui + Tailwind CSS 4.
+            Syndarix design system. Built with shadcn/ui + Tailwind CSS 4.
           </p>
         </div>
       </div>
```
```diff
@@ -1,7 +1,7 @@
 /* istanbul ignore file -- @preserve Landing page with complex interactions tested via E2E */
 /**
  * Homepage / Landing Page
- * Main landing page for the PragmaStack project
+ * Main landing page for the Syndarix project
  * Showcases features, tech stack, and provides demos for developers
  */

@@ -68,7 +68,7 @@ export default function Home() {
       <div className="container mx-auto px-6 py-8">
         <div className="flex flex-col md:flex-row items-center justify-between gap-4">
           <div className="text-sm text-muted-foreground">
-            © {new Date().getFullYear()} PragmaStack. MIT Licensed.
+            © {new Date().getFullYear()} Syndarix. MIT Licensed.
           </div>
           <div className="flex items-center gap-6 text-sm text-muted-foreground">
             <Link href="/demos" className="hover:text-foreground transition-colors">
@@ -1,7 +1,7 @@
 @import 'tailwindcss';

 /**
- * PragmaStack Design System
+ * Syndarix Design System
  * Theme: Modern Minimal (from tweakcn.com)
  * Primary: Blue | Color Space: OKLCH
  *
@@ -96,12 +96,12 @@ export function DevLayout({ children }: DevLayoutProps) {
         <div className="flex items-center gap-3 shrink-0">
           <Image
             src="/logo-icon.svg"
-            alt="PragmaStack Logo"
+            alt="Syndarix Logo"
             width={24}
             height={24}
             className="h-6 w-6"
           />
-          <h1 className="text-base font-semibold">PragmaStack</h1>
+          <h1 className="text-base font-semibold">Syndarix</h1>
           <Badge variant="secondary" className="text-xs">
             Dev
           </Badge>
```
```diff
@@ -14,8 +14,8 @@ import { Link } from '@/lib/i18n/routing';

 const commands = [
   { text: '# Clone the repository', delay: 0 },
-  { text: '$ git clone https://github.com/your-org/fast-next-template.git', delay: 800 },
-  { text: '$ cd fast-next-template', delay: 1600 },
+  { text: '$ git clone https://gitea.pragmazest.com/cardosofelipe/syndarix.git', delay: 800 },
+  { text: '$ cd syndarix', delay: 1600 },
   { text: '', delay: 2200 },
   { text: '# Start with Docker (one command)', delay: 2400 },
   { text: '$ docker-compose up', delay: 3200 },
@@ -49,7 +49,7 @@ export function CTASection({ onOpenDemoModal }: CTASectionProps) {
         <div className="flex flex-col sm:flex-row items-center justify-center gap-4 pt-4">
           <Button asChild size="lg" className="gap-2 text-base group">
             <a
-              href="https://github.com/your-org/fast-next-template"
+              href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
               target="_blank"
               rel="noopener noreferrer"
             >
@@ -75,7 +75,7 @@ export function CTASection({ onOpenDemoModal }: CTASectionProps) {
           </Button>
           <Button asChild size="lg" variant="ghost" className="gap-2 text-base group">
             <a
-              href="https://github.com/your-org/fast-next-template#documentation"
+              href="https://gitea.pragmazest.com/cardosofelipe/syndarix#documentation"
               target="_blank"
               rel="noopener noreferrer"
             >
@@ -44,7 +44,7 @@ const features = [
       '12+ documentation guides covering architecture, design system, testing patterns, deployment, and AI code generation guidelines. Interactive API docs with Swagger and ReDoc',
     highlight: 'Developer-first docs',
     ctaText: 'Browse Docs',
-    ctaHref: 'https://github.com/your-org/fast-next-template#documentation',
+    ctaHref: 'https://gitea.pragmazest.com/cardosofelipe/syndarix#documentation',
   },
   {
     icon: Server,
@@ -53,7 +53,7 @@ const features = [
       'Docker deployment configs, database migrations with Alembic helpers, connection pooling, health checks, monitoring setup, and production security headers',
     highlight: 'Deploy with confidence',
     ctaText: 'Deployment Guide',
-    ctaHref: 'https://github.com/your-org/fast-next-template#deployment',
+    ctaHref: 'https://gitea.pragmazest.com/cardosofelipe/syndarix#deployment',
   },
   {
     icon: Code,
```
```diff
@@ -48,13 +48,13 @@ export function Header({ onOpenDemoModal }: HeaderProps) {
         >
           <Image
             src="/logo-icon.svg"
-            alt="PragmaStack Logo"
+            alt="Syndarix Logo"
             width={32}
             height={32}
             className="h-8 w-8"
           />
           <span className="bg-gradient-to-r from-primary to-primary/60 bg-clip-text text-transparent">
-            PragmaStack
+            Syndarix
           </span>
         </Link>

@@ -72,7 +72,7 @@ export function Header({ onOpenDemoModal }: HeaderProps) {

           {/* GitHub Link with Star */}
           <a
-            href="https://github.com/your-org/fast-next-template"
+            href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
             target="_blank"
             rel="noopener noreferrer"
             className="flex items-center gap-2 text-sm font-medium text-muted-foreground hover:text-foreground transition-colors"
@@ -135,7 +135,7 @@ export function Header({ onOpenDemoModal }: HeaderProps) {

             {/* GitHub Link */}
             <a
-              href="https://github.com/your-org/fast-next-template"
+              href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
               target="_blank"
               rel="noopener noreferrer"
               onClick={() => setMobileMenuOpen(false)}
@@ -72,7 +72,7 @@ export function HeroSection({ onOpenDemoModal }: HeroSectionProps) {
           animate={{ opacity: 1, y: 0 }}
           transition={{ duration: 0.5, delay: 0.2 }}
         >
-          Opinionated, secure, and production-ready. PragmaStack gives you the solid foundation
+          Opinionated, secure, and production-ready. Syndarix gives you the solid foundation
           you need to stop configuring and start shipping.{' '}
           <span className="text-foreground font-medium">Start building features on day one.</span>
         </motion.p>
@@ -93,7 +93,7 @@ export function HeroSection({ onOpenDemoModal }: HeroSectionProps) {
           </Button>
           <Button asChild size="lg" variant="outline" className="gap-2 text-base group">
             <a
-              href="https://github.com/your-org/fast-next-template"
+              href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
               target="_blank"
               rel="noopener noreferrer"
             >
@@ -33,7 +33,7 @@ export function PhilosophySection() {
           viewport={{ once: true, margin: '-100px' }}
           transition={{ duration: 0.6 }}
         >
-          <h2 className="text-3xl md:text-4xl font-bold mb-6">Why PragmaStack?</h2>
+          <h2 className="text-3xl md:text-4xl font-bold mb-6">Why Syndarix?</h2>
           <div className="space-y-4 text-lg text-muted-foreground leading-relaxed">
             <p>
               We built this template after rebuilding the same authentication, authorization, and
```
```diff
@@ -13,8 +13,8 @@ import { vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism';
 import { Button } from '@/components/ui/button';

 const codeString = `# Clone and start with Docker
-git clone https://github.com/your-org/fast-next-template.git
-cd fast-next-template
+git clone https://gitea.pragmazest.com/cardosofelipe/syndarix.git
+cd syndarix
 docker-compose up

 # Or set up locally
@@ -18,12 +18,12 @@ export function Footer() {
         <div className="flex items-center gap-2 text-center text-sm text-muted-foreground md:text-left">
           <Image
             src="/logo-icon.svg"
-            alt="PragmaStack Logo"
+            alt="Syndarix Logo"
             width={20}
             height={20}
             className="h-5 w-5 opacity-70"
           />
-          <span>© {currentYear} PragmaStack. All rights reserved.</span>
+          <span>© {currentYear} Syndarix. All rights reserved.</span>
         </div>
         <div className="flex space-x-6">
           <Link
@@ -33,7 +33,7 @@ export function Footer() {
             Settings
           </Link>
           <a
-            href="https://github.com/cardosofelipe/pragmastack"
+            href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
             target="_blank"
             rel="noopener noreferrer"
             className="text-sm text-muted-foreground hover:text-foreground transition-colors"
@@ -86,12 +86,12 @@ export function Header() {
         <Link href="/" className="flex items-center space-x-2">
           <Image
             src="/logo-icon.svg"
-            alt="PragmaStack Logo"
+            alt="Syndarix Logo"
             width={32}
             height={32}
             className="h-8 w-8"
           />
-          <span className="text-xl font-bold text-foreground">PragmaStack</span>
+          <span className="text-xl font-bold text-foreground">Syndarix</span>
         </Link>

         {/* Navigation Links */}
@@ -13,8 +13,8 @@ export type Locale = 'en' | 'it';
  */
 export const siteConfig = {
   name: {
-    en: 'PragmaStack',
-    it: 'PragmaStack',
+    en: 'Syndarix',
+    it: 'Syndarix',
   },
   description: {
     en: 'Production-ready FastAPI + Next.js full-stack template with authentication, admin panel, and comprehensive testing',
```
```diff
@@ -1,6 +1,6 @@
 /**
  * Tests for Home Page
- * Tests for the new PragmaStack landing page
+ * Tests for the new Syndarix landing page
  */

 import { render, screen, within, fireEvent } from '@testing-library/react';
@@ -87,13 +87,13 @@ describe('HomePage', () => {
     it('renders header with logo', () => {
       render(<Home />);
       const header = screen.getByRole('banner');
-      expect(within(header).getByText('PragmaStack')).toBeInTheDocument();
+      expect(within(header).getByText('Syndarix')).toBeInTheDocument();
     });

     it('renders footer with copyright', () => {
       render(<Home />);
       const footer = screen.getByRole('contentinfo');
-      expect(within(footer).getByText(/PragmaStack. MIT Licensed/i)).toBeInTheDocument();
+      expect(within(footer).getByText(/Syndarix. MIT Licensed/i)).toBeInTheDocument();
     });
   });

@@ -210,7 +210,7 @@ describe('HomePage', () => {
   describe('Philosophy Section', () => {
     it('renders why this template exists', () => {
       render(<Home />);
-      expect(screen.getByText(/Why PragmaStack\?/i)).toBeInTheDocument();
+      expect(screen.getByText(/Why Syndarix\?/i)).toBeInTheDocument();
     });

     it('renders what you wont find section', () => {
@@ -71,7 +71,7 @@ describe('CTASection', () => {
     );

     const githubLink = screen.getByRole('link', { name: /get started on github/i });
-    expect(githubLink).toHaveAttribute('href', 'https://github.com/your-org/fast-next-template');
+    expect(githubLink).toHaveAttribute('href', 'https://gitea.pragmazest.com/cardosofelipe/syndarix');
     expect(githubLink).toHaveAttribute('target', '_blank');
     expect(githubLink).toHaveAttribute('rel', 'noopener noreferrer');
   });
@@ -101,7 +101,7 @@ describe('CTASection', () => {
     const docsLink = screen.getByRole('link', { name: /read documentation/i });
     expect(docsLink).toHaveAttribute(
       'href',
-      'https://github.com/your-org/fast-next-template#documentation'
+      'https://gitea.pragmazest.com/cardosofelipe/syndarix#documentation'
     );
     expect(docsLink).toHaveAttribute('target', '_blank');
     expect(docsLink).toHaveAttribute('rel', 'noopener noreferrer');
@@ -55,7 +55,7 @@ describe('Header', () => {
       />
     );

-    expect(screen.getByText('PragmaStack')).toBeInTheDocument();
+    expect(screen.getByText('Syndarix')).toBeInTheDocument();
   });

   it('logo links to homepage', () => {
@@ -67,7 +67,7 @@ describe('Header', () => {
       />
     );

-    const logoLink = screen.getByRole('link', { name: /PragmaStack/i });
+    const logoLink = screen.getByRole('link', { name: /Syndarix/i });
     expect(logoLink).toHaveAttribute('href', '/');
   });

@@ -97,12 +97,12 @@ describe('Header', () => {

     const githubLinks = screen.getAllByRole('link', { name: /github/i });
     const desktopGithubLink = githubLinks.find((link) =>
-      link.getAttribute('href')?.includes('github.com')
+      link.getAttribute('href')?.includes('gitea.pragmazest.com')
     );

     expect(desktopGithubLink).toHaveAttribute(
       'href',
-      'https://github.com/your-org/fast-next-template'
+      'https://gitea.pragmazest.com/cardosofelipe/syndarix'
     );
     expect(desktopGithubLink).toHaveAttribute('target', '_blank');
     expect(desktopGithubLink).toHaveAttribute('rel', 'noopener noreferrer');
@@ -100,7 +100,7 @@ describe('HeroSection', () => {
     );

     const githubLink = screen.getByRole('link', { name: /view on github/i });
-    expect(githubLink).toHaveAttribute('href', 'https://github.com/your-org/fast-next-template');
+    expect(githubLink).toHaveAttribute('href', 'https://gitea.pragmazest.com/cardosofelipe/syndarix');
     expect(githubLink).toHaveAttribute('target', '_blank');
     expect(githubLink).toHaveAttribute('rel', 'noopener noreferrer');
   });
@@ -20,7 +20,7 @@ describe('Footer', () => {

     const currentYear = new Date().getFullYear();
     expect(
-      screen.getByText(`© ${currentYear} PragmaStack. All rights reserved.`)
+      screen.getByText(`© ${currentYear} Syndarix. All rights reserved.`)
     ).toBeInTheDocument();
   });

@@ -63,7 +63,7 @@ describe('Header', () => {

     render(<Header />);

-    expect(screen.getByText('PragmaStack')).toBeInTheDocument();
+    expect(screen.getByText('Syndarix')).toBeInTheDocument();
   });

   it('renders theme toggle', () => {
@@ -27,8 +27,8 @@ describe('metadata utilities', () => {
   });

   it('should have English and Italian names', () => {
-    expect(siteConfig.name.en).toBe('PragmaStack');
-    expect(siteConfig.name.it).toBe('PragmaStack');
+    expect(siteConfig.name.en).toBe('Syndarix');
+    expect(siteConfig.name.it).toBe('Syndarix');
   });

   it('should have English and Italian descriptions', () => {
```