forked from cardosofelipe/pragma-stack

Compare commits: 3 commits (6e3cdebbfb ... bd702734c2)

Commits: bd702734c2, 5594655fba, ebd307cab4
```diff
@@ -1,6 +1,6 @@
-# PragmaStack Backend API
+# Syndarix Backend API

-> The pragmatic, production-ready FastAPI backend for PragmaStack.
+> The pragmatic, production-ready FastAPI backend for Syndarix.

 ## Overview
```
```diff
@@ -5,7 +5,7 @@ from pydantic_settings import BaseSettings

 class Settings(BaseSettings):
-    PROJECT_NAME: str = "PragmaStack"
+    PROJECT_NAME: str = "Syndarix"
     VERSION: str = "1.0.0"
     API_V1_STR: str = "/api/v1"
```
docs/adrs/ADR-007-agentic-framework-selection.md (new file, +320 lines)

# ADR-007: Agentic Framework Selection

**Status:** Accepted
**Date:** 2025-12-29
**Deciders:** Architecture Team
**Related Spikes:** SPIKE-002, SPIKE-005, SPIKE-007

---

## Context

Syndarix requires a robust multi-agent orchestration system capable of:
- Managing 50+ concurrent agent instances
- Supporting long-running workflows (sprints spanning days/weeks)
- Providing durable execution that survives crashes/restarts
- Enabling human-in-the-loop at configurable autonomy levels
- Tracking token usage and costs per agent instance
- Supporting multi-provider LLM failover

We evaluated whether to adopt an existing framework wholesale or build a custom solution.

## Decision Drivers

- **Production Readiness:** Must be battle-tested, not experimental
- **Self-Hostability:** All components must be self-hostable with no mandatory subscriptions
- **Flexibility:** Must support Syndarix-specific patterns (autonomy levels, client approvals)
- **Durability:** Workflows must survive failures, restarts, and deployments
- **Observability:** Full visibility into agent activities and costs
- **Scalability:** Handle 50+ concurrent agents without architectural changes

## Considered Options

### Option 1: CrewAI (Full Framework)

**Pros:**
- Easy to get started (role-based agents)
- Good for sequential/hierarchical workflows
- Strong enterprise traction ($18M Series A, 60% of the Fortune 500)
- LLM-agnostic design

**Cons:**
- Teams report hitting complexity walls after 6-12 months
- Multi-agent coordination can cause infinite loops
- Limited ceiling for complex custom patterns
- Flows architecture adds learning curve without solving durability

**Verdict:** Rejected - insufficient flexibility for Syndarix's complex requirements

### Option 2: AutoGen 0.4 (Full Framework)

**Pros:**
- Event-driven, async-first architecture
- Cross-language support (.NET, Python)
- Built-in observability (OpenTelemetry)
- Microsoft ecosystem integration

**Cons:**
- Tied to Microsoft patterns
- Less flexible for custom orchestration
- Newer 0.4 version still maturing
- No built-in durability for week-long workflows

**Verdict:** Rejected - too opinionated, insufficient durability

### Option 3: LangGraph + Custom Infrastructure (Hybrid)

**Pros:**
- Fine-grained control over agent flow
- Excellent state management with PostgreSQL persistence
- Human-in-the-loop built-in
- Production-proven (Klarna, Replit, Elastic)
- Fully open source (MIT license)
- Can implement any pattern (supervisor, hierarchical, peer-to-peer)

**Cons:**
- Steep learning curve (graph theory, state machines)
- Needs additional infrastructure for durability (Temporal)
- Observability requires additional tooling

**Verdict:** Selected as foundation

### Option 4: Fully Custom Solution

**Pros:**
- Complete control
- No external dependencies
- Tailored to exact requirements

**Cons:**
- Reinvents production-tested solutions
- Higher development and maintenance cost
- Longer time to market
- More bugs in critical path

**Verdict:** Rejected - unnecessary when proven components exist

## Decision

**Adopt a hybrid architecture using LangGraph as the core agent framework**, complemented by:

1. **LangGraph** - Agent state machines and logic
2. **Temporal** - Durable workflow execution
3. **Redis Streams** - Agent-to-agent communication
4. **LiteLLM** - Unified LLM access with failover
5. **PostgreSQL + pgvector** - State persistence and RAG

### Architecture Overview

```
┌─────────────────────────────────────────────────────────────────────────┐
│                      Syndarix Agentic Architecture                      │
├─────────────────────────────────────────────────────────────────────────┤
│                                                                         │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                     Temporal Workflow Engine                      │  │
│  │                                                                   │  │
│  │  • Durable execution (survives crashes, restarts, deployments)    │  │
│  │  • Human approval checkpoints (wait indefinitely for client)      │  │
│  │  • Long-running workflows (projects spanning weeks/months)        │  │
│  │  • Built-in retry policies and timeouts                           │  │
│  │                                                                   │  │
│  │  License: MIT | Self-Hosted: Yes | Subscription: None Required    │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                   │                                     │
│                                   ▼                                     │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                      LangGraph Agent Runtime                      │  │
│  │                                                                   │  │
│  │  • Graph-based state machines for agent logic                     │  │
│  │  • Persistent checkpoints to PostgreSQL                           │  │
│  │  • Cycles, conditionals, parallel execution                       │  │
│  │  • Human-in-the-loop first-class support                          │  │
│  │                                                                   │  │
│  │  ┌─────────────────────────────────────────────────────────────┐  │  │
│  │  │                      Agent State Graph                      │  │  │
│  │  │    [IDLE] ──► [THINKING] ──► [EXECUTING] ──► [WAITING]      │  │  │
│  │  │      ▲             │              │              │          │  │  │
│  │  │      └─────────────┴──────────────┴──────────────┘          │  │  │
│  │  └─────────────────────────────────────────────────────────────┘  │  │
│  │                                                                   │  │
│  │  License: MIT | Self-Hosted: Yes | Subscription: None Required    │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                   │                                     │
│                                   ▼                                     │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                 Redis Streams Communication Layer                 │  │
│  │                                                                   │  │
│  │  • Agent-to-Agent messaging (A2A protocol concepts)               │  │
│  │  • Event-driven architecture                                      │  │
│  │  • Real-time activity streaming to UI                             │  │
│  │  • Project-scoped message channels                                │  │
│  │                                                                   │  │
│  │  License: BSD-3 | Self-Hosted: Yes | Subscription: None Required  │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                   │                                     │
│                                   ▼                                     │
│  ┌───────────────────────────────────────────────────────────────────┐  │
│  │                          LiteLLM Gateway                          │  │
│  │                                                                   │  │
│  │  • Unified API for 100+ LLM providers                             │  │
│  │  • Automatic failover chains (Claude → GPT-4 → Ollama)            │  │
│  │  • Token counting and cost calculation                            │  │
│  │  • Rate limiting and load balancing                               │  │
│  │                                                                   │  │
│  │  License: MIT | Self-Hosted: Yes | Subscription: None Required    │  │
│  └───────────────────────────────────────────────────────────────────┘  │
│                                                                         │
└─────────────────────────────────────────────────────────────────────────┘
```

### Component Responsibilities

| Component | Responsibility | Why This Choice |
|-----------|---------------|-----------------|
| **LangGraph** | Agent state machines, tool execution, reasoning loops | Production-proven, fine-grained control, PostgreSQL checkpointing |
| **Temporal** | Durable workflows, human approvals, long-running orchestration | Only solution for week-long workflows that survive failures |
| **Redis Streams** | Agent messaging, real-time events, pub/sub | Low-latency, persistent streams, consumer groups |
| **LiteLLM** | LLM abstraction, failover, cost tracking | Unified API, automatic failover, no vendor lock-in |
| **PostgreSQL** | State persistence, audit logs, agent data | Already in stack, pgvector for RAG |
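
As a concrete illustration of the Redis Streams layer, the message envelope can be sketched as a thin serialization wrapper. This is a sketch, not the Syndarix implementation: the `AgentMessage` field names and the `syndarix:project:{id}:messages` channel scheme are assumptions, and the actual `redis.asyncio` calls are shown only as comments so the example stays self-contained.

```python
import json
import uuid
from dataclasses import dataclass, field


@dataclass
class AgentMessage:
    """Message passed between agents over a project-scoped Redis stream."""
    source_agent_id: str
    target_agent_id: str
    message_type: str
    payload: dict
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_fields(self) -> dict:
        # Redis stream entries are flat string->string maps, so the
        # structured payload is JSON-encoded into a single field.
        return {
            "id": self.message_id,
            "source": self.source_agent_id,
            "target": self.target_agent_id,
            "type": self.message_type,
            "payload": json.dumps(self.payload),
        }

    @classmethod
    def from_fields(cls, fields: dict) -> "AgentMessage":
        return cls(
            source_agent_id=fields["source"],
            target_agent_id=fields["target"],
            message_type=fields["type"],
            payload=json.loads(fields["payload"]),
            message_id=fields["id"],
        )


def stream_key(project_id: str) -> str:
    """Project-scoped channel name (naming scheme is an assumption)."""
    return f"syndarix:project:{project_id}:messages"

# With a live connection (redis.asyncio), publishing would be roughly:
#     await r.xadd(stream_key(project_id), msg.to_fields())
# and consuming, via a consumer group per agent:
#     entries = await r.xreadgroup(group, consumer, {stream_key(project_id): ">"})
```

Keeping serialization separate from transport makes the envelope testable without a running Redis.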

### Self-Hostability Guarantee

All components are fully self-hostable with permissive open-source licenses:

| Component | License | Paid Cloud Alternative | Required for Syndarix? |
|-----------|---------|----------------------|----------------------|
| LangGraph | MIT | LangSmith (observability) | No - use LangFuse or custom |
| Temporal | MIT | Temporal Cloud | No - self-host server |
| LiteLLM | MIT | LiteLLM Enterprise | No - self-host proxy |
| Redis | BSD-3 | Redis Cloud | No - self-host |
| PostgreSQL | PostgreSQL | Various managed DBs | No - self-host |

**No mandatory subscriptions.** All paid alternatives are optional cloud-managed offerings.

### What We Build vs. What We Use

| Concern | Approach | Rationale |
|---------|----------|-----------|
| Agent Logic | **USE LangGraph** | Don't reinvent state machines |
| LLM Access | **USE LiteLLM** | Don't reinvent provider abstraction |
| Durability | **USE Temporal** | Don't reinvent durable execution |
| Messaging | **USE Redis Streams** | Don't reinvent pub/sub |
| Orchestration | **BUILD thin layer** | Syndarix-specific (autonomy levels, team structure) |
| Agent Spawning | **BUILD thin layer** | Type-Instance pattern specific to Syndarix |
| Cost Attribution | **BUILD thin layer** | Per-agent, per-project tracking specific to Syndarix |
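
The cost-attribution layer flagged as BUILD above can be a small aggregator over per-call usage records. A minimal sketch with hypothetical `UsageRecord`/`CostTracker` names; in practice the records would be fed from LiteLLM's usage and cost reporting rather than constructed by hand.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class UsageRecord:
    agent_id: str
    project_id: str
    prompt_tokens: int
    completion_tokens: int
    cost_usd: float  # reported per LLM call


class CostTracker:
    """Aggregates LLM spend per agent instance and per project."""

    def __init__(self) -> None:
        self._records: list[UsageRecord] = []

    def record(self, rec: UsageRecord) -> None:
        self._records.append(rec)

    def cost_by_agent(self) -> dict:
        totals = defaultdict(float)
        for rec in self._records:
            totals[rec.agent_id] += rec.cost_usd
        return dict(totals)

    def cost_by_project(self) -> dict:
        totals = defaultdict(float)
        for rec in self._records:
            totals[rec.project_id] += rec.cost_usd
        return dict(totals)
```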

### Integration Pattern

```python
# Example: How the layers integrate
from datetime import timedelta

import litellm
from temporalio import workflow
from langgraph.graph import StateGraph
from langgraph.checkpoint.postgres import PostgresSaver


# 1. Temporal orchestrates the high-level workflow
@workflow.defn
class SprintWorkflow:
    @workflow.run
    async def run(self, sprint: SprintConfig) -> SprintResult:
        # Spawns agents and waits for completion
        agents = await workflow.execute_activity(spawn_agent_team, sprint)

        # Each agent runs a LangGraph state machine
        results = await workflow.execute_activity(
            run_agent_tasks,
            agents,
            start_to_close_timeout=timedelta(days=7),
        )

        # Human checkpoint (waits indefinitely)
        if sprint.autonomy_level != AutonomyLevel.AUTONOMOUS:
            await workflow.wait_condition(lambda: self._approved)

        return results


# 2. LangGraph handles individual agent logic
def create_agent_graph() -> StateGraph:
    graph = StateGraph(AgentState)
    graph.add_node("think", think_node)      # LLM reasoning
    graph.add_node("execute", execute_node)  # Tool calls via MCP
    graph.add_node("handoff", handoff_node)  # Message to other agent
    # ... state transitions
    return graph.compile(checkpointer=PostgresSaver(...))


# 3. LiteLLM handles LLM calls with failover
async def think_node(state: AgentState) -> AgentState:
    response = await litellm.acompletion(
        model="claude-sonnet-4-20250514",
        messages=state["messages"],
        fallbacks=["gpt-4-turbo", "ollama/llama3"],
        metadata={"agent_id": state["agent_id"]},
    )
    return {"messages": [response.choices[0].message]}


# 4. Redis Streams handles agent communication
async def handoff_node(state: AgentState) -> AgentState:
    await message_bus.publish(AgentMessage(
        source_agent_id=state["agent_id"],
        target_agent_id=state["handoff_target"],
        message_type="TASK_HANDOFF",
        payload=state["handoff_context"],
    ))
    return state
```

## Consequences

### Positive

- **Production-tested foundations** - LangGraph, Temporal, LiteLLM are battle-tested
- **No subscription lock-in** - All components self-hostable under permissive licenses
- **Right tool for each job** - Specialized components for durability, state, communication
- **Escape hatches** - Can replace any component without a full rewrite
- **Enterprise patterns** - Temporal used by Netflix, Uber, Stripe for similar problems

### Negative

- **Multiple technologies to learn** - Team needs LangGraph, Temporal, Redis Streams knowledge
- **Operational complexity** - More services to deploy and monitor
- **Integration work** - Thin glue layers needed between components

### Mitigation

- **Learning curve** - Start with simple 2-3 agent workflows, expand gradually
- **Operational complexity** - Use Docker Compose locally, consider managed services for production if needed
- **Integration** - Create clear abstractions; each layer only knows its immediate neighbors

## Compliance

This decision aligns with:
- **FR-101-105**: Agent orchestration requirements
- **FR-301-305**: Workflow execution requirements
- **NFR-501**: Self-hosting requirement (all components MIT/BSD licensed)
- **TC-001**: PostgreSQL as primary database
- **TC-002**: Redis for caching and messaging

## Alternatives Not Chosen

### LangSmith for Observability

LangSmith is LangChain's paid observability platform. Instead, we will:
- Use **LangFuse** (open source, self-hostable) for LLM observability
- Use **Temporal UI** (built-in) for workflow visibility
- Build custom dashboards for Syndarix-specific metrics

### Temporal Cloud

Temporal offers a managed cloud service. Instead, we will:
- Self-host the Temporal server (single-node to start, cluster for scale)
- Use PostgreSQL as Temporal's persistence backend (already in stack)

## References

- [LangGraph Documentation](https://langchain-ai.github.io/langgraph/)
- [Temporal.io Documentation](https://docs.temporal.io/)
- [LiteLLM Documentation](https://docs.litellm.ai/)
- [LangFuse (Open Source LLM Observability)](https://langfuse.com/)
- [SPIKE-002: Agent Orchestration Pattern](../spikes/SPIKE-002-agent-orchestration-pattern.md)
- [SPIKE-005: LLM Provider Abstraction](../spikes/SPIKE-005-llm-provider-abstraction.md)

---

*This ADR establishes the foundational framework choices for Syndarix's multi-agent orchestration system.*

docs/architecture/ARCHITECTURE_DEEP_ANALYSIS.md (new file, +680 lines)

# Syndarix Architecture Deep Analysis

**Version:** 1.0
**Date:** 2025-12-29
**Status:** Draft - Architectural Thinking

---

## Executive Summary

This document captures deep architectural thinking about Syndarix beyond the immediate spikes. It addresses complex challenges that arise when building a truly autonomous multi-agent system and proposes solutions based on first principles.

---

## 1. Agent Memory and Context Management

### The Challenge

Agents in Syndarix may work on projects for weeks or months. LLM context windows are finite (128K-200K tokens), but project context grows without bound. How do we maintain coherent agent "memory" over time?

### Analysis

**Context Window Constraints:**

| Model | Context Window | Practical Limit (with tools) |
|-------|---------------|------------------------------|
| Claude 3.5 Sonnet | 200K tokens | ~150K usable |
| GPT-4 Turbo | 128K tokens | ~100K usable |
| Llama 3 (70B) | 8K-128K tokens | ~80K usable |

**Memory Types Needed:**
1. **Working Memory** - Current task context (fits in the context window)
2. **Short-term Memory** - Recent conversation history (RAG-retrievable)
3. **Long-term Memory** - Project knowledge, past decisions (RAG + summarization)
4. **Episodic Memory** - Specific past events/mistakes to learn from

### Proposed Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                       Agent Memory System                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐           │
│  │   Working    │  │  Short-term  │  │  Long-term   │           │
│  │    Memory    │  │    Memory    │  │    Memory    │           │
│  │  (Context)   │  │   (Redis)    │  │  (pgvector)  │           │
│  └──────┬───────┘  └──────┬───────┘  └──────┬───────┘           │
│         │                 │                 │                   │
│         └─────────────────┼─────────────────┘                   │
│                           │                                     │
│                           ▼                                     │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                    Context Assembler                     │   │
│  │                                                          │   │
│  │  1. System prompt (agent personality, role)              │   │
│  │  2. Project context (from long-term memory)              │   │
│  │  3. Task context (current issue, requirements)           │   │
│  │  4. Relevant history (from short-term memory)            │   │
│  │  5. User message                                         │   │
│  │                                                          │   │
│  │  Total: Fit within context window limits                 │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Context Compression Strategy:**
```python
class ContextManager:
    """Manages agent context to fit within LLM limits."""

    MAX_CONTEXT_TOKENS = 100_000  # Leave room for response

    async def build_context(
        self,
        agent: AgentInstance,
        task: Task,
        user_message: str,
    ) -> list[Message]:
        # Fixed costs
        system_prompt = self._get_system_prompt(agent)  # ~2K tokens
        task_context = self._get_task_context(task)     # ~1K tokens

        # Variable budget
        remaining = self.MAX_CONTEXT_TOKENS - token_count(
            system_prompt, task_context, user_message
        )

        # Allocate remaining to memories
        long_term = await self._query_long_term(agent, task, budget=remaining * 0.4)
        short_term = await self._get_short_term(agent, budget=remaining * 0.4)
        episodic = await self._get_relevant_episodes(agent, task, budget=remaining * 0.2)

        return self._assemble_messages(
            system_prompt, task_context, long_term, short_term, episodic, user_message
        )
```
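
The `token_count` helper referenced in `ContextManager` is left undefined above. A rough sketch using a 4-characters-per-token heuristic (an approximation only; a real implementation would use the provider's tokenizer), together with the 40/40/20 budget split made explicit:

```python
def token_count(*texts: str) -> int:
    """Rough token estimate: ~4 characters per token.
    Heuristic only; a real implementation would use the model's tokenizer."""
    return sum(max(1, len(t) // 4) for t in texts)


def split_budget(remaining: int) -> dict:
    """Allocate the leftover context budget across memory tiers
    using the 40/40/20 split from ContextManager above."""
    return {
        "long_term": int(remaining * 0.4),
        "short_term": int(remaining * 0.4),
        "episodic": int(remaining * 0.2),
    }
```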

**Conversation Summarization:**
- After every N turns (e.g., 10), summarize the conversation and archive it
- Use a smaller/cheaper model for summarization
- Store summaries in pgvector for semantic retrieval
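
The turn-based summarization policy above can be sketched as a pure function that folds older turns into a summary; the summarizer itself (e.g., a cheaper model) is injected as a callable. Function and parameter names are illustrative, not the Syndarix API.

```python
def compact_history(
    messages: list,
    summarize,            # callable: list[str] -> str, e.g. a cheaper model
    every_n_turns: int = 10,
    keep_recent: int = 4,
):
    """After every N turns, fold older messages into a summary and
    return (compacted_history, archived_summaries).

    Archived summaries would then be embedded and stored in pgvector.
    """
    archived = []
    if len(messages) < every_n_turns:
        return messages, archived
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    summary = summarize(old)
    archived.append(summary)
    # Keep a compact stand-in for the old turns plus the recent tail.
    return [f"[summary] {summary}", *recent], archived
```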

### Recommendation

Implement a **tiered memory system** with automatic context compression and semantic retrieval. Use Redis for hot short-term memory, pgvector for cold long-term memory, and automatic summarization to prevent context overflow.

---

## 2. Cross-Project Knowledge Sharing

### The Challenge

Each project has isolated knowledge, but agents could benefit from cross-project learnings:
- Common patterns (authentication, testing, CI/CD)
- Technology expertise (how to configure Kubernetes)
- Anti-patterns (what didn't work before)

### Analysis

**Privacy Considerations:**
- Client data must remain isolated (contractual, legal)
- Technical patterns are generally shareable
- Need clear data classification

**Knowledge Categories:**

| Category | Scope | Examples |
|----------|-------|----------|
| **Client Data** | Project-only | Requirements, business logic, code |
| **Technical Patterns** | Global | Best practices, configurations |
| **Agent Learnings** | Global | What approaches worked/failed |
| **Anti-patterns** | Global | Common mistakes to avoid |

### Proposed Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                         Knowledge Graph                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                     GLOBAL KNOWLEDGE                     │   │
│  │   ┌─────────────┐  ┌─────────────┐  ┌─────────────┐      │   │
│  │   │  Patterns   │  │Anti-patterns│  │  Expertise  │      │   │
│  │   │   Library   │  │   Library   │  │    Index    │      │   │
│  │   └─────────────┘  └─────────────┘  └─────────────┘      │   │
│  └──────────────────────────────────────────────────────────┘   │
│                             ▲                                   │
│                             │  Curated extraction               │
│                             │                                   │
│    ┌─────────────┐   ┌─────────────┐   ┌─────────────┐          │
│    │  Project A  │   │  Project B  │   │  Project C  │          │
│    │  Knowledge  │   │  Knowledge  │   │  Knowledge  │          │
│    │ (Isolated)  │   │ (Isolated)  │   │ (Isolated)  │          │
│    └─────────────┘   └─────────────┘   └─────────────┘          │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Knowledge Extraction Pipeline:**
```python
class KnowledgeExtractor:
    """Extracts shareable learnings from project work."""

    async def extract_learnings(self, project_id: str) -> list[Learning]:
        """
        Run periodically or after sprints to extract learnings.
        Human review required before promoting to global.
        """
        # Get completed work
        completed_issues = await self.get_completed_issues(project_id)

        # Extract patterns using LLM
        patterns = await self.llm.extract_patterns(
            completed_issues,
            categories=["architecture", "testing", "deployment", "security"],
        )

        # Classify privacy
        for pattern in patterns:
            pattern.privacy_level = await self.llm.classify_privacy(pattern)

        # Return only shareable patterns for review
        return [p for p in patterns if p.privacy_level == "public"]
```

### Recommendation

Implement **privacy-aware knowledge extraction** with a human review gate. Project knowledge stays isolated by default; only explicitly approved patterns flow to global knowledge.
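
The review gate itself can be made explicit in code. A minimal sketch with hypothetical `Learning` fields: a learning crosses the project boundary only if it is both classified public and carries a human approval.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Learning:
    title: str
    privacy_level: str                # "public" | "internal" | "client"
    approved_by: Optional[str] = None  # human reviewer; required for promotion


def promote_to_global(candidates: list) -> list:
    """Only learnings that are classified public AND human-approved
    are promoted from project-scoped to global knowledge."""
    return [
        c for c in candidates
        if c.privacy_level == "public" and c.approved_by is not None
    ]
```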

---

## 3. Agent Specialization vs. Generalization Trade-offs

### The Challenge

Should each agent type be highly specialized (depth) or have overlapping capabilities (breadth)?

### Analysis

**Specialization Benefits:**
- Deeper expertise in its domain
- Cleaner system prompts
- Less confusion about responsibilities
- Easier to optimize prompts per role

**Generalization Benefits:**
- Fewer agent types to maintain
- Smoother handoffs (shared context)
- More flexible team composition
- Graceful degradation if an agent is unavailable

**Current Agent Types (10):**

| Role | Primary Domain | Potential Overlap |
|------|---------------|-------------------|
| Product Owner | Requirements | Business Analyst |
| Business Analyst | Documentation | Product Owner |
| Project Manager | Planning | Product Owner |
| Software Architect | Design | Senior Engineer |
| Software Engineer | Coding | Architect, QA |
| UI/UX Designer | Interface | Frontend Engineer |
| QA Engineer | Testing | Software Engineer |
| DevOps Engineer | Infrastructure | Senior Engineer |
| AI/ML Engineer | ML/AI | Software Engineer |
| Security Expert | Security | All |

### Proposed Approach: Layered Specialization

```
┌─────────────────────────────────────────────────────────────────┐
│                     Agent Capability Layers                     │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Layer 3: Role-Specific Expertise                               │
│  ┌─────────┐  ┌─────────┐  ┌─────────┐  ┌─────────┐             │
│  │ Product │  │Architect│  │Engineer │  │   QA    │             │
│  │  Owner  │  │         │  │         │  │         │             │
│  └────┬────┘  └────┬────┘  └────┬────┘  └────┬────┘             │
│       │            │            │            │                  │
│  Layer 2: Shared Professional Skills                            │
│  ┌──────────────────────────────────────────────────────┐       │
│  │  Technical Communication | Code Understanding | Git  │       │
│  │   Documentation | Research | Problem Decomposition   │       │
│  └──────────────────────────────────────────────────────┘       │
│                              │                                  │
│  Layer 1: Foundation Model Capabilities                         │
│  ┌──────────────────────────────────────────────────────┐       │
│  │  Reasoning | Analysis | Writing | Coding (LLM Base)  │       │
│  └──────────────────────────────────────────────────────┘       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Capability Inheritance:**
```python
class AgentTypeBuilder:
    """Builds agent types with layered capabilities."""

    BASE_CAPABILITIES = [
        "reasoning", "analysis", "writing", "coding_assist",
    ]

    PROFESSIONAL_SKILLS = [
        "technical_communication", "code_understanding",
        "git_operations", "documentation", "research",
    ]

    ROLE_SPECIFIC = {
        "ENGINEER": ["code_generation", "code_review", "testing", "debugging"],
        "ARCHITECT": ["system_design", "adr_writing", "tech_selection"],
        "QA": ["test_planning", "test_automation", "bug_reporting"],
        # ...
    }

    def build_capabilities(self, role: AgentRole) -> list[str]:
        return (
            self.BASE_CAPABILITIES +
            self.PROFESSIONAL_SKILLS +
            self.ROLE_SPECIFIC[role]
        )
```

### Recommendation

Adopt **layered specialization** where all agents share foundational and professional capabilities, with role-specific expertise on top. This enables smooth collaboration while maintaining clear responsibilities.

---

## 4. Human-Agent Collaboration Model

### The Challenge

Beyond approval gates, how do humans effectively collaborate with autonomous agents during active work?

### Interaction Patterns

| Pattern | Use Case | Frequency |
|---------|----------|-----------|
| **Approval** | Confirm before action | Per checkpoint |
| **Guidance** | Steer direction | On-demand |
| **Override** | Correct a mistake | Rare |
| **Pair Working** | Work together | Optional |
| **Review** | Evaluate output | Post-completion |

### Proposed Collaboration Interface

```
┌─────────────────────────────────────────────────────────────────┐
│               Human-Agent Collaboration Dashboard               │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                     Activity Stream                      │   │
│  │  ──────────────────────────────────────────────────────  │   │
│  │  [10:23] Dave (Engineer) is implementing login API       │   │
│  │  [10:24] Dave created auth/service.py                    │   │
│  │  [10:25] Dave is writing unit tests                      │   │
│  │  [LIVE] Dave: "I'm adding JWT validation. Using HS256..."│   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │                    Intervention Panel                    │   │
│  │                                                          │   │
│  │  [💬 Chat]  [⏸️ Pause]  [↩️ Undo Last]  [📝 Guide]       │   │
│  │                                                          │   │
│  │  Quick Guidance:                                         │   │
│  │  ┌─────────────────────────────────────────────────┐     │   │
│  │  │ "Use RS256 instead of HS256 for JWT signing"    │     │   │
│  │  │                                   [Send] 📤     │     │   │
│  │  └─────────────────────────────────────────────────┘     │   │
│  └──────────────────────────────────────────────────────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Intervention API:**
```python
from uuid import UUID

from fastapi import Depends


@router.post("/agents/{agent_id}/intervene")
async def intervene(
    agent_id: UUID,
    intervention: InterventionRequest,
    current_user: User = Depends(get_current_user),
):
    """Allow a human to intervene in an agent's work."""
    match intervention.type:
        case "pause":
            await orchestrator.pause_agent(agent_id)
        case "resume":
            await orchestrator.resume_agent(agent_id)
        case "guide":
            await orchestrator.send_guidance(agent_id, intervention.message)
        case "undo":
            await orchestrator.undo_last_action(agent_id)
        case "override":
            await orchestrator.override_decision(agent_id, intervention.decision)
```
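
The endpoint above assumes an `InterventionRequest` model. A dependency-free sketch of its validation rules follows; the field names are assumptions, and a real implementation would likely use a Pydantic model instead of a plain dataclass.

```python
from dataclasses import dataclass
from typing import Optional

ALLOWED_INTERVENTIONS = {"pause", "resume", "guide", "undo", "override"}


@dataclass
class InterventionRequest:
    type: str
    message: Optional[str] = None    # required for "guide"
    decision: Optional[dict] = None  # required for "override"

    def validate(self) -> None:
        if self.type not in ALLOWED_INTERVENTIONS:
            raise ValueError(f"unknown intervention type: {self.type}")
        if self.type == "guide" and not self.message:
            raise ValueError("guide interventions need a message")
        if self.type == "override" and self.decision is None:
            raise ValueError("override interventions need a decision")
```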

### Recommendation

Build a **real-time collaboration dashboard** with intervention capabilities. Humans should be able to observe, guide, pause, and correct agents without stopping the entire workflow.

---

## 5. Testing Strategy for Autonomous AI Systems

### The Challenge

Traditional testing (unit, integration, E2E) doesn't capture autonomous agent behavior. How do we ensure quality?

### Testing Pyramid for AI Agents

```
                    ▲
                   ╱ ╲
                  ╱   ╲
                 ╱ E2E ╲           Agent Scenarios
                ╱ Agent ╲          (Full workflows)
               ╱─────────╲
              ╱Integration╲        Tool + LLM Integration
             ╱ (with mocks)╲       (Deterministic responses)
            ╱───────────────╲
           ╱   Unit Tests    ╲     Orchestrator, Services
          ╱  (no LLM needed)  ╲    (Pure logic)
         ╱─────────────────────╲
        ╱    Prompt Testing     ╲  System prompt evaluation
       ╱       (LLM evals)       ╲ (Quality metrics)
      ╱───────────────────────────╲
```

### Test Categories

**1. Prompt Testing (Eval Framework):**
```python
class PromptEvaluator:
    """Evaluate system prompt quality."""

    TEST_CASES = [
        EvalCase(
            name="requirement_extraction",
            input="Client wants a mobile app for food delivery",
            expected_behaviors=[
                "asks clarifying questions",
                "identifies stakeholders",
                "considers non-functional requirements",
            ],
        ),
        EvalCase(
            name="code_review_thoroughness",
            input="Review this PR: [vulnerable SQL code]",
            expected_behaviors=[
                "identifies SQL injection",
                "suggests parameterized queries",
                "mentions security best practices",
            ],
        ),
    ]

    async def evaluate(self, agent_type: AgentType) -> EvalReport:
        results = []
        for case in self.TEST_CASES:
            response = await self.llm.complete(
                system=agent_type.system_prompt,
                user=case.input,
            )
            score = await self.judge_response(response, case.expected_behaviors)
            results.append(score)
        return EvalReport(results)
```
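
`judge_response` is referenced but not defined above. In practice it would be an LLM-as-judge call; for deterministic CI runs, a keyword-based fallback scorer can stand in. A sketch (the scoring heuristic is an assumption, not the Syndarix evaluator):

```python
def judge_response(response: str, expected_behaviors: list) -> float:
    """Deterministic fallback scorer: fraction of expected behaviors
    whose significant words appear in the response. A real evaluator
    would use an LLM-as-judge with a rubric; this keyword match is
    only a cheap CI smoke check."""
    if not expected_behaviors:
        return 1.0
    text = response.lower()
    hits = 0
    for behavior in expected_behaviors:
        # Count a behavior as covered if any word longer than 3 chars appears.
        words = [w for w in behavior.lower().split() if len(w) > 3]
        if any(w in text for w in words):
            hits += 1
    return hits / len(expected_behaviors)
```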

**2. Integration Testing (Mock LLM):**

```python
@pytest.fixture
def mock_llm():
    """Deterministic LLM responses for integration tests."""
    responses = {
        "analyze requirements": "...",
        "generate code": "def hello(): return 'world'",
        "review code": "LGTM"
    }
    return MockLLM(responses)


async def test_story_implementation_workflow(mock_llm):
    """Test full workflow with predictable responses."""
    orchestrator = AgentOrchestrator(llm=mock_llm)

    result = await orchestrator.execute_workflow(
        workflow="implement_story",
        inputs={"story_id": "TEST-123"}
    )

    assert result.status == "completed"
    assert "hello" in result.artifacts["code"]
```
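
The `MockLLM` used by the fixture above is not defined in this document; a minimal sketch (the class name and the substring-matching lookup are assumptions, not the final implementation) could look like:

```python
class MockLLM:
    """Deterministic stand-in for an LLM client in integration tests."""

    def __init__(self, responses: dict[str, str]):
        self.responses = responses

    async def complete(self, system: str, user: str) -> str:
        # Return the canned response whose key appears in the user prompt.
        for key, response in self.responses.items():
            if key in user.lower():
                return response
        raise KeyError(f"No canned response matches prompt: {user!r}")
```

Because the lookup is keyed on prompt content, the same fixture can serve every workflow step without the tests depending on exact prompt wording.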

**3. Agent Scenario Testing:**

```python
class AgentScenarioTest:
    """End-to-end agent behavior testing."""

    @scenario("engineer_handles_bug_report")
    async def test_bug_resolution(self):
        """Engineer agent should fix bugs correctly."""
        # Setup
        project = await create_test_project()
        engineer = await spawn_agent("engineer", project)

        # Act
        bug = await create_issue(
            project,
            title="Login button not working",
            type="bug"
        )
        result = await engineer.handle(bug)

        # Assert
        assert result.pr_created
        assert result.tests_pass
        assert "button" in result.changes_summary.lower()
```

### Recommendation

Implement a **multi-layer testing strategy** with prompt evals, deterministic integration tests, and scenario-based agent testing. Use LLM-as-judge for evaluating open-ended responses.

---

## 6. Rollback and Recovery

### The Challenge

Autonomous agents will make mistakes. How do we recover gracefully?

### Error Categories

| Category | Example | Recovery Strategy |
|----------|---------|-------------------|
| **Reversible** | Wrong code generated | Revert commit, regenerate |
| **Partially Reversible** | Merged bad PR | Revert PR, fix, re-merge |
| **Non-reversible** | Deployed to production | Forward-fix or rollback deploy |
| **External Side Effects** | Email sent to client | Apology + correction |

### Recovery Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                         Recovery System                         │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │                       Action Log                        │   │
│  │  ┌──────────────────────────────────────────────────┐   │   │
│  │  │ Action ID | Agent | Type      | Reversible | State│   │   │
│  │  ├──────────────────────────────────────────────────┤   │   │
│  │  │ a-001 | Dave | commit    | Yes     | completed   │   │   │
│  │  │ a-002 | Dave | push      | Yes     | completed   │   │   │
│  │  │ a-003 | Dave | create_pr | Yes     | completed   │   │   │
│  │  │ a-004 | Kate | merge_pr  | Partial | completed   │   │   │
│  │  └──────────────────────────────────────────────────┘   │   │
│  └─────────────────────────────────────────────────────────┘   │
│                                                                 │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │                     Rollback Engine                     │   │
│  │                                                         │   │
│  │  rollback_to(action_id) -> Reverses all actions after   │   │
│  │  undo_action(action_id) -> Reverses single action       │   │
│  │  compensate(action_id)  -> Creates compensating action  │   │
│  │                                                         │   │
│  └─────────────────────────────────────────────────────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

**Action Logging:**

```python
class ActionLog:
    """Immutable log of all agent actions for recovery."""

    async def record(
        self,
        agent_id: UUID,
        action_type: str,
        inputs: dict,
        outputs: dict,
        reversible: bool,
        reverse_action: str | None = None
    ) -> ActionRecord:
        record = ActionRecord(
            id=uuid4(),
            agent_id=agent_id,
            action_type=action_type,
            inputs=inputs,
            outputs=outputs,
            reversible=reversible,
            reverse_action=reverse_action,
            timestamp=datetime.utcnow()
        )
        await self.db.add(record)
        return record

    async def rollback_to(self, action_id: UUID) -> RollbackResult:
        """Rollback all actions after the given action."""
        actions = await self.get_actions_after(action_id)

        results = []
        for action in reversed(actions):
            if action.reversible:
                result = await self._execute_reverse(action)
                results.append(result)
            else:
                results.append(RollbackSkipped(action, reason="non-reversible"))

        return RollbackResult(results)
```

**Compensation Pattern:**

```python
class CompensationEngine:
    """Handles compensating actions for non-reversible operations."""

    COMPENSATIONS = {
        "email_sent": "send_correction_email",
        "deployment": "rollback_deployment",
        "external_api_call": "create_reversal_request"
    }

    async def compensate(self, action: ActionRecord) -> CompensationResult:
        if action.action_type in self.COMPENSATIONS:
            compensation = self.COMPENSATIONS[action.action_type]
            return await self._execute_compensation(compensation, action)
        else:
            return CompensationResult(
                status="manual_required",
                message=f"No automatic compensation for {action.action_type}"
            )
```

### Recommendation

Implement **comprehensive action logging** with rollback capabilities. Define compensation strategies for non-reversible actions. Enable point-in-time recovery for project state.

---

## 7. Security Considerations for Autonomous Agents

### Threat Model

| Threat | Risk | Mitigation |
|--------|------|------------|
| Agent executes malicious code | High | Sandboxed execution, code review gates |
| Agent exfiltrates data | High | Network isolation, output filtering |
| Prompt injection via user input | Medium | Input sanitization, prompt hardening |
| Agent credential abuse | Medium | Least-privilege tokens, short TTL |
| Agent collusion | Low | Independent agent instances, monitoring |

### Security Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                        Security Layers                          │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  Layer 4: Output Filtering                                      │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │ - Code scan before commit                               │   │
│  │ - Secrets detection                                     │   │
│  │ - Policy compliance check                               │   │
│  └─────────────────────────────────────────────────────────┘   │
│                                                                 │
│  Layer 3: Action Authorization                                  │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │ - Role-based permissions                                │   │
│  │ - Project scope enforcement                             │   │
│  │ - Sensitive action approval                             │   │
│  └─────────────────────────────────────────────────────────┘   │
│                                                                 │
│  Layer 2: Input Sanitization                                    │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │ - Prompt injection detection                            │   │
│  │ - Content filtering                                     │   │
│  │ - Schema validation                                     │   │
│  └─────────────────────────────────────────────────────────┘   │
│                                                                 │
│  Layer 1: Infrastructure Isolation                              │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │ - Container sandboxing                                  │   │
│  │ - Network segmentation                                  │   │
│  │ - File system restrictions                              │   │
│  └─────────────────────────────────────────────────────────┘   │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```
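
Layer 2's prompt-injection detection can start as a simple pattern-based scanner before graduating to model-based classifiers. A minimal sketch follows; the patterns and threshold are illustrative only and are not a complete defense:

```python
import re

# Illustrative signatures of common injection attempts (not exhaustive).
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now",
    r"system prompt",
    r"disregard .* rules",
]


def injection_score(text: str) -> float:
    """Fraction of known injection patterns found in the input."""
    hits = sum(bool(re.search(p, text, re.IGNORECASE)) for p in INJECTION_PATTERNS)
    return hits / len(INJECTION_PATTERNS)


def is_suspicious(text: str, threshold: float = 0.25) -> bool:
    return injection_score(text) >= threshold
```

Flagged inputs would be routed to Layer 3 for human review rather than silently dropped, so false positives remain visible.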
### Recommendation

Implement **defense-in-depth** with multiple security layers. Assume agents can be compromised and design for containment.

---

## Summary of Recommendations

| Area | Recommendation | Priority |
|------|----------------|----------|
| Memory | Tiered memory with context compression | High |
| Knowledge | Privacy-aware extraction with human gate | Medium |
| Specialization | Layered capabilities with role-specific top | Medium |
| Collaboration | Real-time dashboard with intervention | High |
| Testing | Multi-layer with prompt evals | High |
| Recovery | Action logging with rollback engine | High |
| Security | Defense-in-depth, assume compromise | High |

---

## Next Steps

1. **Validate with spike research** - Update based on spike findings
2. **Create detailed ADRs** - For memory, recovery, security
3. **Prototype critical paths** - Memory system, rollback engine
4. **Security review** - External audit before production

---

*This document captures architectural thinking to guide implementation. It should be updated as spikes complete and design evolves.*
339 docs/architecture/IMPLEMENTATION_ROADMAP.md Normal file
@@ -0,0 +1,339 @@
# Syndarix Implementation Roadmap

**Version:** 1.0
**Date:** 2025-12-29
**Status:** Draft

---

## Executive Summary

This roadmap outlines the phased implementation approach for Syndarix, prioritizing foundational infrastructure before advanced features. Each phase builds upon the previous, with clear milestones and deliverables.

---

## Phase 0: Foundation (Weeks 1-2)
**Goal:** Establish development infrastructure and basic platform

### 0.1 Repository Setup
- [x] Fork PragmaStack to Syndarix
- [x] Create spike backlog in Gitea
- [x] Complete architecture documentation
- [ ] Rebrand codebase (Issue #13 - in progress)
- [ ] Configure CI/CD pipelines
- [ ] Set up development environment documentation

### 0.2 Core Infrastructure
- [ ] Configure Redis for cache + pub/sub
- [ ] Set up Celery worker infrastructure
- [ ] Configure pgvector extension
- [ ] Create MCP server directory structure
- [ ] Set up Docker Compose for local development

### Deliverables
- Fully branded Syndarix repository
- Working local development environment
- CI/CD pipeline running tests

---

## Phase 1: Core Platform (Weeks 3-6)
**Goal:** Basic project and agent management without LLM integration

### 1.1 Data Model
- [ ] Create Project entity and CRUD
- [ ] Create AgentType entity and CRUD
- [ ] Create AgentInstance entity and CRUD
- [ ] Create Issue entity with external tracker fields
- [ ] Create Sprint entity and CRUD
- [ ] Database migrations with Alembic

### 1.2 API Layer
- [ ] Project management endpoints
- [ ] Agent type configuration endpoints
- [ ] Agent instance management endpoints
- [ ] Issue CRUD endpoints
- [ ] Sprint management endpoints

### 1.3 Real-time Infrastructure
- [ ] Implement EventBus with Redis Pub/Sub
- [ ] Create SSE endpoint for project events
- [ ] Implement event types enum
- [ ] Add keepalive mechanism
- [ ] Client-side SSE handling

### 1.4 Frontend Foundation
- [ ] Project dashboard page
- [ ] Agent configuration UI
- [ ] Issue list and detail views
- [ ] Real-time activity feed component
- [ ] Basic navigation and layout

### Deliverables
- CRUD operations for all core entities
- Real-time event streaming working
- Basic admin UI for configuration

---

## Phase 2: MCP Integration (Weeks 7-10)
**Goal:** Build MCP servers for external integrations

### 2.1 MCP Client Infrastructure
- [ ] Create MCPClientManager class
- [ ] Implement server registry
- [ ] Add connection management with reconnection
- [ ] Create tool call routing

### 2.2 LLM Gateway MCP (Priority 1)
- [ ] Create FastMCP server structure
- [ ] Implement LiteLLM integration
- [ ] Add model group routing
- [ ] Implement failover chain
- [ ] Add cost tracking callbacks
- [ ] Create token usage logging
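
At its core, the failover chain is an ordered list of providers tried in sequence until one succeeds. A sketch under that framing (the provider-callable interface is an assumption; the planned LiteLLM integration provides comparable routing natively):

```python
class AllProvidersFailed(Exception):
    """Raised when every provider in the chain errored."""


async def complete_with_failover(providers, prompt: str) -> str:
    """Try each provider in order; return the first successful completion."""
    errors = []
    for provider in providers:
        try:
            return await provider(prompt)
        except Exception as exc:  # failover must catch broadly by design
            errors.append(exc)
    raise AllProvidersFailed(errors)
```

Collected errors travel with the final exception so the cost-tracking and logging callbacks can record which providers were attempted.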

### 2.3 Knowledge Base MCP (Priority 2)
- [ ] Create pgvector schema for embeddings
- [ ] Implement document ingestion pipeline
- [ ] Create chunking strategies (code, markdown, text)
- [ ] Implement semantic search
- [ ] Add hybrid search (vector + keyword)
- [ ] Per-project collection isolation
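
A markdown chunking strategy might split on heading boundaries so each chunk stays semantically coherent. A minimal sketch, assuming heading-based splitting is the chosen strategy (the real pipeline would also enforce token limits):

```python
import re


def chunk_markdown(text: str) -> list[str]:
    """Split markdown into chunks at heading boundaries."""
    chunks: list[str] = []
    current: list[str] = []
    for line in text.splitlines():
        # Start a new chunk whenever a heading begins and we have content.
        if re.match(r"^#{1,6} ", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks
```

Code and plain-text strategies would follow the same interface with different boundary rules (e.g. function definitions, paragraph breaks).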

### 2.4 Git MCP (Priority 3)
- [ ] Create Git operations wrapper
- [ ] Implement clone, commit, push operations
- [ ] Add branch management
- [ ] Create PR operations
- [ ] Add Gitea API integration
- [ ] Implement GitHub/GitLab adapters

### 2.5 Issues MCP (Priority 4)
- [ ] Create issue sync service
- [ ] Implement Gitea issue operations
- [ ] Add GitHub issue adapter
- [ ] Add GitLab issue adapter
- [ ] Implement bi-directional sync
- [ ] Create conflict resolution logic

### Deliverables
- 4 working MCP servers
- LLM calls routed through gateway
- RAG search functional
- Git operations working
- Issue sync with external trackers

---

## Phase 3: Agent Orchestration (Weeks 11-14)
**Goal:** Enable agents to perform autonomous work

### 3.1 Agent Runner
- [ ] Create AgentRunner class
- [ ] Implement context assembly
- [ ] Add memory management (short-term, long-term)
- [ ] Implement action execution
- [ ] Add tool call handling
- [ ] Create agent error handling

### 3.2 Agent Orchestrator
- [ ] Implement spawn_agent method
- [ ] Create terminate_agent method
- [ ] Implement send_message routing
- [ ] Add broadcast functionality
- [ ] Create agent status tracking
- [ ] Implement agent recovery

### 3.3 Inter-Agent Communication
- [ ] Define message format schema
- [ ] Implement message persistence
- [ ] Create message routing logic
- [ ] Add @mention parsing
- [ ] Implement priority queues
- [ ] Add conversation threading

### 3.4 Background Task Integration
- [ ] Create Celery task wrappers
- [ ] Implement progress reporting
- [ ] Add task chaining for workflows
- [ ] Create agent queue routing
- [ ] Implement task retry logic

### Deliverables
- Agents can be spawned and communicate
- Agents can call MCP tools
- Background tasks for long operations
- Agent activity visible in real-time

---

## Phase 4: Workflow Engine (Weeks 15-18)
**Goal:** Implement structured workflows for software delivery

### 4.1 State Machine Foundation
- [ ] Create workflow state machine base
- [ ] Implement state persistence
- [ ] Add transition validation
- [ ] Create state history logging
- [ ] Implement compensation patterns
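
Transition validation can be a simple allowed-transitions table checked before any state change is persisted. A sketch with illustrative sprint states (the actual state sets come from the workflow definitions):

```python
from enum import Enum


class SprintState(str, Enum):
    PLANNED = "planned"
    ACTIVE = "active"
    REVIEW = "review"
    DONE = "done"


# Which target states each state may legally move to.
ALLOWED = {
    SprintState.PLANNED: {SprintState.ACTIVE},
    SprintState.ACTIVE: {SprintState.REVIEW},
    SprintState.REVIEW: {SprintState.ACTIVE, SprintState.DONE},
    SprintState.DONE: set(),
}


def transition(current: SprintState, target: SprintState) -> SprintState:
    if target not in ALLOWED[current]:
        raise ValueError(f"Illegal transition {current.value} -> {target.value}")
    return target
```

Rejecting illegal transitions before persistence keeps the state history log consistent even when an agent misbehaves.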

### 4.2 Core Workflows
- [ ] Requirements Discovery workflow
- [ ] Architecture Spike workflow
- [ ] Sprint Planning workflow
- [ ] Story Implementation workflow
- [ ] Sprint Demo workflow

### 4.3 Approval Gates
- [ ] Create approval checkpoint system
- [ ] Implement approval UI components
- [ ] Add notification triggers
- [ ] Create timeout handling
- [ ] Implement escalation logic

### 4.4 Autonomy Levels
- [ ] Implement FULL_CONTROL mode
- [ ] Implement MILESTONE mode
- [ ] Implement AUTONOMOUS mode
- [ ] Create autonomy configuration UI
- [ ] Add per-action approval overrides
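
The three autonomy modes differ only in which actions require human approval, so the gate can be one small predicate. A sketch (the action names in `MILESTONE_ACTIONS` are illustrative assumptions):

```python
from enum import Enum


class AutonomyLevel(Enum):
    FULL_CONTROL = "full_control"  # every action needs approval
    MILESTONE = "milestone"        # only milestone actions need approval
    AUTONOMOUS = "autonomous"      # no approvals, audit log only


# Hypothetical set of actions considered milestones.
MILESTONE_ACTIONS = {"merge_pr", "deploy", "close_sprint"}


def requires_approval(level: AutonomyLevel, action: str) -> bool:
    if level is AutonomyLevel.FULL_CONTROL:
        return True
    if level is AutonomyLevel.MILESTONE:
        return action in MILESTONE_ACTIONS
    return False
```

Per-action overrides would then layer on top of this default, consulted before the level-wide rule.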

### Deliverables
- Structured workflows executing
- Approval gates working
- Autonomy levels configurable
- Full sprint cycle possible

---

## Phase 5: Advanced Features (Weeks 19-22)
**Goal:** Polish and production readiness

### 5.1 Cost Management
- [ ] Real-time cost tracking dashboard
- [ ] Budget configuration per project
- [ ] Alert threshold system
- [ ] Cost optimization recommendations
- [ ] Historical cost analytics
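
The alert threshold system reduces to comparing cumulative spend against configured fractions of the budget. A sketch (the default thresholds are illustrative):

```python
def budget_alerts(
    spent: float,
    budget: float,
    thresholds: tuple[float, ...] = (0.5, 0.8, 1.0),
) -> list[str]:
    """Return alert messages for every threshold the current spend has crossed."""
    if budget <= 0:
        return []
    usage = spent / budget
    return [f"Budget {int(t * 100)}% reached" for t in thresholds if usage >= t]
```

The dashboard would call this on each cost-tracking callback and deduplicate alerts already sent for the period.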

### 5.2 Audit & Compliance
- [ ] Comprehensive action logging
- [ ] Audit trail viewer UI
- [ ] Export functionality
- [ ] Retention policy implementation
- [ ] Compliance report generation

### 5.3 Human-Agent Collaboration
- [ ] Live activity dashboard
- [ ] Intervention panel (pause, guide, undo)
- [ ] Agent chat interface
- [ ] Context inspector
- [ ] Decision explainer

### 5.4 Additional MCP Servers
- [ ] File System MCP
- [ ] Code Analysis MCP
- [ ] CI/CD MCP

### Deliverables
- Production-ready system
- Full observability
- Cost controls active
- Audit compliance

---

## Phase 6: Polish & Launch (Weeks 23-24)
**Goal:** Production deployment

### 6.1 Performance Optimization
- [ ] Load testing
- [ ] Query optimization
- [ ] Caching optimization
- [ ] Memory profiling

### 6.2 Security Hardening
- [ ] Security audit
- [ ] Penetration testing
- [ ] Secrets management
- [ ] Rate limiting tuning

### 6.3 Documentation
- [ ] User documentation
- [ ] API documentation
- [ ] Deployment guide
- [ ] Runbook

### 6.4 Deployment
- [ ] Production environment setup
- [ ] Monitoring & alerting
- [ ] Backup & recovery
- [ ] Launch checklist

---

## Risk Register

| Risk | Impact | Probability | Mitigation |
|------|--------|-------------|------------|
| LLM API outages | High | Medium | Multi-provider failover |
| Cost overruns | High | Medium | Budget enforcement, local models |
| Agent hallucinations | High | Medium | Approval gates, code review |
| Performance bottlenecks | Medium | Medium | Load testing, caching |
| Integration failures | Medium | Low | Contract testing, mocks |

---

## Success Metrics

| Metric | Target | Measurement |
|--------|--------|-------------|
| Agent task success rate | >90% | Completed tasks / total tasks |
| Response time (P95) | <2s | API latency |
| Cost per project | <$50/sprint | LLM + compute costs |
| Time to first commit | <1 hour | From requirements to PR |
| Client satisfaction | >4/5 | Post-sprint survey |

---

## Dependencies

```
Phase 0 ─────▶ Phase 1 ─────▶ Phase 2 ─────▶ Phase 3 ─────▶ Phase 4 ─────▶ Phase 5 ─────▶ Phase 6
Foundation     Core Platform  MCP Integration Agent Orch     Workflows      Advanced       Launch
                                              │
                                              │
                                              Depends on:
                                              - LLM Gateway
                                              - Knowledge Base
                                              - Real-time events
```

---

## Resource Requirements

### Development Team
- 1 Backend Engineer (Python/FastAPI)
- 1 Frontend Engineer (React/Next.js)
- 0.5 DevOps Engineer
- 0.25 Product Manager

### Infrastructure
- PostgreSQL (managed or self-hosted)
- Redis (managed or self-hosted)
- Celery workers (2-4 instances)
- MCP servers (7 containers)
- API server (2+ instances)
- Frontend (static hosting or SSR)

### External Services
- Anthropic API (primary LLM)
- OpenAI API (fallback)
- Ollama (local models, optional)
- Gitea/GitHub/GitLab (issue tracking)

---

*This roadmap will be refined as spikes complete and requirements evolve.*
1326 docs/spikes/SPIKE-002-agent-orchestration-pattern.md Normal file
File diff suppressed because it is too large. Load Diff

1259 docs/spikes/SPIKE-006-knowledge-base-pgvector.md Normal file
File diff suppressed because it is too large. Load Diff

1496 docs/spikes/SPIKE-007-agent-communication-protocol.md Normal file
File diff suppressed because it is too large. Load Diff

1513 docs/spikes/SPIKE-008-workflow-state-machine.md Normal file
File diff suppressed because it is too large. Load Diff

1494 docs/spikes/SPIKE-009-issue-synchronization.md Normal file
File diff suppressed because it is too large. Load Diff

1821 docs/spikes/SPIKE-010-cost-tracking.md Normal file
File diff suppressed because it is too large. Load Diff

1064 docs/spikes/SPIKE-011-audit-logging.md Normal file
File diff suppressed because it is too large. Load Diff

1662 docs/spikes/SPIKE-012-client-approval-flow.md Normal file
File diff suppressed because it is too large. Load Diff
@@ -1,4 +1,4 @@
# PragmaStack - Frontend
# Syndarix - Frontend

Production-ready Next.js 16 frontend with TypeScript, authentication, admin panel, and internationalization.

@@ -273,7 +273,7 @@ NEXT_PUBLIC_DEMO_MODE=true npm run dev
**1. Fork Repository**

```bash
gh repo fork your-repo/fast-next-template
git clone https://gitea.pragmazest.com/cardosofelipe/syndarix.git
```

**2. Connect to Vercel**

@@ -1,6 +1,6 @@
# Internationalization (i18n) Guide

This document describes the internationalization implementation in the PragmaStack.
This document describes the internationalization implementation in Syndarix.

## Overview

@@ -4,10 +4,10 @@

## Logo

The **PragmaStack** logo represents the core values of the project: structure, speed, and clarity.
The **Syndarix** logo represents the core values of the project: structure, speed, and clarity.

<div align="center">
  <img src="../../public/logo.svg" alt="PragmaStack Logo" width="300" />
  <img src="../../public/logo.svg" alt="Syndarix Logo" width="300" />
  <p><em>The Stack: Geometric layers representing the full-stack architecture.</em></p>
</div>

@@ -16,7 +16,7 @@ The **PragmaStack** logo represents the core values of the project: structure, s
For smaller contexts (favicons, headers), we use the simplified icon:

<div align="center">
  <img src="../../public/logo-icon.svg" alt="PragmaStack Icon" width="64" />
  <img src="../../public/logo-icon.svg" alt="Syndarix Icon" width="64" />
</div>

For now, we use the **Lucide React** icon set for all iconography. Icons should be used sparingly and meaningfully to enhance understanding, not just for decoration.

@@ -1,6 +1,6 @@
# Branding Guidelines

Welcome to the **PragmaStack** branding guidelines. This section defines who we are, how we speak, and how we look.
Welcome to the **Syndarix** branding guidelines. This section defines who we are, how we speak, and how we look.

## Contents

@@ -1,6 +1,6 @@
# Quick Start Guide

Get up and running with the PragmaStack design system immediately. This guide covers the essential patterns you need to build 80% of interfaces.
Get up and running with the Syndarix design system immediately. This guide covers the essential patterns you need to build 80% of interfaces.

---

@@ -1,6 +1,6 @@
# AI Code Generation Guidelines

**For AI Assistants**: This document contains strict rules for generating code in the PragmaStack project. Follow these rules to ensure generated code matches the design system perfectly.
**For AI Assistants**: This document contains strict rules for generating code in the Syndarix project. Follow these rules to ensure generated code matches the design system perfectly.

---

@@ -1,6 +1,6 @@
# Quick Reference

**Bookmark this page** for instant lookups of colors, spacing, typography, components, and common patterns. Your go-to cheat sheet for the PragmaStack design system.
**Bookmark this page** for instant lookups of colors, spacing, typography, components, and common patterns. Your go-to cheat sheet for the Syndarix design system.

---

@@ -1,6 +1,6 @@
# Design System Documentation

**PragmaStack Design System** - A comprehensive guide to building consistent, accessible, and beautiful user interfaces.
**Syndarix Design System** - A comprehensive guide to building consistent, accessible, and beautiful user interfaces.

---

@@ -14,7 +14,7 @@ test.describe('Homepage - Desktop Navigation', () => {

  test('should display header with logo and navigation', async ({ page }) => {
    // Logo should be visible
    await expect(page.getByRole('link', { name: /PragmaStack/i })).toBeVisible();
    await expect(page.getByRole('link', { name: /Syndarix/i })).toBeVisible();

    // Desktop navigation links should be visible (use locator to find within header)
    const header = page.locator('header').first();
@@ -23,8 +23,8 @@ test.describe('Homepage - Desktop Navigation', () => {
  });

  test('should display GitHub link with star badge', async ({ page }) => {
    // Find GitHub link by checking for one that has github.com in href
    const githubLink = page.locator('a[href*="github.com"]').first();
    // Find GitHub link by checking for one that has gitea.pragmazest.com in href
    const githubLink = page.locator('a[href*="gitea.pragmazest.com"]').first();
    await expect(githubLink).toBeVisible();
    await expect(githubLink).toHaveAttribute('target', '_blank');
  });
@@ -120,7 +120,7 @@ test.describe('Homepage - Hero Section', () => {
  test('should navigate to GitHub when clicking View on GitHub', async ({ page }) => {
    const githubLink = page.getByRole('link', { name: /View on GitHub/i }).first();
    await expect(githubLink).toBeVisible();
    await expect(githubLink).toHaveAttribute('href', expect.stringContaining('github.com'));
    await expect(githubLink).toHaveAttribute('href', expect.stringContaining('gitea.pragmazest.com'));
  });

  test('should navigate to components when clicking Explore Components', async ({ page }) => {
@@ -250,7 +250,7 @@ test.describe('Homepage - Feature Sections', () => {
  });

  test('should display philosophy section', async ({ page }) => {
    await expect(page.getByRole('heading', { name: /Why PragmaStack/i })).toBeVisible();
    await expect(page.getByRole('heading', { name: /Why Syndarix/i })).toBeVisible();
    await expect(page.getByText(/MIT licensed/i).first()).toBeVisible();
  });
});
@@ -264,7 +264,7 @@ test.describe('Homepage - Footer', () => {
    // Scroll to footer
    await page.locator('footer').scrollIntoViewIfNeeded();

    await expect(page.getByText(/PragmaStack. MIT Licensed/i)).toBeVisible();
    await expect(page.getByText(/Syndarix. MIT Licensed/i)).toBeVisible();
  });
});

@@ -285,7 +285,7 @@ test.describe('Homepage - Accessibility', () => {
  });

  test('should have accessible links with proper attributes', async ({ page }) => {
    const githubLink = page.locator('a[href*="github.com"]').first();
    const githubLink = page.locator('a[href*="gitea.pragmazest.com"]').first();
    await expect(githubLink).toHaveAttribute('target', '_blank');
    await expect(githubLink).toHaveAttribute('rel', 'noopener noreferrer');
  });

@@ -7,42 +7,42 @@
|
||||
* - Please do NOT modify this file.
|
||||
*/
|
||||
|
||||
const PACKAGE_VERSION = '2.12.3';
|
||||
const INTEGRITY_CHECKSUM = '4db4a41e972cec1b64cc569c66952d82';
|
||||
const IS_MOCKED_RESPONSE = Symbol('isMockedResponse');
|
||||
const activeClientIds = new Set();
|
||||
const PACKAGE_VERSION = '2.12.3'
|
||||
const INTEGRITY_CHECKSUM = '4db4a41e972cec1b64cc569c66952d82'
|
||||
const IS_MOCKED_RESPONSE = Symbol('isMockedResponse')
|
||||
const activeClientIds = new Set()
|
||||
|
||||
addEventListener('install', function () {
|
||||
self.skipWaiting();
|
||||
});
|
||||
self.skipWaiting()
|
||||
})
|
||||
|
||||
addEventListener('activate', function (event) {
|
||||
event.waitUntil(self.clients.claim());
|
||||
});
|
||||
event.waitUntil(self.clients.claim())
|
||||
})
|
||||
|
||||
addEventListener('message', async function (event) {
|
||||
const clientId = Reflect.get(event.source || {}, 'id');
|
||||
const clientId = Reflect.get(event.source || {}, 'id')
|
||||
|
||||
if (!clientId || !self.clients) {
|
||||
return;
|
||||
return
|
||||
}
|
||||
|
||||
const client = await self.clients.get(clientId);
|
||||
const client = await self.clients.get(clientId)
|
||||
|
||||
if (!client) {
|
||||
return;
|
||||
return
|
||||
}
|
||||
|
||||
const allClients = await self.clients.matchAll({
|
||||
type: 'window',
|
||||
});
|
||||
})
|
||||
|
||||
switch (event.data) {
|
||||
case 'KEEPALIVE_REQUEST': {
|
||||
sendToClient(client, {
|
||||
type: 'KEEPALIVE_RESPONSE',
|
||||
});
|
||||
break;
|
||||
})
|
||||
break
|
||||
}
|
||||
|
||||
case 'INTEGRITY_CHECK_REQUEST': {
|
||||
@@ -52,12 +52,12 @@ addEventListener('message', async function (event) {
        packageVersion: PACKAGE_VERSION,
        checksum: INTEGRITY_CHECKSUM,
      },
      });
      break;
      })
      break
    }

    case 'MOCK_ACTIVATE': {
      activeClientIds.add(clientId);
      activeClientIds.add(clientId)

      sendToClient(client, {
        type: 'MOCKING_ENABLED',
@@ -67,51 +67,54 @@ addEventListener('message', async function (event) {
          frameType: client.frameType,
        },
      },
      });
      break;
      })
      break
    }

    case 'CLIENT_CLOSED': {
      activeClientIds.delete(clientId);
      activeClientIds.delete(clientId)

      const remainingClients = allClients.filter((client) => {
        return client.id !== clientId;
      });
        return client.id !== clientId
      })

      // Unregister itself when there are no more clients
      if (remainingClients.length === 0) {
        self.registration.unregister();
        self.registration.unregister()
      }

      break;
      break
    }
  }
});
})

addEventListener('fetch', function (event) {
  const requestInterceptedAt = Date.now();
  const requestInterceptedAt = Date.now()

  // Bypass navigation requests.
  if (event.request.mode === 'navigate') {
    return;
    return
  }

  // Opening the DevTools triggers the "only-if-cached" request
  // that cannot be handled by the worker. Bypass such requests.
  if (event.request.cache === 'only-if-cached' && event.request.mode !== 'same-origin') {
    return;
  if (
    event.request.cache === 'only-if-cached' &&
    event.request.mode !== 'same-origin'
  ) {
    return
  }

  // Bypass all requests when there are no active clients.
  // Prevents the self-unregistered worked from handling requests
  // after it's been terminated (still remains active until the next reload).
  if (activeClientIds.size === 0) {
    return;
    return
  }

  const requestId = crypto.randomUUID();
  event.respondWith(handleRequest(event, requestId, requestInterceptedAt));
});
  const requestId = crypto.randomUUID()
  event.respondWith(handleRequest(event, requestId, requestInterceptedAt))
})

/**
 * @param {FetchEvent} event
@@ -119,18 +122,23 @@ addEventListener('fetch', function (event) {
 * @param {number} requestInterceptedAt
 */
async function handleRequest(event, requestId, requestInterceptedAt) {
  const client = await resolveMainClient(event);
  const requestCloneForEvents = event.request.clone();
  const response = await getResponse(event, client, requestId, requestInterceptedAt);
  const client = await resolveMainClient(event)
  const requestCloneForEvents = event.request.clone()
  const response = await getResponse(
    event,
    client,
    requestId,
    requestInterceptedAt,
  )

  // Send back the response clone for the "response:*" life-cycle events.
  // Ensure MSW is active and ready to handle the message, otherwise
  // this message will pend indefinitely.
  if (client && activeClientIds.has(client.id)) {
    const serializedRequest = await serializeRequest(requestCloneForEvents);
    const serializedRequest = await serializeRequest(requestCloneForEvents)

    // Clone the response so both the client and the library could consume it.
    const responseClone = response.clone();
    const responseClone = response.clone()

    sendToClient(
      client,
@@ -151,11 +159,11 @@ async function handleRequest(event, requestId, requestInterceptedAt) {
        },
        },
      },
      responseClone.body ? [serializedRequest.body, responseClone.body] : []
    );
      responseClone.body ? [serializedRequest.body, responseClone.body] : [],
    )
  }

  return response;
  return response
}

/**
@@ -167,30 +175,30 @@ async function handleRequest(event, requestId, requestInterceptedAt) {
 * @returns {Promise<Client | undefined>}
 */
async function resolveMainClient(event) {
  const client = await self.clients.get(event.clientId);
  const client = await self.clients.get(event.clientId)

  if (activeClientIds.has(event.clientId)) {
    return client;
    return client
  }

  if (client?.frameType === 'top-level') {
    return client;
    return client
  }

  const allClients = await self.clients.matchAll({
    type: 'window',
  });
  })

  return allClients
    .filter((client) => {
      // Get only those clients that are currently visible.
      return client.visibilityState === 'visible';
      return client.visibilityState === 'visible'
    })
    .find((client) => {
      // Find the client ID that's recorded in the
      // set of clients that have registered the worker.
      return activeClientIds.has(client.id);
    });
      return activeClientIds.has(client.id)
    })
}

/**
@@ -203,34 +211,36 @@ async function resolveMainClient(event) {
async function getResponse(event, client, requestId, requestInterceptedAt) {
  // Clone the request because it might've been already used
  // (i.e. its body has been read and sent to the client).
  const requestClone = event.request.clone();
  const requestClone = event.request.clone()

  function passthrough() {
    // Cast the request headers to a new Headers instance
    // so the headers can be manipulated with.
    const headers = new Headers(requestClone.headers);
    const headers = new Headers(requestClone.headers)

    // Remove the "accept" header value that marked this request as passthrough.
    // This prevents request alteration and also keeps it compliant with the
    // user-defined CORS policies.
    const acceptHeader = headers.get('accept');
    const acceptHeader = headers.get('accept')
    if (acceptHeader) {
      const values = acceptHeader.split(',').map((value) => value.trim());
      const filteredValues = values.filter((value) => value !== 'msw/passthrough');
      const values = acceptHeader.split(',').map((value) => value.trim())
      const filteredValues = values.filter(
        (value) => value !== 'msw/passthrough',
      )

      if (filteredValues.length > 0) {
        headers.set('accept', filteredValues.join(', '));
        headers.set('accept', filteredValues.join(', '))
      } else {
        headers.delete('accept');
        headers.delete('accept')
      }
    }

    return fetch(requestClone, { headers });
    return fetch(requestClone, { headers })
  }

  // Bypass mocking when the client is not active.
  if (!client) {
    return passthrough();
    return passthrough()
  }

  // Bypass initial page load requests (i.e. static assets).
@@ -238,11 +248,11 @@ async function getResponse(event, client, requestId, requestInterceptedAt) {
  // means that MSW hasn't dispatched the "MOCK_ACTIVATE" event yet
  // and is not ready to handle requests.
  if (!activeClientIds.has(client.id)) {
    return passthrough();
    return passthrough()
  }

  // Notify the client that a request has been intercepted.
  const serializedRequest = await serializeRequest(event.request);
  const serializedRequest = await serializeRequest(event.request)
  const clientMessage = await sendToClient(
    client,
    {
@@ -253,20 +263,20 @@ async function getResponse(event, client, requestId, requestInterceptedAt) {
        ...serializedRequest,
      },
    },
    [serializedRequest.body]
  );
    [serializedRequest.body],
  )

  switch (clientMessage.type) {
    case 'MOCK_RESPONSE': {
      return respondWithMock(clientMessage.data);
      return respondWithMock(clientMessage.data)
    }

    case 'PASSTHROUGH': {
      return passthrough();
      return passthrough()
    }
  }

  return passthrough();
  return passthrough()
}

/**
@@ -277,18 +287,21 @@ async function getResponse(event, client, requestId, requestInterceptedAt) {
 */
function sendToClient(client, message, transferrables = []) {
  return new Promise((resolve, reject) => {
    const channel = new MessageChannel();
    const channel = new MessageChannel()

    channel.port1.onmessage = (event) => {
      if (event.data && event.data.error) {
        return reject(event.data.error);
        return reject(event.data.error)
      }

      resolve(event.data);
    };
      resolve(event.data)
    }

    client.postMessage(message, [channel.port2, ...transferrables.filter(Boolean)]);
  });
    client.postMessage(message, [
      channel.port2,
      ...transferrables.filter(Boolean),
    ])
  })
}

/**
@@ -301,17 +314,17 @@ function respondWithMock(response) {
  // instance will have status code set to 0. Since it's not possible to create
  // a Response instance with status code 0, handle that use-case separately.
  if (response.status === 0) {
    return Response.error();
    return Response.error()
  }

  const mockedResponse = new Response(response.body, response);
  const mockedResponse = new Response(response.body, response)

  Reflect.defineProperty(mockedResponse, IS_MOCKED_RESPONSE, {
    value: true,
    enumerable: true,
  });
  })

  return mockedResponse;
  return mockedResponse
}

/**
@@ -332,5 +345,5 @@ async function serializeRequest(request) {
    referrerPolicy: request.referrerPolicy,
    body: await request.arrayBuffer(),
    keepalive: request.keepalive,
  };
  }
}

@@ -10,7 +10,7 @@ import { Footer } from '@/components/layout/Footer';

export const metadata: Metadata = {
  title: {
    template: '%s | PragmaStack',
    template: '%s | Syndarix',
    default: 'Dashboard',
  },
};

@@ -12,7 +12,7 @@ import { AdminSidebar, Breadcrumbs } from '@/components/admin';

export const metadata: Metadata = {
  title: {
    template: '%s | Admin | PragmaStack',
    template: '%s | Admin | Syndarix',
    default: 'Admin Dashboard',
  },
};

@@ -26,8 +26,8 @@ import { Badge } from '@/components/ui/badge';
import { Separator } from '@/components/ui/separator';

export const metadata: Metadata = {
  title: 'Demo Tour | PragmaStack',
  description: 'Try all features with demo credentials - comprehensive guide to the PragmaStack',
  title: 'Demo Tour | Syndarix',
  description: 'Try all features with demo credentials - comprehensive guide to the Syndarix',
};

const demoCategories = [

@@ -120,7 +120,7 @@ export default function DocsHub() {
        <h2 className="text-4xl font-bold tracking-tight mb-4">Design System Documentation</h2>
        <p className="text-lg text-muted-foreground mb-8">
          Comprehensive guides, best practices, and references for building consistent,
          accessible, and maintainable user interfaces with the PragmaStack design system.
          accessible, and maintainable user interfaces with the Syndarix design system.
        </p>
        <div className="flex flex-wrap gap-3 justify-center">
          <Link href="/dev/docs/design-system/00-quick-start">

@@ -14,7 +14,7 @@ import { Badge } from '@/components/ui/badge';
import { Separator } from '@/components/ui/separator';

export const metadata: Metadata = {
  title: 'Design System Hub | PragmaStack',
  title: 'Design System Hub | Syndarix',
  description:
    'Interactive design system demonstrations with live examples - explore components, layouts, spacing, and forms built with shadcn/ui and Tailwind CSS',
};
@@ -90,7 +90,7 @@ export default function DesignSystemHub() {
        </div>
        <p className="text-lg text-muted-foreground">
          Interactive demonstrations, live examples, and comprehensive documentation for the
          PragmaStack design system. Built with shadcn/ui + Tailwind CSS 4.
          Syndarix design system. Built with shadcn/ui + Tailwind CSS 4.
        </p>
      </div>
    </div>

@@ -1,7 +1,7 @@
/* istanbul ignore file -- @preserve Landing page with complex interactions tested via E2E */
/**
 * Homepage / Landing Page
 * Main landing page for the PragmaStack project
 * Main landing page for the Syndarix project
 * Showcases features, tech stack, and provides demos for developers
 */

@@ -68,7 +68,7 @@ export default function Home() {
      <div className="container mx-auto px-6 py-8">
        <div className="flex flex-col md:flex-row items-center justify-between gap-4">
          <div className="text-sm text-muted-foreground">
            © {new Date().getFullYear()} PragmaStack. MIT Licensed.
            © {new Date().getFullYear()} Syndarix. MIT Licensed.
          </div>
          <div className="flex items-center gap-6 text-sm text-muted-foreground">
            <Link href="/demos" className="hover:text-foreground transition-colors">

@@ -1,7 +1,7 @@
@import 'tailwindcss';

/**
 * PragmaStack Design System
 * Syndarix Design System
 * Theme: Modern Minimal (from tweakcn.com)
 * Primary: Blue | Color Space: OKLCH
 *

@@ -96,12 +96,12 @@ export function DevLayout({ children }: DevLayoutProps) {
          <div className="flex items-center gap-3 shrink-0">
            <Image
              src="/logo-icon.svg"
              alt="PragmaStack Logo"
              alt="Syndarix Logo"
              width={24}
              height={24}
              className="h-6 w-6"
            />
            <h1 className="text-base font-semibold">PragmaStack</h1>
            <h1 className="text-base font-semibold">Syndarix</h1>
            <Badge variant="secondary" className="text-xs">
              Dev
            </Badge>

@@ -14,8 +14,8 @@ import { Link } from '@/lib/i18n/routing';

const commands = [
  { text: '# Clone the repository', delay: 0 },
  { text: '$ git clone https://github.com/your-org/fast-next-template.git', delay: 800 },
  { text: '$ cd fast-next-template', delay: 1600 },
  { text: '$ git clone https://gitea.pragmazest.com/cardosofelipe/syndarix.git', delay: 800 },
  { text: '$ cd syndarix', delay: 1600 },
  { text: '', delay: 2200 },
  { text: '# Start with Docker (one command)', delay: 2400 },
  { text: '$ docker-compose up', delay: 3200 },

@@ -49,7 +49,7 @@ export function CTASection({ onOpenDemoModal }: CTASectionProps) {
        <div className="flex flex-col sm:flex-row items-center justify-center gap-4 pt-4">
          <Button asChild size="lg" className="gap-2 text-base group">
            <a
              href="https://github.com/your-org/fast-next-template"
              href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
              target="_blank"
              rel="noopener noreferrer"
            >
@@ -75,7 +75,7 @@ export function CTASection({ onOpenDemoModal }: CTASectionProps) {
          </Button>
          <Button asChild size="lg" variant="ghost" className="gap-2 text-base group">
            <a
              href="https://github.com/your-org/fast-next-template#documentation"
              href="https://gitea.pragmazest.com/cardosofelipe/syndarix#documentation"
              target="_blank"
              rel="noopener noreferrer"
            >

@@ -44,7 +44,7 @@ const features = [
      '12+ documentation guides covering architecture, design system, testing patterns, deployment, and AI code generation guidelines. Interactive API docs with Swagger and ReDoc',
    highlight: 'Developer-first docs',
    ctaText: 'Browse Docs',
    ctaHref: 'https://github.com/your-org/fast-next-template#documentation',
    ctaHref: 'https://gitea.pragmazest.com/cardosofelipe/syndarix#documentation',
  },
  {
    icon: Server,
@@ -53,7 +53,7 @@ const features = [
      'Docker deployment configs, database migrations with Alembic helpers, connection pooling, health checks, monitoring setup, and production security headers',
    highlight: 'Deploy with confidence',
    ctaText: 'Deployment Guide',
    ctaHref: 'https://github.com/your-org/fast-next-template#deployment',
    ctaHref: 'https://gitea.pragmazest.com/cardosofelipe/syndarix#deployment',
  },
  {
    icon: Code,

@@ -48,13 +48,13 @@ export function Header({ onOpenDemoModal }: HeaderProps) {
          >
            <Image
              src="/logo-icon.svg"
              alt="PragmaStack Logo"
              alt="Syndarix Logo"
              width={32}
              height={32}
              className="h-8 w-8"
            />
            <span className="bg-gradient-to-r from-primary to-primary/60 bg-clip-text text-transparent">
              PragmaStack
              Syndarix
            </span>
          </Link>

@@ -72,7 +72,7 @@ export function Header({ onOpenDemoModal }: HeaderProps) {

            {/* GitHub Link with Star */}
            <a
              href="https://github.com/your-org/fast-next-template"
              href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
              target="_blank"
              rel="noopener noreferrer"
              className="flex items-center gap-2 text-sm font-medium text-muted-foreground hover:text-foreground transition-colors"
@@ -135,7 +135,7 @@ export function Header({ onOpenDemoModal }: HeaderProps) {

            {/* GitHub Link */}
            <a
              href="https://github.com/your-org/fast-next-template"
              href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
              target="_blank"
              rel="noopener noreferrer"
              onClick={() => setMobileMenuOpen(false)}

@@ -72,7 +72,7 @@ export function HeroSection({ onOpenDemoModal }: HeroSectionProps) {
            animate={{ opacity: 1, y: 0 }}
            transition={{ duration: 0.5, delay: 0.2 }}
          >
            Opinionated, secure, and production-ready. PragmaStack gives you the solid foundation
            Opinionated, secure, and production-ready. Syndarix gives you the solid foundation
            you need to stop configuring and start shipping.{' '}
            <span className="text-foreground font-medium">Start building features on day one.</span>
          </motion.p>
@@ -93,7 +93,7 @@ export function HeroSection({ onOpenDemoModal }: HeroSectionProps) {
          </Button>
          <Button asChild size="lg" variant="outline" className="gap-2 text-base group">
            <a
              href="https://github.com/your-org/fast-next-template"
              href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
              target="_blank"
              rel="noopener noreferrer"
            >

@@ -33,7 +33,7 @@ export function PhilosophySection() {
          viewport={{ once: true, margin: '-100px' }}
          transition={{ duration: 0.6 }}
        >
          <h2 className="text-3xl md:text-4xl font-bold mb-6">Why PragmaStack?</h2>
          <h2 className="text-3xl md:text-4xl font-bold mb-6">Why Syndarix?</h2>
          <div className="space-y-4 text-lg text-muted-foreground leading-relaxed">
            <p>
              We built this template after rebuilding the same authentication, authorization, and

@@ -13,8 +13,8 @@ import { vscDarkPlus } from 'react-syntax-highlighter/dist/esm/styles/prism';
import { Button } from '@/components/ui/button';

const codeString = `# Clone and start with Docker
git clone https://github.com/your-org/fast-next-template.git
cd fast-next-template
git clone https://gitea.pragmazest.com/cardosofelipe/syndarix.git
cd syndarix
docker-compose up

# Or set up locally

@@ -18,12 +18,12 @@ export function Footer() {
        <div className="flex items-center gap-2 text-center text-sm text-muted-foreground md:text-left">
          <Image
            src="/logo-icon.svg"
            alt="PragmaStack Logo"
            alt="Syndarix Logo"
            width={20}
            height={20}
            className="h-5 w-5 opacity-70"
          />
          <span>© {currentYear} PragmaStack. All rights reserved.</span>
          <span>© {currentYear} Syndarix. All rights reserved.</span>
        </div>
        <div className="flex space-x-6">
          <Link
@@ -33,7 +33,7 @@ export function Footer() {
            Settings
          </Link>
          <a
            href="https://github.com/cardosofelipe/pragmastack"
            href="https://gitea.pragmazest.com/cardosofelipe/syndarix"
            target="_blank"
            rel="noopener noreferrer"
            className="text-sm text-muted-foreground hover:text-foreground transition-colors"

@@ -86,12 +86,12 @@ export function Header() {
        <Link href="/" className="flex items-center space-x-2">
          <Image
            src="/logo-icon.svg"
            alt="PragmaStack Logo"
            alt="Syndarix Logo"
            width={32}
            height={32}
            className="h-8 w-8"
          />
          <span className="text-xl font-bold text-foreground">PragmaStack</span>
          <span className="text-xl font-bold text-foreground">Syndarix</span>
        </Link>

        {/* Navigation Links */}

@@ -13,8 +13,8 @@ export type Locale = 'en' | 'it';
 */
export const siteConfig = {
  name: {
    en: 'PragmaStack',
    it: 'PragmaStack',
    en: 'Syndarix',
    it: 'Syndarix',
  },
  description: {
    en: 'Production-ready FastAPI + Next.js full-stack template with authentication, admin panel, and comprehensive testing',

@@ -1,6 +1,6 @@
/**
 * Tests for Home Page
 * Tests for the new PragmaStack landing page
 * Tests for the new Syndarix landing page
 */

import { render, screen, within, fireEvent } from '@testing-library/react';
@@ -87,13 +87,13 @@ describe('HomePage', () => {
    it('renders header with logo', () => {
      render(<Home />);
      const header = screen.getByRole('banner');
      expect(within(header).getByText('PragmaStack')).toBeInTheDocument();
      expect(within(header).getByText('Syndarix')).toBeInTheDocument();
    });

    it('renders footer with copyright', () => {
      render(<Home />);
      const footer = screen.getByRole('contentinfo');
      expect(within(footer).getByText(/PragmaStack. MIT Licensed/i)).toBeInTheDocument();
      expect(within(footer).getByText(/Syndarix. MIT Licensed/i)).toBeInTheDocument();
    });
  });

@@ -210,7 +210,7 @@ describe('HomePage', () => {
  describe('Philosophy Section', () => {
    it('renders why this template exists', () => {
      render(<Home />);
      expect(screen.getByText(/Why PragmaStack\?/i)).toBeInTheDocument();
      expect(screen.getByText(/Why Syndarix\?/i)).toBeInTheDocument();
    });

    it('renders what you wont find section', () => {

@@ -71,7 +71,7 @@ describe('CTASection', () => {
    );

    const githubLink = screen.getByRole('link', { name: /get started on github/i });
    expect(githubLink).toHaveAttribute('href', 'https://github.com/your-org/fast-next-template');
    expect(githubLink).toHaveAttribute('href', 'https://gitea.pragmazest.com/cardosofelipe/syndarix');
    expect(githubLink).toHaveAttribute('target', '_blank');
    expect(githubLink).toHaveAttribute('rel', 'noopener noreferrer');
  });
@@ -101,7 +101,7 @@ describe('CTASection', () => {
    const docsLink = screen.getByRole('link', { name: /read documentation/i });
    expect(docsLink).toHaveAttribute(
      'href',
      'https://github.com/your-org/fast-next-template#documentation'
      'https://gitea.pragmazest.com/cardosofelipe/syndarix#documentation'
    );
    expect(docsLink).toHaveAttribute('target', '_blank');
    expect(docsLink).toHaveAttribute('rel', 'noopener noreferrer');

@@ -55,7 +55,7 @@ describe('Header', () => {
      />
    );

    expect(screen.getByText('PragmaStack')).toBeInTheDocument();
    expect(screen.getByText('Syndarix')).toBeInTheDocument();
  });

  it('logo links to homepage', () => {
@@ -67,7 +67,7 @@ describe('Header', () => {
      />
    );

    const logoLink = screen.getByRole('link', { name: /PragmaStack/i });
    const logoLink = screen.getByRole('link', { name: /Syndarix/i });
    expect(logoLink).toHaveAttribute('href', '/');
  });

@@ -97,12 +97,12 @@ describe('Header', () => {

    const githubLinks = screen.getAllByRole('link', { name: /github/i });
    const desktopGithubLink = githubLinks.find((link) =>
      link.getAttribute('href')?.includes('github.com')
      link.getAttribute('href')?.includes('gitea.pragmazest.com')
    );

    expect(desktopGithubLink).toHaveAttribute(
      'href',
      'https://github.com/your-org/fast-next-template'
      'https://gitea.pragmazest.com/cardosofelipe/syndarix'
    );
    expect(desktopGithubLink).toHaveAttribute('target', '_blank');
    expect(desktopGithubLink).toHaveAttribute('rel', 'noopener noreferrer');

@@ -100,7 +100,7 @@ describe('HeroSection', () => {
    );

    const githubLink = screen.getByRole('link', { name: /view on github/i });
    expect(githubLink).toHaveAttribute('href', 'https://github.com/your-org/fast-next-template');
    expect(githubLink).toHaveAttribute('href', 'https://gitea.pragmazest.com/cardosofelipe/syndarix');
    expect(githubLink).toHaveAttribute('target', '_blank');
    expect(githubLink).toHaveAttribute('rel', 'noopener noreferrer');
  });

@@ -20,7 +20,7 @@ describe('Footer', () => {

    const currentYear = new Date().getFullYear();
    expect(
      screen.getByText(`© ${currentYear} PragmaStack. All rights reserved.`)
      screen.getByText(`© ${currentYear} Syndarix. All rights reserved.`)
    ).toBeInTheDocument();
  });


@@ -63,7 +63,7 @@ describe('Header', () => {

    render(<Header />);

    expect(screen.getByText('PragmaStack')).toBeInTheDocument();
    expect(screen.getByText('Syndarix')).toBeInTheDocument();
  });

  it('renders theme toggle', () => {

@@ -27,8 +27,8 @@ describe('metadata utilities', () => {
  });

  it('should have English and Italian names', () => {
    expect(siteConfig.name.en).toBe('PragmaStack');
    expect(siteConfig.name.it).toBe('PragmaStack');
    expect(siteConfig.name.en).toBe('Syndarix');
    expect(siteConfig.name.it).toBe('Syndarix');
  });

  it('should have English and Italian descriptions', () => {