Infrastructure

The 7-container stack that every Praetor instance runs — services, responsibilities, and how they connect.

Every Praetor instance runs as a stack of 7 containers. The containers communicate over a private network. No container is optional — each serves a distinct role that the others depend on.


Service Topology

graph TB
  subgraph Instance ["Praetor Instance"]
    APP["praetor-app\nNext.js :4001"]
    CI["code-intel\nHono :8010"]
    PG["postgres\nPostgreSQL 16 :5432"]
    CF["centrifugo\nCentrifugo v6 :8000"]
    RD["redis\nRedis 7 :6379"]
    GR["garage\nGarage v2 :3900"]
    IN["inngest\nInngest :8288"]
  end

  APP -->|SQL| PG
  APP -->|publish/subscribe| CF
  APP -->|cache / sessions| RD
  APP -->|file read/write| GR
  APP -->|trigger workflows| IN
  APP -->|vector search / RAG| CI

  CI -->|SQL + pgvector| PG
  CI -->|cache| RD

  CF -->|broker backend| RD

  IN -->|job state| PG

  classDef app fill:#6366f1,color:#fff,stroke:none
  classDef data fill:#0ea5e9,color:#fff,stroke:none
  classDef infra fill:#10b981,color:#fff,stroke:none

  class APP,CI app
  class PG,RD,GR data
  class CF,IN infra

Services

praetor-app

Image: Node 20 (Next.js 15 App Router)
Port: 4001

The primary application container. Hosts the entire user-facing product: project dashboard, graph studio, brownfield analysis UI, settings, and the developer portal. Also runs the backend API (Hono routes mounted under /api), the graph engine, the codegen pipeline orchestrator, and the event persistence worker — a polling loop that drains event_outbox rows and publishes them to Centrifugo.

praetor-app is the only container that end users and external clients interact with directly.
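The persistence worker's drain loop can be sketched as follows. `OutboxRow`, `publishToCentrifugo`, and the in-memory table are illustrative stand-ins for the real event_outbox schema and Centrifugo client, not Praetor's actual code:

```typescript
// Sketch of an outbox-draining worker: poll event_outbox for unpublished
// rows, publish each to Centrifugo, then mark it published.

interface OutboxRow {
  id: number;
  channel: string;
  payload: unknown;
  publishedAt: Date | null;
}

// In-memory stand-in for the event_outbox table.
const outbox: OutboxRow[] = [
  { id: 1, channel: "run:42", payload: { progress: 0.5 }, publishedAt: null },
];

const published: Array<{ channel: string; payload: unknown }> = [];

async function publishToCentrifugo(channel: string, payload: unknown) {
  // A real worker would POST to Centrifugo's publish API on :8000.
  published.push({ channel, payload });
}

async function drainOutboxOnce(): Promise<number> {
  const pending = outbox.filter((r) => r.publishedAt === null);
  for (const row of pending) {
    await publishToCentrifugo(row.channel, row.payload);
    row.publishedAt = new Date(); // mark published only after delivery succeeds
  }
  return pending.length;
}
```

Marking rows only after a successful publish gives at-least-once delivery: a crash mid-loop means the row is re-published on the next poll, never dropped.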

code-intel

Image: Node 20 (Hono)
Port: 8010

A dedicated service for computationally expensive code intelligence operations:

  • Vector search — semantic nearest-neighbor queries against pgvector embeddings
  • RAG (Retrieval-Augmented Generation) — chunked context retrieval for LLM calls that need codebase awareness
  • Semantic code search — natural-language queries against indexed code symbols and documentation

Separating code intelligence into its own service prevents heavy embedding operations from affecting application response times. praetor-app calls code-intel over HTTP for any query that involves vector similarity.
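The similarity queries code-intel delegates to pgvector boil down to nearest-neighbor search over embedding vectors. A toy in-memory version (the 3-dimensional vectors and file IDs are stand-ins; real embeddings are high-dimensional outputs of an embedding model):

```typescript
// Cosine similarity between two vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Return the corpus entry whose embedding is closest to the query.
function nearest(query: number[], corpus: Map<string, number[]>): string {
  let best = "";
  let bestScore = -Infinity;
  for (const [id, vec] of corpus) {
    const score = cosineSimilarity(query, vec);
    if (score > bestScore) { bestScore = score; best = id; }
  }
  return best;
}

const corpus = new Map([
  ["auth.ts", [0.9, 0.1, 0.0]],
  ["graph.ts", [0.1, 0.9, 0.2]],
]);
// nearest([0.8, 0.2, 0.1], corpus) → "auth.ts"
```

In production this brute-force scan is replaced by a pgvector index query; the scoring logic is the same.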

postgres

Image: PostgreSQL 16
Port: 5432
Extensions: pgvector, pg_trgm, btree_gin, pgcrypto, pg_stat_statements

The primary data store for all application state. Stores:

  • Project graph nodes and edges (context_artifacts, context_artifact_dependencies)
  • Spec data, generation runs, Kit outputs
  • Event outbox and event logs
  • File objects under 5 MB (stored as BYTEA in file_objects)
  • LLM call logs, audit records, knowledge base entries

Row-Level Security (RLS) enforces tenant isolation at the database layer. All queries must set app.tenant_id via withTenant().
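A minimal sketch of what a withTenant() wrapper can look like under this pattern (the Client interface is a stand-in; the real @strata19/core implementation may differ):

```typescript
// Every query runs inside a transaction that first sets app.tenant_id,
// so Postgres RLS policies can filter rows by tenant.

interface Client {
  query(sql: string, params?: unknown[]): Promise<void>;
}

async function withTenant<T>(
  client: Client,
  tenantId: string,
  fn: (client: Client) => Promise<T>,
): Promise<T> {
  await client.query("BEGIN");
  try {
    // set_config(..., true) uses SET LOCAL semantics: the setting is
    // scoped to this transaction and disappears on COMMIT/ROLLBACK.
    await client.query("SELECT set_config('app.tenant_id', $1, true)", [tenantId]);
    const result = await fn(client);
    await client.query("COMMIT");
    return result;
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  }
}
```

Scoping the setting to the transaction matters with connection pooling: a pooled connection never leaks one tenant's ID into another tenant's queries.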

centrifugo

Image: Centrifugo v6
Ports: 8000 (publish API), 8001 (admin), 8080 (WebSocket/SockJS)

The event mesh. Centrifugo serves two roles:

  1. Internal event bus — praetor-app's event persistence worker publishes events from the event_outbox table to Centrifugo channels. Other internal services can subscribe to these channels.

  2. User-facing realtime — browser clients connect to Centrifugo over WebSocket (JWT-authenticated). Generation run progress, convergence score updates, and graph change notifications are delivered in real time.
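A connection token of the kind Centrifugo accepts is an HS256 JWT whose sub claim identifies the user. A minimal sketch using only node:crypto (the claim set and TTL are assumptions; in Praetor the token would come from the real auth flow):

```typescript
import { createHmac } from "node:crypto";

function b64url(data: Buffer): string {
  return data.toString("base64url");
}

// Build and sign an HS256 JWT: header.claims.signature, each part
// base64url-encoded, signature = HMAC-SHA256 over "header.claims".
function signConnectionToken(userId: string, secret: string, ttlSeconds = 3600): string {
  const header = b64url(Buffer.from(JSON.stringify({ alg: "HS256", typ: "JWT" })));
  const claims = b64url(Buffer.from(JSON.stringify({
    sub: userId, // Centrifugo reads the user ID from `sub`
    exp: Math.floor(Date.now() / 1000) + ttlSeconds,
  })));
  const signingInput = `${header}.${claims}`;
  const sig = createHmac("sha256", secret).update(signingInput).digest("base64url");
  return `${signingInput}.${sig}`;
}
```

The browser presents this token when opening the WebSocket; Centrifugo verifies the HMAC against its configured secret and rejects expired or tampered tokens.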

Events flow through the transactional outbox pattern: events are written to event_outbox inside database transactions, then the persistence worker picks them up and publishes to Centrifugo. This ensures no events are lost even if Centrifugo is temporarily unavailable.
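The write side of the pattern can be sketched like this: the domain update and the outbox row commit together, so an event exists exactly when its change does. FakeTx and the generation_runs table name are illustrative; only event_outbox comes from the schema above:

```typescript
type Stmt = { sql: string; params: unknown[] };

// Minimal stand-in for a database transaction that stages statements
// and commits them atomically.
class FakeTx {
  staged: Stmt[] = [];
  committed: Stmt[] = [];
  exec(sql: string, params: unknown[] = []) { this.staged.push({ sql, params }); }
  commit() { this.committed.push(...this.staged); this.staged = []; }
}

function recordRunProgress(tx: FakeTx, runId: string, progress: number) {
  tx.exec("UPDATE generation_runs SET progress = $1 WHERE id = $2", [progress, runId]);
  tx.exec(
    "INSERT INTO event_outbox (channel, payload) VALUES ($1, $2)",
    [`run:${runId}`, JSON.stringify({ progress })],
  );
  tx.commit(); // both rows land atomically; the worker publishes later
}
```

If the transaction rolls back, neither the state change nor the event is visible, which is what makes the outbox safe against a temporarily unavailable Centrifugo.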

redis

Image: Redis 7
Port: 6379

Three distinct uses:

  1. Centrifugo broker backend — Centrifugo uses Redis as its pub/sub broker, enabling multi-node deployments and channel history persistence.
  2. Application cache — frequently-read data (graph queries, pattern scores, tenant config) is cached with TTL.
  3. Session storage — Stytch session tokens and E2E test mode session state.

garage

Image: Garage v2
Port: 3900 (S3 API)

S3-compatible object storage for files larger than 5 MB. The StorageService in @strata19/core implements automatic size-based routing: small files go to Postgres BYTEA, large files go to Garage. This keeps the database lean while still supporting large artifact storage (code archives, analysis outputs, generated project archives).
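The routing rule reduces to a single threshold check. A sketch (the constant and names are illustrative, not the real StorageService):

```typescript
// Files under the limit are stored inline in Postgres BYTEA;
// everything at or above it goes to Garage via the S3 API.
const BYTEA_LIMIT = 5 * 1024 * 1024; // 5 MB

type Destination = "postgres" | "garage";

function routeFile(sizeBytes: number): Destination {
  return sizeBytes < BYTEA_LIMIT ? "postgres" : "garage";
}
```

Callers never choose a backend explicitly; they hand the bytes to the storage layer and the size decides.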

Garage runs as a single-node instance per Praetor deployment. For the S3 API, clients use forcePathStyle: true.

inngest

Image: Inngest (self-hosted)
Port: 8288

Durable workflow orchestration. Long-running operations are modeled as Inngest functions rather than HTTP request handlers:

  • Brownfield import pipeline (14 analysis steps)
  • Instance provisioning and lifecycle management
  • Codegen generation runs
  • Background convergence re-checks

Inngest persists function state to Postgres, providing automatic retry, step-level checkpointing, and a local development dashboard for inspecting workflow execution.
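The step-level checkpointing that makes these workflows durable can be illustrated with a toy step() helper: each step persists its result under an ID, so re-running the function after a crash replays from checkpoints instead of redoing work. This mimics the durable-step idea only; it is not Inngest's actual API, and the pipeline steps are invented:

```typescript
// Checkpoint store; in Praetor this state lives in Postgres.
const checkpoints = new Map<string, unknown>();
const executed: string[] = [];

async function step<T>(id: string, run: () => Promise<T>): Promise<T> {
  if (checkpoints.has(id)) return checkpoints.get(id) as T; // replay: skip work
  const result = await run();
  checkpoints.set(id, result);
  executed.push(id);
  return result;
}

// A workflow composed of durable steps: each line is a checkpoint boundary.
async function importPipeline(): Promise<string> {
  const repo = await step("clone-repo", async () => "repo-snapshot");
  const deps = await step("analyze-deps", async () => `${repo}:deps`);
  return step("write-report", async () => `${deps}:report`);
}
```

On a retry, completed steps return their stored results immediately, so only the step that failed (and those after it) actually execute again.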


Data Routing Rules

Data                      Goes To
Files < 5 MB              Postgres file_objects table (BYTEA)
Files ≥ 5 MB              Garage S3
Events (transactional)    Postgres event_outbox → Centrifugo via worker
Vector embeddings         Postgres pgvector columns, queried via code-intel
Session state             Redis
Long-running jobs         Inngest (state in Postgres)
LLM traces                Postgres llm_traces table

Network

All 7 containers share a private Docker network. External traffic reaches only praetor-app (port 4001). The Centrifugo WebSocket endpoint is proxied through praetor-app's Next.js rewrites rather than exposed directly.
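A hypothetical next.config.ts fragment showing what such a rewrite proxy can look like (the paths and the internal centrifugo hostname are assumptions, not Praetor's actual config):

```typescript
// Proxy the Centrifugo WebSocket endpoint through the app so only
// praetor-app's port is exposed externally.
const nextConfig = {
  async rewrites() {
    return [
      {
        source: "/connection/:path*",
        destination: "http://centrifugo:8080/connection/:path*",
      },
    ];
  },
};

export default nextConfig;
```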

In production Railway deployments, each service runs as a separate Railway service within a project, communicating over Railway's private network using internal hostnames.
