- Services: `database_postgres` (DB on `${PLANEXE_POSTGRES_PORT:-5432}`), `frontend_single_user` (UI on 7860), `worker_plan` (API on 8000), `frontend_multi_user` (UI on `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}`), plus DB workers (`worker_plan_database_1/2/3` by default; `worker_plan_database` in the `manual` profile), and `mcp_cloud` (MCP interface, stdio); `frontend_single_user` waits for the worker to be healthy and `frontend_multi_user` waits for Postgres health.
- Shared host files: `.env` and `./llm_config/` are mounted read-only; `./run` is bind-mounted so outputs persist; `.env` is also loaded via `env_file`.
- Postgres defaults to user/db/password `planexe`; override via env or `.env`; data lives in the `database_postgres_data` volume.
- Env defaults live in `docker-compose.yml` but can be overridden in `.env` or your shell (URLs, timeouts, run dirs, optional auth, and the opener URL). `develop.watch` syncs code/config for `worker_plan` and `frontend_single_user`; rebuild with `--no-cache` after big moves or dependency changes; the restart policy is `unless-stopped`.
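As a sketch, a minimal `.env` override might look like this (the variable names come from the defaults above; the values are purely illustrative):

```ini
# .env - illustrative overrides; every variable has a working default in docker-compose.yml
PLANEXE_POSTGRES_PORT=5433
PLANEXE_FRONTEND_MULTIUSER_PORT=5002
PLANEXE_HOST_RUN_DIR=/tmp/planexe-run
```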
- Up (single user):
docker compose up worker_plan frontend_single_user. - Up (multi user):
docker compose up frontend_multi_user database_postgres worker_plan worker_plan_database_1 worker_plan_database_2 worker_plan_database_3. - Up (MCP server):
docker compose up mcp_cloud(requiresdatabase_postgresto be running). - Down:
docker compose down(add--remove-orphansif stray containers linger). - Rebuild clean:
docker compose build --no-cache database_postgres worker_plan frontend_single_user frontend_multi_user worker_plan_database worker_plan_database_1 worker_plan_database_2 worker_plan_database_3 mcp_cloud. - UI: single user -> http://localhost:7860; multi user -> http://localhost:5001 after the stack is up.
- MCP: configure your MCP client to connect to the `mcp_cloud` container via stdio.
- Logs: `docker compose logs -f worker_plan` or `... frontend_single_user` or `... mcp_cloud`.
- One-off inside a container: `docker compose run --rm worker_plan python -m worker_plan_internal.fiction.fiction_writer` (use `exec` if the container is already running).
- Ensure `.env` and `llm_config/` exist; copy `.env.docker-example` to `.env` if you need a starter.
- Dependency hell: when one Python package requires version A of a dependency while another requires version B (or a different Python), `pip` cannot satisfy everything in one environment; the resolver loops, pins conflict, or the installed set breaks another part of the app. System-level deps (e.g. libssl) can also clash, and "fixes" often mean uninstalling or downgrading unrelated packages.
- I want to experiment with the `uv` package manager; to try it, install `uv` during the image build and replace the `pip install ...` lines with `uv pip install ...`. Compose keeps that change isolated per service, so it doesn't spill onto the other containers or the host Python.
- Compose solves this by isolating environments per service: each image pins its own base Python, OS libs, and `requirements.txt`, so the frontend and worker no longer fight over versions.
- Builds are reproducible: each `Dockerfile` installs a clean environment from scratch, so you avoid ghosts from previous virtualenvs or globally installed wheels.
- If a dependency change fails, you can rebuild from zero or switch base images without nuking your host Python setup.
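A hedged sketch of the `uv` experiment inside a service `Dockerfile`; the base image, paths, and install lines below are assumptions for illustration, not the repo's actual Dockerfile:

```dockerfile
# Hypothetical excerpt - not the repo's actual worker_plan/Dockerfile.
FROM python:3.12-slim
WORKDIR /app

# Install uv once during the image build...
RUN pip install --no-cache-dir uv

COPY pyproject.toml ./
# ...then swap `pip install .` for `uv pip install .`.
# --system targets the image's interpreter instead of requiring a venv.
RUN uv pip install --system .
```

Because each service builds its own image, trying `uv` in one Dockerfile leaves every other container (and the host Python) untouched.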
- Reusable local stack with consistent env/paths under `/app` in each container.
- Shared run dir: `PLANEXE_RUN_DIR=/app/run` in the containers, bound to `${PLANEXE_HOST_RUN_DIR:-${PWD}/run}` on the host so outputs persist.
- Postgres data volume: `database_postgres_data` keeps the database files outside the repo tree.
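The mounts above might be declared roughly like this in `docker-compose.yml` (a sketch only; the container-side paths and exact keys are assumptions, so check the real file):

```yaml
services:
  worker_plan:
    volumes:
      - ./.env:/app/.env:ro                       # shared config, read-only
      - ./llm_config:/app/llm_config:ro           # shared config, read-only
      - ${PLANEXE_HOST_RUN_DIR:-${PWD}/run}:/app/run   # outputs persist on the host

volumes:
  database_postgres_data:    # named volume; survives `docker compose down`
```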
- Purpose: storage in a Postgres database for future queue + event logging work; exposes `${PLANEXE_POSTGRES_PORT:-5432}` on the host, mapped to 5432 in the container.
- Build: `database_postgres/Dockerfile` (uses the official Postgres image).
- Env defaults: `PLANEXE_POSTGRES_USER=planexe`, `PLANEXE_POSTGRES_PASSWORD=planexe`, `PLANEXE_POSTGRES_DB=planexe`, `PLANEXE_POSTGRES_PORT=5432` (override with env/`.env`).
- Data/health: data in the named volume `database_postgres_data`; the healthcheck uses `pg_isready`.
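The `pg_isready` healthcheck is typically wired up like this (a sketch; the interval, timeout, and retry values here are illustrative, not the repo's actual settings):

```yaml
services:
  database_postgres:
    healthcheck:
      # pg_isready exits 0 once the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U ${PLANEXE_POSTGRES_USER:-planexe} -d ${PLANEXE_POSTGRES_DB:-planexe}"]
      interval: 5s
      timeout: 5s
      retries: 10
```

Services that declare `depends_on: { database_postgres: { condition: service_healthy } }` will wait for this check to pass before starting.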
The default PostgreSQL port is 5432. On developer machines, this port is often already occupied by a local PostgreSQL installation:
- macOS: Postgres.app, Homebrew PostgreSQL, or pgAdmin's bundled server
- Linux: System PostgreSQL installed via apt/yum/dnf
- Windows: PostgreSQL installer, pgAdmin, or other database tools
If port 5432 is in use, Docker will fail to start `database_postgres` with a "port already in use" error.

Solution: set `PLANEXE_POSTGRES_PORT` to a different value before starting:

```sh
export PLANEXE_POSTGRES_PORT=5433
docker compose up
```

Important: this only affects the HOST port mapping (how you access Postgres from your machine, e.g., via DBeaver or psql). Inside Docker, containers always communicate with each other on the internal port 5432; that port is hardcoded and not affected by `PLANEXE_POSTGRES_PORT`.
- Purpose: single-user Gradio UI; waits for a healthy worker and serves on port 7860. Does not use the database.
- Build: `frontend_single_user/Dockerfile`.
- Env defaults: `PLANEXE_WORKER_PLAN_URL=http://worker_plan:8000`, timeout, server host/port, optional password, optional `PLANEXE_OPEN_DIR_SERVER_URL` for the host opener.
- Volumes: mirrors the worker (`.env` ro, `llm_config/` ro, `run/` rw) so both share config and outputs.
- Watch: sync frontend code, shared API code in `worker_plan/`, and config files; rebuild on `worker_plan/pyproject.toml`; restart on compose edits.
- Purpose: Multi-user Flask UI with admin views (tasks/events/nonce/workers) backed by Postgres.
- Build:
frontend_multi_user/Dockerfile. - Env defaults: DB host
database_postgres, port5432, db/user/passwordplanexe(followsPLANEXE_POSTGRES_*); admin credentials must be provided viaPLANEXE_FRONTEND_MULTIUSER_ADMIN_USERNAME/PLANEXE_FRONTEND_MULTIUSER_ADMIN_PASSWORD(compose will fail if missing); container listens on fixed port5000, host maps${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}. - Health: depends on
database_postgreshealth; its own healthcheck hits/healthcheckon port 5000.
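Compose expresses "fail if missing" with the `${VAR:?message}` interpolation form; a sketch of how the required admin credentials are typically declared (the error messages and exact layout are illustrative):

```yaml
services:
  frontend_multi_user:
    environment:
      # ${VAR:?msg} aborts `docker compose up` with msg when VAR is unset or empty
      PLANEXE_FRONTEND_MULTIUSER_ADMIN_USERNAME: ${PLANEXE_FRONTEND_MULTIUSER_ADMIN_USERNAME:?set it in .env}
      PLANEXE_FRONTEND_MULTIUSER_ADMIN_PASSWORD: ${PLANEXE_FRONTEND_MULTIUSER_ADMIN_PASSWORD:?set it in .env}
    ports:
      - "${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}:5000"   # host port varies, container port is fixed
```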
- Purpose: runs the PlanExe pipeline and exposes the API on port 8000; the frontend depends on its health.
- Build:
worker_plan/Dockerfile. - Env:
PLANEXE_CONFIG_PATH=/app,PLANEXE_RUN_DIR=/app/run,PLANEXE_HOST_RUN_DIR=${PWD}/run,PLANEXE_WORKER_RELAY_PROCESS_OUTPUT=true. - Health:
http://localhost:8000/healthcheckchecked via the compose healthcheck. - Volumes:
.env(ro),llm_config/(ro),run/(rw). - Watch: sync
worker_plan/into/app/worker_plan, rebuild onworker_plan/pyproject.toml, restart on compose edits.
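The watch behavior above maps onto Compose's `develop.watch` actions; a sketch under the assumption that the three bullets correspond to the standard `sync`, `rebuild`, and `restart` actions (the exact entries in the real `docker-compose.yml` may differ):

```yaml
services:
  worker_plan:
    develop:
      watch:
        - action: sync            # copy source edits into the running container
          path: ./worker_plan
          target: /app/worker_plan
        - action: rebuild         # dependency changes need a fresh image
          path: ./worker_plan/pyproject.toml
        - action: restart         # restart the container on config edits
          path: ./docker-compose.yml
```

Run it with `docker compose up --watch` (or `docker compose watch`) so these actions take effect.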
- Purpose: polls `PlanItem` rows in Postgres, marks them as processing, runs the PlanExe pipeline, and writes progress/events back to the DB; no HTTP port is exposed.
- Build: `worker_plan_database/Dockerfile` (ships the `worker_plan` code, the shared `database_api` models, and this worker subclass).
- Depends on: `database_postgres` health.
- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (falls back to `database_postgres` + `planexe`/`planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_RUN_DIR=/app/run`; MachAI confirmation URLs default to `https://example.com/iframe_generator_confirmation` for both `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_PRODUCTION_URL` and `PLANEXE_IFRAME_GENERATOR_CONFIRMATION_DEVELOPMENT_URL` (override with real endpoints).
- Volumes: `.env` (ro), `llm_config/` (ro), `run/` (rw for pipeline output).
- Entrypoint: `python -m worker_plan_database.app` (runs the long-lived poller loop).
- Multiple workers: compose defines `worker_plan_database_1/2/3` with `PLANEXE_WORKER_ID` set to `1`/`2`/`3`. Start the trio with: `docker compose up -d worker_plan_database_1 worker_plan_database_2 worker_plan_database_3`.
- (Use `worker_plan_database` alone only via the profile: `docker compose --profile manual up worker_plan_database`.)
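The URI derivation described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: the function name `database_uri` and the `psycopg2` driver suffix are assumptions; only the variable names and fallbacks come from the compose defaults.

```python
import os

def database_uri(env=os.environ):
    """Build a SQLAlchemy URI from PLANEXE_POSTGRES_* variables,
    falling back to database_postgres + planexe/planexe on 5432
    as the compose defaults describe. Illustrative sketch only."""
    host = env.get("PLANEXE_POSTGRES_HOST", "database_postgres")
    port = env.get("PLANEXE_POSTGRES_PORT", "5432")
    db = env.get("PLANEXE_POSTGRES_DB", "planexe")
    user = env.get("PLANEXE_POSTGRES_USER", "planexe")
    password = env.get("PLANEXE_POSTGRES_PASSWORD", "planexe")
    return f"postgresql+psycopg2://{user}:{password}@{host}:{port}/{db}"
```

With no overrides this yields `postgresql+psycopg2://planexe:planexe@database_postgres:5432/planexe`, which is why the workers connect out of the box inside the compose network.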
- Purpose: Model Context Protocol (MCP) server that provides a standardized interface for AI agents and developer tools to interact with PlanExe. Communicates with `worker_plan_database` via the shared Postgres database.
- Build: `mcp_cloud/Dockerfile` (ships the shared `database_api` models and the MCP server implementation).
- Depends on: `database_postgres` health.
- Env defaults: derives `SQLALCHEMY_DATABASE_URI` from `PLANEXE_POSTGRES_HOST|PORT|DB|USER|PASSWORD` (falls back to `database_postgres` + `planexe`/`planexe` on 5432); `PLANEXE_CONFIG_PATH=/app`, `PLANEXE_RUN_DIR=/app/run`; `PLANEXE_MCP_PUBLIC_BASE_URL=http://localhost:8001` for report download URLs.
- Volumes: `run/` (rw for artifact access).
- Entrypoint: `python -m mcp_cloud.app` (runs the MCP server over stdio).
- Communication: the server communicates over stdio (standard input/output) following the MCP protocol. Configure your MCP client to connect to this container. The container runs with `stdin_open: true` and `tty: true` to enable stdio communication.
- MCP tools: implements the specification in `docs/mcp/planexe_mcp_interface.md`, including session management, artifact operations, and event streaming.
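For a stdio MCP server, a client config might look like the following sketch. The `mcpServers` layout follows the common Claude Desktop-style schema and the invocation is an assumption; your client's config format and the right way to attach to the container may differ:

```json
{
  "mcpServers": {
    "planexe": {
      "command": "docker",
      "args": ["compose", "run", "--rm", "-i", "mcp_cloud"]
    }
  }
}
```

The `-i` flag keeps stdin open so the client can speak the MCP protocol over the container's stdio.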
- Ports: host `8000` -> `worker_plan`, `7860` -> `frontend_single_user`, `${PLANEXE_FRONTEND_MULTIUSER_PORT:-5001}` -> `frontend_multi_user`, `PLANEXE_POSTGRES_PORT` (default 5432) -> `database_postgres`; change mappings in `docker-compose.yml` if needed.
- `.env` must exist before `docker compose up`; it is both loaded and mounted read-only. The same goes for `llm_config/`. If missing, start from `.env.docker-example`.
- Host opener: set `PLANEXE_OPEN_DIR_SERVER_URL` so the frontend can reach your host opener service (see `docs/docker.md` for OS-specific URLs and optional `extra_hosts` on Linux).
- To relocate outputs, set `PLANEXE_HOST_RUN_DIR` (or edit the bind mount) to another host path.
- Database: connect on `localhost:${PLANEXE_POSTGRES_PORT:-5432}` with `planexe`/`planexe` by default; data persists via the `database_postgres_data` volume.
Snapshot from `docker compose ps` on a live stack with two numbered DB workers; your timestamps, ports, and container names may differ:

```
PROMPT> docker compose ps
NAME                     IMAGE                            COMMAND                  SERVICE                  CREATED          STATUS                    PORTS
database_postgres        planexe-database_postgres        "docker-entrypoint.s…"   database_postgres        8 hours ago      Up 8 hours (healthy)      0.0.0.0:5433->5432/tcp, [::]:5433->5432/tcp
frontend_multi_user      planexe-frontend_multi_user      "python /app/fronten…"   frontend_multi_user      8 hours ago      Up 2 minutes (healthy)    0.0.0.0:5001->5000/tcp, [::]:5001->5000/tcp
worker_plan              planexe-worker_plan              "uvicorn worker_plan…"   worker_plan              2 minutes ago    Up 2 minutes (healthy)    0.0.0.0:8000->8000/tcp, [::]:8000->8000/tcp
worker_plan_database_1   planexe-worker_plan_database_1   "python -m worker_pl…"   worker_plan_database_1   15 seconds ago   Up 13 seconds
worker_plan_database_2   planexe-worker_plan_database_2   "python -m worker_pl…"   worker_plan_database_2   15 seconds ago   Up 13 seconds
```