description: Hosting the headless openhuman-core in the cloud - DigitalOcean App Platform or Docker Compose on any VPS.
icon: cloud

Cloud deployment

OpenHuman is a desktop app, but its Rust core (openhuman-core) is a headless JSON-RPC server that can be hosted in the cloud. Deploying the core separately is useful for:

  • Multi-device access: point several desktop clients at the same hosted core
  • Internal testers who don't have a local Rust toolchain
  • Long-running cron jobs / webhooks that should outlive a laptop session

This guide covers three deploy paths, easiest first:

  1. DigitalOcean App Platform: one-click
  2. DigitalOcean App Platform: manual via doctl
  3. Any VPS via Docker Compose

What gets deployed in every path: a single container running openhuman-core serve on port 7788, behind the provider's TLS. The desktop app already knows how to talk to a remote core: set OPENHUMAN_CORE_RPC_URL=https://your-host/rpc and OPENHUMAN_CORE_TOKEN=... in app/.env.local and launch it.


Single source of truth for the bearer token

Every /rpc call carries Authorization: Bearer <token>. The core has two ways to load that token at startup (src/core/auth.rs):

  1. OPENHUMAN_CORE_TOKEN environment variable — pre-seeded by the caller (Tauri shell, Docker, App Platform, systemd unit, …). The core uses this value as-is and never writes a file.
  2. {workspace}/core.token file — generated by the core on first boot only when OPENHUMAN_CORE_TOKEN is unset. Standalone openhuman core run uses this so CLI clients can cat the file.

Rule of thumb for any remote or dockerized deploy: always set OPENHUMAN_CORE_TOKEN. Do not rely on core.token in a container: ephemeral filesystems lose it on redeploy, and any client trying to read the file from outside the container will get a stale or empty value. The two paths are deliberately mutually exclusive at startup; mixing them is the most common cause of "the dashboard gets 401 after I redeployed".
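A minimal sketch of minting the token once and deriving the redacted form (the redaction mirrors what print-core-token.sh --redact does, per the description below; the script itself is the authoritative tool):

```shell
# Generate the token once, store it somewhere durable (secret store, CI
# secrets), and inject the exact same value into every deploy and client.
# 32 random bytes -> 64 hex characters.
TOKEN="$(openssl rand -hex 32)"

# Redacted form, safe for logs: first 8 hex chars plus an ellipsis.
printf '%.8s…\n' "$TOKEN"
```

Reusing one generated value everywhere is the point: the server env, app/.env.local on each desktop, and any CI secret must all carry the identical string or clients will see 401s.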

To check what the running core is using, run scripts/print-core-token.sh on the host (or inside the container with docker compose exec):

scripts/print-core-token.sh --where     # prints 'env' or 'file:/path'
scripts/print-core-token.sh --redact    # first 8 hex chars + '…' (safe for logs)
scripts/print-core-token.sh             # full value (pipe straight into a client)

The desktop app's first-run picker also exposes a Test connection button next to the Core RPC URL and token fields. It fires core.ping against the URL with the typed token and reports Connected ✓ / Auth failed / Unreachable inline before persisting the configuration.


What you need before you start

  • OPENHUMAN_CORE_TOKEN (required): Bearer token clients send to /rpc. Generate with openssl rand -hex 32. Anyone with this token can drive the core.
  • BACKEND_URL (required): Tinyhumans backend the core talks to (https://api.tinyhumans.ai for prod).
  • OPENHUMAN_APP_ENV (optional): production or staging. Defaults to production.
  • OPENHUMAN_CORE_HOST (optional): Defaults to 0.0.0.0 in the container.
  • OPENHUMAN_CORE_PORT (optional): Defaults to 7788.
  • RUST_LOG (optional): info is fine; debug for triage.

Endpoints exposed by the running container:

  • GET /health: public liveness probe. Used by every deploy path's healthcheck.
  • POST /rpc: bearer-protected JSON-RPC entrypoint.
  • GET /events and GET /ws/dictation: public streaming channels.
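For illustration, here is roughly what an authenticated call to /rpc looks like on the wire, using the core.ping method the desktop picker's connection test fires (a sketch: the JSON-RPC 2.0 envelope and the empty params object are assumptions, not a documented wire contract):

```
POST /rpc HTTP/1.1
Host: core.example.com
Authorization: Bearer <token>
Content-Type: application/json

{"jsonrpc": "2.0", "id": 1, "method": "core.ping", "params": {}}
```

A missing or wrong Authorization header is what produces the 401s described in the token section above.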

The OPENHUMAN_WORKSPACE directory (/home/openhuman/.openhuman inside the container) holds the core's config, sqlite databases, and skill state. Mount it on a persistent volume in every production deploy or you will lose data on restart.


1. DigitalOcean App Platform: one-click

Click the button below to create a new App Platform application from this repository's .do/app.yaml:

Deploy to DO

Then, in the App Platform UI, before the first deploy completes:

  1. Open the Settings → App-Level Environment Variables tab.
  2. Replace the placeholder OPENHUMAN_CORE_TOKEN value with a strong secret (openssl rand -hex 32). Mark it encrypted.
  3. If you are deploying staging, change OPENHUMAN_APP_ENV to staging and BACKEND_URL to https://staging-api.tinyhumans.ai.
  4. Hit Save. App Platform redeploys with the new secret.
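For reference, the corresponding entries in .do/app.yaml look roughly like this (a sketch: key names follow the App Platform app spec, values are placeholders; check the file in this repo for the authoritative shape):

```yaml
envs:
  - key: OPENHUMAN_CORE_TOKEN
    value: CHANGE_ME          # replace in the UI; type SECRET stores it encrypted
    type: SECRET
  - key: BACKEND_URL
    value: https://api.tinyhumans.ai
  - key: OPENHUMAN_APP_ENV
    value: production
```

For a staging app, swap in https://staging-api.tinyhumans.ai and staging as described above.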

App Platform handles TLS, restart-on-crash, log streaming, and rolling redeploys on git push (set deploy_on_push: true in .do/app.yaml to opt in).

Persistence note: App Platform Basic does not provide block storage. The core's workspace lives in the container's ephemeral filesystem and is lost on redeploy. For durable storage, attach a managed database or upgrade to a tier that supports volumes. See the Compose path for a self-host alternative with persistent volumes out of the box.


2. DigitalOcean App Platform: manual via doctl

If you'd rather not click through the UI:

# One-time: install doctl and authenticate.
doctl auth init

# Edit .do/app.yaml - set OPENHUMAN_CORE_TOKEN to a real value (or pass it in
# at create time via --spec with envsubst). Then:
doctl apps create --spec .do/app.yaml

# Watch the build:
doctl apps list
doctl apps logs <app-id> --type build --follow

Update an existing app after editing the spec:

doctl apps update <app-id> --spec .do/app.yaml

3. Any VPS via Docker Compose

Works on any host with Docker Engine ≥ 24 and the Compose plugin: a DigitalOcean Droplet, Hetzner, Linode, EC2, or a home server.

Each production release publishes a multi-tagged image to GHCR:

docker pull ghcr.io/tinyhumansai/openhuman-core:latest        # tracks the latest prod cut
docker pull ghcr.io/tinyhumansai/openhuman-core:v1.2.4        # pinned by GitHub Release tag
docker pull ghcr.io/tinyhumansai/openhuman-core:1.2.4         # pinned by SemVer

The image is linux/amd64 only. On arm64 hosts, pull the standalone tarball attached to the same GitHub Release (openhuman-core-<version>-aarch64-unknown-linux-gnu.tar.gz) or build the image from source on an arm64 builder.

Quick run with a published image:

docker run -d --name openhuman-core -p 7788:7788 \
  -e OPENHUMAN_CORE_TOKEN="$(openssl rand -hex 32)" \
  -e BACKEND_URL=https://api.tinyhumans.ai \
  -e OPENHUMAN_APP_ENV=production \
  -v openhuman-workspace:/home/openhuman/.openhuman \
  ghcr.io/tinyhumansai/openhuman-core:latest

Or use the in-repo Compose file, which builds the image locally from the Dockerfile. To consume the published image instead, switch the image: field in docker-compose.yml to ghcr.io/tinyhumansai/openhuman-core:latest:

# On the server:
git clone https://github.com/tinyhumansai/openhuman.git
cd openhuman

# Configure secrets:
cp .env.example .env
# Edit .env - at minimum:
#   BACKEND_URL=https://api.tinyhumans.ai
#   OPENHUMAN_CORE_TOKEN=<openssl rand -hex 32>
#   OPENHUMAN_APP_ENV=production

# Build and start:
docker compose up -d

# Verify:
docker compose ps
curl -fsS http://localhost:7788/health

Headless install without Docker

If you can't run Docker on the host, grab the standalone CLI tarball attached to the latest GitHub Release:

# Pick the tarball that matches your host arch.
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)  TARGET=x86_64-unknown-linux-gnu  ;;
  aarch64) TARGET=aarch64-unknown-linux-gnu ;;
  *) echo "Unsupported arch: $ARCH"; exit 1 ;;
esac
VERSION=1.2.4   # set to the release you want
curl -fsSL "https://github.com/tinyhumansai/openhuman/releases/download/v${VERSION}/openhuman-core-${VERSION}-${TARGET}.tar.gz" \
  | tar -xz -C /usr/local/bin
openhuman-core --version

Then run openhuman-core serve under your service manager of choice (systemd, supervisord, …) with the same environment variables documented above.
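For systemd, a minimal unit as a sketch (the binary path, user, and EnvironmentFile location are assumptions; adjust to your host):

```ini
# /etc/systemd/system/openhuman.service
[Unit]
Description=OpenHuman core (headless JSON-RPC server)
After=network-online.target
Wants=network-online.target

[Service]
User=openhuman
# BACKEND_URL, OPENHUMAN_CORE_TOKEN, OPENHUMAN_APP_ENV, RUST_LOG live here:
EnvironmentFile=/etc/openhuman/core.env
ExecStart=/usr/local/bin/openhuman-core serve
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

Restart=always also lets the supervisor restart strategy from the next section work: stage the new binary, then systemctl restart openhuman.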

Headless self-update contract

Headless deployments should treat openhuman.update_apply as the safe primitive: it downloads the release asset, writes it atomically next to the current binary, and returns. Nothing exits automatically.

openhuman.update_run follows config.update.restart_strategy:

  • self_replace (default): stage the binary, publish an in-process restart request, and let the running core respawn itself.
  • supervisor: stage the binary and return restart_requested=false. Your outer service manager must restart the process.

For long-running Linux services, set:

[update]
restart_strategy = "supervisor"
rpc_mutations_enabled = false

or the equivalent env vars:

OPENHUMAN_AUTO_UPDATE_RESTART_STRATEGY=supervisor
OPENHUMAN_AUTO_UPDATE_RPC_MUTATIONS_ENABLED=false

Recommended systemd stance:

Restart=always
ExecReload=/bin/kill -HUP $MAINPID

Operator flow:

  1. Call openhuman.update_check to discover a release.
  2. Configure restart_strategy = "supervisor" in your update.toml (or set OPENHUMAN_AUTO_UPDATE_RESTART_STRATEGY=supervisor) so the core stages the new binary without trying to re-exec itself, then call openhuman.update_apply or openhuman.update_run. restart_strategy is a configuration setting, not an RPC parameter.
  3. Restart the unit explicitly: systemctl restart openhuman.

If download or staging fails, the running binary is left in place and no restart is requested. If a staged binary proves bad after restart, roll back by restoring the previous binary from your package manager, image tag, or release artifact and restarting the supervisor again.

The Compose file (docker-compose.yml) maps the core on :7788, mounts a named volume openhuman-workspace for persistence, and sets restart: unless-stopped so the core comes back after host reboots.
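A sketch of what that file amounts to (values are taken from this guide; treat the in-repo docker-compose.yml as the source of truth, and note the healthcheck assumes curl is present in the image):

```yaml
services:
  openhuman-core:
    image: ghcr.io/tinyhumansai/openhuman-core:latest   # or build: . to build locally
    ports:
      - "7788:7788"
    env_file: .env            # BACKEND_URL, OPENHUMAN_CORE_TOKEN, OPENHUMAN_APP_ENV
    volumes:
      - openhuman-workspace:/home/openhuman/.openhuman
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-fsS", "http://localhost:7788/health"]
      interval: 30s

volumes:
  openhuman-workspace:
```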

Updating

git pull
docker compose build
docker compose up -d

For RPC-exposed production deployments, prefer leaving mutating update RPCs disabled (OPENHUMAN_AUTO_UPDATE_RPC_MUTATIONS_ENABLED=false) and perform rollouts through your existing image tag or package-management flow instead.

Logs

docker compose logs -f openhuman-core

Rotating the bearer token

OPENHUMAN_CORE_TOKEN is the only thing standing between the public internet and full RPC access. Rotate it on a schedule and after any suspected leak:

# 1. Generate a new token and update the server-side .env.
openssl rand -hex 32 > /tmp/new-token
sed -i.bak "s|^OPENHUMAN_CORE_TOKEN=.*|OPENHUMAN_CORE_TOKEN=$(cat /tmp/new-token)|" .env
rm /tmp/new-token .env.bak

# 2. Restart the container so the new value reaches the core process.
docker compose up -d --force-recreate openhuman-core

# 3. Confirm the running container is using the new token (redacted).
docker compose exec openhuman-core /bin/sh -c \
  'echo -n "$OPENHUMAN_CORE_TOKEN" | head -c 8; echo "…"'

# 4. Update every desktop client (Switch mode → re-paste in the picker, or
# edit OPENHUMAN_CORE_TOKEN in app/.env.local and relaunch). Clients that
# still hold the old token will get HTTP 401 on the next /rpc call — that
# is expected, not a regression.

For App Platform, do the same in Settings → App-Level Environment Variables: edit the OPENHUMAN_CORE_TOKEN secret and let App Platform redeploy. There is no separate token file to delete; the env var is the only state.

Putting it behind TLS

Use Caddy, nginx, or Traefik as a reverse proxy in front of :7788. A minimal Caddyfile:

core.example.com {
  reverse_proxy localhost:7788
}
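If you prefer nginx, a roughly equivalent server block looks like this (a sketch: certificate paths are placeholders, and the Upgrade/Connection headers matter because /ws/dictation is a WebSocket):

```nginx
server {
  listen 443 ssl;
  server_name core.example.com;
  ssl_certificate     /etc/ssl/certs/core.example.com.pem;    # placeholder
  ssl_certificate_key /etc/ssl/private/core.example.com.key;  # placeholder

  location / {
    proxy_pass http://127.0.0.1:7788;
    proxy_http_version 1.1;
    proxy_set_header Host $host;
    proxy_set_header Upgrade $http_upgrade;   # WebSocket upgrade for /ws/dictation
    proxy_set_header Connection "upgrade";
    proxy_buffering off;                      # deliver /events chunks promptly
    proxy_read_timeout 1h;                    # keep long-lived streams open
  }
}
```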

Pointing the desktop app at a hosted core

In the desktop app's environment file (app/.env.local):

# Use the hosted core instead of spawning a local sidecar.
OPENHUMAN_CORE_RUN_MODE=external
OPENHUMAN_CORE_RPC_URL=https://core.example.com/rpc
OPENHUMAN_CORE_TOKEN=<the same token you set on the server>

Restart the desktop app. The provider chain in App.tsx will route all RPC calls to the remote core; nothing else changes.


Smoke test

The repo ships .github/workflows/deploy-smoke.yml, which runs on every PR that touches the deploy artifacts. It builds the Docker image, boots it, and polls /health, so a regression in the cloud deploy path fails CI before it lands on main.

To run the same check locally:

docker build -t openhuman-core:smoke .
docker run -d --name oh-smoke -p 7788:7788 \
  -e OPENHUMAN_CORE_TOKEN=smoke-test-token \
  openhuman-core:smoke
# Wait ~15s for the binary to come up, then:
curl -fsS http://localhost:7788/health
docker rm -f oh-smoke
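Instead of guessing a fixed 15-second sleep, you can poll /health until it answers. A small POSIX sh sketch (assumes curl on the host; wait_for_health is a hypothetical helper, not part of the repo):

```shell
# Retry GET <url> once a second until it succeeds or <tries> attempts elapse.
wait_for_health() {
  url="$1"; tries="${2:-30}"; i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
  return 0
}

# Usage against the smoke container:
#   wait_for_health http://localhost:7788/health 30 && echo "core is up"
```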