---
description: Hosting the headless openhuman-core in the cloud - DigitalOcean App Platform or Docker Compose on any VPS.
icon: cloud
---
OpenHuman is a desktop app, but its Rust core (openhuman-core) is a
headless JSON-RPC server that can be hosted in the cloud. Deploying the core
separately is useful for:
- Multi-device access: point several desktop clients at the same hosted core
- Internal testers without local Rust toolchains
- Long-running cron jobs / webhooks that should outlive a laptop session
This guide covers three deploy paths, easiest first:
- DigitalOcean App Platform: one-click
- DigitalOcean App Platform: manual via doctl
- Any VPS via Docker Compose
What gets deployed in every path: a single container running
`openhuman-core serve` on port 7788, behind the provider's TLS. The desktop
app already knows how to talk to a remote core: set
`OPENHUMAN_CORE_RPC_URL=https://your-host/rpc` and `OPENHUMAN_CORE_TOKEN=...`
in `app/.env.local` and launch.
Every /rpc call carries Authorization: Bearer <token>. The core has two
ways to load that token at startup (src/core/auth.rs):
- `OPENHUMAN_CORE_TOKEN` environment variable — pre-seeded by the caller (Tauri shell, Docker, App Platform, systemd unit, …). The core uses this value as-is and never writes a file.
- `{workspace}/core.token` file — generated by the core on first boot, and only when `OPENHUMAN_CORE_TOKEN` is unset. Standalone `openhuman core run` uses this so CLI clients can `cat` the file.
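Illustratively, the two paths look like this (the standalone workspace location is assumed to be `~/.openhuman` here; adjust for your install):

```bash
# Path 1 - remote / dockerized: the caller seeds the token; the core never writes a file.
OPENHUMAN_CORE_TOKEN="$(openssl rand -hex 32)" openhuman-core serve

# Path 2 - standalone CLI: no env var set, so the core mints {workspace}/core.token on first boot.
openhuman core run &
sleep 2   # give it a moment to boot
cat "$HOME/.openhuman/core.token"
```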
Rule of thumb for any remote / dockerized deploy: always set
OPENHUMAN_CORE_TOKEN. Do not rely on core.token in a container —
ephemeral filesystems lose it on redeploy, and any client trying to read the
file from outside the container will get a stale or empty value. The two
paths are deliberately mutually exclusive at startup; mixing them is the most
common reason behind "the dashboard gets 401 after I redeployed".
To check what the running core is using, run scripts/print-core-token.sh
on the host (or inside the container with docker compose exec):
```bash
scripts/print-core-token.sh --where   # prints 'env' or 'file:/path'
scripts/print-core-token.sh --redact  # first 8 hex chars + '…' (safe for logs)
scripts/print-core-token.sh           # full value (pipe straight into a client)
```

The desktop app's first-run picker also exposes a Test connection button
next to the Core RPC URL + token fields, which fires core.ping against the
URL with the typed token and reports Connected ✓ / Auth failed /
Unreachable inline before persisting the configuration.
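The same check works from any shell with curl; the JSON-RPC envelope below is a sketch (the exact parameters for core.ping may differ), but the bearer header is what matters:

```bash
TOKEN="$(scripts/print-core-token.sh)"   # or paste the value you configured
curl -fsS https://your-host/rpc \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"core.ping"}'
```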
| Setting | Required | Notes |
|---|---|---|
| `OPENHUMAN_CORE_TOKEN` | yes | Bearer token clients send to `/rpc`. Generate with `openssl rand -hex 32`. Anyone with this token can drive the core. |
| `BACKEND_URL` | yes | Tinyhumans backend the core talks to (`https://api.tinyhumans.ai` for prod). |
| `OPENHUMAN_APP_ENV` | no | `production` or `staging`. Defaults to `production`. |
| `OPENHUMAN_CORE_HOST` | no | Defaults to `0.0.0.0` in the container. |
| `OPENHUMAN_CORE_PORT` | no | Defaults to `7788`. |
| `RUST_LOG` | no | `info` is fine; `debug` for triage. |
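Put together, a server-side environment sketch with the optional settings spelled out (values are placeholders; the in-repo `.env.example` is the starting template):

```bash
# Server-side .env - placeholder values
OPENHUMAN_CORE_TOKEN=<paste the output of: openssl rand -hex 32>
BACKEND_URL=https://api.tinyhumans.ai
OPENHUMAN_APP_ENV=production
# Optional; shown with their defaults.
OPENHUMAN_CORE_HOST=0.0.0.0
OPENHUMAN_CORE_PORT=7788
RUST_LOG=info
```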
Endpoints exposed by the running container:
- `GET /health`: public liveness probe. Used by every deploy path's healthcheck.
- `POST /rpc`: bearer-protected JSON-RPC entrypoint.
- `GET /events`, `GET /ws/dictation`: public streaming channels.
The OPENHUMAN_WORKSPACE directory (/home/openhuman/.openhuman inside the
container) holds the core's config, sqlite databases, and skill state. Mount
it on a persistent volume in every production deploy or you will lose data on
restart.
Click the button below to create a new App Platform application from this
repository's .do/app.yaml:
Then, in the App Platform UI, before the first deploy completes:
- Open the Settings → App-Level Environment Variables tab.
- Replace the placeholder `OPENHUMAN_CORE_TOKEN` value with a strong secret (`openssl rand -hex 32`). Mark it encrypted.
- If you are deploying staging, change `OPENHUMAN_APP_ENV` to `staging` and `BACKEND_URL` to `https://staging-api.tinyhumans.ai`.
- Hit Save. App Platform redeploys with the new secret.
App Platform handles TLS, restart-on-crash, log streaming, and rolling
redeploys on git push (set `deploy_on_push: true` in `.do/app.yaml` to
opt in).
Persistence note: App Platform Basic does not provide block storage. The core's workspace lives in the container's ephemeral filesystem and is lost on redeploy. For durable storage, attach a managed database or upgrade to a tier that supports volumes. See the Compose path for a self-host alternative with persistent volumes out of the box.
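For orientation, here is a trimmed sketch of what the relevant parts of `.do/app.yaml` look like in DigitalOcean's app-spec format; the in-repo file is authoritative and may differ in detail:

```yaml
name: openhuman-core
services:
  - name: core
    dockerfile_path: Dockerfile
    github:
      repo: tinyhumansai/openhuman
      branch: main
      deploy_on_push: true
    http_port: 7788
    health_check:
      http_path: /health
    envs:
      - key: OPENHUMAN_CORE_TOKEN
        type: SECRET
        value: replace-me
      - key: BACKEND_URL
        value: https://api.tinyhumans.ai
      - key: OPENHUMAN_APP_ENV
        value: production
```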
If you'd rather not click through the UI:

```bash
# One-time: install doctl and authenticate.
doctl auth init

# Edit .do/app.yaml - set OPENHUMAN_CORE_TOKEN to a real value (or pass it in
# at create time via --spec with envsubst). Then:
doctl apps create --spec .do/app.yaml

# Watch the build:
doctl apps list
doctl apps logs <app-id> --type build --follow
```
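To keep the real token out of the committed spec, one option is to substitute it at create time; this sketch assumes you put a `${OPENHUMAN_CORE_TOKEN}` placeholder in the spec:

```bash
export OPENHUMAN_CORE_TOKEN="$(openssl rand -hex 32)"
envsubst '${OPENHUMAN_CORE_TOKEN}' < .do/app.yaml > /tmp/app.yaml
doctl apps create --spec /tmp/app.yaml
rm /tmp/app.yaml
```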
Update an existing app after editing the spec:

```bash
doctl apps update <app-id> --spec .do/app.yaml
```

Works on any host with Docker Engine ≥ 24 and the Compose plugin: a DigitalOcean Droplet, Hetzner, Linode, EC2, or a home server.
Each production release publishes a multi-tagged image to GHCR:
```bash
docker pull ghcr.io/tinyhumansai/openhuman-core:latest  # tracks the latest prod cut
docker pull ghcr.io/tinyhumansai/openhuman-core:v1.2.4  # pinned by GitHub Release tag
docker pull ghcr.io/tinyhumansai/openhuman-core:1.2.4   # pinned by SemVer
```

The image is linux/amd64. arm64 hosts pull the standalone tarball
attached to the same GitHub Release (`openhuman-core-<version>-aarch64-unknown-linux-gnu.tar.gz`)
or build the image from source on an arm64 builder.
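Building from source on an arm64 box is a plain `docker build` against the in-repo Dockerfile; a sketch (the local tag name is arbitrary):

```bash
git clone https://github.com/tinyhumansai/openhuman.git
cd openhuman
docker build -t openhuman-core:arm64 .
```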
Quick run with a published image:
```bash
docker run -d --name openhuman-core -p 7788:7788 \
  -e OPENHUMAN_CORE_TOKEN="$(openssl rand -hex 32)" \
  -e BACKEND_URL=https://api.tinyhumans.ai \
  -e OPENHUMAN_APP_ENV=production \
  -v openhuman-workspace:/home/openhuman/.openhuman \
  ghcr.io/tinyhumansai/openhuman-core:latest
```

Or use the in-repo Compose file (it still builds the image locally from
`Dockerfile`; switch the `image:` field to `ghcr.io/tinyhumansai/openhuman-core:latest`
in `docker-compose.yml` to consume the published image instead):
```bash
# On the server:
git clone https://github.com/tinyhumansai/openhuman.git
cd openhuman

# Configure secrets:
cp .env.example .env
# Edit .env - at minimum:
#   BACKEND_URL=https://api.tinyhumans.ai
#   OPENHUMAN_CORE_TOKEN=<openssl rand -hex 32>
#   OPENHUMAN_APP_ENV=production

# Build and start:
docker compose up -d

# Verify:
docker compose ps
curl -fsS http://localhost:7788/health
```

If you can't run Docker on the host, grab the standalone CLI tarball attached to the latest GitHub Release:
```bash
# Pick the tarball that matches your host arch.
ARCH="$(uname -m)"
case "$ARCH" in
  x86_64)  TARGET=x86_64-unknown-linux-gnu ;;
  aarch64) TARGET=aarch64-unknown-linux-gnu ;;
  *) echo "Unsupported arch: $ARCH"; exit 1 ;;
esac

VERSION=1.2.4  # set to the release you want
curl -fsSL "https://github.com/tinyhumansai/openhuman/releases/download/v${VERSION}/openhuman-core-${VERSION}-${TARGET}.tar.gz" \
  | tar -xz -C /usr/local/bin

openhuman-core --version
```

Then run `openhuman-core serve` under your service manager of choice
(systemd, supervisord, …) with the same environment variables documented
above.
Headless deployments should treat openhuman.update_apply as the safe primitive:
it downloads the release asset, writes it atomically next to the current binary,
and returns. Nothing exits automatically.
openhuman.update_run follows config.update.restart_strategy:
- `self_replace` (default): stage the binary, publish an in-process restart request, and let the running core respawn itself.
- `supervisor`: stage the binary and return `restart_requested=false`. Your outer service manager must restart the process.
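Over the wire, both are ordinary bearer-authenticated JSON-RPC calls. The envelope below is a sketch (method parameters and response fields may differ from the actual schema):

```bash
TOKEN=...   # the value of OPENHUMAN_CORE_TOKEN
# Discover a release:
curl -fsS https://your-host/rpc \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":1,"method":"openhuman.update_check"}'
# Stage it:
curl -fsS https://your-host/rpc \
  -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" \
  -d '{"jsonrpc":"2.0","id":2,"method":"openhuman.update_apply"}'
```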
For long-running Linux services, set:
```toml
[update]
restart_strategy = "supervisor"
rpc_mutations_enabled = false
```

or the equivalent env vars:

```bash
OPENHUMAN_AUTO_UPDATE_RESTART_STRATEGY=supervisor
OPENHUMAN_AUTO_UPDATE_RPC_MUTATIONS_ENABLED=false
```

Recommended systemd stance:
```ini
Restart=always
ExecReload=/bin/kill -HUP $MAINPID
```
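Folded into a full unit, a sketch (paths, user, and the env-file location are placeholders for your layout):

```ini
# /etc/systemd/system/openhuman.service - illustrative sketch
[Unit]
Description=openhuman-core (headless JSON-RPC server)
After=network-online.target
Wants=network-online.target

[Service]
User=openhuman
# core.env holds OPENHUMAN_CORE_TOKEN, BACKEND_URL, ... (see the settings table above)
EnvironmentFile=/etc/openhuman/core.env
ExecStart=/usr/local/bin/openhuman-core serve
ExecReload=/bin/kill -HUP $MAINPID
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```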
Operator flow:

- Call `openhuman.update_check` to discover a release.
- Configure `restart_strategy = "supervisor"` in your `update.toml` (or set `OPENHUMAN_AUTO_UPDATE_RESTART_STRATEGY=supervisor`) so the core stages the new binary without trying to re-exec itself, then call `openhuman.update_apply` or `openhuman.update_run`. `restart_strategy` is a configuration setting, not an RPC parameter.
- Restart the unit explicitly: `systemctl restart openhuman`.
If download or staging fails, the running binary is left in place and no restart is requested. If a staged binary proves bad after restart, roll back by restoring the previous binary from your package manager, image tag, or release artifact and restarting the supervisor again.
The Compose file (docker-compose.yml) maps the core
on :7788, mounts a named volume openhuman-workspace for persistence, and
sets restart: unless-stopped so the core comes back after host reboots.
To upgrade a deployment built from source:

```bash
git pull
docker compose build
docker compose up -d
```

For RPC-exposed production deployments, prefer leaving mutating update RPCs
disabled (`OPENHUMAN_AUTO_UPDATE_RPC_MUTATIONS_ENABLED=false`) and perform
rollouts through your existing image tag or package-management flow instead.
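If you consume the published image rather than building locally, pinned-tag rollouts keep the update path in your hands; a sketch (the tag is whichever release you target):

```yaml
# docker-compose.yml (excerpt)
services:
  openhuman-core:
    image: ghcr.io/tinyhumansai/openhuman-core:v1.2.4   # bump this tag to roll forward or back
```

```bash
docker compose pull openhuman-core
docker compose up -d
```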
To tail the core's logs:

```bash
docker compose logs -f openhuman-core
```

`OPENHUMAN_CORE_TOKEN` is the only thing standing between the public internet
and full RPC access. Rotate it on a schedule and after any suspected leak:
```bash
# 1. Generate a new token and update the server-side .env.
openssl rand -hex 32 > /tmp/new-token
sed -i.bak "s|^OPENHUMAN_CORE_TOKEN=.*|OPENHUMAN_CORE_TOKEN=$(cat /tmp/new-token)|" .env
rm /tmp/new-token .env.bak

# 2. Restart the container so the new value reaches the core process.
docker compose up -d --force-recreate openhuman-core

# 3. Confirm the running container is using the new token (redacted).
docker compose exec openhuman-core /bin/sh -c \
  'echo -n "$OPENHUMAN_CORE_TOKEN" | head -c 8; echo "…"'

# 4. Update every desktop client (Switch mode → re-paste in the picker, or
#    edit OPENHUMAN_CORE_TOKEN in app/.env.local and relaunch). Clients that
#    still hold the old token will get HTTP 401 on the next /rpc call — that
#    is expected, not a regression.
```

For App Platform, do the same in Settings → App-Level Environment
Variables: edit the OPENHUMAN_CORE_TOKEN secret and let App Platform
redeploy. There is no separate token file to delete; the env var is the only
state.
Use Caddy, nginx, or Traefik as a reverse proxy in front of :7788. A minimal
Caddyfile:
```
core.example.com {
    reverse_proxy localhost:7788
}
```
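If you prefer nginx, the equivalent needs websocket upgrade handling for `/ws/dictation` and unbuffered proxying for `/events`; a minimal sketch (TLS directives omitted, hostname illustrative):

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 443 ssl;
    server_name core.example.com;
    # ssl_certificate / ssl_certificate_key ...

    location / {
        proxy_pass http://127.0.0.1:7788;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
        proxy_set_header Host $host;
        proxy_buffering off;      # keep /events streaming
        proxy_read_timeout 1h;    # long-lived connections
    }
}
```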
In the desktop app's environment file (`app/.env.local`):

```bash
# Use the hosted core instead of spawning a local sidecar.
OPENHUMAN_CORE_RUN_MODE=external
OPENHUMAN_CORE_RPC_URL=https://core.example.com/rpc
OPENHUMAN_CORE_TOKEN=<the same token you set on the server>
```

Restart the desktop app. The provider chain in App.tsx will route all RPC
calls to the remote core; nothing else changes.
The repo ships .github/workflows/deploy-smoke.yml,
which runs on every PR that touches the deploy artifacts. It builds the
Docker image, boots it, and polls /health, so a regression in the cloud
deploy path fails CI before it lands on main.
To run the same check locally:
```bash
docker build -t openhuman-core:smoke .
docker run -d --name oh-smoke -p 7788:7788 \
  -e OPENHUMAN_CORE_TOKEN=smoke-test-token \
  openhuman-core:smoke

# Wait ~15s for the binary to come up, then:
curl -fsS http://localhost:7788/health
docker rm -f oh-smoke
```