hai

One Go codebase, multiple binaries. Each binary reads OpenAPI specs at build time and turns them into a typed CLI + MCP server. You get commands, flags, help text, and auth handling for free. No runtime dependencies.

The binaries

| Binary | What it talks to | Config |
| --- | --- | --- |
| hai-compute | Portal (sandboxes, agents) | configs/compute.yaml |
| hai-social | X, LinkedIn, Instagram, Web | configs/social.yaml |
| hai-renderer | PDF rendering, image processing | configs/renderer.yaml |
| hai-documents | Document indexing and search | configs/documents.yaml |

Each config is just a YAML file pointing at OpenAPI spec URLs. Adding a new service is adding a new YAML file. That's it.
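As a purely illustrative sketch of that idea (the field names here are hypothetical — copy an existing configs/*.yaml for the real schema):

```yaml
# Hypothetical sketch only -- field names are illustrative, not the actual schema.
binary: hai-social
auth_env: HEDWIGAI_AUTH_TOKEN
specs:
  - name: x
    url: https://api.example.com/x/openapi.json   # placeholder spec URL
  - name: linkedin
    url: https://api.example.com/linkedin/openapi.json
```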

Building

Pick a config, run make:

make build CONFIG_FILE=configs/social.yaml

This fetches the OpenAPI specs, generates Go code from them, and compiles a single static binary. You now have ./hai-social in your directory.

Cross-compile for all platforms:

make build-one CONFIG_FILE=configs/social.yaml

Build everything:

make build-all

Using the CLI

Set your token and go:

export HEDWIGAI_AUTH_TOKEN="your-token"
./hai-social x get-x-profile --username karpathy
./hai-social web search --query "transformer architecture"
./hai-compute portal health-check

Every command has --help. The flag names, types, and descriptions all come straight from the OpenAPI specs.

Using as MCP server

Three ways to run it. Pick the one that matches your setup.

1. Local stdio (Claude Code, local agents)

The simplest option. Add to .mcp.json:

{
  "mcpServers": {
    "hai-social": {
      "command": "./hai-social",
      "args": ["mcp", "server", "--stdio"],
      "env": {
        "HEDWIGAI_AUTH_TOKEN": "your-token"
      }
    }
  }
}

Claude Code talks to the binary over stdin/stdout. Tools show up immediately.
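Under the hood, the stdio transport is newline-delimited JSON-RPC 2.0. A client opens the session with an initialize request along these lines (abbreviated):

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "claude-code", "version": "1.0.0"}}}
```

After the handshake, the client discovers and invokes tools with `tools/list` and `tools/call` requests using the same framing.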

2. HTTP server (Claude Desktop, remote clients)

./hai-social mcp server --port 8080

Clients connect to:

  • POST /mcp - streamable HTTP (Claude Code, newer clients)
  • GET /sse + POST /message - legacy SSE (Claude Desktop)

Auth goes in the Authorization header as a Bearer token. OAuth discovery is at /.well-known/oauth-protected-resource.

3. Multi-config server (all binaries, one process)

This is what runs at mcp.hedwigai.com. One server, all configs:

./hai-compute mcp server --config-dir configs/ --port 8080

Each config gets its own prefix:

  • /compute/mcp - portal tools
  • /social/mcp - X, LinkedIn, Instagram, Web tools
  • /renderer/mcp - PDF and image tools
  • /documents/mcp - document tools

Hit GET / for a JSON index of all available servers and tool counts.
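The exact fields of that index are the server's to define; the shape is something along these lines (illustrative, not the actual response):

```json
{
  "servers": [
    {"prefix": "/social/mcp", "tools": 24},
    {"prefix": "/compute/mcp", "tools": 12}
  ]
}
```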

Listing tools

See what you've got:

./hai-social mcp list
x            x_get-x-profile                     Get user profile
x            x_get-x-posts                       Get user posts
x            x_search-x                          Search posts
linkedin     linkedin_get-linked-in-member-profile Get member profile
...

Connecting to hosted MCP

HedwigAI runs the multi-config server at mcp.hedwigai.com. No local binary needed. Point your MCP client at the right prefix:

| Service | Endpoint |
| --- | --- |
| Compute | https://mcp.hedwigai.com/compute/mcp |
| Social | https://mcp.hedwigai.com/social/mcp |
| Renderer | https://mcp.hedwigai.com/renderer/mcp |
| Documents | https://mcp.hedwigai.com/documents/mcp |

Auth is a Bearer token in the Authorization header. The server supports OAuth discovery (RFC 9728), so compliant MCP clients can negotiate auth automatically via identity.hedwigai.com.
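Per RFC 9728, the discovery document is JSON with at least a `resource` identifier and a list of `authorization_servers`; for this deployment it would look roughly like:

```json
{
  "resource": "https://mcp.hedwigai.com/social/mcp",
  "authorization_servers": ["https://identity.hedwigai.com"]
}
```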

OneCLI integration

OneCLI is an open-source credential vault for AI agents. Instead of scattering JWTs across env vars and config files, you store them once in OneCLI and it injects them transparently into outbound requests.

Setup

git clone https://github.com/onecli/onecli.git && cd onecli
docker compose -f docker/docker-compose.yml up

Dashboard at localhost:10254. Gateway at localhost:10255.

Wire it up

  1. Create an agent in the OneCLI dashboard
  2. Store your HEDWIGAI_AUTH_TOKEN, matched to host *.hedwigai.com
  3. Point the CLI at the gateway:
export HTTPS_PROXY=http://localhost:10255
./hai-social x get-x-posts --username karpathy

The gateway intercepts the request, swaps in the real token, and forwards it to HedwigAI. The CLI never sees the actual credential.

For MCP servers, same idea:

{
  "mcpServers": {
    "hai-social": {
      "command": "./hai-social",
      "args": ["mcp", "server", "--stdio"],
      "env": {
        "HTTPS_PROXY": "http://localhost:10255"
      }
    }
  }
}

No HEDWIGAI_AUTH_TOKEN in the config. OneCLI handles it.
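The "no-op when unset" behavior is standard proxy-from-environment handling. Assuming the CLI uses Go's default net/http transport (an assumption about the implementation, not confirmed here), the routing decision can be checked directly:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

func init() {
	// Simulate the OneCLI setup: route outbound HTTPS through the gateway.
	os.Setenv("HTTPS_PROXY", "http://localhost:10255")
}

// proxyFor reports which proxy Go's default transport would use for url,
// based on the HTTPS_PROXY / NO_PROXY environment variables.
func proxyFor(url string) string {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		panic(err)
	}
	proxy, err := http.ProxyFromEnvironment(req)
	if err != nil {
		panic(err)
	}
	if proxy == nil {
		return "" // no proxy configured: requests go direct
	}
	return proxy.String()
}

func main() {
	fmt.Println(proxyFor("https://api.hedwigai.com/v1/health")) // http://localhost:10255
}
```

With HTTPS_PROXY unset, `http.ProxyFromEnvironment` returns nil and requests go direct, which is why the integration is optional.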

Why bother?

  • Tokens stay in one encrypted vault, not in dotfiles and env vars everywhere
  • Rotate once, all agents pick it up
  • Audit trail of which agent called what
  • Without OneCLI everything works exactly the same (proxy support is a no-op when HTTPS_PROXY is unset)

Releasing

make build-one CONFIG_FILE=configs/social.yaml
make release-one CONFIG_FILE=configs/social.yaml

This cross-compiles for linux/darwin/windows (amd64+arm64), generates checksums, creates an install script, and pushes to GitHub Releases.

How it works

configs/*.yaml          - what to build (binary name, auth, OpenAPI spec URLs)
tools/generate/         - reads specs at build time, writes generated/operations.go
internal/spec/          - fetches and parses OpenAPI specs
internal/command/       - Cobra CLI builder + HTTP executor
internal/mcpbridge/     - MCP server (stdio, HTTP, multi-config)
generated/operations.go - auto-generated, don't touch

The basic idea: OpenAPI spec goes in, CLI commands and MCP tools come out. Everything interesting happens at build time. The binary ships with no spec files, no config, no runtime dependencies. Just commands.
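To make "spec in, commands out" concrete, here is a toy sketch — not the project's actual generator in tools/generate/ — that pulls operationId and parameter names out of an OpenAPI fragment, the raw material a CLI builder like Cobra would turn into commands and flags:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// spec is a minimal slice of an OpenAPI document -- just enough for the sketch.
type spec struct {
	Paths map[string]map[string]struct {
		OperationID string `json:"operationId"`
		Parameters  []struct {
			Name string `json:"name"`
			In   string `json:"in"`
		} `json:"parameters"`
	} `json:"paths"`
}

// commandsFrom turns each operation into a "command --flag ..." usage string,
// the shape a generator would hand to a CLI framework.
func commandsFrom(raw []byte) []string {
	var s spec
	if err := json.Unmarshal(raw, &s); err != nil {
		panic(err)
	}
	var out []string
	for _, methods := range s.Paths {
		for _, op := range methods {
			flags := make([]string, 0, len(op.Parameters))
			for _, p := range op.Parameters {
				flags = append(flags, "--"+p.Name)
			}
			out = append(out, strings.TrimSpace(op.OperationID+" "+strings.Join(flags, " ")))
		}
	}
	return out
}

func main() {
	doc := []byte(`{"paths":{"/x/profile":{"get":{
		"operationId":"get-x-profile",
		"parameters":[{"name":"username","in":"query"}]}}}}`)
	fmt.Println(commandsFrom(doc)[0]) // get-x-profile --username
}
```

The real generator does far more (types, auth, help text, MCP tool schemas), but the pipeline is the same: everything it needs is already in the spec.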
