Turn engineering processes into workflows with guardrails for humans and AI. Define steps, validation, approvals, and fallbacks in YAML, then trace every run with OpenTelemetry.
In real engineering teams, we use checklists, reviews, and approvals to reduce mistakes. Visor lets you encode those guardrails as workflows: explicit steps, validated outputs, bounded loops, and safe fallbacks. A minimal sketch follows the list below.
Map discovery, triage, review, and release into minimal, auditable stages.
Schema checks keep AI output structured and reliable.
Require sign-off where it matters, automate the rest.
Timeouts, retries, and failure routes are defined up front.
Limit what agents can call and make every tool invocation auditable.
Trace every step so you can answer “what happened?” with evidence.
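A minimal sketch, built only from constructs that appear in the annotated example further down this page (step names are illustrative; exact timeout/retry field names aren't shown there, so they are omitted here):

steps:
  release-check:
    type: command
    exec: "npm test"
    on_fail: { run: [escalate] }       # failure route declared up front
  release-notes:
    type: ai
    schema: overview                   # output must validate against a schema
    prompt: Summarize what this release changes.
  escalate:
    type: github
    if: "hasMinPermission('MEMBER')"   # gate the action to trusted users
    op: comment.create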
Define once in YAML, run anywhere with full observability
Triggers: PR opened, Slack thread, webhook, schedule.
Steps: AI + MCP + GitHub + HTTP + shell.
Outputs: schema validation + quality gates.
Staged reviews with schema outputs, policy gates, and consistent comments/checks.
Auto-classify Jira/Zendesk tickets, estimate severity, route to the right team, and draft a first response.
Support, PM, QA, and sales can query source-of-truth context and generate escalation packets.
Run release, deploy, and incident workflows from webhooks - not only GitHub events.
Fan out to multiple repos/services, run per-repo analysis, aggregate results, enforce policy gates.
Nightly security audits and periodic health checks with consistent reporting and traceability.
Once you define the guardrails, these primitives make them executable: explicit steps, schemas, routing, and safe failure handling.
One YAML defines steps, routing, schemas, templates, and outputs.
Dependencies, fan-out, fan-in, routing, and bounded loops are first-class.
Schemas + templates produce stable outputs for humans and machines.
AI, GitHub, HTTP, shell, MCP tools, memory, and reusable sub-workflows.
Built-in memory namespaces and isolated run state without external DBs.
OpenTelemetry traces + log correlation (trace_id/span_id).
Validate behavior with fixtures/mocks before rolling workflows out.
AI is a step - not the system.
CI tools are optimized for building, testing, and deploying code. Visor is optimized for governed automation across teams and tools - where you need routing, human-in-loop, validated outputs, and auditable runs.
This example shows composition, observability, state, frontends, safe loops, MCP tools, GitHub automation, and more - all in one file.
extends: [default, ./environments/prod.yaml]
telemetry: { enabled: true }
routing: { max_loops: 6 }
slack: { threads: required }

steps:
  pr-overview:
    type: ai
    on: [pr_opened, pr_updated]
    schema: overview
    prompt: Summarize the PR in 5 bullets.

  lint-fix:
    type: command
    exec: "npm run lint -- --fix"
    on_fail: { run: [lint-remediate] }

  lint-remediate:
    type: claude-code
    depends_on: [lint-fix]
    allowedTools: [Read, Edit]
    maxTurns: 3
    prompt: |
      Fix the lint errors below:
      {{ outputs['lint-fix'].stderr }}

  security-scan:
    type: mcp
    command: npx
    args: ["-y", "@semgrep/mcp"]
    method: scan

  trusted-note:
    type: github
    if: "hasMinPermission('MEMBER')"
    op: comment.create

  nightly-audit:
    type: ai
    schedule: "0 2 * * *"
    schema: code-review
extends layers org defaults, team overrides, and env-specific config.
Native Semgrep, custom tools, and any MCP server - declared and auditable.
hasMinPermission('MEMBER') restricts steps to trusted contributors.
PR events, Slack threads, webhooks, and cron schedules in one workflow.
AI steps return structured JSON that validates against schema: overview.
on_fail triggers Claude Code with linter output - bounded tools, max turns, traceable.
Visor is designed for organizations where different teams own different services - and still need to execute as one system.
Keep defaults in one place; override per team/env via extends (see the sketch below).
Fail fast on violations (fail_if), route failures deterministically, require human approval when needed.
Gate actions (hasMinPermission('MEMBER')) so workflows stay safe.
Retries and loops are capped; "self-healing" is deterministic and observable.
This is how you scale automation without centralizing every decision.
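A sketch of that layering: an org file holds the shared defaults, and a team workflow extends it and adds its own policy gate (the fail_if expression and its critical output field are illustrative):

# org-defaults.yaml - owned centrally
telemetry: { enabled: true }
routing: { max_loops: 6 }

# team workflow - extends the org defaults, adds a gate
extends: [./org-defaults.yaml]
steps:
  security-scan:
    type: mcp
    command: npx
    args: ["-y", "@semgrep/mcp"]
    method: scan
    fail_if: "outputs['security-scan'].critical > 0"   # fail fast on violations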
Visor emits OpenTelemetry traces and correlates logs for every step. You can answer "what happened?" with evidence, not guesswork.
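Turning this on is the telemetry line from the example above. In the sketch below, everything beyond telemetry.enabled is an assumption about a typical OpenTelemetry collector setup:

telemetry:
  enabled: true
  # assumption: an OTLP endpoint for your collector - adjust to your backend
  endpoint: http://localhost:4318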
Visor includes an integration test framework. Write tests in the same YAML as your workflows, run them in CI, and catch regressions before they hit production.
tests:
  pr-overview-returns-schema:
    trigger:
      event: pr_opened
      fixture: ./fixtures/small-pr.json
    mock:
      ai: ./mocks/overview-response.json
    assert:
      - "outputs['pr-overview'].bullets.length === 5"
      - "outputs['pr-overview'].risk != null"

  lint-fix-retries-on-fail:
    trigger:
      event: pr_opened
    mock:
      command: { exitCode: 1, then: 0 }
    assert:
      - "steps['lint-fix'].retries === 1"
      - "steps['lint-fix'].status === 'ok'"
Simulate PR payloads, AI responses, and command outputs without calling real services.
Verify schema shapes, step statuses, retry counts, and routing decisions.
visor test ./workflows/ runs all tests and fails the build on regressions.
Capture real inputs, replay them in tests, and assert the same outputs, as in the sketch below.
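A sketch of that record-and-replay pattern, reusing only the test constructs shown above (the captured fixture and mock paths are hypothetical):

tests:
  replay-captured-pr:
    trigger:
      event: pr_opened
      fixture: ./fixtures/captured/real-pr.json   # a real input, captured earlier
    mock:
      ai: ./mocks/captured/overview.json          # the AI response recorded with it
    assert:
      - "outputs['pr-overview'].bullets.length === 5"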
In your YAML you're already combining multiple "providers" (step types) in a single run.
You can keep AI steps narrow and predictable - most reliability comes from the workflow design.
Clone the repo and run examples/pr-review.yaml locally.
Enable issues + PR workflows with a GitHub App.
Same pipeline, easier debugging.
No. Visor makes the workflow behavior deterministic: explicit steps, constrained tools, validated outputs, bounded loops, and traceability. AI is a controlled step inside that system.
Not necessarily. CI is still great for builds/tests/deploys. Visor complements CI when you need routing, human-in-loop, multi-provider automation, schema outputs, and observable agent workflows.
Workflows declare tools and MCP servers explicitly. AI steps can disable tools entirely or run with allowlists. Tool usage is auditable.
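For instance, building on the allowedTools field from the example above (the empty-allowlist form for disabling tools entirely is an assumption):

steps:
  summarize-only:
    type: ai
    prompt: Summarize the incident timeline.
    allowedTools: []               # assumption: no tools at all for this step
  remediate:
    type: claude-code
    allowedTools: [Read, Edit]     # explicit allowlist, as in the example above
    maxTurns: 3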
Use extends and imports: org defaults live centrally, teams override per environment or repo. Policy gates enforce what must remain true.
Yes - Visor is designed to run on your infrastructure and to support provider/model choice (the workflow defines what you use).
OpenTelemetry tracing + log correlation are built-in (telemetry.enabled). You can inspect step timings, retries, loop iterations, and validation outcomes.
If you're moving toward agent-first development, Visor gives you the control plane: explicit steps, schemas, bounded loops, and full observability - so automation scales without chaos.