GitHub Assistant
Install the GitHub Assistant to get:
- PR reviews: multi-dimensional reviews posted on every pull request
- Issue auto-responses: code-grounded answers and triage on new issues
- Auto-labeling: consistent tags for PRs and issues based on code analysis
- Conversational follow-ups: ask questions in PR/issue threads and get contextual replies
Everything runs inside your GitHub Actions runner, so responses are grounded in your repository. You're in full control: keep the defaults, add new rules, or disable checks you don't want. No required third-party services beyond your chosen LLM provider, and you can use any LLM supported by Visor.
Quick Start
Add this GitHub Action to your repository (for example as .github/workflows/visor.yml):
```yaml
name: Visor

on:
  issues:
    types: [opened]
  pull_request:
    types: [opened, synchronize]
  issue_comment:
    types: [created]

jobs:
  visor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: probelabs/visor-action@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
```

That's it. Visor will now automatically review PRs and respond to issues.
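Visor also needs credentials for whichever LLM provider you use. In a workflow these are typically passed as an `env` block on the action step; a minimal sketch for Anthropic (see the LLM Provider Configuration section for the other supported variables):

```yaml
      - uses: probelabs/visor-action@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
        env:
          # Provider API key supplied as a repository secret
          ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```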
Defaults and Overrides
The GitHub Assistant uses Visor's default workflow configuration: defaults/visor.yaml
To extend or customize code review steps, see Intelligent Code Review.
Issue Auto-Response
When a user opens an issue, Visor analyzes your codebase and replies with an answer grounded in actual code. For example, given an issue like:

> I'm trying to set up rate limiting for my API endpoints but can't find the configuration options.

Visor might respond:

Rate limiting is configured in config/api.yaml:

```yaml
rate_limit:
  enabled: true
  requests_per_minute: 100
```

See src/middleware/rate_limiter.go:45 for implementation details.
PR Reviews
Multi-dimensional code review covering security, performance, architecture, and code quality - automatically posted to your PRs:
Security Review
- JWT validation properly implemented
- Password hashing uses bcrypt with appropriate cost
- No hardcoded secrets detected
Performance Review
- Consider caching user lookups (src/auth/user.go:78)
For detailed review configuration, see Intelligent Code Review.
Auto-Labels
Automatically categorize PRs and issues based on code analysis:
Auto-detected:
- Modifies: src/db/pool.go, src/db/connection.go
- Impact: High (core infrastructure)
- Reviewers suggested: @db-team
Conversational Follow-ups
Mention @visor-bot in any comment to ask questions or get clarification:
@visor-bot can you explain why you suggested caching for user lookups?
The GetUser() function is called 3 times per request:
- middleware.go:45 - auth check
- handler.go:78 - permission check
- response.go:23 - user data serialization
Each call queries the database. A request-scoped cache would reduce DB calls from 3 to 1.
Example: src/cache/request_cache.go
GitHub Action Options
```yaml
- uses: probelabs/visor-action@v1
  with:
    # Required
    github-token: ${{ secrets.GITHUB_TOKEN }}

    # Optional - GitHub App auth (better rate limits/permissions)
    app-id: ${{ secrets.PROBE_APP_ID }}
    private-key: ${{ secrets.PROBE_APP_PRIVATE_KEY }}
    installation-id: ${{ secrets.PROBE_APP_INSTALLATION_ID }}

    # Optional - custom config file (default: visor.yaml)
    config-file: .visor.yaml

    # Optional - AI provider (default: anthropic)
    ai-provider: anthropic

    # Optional - fail on critical/error issues (default: true)
    fail-on-issues: true

    # Optional - max parallel steps (default: 4)
    max-parallelism: 4
```

LLM Provider Configuration
```bash
# Anthropic (Claude)
export ANTHROPIC_API_KEY=your-api-key

# Google (Gemini)
export GOOGLE_API_KEY=your-api-key

# OpenAI (GPT-4)
export OPENAI_API_KEY=your-api-key

# Azure OpenAI
export AZURE_OPENAI_API_KEY=your-api-key
export AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com

# AWS Bedrock
export AWS_ACCESS_KEY_ID=your-access-key
export AWS_SECRET_ACCESS_KEY=your-secret-key
export AWS_REGION=us-east-1

# Self-hosted (OpenAI-compatible)
export OPENAI_API_KEY=your-api-key
export OPENAI_BASE_URL=http://localhost:8080/v1

# GitHub Actions example (pick one provider)
# env:
#   ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }}
```

Next Steps
- Intelligent Code Review - Local CLI, agentic self-review, visor.yaml configuration
- Chat with Code - Interactive codebase exploration
- Visor Product Page - Full product overview