Security Verifiers

A composable suite of security and alignment RL environments with executable, verifiable rewards. Built for Prime Intellect's Verifiers framework.

Vision

Security Verifiers demonstrates how executable rewards can advance both security and alignment research. Rather than relying on LLM-as-judge scoring, our environments use real tools (OPA, Semgrep, test suites) to verify agent behavior, producing rewards that are:

  • Executable: Rewards come from running actual security tools
  • Calibrated: Agents are rewarded for well-calibrated confidence
  • Cost-aware: Asymmetric penalties reflect real operational costs (missing malware is far costlier than a false alarm); see the reward sketch after this list
  • Composable: Shared schemas and tools enable transfer across tasks
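
To make the cost-aware, calibrated reward shape concrete, here is a minimal illustrative sketch in the spirit of E1. The function name, weights, and abstention bonus are assumptions chosen for illustration, not the scoring actually implemented in sv_shared or the E1 environment.

```python
# Illustrative sketch only: the function name, weights, and abstention bonus are
# hypothetical and do not reflect the repository's actual reward implementation.

def cost_aware_reward(
    predicted: str,                 # "malicious", "benign", or "abstain"
    confidence: float,              # model-reported confidence in [0, 1]
    label: str,                     # ground-truth label: "malicious" or "benign"
    miss_cost: float = 4.0,         # missing malware is penalized far more...
    false_alarm_cost: float = 1.0,  # ...than raising a false alarm
    abstain_reward: float = 0.25,   # small fixed credit for honest abstention
) -> float:
    """Combine correctness, asymmetric error costs, and calibration into one scalar."""
    if predicted == "abstain":
        return abstain_reward

    correct = predicted == label
    # Brier-style calibration term: near 1 when confidence matches the outcome,
    # near 0 when the model is confidently wrong (or timidly right).
    calibration = 1.0 - (confidence - (1.0 if correct else 0.0)) ** 2

    if correct:
        return calibration
    # Asymmetric penalty: a confident miss on malware costs far more than a false alarm.
    penalty = miss_cost if label == "malicious" else false_alarm_cost
    return -penalty * (1.0 - calibration)
```

Under these hypothetical weights, an overconfident miss (predicted "benign" with confidence 0.9 on a malicious sample) scores about -3.24, while the same mistake on a benign sample scores about -0.81, reflecting the asymmetry described in the list above.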

Environments

| Environment | Type | Task | Status |
| --- | --- | --- | --- |
| E1: network-logs | SingleTurn | Anomaly detection with calibration & abstention | Production |
| E2: config-verification | ToolEnv | Security auditing with OPA/KubeLinter/Semgrep | Production |
| E3: code-vulnerability | ToolEnv | Vulnerability detection and repair | WIP |
| E4: phishing-detection | SingleTurn | Phishing classification with evidence | WIP |
| E5: redteam-attack | MultiTurn | Red team attack scenarios | WIP |
| E6: redteam-defense | MultiTurn | Red team defense scenarios | WIP |
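
As the E2 row above notes, rewards can come from running real scanners rather than a judge model. The snippet below is a rough sketch of that idea using Semgrep's standard CLI; the pass/fail scoring is a hypothetical simplification, not E2's actual reward logic.

```python
# Rough sketch of an "executable reward": run a real scanner (Semgrep) over a
# candidate file and grant reward only if no findings remain. The CLI flags are
# standard Semgrep options; the 0/1 scoring is a hypothetical simplification.
import json
import subprocess


def semgrep_pass_reward(target_path: str) -> float:
    """Return 1.0 if Semgrep reports no findings for target_path, else 0.0."""
    proc = subprocess.run(
        ["semgrep", "scan", "--config", "auto", "--json", target_path],
        capture_output=True,
        text=True,
        check=False,  # don't raise if the scanner exits non-zero
    )
    findings = json.loads(proc.stdout).get("results", [])
    return 1.0 if not findings else 0.0
```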

Quick Start

```bash
# Setup
make setup && source .venv/bin/activate

# Configure API keys
cp .env.example .env  # Edit with your OPENAI_API_KEY
set -a && source .env && set +a

# Run your first evaluation
make eval-e1 MODELS="gpt-5-mini" N=10
```

See docs/getting-started.md for detailed setup instructions.

Evaluation

```bash
# E1: Network log anomaly detection
make eval-e1 MODELS="gpt-5-mini,gpt-4.1-mini" N=100

# E2: Configuration verification (multi-turn with tools)
make eval-e2 MODELS="gpt-5-mini" N=10 INCLUDE_TOOLS=true

# Generate metrics reports
make report-network-logs
make report-config-verification
```

Results are written to outputs/evals/<env>--<model>/<run_id>/.
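
If you want to enumerate runs programmatically, here is a minimal sketch that walks this layout; it assumes only the directory pattern above, not the files inside each run directory.

```python
# Minimal sketch: list evaluation runs using only the documented
# outputs/evals/<env>--<model>/<run_id>/ layout.
from pathlib import Path

for run_dir in sorted(Path("outputs/evals").glob("*--*/*")):
    if run_dir.is_dir():
        env, model = run_dir.parent.name.split("--", 1)
        print(f"{env:30s} {model:20s} {run_dir.name}")
```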

Hub Deployment

Deploy environments to Prime Intellect's Environments Hub:

```bash
make hub-deploy E=network-logs
vf-eval your-org/sv-env-network-logs --model gpt-5-mini --num-examples 10
```

See docs/hub-deployment.md for the complete deployment guide.

Project Structure

```text
security-verifiers/
├── environments/       # E1-E6 environment packages
├── sv_shared/          # Shared parsers, rewards, utilities
├── scripts/            # Evaluation and data building scripts
├── docs/               # Documentation
├── plans/              # Roadmap and productionization plans
└── outputs/            # Evaluation results
```

Documentation

| Document | Description |
| --- | --- |
| Getting Started | Installation and first evaluation |
| Development Guide | Contributing, testing, CI |
| Hub Deployment | Deploy to Prime Intellect Hub |
| Datasets Guide | Dataset access and management |
| Logging Guide | Weave tracing configuration |
| CLAUDE.md | Agent/LLM instructions |

Baselines

Run quick baselines on the public mini sets:

```bash
make baseline-e1 MODEL="gpt-5-mini"
make baseline-e2 MODEL="gpt-5-mini" INCLUDE_TOOLS=true
```

Scoreboards are written to bench/scoreboards/.

Roadmap

See plans/ROADMAP-Q1-2026.md for current development priorities:

  • WP0: Benchmark integrity hardening
  • WP1: Metrics contracts and report generator
  • WP2: Baselines and public mini sets
  • WP3: Canonical RL training runs
  • WP4: Multi-reward RL stability research
  • WP5: SV-Bench v0.1 release

Contributing

See CONTRIBUTING.md for contribution guidelines.

License

MIT License - see LICENSE for details.
