Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
This repository contains Cursor Security Rules designed to improve the security of both development workflows and AI agent usage within the Cursor environment. These rules aim to enforce safe coding practices, control sensitive operations, and reduce risk in AI-assisted development.
A plugin-based gateway that orchestrates other MCPs and lets developers build enterprise-grade agents on top of it.
A native policy enforcement layer for AI coding agents. Built on OPA/Rego.
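To illustrate the kind of check a policy layer like this performs (a minimal sketch only; the policy path and action fields are hypothetical and not this project's API, though OPA's `/v1/data` REST endpoint is standard):

```python
# Minimal sketch: ask a local OPA server whether an agent action is allowed.
# The policy path ("agent/authz/allow") and action fields are illustrative assumptions.
import requests

def is_action_allowed(action: dict, opa_url: str = "http://localhost:8181") -> bool:
    resp = requests.post(
        f"{opa_url}/v1/data/agent/authz/allow",
        json={"input": action},
        timeout=5,
    )
    resp.raise_for_status()
    # OPA returns {"result": true/false} when the rule is defined.
    return resp.json().get("result", False)

if __name__ == "__main__":
    action = {"tool": "shell", "command": "rm -rf /tmp/build", "cwd": "/workspace"}
    print("allowed" if is_action_allowed(action) else "denied")
```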
Build Secure and Compliant AI agents and MCP Servers. YC W23
AI-first security scanner with 74+ analyzers, 180+ AI agent security rules, and intelligent false-positive reduction. Supports all languages. CVE detection for React2Shell and the mcp-remote RCE.
See what your AI agents can access. Scan MCP configs for exposed secrets, shadow APIs, and AI models. Generate AI-BOMs for compliance.
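A rough idea of what scanning an MCP config for exposed secrets involves (a hypothetical sketch; the file layout is the common `mcpServers` format used by Claude Desktop, and the secret patterns are illustrative assumptions, not this tool's implementation):

```python
# Hypothetical sketch: walk an MCP client config (e.g. claude_desktop_config.json)
# and flag environment values that look like hard-coded credentials.
import json
import re
import sys

SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),   # OpenAI-style keys
    re.compile(r"ghp_[A-Za-z0-9]{36}"),   # GitHub personal access tokens
    re.compile(r"AKIA[0-9A-Z]{16}"),      # AWS access key IDs
]

def scan_mcp_config(path: str) -> list[str]:
    findings = []
    with open(path) as f:
        config = json.load(f)
    for server, spec in config.get("mcpServers", {}).items():
        for key, value in spec.get("env", {}).items():
            if any(p.search(str(value)) for p in SECRET_PATTERNS):
                findings.append(f"{server}: env var {key} looks like a hard-coded secret")
    return findings

if __name__ == "__main__":
    for finding in scan_mcp_config(sys.argv[1]):
        print(finding)
```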
Scan A2A agents for potential threats and security issues
Agent orchestration & security template featuring MCP tool building, agent2agent workflows, mechanistic interpretability on sleeper agents, and agent integration via DLL injection and CLI wrappers.
Build your own Swarm Detection & Response (SDR) platform and OpenClaw security infrastructure with Clawdstrike, and become your own cyber industry.
AIM - The open-source non-human identity (NHI) platform for AI agents. Cryptographic identity, governance, and access control.
Open-source firewall for AI agents. Policy engine that controls what OpenClaw, Claude Code, Cursor, Codex, and any AI tool can do on your machine.
Local open-source dev tool to debug, secure, and evaluate LLM agents. Provides static analysis, dynamic security checks, and runtime monitoring - integrates with Cursor and Claude Code.
Runtime security proxy for MCP: lockfile enforcement, drift detection, artifact pinning, Sigstore/Ed25519 signing, CEL policy, OpenTelemetry tracing. Works with Claude Desktop, LangChain, AutoGen, CrewAI.
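To make the lockfile/drift idea concrete (a hedged sketch with an assumed lockfile format, not this proxy's actual scheme), drift detection can reduce to hashing the tool definitions a server advertises and comparing them to pinned digests:

```python
# Hedged sketch: detect drift by hashing each advertised tool definition and
# comparing it to hashes pinned in a lockfile. The lockfile format
# ({"tools": {name: sha256}}) is an assumption for illustration only.
import hashlib
import json

def tool_digest(tool: dict) -> str:
    # Canonical JSON so the same definition always hashes the same way.
    canonical = json.dumps(tool, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def detect_drift(advertised_tools: list[dict], lockfile_path: str) -> list[str]:
    with open(lockfile_path) as f:
        pinned = json.load(f)["tools"]
    return [t["name"] for t in advertised_tools if pinned.get(t["name"]) != tool_digest(t)]
```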
Secure credential management for AI agents
Define what your agent can't do. Because if it gets compromised, those limits are all you've got.
The ultimate OWASP MCP Top 10 security checklist and pentesting framework for Model Context Protocol (MCP), AI agents, and LLM-powered systems.
🛡️ Community-built integrations, SDKs, and tools for APort - the neutral trust rail for AI agents. Join Hacktoberfest 2025!
🚀 Streamline your Next.js development with practical rules and tested patterns for efficient coding and minimal bugs.
The missing safety layer for AI Agents. Adaptive High-Friction Guardrails (Time-locks, Biometrics) for critical operations to prevent catastrophic errors.