Block rogue AI agents in application memory before they burn your budget. In-process circuit breaker. No proxy. No network roundtrip. Patent pending.
AeneasSoft is an open-source AI agent observability and active defense tool. It intercepts every LLM API call your agents make — and blocks the dangerous ones before they leave your process.
Two lines of code. Zero configuration. No proxy.
```python
import agentwatch

agentwatch.init()  # Every LLM call is now monitored. Rogue agents blocked in RAM.
```

| Feature | AeneasSoft | Langfuse | Helicone |
|---|---|---|---|
| Open Source | Yes (MIT) | Yes | Partial |
| In-Process Blocking | Yes (in RAM) | No | No (Proxy) |
| Proxy Required | No | No | Yes (SPOF) |
| Setup | 2 lines | Callbacks + config | base_url change |
| EU AI Act Reports | Yes (Enterprise) | No | No |
| Patent Protected | Yes (USPTO) | No | No |
| Framework Lock-in | None | Partial | None |
```shell
pip install aeneas-agentwatch
```

```python
import agentwatch

agentwatch.init()

# That's it. Make any LLM call:
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}]
)
# Trace automatically captured. Cost tracked. Agent defended.
```

```shell
docker compose -f docker-compose.local.yml up -d
# Open http://localhost:3001/health
```

No account needed. No API key. No cloud dependency.
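To confirm the local stack is up before pointing agents at it, you can poll the health endpoint. A minimal sketch using only the standard library (`wait_for_health` is a hypothetical helper, not part of the SDK):

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str, attempts: int = 10, delay: float = 1.0) -> bool:
    """Poll a health endpoint until it returns HTTP 200, or give up."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # Backend not up yet; retry after a short delay.
        time.sleep(delay)
    return False


# Usage: wait_for_health("http://localhost:3001/health", attempts=30)
```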
The killer feature: block runaway agents in application memory before the request ever leaves your process.
```python
agentwatch.init(
    budget_per_hour=10.0,      # Alert if agent spends > $10/hour
    max_error_rate=0.5,        # Alert if > 50% errors
    max_calls_per_minute=100,  # Detect infinite loops
    block_on_threshold=True    # Actually BLOCK the request (opt-in)
)

# Per-agent budgets:
with agentwatch.agent("ExpensiveBot", budget_per_hour=5.0, block_on_threshold=True):
    result = client.chat.completions.create(...)
# → CircuitBreakerException if budget exceeded. Request never sent.
```

How it works: we patch HTTP transport libraries (httpx, requests, aiohttp) at the lowest level. Before every outgoing request, we check budget/error/loop thresholds. If a threshold is exceeded, we raise `CircuitBreakerException` — the request never leaves RAM. No proxy. No network roundtrip. No single point of failure.
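The threshold logic itself is straightforward to picture. A simplified, self-contained sketch of the pattern — not the library's actual internals; the class and method names here are illustrative:

```python
import time


class CircuitBreakerException(Exception):
    """Raised in-process before a request is sent; nothing leaves RAM."""


class CircuitBreaker:
    """Track spend and call rate in memory; trip before the next request."""

    def __init__(self, budget_per_hour: float, max_calls_per_minute: int):
        self.budget_per_hour = budget_per_hour
        self.max_calls_per_minute = max_calls_per_minute
        self.events = []  # list of (timestamp, cost) per completed call

    def record(self, cost: float) -> None:
        self.events.append((time.monotonic(), cost))

    def check(self) -> None:
        """Call right before sending a request; raises if a threshold is hit."""
        now = time.monotonic()
        hour_spend = sum(c for t, c in self.events if now - t < 3600)
        minute_calls = sum(1 for t, _ in self.events if now - t < 60)
        if hour_spend >= self.budget_per_hour:
            raise CircuitBreakerException(
                f"Hourly budget exceeded: ${hour_spend:.2f} >= ${self.budget_per_hour:.2f}"
            )
        if minute_calls >= self.max_calls_per_minute:
            raise CircuitBreakerException(
                f"Rate limit hit: {minute_calls} calls in the last minute"
            )


breaker = CircuitBreaker(budget_per_hour=5.0, max_calls_per_minute=100)
breaker.record(4.0)
breaker.check()      # under budget: passes
breaker.record(1.5)
# breaker.check() would now raise CircuitBreakerException ($5.50 >= $5.00)
```

The real interceptor runs an equivalent check inside the patched HTTP send path, so blocking applies to any library that goes through those transports.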
```
Your Code
    |
    v
[agentwatch.init()]
    |
    +--→ Layer 1: SDK Patcher (OpenAI, Anthropic)
    |       set_sdk_active(True) → prevents duplicate spans
    |
    +--→ Layer 2: HTTP Interceptor (httpx, requests, aiohttp)
            is_sdk_active()? → skip logging (dedup)
            Active Defense → budget/error/loop check
                |
                v
        AI Provider (OpenAI, Anthropic, Gemini, etc.)
```
Layer 1 wraps SDK methods for rich structured data. Layer 2 catches everything at HTTP level for framework-agnostic coverage. The deduplication flag (thread-local + ContextVar) ensures no duplicate spans.
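The dedup flag can be sketched with Python's `contextvars`, which propagate correctly across both threads and async tasks. A simplified illustration of the mechanism (the helper names mirror the diagram above, but this is not the shipped implementation):

```python
from contextlib import contextmanager
from contextvars import ContextVar

# Set by the SDK-level patcher (Layer 1) so the HTTP-level
# interceptor (Layer 2) knows this request is already being traced.
_sdk_active: ContextVar[bool] = ContextVar("sdk_active", default=False)


def set_sdk_active(value: bool):
    return _sdk_active.set(value)


def is_sdk_active() -> bool:
    return _sdk_active.get()


@contextmanager
def sdk_span():
    """Layer 1 wraps each SDK call in this; Layer 2 checks the flag."""
    token = set_sdk_active(True)
    try:
        yield
    finally:
        _sdk_active.reset(token)


def http_layer_should_log() -> bool:
    # Layer 2: skip logging if Layer 1 already produced a span.
    return not is_sdk_active()


with sdk_span():
    inside = http_layer_should_log()   # False: Layer 1 owns this span
outside = http_layer_should_log()      # True: raw HTTP call, log it
```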
Works automatically with: OpenAI, Anthropic, Gemini, Mistral, Groq, Cohere, Together AI, Fireworks, Azure OpenAI, Ollama — and any provider accessible via HTTP.
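Provider-agnostic coverage falls out of intercepting at the HTTP layer rather than per SDK. A minimal illustration of the monkey-patching idea, using stdlib `urllib` as a stand-in for the real httpx/requests/aiohttp patchers (the blocklist here replaces a real threshold check):

```python
import urllib.request


class CircuitBreakerException(Exception):
    pass


_real_urlopen = urllib.request.urlopen
_blocked_hosts = {"api.openai.com"}  # stand-in for a real budget/loop check


def _guarded_urlopen(url, *args, **kwargs):
    """Runs before every outgoing request; raises instead of sending."""
    target = url if isinstance(url, str) else url.full_url
    if any(host in target for host in _blocked_hosts):
        raise CircuitBreakerException(f"Blocked in-process: {target}")
    return _real_urlopen(url, *args, **kwargs)


urllib.request.urlopen = _guarded_urlopen  # patch applied at init()

try:
    urllib.request.urlopen("https://api.openai.com/v1/chat/completions")
except CircuitBreakerException as exc:
    print(exc)  # request never left the process
```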
```shell
npm install @aeneassoft/sdk-node
```

```javascript
import { init } from '@aeneassoft/sdk-node';

init({ apiKey: 'local' });
```

```shell
docker compose -f docker-compose.local.yml up -d
```

Starts ClickHouse + Backend. No Kafka. No auth. Dashboard at localhost:3001.

```shell
docker compose up -d
```

Starts ClickHouse + Kafka + Backend + Proxy. Configure via `.env`.
We believe transparency builds more trust than hiding behind "Beta" labels.
- Streaming: Non-streaming calls captured with 100% accuracy. Streaming calls capture request + final usage summary (tokens + cost), not individual chunks. Full chunk-level tracing ships Q3 2026.
- Cost calculation: Uses list prices for 20+ models. Batch API, cached tokens, and fine-tuned model rates are not reflected.
- Single-process: Each process needs its own `init()` call. No central proxy config. This is a trade-off of the in-process architecture.
- Monkey-patching: We modify HTTP library internals at runtime. When libraries release major versions, our patchers may need updates. We ship SDK updates within 48 hours of breaking changes.
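List-price cost tracking reduces to a lookup table keyed by model. A simplified sketch with illustrative prices — not the SDK's actual price table; check provider pricing pages for current rates:

```python
# Illustrative per-million-token list prices (USD); real prices change often.
PRICE_TABLE = {
    "gpt-4o":      {"input": 2.50, "output": 10.00},
    "gpt-4o-mini": {"input": 0.15, "output": 0.60},
}


def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """List-price estimate; ignores Batch API, cached-token, and fine-tune rates."""
    prices = PRICE_TABLE[model]
    return (input_tokens * prices["input"] + output_tokens * prices["output"]) / 1_000_000


cost = estimate_cost("gpt-4o", input_tokens=1_000, output_tokens=500)
# 1,000 × $2.50/M + 500 × $10.00/M = $0.0025 + $0.0050 = $0.0075
```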
Automated compliance scoring + RSA-2048 signed PDF reports. Available as an Enterprise feature. Contact us for details.
MIT License — the SDK, interceptor, circuit breaker, and dashboard are all open source.
The method (Dual-Layer Telemetry Interception and Active Defense) is covered by a USPTO provisional patent filing (April 2026).
Join the Aeneas Community on Discord — get help, share your agent setups, and shape the product:
- Discord: discord.gg/3QjFDQmCJ
- Website: aeneassoft.com
- PyPI: aeneas-agentwatch
- npm: @aeneassoft/sdk-node
- Documentation: aeneassoft.com/docs