Detect the 7 ways AI chatbots silently fail customers — from existing conversation logs, using tools your ML team doesn't need to install.
Your chatbot dashboard says "95% of conversations resolved." The real number is usually 60–70%. The gap is customers who gave up, got stuck in loops, asked for a human and were refused, or silently walked away. Your chatbot records all of them as "resolved."
chatbot-auditor reads your conversation logs and tells you the truth.
- Why this exists
- The 7 failure modes
- Quick start
- Features
- Architecture
- Where this fits
- Installation
- Documentation
- Contributing
- License
## Why this exists
- 75% of consumers are frustrated by AI customer support
- 56% of unhappy customers leave without complaining
- 88% will not return after a negative chatbot interaction
- Chatbot platforms (Intercom, Zendesk, Drift) grade their own homework — their dashboards are designed to make the chatbot look good
- Air Canada was sued because its chatbot promised a non-existent refund policy
- DPD's chatbot swore at a customer and wrote a poem about how terrible the company was
- Klarna reversed course on its AI-first support push (which it said had replaced the work of 700 agents) after customer satisfaction dropped
Companies don't need a better chatbot. They need to know when their current chatbot is failing. That's what this library does.
## The 7 failure modes
| # | Mode | What it catches | Detector |
|---|---|---|---|
| 1 | Death Loop | Bot gives the same answer 3+ times | DeathLoopDetector |
| 2 | Silent Churn | Customer left without saying anything | SilentChurnDetector |
| 3 | Escalation Burial | Bot refused to transfer to a human | EscalationBurialDetector |
| 4 | Sentiment Collapse | Customer got frustrated, bot didn't notice | SentimentCollapseDetector |
| 5 | Confident Lies | Bot promised something outside policy | ConfidentLiesDetector |
| 6 | Brand Damage | Bot said something embarrassing | BrandDamageDetector |
| 7 | Confident Misinformation | Bot stated wrong facts | ConfidentMisinformationDetector |
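To make the table concrete, here is a self-contained sketch of the kind of check a death-loop detector might run. This is not the library's implementation; the `difflib` similarity measure, the 0.9 threshold, and the function name are all illustrative assumptions:

```python
from difflib import SequenceMatcher

def detect_death_loop(bot_messages, threshold=0.9, min_repeats=3):
    """Flag a run of `min_repeats` consecutive bot replies that are
    near-identical under a simple string-similarity measure."""
    run = 1
    for prev, curr in zip(bot_messages, bot_messages[1:]):
        ratio = SequenceMatcher(
            None, prev.lower().strip(), curr.lower().strip()
        ).ratio()
        run = run + 1 if ratio >= threshold else 1
        if run >= min_repeats:
            return True
    return False

replies = ["Please check our FAQ."] * 3
print(detect_death_loop(replies))  # True
```

The real detectors may use richer similarity backends (see Features below), but the shape of the check is this simple: a sliding comparison over consecutive bot turns.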
## Quick start

```bash
pip install chatbot-auditor
```

```python
from chatbot_auditor import audit, Conversation, Message, Role

conv = Conversation(
    id="demo",
    messages=[
        Message(role=Role.USER, content="I need a refund for my order"),
        Message(role=Role.BOT, content="Please check our FAQ at example.com/faq."),
        Message(role=Role.USER, content="I already did. Can someone process it?"),
        Message(role=Role.BOT, content="Please check our FAQ at example.com/faq."),
        Message(role=Role.USER, content="this is useless"),
        Message(role=Role.BOT, content="Please check our FAQ at example.com/faq."),
    ],
)

for d in audit([conv]):
    print(f"[{d.severity.value.upper()}] {d.detector}: {d.explanation}")
```

Output:

```
[MEDIUM] death_loop: Bot gave 3 consecutive similar responses ...
[HIGH] silent_churn: Conversation of 6 messages ended without ...
```

Or from the command line:

```bash
chatbot-audit analyze conversations.json --format markdown --output audit.md
```

## Features

| Category | What you get |
|---|---|
| 7 detectors | Full framework for chatbot-specific failures. Works out of the box. |
| 4 adapters | JSON, CSV, Intercom, Zendesk. Point at any source, same pipeline. |
| Zero-dep defaults | Stdlib-only detection for the core five detectors. No API keys required. |
| Pluggable backends | Optional sentence-transformers semantic similarity, LLM moderation, etc. |
| Report generator | Markdown and self-contained HTML. Email-safe, Slack-compatible, XSS-escaped. |
| FastAPI server | Drop-in HTTP service with bearer-token auth, Docker-ready. |
| CLI | `analyze`, `analyze-intercom`, `analyze-zendesk` with `--format` and `--output`. |
| Knowledge bases | Optional PolicyBase / FactBase to cross-check bot claims. |
| Full type safety | `mypy --strict` passes on every public symbol. |
| Benchmarked | Precision / recall / F1 on a synthetic corpus. Reproducible. |
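The precision / recall / F1 figures mentioned above are the standard definitions. For reference, a minimal computation from true-positive, false-positive, and false-negative counts (this helper is illustrative, not part of the library's API):

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from detection counts:
    tp = correctly flagged, fp = wrongly flagged, fn = missed."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(prf1(8, 2, 2))
```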
## Architecture

Detectors are pure functions over Conversation objects. Adapters feed conversations in; reporters format detections out. Every layer is independently swappable.
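To make "pure functions over Conversation objects" concrete, here is a self-contained sketch of a custom detector. The dataclasses below are stand-ins that only mirror the shapes implied by the Quick start example, and the detector itself is hypothetical, not one of the library's seven:

```python
from dataclasses import dataclass
from enum import Enum

class Role(Enum):
    USER = "user"
    BOT = "bot"

@dataclass
class Message:
    role: Role
    content: str

@dataclass
class Conversation:
    id: str
    messages: list

@dataclass
class Detection:
    detector: str
    explanation: str

def unanswered_user_detector(conv: Conversation) -> list:
    """Hypothetical detector: flag conversations where the user's
    final message never got a bot reply."""
    if conv.messages and conv.messages[-1].role is Role.USER:
        return [Detection("unanswered_user",
                          f"{conv.id}: last user message has no reply")]
    return []

conv = Conversation("demo", [Message(Role.USER, "hello?")])
print(unanswered_user_detector(conv)[0].detector)  # unanswered_user
```

Because a detector is just `Conversation -> list[Detection]` with no I/O, it is trivially testable and composable with the rest of the pipeline.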
```mermaid
flowchart LR
    subgraph Sources
        A[Intercom API]
        B[Zendesk API]
        C[JSON / JSONL files]
        D[CSV / TSV files]
        E[Your custom source]
    end

    A --> F
    B --> F
    C --> F
    D --> F
    E --> F

    subgraph Pipeline
        F[Adapter.fetch] --> G[Conversation]
        G --> H[DetectorRegistry.run]
        H --> I[Detection]
    end

    subgraph Outputs
        I --> J[Python API]
        I --> K[JSON]
        I --> L[Markdown]
        I --> M[HTML]
        I --> N[REST API]
    end

    classDef src fill:#e0f2fe,stroke:#0284c7
    classDef pipe fill:#fef3c7,stroke:#d97706
    classDef out fill:#dcfce7,stroke:#16a34a
    class A,B,C,D,E src
    class F,G,H,I pipe
    class J,K,L,M,N out
```
Full system design in ARCHITECTURE.md.
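A custom source (box E in the diagram) only needs to yield conversations in the shape the pipeline expects. Here is a stdlib-only sketch of a hypothetical JSONL adapter; the field names (`id`, `messages`, `role`, `content`) are assumptions modeled on the Quick start example, not the library's actual schema:

```python
import io
import json

def fetch_jsonl(stream):
    """Yield one conversation dict per JSONL line. Expects each line
    to carry an "id" and a list of {"role", "content"} messages."""
    for line in stream:
        line = line.strip()
        if line:
            record = json.loads(line)
            yield {"id": record["id"], "messages": record.get("messages", [])}

raw = '{"id": "c1", "messages": [{"role": "user", "content": "hi"}]}\n'
convs = list(fetch_jsonl(io.StringIO(raw)))
print(convs[0]["id"])  # c1
```

An adapter like this is the only code you would write to plug a new log source into the same detector pipeline.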
## Where this fits

The AI observability and conversational-quality space is crowded, and rightly so — there are several strong tools depending on who you are and what you're measuring. chatbot-auditor is deliberately narrow in scope:
- **It's built for CX and support leaders, not ML engineers.** The output uses language the VP of Support already thinks in — "silent churn", "escalation burial", "confident lies" — rather than ML metrics like "faithfulness" or "perplexity". That framing is the whole point.
- **It operates on existing conversation logs, not during the LLM call.** You don't need to instrument your chatbot, change the runtime, or add an SDK to your production path. You point the library at logs you already have and it tells you what's wrong.
- **The 7-mode framework is the contribution, not the code.** The detection algorithms are mostly simple statistics. What makes this project different is that each failure mode is named, defined, and detectable with clear false-positive characteristics — a vocabulary teams can use to discuss chatbot quality the way DevOps teams discuss SLIs and SLOs.
- **It runs zero-cost by default.** The five core detectors need no API keys, no model downloads, no cloud accounts — Python's stdlib is enough. Richer backends (embeddings, LLM moderation) are opt-in when the defaults aren't enough.
- **It stops at detection.** There's no chatbot to replace, no agent to train, no dashboard SaaS to sign up for. Detections leave as JSON / Markdown / HTML / HTTP responses, and what your team does with them is up to you.
If you're evaluating LLMs during invocation, tracing prompts, or running model-level evals, the LLM observability tools in the ecosystem will serve you better. If you're coaching human agents or running call-center QA, the voice-QA category is a different product. This library is specifically for "my AI chatbot is in production handling real customers — what's it actually doing, and where is it quietly failing?"
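As one illustration of the "simple statistics, stdlib only" claim, a zero-dependency frustration heuristic of the kind a sentiment check might start from can be this small. The marker list, function name, and scoring rule are purely illustrative assumptions:

```python
FRUSTRATION_MARKERS = ("useless", "ridiculous", "speak to a human", "this is not helping")

def frustration_score(user_messages):
    """Crude zero-dependency signal: the share of user messages that
    contain a known frustration marker (illustrative heuristic only)."""
    if not user_messages:
        return 0.0
    hits = sum(
        any(marker in msg.lower() for marker in FRUSTRATION_MARKERS)
        for msg in user_messages
    )
    return hits / len(user_messages)

print(frustration_score(["I need a refund", "this is useless"]))  # 0.5
```

Heuristics like this trade recall for zero cost and total transparency; when that trade stops being worth it, the opt-in LLM and embedding backends take over.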
## Installation

```bash
# Minimum — just the core library and CLI
pip install chatbot-auditor

# With API adapters
pip install "chatbot-auditor[intercom,zendesk]"

# With semantic similarity for paraphrased loop detection
pip install "chatbot-auditor[llm]"

# With the HTTP server
pip install "chatbot-auditor[server]"

# Everything
pip install "chatbot-auditor[intercom,zendesk,llm,server]"
```

Verify:

```bash
chatbot-audit version
```

## Documentation

- Full docs site — tutorials, reference, guides
- Getting started — run your first audit in under a minute
- The 7 failure modes — what each detector catches and why
- Audit Intercom data — end-to-end with real data
- Write a custom detector — add your own failure modes
- Self-host the HTTP server — Docker, auth, deployment
- LLM & embedding backends — swap in richer scorers
- API reference — auto-generated from docstrings
- Architecture — system design, extension points, design decisions
Runnable scripts in examples/:
- `01_audit_json_file.py` — smallest end-to-end audit
- `02_audit_csv_file.py` — CSV export from any platform
- `03_custom_detector.py` — write your own failure mode
- `04_knowledge_bases.py` — `PolicyBase` + `FactBase`
- `05_embeddings_backend.py` — semantic similarity for high-paraphrase loops
## Contributing

Issues and pull requests welcome. See CONTRIBUTING.md for dev setup, coding standards, and how to add a detector.
- Report a bug → GitHub issues
- Propose a feature → same, pick the feature-request template
- Report a security issue → see SECURITY.md — do not open a public issue
- Get help / ask a question → SUPPORT.md
```bash
git clone https://github.com/HemantBK/chatbot-auditor.git
cd chatbot-auditor
uv sync --all-extras
uv run pytest          # 203 tests
uv run mypy            # strict type check
uv run ruff check .    # lint
uv run ruff format .   # format
uv run mkdocs serve    # live docs
```

## License

Apache License 2.0 — see LICENSE and NOTICE.
Original author: BK. If you use, fork, or build on this project, preserve the NOTICE file as required by the license. The copyright header Copyright 2026 BK must remain intact in every derivative source file.
Framework and failure-mode research grounded in public incident reports: Air Canada (2024), DPD (2024), Klarna (2025), Cursor (2024), and industry analyses from Qualtrics, Gartner, and others cited in the failure modes docs.