Open-source ground truth for AI. A portable, vendor-neutral way to ground any LLM in your team's private truth — with a built-in signal for when that grounding runs out.
The project spans six repos:

- `spec` — the protocol RFC + v0.1 specification
- `evals` — open eval suite measuring whether the protocol actually changes model behavior. Tier 1 = methodology + raw runs published; Tier 2 = aggregate numbers (gated)
- `reference-sdk` — Python + TypeScript reference implementations
- `registry` — community-contributed example factbooks (payments, ML pipeline, frontend conventions)
- `factbook` — the org's own operational factbook, dogfooded as a self-hosted reference example
- `factlet.ai` — source for the factlet.ai website
The protocol defines five core concepts:

- Factlet — one atomic truth about your private information (a decision, a constraint, an anti-pattern)
- FactMap — the structured collection of factlets covering one body of work
- Factbook — a packaged FactMap, versioned in git, portable across implementations
- FactSignal — how strong the grounding is at any point, measured in bars (0-5)
- Low-FactSignal warning — fires when a model is about to answer in a zone with no relevant factlets
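To make this vocabulary concrete, here is a minimal TypeScript sketch of how the concepts could map onto types. The field names are illustrative assumptions, not the normative schema; the `spec` repo defines the actual v0.1 format.

```typescript
// Illustrative sketch only: field names are assumptions, not the
// normative v0.1 schema.

// Factlet: one atomic truth (a decision, a constraint, an anti-pattern).
interface Factlet {
  id: string;
  statement: string;
  kind: "decision" | "constraint" | "anti-pattern";
  source?: string; // e.g. a link to the ADR or PR that established it
}

// FactMap: the structured collection of factlets for one body of work.
interface FactMap {
  scope: string; // e.g. "payments-service"
  factlets: Factlet[];
}

// Factbook: a packaged, git-versioned FactMap, portable across tools.
interface Factbook {
  version: string; // e.g. "0.1.0"
  map: FactMap;
}

// FactSignal: grounding strength at any point, measured in bars.
type FactSignal = 0 | 1 | 2 | 3 | 4 | 5;
```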
The protocol is testable in any LLM today via four copy-paste prompts: (1) build a starter Factbook for your project, (2) use it to answer a question with citations, (3) score FactSignal coverage, and (4) run an A/B ROI test (with vs. without the Factbook). See factlet.ai/getting-started.
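As a sketch of what the FactSignal-scoring step could look like in code, the function below maps naive keyword overlap between a question and the factlets onto 0-5 bars and fires the low-FactSignal warning. It reuses the illustrative types above; the overlap heuristic and the one-bar threshold are assumptions, and a real implementation would use retrieval rather than keyword matching.

```typescript
// Reuses the illustrative Factlet, Factbook, and FactSignal types above.
// The keyword-overlap heuristic and the <= 1 bar threshold are
// assumptions for illustration, not the normative scoring.

function factSignal(question: string, factlets: Factlet[]): FactSignal {
  const terms = new Set(
    question.toLowerCase().split(/\W+/).filter((w) => w.length > 3)
  );
  const relevant = factlets.filter((f) =>
    f.statement.toLowerCase().split(/\W+/).some((w) => terms.has(w))
  );
  return Math.min(5, relevant.length) as FactSignal; // 0 = no grounding
}

// Fire the low-FactSignal warning before answering in an ungrounded zone.
function checkGrounding(question: string, book: Factbook): FactSignal {
  const bars = factSignal(question, book.map.factlets);
  if (bars <= 1) {
    console.warn(`Low FactSignal (${bars}/5): answer may be ungrounded.`);
  }
  return bars;
}
```

An implementation could run a check like this before every answer and surface the bars alongside the citations, which is the behavior the low-FactSignal warning describes.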
Kernora's Nora is the maintained reference implementation. Other tools — Cursor, Claude Code, Continue.dev, Aider, Goose, OpenCode — could implement the protocol and read the same factbooks. The protocol exists with or without any one implementation.
v0.1 draft — the spec is open for RFC. v0.2 ships within 90 days, incorporating community feedback. The protocol working group, not Kernora, owns the spec going forward.
See each repo's CONTRIBUTING.md. Spec discussions happen in `spec/discussions`; RFC PRs land in `spec/rfcs`.
Spec, registry, and reference SDK are MIT-licensed.