Programmable memory for AI agents.

Start with a template. Customize when you need to. Never migrate.

uvx synix init my-project

Working memory in 60 seconds

# Create a project from the default template
$ uvx synix init my-project && cd my-project

# Add your API key
$ cp .env.example .env

# Add your source data to ./sources/, then build
$ uvx synix build
$ uvx synix release HEAD --to local
$ uvx synix search "how did the onboarding go" --release local

Episode summaries. Monthly rollups. Core memory document. Full-text search. Every insight traces back to its source.

The problem

Memory is harder than it looks. You won't get it right the first time — nobody does. The question is what happens when you need to change it.

Every agent memory tool gives you one flat bucket. Same storage, same rules, same lifecycle for everything your agent knows. When memory breaks, it breaks silently — contradictions, stale context, hallucinated recall. And when you realize your memory architecture is wrong, you're looking at a migration or starting over.

What Synix does

Start simple

Pick a template, run four commands. You get structured memory with search and full provenance. No pipeline code required to start.

Customize everything

When you need more, open pipeline.py. Change prompts, add layers, swap grouping strategies. Your memory architecture is Python code you control.

Course-correct freely

A/B test your memory architecture. Try topic-based rollups instead of monthly, compare the outputs, keep what works. Every experiment is just a rebuild — only affected layers reprocess. Your memory system grows with your needs.
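As a sketch of what swapping a grouping strategy amounts to (plain illustrative Python; the helper functions and episode fields below are hypothetical, not Synix API), a monthly-to-topic experiment is just a change to the function that buckets episodes:

```python
from collections import defaultdict

episodes = [
    {"id": "e1", "date": "2024-01-05", "topic": "onboarding"},
    {"id": "e2", "date": "2024-01-20", "topic": "billing"},
    {"id": "e3", "date": "2024-02-02", "topic": "onboarding"},
]

def group_by_month(eps):
    # Bucket episodes by the YYYY-MM prefix of their date.
    groups = defaultdict(list)
    for ep in eps:
        groups[ep["date"][:7]].append(ep["id"])
    return dict(groups)

def group_by_topic(eps):
    # Alternative strategy: bucket by topic label instead.
    groups = defaultdict(list)
    for ep in eps:
        groups[ep["topic"]].append(ep["id"])
    return dict(groups)

print(group_by_month(episodes))  # {'2024-01': ['e1', 'e2'], '2024-02': ['e3']}
print(group_by_topic(episodes))  # {'onboarding': ['e1', 'e3'], 'billing': ['e2']}
```

Either function defines the buckets a rollup transform would synthesize over; comparing the two builds side by side is the A/B test.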

How your agent uses it

Synix runs offline — it processes sources into structured memory. Your agent reads the output at inference time. They're decoupled.

Python SDK: project.release("local").search("query") — search memory or load flat context from Python
MCP server: configure as an MCP server for Claude, Cursor, or any MCP-compatible agent
CLI: uvx synix search "query" --release local — pipe into automation
Direct access: search.db (SQLite FTS5) + context.md (flat file for system prompts)
import synix

project = synix.open_project("./my-project")
mem = project.release("local")

# Search memory at inference time
results = mem.search("return policy", limit=5)

# Or load core memory as flat context
context = mem.flat_file("context-doc")
# → inject into your agent's system prompt
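Because search.db is plain SQLite with an FTS5 index, the direct-access path needs no SDK at all. A minimal sketch using Python's built-in sqlite3 (the table and column names here are illustrative stand-ins, not the actual on-disk schema):

```python
import sqlite3

# Build a tiny in-memory stand-in for search.db.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE VIRTUAL TABLE memory USING fts5(artifact_id, layer, content)"
)
conn.executemany(
    "INSERT INTO memory VALUES (?, ?, ?)",
    [
        ("ep-001", "episodes", "Customer asked about the return policy for opened items."),
        ("ro-2024-01", "monthly", "January: several return-policy questions; onboarding went smoothly."),
    ],
)

# Full-text query, filtered to one layer, ranked by FTS5 relevance.
rows = conn.execute(
    "SELECT artifact_id FROM memory WHERE memory MATCH ? AND layer = ? ORDER BY rank",
    ("return policy", "episodes"),
).fetchall()
print(rows)  # [('ep-001',)]
```

Any SQLite client in any language can run the same query, which is the point of the decoupling: the agent side needs nothing from the build side but the file.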

How it works

A pipeline is a directed graph of transforms you define in Python. Sources go in, LLM transforms process them layer by layer, and typed artifacts come out — each tracked with a content-addressed fingerprint.

Sources: conversations, documents, reports, transactions — any data your pipeline processes
Transforms: LLM-backed MapSynthesis (1:1), GroupSynthesis (N:M), ReduceSynthesis (N:1), and FoldSynthesis (sequential), plus Chunk (1:N, no LLM)
Artifacts: immutable, content-addressed, with a full provenance chain back to source
Projections: SynixSearch (FTS5 + optional semantic) and FlatFile (markdown for agent prompts)
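The transform shapes are easiest to see as plain functions over lists. The sketch below only illustrates the dataflow of each shape; the real transforms wrap LLM calls, and none of these helper expressions are Synix API:

```python
from functools import reduce

episodes = ["ep1", "ep2", "ep3"]

# MapSynthesis (1:1): one output per input.
summaries = [f"summary({e})" for e in episodes]

# GroupSynthesis (N:M): inputs bucketed, one output per bucket.
buckets = {"2024-01": episodes[:2], "2024-02": episodes[2:]}
rollups = {month: f"rollup({','.join(eps)})" for month, eps in buckets.items()}

# ReduceSynthesis (N:1): all inputs collapsed into one artifact.
core = f"core({','.join(rollups.values())})"

# FoldSynthesis (sequential): each step sees the running state plus one input.
folded = reduce(lambda state, e: f"fold({state},{e})", episodes, "init")

print(core)    # core(rollup(ep1,ep2),rollup(ep3))
print(folded)  # fold(fold(fold(init,ep1),ep2),ep3)
```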
from synix import FlatFile, Pipeline, SearchSurface, Source, SynixSearch
from synix.ext import CoreSynthesis, EpisodeSummary, MonthlyRollup

pipeline = Pipeline("agent-memory")
pipeline.source_dir = "./sources"
pipeline.llm_config = {
    "provider": "anthropic",
    "model": "claude-haiku-4-5-20251001",
}

transcripts = Source("transcripts")
episodes = EpisodeSummary("episodes", depends_on=[transcripts])
monthly = MonthlyRollup("monthly", depends_on=[episodes])
core = CoreSynthesis("core", depends_on=[monthly])

search = SearchSurface("search", sources=[episodes, monthly, core], modes=["fulltext"])

pipeline.add(transcripts, episodes, monthly, core, search)
pipeline.add(SynixSearch("idx", surface=search))
pipeline.add(FlatFile("context-doc", sources=[core]))

This is the default template's pipeline. Change a prompt — only downstream artifacts rebuild. Add new sources — only new episodes process.

Beyond agent memory

The pipeline primitives are general-purpose. Anything that flows through sources → transforms → artifacts works.

Photo library → life timeline

Photos + EXIF metadata → vision model captions each image → cluster by time and location into events → compress into a searchable life timeline. Swap your captioning model — only captions rebuild.
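The time-clustering step in that sketch is the only part that needs no model: grouping photo timestamps into events by gap is a few lines of plain Python (illustrative; the one-hour threshold is an assumption, not a Synix default):

```python
def cluster_by_gap(timestamps, max_gap=3600):
    # Start a new event whenever consecutive photos are more than
    # max_gap seconds apart.
    events = []
    for t in sorted(timestamps):
        if events and t - events[-1][-1] <= max_gap:
            events[-1].append(t)
        else:
            events.append([t])
    return events

photos = [0, 100, 200, 50_000, 50_030]
print(cluster_by_gap(photos))  # [[0, 100, 200], [50000, 50030]]
```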

Codebase → architectural memory

Git log + PRs + design docs → summarize each into a decision record → cluster by system area → synthesize into an architectural knowledge base. "Why did we switch from REST to gRPC?"

IoT → operational knowledge

Sensor readings + maintenance logs → extract events → correlate across clusters → build evolving equipment health profiles. Every insight traces back to the raw data.

The pattern is the same: raw data → structured knowledge → searchable artifacts, with incremental rebuilds and full provenance. Agent memory is where most people start, but the architecture doesn't care what your sources are.

Where Synix fits

                      Mem0        Letta          Graphiti     LangMem    Synix
Approach              Memory API  Agent-managed  Temporal KG  Taxonomy   Programmable
You define the rules                                                     Yes — in Python
Change architecture   Migration   Migration      Migration    Migration  Incremental rebuild
Provenance                                                               Full chain to source
Schema                Fixed       Fixed          Fixed        Fixed      You define it

When Synix is the right choice: You want to control how memory is structured, not just store things. You need provenance. You expect your memory architecture to evolve.

When something else is better: You just need key-value memory (→ Mem0). You need a knowledge graph (→ Graphiti). You need real-time memory during inference (→ not yet).

What you get

Fingerprint-based caching

Every artifact stores a build fingerprint — inputs, prompt, model config, transform source. Change any component and only affected artifacts rebuild.
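The caching idea can be sketched in a few lines of plain Python (illustrative; the actual fingerprint format and fields are internal to Synix): hash everything that could change an artifact's output, and rebuild only when the digest changes.

```python
import hashlib
import json

def fingerprint(input_hashes, prompt, model_config, transform_source):
    # Any change to inputs, prompt, model config, or transform code
    # yields a different digest, invalidating the cached artifact.
    payload = json.dumps(
        {
            "inputs": sorted(input_hashes),
            "prompt": prompt,
            "model": model_config,
            "transform": transform_source,
        },
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

a = fingerprint(["abc123"], "Summarize.", {"model": "claude-haiku"}, "v1")
b = fingerprint(["abc123"], "Summarize briefly.", {"model": "claude-haiku"}, "v1")
print(a != b)  # True: a prompt change produces a new fingerprint
```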

Full provenance

Every artifact chains back to the sources that produced it. synix lineage shows the full tree. Search results include provenance links.
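Conceptually, provenance is each artifact recording which artifacts it was built from; tracing back to raw sources is a small graph walk. A sketch (not the actual synix lineage implementation, and the artifact names are made up):

```python
# Parent links: artifact -> the artifacts it was built from.
parents = {
    "core": ["monthly-2024-01"],
    "monthly-2024-01": ["ep-001", "ep-002"],
    "ep-001": ["transcript-001.json"],
    "ep-002": ["transcript-002.json"],
}

def lineage(artifact):
    # Depth-first walk from an artifact back to its raw sources.
    builders = parents.get(artifact)
    if builders is None:
        return [artifact]  # a leaf: raw source data
    out = []
    for p in builders:
        out.extend(lineage(p))
    return out

print(lineage("core"))  # ['transcript-001.json', 'transcript-002.json']
```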

Altitude-aware search

Query episode summaries, monthly rollups, or core memory. Full-text and optional semantic/hybrid modes. Filter by layer to control detail level.

Templates

Agent memory, chatbot export synthesis, chunked search, team reports, and more. Pick a template, run four commands, customize later.

Current status

Stable: pipeline definition, build, release, search, provenance, incremental rebuilds, CLI, Python SDK
Early: templates, MCP server for agent integration
Experimental: validation/repair, batch builds (OpenAI Batch API), distributed builds (mesh)
Planned: real-time runtime, agent-driven memory evolution, multi-tenancy

Synix is a working tool used in production for personal and project memory pipelines. Pre-1.0. Solo-maintainer project. On-disk formats may evolve with a compatibility path.

Case study

14 months of conversations. One build.

1,871 conversations across Claude and ChatGPT. Two export formats, one pipeline definition, four layers — from raw chat exports to structured core memory with full provenance.

Template: 01-chatbot-export-synthesis

Architecture direction

Synix is designed around the idea that different kinds of memory need different management — four tiers from execution context (milliseconds) to identity (permanent). Today, Synix manages the experience tier via programmable pipelines. The architecture is designed to expand across all four tiers.

Eventually, agents will program their own memory.

Read the architecture doc →

Get started

Four commands to working memory. Customize when you need to.

uvx synix init my-project