Open-source context retrieval layer for AI agents
Local persistent memory store for LLM applications, including Claude Desktop, GitHub Copilot, Codex, Antigravity, etc.
Semantica 🧠 — A framework for building semantic layers, context graphs, and decision intelligence systems with explainability and provenance.
Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes.
Grov automatically captures context from your private AI sessions and syncs it to a shared team memory. It auto-injects relevant memories across developers and future sessions to save tokens and time spent on tasks.
Route inference across LLM providers. Track cost per request.
Distributed data mesh for real-time access, migration, and replication across diverse databases — built for AI, security, and scale.
A Rust runtime that unifies relational tables, graph relationships, and vector embeddings in a single tensor-based storage layer with distributed consensus and semantic search
NPU-powered on-device AI mobile applications using Melange
Stop paying for AI APIs during development. LocalCloud runs everything locally: GPT-level models, databases, all free.
A curated list of awesome tools, frameworks, platforms, and resources for building scalable and efficient AI infrastructure, including distributed training, model serving, MLOps, and deployment.
CX Linux — AI-powered Linux OS. Natural language system administration for Ubuntu & Debian. The AI layer for Linux infrastructure.
Predictive memory layer for AI agents. MongoDB + Qdrant + Neo4j with multi-tier caching, custom schema support & GraphQL. 91% Stanford STARK accuracy, <100ms on-device retrieval
AI Infrastructure Engineer Learning Track - Production ML infrastructure curriculum (2-4 years experience)
TME: Structured memory engine for LLM agents to plan, rollback, and reason across multi-step tasks.
GPU-aware inference mesh for large-scale AI serving
Production-ready AI infrastructure: RAG with smart reindexing, persistent memory, browser automation, and MCP integration. Stop rebuilding tools for every AI project.
Zero-code LLM security & observability proxy. Real-time prompt injection detection, PII scanning, and cost control for OpenAI-compatible APIs. Built in Rust.
Kubernetes operator for GPU-accelerated LLM inference - air-gapped, edge-native, production-ready
ARF is an agentic reliability intelligence platform that separates decision intelligence (OSS) from governed execution (Enterprise), enabling autonomous operations with deterministic safety guarantees.