This repository demonstrates a real-world migration of a monolithic application to a microservices architecture using the Strangler Fig Pattern and Change Data Capture (CDC).
The project simulates the transformation of a single Go monolith with a shared PostgreSQL database into independent microservices, each owning an isolated database that is kept in sync with the monolith through a Kafka/Debezium CDC pipeline.
- Source: Go Monolith (Users & Orders) + PostgreSQL
- Streaming Backbone: Kafka + ZooKeeper + Debezium
- Target Microservice 1: User Service (Go) + Isolated PostgreSQL
- Target Microservice 2: Order Service (Go) + Isolated PostgreSQL
- Entry Point: API Gateway (Go) implementing Strangler Fig Fallback
- `/monolith`: The legacy application (Source of Truth).
- `/api-gateway`: The "Strangler" entry point with transparent fallback logic.
- `/user-service`: The new decomposed User microservice.
- `/order-service`: The new decomposed Order microservice.
- `/traffic-generator`: Simulates real-world load on the monolith.
- `/stream-consumer`: A verification tool for Kafka events.
- `/docs`: Detailed guides for each migration phase.
- Phase 1: Foundation - Baseline monolith and infrastructure.
- Phase 2: CDC Pipeline - Turning the DB into an event source.
- Phase 3: Decomposition - Implementing the User service and data sync.
- Phase 4: Scaling Out - Implementing the Order service.
- Phase 5: Traffic Cutover - Implementing the API Gateway and Strangler Pattern.
- Phase 6: Historical Data Backfill - Bulk migration utility.
- Phase 7: Migration Observability - Real-time Mission Control Dashboard.
- Language: Go 1.21+
- Databases: PostgreSQL 15, SQLite (optional for quick tests)
- Message Broker: Kafka (Confluent Platform)
- CDC Tool: Debezium Connect
- Deployment: Docker Compose
- Start the infrastructure:

  ```sh
  docker compose up -d
  ```