A production-oriented prototype for ingesting, processing, and analyzing LTE S1AP Protocol Data Units (PDUs) with a modular, replayable real-time pipeline.
Repository: https://github.com/melrosenetworks/S1-SEE
S1-SEE implements a multi-stage pipeline:
- Stage 0: Ingress Spooler - Ingests messages via multiple transports (gRPC, Kafka, AMQP, NATS) and durably spools them before ACKing upstream
- Stage 1: Decoder + Normaliser - Decodes S1AP PDUs and normalizes them into canonical messages
- Stage 2: Correlator - Maintains UE contexts and correlates messages to subscribers
- Stage 3: Event Engine - Applies declarative YAML rules to emit events
- Stage 4: Sinks - Publishes events to various outputs (stdout, JSONL, Kafka, gRPC, etc.)
- No-loss upgradeability: Messages are durably spooled before ACKing
- Spool as system of record: Append-only log with partitions, offsets, and replay capability
- Evidence chain: Every event carries pointers to spool offsets for underlying messages
- Transport-agnostic core: All transports feed into a unified SignalMessage model
- Declarative rules: YAML-based event rules (single-message triggers + two-step sequences)
- CMake 3.20+
- C++20 compatible compiler (GCC 10+, Clang 12+)
- Protobuf 3.x
- gRPC
- yaml-cpp
- libpcap (optional, for PCAP file processing)
```bash
mkdir build
cd build
cmake ..
make -j$(nproc)
```

This will build:

- `s1see_spoolerd` - Ingress spooler daemon
- `s1see_processor` - Main processing pipeline
- `s1see_demo_generator` - Demo tool for generating test messages
```bash
./s1see_spoolerd [listen_address] [spool_dir]
```

Example:

```bash
./s1see_spoolerd 0.0.0.0:50051 spool_data
```

The spooler listens for gRPC streaming connections and durably stores all incoming messages.
In another terminal:
```bash
./s1see_demo_generator [server_address] [num_messages]
```

Example:

```bash
./s1see_demo_generator localhost:50051 10
```

Then run the processor:

```bash
./s1see_processor [spool_dir] [ruleset_file] [output_file] [continuous]
```

Example:

```bash
./s1see_processor spool_data config/rulesets/mobility.yaml events.jsonl true
```

The processor will:
- Read messages from the spool
- Decode and normalize them
- Correlate to UE contexts
- Apply rules to emit events
- Write events to stdout and JSONL file
Rulesets are defined in YAML format. See config/rulesets/mobility.yaml for an example:
```yaml
ruleset:
  id: "mobility"
  version: "1.0"

  single_message_rules:
    - event_name: "Mobility.Handover.Commanded"
      msg_type: "HandoverRequest"
      attributes:
        category: "mobility"
        action: "commanded"

  sequence_rules:
    - event_name: "Mobility.Handover.Completed"
      first_msg_type: "HandoverRequest"
      second_msg_type: "HandoverNotify"
      time_window_ms: 15000
      attributes:
        category: "mobility"
        action: "completed"
```

The spool can be configured via code or configuration file:

- `base_dir`: Directory for spool data
- `num_partitions`: Number of partitions (default: 1)
- `max_segment_size`: Maximum segment size before rotation
- `max_retention_bytes`: Maximum total size before pruning
- `max_retention_seconds`: Maximum age before pruning
- `fsync_on_append`: Whether to fsync on each append (default: true)
The spool is implemented as a local disk-based Write-Ahead Log (WAL) with:
- Segmented log files: `segment_{partition}_{baseOffset}.log`
- Index files: `segment_{partition}_{baseOffset}.idx` mapping offsets to file positions
- Partitioning: Messages are partitioned by hash(source_id + source_sequence)
- Consumer groups: Support for multiple consumer groups with independent offsets
- Replay: Full replay capability by reading from any offset
- gRPC: Fully implemented streaming ingest server
- Kafka: Skeleton (integrate librdkafka for production)
- AMQP: Skeleton (integrate RabbitMQ-C for production)
- NATS: Skeleton (integrate nats.c for production)
The decoder uses a real S1AP parser implementation (RealS1APDecoder) that:
- Extracts S1AP PDUs from SCTP packets (PayloadProtocolID = 18)
- Parses S1AP Information Elements using PER (Packed Encoding Rules) decoding
- Extracts UE identifiers: IMSI, TMSI, IMEISV, MME-UE-S1AP-ID, eNB-UE-S1AP-ID
- Extracts TEIDs from E-RAB setup messages
- Parses embedded NAS PDUs to extract additional identifiers
- Based on 3GPP TS 36.413 (S1AP) and TS 24.301 (EPS NAS) specifications
The correlator consists of two main components:
S1apUeCorrelator: Maintains subscriber records with all identifiers:
- Stable identifiers: IMSI, TMSI, IMEISV
- Network identifiers: MME-UE-S1AP-ID, eNB-UE-S1AP-ID
- Tunnel identifiers: TEIDs (GTP tunnel endpoint identifiers)
- Tracks associations between identifiers and handles conflicts
- Automatically removes S1AP IDs when UEContextReleaseComplete is received
UE Context Correlator: Maintains UE contexts indexed by:
- IMSI (globally unique subscriber identifier)
- TMSI (temporary mobile subscriber identity)
- TMSI + ECGI (location-scoped)
- MME composite (mme_id + mme_ue_s1ap_id)
- eNB composite (enb_id + enb_ue_s1ap_id)
- IMEISV (device identifier)
The correlator handles context merging during handovers and automatically cleans up expired contexts.
Every event includes an EvidenceChain with spool offsets pointing to the underlying messages. This allows:
- Retrieving raw bytes from spool
- Replaying events deterministically
- Auditing and debugging
The system uses RealS1APDecoder which wraps the s1ap_parser implementation. To customize:
- Implement the `S1APDecoderWrapper` interface
- In `Pipeline::Pipeline()`, replace the default decoder:

```cpp
pipeline.set_decoder(std::make_unique<YourS1APDecoder>());
```
The current implementation includes:
- Full S1AP PDU parsing with PER decoding
- NAS PDU parsing for embedded messages
- Identifier extraction (IMSI, TMSI, IMEISV, S1AP IDs, TEIDs)
- Support for all major S1AP procedures
- Kafka: Implement `KafkaIngestAdapter::start()` using librdkafka
- AMQP: Implement `AMQPIngestAdapter::start()` using RabbitMQ-C
- NATS: Implement `NATSIngestAdapter::start()` using nats.c
See the adapter headers for the interface contract.
The spool interface is abstracted. To use Kafka:
- Implement a `KafkaSpool` class with the same interface as `Spool`
- Replace `WALLog` with Kafka consumer/producer logic
- Update `Pipeline` to use the new spool implementation
The system is designed for deterministic replay:
- Messages are stored with monotonic offsets
- Events include evidence chains pointing to source messages
- Replaying from the same spool with the same rules produces identical events
- Decode failures: Raw bytes are preserved, the `decode_failed` flag is set
- Spool failures: Exceptions are thrown (never drop data silently)
- Rule evaluation: Failures are logged, processing continues
```bash
cd build
./test_ue_context
./test_correlator
./test_integration
```

Process a PCAP file containing S1AP traffic:

```bash
cd build
./test_pcap [path/to/file.pcap]
# Or use default location: ./test_pcap (looks for ../test_data/sample.pcap)
```

The PCAP test will:
- Read all packets from the PCAP file
- Extract S1AP PDUs from SCTP packets
- Process through the full pipeline (spool → decode → correlate → rules)
- Emit events to stdout and `test_pcap_events.jsonl`
Run the demo:
```bash
# Terminal 1
./s1see_spoolerd

# Terminal 2
./s1see_demo_generator localhost:50051 20

# Terminal 3
./s1see_processor spool_data config/rulesets/mobility.yaml events.jsonl true
```

Check `events.jsonl` for emitted events.
```
S1-SEE/
├── proto/                    # Protobuf definitions
├── include/s1see/            # Header files
│   ├── spool/                # Spool/WAL implementation
│   ├── ingest/               # Transport adapters
│   ├── decode/               # S1AP decoder wrapper
│   ├── correlate/            # UE context correlator
│   ├── rules/                # Event rule engine
│   ├── sinks/                # Event sinks
│   ├── processor/            # Main pipeline
│   └── utils/                # Utility functions (PCAP reader)
├── src/                      # Implementation files
│   ├── s1ap_parser.*         # S1AP PDU parser (PER decoding)
│   ├── nas_parser.*          # NAS message parser
│   ├── s1ap_ue_correlator.*  # Subscriber correlation
│   └── ...                   # Other components
├── apps/                     # Main applications
├── config/                   # Configuration files (rulesets)
├── test_data/                # Test PCAP files
└── CMakeLists.txt            # Build configuration
```
- Full Kafka/AMQP/NATS adapter implementations
- Metrics and observability
- Distributed processing support
- Advanced rule conditions (regex, ranges, etc.)
- Event aggregation and windowing
- Web UI for monitoring
Copyright (c) 2026 Melrose Networks (Melrose Labs Ltd)
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Contributions are welcome! Please see CONTRIBUTING.md for detailed guidelines.
Quick start:
- Fork the repository from https://github.com/melrosenetworks/S1-SEE and create a feature branch
- Follow the existing code style and formatting conventions
- Add tests for new functionality
- Update documentation as needed
- Submit a pull request with a clear description of changes
Please use the GitHub issue tracker to report bugs or request features. Include:
- Description of the issue
- Steps to reproduce (if applicable)
- Expected vs. actual behavior
- Environment details (OS, compiler version, etc.)