Real-time object detection and tracking system
YOLO11 × BoT-SORT × FastAPI
Demo • Quick Start • Features • Usage • Docker
Pedestrian Detection | Traffic Detection
Real-time tracking of 20+ pedestrians with trajectory prediction
📥 Download Full Quality Video (34MB MP4)
Try it in 3 commands:
```bash
git clone https://github.com/bcpmarvel/sentinel.git
cd sentinel
uv run detect image demos/inputs/images/pedestrian.jpg
```

Models download automatically on first run. Output saved to demos/outputs/.
Prerequisites:

- Python 3.12+
- uv package manager (install)
```bash
git clone https://github.com/bcpmarvel/sentinel.git
cd sentinel
uv sync
```

Detect objects in demo images:
```bash
# Pedestrian detection
uv run detect image demos/inputs/images/pedestrian.jpg

# Traffic detection
uv run detect image demos/inputs/images/traffic.jpg
```

Run on webcam:

```bash
uv run detect video --source 0
```

Track objects in video:

```bash
uv run detect video --source demos/inputs/videos/mot17-05.mp4 --track
```

Start API server:

```bash
uv run serve
```

Test the API:

```bash
curl -X POST http://localhost:8000/api/detect \
  -F "file=@demos/inputs/images/pedestrian.jpg"
```

| Feature | Description | Status |
|---|---|---|
| 🎯 YOLO11 Detection | State-of-the-art object detection at 30+ FPS | ✅ |
| 🔄 BoT-SORT Tracking | Multi-object tracking with trajectory prediction | ✅ |
| 📊 Zone Analytics | Count objects, measure dwell time, detect entries/exits | ✅ |
| 🚀 REST API | Production-ready FastAPI server | ✅ |
| 🔌 WebSocket Streaming | Real-time video streaming | ✅ |
| ⚡ GPU Acceleration | MPS (Apple Silicon) and CUDA support | ✅ |
| 🐳 Docker Ready | Containerized deployment with docker-compose | ✅ |
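The WebSocket streaming endpoint is not documented in this README, so the path and payload below are pure assumptions; this is only a minimal client sketch, using the third-party websockets library, to illustrate the shape of such a consumer:

```python
# Hypothetical WebSocket client sketch -- the endpoint path and the
# JSON-per-message payload are ASSUMPTIONS, not the documented API.
import asyncio
import json

import websockets  # third-party: pip install websockets


async def watch_stream() -> None:
    # ASSUMPTION: endpoint path; check the Swagger UI (/docs) for the real one.
    async with websockets.connect("ws://localhost:8000/ws/stream") as ws:
        async for message in ws:  # iterate until the server closes
            frame = json.loads(message)  # ASSUMPTION: JSON payload per frame
            print(frame)


asyncio.run(watch_stream())
```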
Run with Docker:

```bash
# Development (hot reload)
docker-compose up api

# Production
docker-compose --profile production up api-prod
```

Image Detection:

```bash
uv run detect image <image_path> [--conf 0.5] [--model yolo11m.pt]
```

Video Detection:

```bash
# Basic detection
uv run detect video --source 0 # Webcam
uv run detect video --source video.mp4 # Video file
uv run detect video --source rtsp://camera.ip # RTSP stream
# With tracking
uv run detect video --source video.mp4 --track
# Advanced options
uv run detect video --source 0 \
--model yolo11s.pt \
--conf 0.6 \
--track \
--analytics \
--zones zones.json
```

Available Models:
- yolo11n.pt - Nano (fastest)
- yolo11s.pt - Small
- yolo11m.pt - Medium (default, balanced)
- yolo11l.pt - Large
- yolo11x.pt - Extra large (most accurate)
Advanced CLI Options
```bash
uv run detect video --help
```

Common options:
- --conf: Confidence threshold (0-1, default: 0.25)
- --device: Device (cpu/mps/cuda, auto-detected)
- --model: YOLO model path
- --track: Enable object tracking
- --analytics: Enable zone analytics
- --zones: Path to zones.json file
- --no-display: Run without GUI window
- --save-video: Save output video
Start the server:

```bash
uv run serve
```

Health Check:

```bash
curl http://localhost:8000/api/health
```

Detect Objects:

```bash
curl -X POST http://localhost:8000/api/detect \
  -F "file=@demos/inputs/images/pedestrian.jpg" \
  -F "conf_threshold=0.5"
```

Response:

```json
{
"detections": [
{
"x1": 123.4, "y1": 456.7,
"x2": 789.0, "y2": 321.5,
"confidence": 0.89,
"class_id": 0,
"class_name": "person"
}
],
"image_width": 1280,
"image_height": 720,
"processing_time_ms": 45.2,
"model_name": "yolo11m.pt",
"device": "mps"
}
```

Interactive API Docs:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
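The response shown above maps naturally onto Pydantic models. Below is a minimal sketch of that shape, inferred purely from the JSON fields; the class names are hypothetical and may not match what src/sentinel/api/schemas.py actually declares:

```python
# Hypothetical Pydantic models inferred from the response JSON above;
# class names are assumptions, not necessarily those in schemas.py.
from pydantic import BaseModel


class Detection(BaseModel):
    x1: float
    y1: float
    x2: float
    y2: float
    confidence: float
    class_id: int
    class_name: str


class DetectResponse(BaseModel):
    detections: list[Detection]
    image_width: int
    image_height: int
    processing_time_ms: float
    model_name: str
    device: str
```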
More API Examples
With custom confidence threshold:
```bash
curl -X POST http://localhost:8000/api/detect \
  -F "file=@image.jpg" \
  -F "conf_threshold=0.7"
```

Using Python requests:
```python
import requests

with open("image.jpg", "rb") as f:
    response = requests.post(
        "http://localhost:8000/api/detect",
        files={"file": f},
        data={"conf_threshold": 0.5},
    )
print(response.json())
```
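Continuing from the snippet above, the returned boxes can be drawn straight onto the original image with OpenCV; a minimal sketch, assuming image.jpg is the file that was uploaded:

```python
# Draw the returned detections onto the original image with OpenCV.
# Reuses `response` from the requests example above.
import cv2

img = cv2.imread("image.jpg")
for det in response.json()["detections"]:
    p1 = (int(det["x1"]), int(det["y1"]))
    p2 = (int(det["x2"]), int(det["y2"]))
    cv2.rectangle(img, p1, p2, (0, 255, 0), 2)
    label = f"{det['class_name']} {det['confidence']:.2f}"
    cv2.putText(img, label, (p1[0], p1[1] - 5),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
cv2.imwrite("annotated.jpg", img)
```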
Zone Analytics

Create zones.json to define monitoring zones:

```json
[
{
"id": "zone_1",
"name": "Entrance",
"polygon": [[100, 100], [500, 100], [500, 400], [100, 400]],
"color": [255, 0, 0]
}
]
```

Metrics tracked:
- Object count in zone
- Average/max dwell time
- Entry/exit events
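The repo's analytics service isn't shown here, but the core of zone counting is a point-in-polygon test on each detection's anchor point. Here is a minimal illustrative sketch using OpenCV against the zones.json format above; the detection boxes are made up for the example, and this is not the repo's actual implementation:

```python
# Minimal sketch of zone occupancy counting: test each detection's
# bottom-center point against the zone polygon. Illustrative only.
import json

import cv2
import numpy as np

with open("zones.json") as f:
    zones = json.load(f)

polygon = np.array(zones[0]["polygon"], dtype=np.int32)

# Hypothetical detection boxes as (x1, y1, x2, y2), made up for this example.
boxes = [(120, 150, 180, 390), (600, 120, 660, 380)]

count = 0
for x1, y1, x2, y2 in boxes:
    # Use the bottom-center of each box as the anchor point.
    anchor = ((x1 + x2) / 2, float(y2))
    # pointPolygonTest returns > 0 inside, 0 on the edge, < 0 outside.
    if cv2.pointPolygonTest(polygon, anchor, False) >= 0:
        count += 1

print(f"objects in {zones[0]['name']}: {count}")
```

Dwell time follows the same idea: record the first frame a track ID enters the polygon and measure elapsed time until it leaves.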
Use .env for environment variables:

```bash
cp .env.example .env
```

```env
MODEL_NAME=yolo11m.pt
DEVICE=cpu
API_HOST=0.0.0.0
API_PORT=8000
LOG_FORMAT=json
```

Configuration is managed through:
- Command-line options for per-run settings
- Environment variables (via .env) for persistent settings
See Docker/API Deployment section above for .env configuration.
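Since config.py holds Pydantic settings, the .env keys above presumably load into a settings class. A hypothetical sketch of that pattern with pydantic-settings; the field and class names mirror the .env keys but are assumptions, not necessarily what the repo declares:

```python
# Hypothetical settings class mirroring the .env keys above, using
# pydantic-settings; not necessarily this repo's actual config.py.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    # protected_namespaces=() silences pydantic's warning about the
    # "model_" prefix on the model_name field.
    model_config = SettingsConfigDict(env_file=".env", protected_namespaces=())

    model_name: str = "yolo11m.pt"
    device: str = "cpu"
    api_host: str = "0.0.0.0"
    api_port: int = 8000
    log_format: str = "json"


settings = Settings()  # MODEL_NAME, DEVICE, ... read from .env or the environment
```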
Architecture

```
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ Input │─────▶│ YOLO11 │─────▶│ BoT-SORT │
│ (Video/API) │ │ Detection │ │ Tracking │
└─────────────┘ └──────────────┘ └─────────────┘
│
▼
┌─────────────┐ ┌──────────────┐ ┌─────────────┐
│ Output │◀─────│ Visualization│◀─────│ Analytics │
│ (API/Stream)│ │ & Rendering │ │ (Zones) │
└─────────────┘      └──────────────┘      └─────────────┘
```
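Independent of this repo's wrappers, the detect → track stages of the diagram can be reproduced directly with the Ultralytics API, which ships a built-in BoT-SORT tracker config; a minimal sketch:

```python
# Minimal detect -> track loop using the Ultralytics API directly.
from ultralytics import YOLO

model = YOLO("yolo11n.pt")

# stream=True yields results frame by frame; persist=True keeps track IDs
# across frames; "botsort.yaml" selects the built-in BoT-SORT tracker.
for result in model.track(source="demos/inputs/videos/mot17-05.mp4",
                          tracker="botsort.yaml", persist=True, stream=True):
    if result.boxes.id is not None:
        for box, track_id in zip(result.boxes.xyxy, result.boxes.id):
            x1, y1, x2, y2 = box.tolist()
            print(f"track {int(track_id)}: ({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```

This repo's pipeline.py presumably layers the analytics and rendering stages of the diagram on top of a loop like this.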
Project Structure
```
src/sentinel/
├── api/ # FastAPI routes, schemas, dependencies
│ ├── app.py
│ ├── routes.py
│ ├── schemas.py
│ └── dependencies.py
├── analytics/ # Zone analytics, dwell time tracking
│ ├── service.py
│ ├── models.py
│ └── dwell.py
├── detection/ # YOLO11 detector, service
│ ├── service.py
│ └── models.py
├── visualization/ # Annotators for drawing
│ └── annotators.py
├── cli.py # CLI entrypoint
├── server.py # API server entrypoint
├── config.py # Pydantic settings
└── pipeline.py # Video processing pipeline
```
| Component | Technology |
|---|---|
| Detection | YOLO11 (Ultralytics) |
| Tracking | BoT-SORT |
| Deep Learning | PyTorch (MPS/CUDA) |
| API Framework | FastAPI |
| Computer Vision | OpenCV, Supervision |
| CLI | Typer |
| Logging | Structlog |
| Packaging | uv |
Install dev dependencies:

```bash
uv sync --dev
```

Format and lint:

```bash
# Format
uv run ruff format .

# Lint
uv run ruff check .

# Fix
uv run ruff check --fix .
```

Run tests:

```bash
uv run pytest
uv run pytest -v --cov=sentinel
```

This project is licensed under the MIT License - see the LICENSE file for details.
Acknowledgments

- Ultralytics for YOLO11
- BoT-SORT for multi-object tracking
- Roboflow for Supervision library

