An MLOps workflow for training, inference, experiment tracking, model registry, and deployment.
Computes CT contrast phase and GI tract contrast using TotalSegmentator and ML
A comprehensive .NET MAUI plugin for ML inference with ONNX Runtime, CoreML, and platform-native acceleration support
gRPC server for Machine Learning (ML) Model Inference in Rust.
EcoChain-ML is a hybrid energy-aware ML framework integrating a lightweight PoS blockchain layer and renewable-aware scheduling. Built to simulate green computing strategies on a single PC, it evaluates energy, latency, and sustainability trade-offs.
[TPDS 2025] EdgeAIBus: AI-driven Joint Container Management and Model Selection Framework for Heterogeneous Edge Computing
ML service for cats that actually learn stuff. PPO brains, personality drift, mood system.
Dockerized Django application for handwritten math expression recognition using a CNN model, with end-to-end ML pipeline and cloud-ready deployment.
Production-style real-time ML feature store with low-latency inference
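A minimal sketch of the low-latency online lookup path such a feature store typically exposes, assuming Redis as the online store; the key layout and feature names below are illustrative, not the repository's actual schema:

```python
# Sketch of the online lookup path, assuming Redis with one hash per entity.
import json

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def write_features(entity_id: str, features: dict) -> None:
    # Online-store write: one Redis hash per entity, one field per feature.
    r.hset(f"features:{entity_id}",
           mapping={k: json.dumps(v) for k, v in features.items()})

def read_features(entity_id: str, names: list[str]) -> dict:
    # Inference-time read: a single HMGET keeps the lookup round trip small.
    values = r.hmget(f"features:{entity_id}", names)
    return {n: (json.loads(v) if v is not None else None)
            for n, v in zip(names, values)}

# write_features("user_42", {"txn_count_7d": 12, "avg_basket": 34.5})
# read_features("user_42", ["txn_count_7d", "avg_basket"])
```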
🐱 Create a living cat AI that exhibits emotions, reactions, and realistic behavior for an engaging and interactive experience.
High-performance C++20 neural network framework powered by Intel oneAPI MKL 2025.2. Optimized for CPU-based deep learning inference and training.
🚀 Event-driven ML inference pipeline using AWS Step Functions and Lambda. Orchestrates a SageMaker image classification workflow with automated confidence-threshold filtering and state machine error handling.
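The confidence-threshold filtering step can be pictured as a small Lambda task inside the state machine; the event schema and environment variable below are assumptions, not the project's actual contract:

```python
# Hypothetical Lambda task: takes the SageMaker classifier output (the
# "predictions" shape here is an assumption) and keeps only results above a
# configurable confidence threshold.
import os

THRESHOLD = float(os.environ.get("CONFIDENCE_THRESHOLD", "0.8"))

def handler(event, context):
    predictions = event.get("predictions", [])
    accepted = [p for p in predictions if p.get("confidence", 0.0) >= THRESHOLD]
    # The returned dict becomes the task output, so later states can branch on it.
    return {
        "accepted": accepted,
        "rejected_count": len(predictions) - len(accepted),
        "threshold": THRESHOLD,
    }
```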
AI recruitment intelligence platform with resume scoring, role matching, and inference workflow design.
PoC demonstrating distributed workload orchestration using Ray as the primary compute framework with Prefect for workflow orchestration, supporting cloud-native deployments (Kubernetes)
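A rough sketch of that pattern, with Prefect driving the workflow and Ray handling the distributed compute; the task names and toy workload are placeholders, not the PoC's actual code:

```python
# Prefect orchestrates the workflow; Ray runs the distributed compute.
import ray
from prefect import flow, task

@ray.remote
def square(x: int) -> int:
    return x * x

@task
def run_on_ray(values: list[int]) -> list[int]:
    # On Kubernetes the Ray address would come from cluster config; locally this
    # just starts (or reuses) an in-process Ray runtime.
    ray.init(ignore_reinit_error=True)
    return ray.get([square.remote(v) for v in values])

@flow
def compute_flow() -> None:
    print(run_on_ray([1, 2, 3, 4]))

if __name__ == "__main__":
    compute_flow()
```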
Machine learning system for on-device inference that analyzes patrol notes and predicts violation type and severity using NLP embeddings and trained classification models.
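An illustrative embeddings-plus-classifiers setup along those lines, using sentence-transformers and scikit-learn; the encoder choice, labels, and toy training notes are assumptions, not the project's data or models:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small enough for on-device use

notes = ["vehicle parked on sidewalk", "loud music complaint after midnight"]
violation_type = ["parking", "noise"]
severity = ["low", "medium"]

# Embed the notes once, then train one classifier per prediction head.
X = encoder.encode(notes)
type_clf = LogisticRegression(max_iter=1000).fit(X, violation_type)
severity_clf = LogisticRegression(max_iter=1000).fit(X, severity)

emb = encoder.encode(["truck blocking a fire lane"])
print(type_clf.predict(emb)[0], severity_clf.predict(emb)[0])
```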
Submission of Project
Enterprise Data Warehouse & ML Platform - High-performance platform processing 24B records with <60s latency and 100K records/sec throughput, featuring 32 fact tables, 128 dimensions, and automated ML pipelines achieving 91.2% accuracy. Real-time ML inference serving 300K+ predictions/hour with ensemble models.
18 compute tools for AI agents: web scraping, code execution, ML inference. pip install, MCP server, REST API. 250 free credits.
Containerized ML inference service exposing a churn prediction model via FastAPI, with Docker-based deployment and AWS-ready architecture.
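A minimal sketch of what such a FastAPI churn endpoint might look like, assuming a pickled scikit-learn model baked into the container image; the feature names, artifact path, and /predict route are illustrative, not the repository's actual API:

```python
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

with open("model.pkl", "rb") as f:  # assumed artifact copied into the image
    model = pickle.load(f)

class CustomerFeatures(BaseModel):
    tenure_months: float
    monthly_charges: float
    total_charges: float

@app.post("/predict")
def predict(features: CustomerFeatures):
    X = [[features.tenure_months, features.monthly_charges, features.total_charges]]
    proba = float(model.predict_proba(X)[0][1])  # assumes a scikit-learn classifier
    return {"churn_probability": proba, "churn": proba >= 0.5}
```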
Microservice to digitize a chess scoresheet