Get latest Product updates
Website • Authors • Discord Channel
An AI-Powered Meeting Assistant that captures live meeting audio, transcribes it in real-time, and generates summaries while ensuring user privacy. Perfect for teams who want to focus on discussions while automatically capturing and organizing meeting content without the need for external servers or complex infrastructure.
- Overview
- Features
- System Architecture
- Prerequisites
- Setup Instructions
- Development Setup
- Whisper Model Selection
- Known Issues
- LLM Integration
- Troubleshooting
- Uninstallation
- Development Guidelines
- Contributing
- License
- Introducing Subscription
- Contributions
- Acknowledgments
- Star History
While there are many meeting transcription tools available, this solution stands out by offering:
- Privacy First: All processing happens locally on your device
- Cost Effective: Uses open-source AI models instead of expensive APIs
- Flexible: Works offline, supports multiple meeting platforms
- Customizable: Self-host and modify for your specific needs
- Intelligent: Built-in knowledge graph for semantic search across meetings
✅ Modern, responsive UI with real-time updates
✅ Real-time audio capture (microphone + system audio)
✅ Live transcription using locally-running Whisper
✅ Local processing for privacy
✅ Packaged app for macOS and Windows
✅ Rich text editor for notes
🚧 Export to Markdown/PDF/HTML
🚧 Obsidian Integration
🚧 Speaker diarization
- Audio Capture Service
  - Real-time microphone/system audio capture
  - Audio preprocessing pipeline
  - Built with Rust (experimental) and Python
- Transcription Engine
  - Whisper.cpp for local transcription
  - Supports multiple model sizes (tiny → large)
  - GPU-accelerated processing
- LLM Orchestrator
  - Unified interface for multiple providers
  - Automatic fallback handling
  - Chunk processing with overlap
  - Model configuration
- Data Services
  - ChromaDB: Vector store for transcript embeddings
  - SQLite: Process tracking and metadata storage
- Frontend: Tauri app + Next.js (packaged executables)
- Backend: Python FastAPI
  - Transcript workers
  - LLM inference
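The "chunk processing with overlap" step above can be sketched in Python. This is an illustrative sketch only: the function name, default chunk size, and overlap are assumptions, not the project's actual implementation.

```python
def chunk_text(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so context survives chunk boundaries.

    Illustrative sketch; the real orchestrator's parameters may differ.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        # Stop once the last chunk reaches the end of the text.
        if start + chunk_size >= len(text):
            break
    return chunks
```

Overlap matters for summarization: a sentence cut in half at a chunk boundary still appears whole in the neighboring chunk.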
- Node.js 18+
- Python 3.10+
- FFmpeg
- Rust 1.65+ (for experimental features)
- CMake 3.22+ (for building the frontend)
- For Windows: Visual Studio Build Tools with C++ development workload
Option 1: Using the Setup Executable (.exe) (Recommended)
- Download the `meetily-frontend_0.0.4_x64-setup.exe` file
- Double-click the installer to run it
- Follow the on-screen instructions to complete the installation
- The application will be available on your desktop

Note: Windows may display a security warning. To bypass this:

- Click **More info** and choose **Run anyway**, or
- Right-click the installer (.exe), select Properties, and check the Unblock checkbox at the bottom
Option 2: Using the MSI Installer (.msi)
- Download the `meetily-frontend_0.0.4_x64_en-US.msi` file
- Double-click the MSI file to run it
- Follow the installation wizard to complete the setup
- The application will be installed and available on your desktop
Provide necessary permissions for audio capture and microphone access.
Option 1: Manual Setup
- Clone the repository:

```powershell
git clone https://github.com/Zackriya-Solutions/meeting-minutes
cd meeting-minutes/backend
```

- Build dependencies:

```powershell
.\build_whisper.cmd
```

- Start the backend servers:

```powershell
.\start_with_output.ps1
```
Option 2: Docker Setup (including ARM64/Snapdragon)
```powershell
# Clone the repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes.git
cd meeting-minutes

# Run the Docker build script (interactive setup)
.\docker-build.bat
```
The Docker setup for both macOS and Windows allows you to configure:
- Whisper model selection (tiny, base, small, medium, large-v3, etc.)
- Language preference (auto-detection or specific language)
- Logging level
Go to the releases page and download the latest version.
Option 1: Using Homebrew (Recommended)
Note : This step installs the backend server and the frontend app. Once the backend and the frontend are started, you can open the application from the Applications folder.
```bash
# Install Meetily using Homebrew
brew tap zackriya-solutions/meetily
brew install --cask meetily

# Start the backend server
meetily-server --language en --model medium
```
Option 2: Manual Installation
- Download the `dmg_darwin_arch64.zip` file
- Extract the file
- Double-click the `.dmg` file inside the extracted folder
- Drag the application to your Applications folder
- Execute the following command in terminal to remove the quarantine attribute:

```bash
xattr -c /Applications/meetily-frontend.app
```
Provide necessary permissions for audio capture and microphone access.
Option 1: Using Homebrew (Recommended)
(Optional)
```bash
# If Meetily is already installed on your system, uninstall the current versions
brew uninstall meetily
brew uninstall meetily-backend
brew untap zackriya-solutions/meetily

# Install Meetily using Homebrew
brew tap zackriya-solutions/meetily
brew install --cask meetily

# Start the backend server
meetily-server --language en --model medium
```
Option 2: Manual Setup
```bash
# Clone the repository
git clone https://github.com/Zackriya-Solutions/meeting-minutes.git
cd meeting-minutes/backend

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

# Build dependencies
chmod +x build_whisper.sh
./build_whisper.sh

# Start backend servers
./clean_start_backend.sh
```
```bash
# Navigate to frontend directory
cd frontend

# Give execute permissions to clean_build.sh
chmod +x clean_build.sh

# Run clean_build.sh
./clean_build.sh
```
When setting up the backend (either via Homebrew, manual installation, or Docker), you can choose from various Whisper models based on your needs:
- Standard models (balance of accuracy and speed):
  - tiny, base, small, medium
- English-optimized models (faster for English content):
  - tiny.en, base.en, small.en, medium.en
- Advanced models (for special needs):
  - large-v3, large-v3-turbo
  - small.en-tdrz (with speaker diarization)
- Quantized models (reduced size, slightly lower quality):
  - tiny-q5_1, base-q5_1, small-q5_1, medium-q5_0
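To make the size trade-off concrete, here is a small hedged helper for picking a model by disk budget. The sizes below are rough approximations of the ggml model downloads and may differ between releases; they are illustrative, not authoritative.

```python
# Approximate download sizes (MB) of common ggml Whisper models.
# These are rough estimates and vary between releases.
APPROX_MODEL_SIZE_MB = {
    "tiny": 75,
    "base": 142,
    "small": 466,
    "medium": 1500,
    "large-v3": 2900,
}

def largest_model_under(budget_mb: int) -> str:
    """Pick the largest (usually most accurate) model that fits a disk budget."""
    fitting = {name: size for name, size in APPROX_MODEL_SIZE_MB.items()
               if size <= budget_mb}
    if not fitting:
        raise ValueError(f"no model fits within {budget_mb} MB")
    return max(fitting, key=fitting.get)
```

For example, with roughly 500 MB of disk available, `small` is the largest standard model that fits.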
- Smaller LLMs can hallucinate, which degrades summary quality; use a model with at least 32B parameters
- The backend build requires CMake, a C++ compiler, and related toolchain dependencies, which makes it harder to build
- The backend build requires Python 3.10 or newer
- The frontend build requires Node.js
The backend supports multiple LLM providers through a unified interface. Current implementations include:
- Anthropic (Claude models)
- Groq (Llama 3.2 90B)
- Ollama (local models that support function calling)
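A minimal sketch of what a unified provider interface with automatic fallback can look like is below. The class and method names are hypothetical and not the backend's actual API; it only illustrates the try-in-order pattern.

```python
from typing import Callable

class LLMOrchestrator:
    """Try providers in order, falling back when one fails (illustrative sketch)."""

    def __init__(self, providers: list[tuple[str, Callable[[str], str]]]):
        # Each provider is a (name, callable) pair; the callable takes a
        # prompt string and returns generated text.
        self.providers = providers

    def generate(self, prompt: str) -> str:
        errors = []
        for name, call in self.providers:
            try:
                return call(prompt)
            except Exception as exc:
                # Record the failure and fall through to the next provider.
                errors.append(f"{name}: {exc}")
        raise RuntimeError("all providers failed: " + "; ".join(errors))
```

Callers see one `generate` entry point regardless of whether the answer ultimately came from a hosted API or a local Ollama model.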
If you encounter issues with the Whisper model:
```bash
# Try a different model size
meetily-download-model small

# Verify model installation
ls -la $(brew --prefix)/opt/meetily-backend/backend/whisper-server-package/models/
```
If the server fails to start:

- Check if ports 8178 and 5167 are available:

```bash
lsof -i :8178
lsof -i :5167
```

- Verify that FFmpeg is installed correctly:

```bash
which ffmpeg
ffmpeg -version
```

- Check the logs for specific error messages when running `meetily-server`
- Try running the Whisper server manually:

```bash
cd $(brew --prefix)/opt/meetily-backend/backend/whisper-server-package/
./run-server.sh --model models/ggml-medium.bin
```
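The port check above can also be done programmatically. This is a small sketch using Python's standard `socket` module; the helper name is illustrative, and the ports (8178 and 5167) are those mentioned in the text.

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 on a successful TCP connection.
        return s.connect_ex((host, port)) == 0

# Example: check the two backend ports before starting the server.
# port_in_use(8178) or port_in_use(5167) being True means a conflict.
```

If a port is already taken, stop the conflicting process (found via `lsof` above) before restarting the backend.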
If the frontend application doesn't connect to the backend:
- Ensure the backend server is running (`meetily-server`)
- Check if the application can access localhost:5167
- Restart the application after starting the backend
If the application fails to launch:
```bash
# Clear quarantine attributes
xattr -cr /Applications/meetily-frontend.app
```
To completely remove Meetily:
```bash
# Remove the frontend
brew uninstall --cask meetily

# Remove the backend
brew uninstall meetily-backend

# Optional: remove the taps
brew untap zackriya-solutions/meetily
brew untap zackriya-solutions/meetily-backend

# Optional: remove Ollama if no longer needed
brew uninstall ollama
```
- Follow the established project structure
- Write tests for new features
- Document API changes
- Use type hints in Python code
- Follow ESLint configuration for JavaScript/TypeScript
- Fork the repository
- Create a feature branch
- Submit a pull request
MIT License - Feel free to use this project for your own purposes.
We are planning to add a subscription option so that you don't have to run the backend on your own server. This will help you scale better and run the service 24/7. This is based on a few requests we received. If you are interested, please fill out the form here.
Thanks for all the contributions. Our community is what makes this project possible. Below is the list of contributors:
We welcome contributions from the community! If you have any questions or suggestions, please open an issue or submit a pull request. Please follow the established project structure and guidelines. For more details, refer to the CONTRIBUTING file.
- We borrowed some code from Whisper.cpp
- We borrowed some code from Screenpipe