A QML desktop app for multi-model deliberation and discussion
Compare local and hosted models in one workflow, stream responses live, and keep the full decision trail.
PolyCouncil coordinates multiple LLMs across local and hosted providers, then lets you review the run as a structured workflow instead of a loose stack of chat outputs.
The current desktop app uses:

- a QML primary UI in `gui/main.qml`
- a Python bridge in `qml_bridge.py`
- a clean runtime entrypoint in `main.py`
The legacy widgets-era `council.py` is still in the repo for compatibility and reference, but it is no longer the primary app entrypoint.
PolyCouncil supports two run modes:

- Council voting: multiple models answer the same prompt, score peer answers, and produce a weighted winner.
- Discussion: multiple models collaborate across turns and generate a final synthesis.
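As a rough illustration of the weighted-voting idea (function and model names here are invented for the sketch, not the app's actual API), each model scores its peers' answers, failed answers are dropped from the pool, and the highest weighted total wins:

```python
from collections import defaultdict

def tally_votes(answers, scores, weights=None):
    """Illustrative weighted tally: each model scores every peer answer;
    failed/cancelled answers (None) are excluded from the voting pool."""
    weights = weights or {}
    totals = defaultdict(float)
    for voter, ballot in scores.items():
        w = weights.get(voter, 1.0)  # default: every voter counts equally
        for candidate, score in ballot.items():
            if candidate == voter:
                continue  # models do not score their own answer
            if answers.get(candidate) is None:
                continue  # failed answers leave the pool
            totals[candidate] += w * score
    return max(totals, key=totals.get) if totals else None

answers = {"gpt": "A", "llama": "B", "gemini": None}  # gemini failed
scores = {
    "gpt":    {"llama": 8, "gemini": 9},
    "llama":  {"gpt": 7, "gemini": 6},
    "gemini": {"gpt": 5, "llama": 9},
}
winner = tally_votes(answers, scores)  # → "llama" (17 vs 12; gemini excluded)
```

If every answer fails, the tally returns no winner, matching the behavior described under troubleshooting below.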
Supported providers:

- LM Studio
- Ollama
- OpenAI-compatible hosted APIs, including:
  - OpenAI
  - OpenRouter
  - Google Gemini
  - Anthropic
  - Groq
  - Together AI
  - Kimi
  - MiniMax
  - Z.AI
  - Fireworks AI
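One request shape covers most of the hosted providers above because they expose an OpenAI-compatible chat completions endpoint. A sketch of that shared shape (base URL, key, and model name are placeholders; nothing is actually sent here):

```python
import json

def build_chat_request(base_url, api_key, model, prompt, stream=True):
    """Build the pieces of an OpenAI-compatible /v1/chat/completions call."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,  # stream tokens live when the provider supports it
    })
    return url, headers, body

url, headers, body = build_chat_request(
    "https://api.example.com", "sk-placeholder", "some-model", "Hello")
```

Swapping providers then mostly means swapping the base URL and key, which is what the connection presets below capture.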
Key features:

- provider profiles and saved connection presets
- searchable large model lists
- persona assignment and persona editing
- file attachments and image attachments
- live streaming output when the provider supports it
- session replay and JSON export
- leaderboard/history tracking
- sanitized markdown rendering
- secure API key storage when the OS keychain is available
- session history stored under app data directories
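The sanitized-markdown feature pairs the `markdown` and `bleach` dependencies listed below; a minimal sketch of that approach (the allow-list here is illustrative, not the app's actual one):

```python
import bleach
import markdown

# Illustrative allow-list: keep basic formatting, drop everything else.
ALLOWED_TAGS = ["p", "b", "i", "em", "strong", "code", "pre",
                "ul", "ol", "li", "a"]

def render_safe(text):
    """Convert model output to HTML, then strip any tag outside the
    allow-list so model-written markup cannot inject scripts."""
    html = markdown.markdown(text)
    return bleach.clean(html, tags=ALLOWED_TAGS, strip=True)

out = render_safe("**bold** <script>alert(1)</script>")
```

With `strip=True` the disallowed `<script>` tags are removed rather than escaped, while legitimate formatting such as `<strong>` survives.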
Quick start:

1. Download the latest `PolyCouncil.exe` from Releases.
2. Start LM Studio or Ollama, or prepare a hosted provider API key.
3. Launch `PolyCouncil.exe`.
4. Load models from the left workflow pane.
5. Select models, write a prompt, and run the council.
Run from source:

```
git clone https://github.com/TrentPierce/PolyCouncil.git
cd PolyCouncil
pip install -r requirements.txt
python main.py
```

Requirements:

- Python 3.11+
- PySide6, aiohttp, qasync, pypdf, python-docx, markdown, bleach, keyring
Project layout:

- `main.py`: QML app entrypoint
- `qml_bridge.py`: QObject bridge exposed to QML
- `gui/main.qml`: main application shell
- `gui/components/`: reusable QML controls
- `core/`: provider routing, discussion logic, sessions, settings, rendering helpers
- `tests/`: pytest coverage for providers, voting, personas, discussion flow, and rendering safety
Build a standalone executable:

```
python -m PyInstaller build_exe.spec --clean --noconfirm
```

Output:

- `dist/PolyCouncil.exe` on Windows
- `dist/PolyCouncil` on Linux and macOS runners
The build spec targets `main.py` and bundles the QML UI, QML components, personas config, and app icon.
Development setup:

```
git clone https://github.com/TrentPierce/PolyCouncil.git
cd PolyCouncil
pip install -r requirements-dev.txt
python main.py
```

Run tests:

```
pytest -q
```

Configuration is stored in:

- Windows: `%APPDATA%\PolyCouncil`
- macOS: `~/Library/Application Support/PolyCouncil`
- Linux: `${XDG_CONFIG_HOME:-~/.config}/PolyCouncil`
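The per-platform paths above can be resolved with a small helper; a standard-library-only sketch (not necessarily how the app itself resolves them):

```python
import os
import sys
from pathlib import Path

def config_dir(app="PolyCouncil"):
    """Resolve the per-platform configuration directory."""
    if sys.platform == "win32":
        base = os.environ.get("APPDATA", str(Path.home()))
    elif sys.platform == "darwin":
        base = str(Path.home() / "Library" / "Application Support")
    else:
        # Linux and other POSIX: honor XDG_CONFIG_HOME, else ~/.config
        base = os.environ.get("XDG_CONFIG_HOME",
                              str(Path.home() / ".config"))
    return Path(base) / app
```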
Session data is stored in:

- Windows: `%LOCALAPPDATA%\PolyCouncil`
- macOS: `~/Library/Application Support/PolyCouncil`
- Linux: `${XDG_DATA_HOME:-~/.local/share}/PolyCouncil`
If models fail to load for a provider:

- verify the provider base URL and API key
- confirm the provider actually exposes a compatible model-list endpoint
- use manual model entry for providers that do not expose a standard list endpoint
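Checking for a model-list endpoint usually means querying the OpenAI-compatible `/v1/models` route; a sketch of a manual probe (URL and key are placeholders):

```python
import json
import urllib.request

def parse_model_list(payload):
    """Extract ids from the {"data": [{"id": ...}, ...]} shape that
    OpenAI-compatible servers return."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(base_url, api_key, timeout=10):
    """Probe /v1/models. Providers without this endpoint typically 404,
    which is the case where manual model entry is needed."""
    req = urllib.request.Request(
        base_url.rstrip("/") + "/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return parse_model_list(json.load(resp))
```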
Some OpenRouter models, especially `:free` routes, may fail if your account privacy settings do not allow any available backend for that model. Adjust the privacy settings in your OpenRouter account or choose a different route.
Failed or cancelled answers are excluded from the voting pool. If every model fails, PolyCouncil will finish the run without forcing a winner.
Contributions are welcome in:
- provider compatibility
- voting quality
- streaming behavior
- UI polish
- documentation
- test coverage
Open an issue or submit a pull request.
This project is licensed under the Polyform Noncommercial License 1.0.0.
See LICENSE for details.
- Bug reports: GitHub Issues
- Feature requests: GitHub Issues
- Project page: GitHub Repository


