A lightweight, user-friendly AI translation system for OpenStack i18n.
This tool helps contributors translate .pot / .po files into 54 languages using CPU-friendly local LLMs served via Ollama, as well as GPT, Claude, and Gemini APIs.
If you're new to OpenStack i18n, see the official OpenStack i18n guide.
This repository provides two independent translation workflows:
- local/ - For local development and manual translation testing
- ci/ - For automated CI/CD pipeline integration
- Python 3.10 is required
- Ollama (for local LLM) or API keys for GPT/Claude/Gemini
- Git (for CI workflow)
The fastest way to run your first translation on your local machine.
By default, this system translates the nova project files into Korean (ko_KR) and Japanese (ja) using the llama3.2:3b model via Ollama.
You can customize the target project, model, and language in config.yaml (see Choose Your Options below).
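For reference, here is a minimal sketch of what `local/config.yaml` looks like, combining only the fields documented later in this README (values shown are the documented defaults; this is illustrative, not the shipped file):

```yaml
# Minimal sketch of local/config.yaml (fields as documented in this README)
target_file: "test.po"   # must be placed under ./data/target/{lang}
languages:
  - "ko_KR"              # choose exactly ONE language for local translation
llm:
  model: "llama3.2:3b"
  mode: "ollama"         # ollama (default), gpt, claude, gemini
  workers: 1             # number of parallel threads
  start: 0               # entry index range to translate
  end: -1
  batch_size: 5          # entries per LLM call
```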
git clone https://github.com/openstack-kr/knu_i18n_2025.git
cd knu_i18n_2025/local
# If you have trouble upgrading pip, we recommend using a virtual environment (venv)
python -m pip install --upgrade pip
pip install tox
# Install Ollama
# For Linux:
curl -fsSL https://ollama.com/install.sh | sh
# For other operating systems (Windows, macOS):
# Please visit https://ollama.com/download and follow the installation instructions
pip install -r requirements.txt
This will translate the file specified in config.yaml using the configured model and language.
tox -e i18n -vv
# or
bash scripts/local.sh
What's happening:
- The system reads your target `.pot` or `.po` file from the `./data/target/{lang}` directory
- Uses the specified model (default: `llama3.2:3b` via Ollama)
- Translates into your chosen language (default: `ko_KR`)
- Outputs translated `.po` files to the `./po/{model}/{lang}/` directory
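The first step above, reading entries from the target file, can be sketched with a minimal stdlib-only parser. This is a hypothetical helper for illustration (the project would more realistically use a library such as polib); it ignores plurals, comments, and multi-line strings:

```python
import re

def parse_po(text):
    """Extract (msgid, msgstr) pairs from a simple .po file.
    Sketch only: ignores plurals, comments, and multi-line strings."""
    pairs = re.findall(r'msgid "(.*)"\nmsgstr "(.*)"', text)
    # Skip the header entry (empty msgid); keep real message entries
    return [(mid, mstr) for mid, mstr in pairs if mid]

sample = '''msgid ""
msgstr ""

msgid "Instance not found"
msgstr ""

msgid "Volume attached"
msgstr "볼륨이 연결되었습니다"
'''

entries = parse_po(sample)
# Entries whose msgstr is still empty are the ones the LLM must translate
untranslated = [mid for mid, mstr in entries if not mstr]
print(untranslated)  # → ['Instance not found']
```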
After AI translation, human review is essential to ensure accuracy and context appropriateness. AI translations are drafts that require human verification before production use.
Open the generated .po file in ./po/{model}/{lang}/ directory and review the translations manually for technical accuracy, natural language flow, and consistency with existing translations.
After reviewing AI translation, merge your reviewed translations back to the original .po file:
tox -e i18n-merge -vv
# or
python src/merge_po.py --config config.yaml
This will merge your reviewed translations and save the final result to the ./data/result/{lang} directory.
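Conceptually, the merge step overlays reviewed translations onto the original catalog. The sketch below uses plain dicts for illustration; the real `merge_po.py` works on full `.po` syntax (plurals, fuzzy flags, comments), and the function name here is hypothetical:

```python
def merge_translations(original, reviewed):
    """Overlay reviewed msgstr values onto the original catalog.
    Sketch only: dicts of msgid -> msgstr, not real .po files."""
    merged = dict(original)
    for msgid, msgstr in reviewed.items():
        if msgstr:  # only take entries the reviewer actually filled in
            merged[msgid] = msgstr
    return merged

original = {"Instance not found": "", "Volume attached": "볼륨이 연결되었습니다"}
reviewed = {"Instance not found": "인스턴스를 찾을 수 없습니다"}
print(merge_translations(original, reviewed))
```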
You can customize the target file, model, language, and performance settings in local/config.yaml.
- Place your target `.pot` or `.po` file in the `./data/target/{lang}` directory
- Specify the filename in `config.yaml`:

# Set target_file to translate (must be placed under ./data/target/{lang})
target_file: "test.po"

- Input: `./data/target/{lang}/{target_file}.po` or `./data/target/{lang}/{target_file}.pot`
- Intermediate outputs:
  - Extracted POT: `./pot/{target_file}.pot`
  - AI translations: `./po/{model}/{lang}/{target_file}.po`
- Final output: `./data/result/{lang}/{target_file}.po` (merged translation)
You can manually download the latest translated POT or PO files directly from the Weblate interface.
Steps:
- Go to the Weblate translation dashboard for the project (Example)
- Select the project (e.g., Nova, Horizon, etc.)
- Navigate to: project → languages → <Your Language>
- Click "Download translation"
- Save the downloaded file to the `./data/target/{lang}/` directory
- Update the `target_file` name in `config.yaml`
Please choose your language code from this link; we support 54 languages.
languages:
# Please choose exactly ONE language for local translation.
- "ko_KR"Uses Ollama. Browse available models HERE.
When using a closed-source model, switch the backend via llm.mode: ollama (default), gpt, claude, or gemini.
# You can tune these arguments for performance / partial translation:
llm:
model: "llama3.2:3b"
mode: "ollama" # Choose your LLM mode: `ollama` (default), `gpt`, `claude`, `gemini`
workers: 1 # number of parallel threads (default: 1)
start: 0 # entry index range to translate (default: 0 ~ all)
end: -1
batch_size: 5 # entries per LLM call (default: 5)
For automated translation in OpenStack's Zuul CI environment.
cd ci/
pip install -r requirements.txt
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
bash scripts/ci.sh <project> <branch> [languages...]
Examples:
# Single language
bash scripts/ci.sh neutron-lib master ko_KR
# Multiple languages
bash scripts/ci.sh nova master ko_KR ja zh_CN
All settings are hardcoded for CI consistency:
- Model:
llama3.2:3b - Mode:
ollama - Batch size: 5
- Workers: 1
To customize, edit the Python scripts in ci/src/ directly.
The system automatically:
- Loads the `.pot` file
- Splits text into batches
- Applies the general prompt or a language-specific prompt (if available)
- Adds few-shot examples when reference translations exist
- Generates draft `.po` translations
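The "splits text into batches" step can be illustrated with a short sketch. The default of 5 matches the documented `batch_size`, and `start`/`end` mirror the config knobs, but the `chunk` helper itself is hypothetical:

```python
def chunk(entries, batch_size=5, start=0, end=-1):
    """Slice the entry range [start:end] and yield batches of batch_size,
    mirroring the start/end/batch_size knobs in config.yaml (sketch only)."""
    if end == -1:
        end = len(entries)
    selected = entries[start:end]
    for i in range(0, len(selected), batch_size):
        yield selected[i:i + batch_size]

msgids = [f"message {n}" for n in range(12)]
batches = list(chunk(msgids))
print([len(b) for b in batches])  # → [5, 5, 2]
```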
Draft translations are then pushed to Gerrit → reviewed → synced to Weblate. For full architecture details, see PAPER.md.
You can tune two major components:
- Few-shot examples (`/po-example/`)
- Language-specific prompts (`/prompts/`)
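How a language-specific prompt and few-shot examples might be combined is sketched below. The two components correspond to `/prompts/` and `/po-example/` named above, but `build_prompt` and its template text are hypothetical, not the project's actual prompt:

```python
def build_prompt(msgids, lang, few_shot=None, lang_prompt=None):
    """Assemble a translation prompt: language-specific instructions
    (if available), optional few-shot reference pairs, then the batch.
    Sketch only: the real prompt templates live under /prompts/."""
    parts = [lang_prompt or f"Translate the following messages into {lang}."]
    for src, ref in (few_shot or []):
        parts.append(f"Example:\n  source: {src}\n  translation: {ref}")
    parts.append("Messages:")
    parts.extend(f"- {m}" for m in msgids)
    return "\n".join(parts)

prompt = build_prompt(
    ["Instance not found"],
    "ko_KR",
    few_shot=[("Volume attached", "볼륨이 연결되었습니다")],
)
print(prompt)
```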
See CONTRIBUTING.md to learn how you can contribute.
Run PEP8 style checks:
cd local/ # or cd ci/
tox -e pep8
Auto-fix style issues:
autopep8 --in-place --aggressive --aggressive -r .
- Lee Juyeong - Project Lead
- Oh Jiwoo
- Jo Taeho
- Chun Sihyeon
- Hwang Jiyoung