Language-agnostic framework for AI-text humanization. The runtime
detects, polishes, judges, and iterates on text — but it knows
nothing about Chinese, English, or any other language. Each language ships
as a separate plugin package that registers a LanguageProfile.
Status: 0.1.0a1, published to TestPyPI. The framework is feature-complete.
Two plugins ship against this version:

- `humanize-zh`: Chinese AI-text humanizer (standalone UI, ZH rules + n-gram)
- `humanize-en`: English AI-text humanizer (M1–M11 complete, HTMX UI live)
The package was extracted from the humanize-zh v0.1.0a1 codebase so
sibling plugins can share the rule + n-gram + LLM-polish + iterative
loop + Web UI (HTMX factory) machinery without duplicating it.
The framework package looks like this:

```
humanize_core/
├── protocols.py          # Detector / NgramEngine / ReplacementsTable / PromptPack / LanguageProfile
├── language_registry.py  # register_language / get_language / entry-point discovery
├── _format.py            # level-label formatting helper
├── combined.py           # rule + ngram aggregator (combined_score)
├── prompt.py             # framework prompt dispatcher + EN fallback templates
├── postprocess.py        # LLM polish dispatcher (postprocess_humanize)
├── judge.py              # LLM editorial review (judge, format_report)
├── iterative.py          # closed-loop writer/judge polish (iterative_polish)
├── llm/                  # provider layer (openai / anthropic / openai-compat / callable)
├── cli/                  # `humanize` entry point (--lang routing)
└── web/                  # FastAPI + HTMX app (lang routing, optional templates)
```
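`protocols.py` defines the structural contracts a plugin fills in. A minimal sketch of one of them, assuming they are `typing.Protocol` classes (the method name and report shape below are illustrative, not the real signatures):

```python
from typing import Protocol


class Detector(Protocol):
    """Sketch of the rule-based detector contract.

    ASSUMPTION: `score` and its dict report are placeholders; the real
    signatures live in humanize_core/protocols.py.
    """

    def score(self, text: str) -> dict:
        """Return a rule-based AI-likeness report for `text`."""
        ...
```

Because `typing.Protocol` matches structurally, a plugin that implements the right methods satisfies the contract without subclassing any framework type.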
A real plugin (e.g. humanize-zh) ships:
```
humanize_zh/
├── _lang/zh/
│   ├── detector.py       # implements humanize_core.Detector
│   ├── ngram.py          # implements humanize_core.NgramEngine
│   ├── replacements.py   # implements humanize_core.ReplacementsTable
│   ├── prompts.py        # writer/judge templates
│   ├── profile.py        # assembles LanguageProfile + registers
│   └── data/             # rules.json, replacements.json, ngram tables
├── web/templates/        # Jinja2 templates (Chinese UI)
└── pyproject.toml        # entry-point: humanize_core.languages = "zh = humanize_zh._lang.zh.profile:zh_profile"
```
The framework discovers plugins via importlib.metadata.entry_points.
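As a sketch, here is what that profile module might look like. Assumptions are labeled: the `LanguageProfile` constructor fields, its import path, and the plugin class names are illustrative; only `register_language` and the `humanize_core.languages` entry-point group appear in this README.

```python
# humanize_zh/_lang/zh/profile.py: a sketch, not the real module.
from humanize_core import register_language
from humanize_core.protocols import LanguageProfile  # assumed import path

from .detector import ZhDetector            # hypothetical class names
from .ngram import ZhNgramEngine
from .replacements import ZhReplacements

# ASSUMPTION: field names are illustrative.
zh_profile = LanguageProfile(
    code="zh",
    detector=ZhDetector(),
    ngram=ZhNgramEngine(),
    replacements=ZhReplacements(),
)

# Registering at import time means entry-point discovery alone activates
# the plugin: the framework imports this module and the profile is live.
register_language(zh_profile)
```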
Python usage, end to end:

```python
from humanize_core import (
    register_language, get_language, list_languages,
    combined_score, postprocess_humanize, judge, format_report,
    iterative_polish,
)

article = "...your draft text..."  # any string to humanize

# Plugins auto-register on import via their entry-point.
print(list_languages())  # ['zh']

# Detect — requires a registered plugin.
report = combined_score(article, lang="zh")
print(report)  # rule + ngram + combined

# Polish — works for any registered lang, plus 'en' framework fallback.
polished, after, before = postprocess_humanize(article, lang="zh", scene="essay")

# Judge — 7-field editorial verdict dict.
verdict = judge(article, lang="zh", judge_provider="anthropic")
print(format_report(verdict))

# Iterate — closed-loop writer/judge until target AI score reached.
result = iterative_polish(
    article, lang="zh", rounds=3, target_ai_score=25,
    writer_provider="deepseek", judge_provider="anthropic",
)
print(result.final_text)
```

`pip install humanize-core` installs the `humanize` binary:
```
humanize languages                             # list installed plugins
humanize providers                             # detect LLM providers from env
humanize detect article.md --lang zh           # rule + ngram + combined
humanize detect article.md --lang zh --json
humanize polish article.md --lang zh -o out.md
humanize polish article.md --lang en --provider openai   # EN framework fallback
humanize judge article.md --lang zh --judge anthropic
humanize ui --port 8765                        # JSON-only API server
```

Language resolution: `--lang` is required unless exactly one plugin is installed, in which case it auto-selects. With zero plugins, or with several installed and no `--lang` given, the CLI exits with a friendly hint.
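The resolution rule is small enough to restate in code. A sketch, assuming a helper name (`resolve_lang`) that is not part of the public API:

```python
from humanize_core import list_languages


def resolve_lang(flag: str | None) -> str:
    """Sketch of the CLI's --lang resolution rule (illustrative only)."""
    installed = list_languages()
    if flag is not None:
        return flag                      # explicit --lang always wins
    if len(installed) == 1:
        return installed[0]              # exactly one plugin: auto-select
    hint = ("install a language plugin, e.g. pip install humanize-zh"
            if not installed
            else f"pass --lang, one of {installed}")
    raise SystemExit(f"humanize: cannot pick a language; {hint}")
```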
The bundled `humanize_core.web.app:app` is a JSON-only FastAPI app. Plugin packages compose their own templated app by calling `create_app(templates_dir=...)`:

```python
from pathlib import Path

from humanize_core.web import create_app

app = create_app(templates_dir=Path("/path/to/plugin/templates"))
# uvicorn my_plugin.web.app:app --port 8765
```

Endpoints (all language-routed via the `lang` form field):
| Method | Path | Purpose |
|---|---|---|
| GET | `/health` | liveness probe |
| GET | `/api/languages` | list registered plugins |
| GET | `/api/providers` | list detectable LLM providers |
| POST | `/api/detect` | JSON: rule + ngram + combined |
| POST | `/api/polish` | JSON: LLM polish |
| POST | `/api/judge` | JSON: 7-field editorial verdict |
| GET | `/` and `/htmx/*` | HTMX fragments (requires `templates_dir`) |
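For instance, calling `/api/detect` from Python with only the standard library. The `lang` form field is confirmed above; the name of the text field is an assumption:

```python
import json
import urllib.parse
import urllib.request

# Form-encoded POST; the web app routes on the `lang` field.
# ASSUMPTION: the payload field is called `text`.
data = urllib.parse.urlencode({"lang": "zh", "text": "待检测的文章……"}).encode()
req = urllib.request.Request("http://127.0.0.1:8765/api/detect", data=data)

# With auth enabled on the server (see below), send the bearer token too:
# req.add_header("Authorization", "Bearer secret")

with urllib.request.urlopen(req) as resp:
    report = json.load(resp)  # rule + ngram + combined scores
print(report)
```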
Opt-in auth + rate limiting:

```
export HUMANIZE_CORE_WEB_TOKEN="secret"              # bearer token / ?token=
export HUMANIZE_CORE_WEB_RATE_LIMIT_PER_MINUTE=60    # per-IP, per-worker
```

| Env var | Purpose |
|---|---|
| `HUMANIZE_CORE_NO_DOTENV=1` | disable the CLI's `.env` auto-loader |
| `HUMANIZE_CORE_WEB_TOKEN` | bearer token for the web layer |
| `HUMANIZE_CORE_WEB_RATE_LIMIT_PER_MINUTE` | per-IP rate limit, integer |
| `OPENAI_API_KEY` / `ANTHROPIC_API_KEY` / `DEEPSEEK_API_KEY` / `GROQ_API_KEY` / `OPENROUTER_API_KEY` / `MOONSHOT_API_KEY` / `GLM_API_KEY` / `DASHSCOPE_API_KEY` / `OLLAMA_BASE_URL` | LLM provider auto-detect (first set wins) |
./.env and ~/.humanize-core.env are loaded automatically; shell env
always wins over file values.
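The "first set wins" rule is easy to picture. A sketch, assuming the probe order matches the order the variables are listed above (the canonical order lives in `humanize_core/llm/`):

```python
import os

# ASSUMPTION: this order is taken from the table above for illustration.
_PROVIDER_ENV = [
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
    ("deepseek", "DEEPSEEK_API_KEY"),
    ("groq", "GROQ_API_KEY"),
    ("openrouter", "OPENROUTER_API_KEY"),
    ("moonshot", "MOONSHOT_API_KEY"),
    ("glm", "GLM_API_KEY"),
    ("dashscope", "DASHSCOPE_API_KEY"),
    ("ollama", "OLLAMA_BASE_URL"),
]


def detect_provider() -> str | None:
    """Return the first provider whose env var is set, else None."""
    for name, var in _PROVIDER_ENV:
        if os.environ.get(var):
            return name
    return None
```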
```
pip install humanize-core              # framework only (no language)
pip install humanize-core[ui]          # + FastAPI / Jinja2 / uvicorn
pip install humanize-core[openai]      # + openai SDK
pip install humanize-core[anthropic]   # + anthropic SDK
pip install humanize-core[all]         # everything above
```

You'll typically install a plugin too:
```
pip install humanize-zh                # ZH detection + rules + templates
```

Development setup:

```
uv sync --extra dev --extra ui --extra openai --extra anthropic
make test        # 260 tests, ~92% coverage
make lint        # ruff
make typecheck   # mypy
```

License: MIT.