Releases: exaforge/extropy
v0.2.3
v0.2.2
Rename PyPI package to extropy-run and publish first extropy-run release.
v0.2.1
Publish extropy package to PyPI after repo/package rename.
v0.2.0
Highlights
- Added per-run token usage and cost tracking in simulation outputs (`meta.json`).
- Added schema-driven categorical null phrasing (`null_options`/`null_phrase`) for persona rendering.
- Added dependency auto-inference during constraint binding and related validator coverage.
- Removed checked-in study artifacts from the core package repo.
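As a rough illustration of the per-run cost tracking above, the snippet below reads a run's `meta.json` and totals usage. The field names (`token_usage`, `input_tokens`, `output_tokens`, `cost_usd`) are assumptions for the sketch, not the actual schema.

```python
import json
from pathlib import Path

def summarize_run_cost(meta_path):
    """Read a run's meta.json and report its token usage and cost.

    Field names here are illustrative assumptions, not the real schema.
    """
    meta = json.loads(Path(meta_path).read_text())
    usage = meta.get("token_usage", {})
    return {
        "input_tokens": usage.get("input_tokens", 0),
        "output_tokens": usage.get("output_tokens", 0),
        "cost_usd": meta.get("cost_usd", 0.0),
    }
```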
Validation
- `ruff check .`
- `ruff format --check .`
- `pytest -q` (637 passed)
v0.1.4
What's New
Simulation Dynamics
- Added propagation damping controls (`decay_per_hop`, `max_hops`) and bounded spread behavior in simulation propagation.
- Added option-level friction support for categorical outcomes to better model behavior persistence under social pressure.
- Improved state handling for public/private dynamics and stabilization behavior in simulation engine.
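The shape of hop-based damping that `decay_per_hop` and `max_hops` suggest can be sketched as follows. This is illustrative only; the engine's actual internals differ.

```python
def damped_influence(base_strength: float, hops: int,
                     decay_per_hop: float = 0.5, max_hops: int = 3) -> float:
    """Attenuate spread strength geometrically per hop.

    Beyond max_hops nothing propagates, which bounds the spread.
    Defaults are arbitrary example values, not the project's.
    """
    if hops > max_hops:
        return 0.0
    return base_strength * (decay_per_hop ** hops)
```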
Validation & Runtime Correctness
- Fixed scenario validation result construction so errors/warnings are preserved and surfaced correctly.
- Fixed scenario file-reference validation to resolve relative paths against the scenario file location.
- Fixed validator/runtime contract for spread modifiers (`edge_weight` is now recognized in validation).
- Fixed boolean expression consistency in safe evaluation (`true`/`false` now handled consistently at runtime).
- Fixed expression syntax false positives for valid escaped apostrophes and string literals.
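The path-resolution fix above amounts to anchoring relative file references at the scenario file's directory rather than the process's working directory. A minimal sketch (not the project's actual code):

```python
from pathlib import Path

def resolve_scenario_ref(scenario_path: str, ref: str) -> Path:
    """Resolve a file reference relative to the scenario file's location.

    Absolute references pass through untouched.
    """
    ref_path = Path(ref)
    if ref_path.is_absolute():
        return ref_path
    return Path(scenario_path).parent / ref_path
```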
Network Config Reliability
- Fixed network config generation so degree multiplier condition values are typed correctly (boolean/number/string) instead of string-only.
- Removed legacy preset network config from runtime; network behavior is now fully config-driven.
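A coercion in the spirit of the degree-multiplier fix above might look like this: condition values written in config become typed boolean/number/string instead of staying string-only. This is an illustrative sketch, not the generator's actual logic.

```python
def coerce_condition_value(raw: str):
    """Type a raw config condition value as bool, int, float, or str."""
    lowered = raw.strip().lower()
    if lowered in ("true", "false"):
        return lowered == "true"
    try:
        num = float(raw)
        # Prefer an int when the value is whole (e.g. "3" -> 3).
        return int(num) if num.is_integer() else num
    except ValueError:
        return raw
```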
CLI
- Improved scenario detection in `entropy validate` to handle `scenario.yaml` filenames directly.
Full Changelog: v0.1.3...v0.1.4
v0.1.3
What's New
- Chat Completions API support for Azure OpenAI models (DeepSeek-V3.2, Kimi-K2.5, gpt-5-mini)
- `simulation.api_format` config key (auto-defaults: `chat_completions` for Azure, `responses` for OpenAI)
- Async reasoning timeouts (30s/20s) to prevent batch hangs
- Fix: rate limit overrides now applied to both pivotal and routine limiters
- Defensive input validation: rescale 0-1 conviction scores, clamp out-of-range sentiment
- DeepSeek-V3.2 and Kimi-K2.5 pricing added
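The defensive validation bullet above can be sketched as below. The target scales (conviction on 0-10, sentiment on -1..1) are assumptions chosen for illustration, not confirmed by the release notes.

```python
def clamp_sentiment(value: float) -> float:
    """Clamp out-of-range sentiment into the assumed [-1, 1] range."""
    return max(-1.0, min(1.0, value))

def rescale_conviction(value: float, scale_max: float = 10.0) -> float:
    """If a model returned a 0-1 fraction, map it onto the full scale;
    otherwise clamp it into range. scale_max is an assumed default."""
    if 0.0 <= value <= 1.0:
        return value * scale_max
    return max(0.0, min(scale_max, value))
```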
v0.1.2
What's New
Azure OpenAI Support
- New provider: `azure_openai` — works in both pipeline and simulation zones
- Reuses `OpenAIProvider` by swapping in Azure SDK clients at construction time
- Configure via: `entropy config set simulation.provider azure_openai`
- Env vars: `AZURE_OPENAI_API_KEY`, `AZURE_OPENAI_ENDPOINT`, `AZURE_OPENAI_DEPLOYMENT`
- Default API version: `2025-03-01-preview` (required for the Responses API)
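Assembling client settings from the env vars listed above might look like this. The kwarg names follow the openai SDK's `AzureOpenAI` client constructor; treat the wiring as an assumption, not the project's actual code.

```python
import os

def azure_client_kwargs(api_version: str = "2025-03-01-preview") -> dict:
    """Collect Azure OpenAI settings from the environment.

    Raises KeyError if a required env var is missing, which surfaces
    misconfiguration early instead of at first request time.
    """
    return {
        "api_key": os.environ["AZURE_OPENAI_API_KEY"],
        "azure_endpoint": os.environ["AZURE_OPENAI_ENDPOINT"],
        "azure_deployment": os.environ["AZURE_OPENAI_DEPLOYMENT"],
        "api_version": api_version,
    }
```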
Progress Display Fixes
- Fix stale data in progress display when 0 agents to reason
- Cap position names at 40 chars to prevent layout overflow
- Type `AgentDoneCallback` with `ReasoningResponse` instead of `Any`
- Remove redundant `avg_sentiment`/`avg_conviction` properties
Tests
- 600 tests across 15 test files
v0.1.1
What's Changed
New Features
- `entropy estimate` - Predict simulation cost (LLM calls, tokens, USD) without running it
- Adaptive network calibration - Binary search for target average degree
- Claude Code skill - Pipeline assistance integration
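The adaptive calibration above can be sketched as a binary search over a connection parameter, assuming average degree grows monotonically with it. The `avg_degree` callable stands in for generating a network and measuring its mean degree; this is not the project's implementation.

```python
def calibrate(target_degree, avg_degree, lo=0.0, hi=1.0,
              tol=0.01, max_iter=50):
    """Binary-search a parameter p in [lo, hi] so that avg_degree(p)
    lands within tol of target_degree (monotonicity assumed)."""
    mid = (lo + hi) / 2
    for _ in range(max_iter):
        mid = (lo + hi) / 2
        d = avg_degree(mid)
        if abs(d - target_degree) <= tol:
            break
        if d < target_degree:
            lo = mid
        else:
            hi = mid
    return mid
```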
Bug Fixes
- Fix relative path resolution in scenario files - commands now work from any directory
- Fix `float_to_conviction` returning string instead of float
- Fix rate limiter 429 storms - staggered task launches, per-model splitting, concurrency caps
- Fix async HTTP client cleanup before event loop shutdown
- Register missing `persona` command
- Add missing simulation config keys (`pivotal_model`, `routine_model`, `rate_tier`)
Improvements
- Code cleanup: performance, reliability, tests
- CI: enable uv cache, add workflow_dispatch triggers
- 158 new simulation validation tests
Full Changelog: v0.1.0...v0.1.1
v0.1.0
Initial public release of entropy-predict.
Predictive intelligence through agent-based population simulation.
- 7-step pipeline: spec → extend → sample → network → persona → scenario → simulate
- Two-pass LLM reasoning (role-play + classification) to eliminate central tendency bias
- Categorical conviction system with memory traces
- Token bucket rate limiter with provider-aware defaults
- Two-zone config: mix providers for pipeline (Claude) and simulation (OpenAI)
- Persona system: first-person agent narratives with z-score relative positioning
- Network propagation with edge-typed spread modifiers
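The token bucket rate limiter named above follows a standard mechanic, sketched minimally here: tokens replenish at a fixed rate up to a burst capacity, and a request proceeds only if it can pay its cost. The real limiter is async and provider-aware; this shows only the core idea.

```python
import time

class TokenBucket:
    """Minimal synchronous token bucket (illustrative sketch)."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self, cost: float = 1.0) -> bool:
        """Spend `cost` tokens if available; otherwise refuse."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```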
`pip install entropy-predict`