Releases: exaforge/extropy

v0.2.3

13 Feb 04:36

Refresh PyPI project metadata/README after the package rename to extropy-run.

v0.2.2

13 Feb 04:24

Rename PyPI package to extropy-run and publish first extropy-run release.

v0.2.1

13 Feb 04:17

Publish extropy package to PyPI after repo/package rename.

v0.2.0

13 Feb 00:43

Highlights

  • Added per-run token usage and cost tracking in simulation outputs (meta.json).
  • Added schema-driven categorical null phrasing (null_options / null_phrase) for persona rendering (see the sketch after this list).
  • Added dependency auto-inference during constraint binding and related validator coverage.
  • Removed checked-in study artifacts from the core package repo.
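
As an illustration of the null-phrasing change, here is a minimal sketch of how a categorical attribute might declare it. The attribute name, the surrounding layout, and the key nesting are assumptions made for this example; only the null_options and null_phrase keys come from this release.

```yaml
# Hypothetical schema excerpt; layout assumed for illustration.
# Only null_options and null_phrase are named in this release.
attributes:
  streaming_service:
    type: categorical
    options: [netflix, hulu, "none"]
    null_options: ["none"]  # option values treated as "no value" when rendering personas
    null_phrase: "does not use a streaming service"  # phrasing used in place of the raw value
```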

Validation

  • ruff check .
  • ruff format --check .
  • pytest -q (637 passed)

v0.1.4

12 Feb 23:03

What's New

Simulation Dynamics

  • Added propagation damping controls (decay_per_hop, max_hops) and bounded spread behavior in simulation propagation (see the sketch after this list).
  • Added option-level friction support for categorical outcomes to better model behavior persistence under social pressure.
  • Improved state handling for public/private dynamics and stabilization behavior in the simulation engine.
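
A minimal sketch of how the damping and friction controls might look in a scenario file, assuming a layout for illustration; only decay_per_hop, max_hops, and the idea of per-option friction are taken from the notes above.

```yaml
# Hypothetical scenario excerpt; structure and key placement are assumed.
propagation:
  decay_per_hop: 0.4   # influence weakens by this factor on each hop (interpretation assumed)
  max_hops: 3          # hard bound on how far a single exposure can spread
outcome:
  type: categorical
  options:
    - name: keep_current_plan
      friction: 0.8    # per-option friction: harder to abandon under social pressure
    - name: switch_plan
      friction: 0.2    # low friction: easier to adopt
```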

Validation & Runtime Correctness

  • Fixed scenario validation result construction so errors/warnings are preserved and surfaced correctly.
  • Fixed scenario file-reference validation to resolve relative paths against the scenario file location.
  • Fixed validator/runtime contract for spread modifiers (edge_weight is now recognized in validation; see the sketch after this list).
  • Fixed boolean expression consistency in safe evaluation (true/false now handled consistently at runtime).
  • Fixed expression syntax false-positives for valid escaped apostrophes and string literals.
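
For the spread-modifier fix, a sketch of the kind of expression validation now accepts; the modifier block and its keys are assumed for illustration, and only the edge_weight variable comes from the note above.

```yaml
# Hypothetical spread-modifier excerpt; keys other than edge_weight are assumed.
spread_modifiers:
  - when: "edge_type == 'coworker'"
    multiplier: "0.5 * edge_weight"   # edge_weight is now recognized by the validator
```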

Network Config Reliability

  • Fixed network config generation so degree multiplier condition values are typed correctly (boolean/number/string) instead of string-only (see the sketch after this list).
  • Removed legacy preset network config from runtime; network behavior is now fully config-driven.
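
To illustrate the typing fix, a sketch of degree-multiplier rules whose condition values are now emitted with their real types; the rule layout and key names are assumptions for this example.

```yaml
# Hypothetical network-config excerpt; layout assumed for illustration.
degree_multipliers:
  - attribute: is_retired
    value: true        # real boolean now, previously the string "true"
    multiplier: 0.7
  - attribute: household_size
    value: 4           # real number now, previously the string "4"
    multiplier: 1.3
```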

CLI

  • Improved scenario detection in entropy validate to handle scenario.yaml filenames directly.

Full Changelog: v0.1.3...v0.1.4

v0.1.3

07 Feb 23:22

What's New

  • Chat Completions API support for Azure OpenAI models (DeepSeek-V3.2, Kimi-K2.5, gpt-5-mini)
  • simulation.api_format config key (auto-defaults: chat_completions for Azure, responses for OpenAI; sketched after this list)
  • Async reasoning timeouts (30s/20s) to prevent batch hangs
  • Fix: rate limit overrides now applied to both pivotal and routine limiters
  • Defensive input validation: rescale 0-1 conviction scores, clamp out-of-range sentiment
  • DeepSeek-V3.2 and Kimi-K2.5 pricing added
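
A sketch of where the new key might live if the config is written out as a file; the notes only name the dotted key simulation.api_format, so the file layout here is assumed.

```yaml
# Hypothetical config excerpt; file layout assumed for illustration.
simulation:
  api_format: chat_completions   # auto-default: chat_completions for Azure, responses for OpenAI
```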

v0.1.2

06 Feb 02:55

What's New

Azure OpenAI Support

  • New provider: azure_openai — works in both pipeline and simulation zones (see the sketch after this list)
  • Reuses OpenAIProvider by swapping in Azure SDK clients at construction time
  • Configure via: entropy config set simulation.provider azure_openai
  • Env vars: AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_DEPLOYMENT
  • Default API version: 2025-03-01-preview (required for Responses API)
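
Putting the Azure pieces together, a sketch of a simulation zone pointed at Azure; the provider value, the environment variable names, and the default API version come from the notes above, while the api_version key and the file layout are assumptions.

```yaml
# Hypothetical config excerpt; layout and the api_version key name are assumed.
# Credentials come from the environment:
#   AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, AZURE_OPENAI_DEPLOYMENT
simulation:
  provider: azure_openai
  api_version: 2025-03-01-preview   # documented default, required for the Responses API
```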

Progress Display Fixes

  • Fix stale data in progress display when 0 agents to reason
  • Cap position names at 40 chars to prevent layout overflow
  • Type AgentDoneCallback with ReasoningResponse instead of Any
  • Remove redundant avg_sentiment/avg_conviction properties

Tests

  • 600 tests across 15 test files

v0.1.1

03 Feb 05:56

What's Changed

New Features

  • entropy estimate - Predict simulation cost (LLM calls, tokens, USD) without running it
  • Adaptive network calibration - Binary search for target average degree
  • Claude Code skill - Pipeline assistance integration

Bug Fixes

  • Fix relative path resolution in scenario files - commands now work from any directory
  • Fix float_to_conviction returning string instead of float
  • Fix rate limiter 429 storms - staggered task launches, per-model splitting, concurrency caps
  • Fix async HTTP client cleanup before event loop shutdown
  • Register missing persona command
  • Add missing simulation config keys (pivotal_model, routine_model, rate_tier)

Improvements

  • Code cleanup: performance, reliability, tests
  • CI: enable uv cache, add workflow_dispatch triggers
  • 158 new simulation validation tests

Full Changelog: v0.1.0...v0.1.1

v0.1.0

31 Jan 22:58

Initial public release of entropy-predict.

Predictive intelligence through agent-based population simulation.

  • 7-step pipeline: spec → extend → sample → network → persona → scenario → simulate
  • Two-pass LLM reasoning (role-play + classification) to eliminate central tendency bias
  • Categorical conviction system with memory traces
  • Token bucket rate limiter with provider-aware defaults
  • Two-zone config: mix providers for pipeline (Claude) and simulation (OpenAI); see the sketch at the end of this release
  • Persona system: first-person agent narratives with z-score relative positioning
  • Network propagation with edge-typed spread modifiers
Install: pip install entropy-predict
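
A sketch of the two-zone idea, with the pipeline and simulation zones pointed at different providers; the file layout and provider identifiers are assumptions, and only the zone split and the Claude/OpenAI pairing come from the list above.

```yaml
# Hypothetical two-zone config; layout and provider names assumed for illustration.
pipeline:
  provider: anthropic   # Claude drives the spec/extend/sample/persona steps
simulation:
  provider: openai      # OpenAI models drive agent reasoning during simulation
```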